Independence vs. Mutually Exclusive

I used to tutor undergrad students in statistics when I was in grad school. One question almost every student asked was to explain the difference between “independence” and “mutually exclusive”. Of course, there are the probabilistic definitions:

If P(A \cap B) = P(A)P(B), then the events A and B are independent.

If A \cap B = \emptyset then the events A and B are mutually exclusive.

But that can be a little too abstract for a new statistics student. Here’s what I would tell them.

When we talk about independence, we’re talking about a series of events. The classic example is a series of coin flips. The outcome of the first flip (or first 10,000 flips) has no effect on the probability of the next flip. The flips are independent. The probability does not change from flip to flip. The same concept can be applied to answers to a particular question on a survey taken by 100 people, or a vital sign taken from each patient in a clinical trial treatment group, or weight measurements of a random sample of candy bars rolling off an automated production line. We can think of all those as being independent measurements. The fact that someone in Chicago said “yes” to a survey question doesn’t change the probability that someone in Dallas will also respond “yes”. Again, we’re talking about a series of events and whether or not the probability of the event outcomes changes based on earlier event outcomes.

The counter-example to independence is drawing cards. The probability of drawing, say, a King changes from draw to draw if you do not replace the cards. If your first draw is a 10, you have a slightly better chance of drawing a King on the next draw because the deck has shrunk by one card.
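The card-drawing arithmetic is quick to verify. Here’s a small sketch in Python using exact fractions (the variable names are just for illustration):

```python
from fractions import Fraction

# A fresh 52-card deck holds 4 Kings.
p_king_first = Fraction(4, 52)

# If the first card drawn is a 10 (not a King) and is not replaced,
# all 4 Kings remain, but the deck is down to 51 cards.
p_king_after_ten = Fraction(4, 51)

print(p_king_first)                     # 1/13
print(p_king_after_ten)                 # 4/51
print(p_king_after_ten > p_king_first)  # True: the draws are not independent
```

Because the second probability depends on the first outcome, the draws fail the definition of independence.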

When we talk about events being mutually exclusive, we’re talking about observing one outcome and whether or not some or all of the events can occur at the same time. If I flip a coin once, I will observe either a head or a tail, not both. The events are mutually exclusive for that outcome. Same with rolling dice. One roll and I will observe one side. I can’t observe both a 2 and a 5 on one roll. The events are mutually exclusive. A baby is born. It will be either a boy or a girl. The events are mutually exclusive. I take a sample of blood from someone to determine the blood type. It will be one type, say O-negative. It can’t be two types.
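The dice example can be checked directly by treating events as sets of outcomes. A quick sketch in Python (the helper `prob` and the event names are mine, assuming a fair six-sided die):

```python
from fractions import Fraction

outcomes = set(range(1, 7))   # sample space for one roll of a fair die
roll_2 = {2}                  # event: observe a 2
roll_5 = {5}                  # event: observe a 5

def prob(event):
    """Probability of an event under equally likely outcomes."""
    return Fraction(len(event), len(outcomes))

# Mutually exclusive: the two events share no outcomes...
print(roll_2 & roll_5)              # set()

# ...so the probability of "2 or 5" is just the sum of the two.
print(prob(roll_2 | roll_5))        # 1/3
print(prob(roll_2) + prob(roll_5))  # 1/3
```

The empty intersection is exactly the A ∩ B = ∅ condition from the definition above.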

So that’s the main idea for a high-level distinction between the two:

  1. Independence deals with probability of outcomes over a series of events.
  2. Mutually exclusive deals with the possibility of events in a single outcome.

Now the ultimate gotcha question is: can a pair of events be simultaneously mutually exclusive and independent? In other words, if two events are possible (i.e., they each have probability greater than 0), can they be both mutually exclusive and independent? The answer is no.

If A and B are mutually exclusive, then A \cap B = \emptyset, which implies P(A \cap B) = 0. If A and B were also independent, we would need P(A \cap B) = P(A)P(B). But both A and B have positive probability, so P(A)P(B) > 0, which contradicts P(A \cap B) = 0. So A and B are not independent.

Likewise, if A and B are independent, we have P(A \cap B) = P(A)P(B) > 0 since both events are possible. But as we saw before, if two events are mutually exclusive then P(A \cap B) = 0. So A and B can’t be mutually exclusive if they’re independent.
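The contradiction is easy to see with concrete numbers. A minimal sketch, assuming two possible events with made-up probabilities (any positive values would do):

```python
from fractions import Fraction

# Hypothetical probabilities for two possible events.
p_a = Fraction(3, 10)
p_b = Fraction(2, 5)

# Independence would require P(A and B) = P(A) * P(B)...
p_both_if_independent = p_a * p_b   # 3/25, strictly positive

# ...while mutual exclusivity would require P(A and B) = 0.
# Both conditions can't hold at once when P(A) and P(B) are positive.
print(p_both_if_independent)        # 3/25
print(p_both_if_independent > 0)    # True
```

Since a positive number can never equal zero, the two properties are incompatible for possible events.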
