The Monty Hall Problem

Probability can be difficult to wrap your head around. When making a decision, we know that there is a certain amount of risk involved, so we look for the best outcome with the most favorable odds. That’s game theory in a nutshell. But because we’re not always equipped to understand probability, we often make decisions using the wrong information. A perfect example of this is the Monty Hall Problem.

Monty Hall was the host of the television game show Let’s Make a Deal, which aired on NBC and later ABC in the ’60s and ’70s. Throughout the show, contestants played games to compete for various prizes. The final game was called The Big Deal, in which the contestants, called traders, picked one of three doors for a chance to win an even bigger prize.

According to Wikipedia:

Monty Hall (or his successors) would begin asking the day’s traders, usually starting with the highest winner, if they wanted to give back what they had managed to win earlier in the show for a chance to choose one of three numbered doors on the stage. The process continued until two traders agreed to play, and the biggest winner of the two got first choice of Door #1, Door #2, or Door #3. The other trader then chose from the remaining two doors.

Behind each door was a prize, so it’s not like the contestants lost everything by choosing the wrong door. But it was possible for a contestant to give up a great prize from earlier in the show for a prize of lesser value.

The Monty Hall Problem

In 1975, statistician Steve Selvin posed a brain teaser based on the Big Deal segment, which was later picked up by Parade magazine columnist Marilyn vos Savant. It caused a bit of a commotion among statisticians.

It goes like this:

Suppose you’re on a game show, and you’re given the choice of three doors: Behind one door is a car; behind the others, goats. You pick a door, say No. 1, and the host, who knows what’s behind the doors, opens another door, say No. 3, which has a goat. He then says to you, “Do you want to pick door No. 2?” Is it to your advantage to switch your choice?

Vos Savant — who was once listed in the Guinness Book of World Records for having the highest recorded IQ — said in her column that the contestant is better off switching than sticking with his/her original choice.

This resulted in a massive campaign of mansplaining from mathematicians across the country. How could she be correct? When one door is taken away, the odds of one door containing the car and the other containing the goat are 50/50, right? Letters poured in demanding that she issue a correction to her column.

Here is the catch: They were wrong and she was right. And this is how probability can trip up even the experts.

The Solution

Let’s say I put three playing cards down face up and tell you that I have picked one of these cards as my favorite. Now I ask you to pick one as your favorite. What’s the probability you’ll choose the same card as me?

[Figure: three playing cards laid face up, used in the example below]

There are three potential options, and I have picked one, so the probability of choosing the same card as me is 1/3. Easy, right?

Now that you’ve picked your card, I’m going to remove a card that is neither your favorite card nor mine. Keep in mind I didn’t remove a card at random. I know which card is my favorite and which card is yours, so I’m going to remove the extra one.

Now what’s the probability you chose the same card as me?

Despite the fact that I removed one card, it remains the same, because removing the extra card tells you nothing new about your original pick. In the beginning, there were three equally likely possibilities (with 1 representing a chosen card and 0 representing a card that wasn’t chosen): [1,0,0], [1,1,0], and [1,0,1], with the first set indicating we both chose the same card. Now remove a zero (the extra card) from each possibility: [1,0], [1,1], and [1,1]. (Note: the two [1,1] outcomes are not the same; think about it using the cards above. If I chose the 10, they would look like [10,9] and [10,6].)

Despite the loss of one card, it remains two-thirds likely that we did not choose the same card, hence the better chance of winning if you switch doors rather than stick with your original choice.
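If you prefer to see that count written out, here is a quick exhaustive check in R (a sketch of my own, not code from the original article): label the cards 1 through 3 and enumerate every equally likely pair of picks.

```r
# Enumerate all 9 equally likely (my favorite, your favorite) pairs
grid <- expand.grid(mine = 1:3, yours = 1:3)

mean(grid$yours == grid$mine)  # P(we picked the same card) = 1/3
mean(grid$yours != grid$mine)  # P(we did not) = 2/3, the cases where switching wins
```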

Simulating the Monty Hall Problem in R

If you’re still skeptical, here are a couple of functions that will help you simulate the game in R.
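A minimal sketch of what such functions might look like follows (the names monty_hall_game and monty_hall_sim, and the switch argument, are my own, not the original article’s):

```r
# Play one round: returns TRUE if the contestant wins the car.
# `switch` controls the strategy: stick with the first pick or switch doors.
monty_hall_game <- function(switch = FALSE) {
  doors <- 1:3
  car  <- sample(doors, 1)   # door hiding the car
  pick <- sample(doors, 1)   # contestant's first choice
  # The host opens a door that is neither the contestant's pick nor the car
  opened <- if (pick == car) sample(doors[-pick], 1) else doors[-c(pick, car)]
  if (switch) {
    pick <- doors[-c(pick, opened)]  # switch to the one remaining closed door
  }
  pick == car
}

# Play n rounds with a given strategy and return the proportion of wins.
monty_hall_sim <- function(n, switch = FALSE) {
  mean(replicate(n, monty_hall_game(switch)))
}

monty_hall_sim(10000, switch = FALSE)  # sticking: roughly 1/3
monty_hall_sim(10000, switch = TRUE)   # switching: roughly 2/3
```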

The plot below depicts the outcomes of 100, 500, 1,000, 5,000 and 10,000 simulations where the contestant sticks with his/her original choice.

[Figure: win/loss percentages for the stick-with-the-original-choice strategy across 100, 500, 1,000, 5,000, and 10,000 simulations]

Thanks to the law of large numbers, we can see that with the strategy of sticking to the first choice, the contestant loses 66% (or 2/3) of the time and wins 33% (or 1/3) of the time.
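Numbers like these can be reproduced by running the sketch above at increasing sample sizes (the seed below is arbitrary, and your exact percentages will differ from the plot’s):

```r
set.seed(2017)  # arbitrary seed, only for reproducibility
n_sims <- c(100, 500, 1000, 5000, 10000)
stick_wins <- sapply(n_sims, monty_hall_sim, switch = FALSE)

data.frame(simulations = n_sims,
           win_pct  = round(100 * stick_wins),
           lose_pct = round(100 * (1 - stick_wins)))
```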

But the plot above raises another point about why we have a hard time understanding probability. In the first 100 simulations, the strategy of sticking to the original door fares even worse (76% to 24%). With a small sample, there are no guarantees. On a game show, you are a sample of one, so you may just fall into the 33% of outcomes where sticking with your original choice wins the game. Expected values only tell you what is likely over many trials, not what is certain in any single one.

This is why some will say that for incredibly important, life-or-death decisions, you should never settle for anything less than a 99% chance of success.

Or you could take the Han Solo approach.
