A couple friends of mine described the so-called “Monty Hall Problem” to me a few weeks ago. (I’d probably heard about this long ago, but if so I’d forgotten about it.) The problem, named after “Let’s Make a Deal” host Monty Hall, is a puzzle of logic and statistics.

Here’s the problem (in my own terms): Imagine a game show where you’re trying to win a car. The game works as follows. There are three doors on stage. Behind one door is the car. Behind the other two doors are goats. (Or you can imagine whatever other prize and booby prize you like.) You get to make an initial selection of one of the doors, but you can’t see what’s behind it. Then the host, who knows where the car is, opens one of the other two doors to reveal a goat. Then you get to stay with your initial choice or switch to the other unopened door. What should you do?

When my friends suggested to me that the correct move is to always change your selection to the other door, I thought they were nuts. I thought they had fallen for a logical trick. After all, once we know that one of the doors opens to a goat, we’re left with only two choices: our original choice or the other unopened door. There’s a fifty-fifty chance of guessing correctly.

I couldn’t quite put my finger on the error that I thought was behind the advice to always switch, but I thought it had something to do with confusing the two time sequences (the initial versus the final choice of doors).

But then I was reading through Sam Harris’s *The Moral Landscape*, and he uses the Monty Hall Problem to illustrate the dangers of always going with our “gut” reaction. How, I wondered, could an intelligent neuroscientist fall for this same trick?

So I decided that, by God, I was going to figure out what was wrong with the standard Monty Hall analysis. What better way to do that, I thought, than by running my own trials? (The Wikipedia entry on the matter suggests that others have run simulations and even let pigeons have a go. Apparently the pigeons tend to switch to the third door.)

I rolled a die to determine which door hid the car and which door I initially selected. From there it’s easy to figure out whether, by switching, you get the car or the goat. By always switching, I ended up selecting the car 17 times out of 30 trials, which was not very helpful, given that 17 falls about halfway between 15 (fifty-fifty odds) and 20 (two-thirds odds).

But running the trial quickly gave me the idea of what’s going on. In my first trial, I selected Door 1, while the car was behind Door 2. That means that “Monty” reveals a goat behind Door 3. By switching from Door 1 to Door 2, I get the car.

In my third trial, I selected Door 1, and the car was behind Door 1 as well. Thus, when “Monty” reveals a goat behind Door 2 (or Door 3), I switch to the other door and end up with the other goat.

Here’s the general idea. Every time you initially select the door that happens to hide the car, you switch to another door and get a goat. Every time you initially select a door that hides a goat, you switch to the door that hides the car.

Or, in other words, by switching, one-third of the time you’ll end up with a goat, and two-thirds of the time you’ll end up with the car. (If this is not now obvious to you, I suggest you run your own trials to get the hang of how it works. In rolling the die, I assigned sides 1 and 2 to Door 1, sides 3 and 4 to Door 2, and sides 5 and 6 to Door 3. Or you could set up actual doors if you want to get fancier and more concrete.)
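If you’d rather not roll dice by hand, the same trials can be run in a few lines of Python. This is just a sketch of the game I described (the function name is mine); it relies on the fact that, when you always switch, you win exactly when your first pick was a goat:

```python
import random

def play_round(switch):
    """One round of the three-door game; returns True if you win the car."""
    car = random.randrange(3)   # door hiding the car
    pick = random.randrange(3)  # your initial selection
    # Monty opens a goat door you didn't pick; one other door stays closed.
    # Switching wins exactly when your first pick was a goat.
    return (pick != car) if switch else (pick == car)

trials = 100_000
wins_switch = sum(play_round(True) for _ in range(trials))
wins_stay = sum(play_round(False) for _ in range(trials))
print(f"always switch: {wins_switch / trials:.3f}")  # hovers near 2/3
print(f"always stay:   {wins_stay / trials:.3f}")    # hovers near 1/3
```

With a hundred thousand trials instead of my thirty, the two-thirds figure comes through clearly rather than landing ambiguously between the two hypotheses.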

So what’s going on here? When “Monty” reveals one of the remaining doors to contain a goat, he is introducing new information into the process.

To make this more obvious, we can imagine a game with more doors. (Wikipedia suggests this.) What you’re really doing in making your initial selection is forcing “Monty” to reveal additional information about the remaining doors. So let’s say there are more doors, and “Monty” has to reveal the rest of the doors except for one.

Let’s say there are six doors, and you initially select Door 1. Let’s say “Monty” reveals goats behind Doors 2 through 5. The car, then, is behind either Door 1 or Door 6. What do you do?

Your three basic choices are these. Always stick with your initial selection, which, in this case, gives you a one-in-six chance of getting the car. Or you can choose randomly between the remaining two doors, which gives you a fifty-fifty chance of getting the car. Or you can always switch to the remaining door, which gives you a five-in-six chance and is the prudent move.

I actually ran a new trial with six doors (using a six-sided die to determine the door with the car and the initial selection). Out of 18 trials, I got the car 17 times by always switching (which is even better than the statistical prediction). (I didn’t really need to run the trials at this point, but I figured I’d follow through with it.)

Or you can imagine 100 doors. If you want to run trials for this, you might use a random number generator. The outcome follows the same pattern. If you always stick with your initial selection, you’ll end up with the car about one out of a hundred times. If you always pick randomly between the final two doors, you’ll increase your odds to fifty-fifty. If you always switch to the other unopened door, you’ll increase your odds to 99-in-100. The only time you’ll lose out is if by luck you happen to pick the door with the car in your initial selection, then switch.
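Here’s a sketch of that many-door version in Python, comparing all three strategies (the helper name `simulate` is mine). The “random” strategy reduces to a coin flip between the two closed doors, and “switch” wins exactly when the initial pick missed the car:

```python
import random

def simulate(doors, strategy, trials=100_000):
    """Win rate for the n-door game, where Monty opens every unchosen
    goat door except one. strategy is 'stay', 'random', or 'switch'."""
    wins = 0
    for _ in range(trials):
        car = random.randrange(doors)
        pick = random.randrange(doors)
        if strategy == "stay":
            wins += pick == car
        elif strategy == "random":
            wins += random.random() < 0.5  # coin flip between the two closed doors
        else:  # "switch": the other closed door hides the car iff your pick missed it
            wins += pick != car
    return wins / trials

for strategy in ("stay", "random", "switch"):
    print(f"{strategy:>6}, 100 doors: {simulate(100, strategy):.3f}")
```

Sticking comes out near 0.01, picking randomly near 0.50, and switching near 0.99, matching the pattern above.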

Realizing that you can increase your odds by moving away from the strategy of always sticking with your initial selection, to picking randomly between the final two doors, to always switching, should disrupt your initial “intuition” (if you had it) that the odds are always fifty-fifty.

Of course, I’m pretty sure the game shows have figured this sort of thing out by now.

This problem is only perplexing when people treat probabilities as metaphysical, not epistemological. In classical statistics, a probability is something that exists out there, independent of the observer, and it’s up to the observer to discover it. In contrast, in Bayesian statistics a probability is a measure of your ignorance about something that exists in reality, and any new bit of information helps update probabilities. The Monty Hall problem seems paradoxical when seen from a classical statistics perspective, but it makes perfect sense when seen from a Bayesian statistics perspective. Incidentally, I think this is also the reason people get hung up on wrong interpretations of quantum mechanics, because they want to treat probabilities as metaphysical, when they are actually epistemological.
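To make the Bayesian reading concrete, here’s a sketch of the update using exact fractions, assuming you pick Door 1, Monty opens Door 3, and Monty flips a coin whenever both remaining doors hide goats:

```python
from fractions import Fraction

# Setup: you pick Door 1; Monty, who knows where the car is and never
# opens your door or the car's door, opens Door 3.
prior = {1: Fraction(1, 3), 2: Fraction(1, 3), 3: Fraction(1, 3)}
# P(Monty opens Door 3 | car behind door d):
likelihood = {
    1: Fraction(1, 2),  # goats behind 2 and 3, so he flips a coin
    2: Fraction(1),     # Door 3 is his only legal move
    3: Fraction(0),     # he never reveals the car
}
evidence = sum(prior[d] * likelihood[d] for d in prior)
posterior = {d: prior[d] * likelihood[d] / evidence for d in prior}
# Door 1 stays at 1/3; Door 2 jumps to 2/3; Door 3 drops to 0.
for d in posterior:
    print(f"P(car behind Door {d} | Monty opened Door 3) = {posterior[d]}")
```

Monty’s reveal leaves your own door at one-third but concentrates the other two-thirds onto the one remaining door, which is exactly the “new information” at work.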

Jim May writes on Twitter: “By revealing one door, Monty essentially allows you to effectively choose *both* of the other doors. 2/3.”

I finally got it. I found myself refusing to accept the conclusion that, in the case of three doors, it always made sense to switch (even though it was clear to me why it made sense with more doors).

I’d categorize this as an example of context dropping. If you drop the context of the fact that your first choice among three was *probably wrong*, then you are acting without the knowledge that you should *probably* switch to the other door of the remaining two. Neat the way that works.

Thanks Ari. I’ve tangled with this one before and never accepted it. But the idea of adding information makes it work for me. Even though it still hurts my head…