The Monty Hall Problem was popular on Twitter this week. It’s worth taking a look at it because the Bayesian logic behind the solution is pertinent to reasoning about uncertainty in general.
Let’s start by turning to Wikipedia:
The Monty Hall problem is a brain teaser, in the form of a probability puzzle, loosely based on the American television game show Let’s Make a Deal and named after its original host, Monty Hall. The problem was originally posed (and solved) in a letter by Steve Selvin to the American Statistician in 1975. It became famous as a question from reader Craig F. Whitaker’s letter quoted in Marilyn vos Savant’s “Ask Marilyn” column in Parade magazine in 1990:
Suppose you’re on a game show, and you’re given the choice of three doors: Behind one door is a car; behind the others, goats. You pick a door, say No. 1, and the host, who knows what’s behind the doors, opens another door, say No. 3, which has a goat. He then says to you, “Do you want to pick door No. 2?” Is it to your advantage to switch your choice?
I’ll give you space to think about it.
So what happened?
Vos Savant’s response was that the contestant should switch to the other door.
She was right.
Man, if only "You mad, bro?" had been a thing back then. If that answer makes you mad, don't worry, you are in good company:
Many readers of vos Savant’s column refused to believe switching is beneficial and rejected her explanation. After the problem appeared in Parade, approximately 10,000 readers, including nearly 1,000 with PhDs, wrote to the magazine, most of them calling vos Savant wrong. Even when given explanations, simulations, and formal mathematical proofs, many people still did not accept that switching is the best strategy. Paul Erdős, one of the most prolific mathematicians in history, remained unconvinced until he was shown a computer simulation demonstrating vos Savant’s predicted result.
I’m a dude, and even I got the “mansplain” vibe from the bold-faced section. Wikipedia continues:
The problem is a paradox of the veridical type, because the correct choice (that one should switch doors) is so counterintuitive it can seem absurd, but is nevertheless demonstrably true.
So I chimed in on Twitter with my favorite way to understand why you should switch:
You can find a fuller discussion of the problem and its variants by Professor Jeffrey Rosenthal here.
Rosenthal considers my explanation “shaky” because it fails in some of the variants.
The reason it works in this version is that the host is a "trusted actor": he will always open a door with a goat behind it. If he instead opened one of the other doors at random (and just happened to reveal a goat), then your reflex that switching shouldn't matter would be correct.
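A quick simulation makes the difference between the two hosts concrete. This is a sketch I wrote (the function name and structure are my own, not from the post): it plays the game many times, once with a host who knowingly opens a goat door and once with a host who opens a random door, discarding the games where the random host accidentally reveals the car.

```python
import random

def play(switch, host_knows, trials=200_000):
    """Simulate Monty Hall. Returns the win rate for the given strategy,
    counting only games where the host's opened door revealed a goat
    (which is every game when host_knows=True)."""
    wins = valid = 0
    for _ in range(trials):
        car = random.randrange(3)
        pick = random.randrange(3)
        if host_knows:
            # The trusted host deliberately opens a goat door you didn't pick.
            opened = next(d for d in range(3) if d != pick and d != car)
        else:
            # A clueless host opens one of the other two doors at random;
            # throw out the games where he accidentally reveals the car.
            opened = random.choice([d for d in range(3) if d != pick])
            if opened == car:
                continue
        valid += 1
        final = pick if not switch else next(
            d for d in range(3) if d not in (pick, opened)
        )
        wins += (final == car)
    return wins / valid

print(play(switch=True, host_knows=True))   # ~2/3: switching wins
print(play(switch=True, host_knows=False))  # ~1/2: switching doesn't matter
```

With the trusted host, switching wins about 2/3 of the time; with the random host, conditioning on a goat being revealed leaves the two remaining doors at 50/50, exactly as intuition suggests.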
Problems like this are Bayesian and can be approached using what Rosenthal calls the “proportionality principle”.
The Proportionality Principle: If various alternatives are equally likely, and then some event is observed, the updated probabilities for the alternatives are proportional to the probabilities that the observed event would have occurred under those alternatives.
That’s a mouthful.
Let me start with an example I made up, then map the proportionality principle’s definition, line by line, to the solution. [don’t crucify me for the made-up shooting percentages]
Paul George and Kawhi Leonard are equally likely to have the ball on the last play of the game down by 2. You discover the Clippers won. Paul George is a 50% 3-pt shooter and Kawhi is a 25% 3-pt shooter.
What’s the probability Paul George took the shot?
So the long way of doing this is to map the paths to victory.
I took the liberty:
What does this tell us?
So Paul George took the shot 2/3 of the time. Going into that last play, we expect him to win the game for us 1/4 of the time (50% chance he gets the ball x 50% chance he makes the shot), but once we "condition" the question on "the Clippers won," the probability that he took the shot jumps to 2/3!
The proportionality principle gives us a shortcut. Follow me step-by-step through the definition laid out above.
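The shortcut can be written in a few lines. Here is a minimal sketch of the proportionality principle applied to the Clippers example, using the made-up shooting percentages from the text: weight each equally likely alternative by the probability that the observed event ("the Clippers won") would have occurred under it, then normalize.

```python
# Priors: PG and Kawhi are equally likely (1/2 each) to get the ball.
# Likelihood of the observed event (a game-winning 3) under each alternative:
weights = {
    "Paul George": 0.5 * 0.50,  # got the ball x made his 3 (50%)
    "Kawhi": 0.5 * 0.25,        # got the ball x made his 3 (25%)
}

# Updated probabilities are proportional to these weights.
total = sum(weights.values())
posterior = {player: w / total for player, w in weights.items()}

print(posterior["Paul George"])  # 2/3
print(posterior["Kawhi"])        # 1/3
```

Note that the weights don't need to sum to 1 before normalizing; only their ratio (0.25 : 0.125, i.e., 2 : 1) matters, which is exactly why the principle is a shortcut.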
If you like this kind of probability math, you can look into Bayes' Theorem, which is about how we update our "priors" (i.e., our probability estimates at the get-go) once we get new information (sometimes called "conditioning," because, well, we impose new conditions).
If you don't like this type of math, perhaps you feel it's not relevant. I assure you it is. Just imagine a disease that occurs in 1 in 100,000 people, but the test for it has a 5% false-positive rate. If you test 100,000 people, roughly 5,000 of them will test positive in error, yet only 1 person in the population actually has the disease. You've got ~5,000 people doomscrolling WebMD when their odds of actually having the sickness (even after the positive test!) are on par with being struck by lightning at some point in their life.
If this still sounds abstract, then you are my hero for somehow avoiding innumerate covid headlines.