# Tails Explained

Left brain issue today, but stick with me. Let’s start with a puzzle.

“You have 100kg of potatoes, which are 99% water by weight. You let them dehydrate until they’re 98% water. How much do they weigh now?”

[Jeopardy music]
.
.
[Keep trying…]
.
.
[Here’s a hint if you’re stuck]

Congrats, you just solved what is known as the Martian Potato Paradox. I came across it on Twitter and retweeted it to explain why I think the problem’s general lesson is important.
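The arithmetic is quick once you anchor on the non-water mass, which never changes. A minimal sketch (the answer is 50kg):

```python
# Potato paradox: the solids are fixed; only the water evaporates.
start_weight = 100.0              # kg
water_frac_before = 0.99
water_frac_after = 0.98

solids = start_weight * (1 - water_frac_before)   # 1 kg of non-water mass
# After dehydration, solids make up (1 - 0.98) = 2% of the total weight:
end_weight = solids / (1 - water_frac_after)

print(end_weight)  # 50.0 -- half the weight, from a "1%" change
```

The trap is reading "99% to 98%" additively. The solids doubled as a share of the total, so the total had to halve.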

## Small Probabilities Are Devilish

The potato problem is tricky because small percentages are tricky. The jump between 1% and 2% feels insignificant. I suspect that is an artifact of our additive thinking. But once we point out that 2% is 100% larger than 1%, we get closer to the correct intuition. We can see that a jump from 1% to 2% is more significant than a jump from, say, 50% to 60%. The solution to the potato problem holds the key: you need to look at the significance in payoff space, not probability space.

Let’s consider a bet. In 2014, the betting odds of Donald Trump being elected were, say, 1%. 1% corresponds to 99-1 odds. If his probability increased from 1% to 2%, the new odds are 49-1. A person who was long Trump just doubled their equity in the position. 1 to 2 feels small. But 99 to 49, just like the potato problem, shows how significant that extra 1% truly is. (I have a friend, a world-class gambler actually, who lost \$400k betting against Trump before he was even a nominee. It highlights how dangerous it is to be wrong about the tails of a distribution.)
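To make the odds conversion concrete, here is a small sketch of the fair-odds formula and the equity math (the \$1 ticket size is illustrative):

```python
# Convert a win probability to fair fractional odds-against.
def fair_odds(p):
    return (1 - p) / p

odds_before = fair_odds(0.01)   # 99-to-1
odds_after = fair_odds(0.02)    # 49-to-1

# A $1 ticket bought at fair 99-1 odds returns $100 total if it wins.
payout = 1 + odds_before
# Re-marked at the new 2% probability, its expected value doubles:
equity_after = 0.02 * payout    # $2 of equity on a $1 stake
print(odds_before, odds_after, equity_after)
```

The probability moved 1 point; the position doubled. That asymmetry is invisible if you only stare at the probabilities.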

Meanwhile, a bet with a 33% chance has fair odds of 2-1. A bet with a 25% chance has fair odds of 3-1. While these are large differences in odds, being wrong about them is less likely to be catastrophic. Beware the small probabilities.

I made this chart as a visual reminder that being miscalibrated by just 1% leads to massive error in payoff space when dealing with probabilities below 10%. Again, the difference between 1% and 2% is a 100% error in payoff space, while the difference between 50% and 51% is a mere 2% error in payoff space.
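The chart’s logic can be sketched directly: take a \$1 ticket bought at fair odds for probability p, then re-mark it once the probability turns out to be 1 point higher (the grid of probabilities is illustrative):

```python
# Equity of a $1 ticket bought at fair odds for probability p, then re-marked
# once the true probability turns out to be 1 percentage point higher.
for p in [0.01, 0.05, 0.10, 0.25, 0.50]:
    payout = 1 / p                    # total payout per $1 staked at fair odds
    new_equity = (p + 0.01) * payout  # expected value at the revised probability
    print(f"{p:.0%} -> {p + 0.01:.0%}: $1.00 of equity becomes "
          f"${new_equity:.2f} (a {new_equity - 1:.0%} move)")
```

At p = 1% the move is 100%; at p = 50% the same 1-point miss moves equity by 2%.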

## I Don’t Gamble. Why Should I Care?

If you pay taxes, commute, have ever been a patient, or live near natural disasters, you are gambling. Whether you want to be an ostrich about that is up to you, but modern society is constantly forced to handicap small probabilities. And miscalibration can have catastrophic payoffs. Let’s move to examples.

## Natural Disasters

I’ve covered basic earthquake math before. If the odds of “the big one” in any given year are 1%, it’s a 1-in-100-year event. If it’s a 3% event, it’s a 1-in-33-year event. That seemingly small delta is the difference between ignoring the risk and preparing for one during your homeownership years.
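To see why that delta matters over an ownership horizon, here’s a sketch of the chance of at least one such quake within 30 years (the 30-year horizon is my assumption, and the math treats each year as independent):

```python
# Probability of at least one "big one" over a homeownership horizon,
# assuming each year is an independent draw.
def at_least_once(annual_p, years=30):
    return 1 - (1 - annual_p) ** years

risk_1pct = at_least_once(0.01)  # ~26% chance over 30 years
risk_3pct = at_least_once(0.03)  # ~60% -- closer to a coin flip than a footnote
print(f"{risk_1pct:.0%} vs {risk_3pct:.0%}")
```

A 2-point difference in the annual number turns a background worry into an even-money proposition.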

My commute is a good demonstration of how the government’s assessment of risk affects all of us. Last year, SF undertook a multi-year retrofit of the Transbay Tube to withstand a once-in-1,000-years earthquake. They believe it can currently withstand a 1-in-100-years event. As you can imagine, this is an expensive use of public funds predicated on their ability to estimate small probabilities.

The direct costs are obvious. A dollar for this project is a taxpayer dollar. Which means it’s also a police force, firefighter, or teacher opportunity-cost dollar. Never mind the opaque costs. The retrofit required them to start the trains later every day. I know because I used to take the first train, which is now 30 minutes later. This forced many people whose work hours are extremely rigid to drive to work instead. BART claims this later start will save 4 months of construction time and \$15mm+ in costs. Well, how many lives can be lost to car accidents before that measly \$15mm is offset? That leads us right to our next topic…

## Tort Damages

Remember Ed Norton’s job in Fight Club? He would compute the expected value of lives lost vs the cost of recalling a faulty vehicle. While this sounds callous, this calculation, known as the “value of a statistical life” or VSL, is not the strict domain of evil corporate calculus. It’s the basis of medical damages, workers comp, and various forms of insurance. If you look online you can find ranges of these values based on various countries’ legislation but on average you are talking about a 7-figure sum ascribed to human life.

There’s a rich economic and legal literature dealing with these calculations. We can make inferences from what people pay for insurance or how much they say they would be willing to pay to reduce their risk of injury. No method is perfect, but pragmatism dictates that human life is very much not “priceless”. When I was in college I wrote a paper for an Econ and Law course that tackled this problem by way of revealed preferences. Let’s pretend two occupations are identical qualitatively but differ solely in their risk. The riskier job would pay more. The difference in pay can be used to imply how people price their own lives.

After a quick search, I found that a cab driver’s chance of death on the job was about 2 in 10,000. A logger’s occupational fatality rate is 5x higher, but still just 10 in 10,000, or 1/10th of a percent. Making up numbers now: if a logger makes \$70k per year and the cab driver \$60k, then this revealed-preference method implies a value of \$12.5mm for a human life (the \$10k wage premium divided by the extra 8-in-10,000 annual risk).

Put your distaste for this approach away for a moment and note how sensitive the value of human life is to seemingly small changes in “perceived death risk”. If the logger thought his actual death risk was 1% instead of .10% and he only accepted a \$10k per year premium, he’s valuing his life 12x cheaper. Even though we are talking about a seemingly small change in probability, in percentage terms we increased the risk by 10x, and this becomes obvious when we see the result in payoff space. The logger is valuing his life at a mere \$1mm.
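The revealed-preference arithmetic above can be sketched in a few lines (all inputs are the made-up numbers from the text):

```python
# Revealed-preference value of a statistical life: the wage premium
# divided by the extra annual death risk the worker accepts.
def implied_vsl(wage_premium, risk_diff):
    return wage_premium / risk_diff

# Logger (10 in 10,000) vs cab driver (2 in 10,000), $10k wage premium:
vsl_actual = implied_vsl(10_000, 10 / 10_000 - 2 / 10_000)   # $12.5mm
# Same premium, but the logger perceives his risk as 1% instead of 0.10%:
vsl_perceived = implied_vsl(10_000, 0.01 - 2 / 10_000)       # ~$1.02mm
print(vsl_actual, vsl_perceived)
```

A sub-1-point shift in perceived risk moves the implied value of a life by an order of magnitude, because the risk sits in the denominator.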

The broader policy takeaway is that torts and damages are built up from small probabilities, making them prone to wild swings based on optically small differences in risk assessments.

## Health

Without any math, we can see that the decision to get Lasik or eat blowfish is sensitive to very small probabilities. Lasik you would likely only do once in your life, but the difference between a bad outcome being 1 in 10,000 or 1 in 1,000 might matter if your livelihood depended heavily on your vision.

And for a repeated risk (and I’m sorry, but taking a helicopter every day falls into this category), the math deserves a visit.

No doubt, helicopter accidents are exceedingly rare. In 2019, fewer than half of the accidents were fatal, which is even more comforting. Kobe’s fate was awful luck even considering how frequently he flew. And we can see how flying frequently compounds the cumulative risk. But I want to point out that tripling the accident rate shows up proportionally in the “payoffs”, while in probability space it remains invisible. If I told you the accident rate was 1 per 100,000 flight hours or 3 per 100,000 flight hours, you probably wouldn’t bat an eye.
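A quick sketch of how a per-flight-hour rate compounds for a frequent flyer. The 100 hours per year over 20 years is purely my hypothetical; the rates are the ones quoted above, and the hours are treated as independent:

```python
# Cumulative accident risk from repeated exposure to a small per-hour rate.
def cumulative_risk(rate_per_hour, hours):
    return 1 - (1 - rate_per_hour) ** hours

hours = 100 * 20                             # hypothetical: 100 hrs/yr for 20 yrs
low = cumulative_risk(1 / 100_000, hours)    # ~2.0% over the flying career
high = cumulative_risk(3 / 100_000, hours)   # ~5.8% -- triple rate, ~triple risk
print(f"{low:.1%} vs {high:.1%}")
```

Both per-hour rates look like rounding errors; the career-long totals do not.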

The lesson:

You need to look at the payoffs of small probabilities to appreciate the differences.

…To apply this understanding to your investing, check out my post How Tails Constrain Investment Allocations