Overfitting the Wrong Lesson

A few observations.

  • In the early 1990s you could drink Gatorade if you wanted to be “like Mike.” Today, I’m told, if you drink Brita you can be like Steph (too Bay Area centric?).
  • Nearly every air traveler in the US takes off their shoes at security and totes smaller-than-3.4 oz bottles in their dopp kit.
  • When I was younger, my elders did not read me a book before bed. Today, it’s an obligatory milestone in the bedtime routine odyssey perpetrated by weary parents afraid of being wardens of future troglodyte mongrels.

Let’s play: what do these observations have in common?

I’d argue, with an increasing degree of subtlety from the first proposition to the last, that these prescriptions fall out of our natural human tendency to:

Overfit the data.

It’s fairly obvious to anyone who has ever loved themselves some orange Gatorade that it’s not going to give you those 8 coveted extra inches of fast-twitch verticality. Asking nearly 2 million flyers a day to remove their shoes feels like you’re fighting yesterday’s war, closing the proverbial barn door after the horse has already left. Reading to your kids is a specific act that positively correlates with better outcomes, but it’s not clear whether it’s the reading for its own sake or the fact that it signals you have the means and will to spend time with your kids. If it turned out William Gates Sr. played Twister with young Billy, we’d probably have a whole generation of parents self-flagellating for missing a single night of bedside yoga before lights out.

There’s nobody better than Morgan Housel at weaving historical context, counterfactuals, and psychology to reveal the blind spots in our thinking, especially in the realm of investing, where noise obscures signal like a house of mirrors.

He writes:

The most important lessons from history are the takeaways that are so broad they can apply to other fields, other eras, and other people. That’s where lessons have leverage and are most likely to apply to your own life…the more specific a lesson of history is, the less relevant it becomes. That doesn’t mean it’s irrelevant. But the most important lessons from history are things that are so fundamental to the behaviors of so many people that they’re likely to apply to you and situations you’ll face in your own lifetime.

His most recent work discusses 5 Lessons From History (with my highlights).

You may have heard “anecdotes are not data,” usually and justifiably as an objection to extrapolating from small sample sizes. Charlie Munger also notes that “experience does not scale” when promoting his near-legendary appetite for reading. Reading simply allows you to cover a lot of learning ground, but just as with overfitting conclusions from experience, you must be wary of onboarding the wrong lessons from what you read.

We are drawn to stories. As a species, we are slaves to narratives, and while this capacity allows us to cooperate across differences, it also leaves us prone to learning the wrong lessons. Lessons about extremes tantalize us and embed themselves in our mental RAM for quick access. This ease of retrieval can be expensive if it distorts our reality. When we remember to slow down and search our mental hard disks, we can recall that worrying about car accidents makes more sense than fearing sharks. While that’s a simple example you probably learned at a museum when you were 8, be on guard: the seduction of stories is ubiquitous and usually more subtle. There’s no magic amulet against their misdirection, but I’ll point out a few considerations when forming conclusions.

Complexity

Extreme outcomes are the result of complex interactions. They are difficult to reduce, and you should resist the urge to attribute them to a single cause, especially if that cause is first-order. Like a non-biological version of Dollo’s Law. Professor Kahneman reminds us, “What you should learn when you make a mistake because you did not anticipate something is that the world is difficult to anticipate. That’s the correct lesson to learn from surprises: that the world is surprising.”

Counterfactual Thinking

The lesson of winning on a roulette wheel is clearly not “bet on roulette wheels.” But we make that kind of mistake all the time. It’s possible to get lucky and draft Draymond Green (it’s Finals time in the Bay Area, I can’t help this) in the second round, but it would be foolish to re-rate the value of second-round draft picks based on this outcome. It’s important to set criteria before the fact that help you evaluate whether you are making good or bad judgments. Don’t allow the outcomes to become the absorbing barrier of your post-mortems.

Don’t confuse being successful “because” of a reason when the success is actually “despite” that same reason. Elad Gil makes the point that booming tech companies often adopt silly policies and conclude that those policies were at the heart of their success, when in fact the geometric growth of the market they reside in is masking how those policies are net detractors.

Scott Alexander points out the related mistake: “when people use the [reduced] ozone hole as an argument against alarmism, environmentalism is a victim of its own success.” Taleb makes a similar claim when he points out that counter-terrorism measures are a victim of their own success: people conclude that terrorism isn’t something to be alarmed about because its incidence is statistically very low.

Correlation vs Causation

You do not get a swimmer’s body from swimming any more than you get taller from playing basketball. Those naturally endowed with a swimmer’s body who take up swimming seriously become good swimmers, just as you would expect any phenotype-sport match to work. While this example is pretty simple, the general category of this mistake is to confuse correlation for causation. Throw in lagging and leading correlations and finding cause can quickly become harder than finding Waldo on a huge mural.

Avi reminded me of the best spurious correlation site. I can’t speak for non-finance, but in our world, we are careful about how we weight correlations. They are notoriously non-stationary. The degree of instability is a function of the subjectivity behind an asset’s narrative drivers. A bond has less narrative subjectivity than bitcoin, so its correlation is more trustworthy, albeit still very noisy. Demonetized covers this idea nicely.
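
To make the non-stationarity concrete, here’s a minimal sketch, assuming the standard numpy/pandas stack; the two series and the regime change are invented for illustration, not a claim about any real asset:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)

def correlated_returns(n, rho):
    # Two return series with target correlation rho (Cholesky-style mix).
    z1, z2 = rng.standard_normal(n), rng.standard_normal(n)
    return z1, rho * z1 + np.sqrt(1 - rho**2) * z2

# Regime 1: true correlation +0.6. Regime 2: it flips to -0.2.
x1, y1 = correlated_returns(500, 0.6)
x2, y2 = correlated_returns(500, -0.2)
asset_a = pd.Series(np.concatenate([x1, x2]))
asset_b = pd.Series(np.concatenate([y1, y2]))

# A 60-day rolling correlation swings widely and lags the regime change.
rolling_corr = asset_a.rolling(60).corr(asset_b)
print(rolling_corr.min(), rolling_corr.max())
```

The estimate you trust depends entirely on the window you happened to look at, which is the practical meaning of non-stationarity.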

Signal vs Noise

A quote from Mike Mauboussin: “When luck plays a part in determining the consequences of your actions, you don’t want to study success to learn what strategy was used but rather study the strategy to see whether it consistently led to success.” Mauboussin does top-notch research in finance, especially on competitive dynamics. A big takeaway: the closer the skill level of the competition, the more outcome is driven by luck. One application is deciding whether to pay an investment manager a hefty fee for attempting to generate superior relative performance (i.e., alpha). If the playing field is relatively level, you should refrain, since the fee-to-alpha ratio would be high. A more mundane version: you might stake a sharp poker player to play in a patsy tournament, but you would not stake that player in a game of Candyland.
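
A toy simulation of that takeaway, with made-up numbers: hold the luck component fixed and shrink the spread of skill, and the most skilled competitor wins less and less often.

```python
import numpy as np

rng = np.random.default_rng(0)
N_PLAYERS, N_SEASONS = 100, 2000

def best_player_win_rate(skill_sd, luck_sd=1.0):
    """How often the most skilled player actually finishes first."""
    wins = 0
    for _ in range(N_SEASONS):
        skill = rng.normal(0, skill_sd, N_PLAYERS)
        outcome = skill + rng.normal(0, luck_sd, N_PLAYERS)
        wins += outcome.argmax() == skill.argmax()
    return wins / N_SEASONS

# As the field's skill converges (smaller sd), luck decides who wins.
for skill_sd in (2.0, 1.0, 0.25):
    print(f"skill sd {skill_sd}: best player wins "
          f"{best_player_win_rate(skill_sd):.0%} of seasons")
```

When the win rate approaches 1/N_PLAYERS, you are effectively paying the poker player to play Candyland.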

Mean Reversion vs Feedback Loop Dynamics

Regression to the mean is a powerful force. Some examples from my recent reading of How Not To Be Wrong (a toy simulation follows the list):

  • When a rock band’s debut album is a smash hit, you should expect a ‘sophomore slump’.
  • RBs who regress after signing a big contract on the heels of a great season are often labeled ‘unmotivated,’ but anything other than regression should be surprising. It need not be related to incentives; in fact, you might even expect a competitive person like a pro athlete to try extra hard to prove their worth.
  • We over-attribute a company’s future underperformance to the competition when regression often suffices as an explanation.
  • Bran was shown to increase digestion speed in slow digesters and slow it in fast digesters. Further analysis revealed that these effects were no stronger than the mean reversion you’d expect when dealing with extreme patients.
  • Diet studies. It is typical for people to go on a diet when the scale hits an extreme end of their range, giving diets too much credit for the reversion.
  • Scared Straight programs expose troubled youth to harsh detention center environments. Much of the apparent Scared Straight effect was just mean reversion.
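
Here’s the arithmetic behind all of these in one minimal sketch (the normal distributions and the 50/50 skill-luck split are assumptions for illustration): select the top performers in one period and watch the same group fall back in the next, with no change in underlying ability.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000

skill = rng.normal(0, 1, n)            # stable underlying ability
season1 = skill + rng.normal(0, 1, n)  # performance = skill + luck
season2 = skill + rng.normal(0, 1, n)  # same skill, fresh luck

standouts = season1 >= np.quantile(season1, 0.90)  # season-1 top decile
print(f"standouts, season 1: {season1[standouts].mean():.2f}")  # about 2.5
print(f"same group, season 2: {season2[standouts].mean():.2f}")  # about half that
```

The standouts were disproportionately lucky, so on average they give back the luck component while keeping their (still above-average) skill. No motivation story required.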

Not all phenomena are mean reverting. Virtuous and vicious cycles use feedback to refuel a recurring pattern so that we end up with runaway divergent effects. It is fashionable to say this is currently occurring with tech firms, which use your data to improve their products, which attracts more customers, which spawns more data, and repeat. Throw in the power of network effects and you end up with the bulk of the world’s 10 largest companies being data-forward tech firms. The competitive exclusion principle in action.
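
The divergent dynamic is easy to see in a toy model (the two-firm setup and the feedback exponent are invented for illustration): unlike the mean-reverting sketch above, here a near-trivial initial edge compounds into dominance.

```python
import numpy as np

# Two firms start almost even; each period, customers flow toward
# the firm with more data, which further improves its product.
share = np.array([0.51, 0.49])
for _ in range(50):
    advantage = share ** 1.1             # the data edge compounds
    share = advantage / advantage.sum()  # customers chase quality
print(share.round(3))                    # -> roughly [0.99, 0.01]
```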

Binary Thinking Traps

Movies and stories have you associate extreme intelligence with social awkwardness. John Nash and Asperger’s geniuses. Extremes. Contrast that with the street-smart meme: Han Solo and Henry Hill. These vivid extremes are very mentally available and lead us to think in false dichotomies. We are fooled into thinking high-EQ people and high-IQ people are mutually exclusive or inversely related. In fact, EQ and IQ are positively correlated, and if you winsorize the extremes, it probably doesn’t surprise you that IQ and EQ share many underlying factors. Why? Who knows; environmental factors could be as basic as a safe upbringing and food to eat, while genetic factors could be as simple as memory. The point is that extreme anecdotes can easily distract you from the far more common observation that IQ and EQ are more coincident than incompatible.

Another example of a binary thinking trap is to forget that individual differences between people are much wider than population-level differences. What does this mean in practice? Consider the stereotype that Asians are better than whites at math. At a population level, this may or may not be true, but even if it were, it’s unlikely to be by a large margin. Assign whites a mean score of 100 and Asians 101, and say the standard deviation of these scores is 20 for each group. If you randomly drew an Asian and a white from the population, the chance that the Asian is better at math is about 51%, so slight as to be meaningless. The reason is that the distribution of the difference has a variance equal to the sum of the individual distributions’ variances. In other words, even if you were confident of the population-level variation of an attribute, you would not draft a Math Olympiad team with that knowledge, especially when you have more predictive data like the kids’ test scores, their performance in class, and their homework. The information from these sources dominates the information you have from the population, but we often overweight high-variance differences in mathematically unjustifiable ways.
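
The 51% figure takes only a few lines to verify (a quick check using scipy and the toy numbers above):

```python
from scipy.stats import norm

mean_a, mean_b, sd = 101, 100, 20  # the toy population numbers above

# The difference of two independent normals is itself normal:
# mean = 101 - 100 = 1, variance = 20**2 + 20**2 = 800, sd ~= 28.3.
diff_sd = (sd**2 + sd**2) ** 0.5

p = norm.sf(0, loc=mean_a - mean_b, scale=diff_sd)  # P(difference > 0)
print(f"P(random A beats random B) = {p:.3f}")      # ~0.514
```

A one-point difference in means disappears into a 28-point spread of individual differences.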

So next time someone says “these people are on average better than those people,” be suspicious, and remember that even if the claim were true, the signal in it is likely so weak that the context you have at hand should dominate it.

Finally…

When we draw conclusions or lessons from stories and cases it pays to inoculate ourselves against gut reactions. Calibrating to reality is not easy, requires energy, and is often in conflict with our deep-seated desire to integrate new information in a way that does not contradict or disturb the safety of our prior beliefs. Understanding causality, as arduous as it can feel, is important. Remember the words of Joseph Tussman:

“What the pupil must learn, if he learns anything at all, is that the world will do most of the work for you, provided you cooperate with it by identifying how it really works and aligning with those realities. If we do not let the world teach us, it teaches us a lesson.”
