Friends,
You may have seen this question:
In a lake, there is a patch of lily pads. Every day, the patch doubles in size. If it takes 48 days for the patch to cover the entire lake, how long would it take for the patch to cover half of the lake?
It’s a variation of the “how many times would you need to fold a paper in half for the thickness to reach the moon?” question you have probably heard. There’s also the rice-on-a-chessboard version.
Why are there so many covers of the same idea? Because even when people know it’s a trap, they still get it wrong. Like watching someone smell something you warned them was gross, it never gets old. And these questions persist because we have no intuition for geometric growth.
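For concreteness, the arithmetic behind all three puzzles fits in a few lines. A minimal sketch, assuming round figures of 0.1 mm for a sheet of paper and 384,400 km for the Earth-moon distance:

```python
import math

# Lily pads: the patch doubles daily and covers the lake on day 48,
# so it covered half the lake exactly one doubling earlier.
half_lake_day = 48 - 1
print(half_lake_day)  # 47

# Paper folding: thickness doubles with each fold.
thickness_mm = 0.1                 # assumed sheet thickness
moon_mm = 384_400 * 1_000 * 1_000  # km -> mm
folds = math.ceil(math.log2(moon_mm / thickness_mm))
print(folds)  # 42

# Chessboard: one grain on the first square, doubling across 64 squares.
total_grains = 2**64 - 1
print(total_grains)  # roughly 1.8e19 grains
```

The point isn’t the answers; it’s that every one of these puzzles collapses to “count the doublings,” which our linear intuition refuses to do.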
Jacob Falkovich writes:
Before Rationality gained a capital letter and a community, a psychologist developed a simple test to identify people who can override an intuitive and wrong answer with a reflective and correct one.
Feel free to take that “test” here. (Link)
A Fatal Combination Of Cognitive Errors
So it appears our System 1 thinking is restricted to linear intuition. That wouldn’t be a problem in isolation if most people were capable of passing a CRT and overriding their System 1 response. I’m not well-versed in the CRT literature, but I suspect most subjects don’t even have an intuition for when a growth problem lives in Mediocristan versus Extremistan. There’s another angle, though. If it turns out most people are at least socially aware that these questions are traps and they still get them wrong, then I’m extra sorry. That means we can recognize something’s up, but the bottleneck is 2nd-grade arithmetic.
So we don’t know when our slower, methodical thoughts should take the reins from our gut reactions. Or worse, our slower thoughts don’t even know how to drive. But really getting stuck in the mud requires a wider, community-level cognitive failure.
Falkovich continues:
Most people sitting alone in a room will quickly get out if it starts filling up with smoke. But if two other people in the room seem unperturbed, almost everyone will stay put. That is the result of a famous experiment from the 1960s and its replications — people will sit and nervously look around at their peers for 20 minutes even as the thick smoke starts obscuring their vision.
The coronavirus was identified on January 7th and spread outside China by January 13th. American media ran some stories about how you should worry about the seasonal flu instead. The markets didn’t budge. Rationalist Twitter started tweeting excitedly about R0 and supply chains. (Link)
So let’s sum up:
- We have poor intuition about geometric processes.
- Many people don’t override this intuition because they don’t realize when they should.
- Even if they realize they should, they often can’t add.
- And those who do override it are socially inhibited.
The devil is too smart to knock on each person’s front door. He waits for people to get together, then slips the poison in the punch. Remember: alarmism about any 1% event has a 99% chance of being indistinguishable from crying wolf.
The Flu Kills More People
Wrong logic. A frequentist will look at Zika, SARS, Ebola, and swine flu and conclude “overblown.” This is the definition of survivorship bias: the fat, happy turkey who thinks November will be just like the prior months. Two people can come to opposite conclusions if one merely counts past results while the other goes below the surface to find the underlying dynamic. Tyler Cowen generalizes the camps into “base raters” vs. “growthers”. (Link)
This has been my favorite thread quantifying the trajectory and timing of COVID-19 penetration, hospital bed and mask shortages, and the interaction of these variables. (Link)
The Money Angle
On the Gestalt University podcast, Chris Schindler has an intuitive explanation for the CAPM-defying empirical result that says higher volatility assets actually exhibit lower forward returns. Very simply explained — a large dispersion of opinion leads to overpaying. He points to private markets where you cannot short a company. The most optimistic opinion of a company’s prospects will set the price.
Options markets don’t care about CAPM. They model geometric returns, and higher volatility explicitly maps to lower expected geometric returns. I’ve referred to this idea as a “volatility drain” before. But here’s another way to see it: if you hold the price of an asset constant and raise the volatility, the median expected outcome is necessarily worse. Why? Because a stock is bounded by zero. Raising volatility stretches the upside tail, which by itself should push the expected value higher. So if the market thinks the stock is worth the same despite the higher volatility, the probability of the asset declining must be higher to offset that fatter upside.
(In reality, markets are constantly voting on the price, the volatility, and the left and right skew which allows an inclined observer to impute a continuous distribution.)
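A quick simulation makes this concrete. This is a minimal sketch, not any market’s actual model: assume terminal prices are lognormal, pin the expected terminal price at today’s price, and vary only volatility. The median outcome falls and the probability of a decline rises as vol goes up:

```python
import numpy as np

def median_and_prob_down(spot=100.0, sigma=0.2, t=1.0, n=1_000_000, seed=0):
    """Simulate lognormal terminal prices with E[S_T] pinned at spot."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n)
    # The -sigma^2/2 drift adjustment keeps the mean equal to spot
    # for every volatility level.
    s_t = spot * np.exp(-0.5 * sigma**2 * t + sigma * np.sqrt(t) * z)
    return np.median(s_t), np.mean(s_t < spot)

for sigma in (0.1, 0.3, 0.6):
    med, p_down = median_and_prob_down(sigma=sigma)
    print(f"vol={sigma:.0%}  median={med:6.1f}  P(decline)={p_down:.2f}")
```

Analytically, the median is spot × exp(-σ²t/2), so at 60% vol the median outcome is roughly 84 even though the mean stays at 100: same price, same expected value, worse typical result.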
Back to Schindler’s point, if you want to fetch a high price for an asset, you want its value to be highly uncertain. Then sell it in an un-shortable auction with many bidders.
Climb Higher
You will probably relate.
Picture some serious non-fiction tomes. The Selfish Gene; Thinking, Fast and Slow; Guns, Germs, and Steel; etc. Have you ever had a book like this—one you’d read—come up in conversation, only to discover that you’d absorbed what amounts to a few sentences? I’ll be honest: it happens to me regularly. Often things go well at first. I’ll feel I can sketch the basic claims, paint the surface; but when someone asks a basic probing question, the edifice instantly collapses. Sometimes it’s a memory issue: I simply can’t recall the relevant details. But just as often, as I grasp about, I’ll realize I had never really understood the idea in question, though I’d certainly thought I understood when I read the book. Indeed, I’ll realize that I had barely noticed how little I’d absorbed until that very moment.
Andy Matuschak is a designer, engineer and researcher. I’m a fan of his writing and his cred is impressive. He helped design iOS and ran R&D at Khan Academy. He describes his work as:
building technologies that expand what people can think and do. I explore ideas by expressing them in real-world systems, juggling approaches from industry and academia to seek insights they can’t see alone. Thinking through making.
In his essay Why Books Don’t Work, he makes claims that provide a basis for his career. It may force you to reevaluate your impression of familiar activities. (Link with my highlights)
Books
- Books are surprisingly bad at conveying knowledge, and readers mostly don’t realize it.
Lectures
- Lectures don’t work because the medium lacks a functioning cognitive model. It’s (implicitly) built on a faulty idea about how people learn—transmissionism—which we can caricature as “lecturer says words describing an idea; students hear words; then they understand.” When lectures do work, it’s generally as part of a broader learning context (e.g. projects, problem sets) with a better cognitive model. But the lectures aren’t pulling their weight. If we really wanted to adopt the better model, we’d ditch the lectures, and indeed, that’s what’s been happening in US K–12 education.
Education is changing.
To understand something, you must actively engage with it. That notion, taken seriously, would utterly transform classrooms. We’d prioritize activities like interactive discussions and projects; we’d deploy direct instruction only when it’s the best way to enable those activities. I’m not idly speculating: for the last few decades, this has been one of the central evolutionary forces in US K–12 policy and practice.
This is a topic I’m thinking about a lot these days. Whether we are re-training adults or experimenting with childhood learning, this is a thread worth watching.
If interested, know that Matuschak’s claims about books are contested. (Link)
Last Call
1) I shared Taylor Pearson’s useful COVID-19 list of Twitter follows. Synthesizing some lessons, he writes:
a) This is very likely to be mild (e.g. no worse than the flu).
b) One should take relatively aggressive and early precautions in case it is severe.
Check out his Coronavirus Primer for Reasonably Rational People (Link)
2) John Gottman is a legend in the field of marital counseling.
Gottman decided the field needed statistical rigor, and that he – a former MIT math major – was exactly the guy to enforce it. He set up a model apartment in his University of Washington research center – affectionately called “the Love Lab” – and invited hundreds of couples to spend a few days there – observed, videotaped, and attached to electrodes collecting information on every detail of their physiology. While at the lab, the couples went through their ordinary lives. They experienced love, hatred, romantic dinners, screaming matches, and occasionally self-transformation. Then Gottman monitored them for years, seeing who made things work and who got divorced. Did you know that if a husband fails to acknowledge his wife’s feelings during an argument, there is an 81% chance it will damage the marriage? Or that 69% of marital conflicts are about long-term problems rather than specific situations? John Gottman knows all of this and much, much more.
Gottman claims he can predict divorce with over 90% accuracy. He has trained countless therapists, and his book The Seven Principles for Making Marriage Work has sold millions of copies. The author of Slate Star Codex happens to be a psychiatrist. Enjoy his book review. (Link)
3) This 40-second video was the best thing I saw on social media this week. (Link)
From my actual life
Without any local plans, we just took care of some chores, and most nights we played Quacks of Quedlinburg with Zak. For the game nerds: it’s a bit like a deck builder (technically a bag builder) with a don’t-bust, press-your-luck mechanic. To most of you that means nothing, but for the rest, you should know this is an outstanding game. It’s fun, and, though seasoned gamers won’t necessarily like this, it has enough luck to let a first grader compete with an adult. I found myself thinking quite a bit about the value of the “options” in the game (they’re actually chips representing ingredients in a potion recipe) and their respective costs. Someone with a finance background who looked past the game skin would see theta, volatility, and vega. An engineer would see a very pure simulation (most likely AI) problem, especially since the game has no trading interactions. Avi tells me the designer is coming out with a much heavier follow-up catering to a less casual crowd.
Here’s a random bit.
My 3-year-old, Maxen, is obsessed with dogs. He constantly pretends he is one, barking and crawling on all fours. He can never let one pass by without giving it the full Pepe Le Pew treatment. Friday he asked for one. He doesn’t know this is never happening. His grandmother lives with us. She believes all dogs are members of a sleeper cell waiting for their chance. Actually, that’s the wrong metaphor. This is more widespread. More like a canine Skynet. We think we have programmed our “best friends” but it’s only a matter of time before the pack becomes self-aware. I’m not kidding. She doesn’t even trust puppies.
So cohabitation with grandma and Sparky is a non-starter. I keep trying to explain to her how ridiculous she is. If the Arab Spring taught us anything, it’s that a networked rebellion would require large-scale coordination. Twitter. Smartphones.
Thumbs.
Have a good week all.