Notes on How Not to Be Wrong: The Power of Mathematical Thinking

How Not to Be Wrong: The Power of Mathematical Thinking by Jordan Ellenberg

• Math gives you tools that extend your common sense about reasoning and logic. He uses the analogy of Iron Man’s suit.
• Competitions, including war, are often decided by small edges. Being 5% better at x or y can decide the outcome over a long enough game. (Similar to my experience with board games: a more efficient engine in a game of 7 Wonders or Settlers saves you actions, like reducing the cost of capturing the victory points you need to win.)
• Zoom in on points of a curve and they look like, and can be approximated by, lines. Trajectories are curves influenced by gravity, yet over short stretches objects appear to move in straight lines. Using lines to approximate curves is the basis of calculus, and even of Archimedes’ derivation of the area of a circle: he approximated the circle with inscribed polygons of more and more sides. Imagine an octagon, then a polygon with 64 sides, and so forth until it looks like a circle. You can then use trigonometry to compute the area of the triangles you keep creating, and the sum of their areas approximates the area of the circle.
• Critics of this method, like Zeno, highlighted the uncomfortable paradox of constantly halving a distance and seemingly never arriving. Comically, the Cynic Diogenes countered Zeno by simply walking across the room to make the point that motion is indeed possible!
• The law of large numbers explains why South Dakota can have the highest rate of brain cancer and North Dakota the lowest: they both have small populations. Be careful when comparing rates across two very different sample sizes, because small samples are more volatile. Flip 10 coins and getting 8 heads is unlikely but realistic; flip 1,000 coins and getting 800 heads is nearly impossible.
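Archimedes’ construction is easy to replay numerically. Below is a minimal sketch (my own, not from the book) of the inscribed-polygon half of the argument; the side length of each doubled polygon comes from the half-angle identity, so no value of pi is assumed anywhere:

```python
import math

def polygon_pi(doublings):
    """Approximate pi as the area of a regular polygon inscribed in a
    unit circle, doubling the number of sides Archimedes-style."""
    n, s = 6, 1.0  # start from a hexagon: side length exactly 1
    for _ in range(doublings):
        # Half-angle identity for the side of the doubled polygon,
        # written in a numerically stable form.
        s = s / math.sqrt(2 + math.sqrt(4 - s * s))
        n *= 2
    apothem = math.sqrt(1 - (s / 2) ** 2)  # height of each thin triangle
    return n * 0.5 * s * apothem           # n triangles tile the polygon

print(polygon_pi(16))  # creeps up toward 3.14159...
```

With 16 doublings the polygon has about 400,000 sides and the area agrees with pi to roughly nine decimal places.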
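The coin-flip claim can be checked exactly with the binomial distribution rather than by simulation; this little calculation is mine, not the book’s:

```python
from math import comb

def prob_at_least(k, n, p=0.5):
    """Exact probability of k or more successes in n independent flips."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

small = prob_at_least(8, 10)      # 8+ heads out of 10
large = prob_at_least(800, 1000)  # 800+ heads out of 1,000
print(small)  # 56/1024 ≈ 0.0547: unlikely but real
print(large)  # astronomically small: the large sample can't stray that far
```

Same proportion of heads (80%), wildly different probabilities: that asymmetry is the whole Dakotas story.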

Data mining

“The more chances you give yourself to be surprised the higher your threshold for surprise had better be.”

“A significance test is a scientific instrument and like any other instrument, it has a certain degree of precision. If you make the test more sensitive, by increasing the size of the study’s population, for example, you enable yourself to see ever-smaller effects. That’s the power of the method but also the danger.”

• An underpowered study has the opposite problem: you dismiss an effect that your method was too weak to see. A good example is the original 1985 hot hand studies. They rejected the idea of a hot hand, but it turns out the methods they used rejected a hot hand even on data sets generated by simulations that deliberately baked one in! In fact, those methods failed to notice even the effect of good vs. bad defenses, which we know influences offensive shooting percentages.
• The final verdict is that there may be some hot hand effect, but if it exists it is too small to detect reliably. In fact, players who think they are hot take harder shots and perform worse, so it is best for them not to believe in the effect: any real benefit will be more than offset by unjustifiably confident shot selection.
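A toy simulation (my own construction, not from the book) makes the “many chances to be surprised” point concrete: run enough studies of a coin with no bias at all, and a steady trickle of them will clear the usual 0.05 bar anyway:

```python
import random
from math import comb

random.seed(0)

def p_value(heads, n=100):
    """One-sided exact p-value: chance of `heads` or more in n fair flips."""
    return sum(comb(n, i) for i in range(heads, n + 1)) / 2**n

# 1,000 "studies" of a coin we KNOW is fair, each testing for a heads bias.
false_positives = 0
for _ in range(1000):
    heads = sum(random.random() < 0.5 for _ in range(100))
    if p_value(heads) < 0.05:
        false_positives += 1

print(false_positives)  # dozens of "significant" findings with no effect at all
```

Roughly 4–5% of the null studies look significant, which is exactly what the 0.05 threshold promises; the trouble starts when only those get published.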

The Bayesian examples in the book are great.

• In a Bayesian framework, how much you believe something after you see the evidence depends not just on what the evidence shows but also on how much you believed it to begin with. Posterior probabilities still depend on the strength of your priors.
• On conspiracy theories: “If you do happen to find yourself partially believing a crazy theory, don’t worry — probably the evidence you encounter will be inconsistent with it, driving down your degree of belief in the craziness until your beliefs come in line with everyone else’s. Unless, that is, the crazy theory is designed to survive the winnowing process. That’s how conspiracy theories work”.
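The prior-dependence point can be shown with a two-line application of Bayes’ rule; the numbers here are invented for illustration:

```python
def posterior(prior, p_evidence_if_true, p_evidence_if_false):
    """Bayes' rule: updated belief after seeing one piece of evidence."""
    num = prior * p_evidence_if_true
    return num / (num + (1 - prior) * p_evidence_if_false)

# The same evidence (4x as likely if the theory is true) moves a
# skeptic and an agnostic to very different places.
print(posterior(0.01, 0.8, 0.2))  # skeptic: still under 4% belief
print(posterior(0.50, 0.8, 0.2))  # agnostic: now at 80% belief
```

Identical evidence, identical likelihoods; only the priors differ, and so do the conclusions.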

Tradeoffs and cost of perfection

• Stigler-type arguments that optimal decisions often leave a margin for error. Getting to the airport early enough to have a 100% chance of making the flight is probably so conservative that it is wasteful (it depends on your utility curve, but being 100% certain rather than, say, 95% certain is almost certainly wasteful). When you read a story about Social Security overpaying people because they were actually dead, it turns out that mistake represents less than one basis point of payments. In other words, the agency does a great job avoiding this mistake, and pushing for 100% compliance may simply not be cost-effective.
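Stigler’s airport argument can be sketched with a toy expected-cost model. All the numbers below (the delay distribution, the 300-minute cost of missing a flight) are invented; the point is only that the minimum lands at a nonzero miss probability:

```python
import math

MISS_COST = 300.0  # assumed "cost" of missing the flight, in minutes of hassle

def p_miss(t):
    """Assumed chance of missing the flight if you arrive t minutes early."""
    return math.exp(-t / 20)

def expected_cost(t):
    """Minutes wasted waiting, plus the expected cost of missing."""
    return t + MISS_COST * p_miss(t)

best = min(range(240), key=expected_cost)
print(best, p_miss(best))  # t ≈ 54 minutes, leaving about a 6.7% miss chance
```

Under these made-up numbers, the cost-minimizing traveler accepts a real chance of missing the flight; driving that chance to zero costs more in waiting than it saves.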

St Petersburg and the role of expected utility

• Fran Lebowitz’s utility curve for money: each month she would drive a cab until she could eat and pay rent; afterwards, she would write. In other words, she had a linear utility curve that flattened abruptly. If you raise her taxes she works more, as opposed to someone with a logarithmic curve, who is at a point of indifference between work and leisure.
• The Ellsberg paradox highlights the limitations of utility theories. It points to the difference between what Rumsfeld called “known unknowns,” what mathematics refers to as risk, and “unknown unknowns,” or uncertainty. Utility theory can handle risk, but formal math is less useful for genuine uncertainty.
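The St. Petersburg game itself (the pot starts at $2 and doubles on every heads; you win the pot at the first tails) shows why utility curves matter: the expected dollar value diverges, while expected log-utility stays finite. A quick check, truncated at 30 rounds:

```python
import math

# Round k happens with probability 2^-k and pays 2^k dollars.
ev = sum((0.5**k) * 2**k for k in range(1, 31))            # $1 per round, forever
eu = sum((0.5**k) * math.log(2**k) for k in range(1, 31))  # converges

print(ev)  # 30.0 after 30 rounds; grows without bound as rounds are added
print(eu)  # ≈ 1.3863 = 2·ln(2): a log-utility player pays only a few dollars
```

Each extra round adds a full dollar of expected value but almost nothing to expected log-utility, which is Bernoulli’s resolution of the paradox.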

Regression to the mean explains many phenomena that are usually attributed to other causes.

• Examples: the best-performing companies (usually attributed to competition), the musician’s or writer’s sophomore slump, running backs after signing a big contract, dietary fiber appearing to speed or slow digestion, the Scared Straight juvenile detention program, and diet effects measured when people are at their peak weights. When something is at an extreme, we should expect reversion simply because of the math, and we should therefore be very careful about attributing the change to an intervention.

Correlations between variables reduce the information content of each variable.

• If you try to identify criminals by foot size and hand size, you are choosing highly correlated variables; the second measurement adds little new information.
• Strong correlations lie behind how we compress images and music files. A green pixel is probably next to a green pixel.
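The pixel example can be demonstrated with delta encoding, one of the simplest ways compressors exploit neighbor correlation. The “image row” below is synthetic (a slow random walk, standing in for a smooth gradient):

```python
import random
import zlib

random.seed(0)

# A toy "image row": each pixel is within ±2 of its neighbor.
row = [128]
for _ in range(9999):
    row.append(max(0, min(255, row[-1] + random.randint(-2, 2))))

raw = bytes(row)
# Delta encoding keeps only the surprise: each pixel minus its neighbor.
deltas = bytes((row[i] - row[i - 1]) % 256 for i in range(1, len(row)))

print(len(zlib.compress(raw)), len(zlib.compress(deltas)))
```

The delta stream uses only five symbols, so it compresses far below the raw row; if the pixels had been independent noise, neither version would shrink at all.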

• (via Wikipedia) The most common example of Berkson’s paradox is a false observation of a negative correlation between two positive traits, i.e., that members of a population which have some positive trait tend to lack a second. Berkson’s paradox occurs when this observation appears true when in reality the two properties are unrelated—or even positively correlated—because members of the population where both are absent are not equally observed.
• Wikipedia summarized Ellenberg’s attractiveness example:

Suppose Alex will only date a man if his niceness plus his handsomeness exceeds some threshold. Then nicer men do not have to be as handsome to qualify for Alex’s dating pool. So, among the men that Alex dates, Alex may observe that the nicer ones are less handsome on average (and vice versa), even if these traits are uncorrelated in the general population. Note that this does not mean that men in the dating pool compare unfavorably with men in the population. On the contrary, Alex’s selection criterion means that Alex has high standards. The average nice man that Alex dates is actually more handsome than the average man in the population (since even among nice men, the ugliest portion of the population is skipped). Berkson’s negative correlation is an effect that arises within the dating pool: the rude men that Alex dates must have been even more handsome to qualify.
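Berkson’s effect is easy to reproduce in a simulation (my own sketch of the setup above): draw two independent traits, keep only the pairs whose sum clears a bar, and a negative correlation appears out of nowhere:

```python
import random

random.seed(0)

# Niceness and handsomeness are independent in the population...
population = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(100_000)]
# ...but Alex only dates men whose combined score clears a threshold.
dated = [(n, h) for n, h in population if n + h > 1.5]

def corr(pairs):
    """Pearson correlation of a list of (x, y) pairs."""
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs) / n
    vx = sum((x - mx) ** 2 for x, _ in pairs) / n
    vy = sum((y - my) ** 2 for _, y in pairs) / n
    return cov / (vx * vy) ** 0.5

print(corr(population))  # ≈ 0: the traits really are unrelated
print(corr(dated))       # strongly negative inside the dating pool
```

Nothing about the men changed; only the observation window did, which is the whole paradox.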

Asymmetric domination effect

• Aka the “decoy effect”: when a clearly inferior option is introduced into one’s menu of choices, it makes the dominating choice appear even better than it did against its prior competitor. He uses the example of slime mold behavior, which implicitly ranks preferences between more food (oats) and dark environments.

Marketing is the most common domain of the decoy effect, but it’s also present elsewhere.

• Price tables: These, like Ariely’s famous Economist subscription example, frequently display the decoy effect.

• Menus and wine lists: Putting an expensive option at the top of a menu makes the other meals seem cheaper (remember anchoring?). Similarly, wine lists make use of the decoy effect: “People often order the second cheapest wine on the list and not the cheapest because they don’t want to look too stingy. Most of the time, the second cheapest wine is the one that has the highest profit margin.”

• Romance: Ariely offers some dating advice: If you are looking to meet that special someone in a social setting, he recommends bringing someone who looks similar to you but is less attractive. They will act as a decoy, making you seem more attractive by comparison. And if a “similar but better-looking friend of the same sex asks you to accompany him or her for a night out, you might wonder whether you have been invited along for your company or merely as a decoy.”

• Elections: Studies show that third candidates and minor parties influence your voting preferences by acting as decoys.

Democracy as a tool for rightness, not fairness

This was an interesting comment on what is, at heart, a question of whether democracy is a positive tool (for getting right answers) or a normative one (for fairness), and on how many ideas about fairness are never considered in light of their normative merit.

Condorcet thought that questions like “Who is the best leader?” had something like a right answer, and that citizens were something like scientific instruments for investigating those questions, subject to some inaccuracies of measurement, to be sure, but on average quite accurate. For him, democracy and majority rule were ways not to be wrong, via math.
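The formal result behind Condorcet’s optimism is his jury theorem (the notes above only paraphrase it): if each voter is independently right with probability even slightly above one half, the probability that the majority is right approaches 1 as the electorate grows. A direct binomial calculation:

```python
from math import comb

def majority_right(n, p):
    """Probability that a majority of n independent voters, each correct
    with probability p, picks the right answer (n odd)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

# Voters who are barely better than coin flips, in growing electorates:
for n in (1, 11, 101, 1001):
    print(n, majority_right(n, 0.55))  # climbs from 0.55 toward 1
```

A 55%-accurate individual is a mediocre instrument, but a thousand of them voting together are almost never wrong, which is exactly Condorcet’s “not to be wrong, via math.”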

We don’t talk about democracy that way now. For most people, nowadays, the appeal of democratic choice is that it’s fair; we speak in the language of rights and believe on moral grounds that people should be able to choose their own rulers, whether these choices are wise or not.

This is not just an argument about politics—it’s a fundamental question that applies to every field of mental endeavor. Are we trying to figure out what’s true, or are we trying to figure out what conclusions are licensed by our rules and procedures? Hopefully the two notions frequently agree, but all the difficulty, and thus all the conceptually interesting stuff, happens at the points where they diverge.