Moontower #291

Friends,

The phrase “studies show…” is almost always followed by bs. To understand why, I’ll point you to a post by David Epstein.

David is the author of Range, The Sports Gene, and the new book Range: Adapted for Young Readers, which I bought for my 12-year-old son and nephew.

I’ve been a long-time reader of David’s letter and this post is both useful and timeless.

Everything in Your Fridge Causes and Prevents Cancer

It’s a reminder that outlier studies and results in general make headlines, but are statistically inevitable if you do enough studies.

Excerpt:

It wasn’t every sauna enthusiast who reaped the supposed protective effect against dementia; it was specifically those who used a sauna 9-12 times a month. Sauna bathers who hit the wooden bench 5-8 times a month — sorry, no effect. And those who went more than 12 times a month — again, no luck.

That should raise a caution flag in your head.

When only a very specific subpopulation in a study experiences a benefit, it may indeed be that there is some extremely nuanced sweet spot. But it is more likely that the researchers collected a lot of data, which in turn allowed them to analyze many different correlations between sauna use and dementia; the more different analyses they can do, the more likely some of those analyses will generate false positives, just by statistical chance. And then, of course, those titillating positive results are the ones that end up at the top of the paper, and in the press release.

Here’s the point I want to hammer home: when you see a tantalizing health headline — like that saunas prevent dementia — keep an eye out for indications that the effect only applies to specific subgroups of the study population. Even if the headline is very authoritative, revealing nuggets are often buried lower in the story.

I want to stress that you shouldn’t assume the sauna results can’t possibly be true. But when you see Bears-undefeated-in-alternate-jerseys type conclusions — and someone is claiming one thing causes the other — you should hold out for more evidence.
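
David's point about many analyses is easy to see in a toy simulation. Assuming each subgroup comparison is independent and there's truly no effect (so each p-value is roughly uniform on [0, 1]), here's how often at least one of 20 slices comes up "significant" by pure chance. The subgroup count and threshold are just illustrative:

```python
import random

random.seed(0)

ALPHA = 0.05      # the usual significance threshold
N_SUBGROUPS = 20  # e.g. sauna-frequency buckets crossed with other slices
N_STUDIES = 100_000

# Toy model: when there is truly no effect, each subgroup comparison's
# p-value is (approximately) uniform on [0, 1]. Count how often a study
# with no real effect still produces at least one "significant" subgroup.
false_positive_studies = 0
for _ in range(N_STUDIES):
    p_values = [random.random() for _ in range(N_SUBGROUPS)]
    if min(p_values) < ALPHA:
        false_positive_studies += 1

print(false_positive_studies / N_STUDIES)  # ~0.64 in simulation
print(1 - (1 - ALPHA) ** N_SUBGROUPS)      # ~0.64 analytically
```

With 20 slices and a 0.05 threshold, roughly two out of three no-effect studies still hand the press release a positive subgroup.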

This doesn’t just happen in health news. Investing/trading is another area where making a mountain out of a statistical molehill is rampant. Unless you are specifically studying a phenomenon that you’d expect to be discontinuous (binary, “phase change”, threshold cutoff), you should be wary of any signal that only appears in a specific range of an otherwise continuous variable.

I’ll take a simple example from Kris Longmore’s article explaining how month-end rebalance trades work. The post is titled How Wealth Managers Pay You To Trade. He writes (emphasis mine):

How I’d Test This

So here’s the hypothesis: if we can identify which asset outperformed during the first part of the month, the underperformer should outperform as we approach month-end, when rebalancing pressure is likely to be greatest.

The first step is simple. Pull daily data for SPY and TLT going back as far as you can get it (I used data from 2007). You can get this from Yahoo Finance – nothing fancy.

Then ask a straightforward question: If I know which asset outperformed during the first 15 trading days of the month, can I predict which will outperform during the last ~7 trading days?

Why 15 days? Because it’s roughly two-thirds of a trading month, and it gives us a reasonable window to identify the outperformer before month-end rebalancing kicks in.

Could you use 10 days? 20 days? Sure. But 15 seemed reasonable and shouldn’t really matter much. If it did, then that would be a big red flag. We want stuff that’s fairly robust to the actual implementation details.
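
The mechanics Kris describes are easy to sketch. Since I can't bundle the Yahoo Finance data here, this toy version uses synthetic random-walk returns in place of SPY and TLT, so the hit rate should land near 50%. The point is just the shape of the test, not a result:

```python
import random

random.seed(1)

# Synthetic stand-ins for SPY and TLT: i.i.d. random daily returns, so
# any "signal" here should score about 50%. This sketches the mechanics
# of the test, not a claim about real data.
N_MONTHS = 240
DAYS_PER_MONTH = 22
SPLIT = 15  # first 15 trading days vs the last ~7

def month_returns():
    return [random.gauss(0, 0.01) for _ in range(DAYS_PER_MONTH)]

hits = 0
for _ in range(N_MONTHS):
    a, b = month_returns(), month_returns()
    first_a, first_b = sum(a[:SPLIT]), sum(b[:SPLIT])
    last_a, last_b = sum(a[SPLIT:]), sum(b[SPLIT:])
    # Hypothesis: the first-half underperformer outperforms into month-end.
    predicted_winner_is_a = first_a < first_b
    actual_winner_is_a = last_a > last_b
    hits += predicted_winner_is_a == actual_winner_is_a

print(hits / N_MONTHS)  # ~0.5 on random data; a real effect shows up above that
```

Swap the synthetic returns for actual SPY/TLT daily data and a hit rate meaningfully above 50% is what the rebalancing hypothesis predicts.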

Back in my floor days my biz partner was incubating a futures trend strategy and he’d have me look at the backtest results. I’m no scientist, but I knew enough to realize that if the signal depended on a particular value of a parameter (i.e., the exact threshold that defined a “breakout”, the number of lookback days, etc.), then the result was overfit.
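
You can see why with a toy sweep. Here I score a naive "close above the N-day high" breakout rule on pure noise across a range of lookbacks; the rule, the parameters, and the data are all made up for illustration:

```python
import random

random.seed(2)

# Sweep a "breakout lookback" parameter over pure noise. The best
# in-sample value will still look like edge relative to its neighbors;
# a robust signal should score similarly for nearby parameter values.
N_DAYS = 2000
returns = [random.gauss(0, 0.01) for _ in range(N_DAYS)]

# Build a synthetic price path from the cumulative returns.
prices = []
level = 0.0
for r in returns:
    level += r
    prices.append(level)

def breakout_hit_rate(lookback):
    """Bet on an up move the day after price exceeds its prior
    `lookback`-day high; return the fraction of winning bets."""
    hits = trades = 0
    for t in range(lookback, N_DAYS - 1):
        if prices[t] > max(prices[t - lookback:t]):  # a "breakout"
            trades += 1
            hits += returns[t + 1] > 0
    return hits / trades if trades else 0.5

rates = {lb: breakout_hit_rate(lb) for lb in range(5, 55, 5)}
best = max(rates, key=rates.get)
print(best, round(rates[best], 3), round(min(rates.values()), 3))
```

On noise, whichever lookback happens to score best looks like a signal; if the "edge" evaporates one parameter value away, that's the red flag Kris is talking about.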

It’s the same idea as David’s sauna therapy study.

When you are in a competitive domain where many people are constantly mining, “too good to be true” discoveries should be met with extra skepticism.

A current example of this is the so-called Mississippi Miracle, in which both the left and the right appear to have an axe to grind regarding the childhood literacy improvement in Mississippi schools. It checks the box of “domain where many people are constantly mining,” so interventions that show huge returns deserve a lot of skepticism. You can count on Freddie deBoer to deliver that, but I think the pushback in the comments section of his post shows the complexity:

There Are No Miracles in Education

 

It would be interesting if there were a prediction market on how much literacy scores will improve in places that decide to adopt the Mississippi approach.

Which brings us to this week’s Money Angle, which should get a rise out of you…


Money Angle

Prediction markets are all the rage. They even played a main-character role in an episode of South Park just a few weeks ago, with Kalshi getting a specific shout-out:

 

On Friday I shared a rare interview about prediction markets with SIG founder Jeff Yass that came out this week:

Spooky? Jeff Yass on Prediction Markets

Oct 31

 

On the subject of prediction markets, long-time Moontower sub Andrew Courtney has launched a Substack, with many of his recent posts analyzing prediction markets. His thought processes look familiar because…well, Andrew retired quite young after an extremely successful run as a SIG trader himself.

You can get started with these posts:

🔗from the Kalshinomics Lab: conditional election probabilities

🔗are you good or just up?

🔗Relearning Math at 38 — Andrew was at the top of the Math Academy leaderboard for a bit, which, iirc, corresponds to learning math with the time commitment of a full-time job. My kids and I have looked at the top of the list thinking, “who the heck are these people?!” I was envisioning an autistic homeschooled kid, not a retired SIG trader.

Finally, this is also Andrew’s site:

Kalshinomics

If you are in the Philly area, he’s done meetups for prediction market enthusiasts.

Fun fact: I told Andrew I was going to boost his awesome letter this week and I asked him to make me a market on how many subs it would lead to.

He gave me a 90% confidence interval which I thought was a good market although too wide to trade on. I showed him a 175 bid if he wants to hedge his happiness. We’ll see what happens.

Good handicapping practice would be to try to list the info you’d like to have in making such a market!


Money Angle For Masochists

The “Masochists” in this week’s header is a pointer to “aspiring traders”.

I’m going to reprint Joel Rubano’s tweet in full. Joel is a friend, energy trader, author, and entrepreneur running a corporate trading-education company focused on commodity trading and hedging.

His book: Trader Construction Kit

The tweet pairs well with last week’s post, so you're interested in trading.

Joel:

I had the opportunity to guest lecture to a university class yesterday and got some questions about resources for students interested in working toward a trading seat.

The good news is that there are massively more and better resources available now than ever before. The bad news is that for every useful book, class, or podcast, there are 999 more that are worthless at best and massive value destructors at worst.

A few hints to help sort the wheat from the chaff:

Anything that tells you trading is easy is lying to you. Trading is a brutally hard game played against literally the smartest, most disciplined, most aggressive people in the world. The people who survive and thrive tend to welcome that specific challenge, even though most would not describe their time on the desk as “fun.”

Anything that claims a risk-free or can’t-lose strategy is garbage. Most professional traders are hoping to be right 50–55% of the time and relying on extremely strict discipline and risk management to be profitable with that hit rate. They also have to manage capital so they can survive stretches of worse-than-normal performance, which invariably happen.

Anything that tries to sell you trading as a lifestyle — the cars, the watches, the vacations — is almost certainly a scam. Real traders are not sitting there thinking about what the money buys in real time; that’s distracting and leads to bad decisions. There’s even a famous passage in Reminiscences of a Stock Operator about a group of traders who all try to make enough money to buy a fur coat, and they all fail because they were focused on the coat instead of on playing the game well.

Anything that says “anyone can do it” ignores how markets actually work. Most markets are zero-sum: people have to lose for other people to win. The softer version — that anyone can become a trader if they just grind — is also not really true. The job demands unusually high levels of discipline, curiosity, intellectual honesty, and competitiveness. Some people have those traits and can develop into professional traders; most people don’t, and that’s fine. The good news is that there are lots of trading-adjacent roles (risk, research, sales, tech, execution, ops, product) that let you work on markets, think about markets, and have a productive, interesting career without being the person taking risk.

Anyone promising something “just like what the pros use” or “better than the professionals” does not understand what professionals actually have. Elite hedge funds, banks, and merchant trading firms spend huge amounts of money on proprietary tools, data, infrastructure, and staffing to compete in an intellectual arms race. A single trader can easily consume hundreds of thousands to millions of dollars’ worth of technology and information resources per year, which is one reason their profit targets are so high. You are not getting that for $29.99 a month.

Anything that claims “the edge is AI” with no further detail is almost certainly not going to outperform anything. Yes, serious trading firms are racing to integrate AI, and yes, AI will be useful for specific tasks. But AI is very good at some things and still not very good at others. If someone is just putting a thin interface on top of a generic stock-picking model and calling that “AI-driven alpha,” it’s not only unlikely to be useful — even if it does work for a bit, it will almost certainly get out-competed by more specialized, internally developed tooling at a bank or hedge fund.

Ultimately, if you’re serious and you’re early, your main job is not to find a shortcut; it’s to build the traits that compound: discipline, honesty with yourself, curiosity, and competitiveness under stress.

 

Stay groovy

☮️

Moontower Weekly Recap

Posts:
