The original moontower blog is https://moontowermeta.com/. The “meta” is an important word. Important enough that Facebook stole my language and turned it into their ticker. F’n Zuck. Leave some for the little people bruh.
The reason I used “meta” (besides the fact that moontower.com wasn’t available) is that a lot of what I think about is fairly meta. Knowledge is the object but how we acquire knowledge is the meta. Trading is a very meta discipline because games with counterparties require a solid “theory of mind”.
In the spirit of meta, I really enjoyed a recent post by Robot James titled:
Valuation Timing with Excel (6 min read)
It’s meta because it’s really about arming yourself with data analysis to confront a narrative or chart. It’s worth stepping through the article together to appreciate just how many meta-nuggets it contains.
First, we start with an object-level observation that you’ve likely encountered. I’ll quote freely from the post but all bold is mine:
You may have seen a lot of charts like this recently:
[Chart: CAPE valuation vs forward 10-year S&P 500 returns]
The conclusions people tend to draw from this chart are:
- there is an obvious and strong relationship between valuation and expected future returns (cheap = good, expensive = bad)
- valuation estimates are currently historically high; therefore, expected returns of the S&P 500 are historically low.
We should always be wary of drawing strong conclusions from stuff people share on the internet or in sell-side research.
There are a few reasons to be skeptical of the strong conclusions people tend to make on seeing this:
- the chart might just be wrong (people screw up financial data analysis all the time)
- 10 years is a really long time horizon
- all of the 10-year total returns are actually positive
- why are there so many points? How many 10-year periods has the index even existed for?!
The good news is that, with a few simple skills, we don’t have to believe what randos on the internet say.
Even if we can’t write code, we can use Microsoft Excel and free online tools to investigate these things ourselves.
James shows how simple it is to grab the data that feeds such a chart so we can manipulate it ourselves. One of the first manipulations addresses the fact that such a chart is derived from an extremely small effective sample size because the data points overlap heavily with one another. A rolling 10-year return comprises 120 monthly returns, so each new “sample” overlaps with the prior one by 119/120.
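The overlap arithmetic is easy to see in code. Here is a minimal sketch using synthetic monthly returns (the article works with the real S&P 500 series in Excel; the numbers below are stand-ins) that counts how many rolling 10-year "observations" a long monthly history produces versus how many are truly independent:

```python
# Why overlapping windows inflate the apparent sample size.
# Synthetic monthly returns stand in for the real S&P 500 series.
import numpy as np

rng = np.random.default_rng(0)
n_months = 150 * 12                          # ~150 years of monthly data
monthly = rng.normal(0.006, 0.04, n_months)  # hypothetical monthly returns

window = 120                                 # 10 years = 120 months
growth = np.cumprod(1 + monthly)
# rolling 10-year total return ending at each month
rolling_10y = growth[window:] / growth[:-window] - 1

n_rolling = len(rolling_10y)        # one "observation" per month
n_independent = n_months // window  # truly non-overlapping decades
print(n_rolling, n_independent)
```

With 150 years of data you get 1,680 rolling points on the scatterplot but only 15 non-overlapping decades, which is why the cloud of dots overstates the evidence.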
James starts the exploration by looking at monthly returns (instead of 10-year returns) vs CAPE.
[Chart: monthly returns vs CAPE]
Let’s turn back to James for interpretation.
Unsurprisingly, that looks like a big blob. **(Anything with monthly returns on the y-axis will look like a big blob.)**
[Kris: that bold statement is a useful bit of knowledge that comes from looking at financial data frequently]
What does James do next?
We can look at longer non-overlapping periods. Let’s keep with the 10-year forward window and look at decades.
The problem is that we now only have 15 observations! Ten years is a long time, and we simply don’t have that many unique non-overlapping ten-year periods. And we certainly don’t have many **unique non-overlapping ten-year periods that are similar to the current market structure and competitive environment**.
[Kris: that bold bit is an evergreen problem in finance because investing is biology, not physics. Markets learn, so outputs become inputs. What does that mean? Markets are more likely to fall AFTER everyone starts believing they can only go up. The “only goes up” is the output or observation that then becomes an input into how much risk investors take. There is always some price that peers back at history and says “not this time”.]
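Collapsing the series into non-overlapping decades is just a reshape. A sketch of the mechanics on synthetic data (the specific numbers are mine, not from the article) makes the tiny sample painfully concrete:

```python
# Collapse a monthly return series into non-overlapping decade returns.
# Synthetic data; the point is just how few observations survive.
import numpy as np

rng = np.random.default_rng(1)
monthly = rng.normal(0.006, 0.04, 150 * 12)  # ~150 years of monthly returns

window = 120
n_decades = len(monthly) // window           # complete decades available
chunks = monthly[: n_decades * window].reshape(n_decades, window)
decade_returns = np.prod(1 + chunks, axis=1) - 1  # compound each decade

print(n_decades)   # only 15 points to work with
```

Fifteen observations is barely enough to eyeball, let alone regress.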
So James slices the data another way.
Plot the valuation metric itself…
[Chart: CAPE as a time series]
Whenever we see an effect, we should ask what, other than our pet theory, might be causing that effect to appear. In particular, a lot has changed over that time period. The market today looks nothing like it did in 1900.
And, indeed, if we plot a time series of our valuation metric, it looks kinda drifty.
It’s not really reasonable, I don’t think, to assume that CAPE 20 would “mean” the same thing in 2024 as it did in 1900.
He tries another manipulation:
One cheap and dirty way we can make that metric a bit less drifty and more comparable over time is to standardize it by its values over a recent rolling window.
For example, here I’ve standardized it as a 10yr rolling score. (Not necessarily cos I think that’s the right thing to do – I just want to make a point).
[Chart: CAPE standardized as a 10-year rolling z-score]
Now it looks a lot more stationary. It stays in the same range. It doesn’t drift off. **This is unsurprising cos we forced it to look like that.**
[Kris: the bold is another little bit of fingertip knowledge that you acquire from frequent contact with data.]
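The rolling standardization itself is a one-liner once the data is in a table. A sketch on a made-up drifting series (the drift and noise parameters are my assumptions, chosen only to mimic a CAPE-like wander):

```python
# Standardize a drifty valuation series by its own trailing 10-year window.
# The series here is synthetic: a slow upward drift plus noise.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
months = 1500
drift = 15 + 10 * np.arange(months) / months   # slow upward drift
cape = pd.Series(drift + rng.normal(0, 2, months))

window = 120  # 10 years of monthly observations
z = (cape - cape.rolling(window).mean()) / cape.rolling(window).std()

# the raw series wanders upward; the z-score is forced into a stable range
print(z.dropna().abs().max())
```

By construction the z-score can’t drift: each point is measured against its own trailing decade, which is exactly why the stationary look proves nothing on its own.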
Yet another manipulation:
[Chart: next year’s returns vs the standardized CAPE z-score]
Now, we can plot our next year’s returns vs this standardized z-score.
If we still see an effect when we do this, it would make us more confident in the valuation effect. If we don’t, it won’t destroy our confidence because we’ve made some pretty arbitrary and dubious scaling choices here.
Indeed, at least with this scaling choice, we don’t see the effect we are looking for.
That’s ok. That’s the nature of work like this. We’re just exploring, trying to break things. We try to look at things from as many different angles as we can and see how much of the limited evidence lines up.
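Mechanically, this last check pairs each month’s z-score with the forward 12-month return and looks for a relationship. A sketch with synthetic, deliberately unrelated series (so any correlation that appears is pure noise, echoing the null result James gets with this particular scaling):

```python
# Pair a standardized valuation z-score with forward 12-month returns.
# Both series are synthetic and independent here, so no real effect exists.
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
n = 1500
monthly = pd.Series(rng.normal(0.006, 0.04, n))  # hypothetical returns
cape = pd.Series(15 + rng.normal(0, 2, n))       # hypothetical valuation

window = 120
z = (cape - cape.rolling(window).mean()) / cape.rolling(window).std()

growth = (1 + monthly).cumprod()
fwd_12m = growth.shift(-12) / growth - 1         # next 12 months' return

pairs = pd.DataFrame({"z": z, "fwd": fwd_12m}).dropna()
corr = pairs["z"].corr(pairs["fwd"])
print(len(pairs), round(corr, 3))
```

The scatter (or a simple correlation) is the whole test; the hard part is remembering how little any single slicing choice should move your prior.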
[Kris: I just want to pause for beauty, as my wife likes to say. James is spoon-feeding serum against chart crimes and charlatans who read “How To Lie With Statistics” as a manual.]
James’ Conclusion
I think the evidence (and economic sense) supports the idea that high valuations are correlated with lower expected returns. But it’s nowhere near as clear-cut as the initial scatterplot suggests. We simply don’t have enough data, and the market is constantly changing underneath us, making it hard for us to draw strong inferences.
My conclusion
This points to an uncomfortable reality. If a data analysis were conclusive, then everyone would do the thing prescribed until the data exhaust from the behavior was no longer conclusive. This is deeply reminiscent of what I call the Paradox of Provable Alpha.
Notice what James did.
He recognized that the data proves nothing; it’s simply too underpowered to accept or reject any claims. His prior barely gets updated: “I think the evidence (and economic sense) supports the idea that high valuations are correlated with lower expected returns.”
He goes to bed at night with judgment as his best guess, much as a farmer’s almanac will do better than some meteorological model at predicting the weather a month out.
Thanks again to Robot James for the heavy lifting on the original article. I was just narrating alongside it to highlight what stood out to me and how it related to other topics we discuss here.
