Aftertones are fragments of things I’ve consumed that continue to linger in my brain. The hyperlinks lead to the original sources.
We, the royal “we”, are confused right now. Part of this confusion is the felt dissonance of watching technoptimists change the world and advertise it without disclaiming that progress has side effects. There’s no “Warning: there are tradeoffs” at the bottom of their products, which, we can admit, resemble magic. When progress happens as fast as it is now, without time to adjust, the cures start to look like a disease.
1. Silicon Valley’s quest to remove friction from our lives (Rohit Krishnan)
“The Conservation of Friction”
In any complex system, when we remove a bottleneck the constraint moves somewhere else. This is true in operations, like when you want to set up a factory. It’s also true in software engineering, when you want to optimise a codebase. It’s part of what makes system-wide optimisation really difficult. It’s Amdahl’s Law: “the overall performance improvement gained by optimizing a single part of a system is limited by the fraction of time that the improved part is actually used”.
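Amdahl’s Law can be made concrete with a few lines of arithmetic. A quick sketch (the function name and variables are my own, not from any of the linked pieces):

```python
def amdahl_speedup(p, s):
    """Overall speedup when a fraction p of total runtime
    is accelerated by a factor of s (Amdahl's Law)."""
    return 1.0 / ((1.0 - p) + p / s)

# If 40% of the work is sped up, even an effectively infinite
# speedup of that part caps the overall gain at 1/0.6 ~ 1.67x.
print(round(amdahl_speedup(0.4, 1e9), 2))  # ≈ 1.67
```

The point mirrors the friction argument: no matter how hard you optimise one piece, the untouched remainder becomes the new bottleneck.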
To optimise, you have to automate. And the increase in supply that reducing friction brings is the defining feature of automation; it always creates new externalities.
Re AI-assisted coding, Karpathy had a tweet about the problem of LLMs not providing enough support for reviewing code, only for writing it. In other words, they take away too much friction from one part of the job and add it to another. You should read the full tweet, but the key part is here:
You could say that in coding LLMs have collapsed (1) (generation) to ~instant, but have done very little to address (2) (discrimination). A person still has to stare at the results and discriminate if they are good. This is my major criticism of LLM coding in that they casually spit out *way* too much code per query at arbitrary complexity, pretending there is no stage 2. Getting that much code is bad and scary. Instead, the LLM has to actively work with you to break down problems into little incremental steps, each more easily verifiable. It has to anticipate the computational work of (2) and reduce it as much as possible. It has to really care.
As it’s easier to create content it becomes harder to discover it and even harder to discern it.
Kyla Scanlon wrote a wonderful essay on this topic.
[Kris: I’ve linked to this Kyla essay before. It’s outstanding. Her writing the last few months is some of the best I’ve seen for putting words on the hard-to-express sense of change that is swirling in a word cloud of ‘financialization’, ‘attention’, ‘theater’.]
She discusses how friction is effectively relocated from the digital into the physical world as we move into a simulated economy where friction, like gravity, doesn’t apply. This is akin to a ‘Conservation of Friction’: our obsession with reducing friction removes it in one place but doesn’t eliminate it; it resurfaces elsewhere.
Rohit explains how removing friction in one place causes it to bubble up elsewhere. Which is a perfect preamble to VGR’s observation in the following piece, which refutes the recent headline-grabbing study claiming LLMs are making us dumber.
Taking my own liberty to summarize VGR: the researchers’ misplaced alarmism stems from misunderstanding that most modern work’s brain rhythm is already sentry mode, not creator mode.
2. Prompting is Managing (Venkatesh Rao)
A new pre-print made the rounds last week waving red flags about your brain on ChatGPT. Undergrads, EEG caps on, wrote three 20-minute essays. Those who leaned on GPT-4o showed weaker alpha-beta coupling, produced eerily similar prose, and later failed to quote their own sentences. Headline: “LLMs dull your mind.”
I buy the data; I doubt the story.
The experiment clocks students as if writing were artisanal wood-carving—every stroke hand-tooled, originality king, neural wattage loud. Yet half the modern knowledge economy runs on a different loop entirely:
delegate → monitor → integrate → ship
Professors do it with grad students, PMs with dev teams, editors with freelancers.
Neuroscience calls that stance supervisory control. When you switch from doer to overseer, brain rhythms flatten, attention comes in bursts, and sameness is often a feature, not decay.
The EEG paper diagnoses “cognitive debt,” but what it really spies is role confusion.
We strapped apprentices into a manager’s cockpit, watched their brains idle between spurts of oversight, and mistook the silence for sloth.