Newsletter Thoughts

Aftertones: Friction and Confusion

Aftertones are fragments of things I’ve consumed that continue to linger in my brain. The hyperlinks lead to the original sources.

Theme

We, the royal “we”, are confused right now. Part of this confusion is the felt dissonance of watching techno-optimists change the world and advertise it without disclaiming that progress has side effects. There’s no “Warning: there are tradeoffs” label at the bottom of their products, which, we can admit, resemble magic. When progress happens as fast as it is now, without time to adjust, the cures start to look like a disease.

 

Readings

1. Silicon Valley’s quest to remove friction from our lives (Rohit Krishnan)

“The Conservation of Friction”

In any complex system, when we remove a bottleneck the constraint moves somewhere else. This is true in operations, like when you want to set up a factory. It’s also true in software engineering, when you want to optimise a codebase. It’s part of what makes system-wide optimisation really difficult. It’s Amdahl’s Law: “the overall performance improvement gained by optimizing a single part of a system is limited by the fraction of time that the improved part is actually used”.
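Amdahl’s Law has a simple closed form, and a quick sketch makes the “conservation” point concrete: if only a fraction of the work is improved, the untouched remainder caps the total gain no matter how extreme the local speedup. The function name and numbers below are illustrative, not from the original.

```python
def amdahl_speedup(improved_fraction: float, speedup_factor: float) -> float:
    """Overall speedup when `improved_fraction` of the workload is sped up
    by `speedup_factor` and the remaining fraction is left unchanged."""
    return 1.0 / ((1.0 - improved_fraction) + improved_fraction / speedup_factor)

# Even a near-infinite speedup on 90% of the work caps the overall gain at 10x,
# because the untouched 10% becomes the new bottleneck:
print(round(amdahl_speedup(0.90, 1e12), 2))  # -> 10.0
```

This is the removing-friction story in miniature: optimise one stage as hard as you like, and the system’s limit simply relocates to the stage you didn’t touch.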

To optimise, you have to automate. And the increase in supply that reducing friction brings is the defining feature of automation; it always creates new externalities.

Regarding AI-assisted coding, Karpathy had a tweet about the problem with LLMs providing plenty of support for writing code but not enough for reviewing it. In other words, they take away too much friction from one part of the job and add it to another. You should read the full tweet, but the key part is here:

You could say that in coding LLMs have collapsed (1) (generation) to ~instant, but have done very little to address (2) (discrimination). A person still has to stare at the results and discriminate if they are good. This is my major criticism of LLM coding in that they casually spit out *way* too much code per query at arbitrary complexity, pretending there is no stage 2. Getting that much code is bad and scary. Instead, the LLM has to actively work with you to break down problems into little incremental steps, each more easily verifiable. It has to anticipate the computational work of (2) and reduce it as much as possible. It has to really care.

As it gets easier to create content, it becomes harder to discover it and even harder to discern it.

Kyla Scanlon wrote a wonderful essay on this topic.

[Kris: I’ve linked to this Kyla essay before. It’s outstanding. Her writing the last few months is some of the best I’ve seen for putting words on the hard-to-express sense of change that is swirling in a word cloud of ‘financialization’, ‘attention’, ‘theater’.]

She discusses how friction is effectively relocated from the digital world into the physical one as we move into a simulated economy where friction, like gravity, doesn’t apply. This is akin to a ‘Conservation of Friction’: our obsession with reducing friction removes it in one place but doesn’t eliminate it; it resurfaces somewhere else.

 

Rohit explains how removing friction in one place causes it to bubble up elsewhere. Which is a perfect preamble to VGR’s observation in the following piece, which refutes the recent headline-grabbing study claiming that LLMs are making us dumber.

Taking my own liberty to summarize VGR: the researchers’ alarmism is misplaced because they fail to recognize that the brain rhythm of most modern work is already sentry mode, not creator mode.

 

2. Prompting is Managing (Venkatesh Rao)

A new pre-print made the rounds last week waving red flags about your brain on ChatGPT. Undergrads, EEG caps on, wrote three 20-minute essays. Those who leaned on GPT-4o showed weaker alpha-beta coupling, produced eerily similar prose, and later failed to quote their own sentences. Headline: “LLMs dull your mind.”

I buy the data; I doubt the story.

The experiment clocks students as if writing were artisanal wood-carving—every stroke hand-tooled, originality king, neural wattage loud. Yet half the modern knowledge economy runs on a different loop entirely:

delegate → monitor → integrate → ship

Professors do it with grad students, PMs with dev teams, editors with freelancers.
Neuroscience calls that stance supervisory control. When you switch from doer to overseer, brain rhythms flatten, attention comes in bursts, and sameness is often a feature, not decay.

The EEG paper diagnoses “cognitive debt,” but what it really spies is role confusion.
We strapped apprentices into a manager’s cockpit, watched their brains idle between spurts of oversight, and mistook the silence for sloth.

 
