work is going to feel very different by next Christmas

I started to feel it over the break, but the feeling is inescapable after the past week.

This has nothing to do with current events. It’s me having the same reaction to Claude Code that early adopters using the terminal have already felt:

Work is going to feel very different by next Christmas. Yinh and I were talking about how long 2024 felt. There were a lot of life events, but just in terms of workflow, it felt like a year ago I was mostly using LLMs for transcription, editing, and giving it photos of broken stuff for help.

Today, I can write a description of a bug in Linear or Jira (who am I kidding — I upload a screenshot with a blurb and have AI write a detailed bug spec complete with testing protocol) and assign it to…”Claude bot”. A dev approves the change. Push to prod. Hundreds of hours saved over the course of a year. It’s accelerating by the week.

I’ll share more about what I’m doing personally in the letter this week, but here are a few must-reads if you are curious about getting more out of AI.

1) Claude, Code, and What Comes Next (6 min read) by Ethan Mollick

This is a strong overview of why Claude Code feels like such a step up in capability. I’m using Opus 4.5 regularly, and for one project in particular its ability to compress the chat effectively stretches the context window. This post is a nice primer to read while the idea of agents working 24/7 for you floats in the back of your mind.

2) How I code with agents, without being ‘technical’ by Ben Tossell

Khe texted me this post and it’s the next thing to read after Mollick’s. It gives you a glimpse into the near future (which is already here for Ben, despite his humility in this post) with very concrete ideas. PSA: Khe’s letter is mandatory reading if you are a regular person trying to get the most out of the tools around us. I feel like I’m literally stepping in his tracks, just three months behind, now that I’m finally using Claude Code (I just needed it to be in a desktop app rather than the command line, because the moment I see a terminal I think DOS and my brain powers off).

In this article, Ben says:

Not to be like everyone else on Twitter when they see Andrej Karpathy tweeting something, but this really rang true to me: **there’s a new programmable layer of abstraction to master.**

First of all, I have the Claude extension in my Chrome browser. It lets you talk to Claude about anything you are looking at. Like the design of the website you’re on? Ask it to extract a style sheet. Don’t want to read the whole email or article? No need to copy/paste; just ask the sidekick to summarize it. There’s even a Google Sheets add-on if you want it to spreadsheet for you.
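The extension does all of this inside the browser, but if you’re curious what the bare-bones version of “summarize this page” looks like, here is a minimal sketch using the Anthropic Python SDK directly. To be clear, this is not how the extension works under the hood; the URL, model name, and prompt below are placeholders I made up.

```python
# Minimal sketch of the "summarize this page" idea via the Anthropic Python SDK.
# Not the browser extension's implementation -- just an illustration.
# Assumes ANTHROPIC_API_KEY is set in the environment; the model name is a placeholder.
import urllib.request

import anthropic

url = "https://example.com/some-article"  # placeholder URL
page = urllib.request.urlopen(url).read().decode("utf-8", errors="ignore")

client = anthropic.Anthropic()
reply = client.messages.create(
    model="claude-opus-4-5",  # placeholder model name
    max_tokens=500,
    messages=[{
        "role": "user",
        # Crude truncation so a huge page doesn't blow past the context window
        "content": "Summarize this page in a few bullet points:\n\n" + page[:50000],
    }],
)
print(reply.content[0].text)
```

The point is that the magic is mostly one API call plus whatever text you hand it; what the extension saves you is the grabbing-the-text-off-the-page part.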

In this case, I just asked Claude in my browser for the Karpathy thread that Ben is referencing.

Boom, it just goes out to the web and finds it.

Here’s the Karpathy quote:

I’ve never felt this much behind as a programmer. The profession is being dramatically refactored as the bits contributed by the programmer are increasingly sparse and between. I have a sense that I could be 10X more powerful if I just properly string together what has become available over the last ~year and a failure to claim the boost feels decidedly like skill issue. **There’s a new programmable layer of abstraction to master** (in addition to the usual layers below) involving agents, subagents, their prompts, contexts, memory, modes, permissions, tools, plugins, skills, hooks, MCP, LSP, slash commands, workflows, IDE integrations, and a need to build an all-encompassing mental model for strengths and pitfalls of fundamentally stochastic, fallible, unintelligible and changing entities suddenly intermingled with what used to be good old fashioned engineering. Clearly some powerful alien tool was handed around except it comes with no manual and everyone has to figure out how to hold it and operate it, while the resulting magnitude 9 earthquake is rocking the profession. Roll up your sleeves to not fall behind.

If Karpathy feels behind, I guess the rest of us shouldn’t feel so bad. But the part I bolded feels big. Like you need to stop your first reflex about how you’d approach a problem and embody someone to whom this is all native (while recognizing that nobody is perfectly native to it; there’s a continuum of how far along people are in how easily they consider problems in light of the new capabilities).

3) Claude Codes by Zvi Mowshowitz

Things are moving fast. This came out 48 hours ago. Highly practical and honest assessment of the current state. Also happens to echo my opinion — this is going to be a vertical year in terms of workflow.

4) Greg Isenberg on “what young builders do” (X thread)

This thread is just a mind-eff because it shows the frontier of what the kids are building, in the context of entrepreneurship.

5) Everyone Is Wrong About the Skilled Labor Shortage (5 min read) by Jon Matzner

I tend not to think about things along the lines of “what are the jobs of the future”. It feels like when you do that, you are choosing a self-alienating frame that favors the predicate over the subject.

Anyway, a local friend is a lecturer at Cal in AI. His kids are a similar age to mine. He gets asked about future jobs all the time, and I can’t pretend I never think about it even if I resist the impulse.

His answer is “fix people, fix animals, or fix robots”. He’s also partial to the “trades”. Basically, work that AI will eat last.

It makes sense. I can’t say I’m sold. My own view is that the acceleration is so fast that any prediction along those lines is swamped by the error bars, but insofar as you must choose, it’s as good a guess as any. Still, that’s not a great foundation for deciding, so I just treat the topic as entertainment.

My view here even disappoints me because it sounds helpless with respect to planning. But then I read an article like Matzner’s and it’s an example of how a lot of consensus thinking (like going into the trades) is perfectly risky. The frictions to knowing how to do something will melt. The information asymmetry between a tradesperson and the client has been narrowing for a while (YouTube), but the “last mile” of actually doing the work is going to get shorter too. You’re going to know how to fix anything at home; it will just be a question of whether your time is worth it. If there are no jobs, we’ll have plenty of time to fix things. I think I’m kidding. But what if I’m accidentally right?

6) Dos Capital by Zvi Mowshowitz

And now we get to the macro. Provocation rather than practicality. For the lolz.

Zvi’s post is a reaction to Trammell and Dwarkesh’s post about the unprecedented wealth inequality we are about to see. What Zvi calls absurd is effectively Trammell & Dwarkesh not taking their premise seriously enough.

Zvi (emphasis mine):

They affirm, as do I, that Piketty was centrally wrong about capital accumulation in the past, for many well understood reasons, many of which they lay out.

They then posit that Piketty could have been unintentionally describing our AI future.

As in, IF, as they say they expect is likely:

[redacted list of assumptions in order]

Does the above conclusion follow from the above premises if you include the implicit assumptions?

Then yes. Very, very obviously yes. This is basic math.

Sounds Like This Is Not Our Main Problem In This Scenario?

In this scenario, sufficiently capable AIs and robots are multiplying without limit and are perfect substitutes for human labor.

Perhaps ‘what about the distribution of wealth among humans’ is the wrong question?

I notice I have much more important questions about such worlds where the share of profits that goes to some combination AI, robots and capital rises to all of it.

Why should the implicit assumptions hold? Why should we presume humans retain primary or all ownership of capital over time? Why should we assume humans are able to retain control over this future and make meaningful decisions? Why should we assume the humans remain able to even physically survive let alone thrive?

Note especially the assumption that AIs don’t end up with substantial private property. The best returns on capital in such worlds would obviously go to ‘the AIs that are, directly or indirectly, instructed to do that.’

Even if we assumed all of that, why should we assume that private property rights would be indefinitely respected at limitless scale, on the level of owning galaxies? Why should we even expect property rights to be long term respected under normal conditions, here on Earth? Especially in a post calling for aggressive taxation on wealth, which is kind of the central ‘nice’ case of not respecting private property.

The world described here has AIs that are no longer normal technology (while it tries to treat them as normal in other places anyway), it is not remotely at equilibrium, there is no reason to expect its property rights to endorse or to stay meaningful, it would be dominated by its AIs, and it would not long endure.

If humans really are no longer useful, that breaks most of the assumptions and models of traditional econ along with everyone else’s models, and people typically keep assuming actually humans will still be useful for something sufficiently for comparative advantage to rescue us, and can’t actually wrap their heads around it not being true and humans being true zero marginal product workers given costs.

That’s the thing. If we’re talking about a Dyson sphere world, why are we pretending any of these questions are remotely important or ultimately matter? At some point you have to stop playing with toys.

I don’t even know that ‘wealth’ and ‘consumption’ would be meaningful concepts that look similar to how they look now, among other even bigger questions. I don’t expect ‘the basics’ to hold and I think we have good reasons to expect many of them not to.

Ultimately all of this, as Tomas Bjartur puts it, imagines an absurd world, assuming away all of the dynamics that matter most. Which still leaves something fun and potentially insightful to argue about, I’m happy to do that, but don’t lose sight of it not being a plausible future world, and taking as a given that all our ‘real’ problems mysteriously turn out fine despite us having no way to even plausibly describe what that would look like, let alone any idea how to chart a path towards making it happen.


The AI discourse is a ready reminder that there are no rules. There’s only power. We are bears on a unicycle. To some of our tech overlords humanity is but an experiment. A branch of a codebase we can’t see the extent of. And most likely NOT ‘main’.

To be overwhelmingly confident that this path is humanist is either hubris or motivated by the next round of funding. It just doesn’t seem clear to me that human flourishing is a layup end state of this trajectory.

Google’s mission is “to organize the world’s information and make it universally accessible and useful.” Google’s AI division’s mantra?

“Solve intelligence, and then use that to solve everything else.”

There’s mission creep and then there’s MISSION CREEP. If a business’s goal is to solve a problem and this is a quest to solve all the problems, I think it’s only fair to ask, while we still can…

“What is the last problem?”

[Nate Bargatze voice: Nobody knows]

Not a bad place to insert Asimov’s famous short story, The Last Question.
