slop

Merriam-Webster’s Word of the Year for 2025 is “slop”.

First of all, what a difference a year makes. In late 2024, my friend Fonz texted me one morning thinking that the prediction market for who would be Time’s Person of the Year was really mispriced. Sam Altman was the favorite. I agreed with him immediately. “Prediction market audience is nerds. Anyone living in the real world would expect TSwift.” I mean, 2024 was the year regular people paid a meaningful portion of their annual post-tax income to see the Eras tour live.

Well, a year later, prediction markets are normie enough that South Park lampooned them and “slop” is being discussed by…the dictionary. An institution with the heat signature of ancient dirt.

AI promises productivity. For a given amount of time, you get either

  • more output holding quality constant, or
  • higher quality holding output constant.

The concern here is that AI slop is the efficient spam leveraging of plagiarism. I used Gemini to get the “L” in that acronym. There’s probably a word for the semantic/syntactic recursion, but I’m out of tokens, so I’ll just sit here and spare trees but not know stuff.

I kid. I’m not out of tokens. I’m paying $100 a month to spew carbon instead of looking at sponsored results.

Ok, this time I am kidding. I still use Google. To look up store hours.

Anyway, AI slop is everywhere. My kid came into the room excited because he saw some genetic engineering video that had him thinking dragons could be real. Bruh, we expect this from grandma, but you’re growing up in a world where you know better than to trust your eyes.

[Pause for a community sigh. I’m good now. You good? Onwards.]

Before we get to my opinions, I want to share an extended thought from Brent Donnelly’s It’s not just X. It’s Y.

[Brent’s fantastic daily market letter is paywalled but I asked him to unlock this one because I really appreciated this section which will have broad appeal, so thank you Brent!]

Brent:

Caveat Lector

The biggest problem I have encountered with the recent trend towards decentralized content (Medium, Twitter, Substack, etc.) is that the writing has often not been edited for clarity, legibility, or accuracy. While it’s cool to begrudge the gatekeepers, there is value in having a reliable editor who will filter for bloat and garbage and egregious factual errors so that you don’t have to do it yourself. Random non-experts have been offered various platforms where they can disseminate objectively wrong information in essay format. If those essays tap into the right vibes, they will go viral. They can be dense and full of obvious factual errors. That will not matter.

This problem has become exponentially worse now with AI. I boosted a tweet late last week, and upon further review realized that I cannot tell whether or not it’s AI.

[Kris: I also boosted that Tweet despite knowing that parts of it were written by AI. I know Jared, and that post is in keeping with his beliefs; the fact that at least the latter half of it is written by AI, I’ll address below. By the way, there are a few sentences which scream AI even if you are just skimming. I didn’t excerpt the full section by Brent, but he includes some solid tips for recognizing AI writing. Anyway, back to Brent…]

So I deleted my boost. I put the tweet into an AI checker and Gemini; the checker said the tweet was 100% AI, whereas Gemini wasn’t totally sure.

Compounding the problem is that all people, including good writers, write in a voice that is an amalgamation of:

  1. Their own conversational voice
  2. Society’s accepted parameters around the style or type of writing they’re trying to produce, and
  3. The voice of everything they have ever read. The more people read AI text, the more their honestly-generated and original writing is still going to sound like AI. Just like Kurt Cobain kinda sorta accidentally copied the chords from More Than a Feeling because he was listening to a lot of Boston albums in the Nevermind days… Writers will accidentally sound more and more like AI unless we’re careful.

As a consumer of financial journalism and of writing in general, I am now at the point where I assume everything is AI and then work backwards to figure out if it’s not, based on the author and the publisher. I know if I’m reading Ben Hunt, Jared Dillian, or Noah Smith (for example), it’s not AI. If I’m reading yet another piece of financial nihilism on Substack or Twitter, it probably is.

[Kris here again. I especially liked this section, which recommends ignoring financial nihilism pieces, a genre now associated with virality. Oh, the adversarial attention game is more boring when you recognize it. I must note that applying Brent’s rule to kyla scanlon would be a false positive: she’s been on that beat for a while with a sharp, grounded perspective. You might even say that applying Brent’s rule to her would make her a victim of her own success. Back to Brent…]

This slop problem is good news for legit publications like Bloomberg because at some point, many people like me will find the effort to filter out the AI-generated garbage too onerous and migrate back to properly gatekept content. Much like if the FDIC got rid of deposit insurance, everyone would put their money at JPM. It’s too much work for everyone to have to vet everything all the time. Gatekeepers have bias and risk, but they also have utility. They have fact checkers and professional writers. Decentralization is overrated.

This move back towards gatekeepers is evident in the rise of The FP and the surprising success of the NYT in recent years. People don’t want random, unedited rants full of factual errors. But that’s what we’re getting from Substack and Twitter. And it’s going to get worse. I am noticing AI-generated slop all over the place, even in company press releases. Check out the unending stream of gibberish press releases coming out of SMX, for example. As AI would say: This is not just an inconvenience—it’s the critical new reality.

Here’s my approach to content consumption in 2026:

  1. Assume long form articles on Substack and Twitter are AI-generated unless there is reason to believe otherwise. When in doubt, filter it out. I don’t have time to extensively vet every single author and article. Best to over-filter quickly, not ingest a ton of stochastically parroted slop. Substack and Twitter are not inherently bad, but I need to be vigilant.
  2. Prioritize content from legitimate gatekeepers like Bloomberg and Reuters and anything that’s worth paying for. If it’s free, it’s suspect. If I am willing to pay $10 / month, it’s probably not.
  3. Ignore financial nihilism. Cynicism and nihilism were cool in high school, and they sound smart on Substack and Twitter. But they lead you nowhere. This doesn’t mean you should never be bearish. It just means that no amount of wishing we were still in the 1990s or 1950s will bring us back there. Successful traders are open-minded and forward-looking.
  4. Delete Twitter off my phone. I will use X at work, and that’s it. It’s an incredible timesuck and mental health wrecker mostly promulgating hate, falsehoods, nihilism, and negativity. Minimum viable dose only.
  5. Mute aggressively on Twitter. Mute users, mute words, mute conversations. If something bugs me on Twitter, I mute it; I don’t engage with it. Let them tell you the dollar has lost 97% of its value. Don’t waste your time correcting people who dish out obviously wrong information or who are writing fan fiction about imminent bank collapse, silver prices in Tokyo, or repo. Just chuckle and mute.
  6. Filter out all permabears, angry people, permabulls, nihilists, and captains of clickbait. Know the bias of every author you read and filter accordingly. What is useful and what gets boosted are two different things.

Finally, I will try to be the best gatekeeper I can possibly be. All my writing is edited and fact checked, but I have still made the mistake of boosting AI-generated content a few times and I still make factual errors. I will make a strong effort not to do so in future, or to advise readers as soon as I’m made aware of a mistake or AI boosting.

Thanks again to Brent for letting me share that.

Manager mode

I mentioned that I boosted Jared’s tweet despite a significant portion of it being written by AI. That’s not confirmed, but it doesn’t matter, because what I’m about to say is only interesting insofar as I’m giving cover to AI writing.

I personally don’t care if what I’m reading is written by an AI if the message was the author’s intent (so long as it wasn’t plagiarized*) just as I don’t really care if the president didn’t write his own speech.

*Brent is concerned about plagiarism while recognizing the Cobain problem. Well, the Cobain problem is insidious and everywhere. I keep a file of “turns of phrase” I like. Am I to believe that my mind hasn’t recycled some of that indirectly? I always give credit in this letter when I can remember that there was credit to be given (and my notes are very good at keeping track because credit is a priority). I don’t think I have ever plagiarized. But I wouldn’t sell the tail option on that, because we all know thoughts can be recombined inputs without us being aware of it. Just look at the opening of Andrew Courtney’s latest piece, where he basically writes a post and scraps it because I once covered the same topic. We chatted about this. Sometimes it’s hard to know where you begin and your inputs end. But like porn, you probably know plagiarism when you see it. FYI, the piece Andrew actually pivoted to is excellent, but I’m biased because I feel the same as he does.

Instead, I think of AI output that you share as YOUR agent. If you let AI write for you and present it as your own, you are responsible for those thoughts. You don’t get to enjoy the benefits of production without accountability. You can’t disclaim what you say with “I didn’t write that”. If you platform a bot, it’s on you.

As far as I’m concerned, Jared approved a PR and is responsible for what’s in prod. So long as he maintains accountability, this is not only a valid workflow, it’s the new default, whether you realize it or not.

A few months ago I shared Venkatesh Rao’s “sloptraptions” post, Prompting is Managing.

It argues that concerns about using AI as a crutch misunderstand the nature of using it effectively. Interacting with LLMs is not a form of individual cognitive “doing”. It’s a shift into supervisory control, where the user’s brain state mirrors that of a manager overseeing a junior. Rao argues this is actually the standard cognitive signature of management, necessary for high-level coordination.

I don’t feel brain-dead when I’m orchestrating multiple agents across different projects. Instead, it feels like writing outlines for articles or pseudocode for projects. It’s a different form of executive function. It feels managerial. It’s not my favorite kind of work, but it’s productive and necessary.

Hmm…that sounds an awful lot like the management aspect of any job. It’s tedious but sits at the heart of leverage.

I believe it was Agustin Lebron who said most jobs of the future are going to be “shit umbrellas”. Bots will do the actual work, but they can’t be held accountable. Humans will be paid to absorb decision risk rather than to actually do things. That sounds right.

Before AI, people spewed plenty of slop. They were held accountable, whether by the law, the market, or the judgement of their peers. The accountability will stay even if the transmission syntax and medium change.

To address AI making us lazy, I already posted slow is smooth and smooth is fast. It’s a concern, but you’re hardly defenseless. (That post resonated; it might have been the most popular non-trading one I wrote last year.)

Scott H. Young wrote a post, Will AI make us stupid?, dealing with similar themes. He explains how people will bifurcate according to how they employ these tools. This is how I see it:

There are things that we don’t NEED to do because computers are better, but the act of doing them anyway changes you. You will need to actively decide what to continue doing and what to outsource. You will not always make the right decision.

Here’s Scott’s view:

Learning requires an investment of effort, and AI will make us stupider if that effort is avoided.

At the same time, not all effort in learning is helpful. Much of learning involves cognition that does not directly contribute to understanding and knowledge. Think of learning like powering a motor—all learning requires a source of energy to make progress, but not all energy is transformed into forward motion. Depending on the vehicle, much of it might be wasted as heat and noise.

Therefore, while I think AI is probably going to result in an incredible “dumbing down” of our self-education in the average case, it is probably also going to enable more careful students and teachers to facilitate learning much better than before. Because while AI can simply solve a problem for you, it can also generate worked examples, practice problems and feedback, and guide problem-solving dialog.

Getting this balance right is hard, and I don’t think it’s simply a matter of laziness. Even intelligent students are often wrong about what effort actually matters when it comes to learning, and many teachers are no better. This has always been the case, but AI has raised the stakes as the ability both to enhance learning and to bypass it entirely has expanded.

Thus my personal prediction is that in domains that are already largely under the powers of modern AI, such as languages, programming or chess, we’re going to see a divergence in human abilities. The average person will rely on the AI more, robbing them of the ability to learn the underlying skills. More sophisticated students will use AI to learn better, removing inefficiencies that were unavoidable in pre-AI learning environments.

Some evidence of this is already emerging…
