I’m driving to Costco Friday morning with mom. She’s telling me that she’s waiting on a friend who is going to help her sell some of her dresses on FB Marketplace. I offered to help but wrapped it in “let me show you how I do it”.
Of course, I was just going to have ChatGPT write the copy, format it for whatever sites she wants to put it on, etc. I’ve shown her LLMs before, but it’s not part of her routine, so I have to remind her that there’s this useful thing out there.
[Aside: This week — I gave it a video of my son’s bike not switching gears to have it help us troubleshoot. And it worked. I feel like I give it 50 screenshots a day but I’m behind on the video thing.]
Anyway, she said something that was both sad — and totally predictable:
Defeated, she said: “If I can have this GPT thing do everything for me, what’s the point of talking to people?”
Let me say something before I continue — acceleration might be so destabilizing that we regret it. But the regret will be empty, because regret implies you could have chosen differently. AI safetyism suggests we can. But as long as innovation is distributed enough (nebulous word but it’ll do), coordination is fighting a formidable adversary — the “Guinness book impulse”. Me, I’m long resigned to locally optimizing ’til the end of humanity; I’m just gonna use the useful stuff.
Back to my mom. I disagreed as agreeably as I could.
In 1990, you could have a discussion about how many wives Henry VIII had. Today, someone goes “why are we arguing about facts?” when Siri is listening. A whole style of conversation went away. Only someone who longs for Crystal Pepsi misses arguing about facts.
AIs are going to make things as complex as drafting and posting an ad as simple as Googling the definition of “ad”. I mean, this is the root of the whole LMGTFY joke. But as AI improves it will encompass so much more — including some people’s entire job description.
If we dash into the future as we have with prior transformative GPTs (general purpose technologies not “generative pre-trained transformer”), automation will free us to move up the task complexity ladder. But when intelligence itself, in all its recursive acceleration, is the technology — how human-speed needs adapt to sci-fi capability is anyone’s guess (and if you’re into that sort of thing, there’s plenty of guesses out there).
But yeah, if most of your questions start with “How do you…”, before the words hit another’s ears your phone will interrupt — “let me AI that for you” — until you are trained to only talk about whatever else there is to talk about.
My mom is still wondering about that one, but the answer is obvious even if she doesn’t realize it.
🔗Further reading
Terms of Centaur Service (9 min read)
Venkatesh Rao
Venkat is one of my favorite writers. He has been co-writing with LLMs in a series called Contraptions under his main Substack. He’s also documenting his prompting strategy and techniques. It’s like watching a child discover how to use an unfamiliar toy, except the child is a genius and nobody else knows how to use the toy either. You are watching someone tinker on a frontier. This post lays out his case for it, and it’s absolutely worth reading.
But the piece I enjoyed more is an example of this tinkering, called The Poverty of Abundance. The article is a critique of the book Abundance by Ezra Klein and Derek Thompson, but it’s voiced via a third-person device — the setup is:
a Venkat subscriber’s response to the question “Should I read Abundance?”
The writing is strong, the argument resonates, and I just found it enjoyable (although I’ll admit it was a bit repetitive when I read it again, controlling for the fact that I read it, well, again).
Some quotes I snipped:
Insofar as this is Venkat behind a curtain, and I know he is a fan of James C. Scott, whose Anarchism essays I finished recently, the critique tracks 😛