Welcome To The Machine

Unless you are a Silicon Valley billionaire, you accept that you have a finite life. But for those of us who can’t afford the frozen-in-carbonite shot at immortality, there is a poor man’s substitute: parallel identities. Multiple ways to live the single draw you get from a womb. It’s gonna sound strange, but stick with me for a second. See this musing I came across (here’s the text, I saved you a click):
Many friends have taken a serious interest in longevity.

I get it. But I’ve always been more interested in the other lever: resets.

There’s little reason identity should persist across 80 (or 200) years. French Foreign Legionnaires and cheating husbands have always assumed new identities. Identity persistence has only recently happened as a result of strong government record keeping and centralization.

If we are going to pursue biological longevity, we should allow a diversity of lives to be lived. Many folks achieve this with an ‘alt’ (see LARPing and trail names). But if life is to be radically extended, information resets seem almost necessary. That is, to allow total amnesia as a choice. Total reset.

I see it as an extension of what we do online. We can have different avatars/profiles/etc. in different spaces, and there are also different spaces for different purposes. Online games are not necessarily meant to be a place where you live your entire life (although it does happen), but they are meant to be places where you can explore different parts of yourself or engage in different types of play.

Not taking this seriously feels like the same type of failure the anti-longevity crowd often traffics in: refusing to believe that the way we are living life now could be better simply because the alternative lacks biological precedent. Sometimes all you need is to reset the game.

So maybe it’s just that I don’t see why our first lives should be the only ones that we can explore. Maybe we can have lives for different spaces in our limited time. Or maybe this is just wishful thinking on my part?

If that resonates, raise your hand. You have just enjoyed the work of a T-1000. It can mimic the voice of whoever it comes into contact with. In this case, it put a liquid sword through writer Nadia Eghbal.

What’s going on?

That musing was generated by GPT-3 after being primed by one of Nadia’s posts. H/t to Stefan for sharing it with me (which was very coincidental since he didn’t know that I had recently ordered Nadia’s latest book).

GPT-3

I’m going to lean on Anne-Laure’s concise description of GPT-3:

GPT stands for “generative pre-training transformer”, a language model which can generate world knowledge by training on a diverse corpus of text. GPT-3 is the third iteration of this model. It’s basically a language predictor: you feed it some content, and it guesses what should come next.

What makes GPT-3 extraordinary compared to its predecessors is the sheer size of the model, which has 175 billion parameters. GPT-2 “only” had 1.5 billion parameters, which was already considered massive when it was released last year.

GPT-3 has effectively ingested most of what humans have published online. It uses all the text available on the Internet to generate a statistically plausible response based on the text input it receives. And because it has lots of data to figure out what response is most plausible, the predictions tend to be quite accurate—too accurate for some people who fear software based on GPT-3 will replace their job.
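
To make “you feed it some content, and it guesses what should come next” concrete, here is a minimal sketch. GPT-3 itself sits behind OpenAI’s API, but its open-source predecessor GPT-2 exposes the same next-token mechanic through Hugging Face’s transformers library. The small “gpt2” checkpoint is used only because it’s freely downloadable, and the prompt just borrows the first line of the musing above:

```python
# pip install transformers torch
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

# Feed it some content...
prompt = "Many friends have taken a serious interest in"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# ...and look at what the model thinks should come next:
# a probability for every token in its vocabulary.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(token_id.item())!r}: {prob.item():.3f}")
```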

Faster than you can utter the syllables in “Skynet”, Anne-Laure gives a promising view of how such a virtual assistant can augment not just our productivity but, more importantly, our creativity. Check out her post for concrete examples of how such a tool is aiding teaching, idea generation, and, most intriguingly, design. (Link)

Standing Out

There’s already a beta of a tool that uses GPT-3 to generate personal-sounding emails trained on your own writing style. Byrne Hobart anticipates the concern:

“Managers can write emails with text like ‘k got it,’ but their subordinates have to fluff that out with punctuation, capitalization, and other niceties. If this gets widespread, it will save some time, but also *force us to find a new norm for maintaining the social pecking order in text*.”

The emphasis is mine.

The concern is that as GPT-3 flattens or commoditizes skills, even creative skills like writing, people will jockey for new ways to assert their value. This recognizes an often critical dimension of value: relative scarcity. You understand this if you play fantasy football. If all tight ends are very good but very similar, none of them is worth an early pick. But a 2011 Gronk or Jimmy Graham, whose VORP (value over replacement player) towered over the rest of the position, justifies a first-round pick.
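
To put toy numbers on the scarcity logic (every projection below is invented purely for illustration): VORP is just a player’s projected points minus what a freely available replacement at the same position would score.

```python
# Toy VORP (value over replacement player) calculation.
# All point projections are made up for illustration only.
te_projections = {
    "Outlier TE (think 2011 Gronk)": 240,
    "Good TE A": 150,
    "Good TE B": 148,
    "Waiver-wire TE": 140,  # replacement level: freely available
}

replacement_level = te_projections["Waiver-wire TE"]

for player, points in te_projections.items():
    print(f"{player}: VORP = {points - replacement_level}")

# The outlier's VORP (100) dwarfs the good-but-interchangeable
# TEs (8-10), which is why only the outlier earns a premium pick.
```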

Alex Danco has written brilliantly on the idea of positional scarcity in business strategy. And with all the talk of Zoom meetings these days, his model would suggest the VORP of in-person meetings will increase. He provides more examples in his essays:

  • Positional Scarcity (Link)
  • Positional Scarcity and the Virus (Link)

More GPT Links

  • Chess

    GPT-2 (the predecessor to GPT-3) can play chess without having any conception of a chessboard or even knowing that it’s playing a game. After being fed a giant corpus of chess game logs, it generates moves as text responses, kind of like how you might train a chatbot on customer interactions. In this case, the customer says “f2-f4” and GPT responds “Ng1-f3”. (A Twitter thread by Tom Chivers; a toy sketch of the loop follows this list.)

  • How to spot a GPT-3 generated post

    It turns out the text it generates violates Zipf’s Law: common words occur too frequently and uncommon words are almost non-existent. Will we have browser extensions that tell us when we might be reading a robot-generated post? Leon takes a deep dive into GPT-3; a back-of-the-envelope version of the check follows this list. (Link)
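
As referenced in the chess item above, here is a rough sketch of the prompt-and-continue loop. It uses stock GPT-2 via the transformers pipeline; the experiment in Tom’s thread relied on fine-tuning over a huge corpus of game logs, so stock GPT-2’s continuations will usually not be legal moves. The point is only the interface: text in, text out.

```python
# pip install transformers torch
from transformers import pipeline

# Stock GPT-2 stands in for the fine-tuned model in the thread.
# Without training on game logs its continuations are rarely legal
# chess, but the interface is identical either way.
generator = pipeline("text-generation", model="gpt2")

# The "conversation" is just a move log. The customer says "f2-f4"...
move_log = "1. f2-f4 e7-e5 2. Ng1-f3 d7-d5 3."

# ...and the model replies with whatever text plausibly comes next.
result = generator(move_log, max_new_tokens=10, do_sample=False)
print(result[0]["generated_text"])
```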
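
And a back-of-the-envelope version of the Zipf check from the last item. Under Zipf’s Law, the r-th most common word appears with frequency roughly proportional to 1/r, so the slope of log-frequency against log-rank sits near -1 for human text; text that over-uses common words and starves the rare-word tail falls off more steeply. The zipf_slope helper below is my own toy, not Leon’s actual method:

```python
import math
from collections import Counter

def zipf_slope(text: str) -> float:
    """Least-squares slope of log(word frequency) vs. log(rank)."""
    counts = Counter(text.lower().split())
    freqs = sorted(counts.values(), reverse=True)
    xs = [math.log(rank) for rank in range(1, len(freqs) + 1)]
    ys = [math.log(freq) for freq in freqs]
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    return cov / var

# Human-written text tends to land near -1; a much steeper slope
# hints the rare-word tail is missing, as the post describes.
sample = "the cat sat on the mat and the dog sat on the log by the door"
print(f"slope: {zipf_slope(sample):.2f}")
```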
