Good to know what the AI thinks of me, I think.
GPT-3 is the massive neural net from OpenAI that can adopt a persona from a few initial text prompts (examples are a Smug VC, Sam Harris, Yuval Noah Harari and Dr. Seuss).

With 175 billion parameters in 96 attention layers, it cost an estimated $15 million to produce. It was pre-trained on Common Crawl, a scrape of the Internet spanning eight years (60%), plus Reddit-sourced text (22%), a corpus of books (15%) and Wikipedia (3%) — hundreds of billions of words in total. This includes proper names, like mine, as I just found out.

The NYT review said that GPT-3 is “amazing”, “spooky”, “humbling”, and “more than a little terrifying.”
Recent update from OpenAI:

“We currently generate an average of 4.5 billion words per day, and continue to scale production traffic.”

Given any text prompt like a phrase or a sentence, GPT-3 returns a text completion in natural language. Developers can “program” GPT-3 by showing it just a few examples or “prompts.”

The “Smug VC” persona was initiated with a few prompt sentences, starting with “The following is an interview with a smug venture capitalist (VC). He is self-important, self-congratulatory, and enthusiastic about technology and startups. He is a master of the humble-brag.”
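To make the "few examples" idea concrete, here is a minimal sketch of how such a few-shot prompt might be assembled before being sent off for completion. The persona description is quoted from the post; the function name, the Q/A formatting, and the sample exchange are all illustrative assumptions, not the actual API or the real interview.

```python
# Sketch of prompt-based "programming": the persona lives entirely in the
# seed text, and each exchange is appended to the growing prompt.
# Formatting and names below are illustrative, not OpenAI's API.

PERSONA = (
    "The following is an interview with a smug venture capitalist (VC). "
    "He is self-important, self-congratulatory, and enthusiastic about "
    "technology and startups. He is a master of the humble-brag.\n"
)

def build_prompt(history, question):
    """Assemble the full text prompt to send for completion."""
    turns = "".join(f"Q: {q}\nA: {a}\n" for q, a in history)
    return f"{PERSONA}\n{turns}Q: {question}\nA:"

# A hypothetical prior exchange; the model's reply would be appended here.
history = [("How did you get started?",
            "Oh, you know, I just happened to seed three unicorns.")]
print(build_prompt(history, "What do you look for in a founder?"))
```

The model then continues the text after the trailing `A:`, and the whole loop repeats — the "skin" is nothing more than this seed paragraph plus the accumulated conversation.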

While there is a waiting list to access the GPT-3 API, you can chat with several AI personas or create your own after a quick sign-in at www.typical.me/skins

4 responses to “the ghost in the Roko’s Basilisk machine”

  1. on the exponential curve… The red dot is GPT-3, at approximately 3,624 petaflop/s-days to train. Remember, a straight line on a semi-log graph like this is an exponential. The left epoch on this graph is in line with Moore’s Law; the light blue epoch on the right, starting with the neural net renaissance in 2012, has been doubling every 3.5 months!

  2. At what point will this reach the world’s computing capacity? 😉

  3. a few moments after the great Awakening, clearly 🙂

  4. …coming soon…. In the October 2021 issue of Artificial Intelligence, DeepMind argues that no further algorithmic breakthroughs are needed to achieve AGI with reinforcement learning:

    “the generic objective of reward maximisation contains within it many or possibly even all the goals of intelligence. Powerful reinforcement learning agents could constitute a solution to artificial general intelligence.”
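The doubling claim in the first comment is easy to sanity-check. Taking 2012 as the start of the "neural net renaissance" epoch and GPT-3 (2020) as the red dot, a 3.5-month doubling time implies roughly a hundred-million-fold growth in training compute — a back-of-the-envelope figure, not a measurement:

```python
# Sanity check for the "doubling every 3.5 months" claim in the thread.
# The 2012 start and the 2020 endpoint come from the comment above;
# everything here is back-of-the-envelope.

months = (2020 - 2012) * 12   # roughly 2012 -> 2020 (GPT-3)
doublings = months / 3.5      # number of doublings in that window
growth = 2 ** doublings       # total multiplier on training compute
print(f"{doublings:.1f} doublings -> {growth:.2e}x growth")
```

About 27 doublings over eight years, i.e. on the order of 10⁸× — which is exactly why the question about the world’s computing capacity isn’t entirely a joke.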
