Astral Codex Ten writes a lengthy and fascinating post exploring the true nature of AI, titled “Janus’ Simulators”. He notes that early (pre-2022) AI philosophers posited three types of AI: a goal-directed agent, a question answerer, and an instruction follower. ChatGPT, he argues, is none of these. Rather, it is a “simulator”. Also, it is a “Masked Shoggoth.”
Key Points
- The Masked Shoggoth is a creature that is part base GPT language model and part Reinforcement Learning from Human Feedback (RLHF) agent. (If you are wondering, the original shoggoth is a (fictional, I hope) creature described in H.P. Lovecraft’s At the Mountains of Madness, “a shapeless congeries of protoplasmic bubbles, faintly self-luminous, and with myriads of temporary eyes forming and un-forming as pustules of greenish light all over the tunnel-filling front that bore down upon us.” Also, bigger than a subway train.)
- ChatGPT is not an AI in any of the above three senses. It pursues no goal other than completing a text. As an aside, my advisor, Jeff Elman, once explained that one of the reasons he chose prediction as the learning task for his Simple Recurrent Network was that it was always easy to get data: just wait for the next word (a minimal sketch of this appears after this list). In predicting text, GPT simulates how texts play out, and so it can be prompted to play at being other kinds of things.
- Nostalgebraist claims that ChatGPT is a GPT instance simulating a character called the “Helpful, Harmless, and Honest Assistant”. GPT and RLHF jointly compose a machine learning model that successfully simulates an agent (the conditioning sketch at the end of this list makes this precise). Prompted appropriately, it simulates this “Helpful, Harmless, and Honest Assistant.” GPT alone is not inherently dangerous, but GPT plus RLHF can be, if put to bad use. We’ve heard this argument before: for example, it’s not atomic energy that’s evil, it’s the people who misuse it.
- However, regarding simulation, many philosophers also see humans as “simulating a character.” We are born as prediction engines, faced with what at first seems like random noise, but environmental shaping via reward and punishment shapes us into something more aligned with reality.
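To make Elman’s point concrete, here is a minimal sketch (my illustration, not code from the post) of why prediction supplies free training data: any stream of text already pairs every context with the word that follows it.

```python
# Minimal illustration (not from the post): next-word prediction
# needs no human labeling, because any text is its own supervision.
text = "the cat sat on the mat".split()

# Every prefix of the text is an input; the word that follows it is the target.
pairs = [(text[:i], text[i]) for i in range(1, len(text))]

for context, target in pairs:
    print(f"context={context} -> target={target!r}")
```

Scale this up from one sentence to a crawl of the internet and you have the GPT pretraining setup: no annotators needed, just wait for the next word.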
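One way to state the “simulator” claim more precisely (my gloss; this notation is not from the post): pretraining fits a next-token distribution, and a prompt merely conditions that distribution, so the model continues whatever kind of process the prompt looks like it came from.

$$
\text{pretraining: } \max_\theta \sum_t \log p_\theta(x_t \mid x_{<t}), \qquad \text{simulation: } x_t \sim p_\theta(\,\cdot \mid c,\, x_{<t}\,),
$$

where $c$ is the prompt. If $c$ reads like a dialogue with a “Helpful, Harmless, and Honest Assistant,” the completions simulate that character; RLHF then shifts $p_\theta$ so that the mask stays on even without careful prompting.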