Progress in ChatGPT, AI and Safety:
Nitasha Tiku, Gerrit De Vynck and Will Oremus tell us in The Washington Post that “Big Tech was moving cautiously on AI. Then came ChatGPT.” Their article is based on interviews, some anonymous, with current and former Google and Meta employees.
They tell us that months before ChatGPT became the talk of the town in November, Facebook’s parent company Meta released BlenderBot, a chatbot that was met with a lukewarm response. Yann LeCun, Meta’s chief artificial intelligence scientist, blames the tepid reception on the company’s over-cautious approach to content moderation: instead of allowing the chatbot to discuss controversial topics, Meta opted to make it change the subject when users asked about subjects such as religion.
ChatGPT, on the other hand, dives right into conversations about anything, from falsehoods in the Quran to writing a prayer for a rabbi to deliver to Congress. ChatGPT’s willingness to enter into difficult conversations is what makes it a hit.
The article quotes Mark Riedl, professor of computing at Georgia Tech, who says that OpenAI’s technology may not be better than what Google and Meta have developed, but that its practice of releasing its language models to the public, and refining them with reinforcement learning from human feedback (RLHF), has given it a huge advantage.
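To make the RLHF idea concrete, here is a tiny, self-contained Python sketch of the two-step loop the term refers to: fit a reward model from pairwise human preferences, then nudge a policy toward responses that the reward model scores highly. Everything in it — the canned responses, the “helpfulness” feature, the stand-in human labeler, the learning rates — is invented for illustration; this is a conceptual toy, not OpenAI’s actual training pipeline.

```python
# Toy sketch of the RLHF loop: (1) learn a reward model from pairwise human
# preferences, (2) push a softmax policy toward high-reward responses.
# All responses, features, and labels here are hypothetical.
import math
import random

random.seed(0)

responses   = ["I can't talk about that.",
               "Here's a short answer.",
               "Here's a detailed, sourced answer."]
helpfulness = [0.0, 0.5, 1.0]        # hand-made feature per response (assumed)
policy_logits = [0.0, 0.0, 0.0]      # policy starts indifferent
reward_weight = 0.0                  # single parameter of the toy reward model

def softmax(logits):
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]

def human_prefers(i, j):
    # Stand-in for a human labeler: prefers the more helpful response.
    return helpfulness[i] > helpfulness[j]

# Step 1: fit the reward model on pairwise preferences (Bradley-Terry style).
for _ in range(200):
    i, j = random.sample(range(len(responses)), 2)
    winner, loser = (i, j) if human_prefers(i, j) else (j, i)
    # Gradient ascent on log sigmoid(r_winner - r_loser), with r = w * helpfulness.
    margin = reward_weight * (helpfulness[winner] - helpfulness[loser])
    grad = (1 - 1 / (1 + math.exp(-margin))) * (helpfulness[winner] - helpfulness[loser])
    reward_weight += 0.1 * grad

# Step 2: improve the policy with a simple policy-gradient step on learned reward.
for _ in range(500):
    probs = softmax(policy_logits)
    a = random.choices(range(len(responses)), weights=probs)[0]
    reward = reward_weight * helpfulness[a]
    for k in range(len(responses)):
        indicator = 1.0 if k == a else 0.0
        policy_logits[k] += 0.05 * reward * (indicator - probs[k])

print("learned reward weight:", round(reward_weight, 2))
print("policy now favors:",
      responses[max(range(len(responses)), key=lambda k: policy_logits[k])])
```

In production systems the “policy” is a large language model and the update step uses algorithms such as PPO rather than this bandit-style rule, but the shape of the loop — human preferences train a reward model, and the reward model steers the model’s behavior — is the same.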
Google may be a cautionary tale about the dangers of playing it too safe. In the past, Google deployed AI breakthroughs in understanding language, but these led to a wide variety of protests. Following those protests, Google released its AI principles in 2018. Despite pressure to see innovation reach the public sooner, Google’s system of checks and balances for vetting the ethical implications of cutting-edge AI may have significantly slowed progress.
The article also describes similar barriers at Meta. Before launching new products or publishing research, Meta employees must answer questions about how their work could be misconstrued, and public relations staff and internal compliance experts are brought in to ensure the work complies with the law and is unlikely to draw controversy.
This focus on safety may be causing an exodus of engineering talent from the major companies. AI researchers have left them to launch start-ups built around large language models, including Character.AI, Cohere, Adept, Inflection.AI, Inworld AI, and Neeva.
In a situation reminiscent of Xerox PARC ceding the graphical user interface to Apple in the seventies, the article quotes David Ha of Stability AI, the company behind Stable Diffusion: “If Google doesn’t get their act together and start shipping, they will go down in history as the company who nurtured and trained an entire generation of machine learning researchers and engineers who went on to deploy the technology at other companies.”