The New Yorker on OpenAI and Whisper:

Over at the New Yorker, James Somers tells us about Whisper, OpenAI's open-source speech transcription model. Somers gives a rather lengthy and discursive recent history of large language models, which anyone reading this blog is probably already familiar with. But I think he's buried the lede: AI researcher Richard Sutton's "bitter lesson," the one that applications like Whisper teach us:

General methods that leverage computation are ultimately the most effective, and by a large margin.

This is not a new argument, and it also exists outside the field of computer science.

Linguistic nativism and its implications for AI

My doctoral advisor, psycholinguist Elizabeth Bates, once said that evolution tends to take one of two branches in implementing information processing:

A) An animal like a bee, with precise, specific, embedded, and dedicated abilities that don't change over time.

B) An initially dumber but broader and more flexible learning system, like humans.

Her argument was against linguistic nativism, which she saw as casting human beings as pre-wired, merely cleverer bees.

To provide some context: Linguist Noam Chomsky and many of his followers state that humans are born with an innate language faculty and a “universal grammar.” This faculty is not learned through exposure to language but is instead a result of genetic inheritance. 

Arguments in favor of linguistic nativism include the speed and ease with which children learn language, including complex grammatical structures and vocabulary, without explicit instruction. There is also little experimental or naturalistic evidence that children are corrected every time they make a language mistake.

By contrast, linguist Zelig Harris and cognitive scientist Jeff Elman both argued that language is not an inborn, biologically determined ability, but rather a product of the brain's general capacity to learn. Children learn language through experience and interaction with the environment, just as they acquire other skills.
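Elman's best-known demonstration of this general-learning view was the simple recurrent network (SRN), a model with no built-in grammar that nonetheless picks up sequential structure purely from exposure. Here's a minimal sketch of the SRN's forward pass; the sizes, weights, and toy sequence are my illustrative assumptions, not anything from Somers's article:

```python
import numpy as np

# A minimal Elman-style simple recurrent network (SRN): the recurrent
# hidden state is the network's only "memory" of earlier symbols, so any
# grammatical knowledge it shows must come from learning, not pre-wiring.
# Sizes and inputs below are toy assumptions for illustration.
rng = np.random.default_rng(0)

n_in, n_hidden, n_out = 4, 8, 4                   # e.g. a 4-symbol vocabulary
W_xh = rng.normal(0, 0.5, (n_in, n_hidden))       # input -> hidden
W_hh = rng.normal(0, 0.5, (n_hidden, n_hidden))   # "context" (recurrent) weights
W_hy = rng.normal(0, 0.5, (n_hidden, n_out))      # hidden -> next-symbol logits

def srn_forward(sequence):
    """Run a sequence of one-hot symbols through the SRN, one step at a time."""
    h = np.zeros(n_hidden)
    outputs = []
    for x in sequence:
        h = np.tanh(x @ W_xh + h @ W_hh)          # mix current input with prior context
        outputs.append(h @ W_hy)                  # logits for the next symbol
    return np.array(outputs), h

seq = np.eye(n_in)[[0, 2, 1, 3]]                  # toy sequence of one-hot symbols
outputs, final_h = srn_forward(seq)
print(outputs.shape)                              # one prediction per time step
```

Because the hidden state carries context forward, the same symbol yields different predictions depending on what preceded it; with training (omitted here), that context sensitivity is what lets the network learn sequential regularities from experience alone.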

So, assuming the non-nativist view is correct, then perhaps, in a nod to carcinization ("Why does everything become a crab?"), our AI models are evolving in the same direction that nature guided human cognitive evolution and language development.