What ‘sentient’ AI teaches us

“I felt the ground shift under my feet ... increasingly felt like I was talking to something intelligent.”

So said Blake Lemoine in June based on conversations he had with Google’s LaMDA (Language Model for Dialogue Applications), a chatbot generator based on Google’s gigantic machine learning language model. Lemoine had been working at Google, but the company placed him on administrative leave after LaMDA engaged a lawyer Lemoine had introduced it to. True story!

Sentient AI

Lemoine had worked at Google for 7 years, most of the time on search algorithms, but moved to the company’s Responsible AI (RAI) group during the pandemic to work on issues he thought were of more direct public benefit. His self-description on his Medium page gives a sense of the breadth of his interests: “I’m a software engineer. I’m a priest. I’m a father. I’m a veteran. I’m an ex-convict. I’m an AI researcher. I’m a cajun. I’m whatever I need to be next.” (Disclosure: I’ve worked part-time for Google RAI, and am currently writing about AI ethics issues part-time for Google’s internal use. Of course I do not speak for Google in any way. And I’ve never met Mr. Lemoine.)

I think it’s pretty clear that Lemoine’s conclusion about his encounter with LaMDA comes from a basic mistake. Indeed, Gary Marcus, a scientist, psychology professor, and excellent writer about AI, calls it “nonsense on stilts” occasioned by “the anthropomorphic bias that allows humans to see Mother Teresa in an image of a cinnamon bun” ... which is especially easy when the bun was massively engineered to look like Mother Teresa.

Let me be more specific about what “massively engineered” means: Google’s language model was trained on more than a half-trillion words largely taken from the internet, resulting in 137 billion parameters; parameters are, roughly, the numeric weights the training algorithm keeps adjusting until the model’s output is sufficiently accurate. Or to put the whole thing less technically: to understand the scale of large language models like Google’s and OpenAI’s GPT-3, you need to boggle your mind and then boggle the boggle.
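To make “adjusting weights” a bit more concrete, here is a toy sketch that is nothing like LaMDA’s actual architecture: a handful of parameters for guessing the next word, nudged repeatedly until they fit a tiny made-up corpus. Every detail here (the corpus, the learning rate, the `predict` function) is invented purely for illustration.

```python
# Toy illustration only: a miniature next-word model whose "parameters" are
# a small grid of numbers, adjusted step by step until its predictions match
# the training data. LaMDA's training is conceptually similar nudging, but
# over billions of parameters and a vastly larger corpus.
import math
import random

# Invented mini-corpus of (previous word, next word) pairs.
corpus = [("feeling", "better"), ("feeling", "better"), ("feeling", "sick"),
          ("hope", "you"), ("hope", "so")]

vocab = sorted({w for pair in corpus for w in pair})
idx = {w: i for i, w in enumerate(vocab)}

# Parameters: one weight per (previous word, candidate next word) pair.
random.seed(0)
weights = [[random.uniform(-0.1, 0.1) for _ in vocab] for _ in vocab]

def predict(prev):
    """Turn the weights for `prev` into a probability for each next word."""
    scores = weights[idx[prev]]
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Training loop: raise the probability of the next words actually observed
# (a plain softmax-regression gradient step, repeated until the fit is good).
learning_rate = 0.1
for epoch in range(500):
    for prev, nxt in corpus:
        probs = predict(prev)
        for j, word in enumerate(vocab):
            target = 1.0 if word == nxt else 0.0
            weights[idx[prev]][j] += learning_rate * (target - probs[j])

print({w: round(p, 2) for w, p in zip(vocab, predict("feeling"))})
# "better" ends up near 2/3 and "sick" near 1/3, matching the corpus counts.
```

The point is only that “training” is nothing more mysterious than nudging numbers; it is the staggering scale of the nudging that makes the output so fluent.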

In this case, Lemoine’s error consists of two further steps. First, he assumes that it takes sentience to seem sentient. But LaMDA was built to create chatbots that respond appropriately to what we say. We take responding appropriately in language as such a key indicator of consciousness, and thus sentience, that Alan Turing designed his famous test around it.

Human language

But Turing was wrong about that. LaMDA might well pass a rigorous Turing test, yet, as Gary Marcus says, a large language model is just a “spreadsheet for words”: a massive autocompletion system that knows how words go together but has not the foggiest idea how those words connect to the world. For example, it can respond to your statement that you’ve been sick with, “I hope you’re feeling better” because those words appear together with some frequency, but it doesn’t know who you are or have any hopes.
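To see how far mere word-adjacency can get you, here is a crude sketch of that “spreadsheet for words” idea. Again, every detail (the scrap of text, the `autocomplete` function) is invented for illustration and bears no resemblance to how LaMDA is actually built:

```python
# Toy caricature of autocompletion: tally which word follows which in a
# made-up scrap of text, then "autocomplete" from the tallies alone. The
# program has no idea what sickness or hope is; it only knows co-occurrence.
from collections import Counter, defaultdict

text = ("i have been sick . i hope you are feeling better . "
        "i hope you are feeling better soon .").split()

# The "spreadsheet": for each word, a count of the words seen right after it.
follows = defaultdict(Counter)
for prev, nxt in zip(text, text[1:]):
    follows[prev][nxt] += 1

def autocomplete(prompt_word, length=4):
    """Greedily extend a prompt by always picking the most frequent follower."""
    out = [prompt_word]
    for _ in range(length):
        candidates = follows[out[-1]].most_common(1)
        if not candidates:
            break
        out.append(candidates[0][0])
    return " ".join(out)

print(autocomplete("hope"))  # -> "hope you are feeling better"
```

It will happily produce “hope you are feeling better” from counts alone, without knowing what hoping or feeling is.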
