Stop Saying That Google’s AI Is Sentient, You Dupes

Illustration by Elizabeth Brockway/The Daily Beast

From 1964 to 1966, an MIT computer scientist named Joseph Weizenbaum developed an early chatbot dubbed ELIZA. One of the scripts he wrote for it simulated a Rogerian psychotherapist, letting users type in their thoughts and get questions back in response, as if ELIZA were psychoanalyzing them.

The bot managed to be incredibly convincing, producing deceptively intelligent responses to user input. It was so realistic, in fact, that it became one of the first bots said to have passed the famous Turing test, in which a computer counts as intelligent if a human judge can’t tell its replies apart from another human’s. Today, you can chat with ELIZA yourself from the comfort of your home. It might seem fairly archaic to us now, but it was highly impressive in its time, and it laid the groundwork for some of today’s most sophisticated AI bots, including one that at least one engineer claims is conscious.
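Weizenbaum’s program relied on nothing more exotic than keyword matching and pronoun “reflection.” To see how little machinery that illusion required, here is a minimal, hypothetical Python sketch of an ELIZA-style exchange; the rules and templates below are illustrative stand-ins, not Weizenbaum’s original DOCTOR script:

```python
import random
import re

# Pronoun swaps used to "reflect" the user's words back at them.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my", "yours": "mine",
}

# A few keyword rules in the spirit of a Rogerian script:
# match a fragment of the input, reflect it, and answer with a question.
RULES = [
    (r"i feel (.*)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (r"i am (.*)", ["Why do you say you are {0}?", "How does being {0} make you feel?"]),
    (r"my (.*)", ["Tell me more about your {0}.", "Why does your {0} concern you?"]),
]

# Generic prompts for when no rule matches.
FALLBACKS = ["Please, go on.", "How does that make you feel?", "Can you elaborate on that?"]

def reflect(fragment: str) -> str:
    """Swap first- and second-person words so the reply mirrors the user."""
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

def respond(user_input: str) -> str:
    """Return a canned question built from the first keyword rule that fires."""
    cleaned = user_input.lower().strip(" .!?")
    for pattern, templates in RULES:
        match = re.match(pattern, cleaned)
        if match:
            return random.choice(templates).format(reflect(match.group(1)))
    return random.choice(FALLBACKS)

print(respond("I feel ignored by my coworkers."))
# e.g. "Why do you feel ignored by your coworkers?"
```

There is no learning and no understanding here, just pattern matching and string substitution, and that alone was enough to feel convincing.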

Fast-forward to today: The tech world has been in a frenzy after news dropped that a Google chatbot AI had allegedly become sentient. It’s an incredible claim, and one that would have massive implications if it were even remotely close to true.

But there’s one problem: It’s not true. At all. Not only that, but the claims fan the flames of misinformation about what AI can actually do, which can cause a lot more harm than good.

To understand why, let’s take a quick step back. Google revealed the Language Model for Dialogue Applications (LaMDA) chatbot in 2021, calling it a “breakthrough” in AI conversation technology. The bot promised a much more intuitive conversational experience, one able to discuss a wide range of topics in realistic ways, akin to a chat with a friend.

Google claims that its chatbot LaMDA is capable of holding a realistic conversation.

Google

On June 11, The Washington Post published a story about Blake Lemoine, an engineer for Google’s Responsible AI organization, who claimed that LaMDA had become sentient. He came to his conclusions after a series of admittedly startling conversations with the chatbot, in which it eventually “convinced” him that it was aware of itself, its purpose, and even its own mortality. LaMDA also allegedly challenged Isaac Asimov’s third law of robotics, which states that a robot must protect its own existence so long as doing so doesn’t harm a human or conflict with a human’s orders.

Lemoine was suspended from the company after he attempted to share these conclusions with the public, thus violating Google’s confidentiality policy. This included penning and sharing a paper titled “Is LaMDA Sentient?” with company executives and sending an email with the subject line “LaMDA is sentient” to 200 employees.

But there are a number of big, unwieldy issues with both the claim and the willingness of the media and public to run with it as if it were fact. For one (and this is important), LaMDA is very, very, very unlikely to be sentient… or at least not in the way some of us think. After all, the way we define sentience is incredibly nebulous already. It’s the ability to experience feelings and emotions, a bar loose enough to apply to practically anything from humans, to dogs, to a powerful AI.

“In many ways, it’s not the right question to ask,” Pedro Domingos, professor emeritus of computer science and engineering at the University of Washington and author of the book The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World, told The Daily Beast. In fact, he added that we will start treating machines as sentient long before they actually are, and that we have done so already.

“As far as sentience goes, this is just like ELIZA all over again, just on a grander scale,” Domingos said.

That’s not to say that Lemoine embellished or straight-up lied about his experience. Rather, his perception that LaMDA is sentient is misleading at best, and incredibly harmful at worst. Domingos even suggested that Lemoine might be falling prey to a very human tendency: attaching human qualities to non-human things.

“Since the beginning of AI, people have tended to project human qualities onto machines,” Domingos explained. “It’s very natural. We don’t know any other intelligence that speaks languages other than us. So when we see something else doing that like an AI, we project human qualities onto it like consciousness and sentience. It’s just how the mind works.”

Lemoine’s story also doesn’t provide enough evidence to make the case that the AI is conscious in any way. “Just because something can generate sentences on a topic, it doesn’t signify sentience,” Laura Edelson, a postdoc in computer science security at New York University, told The Daily Beast.

Edelson was one of the many computer scientists, engineers, and AI researchers who grew frustrated at the framing of the story and the subsequent discourse it spurred. For them, though, one of the biggest issues is that the story gives people the wrong idea of how AI works and could very well lead to real-world consequences.

“It’s quite harmful,” Domingos said, later adding, “It gives people the notion that AI can do all these things when it can’t.”

“This is leading people to think that we can hand these large, intractable problems over to the machines,” Edelson explained. “Very often, these are the kinds of problems that don’t lend themselves well to automation.”

The example she points to is the use of AI to sentence criminal defendants. The problem is that the machine-learning systems used in those cases were trained on historical sentencing information, data that’s inherently racially biased. As a result, communities of color and other populations that have been historically targeted by law enforcement receive harsher sentences from AI systems that replicate those biases.

The false idea that an AI is sentient, then, could lead people to think that the technology is capable of much, much more than it really is. In reality, these are issues that can and should only be solved by human beings. “We can’t wash our problems through machine learning, get the same result, and feel better about it because an AI came up with it,” Edelson said. “It leads to an abdication of responsibility.”

And if a robot were actually sentient in a way that matters, we would know pretty quickly. After all, artificial general intelligence, or the ability of an AI to learn anything a human can, is something of a holy grail for many researchers, scientists, philosophers, and engineers already. There would need to be, and almost certainly would be, something like a consensus if and when an AI became sentient.

For Domingos, the LaMDA story is a cautionary tale, one that’s more amusing than it is surprising. “You’d be surprised at how many people who aren’t dumb are on board with such nonsense,” he said. “It shows we have much more work to do.”

Lemoine’s story reads like a case of digital pareidolia, the psychological phenomenon of seeing patterns and faces where none exist, exacerbated by his proximity to the supposedly sentient AI. After all, he spent months working on the chatbot, logging countless hours developing and “conversing” with it. He built a relationship with the bot, a one-sided one, but a relationship nonetheless.

Perhaps we shouldn’t be too surprised, then, that when you talk to yourself long enough, you start hearing voices talk back.
