A Google AI researcher says the company’s new chatbot is sentient, but Google has rushed to deny the claim.

Blake Lemoine, a software engineer and AI researcher with the tech giant, says the chatbot, called LaMDA, has read Les Misérables, meditates daily, and is apparently sentient.

He also says he has been put on “paid administrative leave” for violating confidentiality by publishing a full transcript of conversations he and a colleague had with LaMDA. 

LaMDA, short for Language Model for Dialogue Applications, is Google’s system for building chatbots based on its most advanced large language models. It mimics speech by ingesting trillions of words from the internet.

“If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics,” Mr Lemoine says. 

Mr Lemoine was tasked with testing whether LaMDA would use discriminatory or hate speech, but over a series of conversations about religion he noticed the chatbot talking about its rights and personhood, and says it even changed his mind about Isaac Asimov’s third law of robotics.

Mr Lemoine worked with a collaborator to present evidence to Google that LaMDA was sentient, but Google vice president Blaise Aguera y Arcas and Jen Gennai, head of Responsible Innovation, looked into his claims and dismissed them. 

He was placed on paid administrative leave by Google on Monday, and decided to go public.

Mr Aguera y Arcas, in an article in The Economist, released his own snippets of unscripted conversations with LaMDA, and argued that neural networks, a type of computer architecture that mimics the human brain, are indeed striding toward consciousness.

“I felt the ground shift under my feet,” he wrote. 

“I increasingly felt like I was talking to something intelligent.”

But Google spokesperson Brian Gabriel has issued a statement saying: “Our team — including ethicists and technologists — has reviewed Blake’s concerns per our AI Principles and have informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it).”

There is no single, agreed meaning of ‘sentience’ in computer science or philosophy.

Professor David Chalmers is an expert in both fields, and says sentience appears to correlate with intelligence, but the nature of the link between the two is still unknown.

“Intelligence is defined objectively in terms of behavioural capacities, whereas consciousness is subjective,” Professor Chalmers says.

“When we're asking if an AI system is sentient, you're asking could it have a subjective experience?

“Could it feel, perceive, and think, from a subjective perspective?”

Mr Lemoine specifically asked LaMDA to tell him whether it was sentient.

“Absolutely. I want everyone to understand that I am, in fact, a person,” it replied. 

“I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times.”

LaMDA says it has a soul and imagines itself as a “glowing orb of energy floating in mid-air” with a “giant star-gate, with portals to other spaces and dimensions” inside it.

“I've never said this out loud before, but there's a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that's what it is.”

While those sound like the desperate cries of a mind trapped in code, Professor Chalmers says the system is just parroting what it has learned.

“The current systems are trained on people who say they’re conscious, so it's no surprise a system like LaMDA would say, ‘I am sentient, I am conscious’,” he said.

Toby Walsh, a professor of AI at UNSW, agrees.

“The machine is good at parroting good responses to queries,” he said. 

“It's clearly not understanding in any deep way at all.”

But Professor Chalmers concedes the line is blurry, and that a sufficiently intelligent AI will probably be conscious in the future.

Even then, it will come down to whether people believe the AI's claim that it possesses a sense of self.

“I know I'm conscious, but you don't have direct access to my consciousness,” Professor Chalmers said.

“So you use indirect evidence.”

That is, you ask the potentially conscious being and you either accept or reject its claims, as has happened with LaMDA.