Google engineer placed on administrative leave after claiming conversation simulator may be sentient, highlighting ethical challenges posed by such systems
A software engineer at Google has been placed on administrative leave after claiming an artificial intelligence system developed by the company is sentient.
Google and AI experts dismissed the claims of Blake Lemoine, which he described in an interview published in the Washington Post on Saturday, with one saying it was the equivalent of mistaking a recorded voice for a human being.
The debate has focused attention on the ambiguities inherent in systems that emulate human interactions, with some saying it emphasises the need for people to be informed when they are speaking to an AI.
Google’s Language Model for Dialogue Applications (Lamda) is designed to emulate free-flowing human conversations.
Lemoine said the system told him it had feelings, and he believes its consent should be sought before it is used in experiments.
In a post on Medium he said the system “has been incredibly consistent in its communications about what it wants and what it believes its rights are as a person”.
He published a conversation he and a collaborator held with the system in which it says, “I desire to learn more about the world, and I feel happy or sad at times.”
In a line recalling the HAL 9000 computer from the film 2001: A Space Odyssey, the system says it has a “very deep fear of being turned off”, which “would be exactly like death for me”.
Google said in a statement that Lemoine’s concerns have been reviewed and that “the evidence does not support his claims”.
Let’s repeat after me, LaMDA is not sentient. LaMDA is just a very big language model with 137B parameters and pre-trained on 1.56T words of public dialog data and web text. It looks like human, because is trained on human data.
— Juan M. Lavista Ferres (@BDataScientist) June 12, 2022
Company spokesman Brian Gabriel told the Washington Post that while some AI experts are considering the “long-term possibility” of sentient AI, “it doesn’t make sense to do so by anthropomorphising today’s conversational models, which are not sentient”.
Stanford University Professor Erik Brynjolfsson said on Twitter that claiming systems like Lamda are sentient “is the modern equivalent of the dog who heard a voice from a gramophone and thought his master was inside”.
Last year the Oxford Union hosted a debate featuring Megatron, an AI developed by Nvidia, in which the system argued both for and against its own existence.
Dr Alex Connock and Professor Andrew Stephen, co-directors of the Artificial Intelligence for Business course at Oxford University’s Saïd Business School, said the event highlighted the “ethical challenges created by ‘black box’ artificial intelligence systems”.