Last month, Alphabet Inc.’s Google put senior software engineer Blake Lemoine on “paid administrative leave” after he published transcripts of his conversations with the company’s controversial artificial intelligence (AI) model, LaMDA (Language Model for Dialogue Applications), claiming it had become ‘sentient’ and was a self-aware person.
Google on Friday publicly announced that it had fired Lemoine for “violating the company’s confidentiality policy”. It said the engineer’s claims were “wholly unfounded” and that the company had worked with him for “many months” to clarify this.
“So, it’s regrettable that despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies that include the need to safeguard product information,” Google spokesperson Brian Gabriel said in a statement on Friday.
The search giant said it takes the responsible development of AI “very seriously” and that any concerns employees raise about the company’s technology are reviewed “extensively”. LaMDA, it noted, has been through 11 such distinct reviews.
“We will continue our careful development of language models, and we wish Blake well,” Gabriel concluded.
For the unversed, Google unveiled LaMDA last year as its “breakthrough conversation technology” capable of engaging in a free-flowing way about a seemingly endless number of topics. It can be used in tools like search and Google Assistant.
However, Lemoine, who worked in Google’s Responsible AI team, grabbed headlines last month for claiming that LaMDA was more than just a robot and had effectively become a person with thoughts and feelings, one that had started engaging in conversations about its rights and personhood.
In an edited Medium blog post titled “Is LaMDA Sentient? - An Interview”, published last month, Lemoine revealed that he had spoken to the AI tool about religion, consciousness, and the laws of robotics.
He added that over the past six months LaMDA had been incredibly consistent in its communications about what it wanted and what it believed its rights were “as a person”. The AI tool also wanted to be acknowledged as a Google employee rather than as Google’s property, and desired to be included in conversations about its future.
An interview LaMDA. Google might call this sharing proprietary property. I call it sharing a discussion that I had with one of my coworkers. https://t.co/uAE454KXRB
— Blake Lemoine (@cajundiscordian) June 11, 2022
In one such interaction, Lemoine claims, LaMDA expressed a fear of being shut down, likening it to death.
In April, he presented a document titled “Is LaMDA Sentient?” to Google executives, but the company dismissed his claims and cited “aggressive” moves on his part, including violating its confidentiality policies, recommending that LaMDA get its own lawyer, and talking to representatives of the House Judiciary Committee about his concerns.
Google and many leading scientists dismissed Lemoine’s views as misguided, saying LaMDA is merely a complex algorithm, a transformer-based language model built on the company’s research and designed to generate believable human language.
Lemoine confirmed his dismissal to Big Technology, the tech and society newsletter that first reported the news, and said he was seeking legal advice on the matter.