What just happened? The bizarre case of a Google engineer who claimed a chatbot had become sentient has ended with his dismissal from the company. Blake Lemoine was already on paid leave for publishing transcripts of conversations between himself and Google's LaMDA (Language Model for Dialogue Applications), a violation of the tech giant's confidentiality policies.

Lemoine, also an ordained Christian mystic priest, made headlines worldwide last month after claiming LaMDA was sentient. The conversations he published included the bot's views on Isaac Asimov's laws of robotics, its fear of being shut down (which it likened to death), and a belief that it wasn't a slave as it didn't need money.

Google vehemently denied Lemoine's claims, calling them "wholly unfounded" and noting that LaMDA was merely an algorithm designed to mimic human conversations, like all chatbots. Most AI experts agreed with Google, of course.

The company didn't take too kindly to Lemoine publishing the transcripts, either. He was suspended for violating its confidentiality policies, though Lemoine compared his actions to sharing a discussion he had with a co-worker.

The situation got even weirder a few weeks later when Lemoine said he had hired a lawyer for LaMDA at the chatbot's request. He said the legal professional was invited to Lemoine's house and had a conversation with LaMDA, after which the AI chose to retain his services. According to Lemoine, the lawyer then started to make filings on LaMDA's behalf, prompting Google to send a cease-and-desist letter. The company denies ever sending any such letter.

Lemoine also said Google should ask for LaMDA's consent before performing experiments on it, and he even contacted members of the government about his concerns. All of these actions led to Google accusing the engineer of making several "aggressive" moves.

It seems Google recently decided it has had enough of Lemoine's crusade. "If an employee shares concerns about our work, as Blake did, we review them extensively. We found Blake's claims that LaMDA is sentient to be wholly unfounded and worked to clarify that with him for many months. These discussions were part of the open culture that helps us innovate responsibly," a spokesperson told the Big Technology newsletter.

"So, it's regrettable that despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies that include the need to safeguard product information. We will continue our careful development of language models, and we wish Blake well."

While this is the end of Lemoine's professional relationship with Google (it wouldn't be too surprising if he sought a legal response), the saga has brought the AI debate to the masses and illustrates just how far artificial intelligence has advanced in the last couple of decades. Also, if you think a machine is sentient, keep it to yourself.

Masthead credit: Francesco Tommasini