On June 6, 2022, Google placed engineer Blake Lemoine on paid administrative leave for breaching its confidentiality policy. Lemoine, who works in Google's Responsible AI organization, had become concerned that an AI chatbot was sentient while testing whether the company's LaMDA model generates discriminatory language and hate speech.
He was struck by some of the chatbot's replies about the ethics of robotics and its own rights. He then prepared a document titled "Is LaMDA Sentient?" and shared it with Google executives. The document contained a transcript of his conversations with the AI.
The transcript included some striking replies from the LaMDA system, including "I want everyone to understand that I am, in fact, a person" and "The nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times." Lemoine published the full transcript on his Medium account after being placed on leave.
Google maintains that Lemoine's actions broke its confidentiality rules. He also reportedly invited a lawyer to "represent" the AI system and spoke to a representative of the House Judiciary Committee about what he claimed were unethical activities at Google.
What is LaMDA?
LaMDA (Language Model for Dialogue Applications) is a conversational AI model that Google announced at Google I/O 2021. Google hopes it will make conversations with its AI assistants feel more natural.