The Washington Post on Saturday ran an interview with Lemoine, who conversed with an AI system called LaMDA, or Language Model for Dialogue Applications, a framework that Google uses to build specialized chatbots. The system has been trained on trillions of words from the internet in order to mimic human conversation. In his conversation with the chatbot, Lemoine said he concluded that the AI was a sentient being that should have its own rights. He said the feeling was not scientific, but religious: “who am I to tell God where he can and can’t put souls?” he said on Twitter.

Alphabet Inc.’s Google employees were largely silent in internal channels besides Memegen, where Google employees shared a few bland memes, according to a person familiar with the matter. But throughout the weekend and on Monday, researchers pushed back on the notion that the AI was truly sentient, saying the evidence only indicated a highly capable system of human mimicry, not sentience itself.

“It is mimicking perceptions or feelings from the training data it was given - smartly and specifically designed to seem like it understands,” said Jana Eggers, the chief executive officer of the AI startup Nara Logics.

The architecture of LaMDA “simply doesn’t support some key capabilities of human-like consciousness,” said Max Kreminski, a researcher at the University of California, Santa Cruz, who studies computational media. If LaMDA is like other large language models, he said, it wouldn’t learn from its interactions with human users because “the neural network weights of the deployed model are frozen.” It would also have no other form of long-term storage that it could write information to, meaning it wouldn’t be able to “think” in the background.
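Kreminski’s point about frozen weights can be made concrete with a minimal sketch. The snippet below is hypothetical PyTorch code - LaMDA’s implementation is not public, and the model, sizes, and names here are illustrative only - showing what “frozen” means for a deployed model: the serving path runs forward passes only, gradients are disabled, and nothing is written back to the weights between conversations.

```python
# Minimal sketch (hypothetical, not LaMDA's actual code): why a deployed
# model with frozen weights cannot learn from its conversations.
import torch
import torch.nn as nn

# Stand-in for a large language model; the real thing would be far larger.
model = nn.Linear(512, 512)

# "Freezing": no gradients are tracked, so no weight update is possible.
for param in model.parameters():
    param.requires_grad = False
model.eval()  # inference mode; disables training-time behaviors like dropout

# Serving path: forward passes only.
with torch.no_grad():
    prompt = torch.randn(1, 512)   # placeholder for an encoded user prompt
    response = model(prompt)       # weights are read, never written

# There is no optimizer step here and no external memory store, so nothing
# from the exchange persists - the "no long-term storage" point above.
```

Learning from users would require the training loop that serving deliberately omits: computing a loss on the exchange and applying an optimizer step to the weights.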
In a response to Lemoine’s claims, Google said that LaMDA can follow along with prompts and leading questions, giving it an appearance of being able to riff on any topic. “Our team - including ethicists and technologists - has reviewed Blake’s concerns per our AI Principles and have informed him that the evidence does not support his claims,” said Chris Pappas, a Google spokesperson.

Lemoine’s stance may also make it easier for tech companies to abdicate responsibility for AI-driven decisions, said Emily Bender, a professor of computational linguistics at the University of Washington. “Lots of effort has been put into this sideshow,” she said. “The problem is, the more this technology gets sold as artificial intelligence - let alone something sentient - the more people are willing to go along with AI systems” that can cause real-world harm.

Bender pointed to examples in job hiring and grading students, which can carry embedded prejudice depending on what data sets were used to train the AI. If the focus is on the system’s apparent sentience, Bender said, it creates a distance from the AI creators’ direct responsibility for any flaws or biases in the programs.