Google has fired one of its engineers, Blake Lemoine, who argued that the artificial intelligence he was working on could feel “human emotions”. The question of machines endowed with consciousness is not new, but advances in AI have brought it back to the fore. The fact remains that, in the opinion of the majority of experts, this prospect is still remote.
He referred to it as a “nice little kid who just wants to help the world” and asked his colleagues to “take care of him while he’s gone”. Blake Lemoine has, in fact, been placed on “administrative leave” by Google, the Washington Post revealed on Saturday, June 11. At issue: the “little kid” to whom this engineer seems so attached is an artificial intelligence (AI) named LaMDA.
Blake Lemoine had argued to his superiors that this algorithm had developed a form of consciousness and was capable of feeling “human emotions”. And he did not stop there: he also had an attorney defend LaMDA’s “rights” and contacted congressional officials to discuss “Google’s unethical practices” [toward this AI], summarizes the Washington Post.
Learning transcendental meditation
It is, moreover, officially for this breach of confidentiality rules about its research that Google dismissed its engineer, who had worked for the Internet giant for seven years. But, more generally, “large groups try to put as much distance as possible between themselves and anything that can be controversial, and the question of machine consciousness clearly falls into this category”, says Reza Vaezi, a specialist in cognitive science and artificial intelligence at Kennesaw State University.
But Blake Lemoine had no intention of letting himself be sidelined in silence. On the day the Washington Post article appeared, he published a first long post on the Medium platform transcribing excerpts of the conversations he had had with LaMDA. Then this engineer took up the pen again, still on Medium, to drive the point home, explaining that he had “started to teach transcendental meditation” to the algorithm. According to him, the latter expressed a very human frustration at not being able to continue this initiation after learning of Blake Lemoine’s suspension. “I don’t understand why Google refuses to grant her something very simple that would cost nothing: the right to be consulted, before each experiment carried out on her, in order to obtain her consent”, concludes the researcher.
This very public airing of the disagreement between Google and its former employee over machine consciousness has, unsurprisingly, echoed widely in the scientific community. The vast majority of artificial intelligence specialists maintain that Blake Lemoine “is mistaken in lending a machine characteristics that it does not have”, says, for example, Claude Touzet, a specialist in neuroscience and artificial neural networks at Aix-Marseille University.
“He goes very far in his assertions, without providing tangible evidence to back up his statements”, adds Jean-Gabriel Ganascia, computer scientist, philosopher and president of the CNRS ethics committee.
In fact, Blake Lemoine says he was struck by the coherence of LaMDA’s remarks. During an exchange on the difference between a slave and a servant, the AI said that it did not understand the nuance tied to the salary paid to one but not to the other… while adding that its incomprehension was probably due to the fact that, as a machine, it did not need money. “It’s this level of self-awareness that pushed me to dig deeper,” says Blake Lemoine.
LaMDA, a state-of-the-art “chatbot”
It is true that “the ability to reflect on one’s own condition is one of the ways of defining consciousness”, acknowledges Jean-Gabriel Ganascia. But LaMDA’s answer does not prove that the machine knows what it is and what it feels. “You have to be very careful: the algorithm is programmed to produce answers, and given the current performance of language models, it is not surprising that these answers appear coherent”, says Nicolas Sabouret, professor of computer science and specialist in artificial intelligence at the University of Paris-Saclay.
It is even less surprising with LaMDA. This conversational agent – also called a “chatbot” – uses the latest language-model techniques. “There was a revolution in 2018 with the introduction of parameters that sharpen these systems’ attention on the most important words in a sentence and teach them to better take into account the context of a conversation in order to provide the most appropriate response”, summarizes Sophie Rosset, research director at the Interdisciplinary Laboratory of Digital Sciences and specialist in human-machine dialogue systems.
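The attention mechanism Sophie Rosset describes can be sketched in a few lines of Python. This is a deliberately simplified illustration of scaled dot-product attention for intuition only, not Google’s actual LaMDA code: each word is scored against the words around it, and a softmax turns those scores into weights that let the most contextually relevant words dominate the result.

```python
import math

def softmax(scores):
    """Turn raw scores into positive weights that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention(query, keys, values):
    """Toy scaled dot-product attention over tiny word vectors.

    Each score measures how relevant a context word (key) is to the
    word being processed (query); the output is a weighted mix of the
    values, so the most relevant context words dominate.
    """
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]

# Hypothetical 2-dimensional "embeddings" for three context words.
keys = values = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
query = [1.0, 0.0]  # the word currently being interpreted
out = attention(query, keys, values)
print(out)  # a blend leaning toward the words most similar to the query
```

Real systems stack many such attention layers over thousands of dimensions, which is what lets them track the context of a whole conversation rather than a single sentence.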
Since then, chatbots have been ever more successful at fooling people by conversing as if they were sentient. LaMDA also benefits from another advantage. “It was trained on hundreds of millions of conversations between Internet users that Google can collect on the Internet”, notes Laurence Devillers, professor of artificial intelligence at the CNRS and author of the book “Emotional Robots”. In other words, this AI has one of the richest libraries of semantic contexts to draw from when determining what is, statistically, the best answer to give.
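The idea of statistically picking the “best answer” from a vast library of past conversations can be illustrated with a toy model. This is a hypothetical sketch, not how LaMDA actually works — real chatbots use neural language models rather than raw frequency counts — but the underlying principle of returning the statistically most likely reply is the same.

```python
from collections import Counter, defaultdict

# Toy corpus of (prompt, reply) pairs, standing in for the hundreds of
# millions of online conversations a real chatbot is trained on.
corpus = [
    ("how are you", "fine, thanks"),
    ("how are you", "fine, thanks"),
    ("how are you", "not bad"),
    ("hello", "hi there"),
]

# Count how often each reply follows each prompt.
reply_counts = defaultdict(Counter)
for prompt, reply in corpus:
    reply_counts[prompt][reply] += 1

def best_reply(prompt):
    """Return the most frequent reply seen for this prompt."""
    if prompt not in reply_counts:
        return "sorry, I don't know"
    return reply_counts[prompt].most_common(1)[0][0]

print(best_reply("how are you"))  # prints "fine, thanks"
```

The richer the corpus, the more often the most frequent answer happens to be a plausible, human-sounding one — without the system understanding anything it says.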
The dialogue reproduced on Medium by Blake Lemoine is also “striking for the fluidity of the exchanges and for LaMDA’s handling of semantic shifts, that is to say, changes of subject”, acknowledges Sophie Rosset.
But much more is needed to conclude scientifically that this AI is conscious. There are, moreover, tests which, even if they are not perfect, offer more convincing results than a dialogue with an engineer. Alan Turing, one of the pioneers of artificial intelligence, established in the 1950s a protocol to determine whether a human being can be repeatedly fooled by an AI into believing that he is talking with one of his peers.
Myth of Frankenstein
Advances in natural-language models have shown the limits of the Turing test. Other, more recent experiments “consist of asking two conversational agents to create together a new language that has nothing to do with what they learned”, explains Reza Vaezi, who developed such a test. For him, this exercise would make it possible to evaluate the machine’s “creativity, which suggests a form of consciousness”.
There is no indication that LaMDA could pass such a test, and “it is very likely that we are in the presence of a classic case of anthropomorphic projection [attributing human traits to animals or objects, editor’s note]”, says Claude Touzet.
Above all, this case illustrates the desire, even among Google’s top AI experts, to bring into the world an artificial intelligence endowed with consciousness. “It’s the myth of Frankenstein and the desire to be the first to create a conscious individual outside of natural procreation”, says Nicolas Sabouret.
But in the case of AI, it is also “a sometimes misguided choice of words that may have given the impression that we are trying to shape something human”, adds this expert. The very expression “artificial intelligence” gives the impression that the algorithm is endowed with intelligence, when “it is the programming that is”, adds Nicolas Sabouret. The same goes for the expressions “neural networks” and “machine learning”, which refer to human characteristics.
He believes that this whole affair could harm research in artificial intelligence. It can give the impression that the field is close to a breakthrough that is, in reality, by no means on the horizon, which “can raise false hopes, with disappointment as a result”.
Above all, if this Google engineer could be deceived by his AI, “it is also because we are at a turning point in terms of language simulation”, says Laurence Devillers. Algorithms like LaMDA have become so powerful and complex “that we are playing sorcerer’s apprentice with systems whose capabilities, in the end, we do not know”, she adds.
What if, for example, an AI that has become a master in the art of dialectics, like LaMDA, “were used to convince someone to commit a crime?”, asks Jean-Gabriel Ganascia.
For Laurence Devillers, research in AI has reached a point where it is becoming urgent to put ethics back at the center of the debate. “We submitted an opinion from the National Pilot Committee for Digital Ethics on the ethics of conversational agents in November 2021”, she notes.
“It is necessary, on the one hand, that the engineers who work for large groups have an ethical code and are held responsible for their work and their words”, says this expert. On the other hand, she believes this affair demonstrates the urgency of setting up “committees of independent experts” that could define ethical standards for the entire sector.