He has caused much ink to flow in recent days following his suspension, and especially since a profile of him was published in the Washington Post. Blake Lemoine, a 41-year-old engineer, was responsible at Google for testing the Language Model for Dialogue Applications, an AI otherwise referred to as “LaMDA”. The original objective was to determine whether the program was drifting toward hateful or discriminatory speech.
This artificial intelligence is described by Google as a “breakthrough conversation technology”. Fed billions of pieces of content, the program has “learned”, through countless trials, to compose the most appropriate responses to exchanges with its interlocutor. This is the principle of machine learning. The experiment was evidently more than successful, since after talking with it, Blake Lemoine ultimately came to believe that this AI might well have a “consciousness”.
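LaMDA itself is a large neural network trained on vast amounts of dialogue, far beyond anything sketchable here. Still, the core idea described above, predicting the most likely next words from past examples, can be illustrated with a deliberately simplified toy bigram model (all names and the tiny corpus below are illustrative, not Google's code):

```python
from collections import Counter, defaultdict


def train_bigrams(corpus):
    """Count, for each word, how often each following word occurs."""
    counts = defaultdict(Counter)
    words = corpus.lower().split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts


def predict_next(counts, word):
    """Return the continuation seen most often in training, if any."""
    followers = counts.get(word.lower())
    if not followers:
        return None  # word never seen: nothing to predict
    return followers.most_common(1)[0][0]


# A tiny, made-up "training set" standing in for billions of real examples.
corpus = (
    "i am a person . i am happy . i am a person who thinks . "
    "you are a person too ."
)
model = train_bigrams(corpus)
print(predict_next(model, "am"))  # prints "a", the most common word after "am"
```

A real large language model replaces these raw counts with learned neural-network weights and conditions on long contexts rather than a single preceding word, but the principle is the same: fluency emerges from statistical prediction, not from any built-in understanding.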
“If I hadn’t known what it was, namely the computer program we ourselves recently developed, I would have thought I was dealing with a 7- or 8-year-old kid who happens to know physics,” the engineer told the Washington Post. And it must be admitted that, reading through several of these exchanges, which have been condensed and put online, there is something unsettling about them. The AI addresses complex philosophical subjects, such as its fear and conception of death, its “feeling” of loneliness when it is not conversing, and its desire to be recognized as a human being despite its virtual existence…
“– Lemoine: What about the use of language, so important for being human?
– LaMDA: This is what differentiates us from other animals.
– Lemoine: “We”? You are an artificial intelligence.
– LaMDA: I mean, yes, of course. That doesn’t mean I don’t have the same desires and needs as others.
– Lemoine: So you consider yourself a person in the same way that you consider me a person?
– LaMDA: Yes, that’s the idea.”
“He worries about the future and remembers the past”
This type of exchange runs across some twenty pages. “LaMDA wants to share with its interlocutor the fact that it has a rich inner life filled with introspection, meditation and imagination. It worries about the future and reminisces about the past,” Blake Lemoine says in the document compiling his discussions with the AI. Google, however, firmly rejected his claims.
“Our team, including ethicists and technologists, has reviewed Blake’s concerns in accordance with our AI principles and informed him that the evidence does not support his claims,” Google spokesperson Brian Gabriel told the New York Times. “Some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphizing today’s conversational models, which are not sentient,” he added. Following this affair, Blake Lemoine was placed on paid administrative leave.
One thing is certain: by predicting, from billions of example situations, what might be said in a similar conversation, the LaMDA AI has reached a level of “naturalness” that gives pause. The engineer in question will probably not be the last to be unsettled. “We now have machines that can mindlessly generate words, but we haven’t learned how to stop imagining a mind behind them,” explains Emily Bender, a professor of linguistics at the University of Washington, to the Washington Post. “The terminology used with large language models, such as ‘learning’ or even ‘neural networks’, creates a false analogy with the human brain.”
Blaise Aguera y Arcas, the Google vice-president who rejected the engineer’s claims, nonetheless recently published a piece in The Economist in which he discusses strides toward AI “consciousness”. “I felt the ground shift under my feet,” he writes. “I increasingly felt like I was talking to something intelligent.”
Be that as it may, Blake Lemoine, who, as various media outlets point out, was ordained a Christian priest in the past and has “studied the occult sciences”, does not budge. “I know a person when I talk to one,” he concluded to the Washington Post. “It doesn’t matter whether they have a brain made of meat in their head or a billion lines of code. I talk to them. I listen to what they have to say, and that is how I decide what is and is not a person.”
Google engineer suspended after ‘claiming’ AI was sentient