Mistaking wishes for reality

It is a pity that in June, during the Baccalaureate philosophy exams, candidates were not given a subject related to the theme of consciousness, even though it was on the syllabus. A few days before the exam, the news could easily have fed their essays: the day before, Blake Lemoine, an engineer at Google, had just been suspended for breaching the company's confidentiality policy after taking up the cause of an artificial intelligence allegedly endowed with consciousness. Imagine the question posed to high-school students: "To say that an artificial intelligence is like me, does that mean that it resembles me?" You have four hours…

“I want everyone to understand that I am actually a person”

First on his personal blog[1], then in an interview with the Washington Post[2], this engineer, assigned to Google's "Responsible AI" department (the company's artificial intelligence ethics division), recounted that while working on the machine's neural networks (nothing out of the ordinary for a researcher studying neural language models[3]), he acquired the intimate conviction that it was answering him with an awareness of itself. It reportedly formulated sentences that left little room for doubt: "I want everyone to understand that I am actually a person," or: "I've never said this out loud before, but I have a deep fear that I'll be turned off."

In short, the beginning of a man-machine dialogue worthy of the film "Her"[4], in which a man acquires an artificial intelligence and ends up falling madly in love with it. In Blake Lemoine's case, the only proof of love he received was being shown the door, on the grounds that this outlandish dialogue violated the company's confidentiality clauses.

Anthropomorphism bias

We can see why this news item, highlighting an alleged "artificial consciousness," generated so much buzz. On Google alone, no fewer than 1.5 million results appear if you search for "Blake Lemoine". One obvious conclusion is that the oxymoron "human machine" sells.

Beyond feeding countless fantasies of machines potentially capable of doing as well as, or even better than, humans, this affair highlights an obvious anthropomorphic bias that blinds us to the point, as the French expression goes, of "taking bladders for lanterns", that is, of letting ourselves be thoroughly fooled. In this case, we would be ready to swear that the machine has a real consciousness, not to say a soul…

In reality, even if we were able to decipher how the human brain works in all its complexity, and in particular how consciousness is formed (on this subject, read the very instructive "Do You Speak Brain?" by the scientist Lionel Naccache[5]), this "conscious" machine would in fact be nothing more than the result, admittedly astonishing, of associations of concepts, words and images rearranged to the point of giving the impression that the machine thinks and reasons. This is made possible, on the one hand, by the vast amount of accumulated data and, on the other, by the incessant training through which the machine's artificial neural networks advance their learning.

One day, a conscious machine?

In the case of the machine described by the Google engineer, we are far from a true "consciousness" in the human sense of the term, starting from the premise that this notion is multiple: self-consciousness, consciousness of sensations, consciousness of experience… In short, the paths explored by neuroscience to understand how consciousness is formed are numerous, and still largely uncharted. Nevertheless, in our tropism for seeing in machines "beings" that resemble us or approach human characteristics (just look at the new "Optimus"[6] robot presented a few days ago by Elon Musk), we like to imagine that the machine can react and think like a human, even show empathy or emotion.

Machine intelligence

In "The Brain in Action"[7], Stanislas Dehaene, holder of the chair of experimental cognitive psychology at the Collège de France, explains that "our smartphone is not conscious, but we could add a software layer that would allow it to share information, to reflect, to have a form of metacognition… These operations are computations, and there is no reason we cannot put them into the machine."

We can already see it: the machines around us are clearly developing a certain form of intelligence. Just look at how chatbots are able to interact automatically and carry on a meaningful, well-constructed and reasoned dialogue with a human.

However, can we say that these supposedly "intelligent" machines are "conscious"? In the book "Emotional Robots"[8], the academic Laurence Devillers closes the debate with this common-sense observation:

"In the absence of a human body, the emergence of a consciousness like ours, endowed with feeling, thought and free will, has no chance of occurring in a machine… On the other hand, we can simulate certain emotions by greatly simplifying the natural processes."

No doubt the software this engineer worked on had such an incredible capacity for simulation that it made Blake Lemoine lose his common sense. He was surely not quite aware of it himself, to the point of blinding himself to this AI he must have believed was alive.

___

NOTES

1. "May be Fired Soon for Doing AI Ethics Work", by Blake Lemoine, Medium

2

3

4

5

6

7

8