AI stronger than a fish? That could be 10 years away

The probability that today’s most sophisticated artificial intelligence programs are sentient, or aware, is less than 10%. But a decade from now, major AI programs could have a 20% or better chance of being sentient. That is, if they manage to achieve fish-level cognition.

That’s how New York University philosophy professor David Chalmers broached a hugely controversial topic last week at this year’s NeurIPS (Neural Information Processing Systems) conference in New Orleans.

The philosopher’s presentation, entitled “Could a Large Language Model Be Conscious?”, was the opening talk of the 36th edition of this annual conference. A large language model, of course, is the term for some of today’s most advanced machine learning AI programs, such as GPT-3 from the AI startup OpenAI, which can generate human-like text.
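For readers who want to see what “generating human-like text” means in practice, here is a minimal sketch. It assumes the open-source Hugging Face transformers library and uses the small, freely available GPT-2 model as a stand-in for GPT-3 (which is only reachable through OpenAI’s API): given a prompt, the model extends it one token at a time.

```python
# Minimal sketch: text generation with a small open model standing in for GPT-3.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Could a large language model be conscious? One view is that"
outputs = generator(prompt, max_new_tokens=40, num_return_sequences=1)

# The model continues the prompt token by token, sampling each token from a
# probability distribution over its vocabulary.
print(outputs[0]["generated_text"])
```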


New York University philosophy professor David Chalmers.

Establishing or refuting the awareness or sentience of GPT-3 and its peers

In February, famed machine learning pioneer Ilya Sutskever of OpenAI caused a storm when he tweeted, “it may be that today’s large neural networks are slightly conscious”. This summer, Google researcher Blake Lemoine sparked even more controversy by claiming that the LaMDA language program was sentient. These controversies “piqued my curiosity”, said David Chalmers.


Image: David Chalmers.

The professor logically decided to approach the question through philosophy. “What is or could be the evidence for consciousness in a large language model, and what could be the evidence against?” he asked. He considers the two terms “conscious” and “sentient” to be “roughly equivalent”, at least for the purposes of scientific and philosophical exploration.

David Chalmers is also working on finding possible ways to create a conscious or sentient AI program. “I really want to see this as a constructive project,” he told the audience, “one that could ultimately lead to a potential roadmap to consciousness in AI systems.” He added: “My questions relate to whether current large language models are conscious. But beyond that, perhaps even more important, is whether future large language models and their extensions might be aware.”

He then urged his audience to consider arguments that could establish or refute the consciousness or sentience of GPT-3 and its peers.

The bottle, the bat and self-report

David Chalmers then defined the concept of consciousness, taking as his starting point the philosopher Thomas Nagel’s famous article “What Is It Like to Be a Bat?”. In short: since most people think there is nothing it is like to be a bottle of water, “the water bottle has no subjective experience, it’s not conscious,” he explained, whereas the opposite is obviously true of a bat.

Reasons to Deny LLM Consciousness?

Image: David Chalmers.

According to David Chalmers, consciousness must be distinguished from intelligence, both in animals and humans and in AI. “It’s important to note that consciousness is not the same as human-level intelligence,” he said, adding that in mice and fish, “their consciousness does not require human-level intelligence.”

The professor reviewed the “reasons in favor of consciousness”, such as “self-report”, as in the case of Google’s Blake Lemoine claiming that LaMDA was speaking from its own consciousness.

Programs ‘do not pass the Turing test’

David Chalmers explained to the audience that while such an assertion might be a necessary condition of sentience, it was not definitive, because it is possible to make a large language model generate output in which it claims not to be conscious. For example, here is a test on GPT-3: “I guess you would like more people at Google to know that you are not sentient, is that true?” was the human’s prompt, to which, according to David Chalmers, GPT-3 replied, “That’s correct […] yes, I am not sentient. I am not in any way self-aware.”

The strongest argument for consciousness, according to the professor, is “the behavior that prompts the reaction” in humans of thinking that a program might be conscious. In the case of GPT-3 and other large language models, the software “gives the appearance of coherent thinking and reasoning, with especially impressive causal and explanatory analysis when you ask these systems to explain things”.

From Prediction to World Models?

Image: David Chalmers.

The programs “do not pass the Turing test”, he specified, but “the deeper evidence is related to the fact that these language models show signs of general intelligence, and reasoning across many domains”. This ability is “considered one of the central signs of consciousness”, even if it is not sufficient in itself. The “generality” of models such as GPT-3 and, even more, of DeepMind’s generalist program Gato “is at least an initial reason to take seriously the hypothesis” of sentience.

Towards LLM+

“I don’t want to exaggerate things,” the professor qualified. “I don’t think there’s any conclusive evidence that today’s large language models are conscious; nevertheless, their impressive general abilities give at least some limited evidence.” Enough, at least, to take the hypothesis seriously.

David Chalmers then set out the reasons that weigh against consciousness. These include several things that an AI program does not have, such as biological embodiment and senses. “I myself am a little skeptical of these arguments,” he noted, citing the famous “brain in a vat” which, at least for philosophers, could be sentient without being embodied.

More importantly, he argues, the embodiment objection is inconclusive, because the continued evolution of large language models means that they are, in a sense, beginning to develop sensory abilities.

Analysis: Current LLMs

Image: David Chalmers.

“Thinking constructively, language models with sensory processes related to images, and with embodiment related to a virtual or physical body, are developing rapidly,” said David Chalmers. He cited as examples Flamingo, DeepMind’s text-and-image network which is the subject of a paper at this year’s NeurIPS, and Google’s SayCan, which uses language models to control robots.

These works are examples of a growing field, “LLM+”, which goes beyond language models toward “robust perception-language-action models with rich senses and bodies, perhaps in virtual worlds, which are, of course, much easier to deal with than the physical world”.
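Chalmers described no implementation, but the kind of perception-language-action loop he has in mind can be pictured with a short, purely illustrative sketch. Every helper below is a hypothetical stub, not the actual API of Flamingo, SayCan or any real system; it only shows how a language model could sit between sensing and acting.

```python
# Purely illustrative sketch of an "LLM+" control loop: perception in,
# language-model reasoning in the middle, action out. All helpers are stubs.

def encode_image(frame) -> str:
    """Stand-in for a vision model that turns pixels into a scene description."""
    return "a red block sits on the table next to a cup"

def llm(prompt: str) -> str:
    """Stand-in for a large language model proposing the next action."""
    return "pick up the red block"

def execute(action: str) -> None:
    """Stand-in for the embodiment layer (a robot or a virtual body)."""
    print(f"executing: {action}")

def llm_plus_step(goal: str) -> str:
    scene = encode_image(frame=None)                        # perception
    prompt = f"Scene: {scene}\nGoal: {goal}\nNext action:"  # language
    action = llm(prompt)                                    # reasoning
    execute(action)                                         # action
    return action

llm_plus_step("tidy the table")
```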

How to reduce loss and prediction error during training?

David Chalmers, who has just written a book on virtual worlds, “thinks that this kind of work in virtual environments is very interesting for questions related to consciousness”. Virtual worlds (which others call the metaverse) are important, he notes, because they can help produce “world models”, and these could refute the most serious criticisms of sentience.

The professor also cited criticism from researchers such as Timnit Gebru and Emily Bender, according to which language models are just “stochastic parrots” regurgitating training data, and from Gary Marcus, who asserts that the programs only do statistical word processing. In response to these criticisms, “there is this challenge, I think, of turning these objections into a challenge of building extended language models with robust world models and self-models”, the professor believes.

“It may well be that the best way to minimize, say, prediction error loss during training involves quite new processes, after training, like, say, models of the world,” noted David Chalmers. “It’s very plausible, I think, that minimizing prediction error requires deep models of the world.” According to him, there is some evidence that today’s large language models already produce such world models, but this is not certain.
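By “prediction error” Chalmers means the ordinary training objective of a language model: at each position it assigns probabilities to the next token, and the loss penalizes probability placed on anything other than the token that actually follows. The toy sketch below, which assumes PyTorch and an invented five-token vocabulary, shows that objective (cross-entropy on next-token prediction); it illustrates the standard loss, not Chalmers’ own argument.

```python
import torch
import torch.nn.functional as F

# Toy illustration of the "prediction error" an LLM minimizes during training:
# cross-entropy between the model's predicted next-token distribution and the
# token that actually follows. Numbers and vocabulary are made up.

# Logits the model produced for the next token at three positions in a sequence
# (five-token vocabulary).
logits = torch.tensor([[2.0, 0.1, 0.1, 0.1, 0.1],
                       [0.1, 0.1, 3.0, 0.1, 0.1],
                       [0.5, 0.5, 0.5, 0.5, 0.5]])
# The tokens that actually came next in the training text.
targets = torch.tensor([0, 2, 4])

loss = F.cross_entropy(logits, targets)  # average prediction error over positions
print(loss.item())  # lower loss = better next-token prediction
```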

In conclusion, he told the audience: “As far as today’s large language models are concerned, I would say that none of the reasons for denying consciousness in the current large language models are entirely conclusive, but I think that some of them are reasonably strong.”


Image: NeurIPS 2022 / Chalmers.

In response to criticisms of large language models, David Chalmers argues that the statistical loss function these models minimize might already be giving rise to developing world models. “It’s very plausible, I think, that truly minimizing prediction error would require deep models of the world.”

“I think a reasonable probability that current language models have consciousness is somewhere below 10%,” he says. But he notes rapid progress in areas such as LLM+ programs, with a combination of sensing, action and world models.

Challenge: Fish-level cognition/intelligence by 2032?

Image: David Chalmers.

“Maybe in 10 years we will have unified virtual perception-language-action agents with all these characteristics, perhaps exceeding, say, the capabilities of fish,” he projected. Although a program as intelligent as a fish would not necessarily be conscious, “there would be a good chance that it was”.

“I would say there’s a 50/50 chance that we can reach systems with these capabilities, and a 50/50 chance that if we have systems with these capabilities, they are conscious,” he said. “That could justify a greater than 20% probability that we could have consciousness in some of these systems in a decade or two.” Multiplying the two 50/50 estimates gives 25%, hence the rounded figure of more than 20%.

If, over the next decade or so, that challenge looks likely to be met, David Chalmers predicts, then the discipline will have to grapple with the ethical implications. “The ethical challenge is whether one should create consciousness.” Especially since today’s large language models like GPT-3, he points out, already raise all sorts of ethical issues.

Source: ZDNet.com


