Scientific American: generalized artificial intelligence not for tomorrow…

A neural processor from Huawei.

The race for artificial intelligence (AI) continues to grab headlines and spark fantasies. While some proclaim its imminent generalization, Scientific American, through the pen of one of its specialists, sets the record straight…

According to this article by Gary Marcus, a careful examination of the situation reveals that the newest systems, including the most talked-about ones, still face the same old problems. Yes, as a reminder, AI is neither intelligent nor artificial: it is a jumble of good old sluggish, energy-hungry algorithms programmed by humans…

A matter of vocabulary…

Artificial intelligence remains first and foremost a question of vocabulary (or language). Indeed, the article in the prestigious American magazine is about generalized artificial intelligence, a concept that goes beyond the much-publicized "simple" AI. So much so that ordinary mortals are a little quick to imagine that it is making immense progress every day…

Indeed, we read here that OpenAI's DALL-E 2 can create spectacular images from any text, and there that GPT-3 can talk about just about anything. Finally, Gato, the system published in May by DeepMind, a division of Alphabet, has apparently succeeded at hundreds of tasks given to it.

A deceptive euphoria

Gary Marcus, however, warns the general public. “Machines may one day be as intelligent as humans, and maybe even smarter, but the game is far from over. There is still an immense amount of work to be done to create machines that can understand and reason about the world around them,” he writes.

According to this author, “We are still light years (sic) away from a general human-level AI capable of understanding the real meaning of articles and videos, or dealing with unexpected obstacles and interruptions. Research remains stuck on the same challenges that academic scientists have been pointing to for years: making AI reliable and able to deal with unusual circumstances”…

The limits of Tesla…

All of this goes in the same direction as this note already published here. And even if, for example, DeepL's algorithms work wonders, the "intelligent" assistants from Google, Amazon, Apple, Samsung and Huawei remain disappointing, to put it politely. We even have the distinct impression that they are treading water!

Indeed, these sophisticated programs still misinterpret far too much. Scientific American also cites the case of a Tesla on Autopilot that recently struggled to interpret a worker carrying a stop sign in the middle of the road, as shown in this video. The system had indeed been "trained" to recognize humans on their own and stop signs on their own, but not the unusual combination of the two. An instructive example. That is, when Teslas aren't braking for no reason on the highway…

And long live natural intelligence!

xavier studer
