Artificial Intelligence (AI) is a very common term today, but what AI actually is and what most people think it is can be very different. The AI you know is “weak” AI, but the AI many fear is “strong” AI.
What is artificial intelligence in reality?
It’s easy to throw around a term like ‘AI’, but doing so doesn’t make clear what we’re really talking about. In general, “artificial intelligence” refers to an entire field of computing. The goal of AI is to allow computers to replicate what natural intelligence can accomplish. This includes human intelligence, the intelligence of other animals, and the intelligence of non-animal life, such as plants, single-celled organisms, and anything else with some form of intelligence.
This subject raises a deeper question: what is “intelligence” in the first place? In fact, even the science of intelligence has yet to agree on a universal definition of what intelligence is or is not.
Basically, it is the ability to learn from experience, make decisions, and achieve goals. Intelligence makes it possible to adapt to new situations, which distinguishes it from preprogramming or instinct. The more complex the problems to be solved, the more important intelligence becomes.
We still have a lot to learn about human intelligence, even though we have many methods of measuring it. We don’t even know how human intelligence works under the hood. Some theories, such as Gardner’s theory of multiple intelligences, have been thoroughly debunked, while there is plenty of evidence to support the existence of a general intelligence factor in humans (called the “g factor”).
In other words, the details of intelligence, both natural and artificial, continue to evolve. Although we seem to intuitively recognize intelligence when we see it, it turns out that drawing a precise circle around the idea of intelligence is difficult!
The era of weak AI has arrived
The AI we have today is commonly referred to as “weak” or “narrow” AI. This means that a specific AI system is very good at performing a single task or a narrow set of related tasks. The first computer to beat a human at chess, Deep Blue, was totally useless for everything else. Fast forward to the first computer to beat a human at Go, AlphaGo, which is far more sophisticated, but still only good at one thing.
All AIs you encounter, use, or see today are weak. Sometimes different narrow AI systems are combined to form a more complex system, but the result is still effectively narrow AI. While these systems, especially those that focus on machine learning, can produce unpredictable results, they are nothing like human intelligence.
Strong AI does not exist
An AI equivalent or superior to human intelligence does not exist outside of fiction. If you think of AIs from movies like HAL 9000, the T-800, Data from Star Trek or Robbie the Robot, they are seemingly sentient intelligences. They can learn to do anything, function in any situation, and do anything a human can do, often better. This is “strong” AI or AGI (artificial general intelligence), that is, an artificial entity that is at least equal to us and most likely surpasses us.
As far as we know, there are no real examples of this “strong” AI. Unless it’s hiding in a secret lab somewhere, of course. The thing is, we wouldn’t even know where to start to build one. We have no idea what gives rise to human consciousness, which would presumably be an essential emergent characteristic of an AGI. This is known as the hard problem of consciousness.
Is strong AI possible?
Nobody knows how to make an AGI, and nobody knows if it’s even possible to create one. That’s all there is to it. However, we have proof that strong general intelligence can exist: ourselves. Assuming that human consciousness and intelligence are the result of material processes subject to the laws of physics, there is no principled reason why an AGI cannot be created.
The real question is whether we are smart enough to figure out how it can be done. Humans may never progress far enough to give rise to AGIs, and it is impossible to set a timeline for this technology the way one can predict, say, that 16K displays will be available in a few years.
Then again, our narrow AI technologies, along with other branches of science such as genetic engineering, exotic computing with quantum mechanics or DNA, and advanced materials science, could help us bridge the gap. This is all pure speculation until it suddenly happens by accident or we have some sort of roadmap.
Then there is the question of whether we should strive to create AGIs at all. Some very smart people, like the late Professor Stephen Hawking and Elon Musk, believe that AGIs will have apocalyptic consequences.
Considering how impractical AGIs seem, these worries might be a little overstated, but be nice to your Roomba, just to be safe.