Will Artificial general intelligence (AGI) be created any time soon?
No.
Here’s why…
What is AGI?
Let’s start with the question of what artificial general intelligence (AGI) really is. Google has this to say:
“Artificial general intelligence (AGI) refers to the hypothetical intelligence of a machine that possesses the ability to understand or learn any intellectual task that a human being can. It is a type of artificial intelligence (AI) that aims to mimic the cognitive abilities of the human brain.”
There are two things to note in this quote.
The first is the concept “artificial general intelligence” itself.
You have probably heard the term AI. It’s usually used, quite generously, for things that half a decade ago were known as machine learning (ML).
So, since “AI” (a term originally reserved for intelligent machines) is now used for ML (for sales purposes), the intelligent machine must be called AGI instead.
I’m sure that when sales logic dictates it, something like “artificial real general intelligence” or “artificial general intelligence, for real this time, we promise” will replace AGI, which by then will be used for some fractionally more capable ML.
The second is that AGI is supposed to mimic the cognitive abilities of the human brain. Mimic the human brain.
A roadmap to AGI
In essence, there are two things we need in order to create an AGI or super-intelligent AI:
- A correctly designed digital brain
- Sufficient input (learning data) to that brain
We can do neither of these today. Let me explain…
What do we know about brains?
I’m in a somewhat unique position, since I’ve studied computer science, AI, and psychology, and I’ve tried, and failed, to write a paper incorporating both computer science and psychology. Mostly because the institution where I tried to do this (a CompSci one) had zero knowledge of or experience with psychology.
When I see software companies creating digital “brains”, I’m always reminded of this experience, and I always see the same total lack of understanding of how the human brain works.
Don’t get me wrong. Nobody knows how the human brain works!
No, really. Nobody!
Here’s what we know about the human brain:
We know how a neuron works. How synapses and axons work. How signals are likely sent and received. But get this: we’ve never seen a neuron in an undisturbed brain in real live action, doing what it usually does.
Not to mention a whole network of them.
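To give a sense of what “knowing how a neuron works” means in practice, here’s a minimal sketch of one classic textbook abstraction, the leaky integrate-and-fire model. Every constant below is an illustrative assumption, not measured physiology:

```python
# Leaky integrate-and-fire neuron: a classic textbook abstraction.
# All constants are illustrative assumptions, not measured physiology.
tau_m = 10.0      # membrane time constant (ms)
v_rest = -70.0    # resting potential (mV)
v_thresh = -54.0  # firing threshold (mV)
v_reset = -80.0   # potential right after a spike (mV)
dt = 0.1          # simulation step (ms)

v = v_rest
spike_times = []
for step in range(int(100 / dt)):           # simulate 100 ms
    i_input = 20.0                          # constant input drive (arbitrary units)
    dv = (-(v - v_rest) + i_input) / tau_m  # leak toward rest, pushed by input
    v += dv * dt
    if v >= v_thresh:                       # threshold crossed: the neuron "fires"...
        spike_times.append(step * dt)
        v = v_reset                         # ...and is reset

print(f"{len(spike_times)} spikes in 100 ms")
```

Notice how little this captures: no synapse chemistry, no plasticity, no network. That’s roughly the level at which the smallest component is “understood”.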
We know different parts of the brain do different things.
This has been measured using something called functional magnetic resonance imaging (fMRI). It measures blood flow in the brain while the person being scanned performs different tasks, making it possible to designate areas of the brain as different centers (language, movement of the body, tactile skin sensations, vision, etc.).
Finally, there have been experiments where the brain of a fruit fly was mapped.
This was done by slicing the brain into ultra-thin slices, photographing them, feeding the images into a computer (ML software), and building a 3D map of the brain.
What does this tell us about the brain? We know where things happen, we have a pretty solid theory of how the smallest component, the brain cell, works, and we have a hopefully rather correct map of the brain… of a dead fruit fly.
We have no clue what happens on the neuron level in a live, undisturbed brain. We cannot, like in computer programming, step through the different signals and stages taking place in a brain when something happens.
With this “vast” knowledge of the human brain, we’re about as likely to understand how it works as we are to understand the internet by knowing how the smallest component in a CPU (a NAND gate) works, and that different parts of the computer whir and clunk when we do different tasks on it. Oh, and we also have the circuit board of a 1980s calculator to assist us in our understanding of “TikTok memes”…
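To make the analogy concrete, here’s about all there is to “knowing how a NAND gate works”: a toy sketch of the gate itself, plus the fact that every other logic gate can be composed from it.

```python
# A NAND gate is completely understood: it's just this.
def nand(a: bool, b: bool) -> bool:
    return not (a and b)

# Every other gate can be built from NAND alone...
def not_(a):    return nand(a, a)
def and_(a, b): return not_(nand(a, b))
def or_(a, b):  return nand(not_(a), not_(b))
def xor_(a, b):
    n = nand(a, b)
    return nand(nand(a, n), nand(b, n))

assert xor_(True, False) and not xor_(True, True)
# ...and yet nothing here explains a CPU, let alone
# the internet running on billions of them.
```

Complete knowledge of the part, near-zero knowledge of the whole. That’s where we are with neurons and brains.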
But AI companies are working on it!
Sure. There is some invention going on when it comes to artificial neural nets and ML/AI research. (Though most significant endeavors seem to center around adding more artificial neurons and making the models slimmer, faster, etc.)
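For contrast with the biological neuron sketched earlier, this is the “artificial neuron” that gets added by the billions: a weighted sum pushed through a squashing function. The numbers below are made up for illustration.

```python
import math

# The entire "artificial neuron": multiply, add, squash.
def artificial_neuron(inputs, weights, bias):
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid activation

# Illustrative values; a large model is billions of these, tuned by training.
print(artificial_neuron([0.5, -1.2, 3.0], [0.8, 0.1, -0.4], bias=0.2))
```

Most of the scaling work amounts to stacking more of these and training them faster, not making any single one more brain-like.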
This research has been going on for almost 70 years now.
Nature has been evolving brains for over 500 million years.
And what we have right now isn’t even able to understand simple objects, or that they are cohesive wholes. To see what I mean, ask an AI to produce sources for its statements. Most of the time those sources will be wrong, or will not support the statement they are supposed to support. This is because the AI has no object-level understanding of articles. And AIs are generally bad at sources anyway.
I don’t doubt that researchers will eventually be able to create a digital brain that performs as well as, or even better than, a human brain, but I think it’s rather silly to imagine this will be done tens of millions of times faster than nature did it.
Sure, give it a century or two… or at least until we actually know how the human brain works… but trying to create a digital brain when we don’t know how the organic version works… that’s something only people who have no clue that we lack this knowledge would ever dream of trying.
Language is not meaning
Another thing worth noting, with respect to AI companies’ efforts to develop AGI by creating larger and larger LLMs (Large Language Models), is the very nature of language. Words and sentences do not contain all the information about the world. They refer to memories of experiences we’ve had as human beings, and they get a lot of their meaning from those experiences.
Take a simple example:
"The ball slipped out of the child's hands and bounced down the staircase."
You get an image of what is happening here, how it’s happening, and what some consequences of this could be. But not because you’ve spent hours reading up on balls, bouncing, staircases, and childhood traumas related to balls going where they shouldn’t.
You understand this sentence because you’ve seen balls bounce down staircases, maybe even had a ball that bounced down a staircase where you lived, maybe even had it break something as it reached the end of the staircase…
An LLM/GPT reading everything it can about balls and staircases could never experience this sentence like you do.
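To illustrate the gap, here’s a toy sketch of what that sentence looks like from the model’s side. The vocabulary and IDs below are invented for the example; real tokenizers differ in detail, but the principle, text in, integer IDs out, is the same.

```python
# Toy illustration: an LLM never sees balls or stairs, only integer IDs.
# This vocabulary and these IDs are invented for the example.
vocab = {"the": 0, "ball": 1, "slipped": 2, "out": 3, "of": 4,
         "child's": 5, "hands": 6, "and": 7, "bounced": 8,
         "down": 9, "staircase": 10, ".": 11}

sentence = "the ball slipped out of the child's hands and bounced down the staircase ."
token_ids = [vocab[word] for word in sentence.split()]
print(token_ids)  # [0, 1, 2, 3, 4, 0, 5, 6, 7, 8, 9, 0, 10, 11]

# The training objective, in essence: given token_ids[:i], predict token_ids[i].
# Nowhere in that objective is a bounce, a stair, or a worried child.
```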
All the text on the Internet: a month of visual data…
Let’s look at this problem from another angle.
Did you know that today’s LLMs are trained on almost all available text on the internet?
There are a few problems with that, like: what do we actually have on the internet, on social media platforms, and so on? Hardly objective data. So an AI trained on it will hardly be objective. Nor will it be able to do much original thinking…
But it’s worse.
Comparing textual data (what LLMs are trained on) with visual data (one type of data a human child would encounter and use in order to understand the world), we can quickly see the difference:
“In a mere 4 years, a child has seen 50 times more data than the biggest LLMs trained on all the text publicly available on the internet.”
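The exact numbers behind that quote vary by source, but a rough back-of-envelope version of the argument looks like this (treat every constant as a ballpark assumption):

```python
# Back-of-envelope comparison; every constant is a ballpark assumption.
text_tokens = 1e13           # rough training-set size of a big LLM (tokens)
bytes_per_token = 2          # rough average
llm_bytes = text_tokens * bytes_per_token             # ~2e13 bytes

waking_hours = 4 * 365 * 11  # a child awake ~11 h/day for 4 years
optic_nerve_rate = 2e7       # ~20 MB/s through both optic nerves (rough estimate)
child_bytes = waking_hours * 3600 * optic_nerve_rate  # ~1.2e15 bytes

print(child_bytes / llm_bytes)  # ~58x, the same order as the quoted "50 times"
```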
We can draw the conclusion that text is very light, data-wise, and that an AI with even a slim hope of gaining a human understanding of the world would have to use not only internet text, but video, audio, and even tactile senses.
In effect, it would have to be a robot, living among humans, in order to become human-like.
This requires some developments in robotics. Give me a robot cat, as dexterous and lightweight as a real cat, and maybe it’ll be able to understand humans. Unless we condemn it to live in our mobile phones.
Robots of today are hardly that agile or light on their feet. They are more like your grandpa on steroids. Though there are some quite interesting developments in “super dogs”.
To be honest, though, they still seem arthritic… only faster…
Will there ever be AGI?
Here’s a thought experiment:
Let’s say you’re a forklift manufacturer, and you’re looking into AI to create an autonomous forklift. You want to give it a room with shelves and have it do all the heavy lifting, from moving actual packages to organizing and inventorying the space.
Pretty much, you want to be able to drop stuff off at the front of the warehouse for it to organize in there, or ask it to bring stuff out that you can pick up at the front, without having to bother with how that happens.
This type of product would cost you research and development. But what if I told you: just add another 10 million (or 10 billion?) dollars, and I can make your forklift critique Dostoyevsky, do childcare, or collect stamps?
Why would you want all that, and why would you pay extra for it? You want a forklift, and all the better if it isn’t conscious or self-aware and doesn’t have a ton of extra interests. It would be cruel and unusual to lock a living being into that warehouse for its whole life!
People tend to see AGI as a humanoid robot (or digital human) that can jump into any kind of situation and solve any kind of problem, but they don’t realize that’s going to cost extra, and my prediction is that it will cost so much extra that very few companies or venture capitalists will be willing to pay for it.
And at the other end of AGI rests the nightmare of dealing with a self-aware conscious being. Can you really treat it however you want? Does it have rights? Will it become a digital slave? What if it wants to unionize?
I wouldn’t want to pay an extra million or billion to buy myself those kinds of problems when all I wanted was a more autonomous forklift.
The problem with AI
The biggest problem with AI isn’t that it’s going to take over, out-smart us, or necessarily steal our jobs.
The biggest problem with AI is that we think it’s super smart and give it tasks it cannot handle.
We’re already doing that:
- Racially biased AIs being used in the court system
- AI algorithms in social media and genocide
- Discriminatory AI used to witch hunt the marginalized, or mass surveil them, with no accountability
- Or, closer to home, how AI ruins your job interview, or ChatGPT religion (what could possibly go wrong there?)
- Using AI to put innocent people in front of police with drawn weapons…
- It’s probably worth mentioning again: AI runs what you see on social media
So while AI can be a threat or a promise, I think human-level AI will not be around for a long time. AI (or ML), though, is here, it already impacts our lives, and if we don’t understand what it is and what it isn’t, we risk giving it more power than it should have.
I sincerely hope decision makers do understand that the people peddling “almost humanlike intelligence” have no clue how human intelligence actually works.
Nobody does.
Header image: Pixabay, CC0.