Category Archives: Talk

Talk about politics, society, science, computers, internet, psychology, and philosophy.

Artificial general intelligence (AGI)

Will Artificial general intelligence (AGI) be created any time soon?

No.

Here’s why…

What is AGI?

Let’s start with the question of what Artificial general intelligence (AGI) really is. Google has this to say:

“Artificial general intelligence (AGI) refers to the hypothetical intelligence of a machine that possesses the ability to understand or learn any intellectual task that a human being can. It is a type of artificial intelligence (AI) that aims to mimic the cognitive abilities of the human brain.”

There are two things to note in this quote:

First: the concept “Artificial general intelligence”.

You have probably heard the term AI. It’s usually used, quite generously, for things that, half a decade ago, were known as Machine Learning (ML).

So, since AI (which originally referred to intelligent machines) is now used for the ML equivalent (for sales purposes), the intelligent machine must therefore be called AGI instead.

I’m sure that, when the sales logic dictates it, something like “Artificial real general intelligence” or “Artificial general intelligence, for real this time, we promise” will replace AGI, which by then will be used for some fractionally more capable ML.

Second: AGI is supposed to mimic the cognitive abilities of the human brain. Mimic the human brain.

A roadmap to AGI

In essence, there are two things we need in order to create an AGI or super-intelligent AI:

  1. A correctly designed digital brain
  2. Sufficient input (learning data) to that brain

We’re unable to do both of these today. Let me explain…

What do we know about brains?

I’m in a unique position since I’ve studied computer science, AI, and psychology, and I’ve tried, and failed, to write a paper incorporating computer science and psychology. Mostly because the institution where I tried to do this (a CompSci one) had zero knowledge of or experience with psychology.

When I see software companies creating digital “brains” I am always reminded of this experience, and I always see the same total lack of understanding of how the human brain works.

Don’t get me wrong. Nobody knows how the human brain works!

No, really. Nobody!

Here’s what we know about the human brain:

We know how a neuron works. How synapses and axons work. How signals are likely being sent and received, but get this: We’ve never seen a neuron in an undisturbed brain in real live action doing what it usually does.

Not to mention a whole network of them.

We know that different parts of the brain do different things.

This has been measured using something called functional magnetic resonance imaging (fMRI). This measures blood flow in the brain while the person being scanned performs different tasks, thus making it possible to designate areas of the brain as different centers (language, movement of the body, tactile skin sensations, vision, etc.).

Finally, there have been experiments where the brain of a fruit fly was mapped.

This was done by slicing the brain in ultra-thin slices, photographing them, feeding them into a computer (ML software) and building a 3D map of the brain.

What does this tell us about the brain? We know where things happen; we have a pretty solid theory of how the smallest component, the brain cell, works; and we have a hopefully rather correct map of the brain… of a dead fruit fly.

We have no clue what happens on the neuron level in a live, undisturbed brain. We cannot, like in computer programming, step through the different signals and stages taking place in a brain when something happens.

With this “vast” knowledge of the human brain, we’re about as likely to understand how it works as we are to understand the internet by knowing how the smallest component in a CPU (a NAND gate) works, and that different parts of the computer whir and clunk when we do different tasks on it. Oh, and we also have the circuit board of a 1980s calculator to assist us in our understanding of “TikTok memes”…
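To make the CPU analogy concrete: everything a computer does really is built from gates like NAND, and knowing NAND tells you almost nothing about what runs on top of it. A purely illustrative sketch in Python:

```python
def nand(a: int, b: int) -> int:
    """NAND, the universal gate: 0 only when both inputs are 1."""
    return 0 if (a and b) else 1

# Every other Boolean gate can be built from NAND alone:
def not_(a: int) -> int:
    return nand(a, a)

def and_(a: int, b: int) -> int:
    return not_(nand(a, b))

def or_(a: int, b: int) -> int:
    return nand(not_(a), not_(b))

# Knowing this tells you how the smallest component works --
# and nothing whatsoever about TikTok memes.
print(and_(1, 1), or_(0, 0), not_(1))  # -> 1 0 0
```

That gap, between understanding the smallest component and understanding the whole system, is exactly where our knowledge of the brain sits today.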

But AI companies are working on it!

Sure. There is some invention going on in artificial neural nets and ML/AI research. (Though most significant endeavors seem to center around adding more artificial neurons and making the models slimmer, faster, etc.)
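And for all the talk of “neurons”, here is roughly what the classic artificial neuron in those nets amounts to: a weighted sum and a threshold. A minimal sketch in Python (the weights below are hand-picked just for illustration):

```python
def artificial_neuron(inputs, weights, bias):
    """The classic artificial 'neuron': weighted sum + step activation.
    This is the entire abstraction -- no synapses, no chemistry, no timing."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if total > 0 else 0

# With hand-picked weights, a single such neuron acts as an AND gate:
print(artificial_neuron([1, 1], [1.0, 1.0], -1.5))  # -> 1
print(artificial_neuron([1, 0], [1.0, 1.0], -1.5))  # -> 0
```

Compare that with the biological neuron described above, the one we have never even observed in live, undisturbed action, and the word “mimic” starts to look rather generous.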

This research has been going on for almost 70 years now.

Nature has been evolving brains for over 500 million years.

And what we have right now isn’t even able to understand simple objects, or that they form a cohesive whole. To see what I mean, ask an AI to produce sources for its statements. Most of the time those sources will be wrong, or will not support the statement they are supposed to support. This is because the AI has no object-level understanding of articles. And they are generally bad at sources anyway.

I don’t doubt that researchers will eventually be able to create a digital brain that performs as well as or even better than a human brain, but I think it’s rather silly to imagine this will be done tens of millions of times faster than nature did it.

Sure, give it a century or two… or at least until we actually know how the human brain works… but trying to create a digital brain when we don’t know how the organic version works… that’s something only people who have no clue that we lack that information would ever dream of trying.

Language is not meaning

Another thing worth noting, with respect to the efforts of AI companies to develop AGI by creating larger and larger LLMs (Large Language Models), is the very nature of language. Words and sentences do not contain all information about the world. They refer to memories of experiences we’ve had as human beings, and they get a lot of their meaning from this.

Take a simple example:

"The ball slipped out of the child's hands and bounced down the staircase."

You get an image of what is happening here, how it’s happening, and what some consequences of this could be. But not because you’ve spent hours reading up on balls, bouncing, staircases, and childhood traumas related to balls going where they shouldn’t.

You understand this sentence because you’ve seen balls bounce down staircases, maybe even had a ball that bounced down a staircase where you lived, maybe even had it break something as it reached the end of the staircase…

An LLM/GPT reading everything it can about balls and staircases could never experience this sentence like you do.

All the text on the Internet: a month of visual data…

Let’s look at this problem from another angle.

Did you know that today’s LLMs are trained on almost all available text on the internet?

There are a few problems with that. What do we actually have on the internet and on social media platforms? Hardly objective data. So an AI trained on it will hardly be objective. Nor will it be able to do much original thinking…

But it’s worse.

Comparing textual data (what LLMs are trained on) with visual data (one type of data a human child would encounter and use in order to understand the world), we can quickly see the difference:

“In a mere 4 years, a child has seen 50 times more data than the biggest LLMs trained on all the text publicly available on the internet.”
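The arithmetic behind a figure like that can be sketched out on the back of an envelope. Every number below is an assumption (ballpark figures of the kind usually used in such estimates), not a measurement:

```python
# Back-of-envelope comparison; every figure below is an assumption.
llm_tokens = 1e13            # tokens in a large LLM training corpus (assumed)
bytes_per_token = 2          # rough average text encoding (assumed)
llm_bytes = llm_tokens * bytes_per_token                # ~2e13 bytes of text

wake_hours = 16_000          # waking hours in a child's first 4 years (assumed)
optic_bytes_per_sec = 2e7    # data rate through the optic nerves (assumed)
child_bytes = wake_hours * 3600 * optic_bytes_per_sec   # ~1.2e15 bytes of vision

print(f"child / LLM ratio: roughly {child_bytes / llm_bytes:.0f}x")
```

With these assumed inputs the ratio lands in the same ballpark as the quoted 50×; change the assumptions and the exact multiple moves, but the orders of magnitude don’t.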

We can draw the conclusion that text is very light, data-wise, and that an AI with even a slim hope of gaining a human understanding of the world would have to train not only on internet text, but on video, audio, and even tactile senses.

In effect, it would have to be a robot, living among humans, in order to become human-like.

This requires some developments in robotics. Give me a robot cat, as dexterous as a real cat, and as lightweight, and maybe it’ll be able to understand humans. Unless we condemn it to live in our mobiles.

Robots of today are hardly that agile or light on their feet. They are more like your grandpa on steroids. Though there are some quite interesting developments of “super dogs”.

To be honest, though, they still seem arthritic… only faster…

Will there ever be AGI?

Here’s a thought experiment:

Let’s say you’re a forklift manufacturer, and you’re looking into AI to create an autonomous forklift. You want to give it a room with shelves and have it do all the heavy lifting from moving actual packages to organizing and inventorying the space.

Pretty much, you want to be able to drop stuff off at the front of the warehouse for it to organize in there, or ask it to bring stuff out for you to pick up, without having to bother with how that happens.

This type of product would cost you research and development. But what if I told you: just add another 10 million (or 10 billion?) dollars, and I can make your forklift critique Dostoyevsky, do childcare, or collect stamps?

Why would you want all that, and why would you pay extra for it? You want a forklift, and all the better if it isn’t conscious or self-aware and doesn’t have a ton of extra interests. It would be cruel and unusual to lock a living being in that warehouse for its whole life!

People tend to see AGI as this humanoid robot (or digital human) that can jump into any kind of situation and solve any kind of problem, but they don’t realize that’s going to cost extra, and my prediction is that it’s going to cost so much extra that very few companies or venture capitalists will be willing to pay for it.

And at the other end of AGI rests the nightmare of dealing with a self-aware conscious being. Can you really treat it however you want? Does it have rights? Will it become a digital slave? What if it wants to unionize?

I wouldn’t want to pay an extra million or billion to buy me those kinds of problems when all I wanted was a more autonomous forklift.

The problem with AI

The biggest problem with AI isn’t that it’s going to take over, out-smart us, or necessarily steal our jobs.

The biggest problem with AI is that we think it’s super smart and give it tasks it cannot handle.

We’re already doing that.

So while AI can be a threat or a promise, I think human-level AI will not be around for a long time. AI (or ML), though, is already here, it already impacts our lives, and if we don’t understand what it is and what it isn’t, we risk giving it more power than it should have.

I sincerely hope decision makers do understand that the people peddling “almost humanlike intelligence” have no clue how human intelligence actually works.

Nobody does.


How to land an airplane—and how not to…

Watching a TV show (not going to name names) where someone trying to land a small propeller airplane is asked by the “expert” to decrease speed by pushing the foot pedals just made me go bonkers.

This was the second episode of this show where some airplane “expert” seems to have kidnapped the scriptwriters and held them at gunpoint until they incorporated their “knowledge” into the script.

The first episode featured an airplane going into the water at an obviously unsurvivable angle, then floating for hours as a boat, and the ditching was explained by someone having hacked the plane’s “crash-landing” protocol.

SHUDDER!

Or how about the book where the pilot is night-flying a passenger jet and suddenly the altimeter starts showing a descent, but the pilot is confused because they can’t feel anything going on with the airplane, so they quickly (and “smartly”) conclude the ground must be rushing up towards the airplane.

DOUBLE SHUDDER!

For the uninitiated: Airplanes do not float. The Hudson River splashdown plane floated for a while, then sank. They do not have crash-landing protocols… they have checklists and procedures the pilots follow, but not a hackable protocol for crash(land)ing an airplane. Not even Boeing had a crash-landing protocol in the MAX 8… though only God knows what they were really trying to do there…

Also, the foot pedals have two purposes in airplanes: one, to adjust the rudder; the other, to activate the wheel brakes… there’s some finesse where you press the upper or lower part of them, but not in the air… unless you want to roll and crash… spiral into the ground… or something… (I have to research what pushing the pedals as if panic braking in midair will do to an airplane!)

And for the airplane with the strange altimeter? Where do I even begin? An altimeter doesn’t measure distance to the ground; it measures air pressure. (Speed is also not measured against the ground… it turns out, when flying, your relation to the air is way more important than your relation to the ground… as long as you keep from flying into it…)

When flying at night, pilots have felt things going on with the airplane and, in really unfortunate cases, grabbed the controls and (almost) crashed it. Pilots are advised NOT to listen to their sense of balance or what their eyes tell them when flying in darkness, but rather to look at the instruments.

So, a moderately trained pilot in this situation would have concluded that the airplane was going down, not that the ground was coming up, and done something about it…

There is an instrument to measure “the ground coming up against the airplane”; it’s called the radio altimeter, but in this case it’d likely be used to issue terrain warnings when the airplane is about to crash… which is what a moderately trained pilot would have assumed, had they seen the ground come rushing up with this instrument… i.e., that they were flying into a mountain.

Back to the original question…

Want to know how to slow down a small propeller airplane in the air?

RESEARCH IT.

Or, heck, just ask ChatGPT. (Though, as always, don’t take its words for granted, but it could get you started on researching how to land a small airplane):

Slowing down a small propeller plane involves reducing its speed while maintaining control and stability. Here are several steps commonly used to slow down a small propeller plane:

  1. Reduce Throttle: Gradually decrease engine power by reducing throttle settings. This reduces the thrust generated by the propeller, resulting in a decrease in forward speed.
  2. Pitch Up: Increase the aircraft’s angle of attack by gently pulling back on the control yoke or stick. This increases the aircraft’s drag and reduces its forward speed.
  3. Deploy Flaps: Extend the flaps to increase drag and lift. Flaps increase the wing’s surface area and change its shape, allowing the aircraft to maintain lift at lower speeds. Most small propeller planes have adjustable flaps that can be deployed for takeoff and landing.
  4. Extend Landing Gear: If the aircraft is equipped with retractable landing gear, extending it will increase drag and contribute to slowing down the aircraft.
  5. Air Brakes or Speed Brakes: Some small propeller planes are equipped with air brakes or speed brakes. These are surfaces that can be deployed into the airflow to increase drag and reduce speed quickly.
  6. Use Spoilers: If the aircraft is equipped with spoilers, deploying them can disrupt the airflow over the wings, increasing drag and aiding in slowing down the aircraft.
  7. Maintain Level Flight or Descend: Flying level or descending slightly while reducing power helps bleed off speed more effectively.
  8. Trim Adjustment: Adjusting the trim settings can also affect the aircraft’s speed and descent rate. This is typically done in coordination with other control inputs to maintain stability.

It’s important to remember that all these actions should be performed gradually and in coordination with each other to maintain control of the aircraft. Sudden or abrupt maneuvers can lead to loss of control or overstressing the airframe. Additionally, the specific procedures may vary depending on the aircraft type and manufacturer’s recommendations, so pilots should refer to the aircraft’s operating manual for guidance.

Or hey, guess what? There’s a YouTube video for this!

And I bet there are a ton more where this one came from!

Also, had the scriptwriters watched this video, they’d have figured out that the thing to hack on an airplane is the auto-land function, tricking the plane into thinking it was going to land at an airport.

Just don’t ask me what they thought the pilots would be doing when they figured that one out… perhaps shut the autopilot off? I guess that button had to be hacked too… oh well…

Or just hack it to perform what’s commonly called “a controlled flight into terrain”. Though, those usually end up killing everybody because the plane goes too fast (which is why the MAX 8 incidents were lethal…)

The key takeaway: RESEARCH IT!


Whiskey Whiskey Whiskey

“WHISKEY… WHiskey… Whiskey… whiskey.. w-h-i-s-k-e-y. Uh…”
“Identify yourself? Over.”
“WHISKEY… WHIskey… Whiskey… whiskey… w-h-i-s-key… And… over?”
“This is an army frequency. You’re not supposed to be here unless you’re army. Over.”
“I am army.”
“Explain the Whiskey… thing. Over.”
“It says here, my call sign is whiskey echo five… times I guess? Uh… Over.”