Monday, February 17, 2025

CommonSense™

In my most recent installment of this discussion, "Hallucinations My Ass!", I stated:
On their own, I think it is very possible that LLMs are a complete dead-end on the road to AGI [Artificial General Intelligence].
In my initial LLM "AI" rant "Bullshit All The Way Down", towards the end I stated:
Plus, I think I understand the shape of this technology, & I don't think it would be that interesting to me. The only thing I think would be interesting is, figuring out how to communicate with "CommonSense™".
I think that common sense - actually the lack thereof - is the reason for the "complete dead-end on the road to AGI".

In the early 1980s, I was doing a lot of reading on AI, & had developed a rule-based expert system used in reviewing physicians' patient medical records. The rules for the expert system came from panels of human experts.
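For flavor, here is a minimal sketch of the shape such a system takes: forward-chaining rules over a record, each rule written down by a human expert. The rule names & record fields below are hypothetical, not the actual rules from that system:

```python
# A minimal rule-based review pass, in the spirit of 1980s expert systems.
# Rule names & record fields are hypothetical, for illustration only.

def antibiotic_without_culture(record):
    if record.get("antibiotic_prescribed") and not record.get("culture_ordered"):
        return "Flag: antibiotic prescribed, no culture on file"

def abnormal_lab_no_followup(record):
    if record.get("abnormal_lab") and not record.get("followup_scheduled"):
        return "Flag: abnormal lab result, no follow-up scheduled"

RULES = [antibiotic_without_culture, abnormal_lab_no_followup]

def review(record):
    """Run every rule against a patient record & collect the flags."""
    return [flag for rule in RULES if (flag := rule(record)) is not None]

print(review({"antibiotic_prescribed": True, "abnormal_lab": True}))
# ['Flag: antibiotic prescribed, no culture on file',
#  'Flag: abnormal lab result, no follow-up scheduled']
```

The key property - & the key limitation - is that every rule had to be supplied by a human who already had the common sense.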

2 things that really stuck with me from those readings:

  1. common sense is a must-have component for AGI.
  2. the key thing about common sense is, it is physical.

All of us know that a square peg will not fit in a round hole because, at some point when we were toddlers, we spent an afternoon trying to fit a square peg into a round hole AND IT JUST WOULDN'T FIT!

[Note, this image is from a Tibco Software ad - thanks! It doesn't look AI-generated to me, so I am including it. If you can find otherwise, please advise, & I will remove it.]

I think that this is true for all mobile animals: around the time we develop our mobility & other motor skills, we are given a crash course in physics, in what works in the real world & what doesn't. These lessons are reinforced by frustration, pain, & possibly injuries.

LLMs completely lack this foundation.
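The toddler's lesson compresses to one line of geometry: a square peg of side s only fits a round hole of diameter d if its diagonal, s·√2, is no wider than d. A trivial sketch (my numbers, nobody's actual toy):

```python
import math

def square_peg_fits_round_hole(side, hole_diameter):
    # The widest part of a square peg is its diagonal: side * sqrt(2).
    return side * math.sqrt(2) <= hole_diameter

print(square_peg_fits_round_hole(1.0, 1.0))  # False - IT JUST WON'T FIT!
print(square_peg_fits_round_hole(1.0, 1.5))  # True - diagonal ~1.414 clears
```

The toddler, of course, learns this without ever seeing the formula.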

Doc Searls spent a while playing with ChatGPT, detailing the nonsense it produced when trying to generate images. I would just include the image, but that would violate the terms of this blog. Here is a recent (2025-01-02) blog post of his, aptly titled "AI Achieves Sentience, Commits Suicide".

Attempting to get an image of a "pothole that has no bottom, set in a small town, with workers standing around it looking down into it", he goes, as usual, through many unsuccessful attempts to get what he wants. I was struck by 1 image which has a worker levitating over the hole!

Clearly ChatGPT never got that crash course in physics!

The stuff that LLMs do is totally a 2ndary skill in an AGI's toolbox.

Video games have physics engines that somewhat understand the laws of physics. Maybe you use a physics engine as the 0 level for an AGI. Then, add a chemistry engine - who knows what stupid stuff re fire, oxidation, acids, reducing agents, etc. your AGI produces otherwise? Then add the biology engine. Then a language engine, maybe an LLM. Then the social & moral engines.
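Here is a rough sketch of what that layering might look like - purely illustrative, with made-up engine names & a toy gravity check of the sort that would have caught Doc's levitating worker:

```python
# Purely illustrative layering; engine names & checks are made up.

class PhysicsEngine:
    """Level 0: veto outputs that violate basic physics."""
    def check(self, scene):
        for obj in scene.get("objects", []):
            if not obj.get("supported", False):
                return f"physics violation: {obj['name']} is levitating"
        return None

class ChemistryEngine:
    """Next level up: e.g., nothing burns without an oxidizer."""
    def check(self, scene):
        if scene.get("fire") and not scene.get("oxidizer"):
            return "chemistry violation: fire with no oxidizer present"
        return None

# ...then biology, language (maybe an LLM), social & moral engines.
ENGINES = [PhysicsEngine(), ChemistryEngine()]

def vet(scene):
    """Run a proposed output up the stack; any layer can veto it."""
    for engine in ENGINES:
        problem = engine.check(scene)
        if problem:
            return problem
    return "OK"

print(vet({"objects": [{"name": "worker", "supported": False}]}))
# physics violation: worker is levitating
```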

I think that Asimov's 3 laws of robotics are completely unnecessary. Instead, teach your AGI the golden, silver, & bronze rules, just like you teach your children.

Actually the 0 level should be mathematics, of which the 0 level is arithmetic. LLMs are horrible at math. For example, they routinely insist that "2+2=5" because that string shows up in so many of the corpuses they were trained on - as an example of falsehood, but the model neither knows nor cares about that.

A recent example of this in 1 of my local newspapers: an article on the Forest Service purging 3,400 out of 20,000 employees - "roughly 10%".

No, it is 17%, 1 in 6, not 1 in 10. I so hate that my already crappy local newspapers are getting even crappier thanks to LLMs.
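For the record, the arithmetic level of the stack gets this right every single time, for free:

```python
purged, total = 3_400, 20_000
print(f"{purged / total:.0%}")          # 17%, not "roughly 10%"
print(f"1 in {round(total / purged)}")  # 1 in 6, not 1 in 10
```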

I wonder, though: what I am ruminating on re AGI is all happening in code, in virtuality, not in reality. Is it possible that AGI will require a real, physical body to be achieved? That software won't be able to understand the real, physical world - a prerequisite to emulating human intelligence - until it is hosted on vaguely human-like hardware? Or at least on some kind of hardware that has a discernible presence, to the software, in the physical world.
