Sunday, July 27, 2003

Hard AI

Hard AI (Artificial Intelligence) posits that mind can be instantiated in hardware other than the human brain. If you can do hard AI, then presumably truly intelligent machines can be created or evolved.

The other side of the Cybernetic Singularity is humans being able to migrate their intelligence to silicon -- kind of like "The Matrix", except that you throw the body away afterward. In some science fiction, the supposition has been that you would have to do a deep, destructive scan of the brain to make such a transfer -- which may not be too far off the mark. To a computer geek like myself, the hardware/software <=> brain/mind analogy has been intuitively compelling for many years, but the more I have studied brain and mind science, the clearer it has become that the implementation of the human mind, particularly memory, is seriously intertwined with the neurological hardware -- not the nice, clean layers that we like to build in software.

One of the first sci-fi novels that was way ahead of the curve on this stuff was "Vacuum Flowers" by Michael Swanwick (1987, now out of print). Around the time it came out, some of my colleagues and I were talking about the analogies between computer and human design and maintenance:

  • Hardware maintenance engineer <=> doctor
  • Hardware designer <=> genetic engineer (future)
  • Software maintainer <=> psychiatrist
  • Software developer <=> prophet??? self-help guru???
As part of the discussion, we wanted a term for programs that humans could load into their brains and run, but we couldn't come up with a word we liked -- then out comes "Vacuum Flowers" with "wetware", which is perfect. Other good concepts in the book:
  • Loadable personalities, available at your local book/music/video store. No doubt in my mind, if the average teen could "be" Britney or whoever the latest is instead of just dressing like them and idolizing them, they would.
  • Designed personalities. One of the main characters has a personality built from four archetypes: trickster, warrior, leader, fool (I think).
  • The earth is a hive mind. The rest of the solar system is very careful to avoid "being assimilated".
All in all, a fantastic read for 1987. I am going to do a reread soon. Swanwick has been very prolific since then, but nothing else quite in this memespace. Some of his stuff, though, has a misogynistic streak I've never understood.

Back to hard AI, I think that machines will be able to far surpass humans. Human/machine interfaces or outboard processors for minds will probably be de rigueur for competitive survival. There was an interesting rebuttal of the Cybernetic Singularity last year by Jaron Lanier, I think at The Edge. His point was that he wasn't too worried about it, since current software is way too buggy to ever get as sophisticated as the mind. Two points against that argument:

  1. The human mind is buggy. If it weren't, we wouldn't need mental hospitals. And even sane minds are subject to many cognitive illusions (see "Inevitable Illusions", Massimo Piattelli-Palmarini, 1994). You can also find many references to how overrated human intuition is. Physicians who don't follow strict protocols but rather trust their instincts and intuitions are wrong more often than they are right.
  2. Software is still very young. For instance, basic protocols for component communication have never been stable long enough for any kind of organic growth. The DCOM-CORBA rivalry has given way to Web Services; early ontology exchange models are being replaced by DAML+OIL. Interestingly, Lanier has recently proposed "Phenotropic Computing" -- hard-defined interfaces are replaced by fuzzy pattern recognition between software components. Very interesting, much more brain-like, much less brittle and thus with much more potential to evolve. A toy sketch of the idea follows this list.
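To make the phenotropic idea concrete, here is a toy sketch in Python -- my own illustration of the flavor of it, not Lanier's actual design, and every name in it is made up. Components advertise what they do as a bag of descriptive terms, and a dispatcher routes each request to whichever component matches best, so a near miss still dispatches instead of failing outright:

```python
# Toy sketch of fuzzy, "phenotropic"-style dispatch: components are
# matched to requests by how similar their descriptions are, not by
# an exact interface name. Purely illustrative; not Lanier's design.

def jaccard(a, b):
    """Similarity of two term sets, from 0.0 (disjoint) to 1.0 (equal)."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

class FuzzyBus:
    def __init__(self):
        self.components = []  # list of (description terms, handler) pairs

    def register(self, terms, handler):
        self.components.append((set(terms), handler))

    def request(self, terms, payload, threshold=0.3):
        # Score every component against the request's vocabulary and
        # dispatch to the best match; a near miss still works, so small
        # drift on either side degrades gracefully rather than breaking.
        scored = [(jaccard(terms, t), h) for t, h in self.components]
        if not scored:
            raise LookupError("no components registered")
        score, handler = max(scored, key=lambda pair: pair[0])
        if score < threshold:
            raise LookupError("nothing resembles this request closely enough")
        return handler(payload)

bus = FuzzyBus()
bus.register({"text", "transform", "uppercase"}, lambda s: s.upper())
bus.register({"text", "transform", "reverse"}, lambda s: s[::-1])

# The caller's vocabulary overlaps, but does not exactly match, any
# registration; the closest component still answers.
print(bus.request({"uppercase", "text"}, "wetware"))  # WETWARE
```

Rename a term on either side and the match score drops a little but the system keeps working, which is roughly the "less brittle" property Lanier is after; a hard interface would have failed at the first mismatch.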
Enough for now. Next up, Greg Egan, AE (Artificial Emotion).
