
Is Robo Habilis a gateway to Intelligence?

In response to my Robo Habilis post, Tim Tyler replied:

An intelligence challenge should not involve building mechanical robot controllers – IMO. That’s a bit of a different problem – and a rather difficult one – because of the long build-test cycle involved in such projects.
There are plenty of purer tests of intelligence that use more abstract ideas – games, puzzles, and other classical intelligence test fodder.
If you want to measure the abilities of mechanical robots, then fine, but let’s not pretend that it’s the same thing as measuring intelligence.

This is a fairly widely held view — there were a couple of researchers at the AGI Roadmap meeting expressing the same idea.  If I understand him correctly, Minsky feels the same way.  I believe, however, that it is not true.

To begin with, that was the reigning paradigm of the entire “golden age” of AI from the 50s through the 70s. Even Shakey the Robot had a bicameral control architecture: a body control program written in SAIL, and a cognitive engine written in LISP.  It was strongly believed that the parts of thought that were hard for humans would be the hard ones to program, and that once we got those licked, building the lower-level body-controller stuff (or vision, or speech-to-text for the input) would be an afterthought, or at most a clean-up engineering exercise.

Over the course of the 60s, classic AI had a tremendous run of success, which is pretty neatly summed up by the work in Minsky’s “Semantic Information Processing.”  They had programs that did games, puzzles, intelligence tests, arithmetic word problems, freshman calculus. The hard stuff. They were full of optimism, and predicted that AI would run to a successful conclusion, creating an artificial mind, in another decade or two. They had done the college student; how much more effort should it take to do a toddler?

They were wrong.  The greatest lesson that came out of the Golden Age was that “the hard stuff is easy, and the easy stuff is hard.”  Any toddler could recognize a dog in a picture; it would be three more decades before AI could get even close (and it’s still not really there yet).

The mind, it turns out, is like an iceberg — most of it is unseen to consciousness, below the waterline.  Perhaps a better analogy would be that consciousness is like the legislature of a country, or the head office of a company. What they perceive is in reality only an executive summary of what’s really happening. What the early AI researchers had done was to build a “company” consisting only of the board of directors and secretaries, but no factories, no sales force, no middle managers, no shop foremen, and no labor force.

The brain was evolved as a body controller. Evolution typically takes a structure that works and copies and adapts it to the next task. Consider the increasing intelligence of animals as we work ourselves up the evolutionary tree towards the human: insects, reptiles, mammals, primates.  At every level new and improved kinds of control, feedback, discrimination, planning, and learning are built into the structure — and it’s all still there, forming the below-the-waterline part of the iceberg, the company outside the boardroom, of human intelligence.

The classic AIers at the Roadmap asked me, “Isn’t a blind paraplegic still intelligent?” and of course he is — but only because his brain still contains all the mechanism that evolved to do the control and interpretation he now lacks.

The buzzword in current AI for the reason bodies are important is “symbol grounding.”  This refers to philosophical theories of meaning among symbols in symbol-processing machinery, and a simplistic reading of it is that whereas SHRDLU doesn’t “really know” what a red block is, a physical robot that plays with them really does.  Unfortunately, the term in common use is often taken as implying that there is some magical transubstantiation of meaning into symbols by virtue of having a physical body, and this isn’t right and obscures the real issue. The paraplegic still has meaning in his mind.

What has to be there is not the actual body, but the mental mechanism for controlling it — that allows the mind to imagine, predict, describe, and relate other concepts to the one said to be understood. Most of our higher-level concepts are drawn from, by analogy and blending, the basic (very large) set of concepts we have learned, by experience, on the shop floors of our minds as we interact with the real world over the course of our lives.

Could that interpretive, predictive, concept-building, etc., cognitive machinery be built another way than by working up a controller for a humanoid robot body? Certainly. But there are two reasons to do it with a body: first, it’s most likely easiest that way.  There are a lot of things we don’t know yet about how the mind works.  There’s no reason to think we don’t have blind spots of our own, just as the classic AIers did. Working with real robots will show us the gaps fastest.

The second reason is that once we get the brain built, if we’ve put it together in a rough semblance of the phylogenetic/ontogenetic sequence in which the human mind is built, there’ll be a much better chance that its meanings will match ours. It will understand things the way we do (of course humans vary a lot in the way we understand things), and do things the way we do, and thus appreciate the way we do them, and vice versa.  For example, the parts of the brain that control language and manual manipulation are strongly overlapped. Try to teach your robot sign language without a similar structure and it will never get the “accent” right.  Nor, unless it has the same kind of manipulation control to borrow, will it ever be as fluent in English as a human.

Separating “intelligence” from the rest of cognitive function is a false dichotomy, and one that has led AI astray — in a big way — before.

9 Responses to “Is Robo Habilis a gateway to Intelligence?”

  1. Mike Cope Says:

    Exactly. You’d like Merleau-Ponty.

  2. Instapundit » Blog Archive » THOUGHT ON THE AI TAKEOVER, from J. Storrs Hall. Plus, Robo Habilis?… Says:

    [...] THOUGHT ON THE AI TAKEOVER, from J. Storrs Hall. Plus, Robo Habilis? [...]

  3. flashgordon Says:

    Yes, as I’ve tried to explain to Chris Phoenix; stop trying to be the toxic avenger to make people do good; instead of using chemicals to get rid of the ants, just get rid of the food! Actually, I told him more or less that if you want peace, do peaceful things . . . like science! I told him about the difference between herbivores and carnivores; his counter-example?! Rams bucking each other! So what!?

    Well, here you are (Josh Storrs Hall) talking about how you can’t separate the mind and body; that the mind is somehow influenced by the body it has; this kind of reminds me of pretty versus ugly girls; pretty girls get it in their heads and spend all day in front of a mirror learning nothing but beauty tips; ugly girls start to develop their minds; generally speaking, the mathematicians are those types who were the outcasts of their societies; they were the ugly ones; i’m in the military; i’m good looking; i’m telling you, a good looking babe is a real shocker! (yes, I’ve seen a few; i can count them on one hand; in the navy, you get around; you see plenty of people; there’s no real significant amount of ‘really’ good looking babes in the military; really good looking babes just marry some rich kid after high school; you don’t see them ever again unless you’re part of the rich club . . . i’m not really rich)

    Let’s stop the CRN . . . i’ll go ahead and throw in the Foresight Institute and I guess the Lifeboat org . . . toxic avengers; stop mechanically forcing people to not kill one another and just clean up the sugar and the food, so the ants don’t come! Just go out in space and do science, folks!

    I’ll tell you why! Because you’re all socially bound up with the irrationalists! I’ve tried to tell Mr Phoenix this as well as Eric Drexler . . . to say the least, they’ve not bothered to reply in years, and even when I’ve tried to really push them, they won’t budge!

    No amount of logic or facts will make these people talk about irrationality! I’ve pointed out that the anti-space migration policy does not logically follow from saving the earth; their logic is “well we can’t let anybody leave because we have to save the earth.” They still refuse to comment!

    Why? Because they have it in their heads that those who want to use nanotechnologies to enhance their brains are ‘extremists.’ This all goes back to Bill Joy’s article making it politically incorrect to point out that the technological future will make life very hard on the irrationalists!

  4. Tim Tyler Says:

    That’s a long, rambling post. I don’t think it does much to refute my original point. Good-quality mechanical robot control is a difficult problem involving moving parts, nanotechnology – and a whole bunch of things that come after machine intelligence. If we make the mistake of defining intelligence in terms of mechanical robot controllers, we will get into an enormous muddle.

    Intelligence has *nothing* to do with mechanical robot controllers. It is a property of arbitrary input-compute-output systems.

    It’s not easiest to build mechanical robot controllers first. That’s because mechanical robot actuators lag behind other types of actuator – because of their moving parts – and therefore have limited penetration. That approach would be *incredibly* slow! What I think is easiest is controlling the existing 6 billion human bodies over the internet via computer screens – à la Google and hedge funds. That way we get to use the existing 6 billion sets of molecular nanotechnology sensors and actuators practically for free.

  5. Alvis Brigis Says:

    Nice post, Mr. Hall. I agree with your criticism of this notion of “purer tests of intelligence”. “Intelligence” is an elusive term that thinkers generally equate with IQ tests, to the detriment of forward progress in actually defining it and its interplay with the broader system.

    Measures of intelligence must take into account the mind/body continuum and extend body to incorporate the environment – especially as we attempt to bridge abstractions like “intelligence” and “consciousness” with convergent acceleration models. To not do so is a cop-out that serves to simplify conversations and models, but allows for miscommunication of meaning, making it harder to achieve consensus on what we really mean.

    You may find interesting some of the writing I’ve done on and near the subject:

    http://socialnode.blogspot.com/2009/10/control-over-perceived-environment-cope.html

    http://memebox.com/futureblogger/show/1212-spivack-kelly-pushing-tech-consciousness-boundaries-but-how-deep-is-the-rabbit-hole-

    http://socialnode.blogspot.com/2009/06/simulation-era.html

    In comments here -
    http://www.sentientdevelopments.com/2009/06/transhumanism-and-intelligence_22.html

    Keep up the thoughtful posting!

  6. Alvis Brigis Says:

    Just read Tim’s response, and it seems like you guys may be having a miscommunication. I think Tim is among those who view intelligence as a system property:

    It is a property of arbitrary input-compute-output systems.

    And was perhaps simply making a point about the inefficiency of using robots to run the tests.

    @ Tim – Love the line about using “the existing 6 billion sets of molecular nanotechnology sensors and actuators practically for free.” :)

  7. Tim Tyler Says:

    In the definition of universal intelligence, it says it should be able to perform in a range of environments – so tests involving robots are – in principle – allowed.

    However, one should not penalise a mind because its body sucks. If a human can’t make a cup of coffee using the robot via telepresence technologies either – for example, because the machine can’t hold the containers of liquid involved – then that shouldn’t count against the machine mind.

    Another way it would be unfair to penalise machines is if they have less information about the problem. A typical human has been knocking around in their body for decades. A machine mind in a robot body might well take a decade to build up a similar set of experiences (this is part of why using robots is so slow). A coffee-making test relies on this kind of laboriously-acquired real-world information – and until a robot acquires it, its failure at coffee making is no reflection on its *intelligence*; it simply reflects a lack of relevant *experience*.

  8. Dave Wyland Says:

    Excellent post on the problems with “pure” AI. I agree with the idea that intelligence is an abstraction of the learned skills of evolution plus cultural and individual developmental learning. We see only the results of this “iceberg” of learning. We do not have access to how the learning took place over years to eons. That makes it hard to duplicate the learning, and therefore hard to duplicate the abstraction of intelligence.

    The ski instructor makes it look simple and effortless, but you will not begin to approach that state until you have struggled through all the falls and bad skiing necessary for you and your muscles to learn how to do it well. Same with golf, etc. You can’t come back from where you haven’t been.

  9. Bob Mottram Says:

    Doubtless it will be possible to build an AGI without reference to the wider environment, but if your desire is to create a “habilis” then some sort of body will be required – even if it’s only a simulated one. So much of our intellectual performance is bound up with being embodied that attempts to neatly delineate mind and body into separate magisteria are doomed from the outset. Even in the realm of pure linguistics it is hard to avoid concepts which originate from embodiment, and the purely linguistic AI will struggle to understand many concepts with which humans are trivially acquainted.

    The main difficulty facing habilis creators suffering from robophobia is that current simulation environments are insufficiently rich to produce experiences comparable to human sensory input. Some satisfaction can be obtained by using data sets such as those on Rawseeds, but real AGI will require learning through interaction, which these static data sets are unable to deliver.
