
AI: Summing up

Let’s try to pull all the threads together, as futurists — which is the whole point here — and get some idea about when it might be reasonable to expect AI to show up.  When I say AI I want to look at the entire diahuman range, so the answer would still be a range even if we were historians looking back on the process from the vantage point of the far future.

I’ve claimed that “I think we have the techniques now to build an AI at the hypo/dia border, equivalent to a dull but functional human.”  That doesn’t mean we have one now, or even that one is possible next year.  What it means is that by the kind of techniques we can now use to program self-driving cars, we could, with a major development effort, program an AI that would be able to do as broad a range of that kind of task as a very dull human can, but which would need additional programming to do new tasks.

Commenter Alex Kilpatrick put forward a cogent objection to the “AI is near” thesis, writing:

All of the so-called gains in AI are still a million miles away from the “dull but functional human”. There are some things like playing chess that computers do really well. And intelligent humans do those things too. But that in no way means the computer is remotely intelligent.
The whole AI field is nothing but clever programming. Some of those programs are quite clever indeed, but they represent the intelligence of their creators, not the programs. Some programs may appear intelligent in very narrow domains, but they are extremely brittle — they will not be useful at all even on the borders of the domains for which they were designed.

I agree with this strongly as a description of the state of AI today in general, but with one major reservation: not quite all of the AI field is clever programming and nothing more. The AI programs that do the most impressive application tasks certainly are, because the efforts to build general learning machines are still less than babies at the moment.

The key to moving up from the hypo/dia border into the diahuman range is imitation. I'd guess that the state of the art would let us build a machine that could watch someone sweeping a room and then sweep the same room with more or less the same series of strokes, while remaining brittle to changes in the furniture positions and so forth. (Consider the kind of learning demonstrated in Ng's helicopter.) Building an AI that could watch lots of sweeping and then figure out on its own how to sweep a new room, without having been programmed with any knowledge of sweeping ahead of time, is the kind of thing we need to advance the state of the art.
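
To make the contrast concrete, here is a minimal sketch in Python of the two kinds of imitation. Everything in it is a hypothetical stand-in: the state and action types, and the nearest-neighbor lookup that fills in for whatever learner a real system would actually use.

```python
# A minimal sketch of the two kinds of "imitation" discussed above.
# All names and types here are hypothetical illustrations, not any
# particular robot's API.

from typing import Callable, List, Tuple

State = Tuple[float, float]   # e.g. (x, y) position of the broom head
Action = Tuple[float, float]  # e.g. (dx, dy) stroke to apply next


def replay_policy(demonstration: List[Action]) -> Callable[[int, State], Action]:
    """Brittle imitation: store one demonstrated stroke sequence and replay it.

    This works only in the room it was recorded in; move the furniture and
    the stored strokes are simply wrong.
    """
    def policy(step: int, state: State) -> Action:
        return demonstration[step]   # ignores the current state entirely
    return policy


def learned_policy(demos: List[List[Tuple[State, Action]]]) -> Callable[[int, State], Action]:
    """Generalizing imitation (crude behavioral cloning): from many
    (state, action) pairs, infer a mapping from situations to strokes.

    Nearest-neighbor lookup stands in for the statistical model a real
    system would fit.
    """
    examples = [pair for demo in demos for pair in demo]

    def policy(step: int, state: State) -> Action:
        # Pick the action whose recorded state is closest to the current one.
        nearest = min(
            examples,
            key=lambda ex: (ex[0][0] - state[0]) ** 2 + (ex[0][1] - state[1]) ** 2,
        )
        return nearest[1]
    return policy
```

The first policy ignores the state it finds itself in, which is exactly why it is brittle; the second at least conditions on the situation it observes, however crudely, and so has some hope of coping with a new room.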

The difference is that in the second case the AI is inferring a model and a program from observations.  But this is what 21st century AI is (already) all about — typically, today, inferring statistical models from reams and reams of observations, but at least tackling the right problem.  The main thing that will determine the rate of advance is how much of the clever programming goes directly into end applications and how much goes into basic core learning.
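
As a toy illustration of what "inferring a statistical model from observations" means in the simplest case, here is a hedged sketch: rather than hand-coding the relationship, we estimate its parameters from logged data. Real systems infer far richer models than a straight line, but the shape of the problem is the same.

```python
# Toy example: infer a model from observations instead of programming it in.
# The data and the linear model are purely illustrative.

import random

# Pretend these are logged observations of some process we did not write.
observations = [(x, 3.0 * x + 2.0 + random.gauss(0, 0.1)) for x in range(100)]

# Ordinary least-squares fit of y = a*x + b, written out for clarity.
n = len(observations)
sx = sum(x for x, _ in observations)
sy = sum(y for _, y in observations)
sxx = sum(x * x for x, _ in observations)
sxy = sum(x * y for x, y in observations)

a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
b = (sy - a * sx) / n

print(f"inferred model: y = {a:.2f}*x + {b:.2f}")  # close to the true 3x + 2
```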

Concept formation, model building, program inference, and so on are a quantum step harder than parameter tuning in a known ontology. However, the math for that kind of thing is advancing, and the processing power to use techniques such as search and GAs is on its way in the next decade. I don't think we'll have a superintelligent AI by 2020; indeed, I don't think we'll even have one that can educate itself by reading Wikipedia. But I do think there's at least a 50% chance we'll have AIs that can learn something by a combination of imitation and careful verbal coaching.
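
For flavor, here is a bare-bones genetic algorithm of the kind that the coming processing power would let us run at serious scale. The bitstring genome and the trivial "count the ones" fitness function are placeholders; concept formation or program inference would need far richer representations and fitness measures.

```python
# A bare-bones genetic algorithm on a toy problem: evolve a bitstring with
# as many 1s as possible. Illustrative only.

import random

GENOME_LEN, POP_SIZE, GENERATIONS, MUTATION_RATE = 32, 50, 100, 0.02

def fitness(genome):
    return sum(genome)  # stand-in for "how well does this candidate perform?"

def mutate(genome):
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

def crossover(a, b):
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    # Keep the fitter half, then refill the population with mutated offspring.
    population.sort(key=fitness, reverse=True)
    parents = population[: POP_SIZE // 2]
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

print("best fitness found:", fitness(max(population, key=fitness)))
```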

16 Responses to “AI: Summing up”

  1. Fred Hapgood Says:

    > … a 50% chance we’ll have AIs that can learn something by a combination of
    > imitation and careful verbal coaching.

    That looks like a fairly low standard, but I really don’t know what ‘learn’ means.
    How about cheap generalization? A pigeon will recognize a leaf on which it has been trained over a wide range of lighting regimes, distances, orientations, life phases (i.e. young leaves vs old leaves), and occlusion environments (clutter). To me, AI means a machine that can do that. I’m imagining a machine such that you can present
    it with a large pile of assorted objects, then present it with a ball, and it will separate out all the spheres in the pile and put them in a box. When that is done, you can present it with a cube or a pyramid or a shoe and without *any further training* it will do the same for those. Routine pick and place problems — for a pigeon, let alone a human. Think we will have those by 2020? The impact on the economy would be immense. Just immense.


  2. Fred Hapgood Says:

    Oops. Please delete the above.

    > … a 50% chance we’ll have AIs that can learn something by a combination of
    > imitation and careful verbal coaching.

    That looks like a fairly low standard, but I really don’t know what ‘learn’ means.

    How about cheap generalization? A pigeon will recognize a leaf on which it has been trained over a wide range of lighting regimes, distances, orientations, life phases (i.e. young leaves vs old leaves), and occlusion environments (clutter). To me, AI means a machine that can do that. I’m imagining a machine such that you can present
    it with a large pile of assorted objects, then present it with a ball, and it will separate out all the spheres in the pile and put them in a box. When that is done, you can present it with a cube or a pyramid or a shoe and without *any further training* it will do the same for those. Routine pick and place problems — for a pigeon, let alone a human. Think we will have those by 2020? The impact on the economy would be immense. Just immense.

  3. curious_undergrad Says:

    I will be satisfied with something a little like Microsoft’s Courier tablet (hopefully very very soon).

    http://www.youtube.com/watch?v=UmIgNfp-MdI

  4. dz Says:

    Regarding the question of whether AGI can achieve or exceed human levels, the recent article at physorg http://www.physorg.com/news186071954.html is relevant.

    If general intelligence is indeed a phenomenon that can be localized and mapped, then it seems that it would be possible to improve the intelligence of artificial brains in a more direct manner than simply increasing their speed (which is smarter, one human or a thousand monkeys thinking 10 times as fast as normal?).

  5. James A. Donald Says:

    We not only do not have artificial humans, we do not have artificial bees. Bees are not brittle. They react sensibly to situations they were never specifically programmed to deal with. Seems to me we are missing some secret sauce. Let us try to upload Caenorhabditis elegans.

    I suspect it will turn out to be far easier to upload a person, than to create de novo an intelligence with capabilities similar to that of a person. Writing software is hard, copying it is easy. If we could upload Caenorhabditis elegans, then uploading a person would merely be a matter of funding.

  6. Peter Says:

    “(which is smarter, one human or a thousand monkeys thinking 10 times as fast as normal?)”
    = A thousand humans thinking 10 times as fast as normal.
    There is nothing artificial about intelligence.

  7. Kyle Says:

    Even if we use a learning machine with neural networks, it in essence comes down to statistics. How do you get away from the statistics?

    Even abstraction or novel thought might be nothing more than the averaging out of thousands of random associations. To put integrity in a machine like that is pretty risky, but something we should definitely do.

  8. Fred Hapgood Says:

    Recently a lecture was given at Harvard by a Cornell scientist named Itai Cohen on the Flight of the Fruit Fly. In part the abstract read:

    There comes a time in each of our lives where we grab a thick section of the morning paper, roll it up and set off to do battle with one of nature’s most accomplished aviators – the fly. If however, instead of swatting we could magnify our view and experience the world in slow motion we would be privy to a world-class ballet full of graceful figure-eight wing strokes, effortless pirouettes, and astonishing acrobatics. After watching such a magnificent display, who among us could destroy this virtuoso? How do flies produce acrobatic maneuvers with such precision? What control mechanisms do they need to maneuver? More abstractly, what problem are they solving as they fly? Despite pioneering studies of flight control in tethered insects, robotic wing experiments, and fluid dynamics simulations that have revealed basic mechanisms for unsteady force generation during steady flight, the answers to these questions remain elusive.

  9. rhhardin Says:

    AI is the field with the longest-running future promise of any I know of.

    I suspect it’s the same thing that used to attract young males to philosophy.

    Males abstract and simplify. Wittgenstein showed how that eliminated the solution to the problem.

    Lots of things are like that. AI is probably one.

  10. jdelphiki Says:

    We humans tend to be fairly egocentric in our perception of intelligence. We set criteria that we feel represent a baseline for artificial intelligence, but we tend to ignore how much of that baseline would also have to include instinctive capabilities that we've evolved over a few million years, or learned reactions that we pick up intuitively from having to use our bodies and brains in the variety of environments in which we live.

    In the example of teaching a machine how to sweep, the machine would already be at a disadvantage if it did not first have built into it the ingrained capability of having hands, arms, musculature, etc. like ours…along with the years of coordinated developmental practice of using them all.

    So, do we attribute the machine's inability to intuit how to sweep by observation entirely to a lack of intelligence, or do we factor in the innate advantage we have from having designed brooms that work with the way we humans are built?

    Put another way, it took us a long time to figure out the basic intelligence we now perceive in, say, dolphins or ravens. Dolphins have been shown to have self-awareness, to pass on skills to their young, etc. Ravens show remarkable problem-solving skills, including the adapted use of tools (like, for instance, car tires to crack open nuts). Would we doubt the overall intelligence of a dolphin or a raven for not being able to “get” how to operate a broom?

    I agree that we may not have reached a point where our computers are capable of transcending their baseline programming to intuit their own conclusions. What I'm not quite certain of is how much of our own intelligence transcends our own baseline programming.

    Maybe to get at true AI, we have to focus on simple learning machines first. The rest should come on its own.

  11. TheRadicalModerate Says:

    It still seems to me that not much progress gets made until we understand whether the future lies with the nets-of-neural-nets approach or the algorithmically self-constructing ontology approach. The human brain seems perfectly capable of self-constructing its own ontologies via some kind of neural or regional self-organization. Whatever it's doing can't be particularly complex.

    On the other hand, the brain’s developmental wiring–the genetics that governs how white matter connects one cortical region to brainstem structures, limbic system structures, and other cortical regions–is fearsomely complex and the product of tens of millions of years of evolution. Whether the engineering required to mimic that evolution is more or less complex than the engineering required to produce a completely synthetic method for self-constructing ontologies will govern what happens.

  12. PacRim Jim Says:

    There is a critical threshold which, once passed, will enable rapid AI improvement. That threshold is the ability of an AI program to learn and modify itself based on what it learned. At gigahertz speed, learning will accelerate at a runaway pace.

  13. hushashi Says:

    AI will only "exist" when a system is able to, by itself, realize that it has earned nothing it "knows"; every piece of knowledge upon which it relies is something it was fed, and its view of the world and all knowledge it relies upon is based fundamentally on the trust of its designers rather than anything it has done.

    Once that line is breached, true intelligence can emerge. Until then, self-awareness means nothing and AI will remain a propeller-head dominated circle jerk.

  14. John Blake Says:

    When hyper-linked IT nodes reach a certain level of complexity c. 2030, the resulting Emergent Order may not be discernible but it will exist. Whether sentient self-awareness will accompany this development, who knows… such issues, including holographic attributes, are entirely beyond mathematicians’ purview today.

    Emergent Order is THE central question in AI (as the cliche has it, intelligence as such is not artificial; by definition, it transcends programmed design).

    AI researchers can only start things off. Like “genetic algorithms”, no-one knows or can know where Emergent Order leads. When different central foci exist in competition, the result will be a second-order Emergent Organism, and so on down the line.

  15. Peter Says:

    Ref: John Blake
    February 24th, 2010 at 9:56 PM
    Wish I could have put that comment together. IMO as good as it gets.

  16. jdelphiki Says:

    Certainly, we’ll need machines that have the ability to learn past their original programming. The problem is finding the dividing line between what’s occurred from the original programming and what’s occurring beyond.

    Human learning, itself, is based on millions of years of evolution: innate biology mixed with instinctive behavior that led us to the point where we were eventually able to see into the abstract and learn beyond our personal experiences. But our ability to learn is still based on all that evolved biology and instinct. Are we also clever machines that rely on our evolved baseline routines in ways that only appear to be intelligence and learning?

    More important, how do we create machines to do even this?

    I think it's not enough to create a "brain". We have to be able to figure out how to make machines that can use their base programming to explore and learn about the environment around them. Maybe even more than that, I think that the machines will have to have a "drive" to learn; an inherent need to find out more about their environment, much the way that an infant takes its base genetic programming and learns/explores its own environment.

    Right now, we're good at creating machines that operate quite nicely on "instinct" alone. But even an infant has the innate ability to learn what it likes or dislikes, needs or doesn't need. We might be able to eventually create machines that can learn about their environment, but unless we can find a way of making machines that can respond to stimuli out of their own needs and values, we'll have a hard time creating the separation between clever programs and intelligent machines.
