
AI: how close are we?

In the terminology I introduced in Beyond AI, all the AI we have right now is distinctly hypohuman:

The overall question we are considering, “is AI possible?”, can be summed up essentially as “is diahuman AI possible?” The range of things humans can do, done as flexibly as humans do them, and learned the way humans learn them, is as reasonable a definition of intelligence as any. This is reflected in the “Wozniak Test” and the “Nilsson Test”, i.e. the ability to do human jobs. (If nothing else, this obviates at least one other question, namely, at what point will AI have a major economic impact?)

The problem is, people have been claiming that their robots could do things like the Woz test for quite some time:

[Image: robo-maid from 1930]

From the marvelous Paleofuture blog, an advert for a robot maid in 1930! (Not exactly; read the blog post for the full story.)

Today, these things are getting closer to reality:

[Image: Mahru-Z, a Korean robot maid]

This one is much nearer the real thing than its 1930 predecessor — there’s a $3M/yr project behind it at the Korea Institute of Science and Technology.

Even so, I doubt that Mahru-Z or Willow Garage’s PR2 or any other existing robot could come close to passing the Woz test, much less the full Nilsson Test.  On the other hand, I think it’s pretty clear that over the past couple of decades there has been a very strong advance in robotic capabilities, and, IMHO, it bids fair to make robots usable in another decade and skillful in the decade after that.

How about thinking and learning?  This is really the crux of the issue; the Woz test is simply a way to sum up the necessary complexity and adaptability in one compact description.  Nobody is putting the processing power necessary to do serious AI into mobile robots.  What the robot example shows is that for specific skills, the state of the art in programming is pretty close to being able to program what a typical person could learn.

The structure of intelligence can be broken down into a set of skills, ranging from pouring coffee to doing integration by parts; meta-skills, such as recognizing which skills are appropriate when, and planning with them; and the ability to learn new skills, including meta-skills, both from imitation and by inventing them.  (Skills of course include recognizing and understanding things as well as doing things.)
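
To make that concrete, here is a minimal sketch in Python of that breakdown; the Skill and Agent classes and their fields are illustrative inventions, not anything from an actual system:

    from dataclasses import dataclass, field
    from typing import Callable, List

    @dataclass
    class Skill:
        # A competence, anywhere from pouring coffee to integration by parts.
        name: str
        applies_to: Callable[[dict], bool]  # recognizer: is this skill relevant here?
        execute: Callable[[dict], dict]     # performing the skill itself

    @dataclass
    class Agent:
        skills: List[Skill] = field(default_factory=list)

        def choose(self, situation: dict) -> Skill:
            # Meta-skill: recognize which skills are appropriate when.
            candidates = [s for s in self.skills if s.applies_to(situation)]
            if not candidates:
                raise LookupError("no applicable skill; time to learn one")
            return candidates[0]

        def learn(self, skill: Skill) -> None:
            # Learning from imitation or instruction reduces, at minimum,
            # to acquiring a new Skill (or meta-skill).
            self.skills.append(skill)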

[Image: face detection and pose estimation]
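
Face detection, as pictured, is exactly the kind of recognition skill that can be programmed directly today. A minimal sketch using OpenCV’s stock Viola-Jones (Haar cascade) detector; the image filenames are hypothetical:

    import cv2

    # Load OpenCV's bundled frontal-face Haar cascade.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    image = cv2.imread("photo.jpg")  # hypothetical input file
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

    # detectMultiScale returns one (x, y, w, h) box per detected face.
    for (x, y, w, h) in cascade.detectMultiScale(gray, scaleFactor=1.1,
                                                 minNeighbors=5):
        cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)

    cv2.imwrite("faces.jpg", image)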

Note that we’re well into the useful range even if the AI can only learn by imitation or being taught, and never does anything particularly creative or original.  So for the lowest level of AI, all we need is to program up all the basic skills we need, plus the ontologies (data structures for knowledge representation) that let the AI learn some kinds of new things, or at least be reasonably adaptable.  It would clearly have a built-in “glass ceiling” over what kinds of things it could learn, but then so do quite a few people.
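
As a toy illustration of what such an ontology might look like (the concepts and slots below are invented for the example):

    # Frame-style knowledge representation: concepts with slots,
    # linked by "is_a" inheritance so new facts slot into old structure.
    ontology = {
        "cup":       {"is_a": "container", "holds": "liquid", "graspable": True},
        "container": {"is_a": "object"},
        "coffee":    {"is_a": "liquid", "hot": True},
    }

    def inherits(concept, ancestor):
        # Walk the is_a links to answer queries like "is a cup an object?"
        while concept in ontology:
            if concept == ancestor:
                return True
            concept = ontology[concept].get("is_a")
        return concept == ancestor

    assert inherits("cup", "object")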

One fairly good overview of the kinds of skills and meta-skills that can be programmed with current techniques is the leading textbook, Russell and Norvig’s Artificial Intelligence: A Modern Approach. Just look through the table of contents… If this thousand-page tome is light in any area, it is the problem of inferring formalizations from unstructured data — but there’s a lot of work on that in real-world pursuits like data mining, where people are trying to take advantage of the treasure trove represented by the internet.
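
For a flavor of what inferring formalizations from unstructured examples looks like in practice, here is a minimal sketch using scikit-learn; the toy animal data is invented:

    # Induce an explicit formal rule (a decision tree) from raw examples.
    from sklearn.tree import DecisionTreeClassifier, export_text

    # Each example is [has_fur, lays_eggs]; the label is is_mammal.
    X = [[1, 0], [1, 1], [0, 1], [0, 0], [1, 0]]
    y = [1, 1, 0, 0, 1]

    tree = DecisionTreeClassifier().fit(X, y)
    print(export_text(tree, feature_names=["has_fur", "lays_eggs"]))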

Bottom line: I think we have the techniques now to build an AI at the hypo/dia border, equivalent to a dull but functional human.  It would have to run on a smallish supercomputer — say one rack full of servers stuffed with GPGPUs.  The problem is that it would take a huge, coordinated project to implement all the techniques and skills that are understood into a single integrated system, and AI in practice is a cottage industry.  Right now that’s not economically feasible, given the cost vs the economic value of one more dull human.  But those things will shift during the coming decade — the hardware will get cheaper, the software more sophisticated, and quite possibly by 2020 the economics will look different.  Then and only then will AI really take off.

17 Responses to “AI: how close are we?”

  1. dz Says:

    Thank you once again for your insight, Dr. Hall. I like your division of AI into various categories; it has been a while since I read your article at kurzweilai.net. It seems to me that if you are correct that we could build a functional human-level GAI today, then we will likely never see diahuman AI. To my knowledge, the Blue Brain Project is the furthest along in deciphering the software of self-awareness, and they estimate another 10 years before a human-size artificial brain could be built.

    10 years of hardware improvements would put us squarely in the epihuman AI range. The only reason for a diahuman AI paradigm would be testing of software, or an attempt to restrain the capabilities of an artificial self-aware mind.

  2. biobob Says:

    J – please explain exactly what advantage synthetic human-level AI would impart vs ‘directed software’ as it exists today and actual human intelligence (other than doing it just because it’s interesting). I am not doing the Luddite thing; I just fail to see any compelling advantage to it all. Are we that lonely? Are we to create a new class of slaves, since human slaves are an unappealing solution?

    I disagree about the utility/applicability of ‘more sophisticated’ software having any impact on AI, since we have trouble understanding even the basic nature of intelligence and how it is created biologically. Someone may stumble upon the eureka moment, or it may happen despite our best intentions, but I remain skeptical.

  3. J. Storrs Hall Says:

    biobob: You answer your own question: the reason for AI research is in fact to understand the basic nature of intelligence.

    dz: I’d claim that if you sped up an IQ 70 AI 100 times, you wouldn’t have an epihuman intelligence, but merely the equivalent of 100 not-too-bright humans. (The Senate, for example :-) )

  4. dz Says:

    biobob –

    Your “new class of slaves” argument is compelling and defensible. We run a risk of “installing” sentient software in devices where it isn’t required, just as we pile technical doodads into our mobile phones.

    However, I think synthetic brains do have advantages over organic brains and directed software. At the very least, they can run faster. Imagine an engineer, entrepreneur, financial adviser, inventor, artist or musician who could experience a lifetime in the space of a month. AI will be running at this speed about a decade after it reaches real-time speed.

    Now imagine a cluster of these AIs, communicating, creating, inventing. Now imagine them experiencing 1000 lifetimes in a month – this is where AI will be roughly 20 years after it reaches real-time speed.

    AI might not be smarter than the average human. It might not even be that different in its emotional range, perception, values, or goals. But it will greatly accelerate all knowledge-based paradigms.

    Finally, AI isn’t something that we have to figure out. We just have to copy it and then accelerate it.

  5. biobob Says:

    J – OK, I agree ;) but the rest still applies :D
    ———————————————
    DZ – the assumption is that AI could be faster, but since we have not actually created any AI, the assumption may well be false. Consider the damaged intelligences of “idiot savants”, who are able to compute almost instantaneously, at roughly the speed of today’s computers, vs “normal” humans.

    The same goes for any of the human intelligence activities we may project as being potentially faster with AI.

    Just as a stripped-down operating system performs certain operations faster, the embellishment of capability required for intelligence may always come at the expense of speed. We just do NOT know.

    However, I am and remain skeptical. Generally, with inexplicable exceptions, evolution has resulted in optimized solutions, and I cannot see any reason why human intelligence would be an exception to that rule, or why computational speed would not be a target for optimization.

  6. Instapundit » Blog Archive » ARTIFICIAL INTELLIGENCE: How close are we? Says:

    [...] ARTIFICIAL INTELLIGENCE: How close are we? [...]

  7. Kendra Says:

    I think we are already seeing the day-to-day reality of artificial intelligence: Congress, the White House and our 2nd, 3rd and 9th Circuit courts.

  8. chuckb Says:

    Hope springs eternal. I’ve been following AI advances (??) since I was an engineering student at the University of Illinois in the early ’70s. The one constant has been the claim that we’re on the verge, and that any day now (actually, the guess is usually 10 years or so) the required breakthrough(s) will come, whether technical or economic. The AGW people learned their lesson. They no longer project 10 to 20 years into the future. They’ve found that it can be unpleasant when your prognostications come back to bite you in the butt. Now they push their projections to 100 or more years.
    I’m as fascinated as the next guy with human intelligence. I just wish you guys would admit, at least to yourselves, that we don’t have a clue what it is. We can imitate all kinds of behavior and that will, like so many other technological advancements, help to make the world a better place to live. But the idea that we can create intelligent machines is an exercise in faith and nothing more.

  9. Rich Vail Says:

    We’re well past that… just look at Washington DC… Congress is a great example of AI…

    I think that it will be at least another decade before we’re there… even a supercomputer can’t match the pure computational power of a human brain.

  10. glenn Says:

    Has anyone considered the difference between an artificial intelligence and an artificial consciousness?

  11. FGH Says:

    Harvard economics professor Kenneth Rogoff recently penned an essay on AI. It’s available at the following link:
    http://www.project-syndicate.org/commentary/rogoff64

  12. Alex Kilpatrick Says:

    “I think we have the techniques now to build an AI at the hypo/dia border, equivalent to a dull but functional human.”

    I did my PhD research in AI (used Peter Norvig’s book in my graduate studies, by the way). I wrote a program for my dissertation that “learned” by itself, but ultimately I left the field. All of the so-called gains in AI are still a million miles away from the “dull but functional human.” There are some things, like playing chess, that computers do really well. And intelligent humans do those things too. But that in no way means the computer is remotely intelligent.

    The whole AI field is nothing but clever programming. Some of those programs are quite clever indeed, but they represent the intelligence of their creators, not the programs. Some programs may appear intelligent in very narrow domains, but they are extremely brittle — they will not be useful at all even on the borders of the domains for which they were designed. I have yet to see a program that has a modicum of intelligence or adaptability outside of a very, very narrow domain. They are more like an idiot savant that can add up sums of large numbers but can’t figure out how to open a door.

    People really underestimate the magic of human intelligence, even in the dull but functional humans. Humans are a miracle of adaptability that a computer will never even approach. It isn’t a question of FLOPS or GPUs. We have such an incredibly limited understanding of our own intelligence; how can we have the arrogance to think we can make an intelligent computer?

  13. TMavenger Says:

    This interesting discussion suffers from the fundamental flaw in the field of Artificial Intelligence: It is focused on the wrong subject. Consequently we have wasted decades groping for an acceptable definition of intelligence when the answer is obvious:

    Intelligence is the ability to solve problems through mental effort.

    Unfortunately this definition undermines the entire field of AI, for several reasons:

    1. Computers HAVE NO PROBLEMS. They are neither alive nor self-aware. Therefore the concept of “problem” cannot apply to them. Nevertheless, this is not a significant objection, because we are simply trying to get them to solve OUR problems.

    2. Use of the word “Intelligence” in AI is a misnomer. Many of the characteristics commonly understood to constitute human intelligence were trivial problems, solved very early in the development of computers: for example, the ability to do complex math in a very short time with absolute accuracy, the ability to generate logical results reliably from indefinitely complex premises, or the ability to play a passable game of chess (substitute any other game with well-defined rules).

    On the other hand, we have had very limited success in getting computers to replicate abilities which are trivial for humans, for example the ability to recognize a face, carry on a conversation (pass the Turing Test), or generate a mathematical proof.

    The field of “Artificial Intelligence” is largely concerned with human capabilities that are NOT generally considered characteristics of intelligent humans. Instead it attempts to reproduce those things humans do WITHOUT KNOWING HOW. This is the crux of the problem. Any activity that can be expressed as an algorithm can be coded onto a Turing Machine, but in order to produce an algorithm the coder must first understand how to perform the activity.

    Thus, complex mathematics was easily accomplished on computers, because WE KNOW EXACTLY HOW WE DO IT. Chess was not difficult to program because it has a small set of rules and a well-known set of strategies that lent themselves to algorithmic programming. Facial recognition, on the other hand, is something we do WITHOUT KNOWING HOW. The problem, therefore, is figuring out how we accomplish an activity before we can tell a machine how to do it.

    We have made some progress on some of these problems, but we are very far from answering the fundamental questions, such as the nature of consciousness. These problems have resisted understanding by philosophers for at least 2500 years, and are not likely to be solved by computer scientists in 50. For this reason I don’t expect human intelligence to be replicated on machines in the foreseeable future, if ever.

    This is also why the field of Artificial Intelligence should more correctly be called Artificial Instinct.

  14. mb Says:

    AI which can work as a software engineer — how close are we?

    This seems to be the crucial question.

  15. Jared Says:

    A very interesting and thought-provoking article. One of many that I have read over the span of my life. Many of the comments are also quite thought-provoking.

    It is interesting how closely the debate, research and execution of AI follows the patterns of the debate between free will and determinism (which could also just as easily be stated: the philosophy of consciousness).

    However, my one minor quibble is with your final statement, where you speculate that things might reach a state by the year 2020 that would enable effective implementation of AI. I find that statement ironic only in the context of the history of AI. About once every 5 or 10 years some expert comes out and says “We’ll see true AI in 10 to 15 years” – and they’ve been saying that since the ’50s.

    I think the difficulties are far greater than anyone connected with the field is able to comprehend. Why else is there such optimism over such a long period of time? Technological advances grant us an even deeper sense of optimism, and why not? We are creating things that people wouldn’t have dreamed possible even 10 years ago. But despite the advances, we’re still at least 10 years away from AI – and we always have been (apparently).

    It is my opinion that I will not see true AI in my lifetime (which, hopefully, will be at least another 40 years or so). Despite advances in computer technology (even assuming the advent of quantum computers), we are left with a fundamental gap in the ability of our programming languages to bridge the distance between “act in accordance with your programming” (which, as a previous commenter said, can be very clever indeed) and what humans evaluate as “creativity” or “genius”.

    Perhaps I will be deemed to fall into the fold of “mysterianism” which holds that human consciousness is unique and has some magical quality that can never be imbued into computers. I favor a much more simplistic definition: we are not binary.

    The complexity of the human brain includes approximately 100 billion neurons, with each neuron intertwined and connected with upwards of 10,000 other neurons, each one capable of using 7 different known neurotransmitters, solo or in combination, to convey specific messages across the network. I’m afraid to even try to calculate the math for how that works out in raw computational power.

    As soon as you’re able to develop a computer that can even approach that kind of messaging complexity, then I’ll begin to believe that AI is possible.

  16. dz Says:

    Jared,

    100 billion neurons x 10,000 connections = 1 quadrillion connections, firing at 200 Hz. 7 neurotransmitters can be represented by 3 bits of data. So, say, 600 quadrillion bits flying around per second: roughly 600 petaflops, if you count each bit-event as an operation. 10-petaflop supercomputers are under construction today, so we will be well within the 600-petaflop range in 10 years.
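
    (A quick back-of-envelope check of that arithmetic, using the comment’s own assumed figures:)

        # Every number here is an assumption from the comment above,
        # not a measured value.
        neurons = 100e9            # ~100 billion neurons
        synapses_per_neuron = 1e4  # ~10,000 connections each
        rate_hz = 200              # assumed firing rate
        bits_per_event = 3         # 7 transmitters fit in 3 bits (2**3 = 8)

        connections = neurons * synapses_per_neuron  # 1e15: "1 quadrillion"
        bits_per_sec = connections * rate_hz * bits_per_event
        print("%.1e bits/s" % bits_per_sec)          # 6.0e+17: 600 quadrillion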

    Unless you are a dualist, you must concede that an artificial brain can be built, at least as a black box, even if we don’t understand how it all works. Cells in a mouse hypothalamus have been replaced by silicon chips; we aren’t sure what is being done with the signals sent out of the chips, but they mimic exactly what the cells were doing before they were replaced. The mice function normally.

    For 50 years people have made predictions regarding AI but have not been able to substantiate those claims. Today we are already able to replace neurons with silicon and fully simulate a neocortical column. Rather than trying to create an expert system that can manage millions of rules and somehow look intelligent, we are now copying the structure and function of brains directly.

    AI researchers are not so much building an airplane as they are building an artificial bird. Hopefully, the plane will come later, once the bird can fly 30,000 kph :-)

  17. Jeremy Roberts Says:

    This is a fascinating topic, and I have an optimistic perspective. I agree that we are a miracle of adaptability, but machines could eventually be so too; maybe they will just need time to evolve, if we recreate the kind of setting that triggered our own evolution.
