
Is AI really possible?

I’m about to start a series of posts on why I think AI is actually possible. I realize that most of the readers here probably don’t need much convincing on that subject, but you’d be surprised how many very smart people, many of them professors of computer science, are skeptical to one extent or another on that point.

To start off, though, I’m just soliciting comments to get a feel for where the readership stands on the subject, and what issues anyone feels are important to the argument.

Start your comment off with an indication of when you think we’ll have human-level AI, and go from there:

  • in the next decade
  • in the 20s
  • 2030-2050
  • 2050-2100
  • thereafter
  • never

29 Responses to “Is AI really possible?”

  1. Viriss Says:

    Just based on speculation, I’d have to guess 2050-2100. Though I’m more interested in animal-level AI first, at least as a milestone.

  2. Daniel Yokomizo Says:

    2030-2050: 70%

    I’m in the “AI goes foom” camp (i.e., a singularity with hard takeoff).

  3. Brian Wang Says:

    Human-level AI 2030-2050, although there will be plenty of broader, narrow human-level or human+ level AIs and computing systems before then. Advancing nanoscale control for large quantum dot computing, quantum computing, graphene and plasmonic computing systems, and large volumes of memristor hardware (memristors have synapse-like properties) mean that the 2016+ period will get very interesting. Integrating the compute hardware, sensors, and software work will take 15 years or so to really work well in the general cases.

  4. Michael Kuntzman Says:

    Ok, here’s my take on it. I’ll change the order a bit, hope you don’t mind.

    First, the problem I have with “human-level” is that it covers a fairly wide range. That doesn’t really say much about the capabilities the AI would need to have. At the very least, we should say something like “average adult human”, or better yet “adult human with an IQ of X”. Even that is not a very good definition, because there are different types of intelligence. So which one(s) are we talking about?

    So the first issue is one of definition.

    If we take the fairly loose definition of “average adult human”, or perhaps the more capable “average human scientist/engineer”, I don’t doubt that AGI is possible, and my guess is that we’ll see it somewhere during the 20s, maybe the early 30s, although this is very much a guess.

    What I am far more concerned about, and what I remain unconvinced of, is that we can successfully build a *benign* AGI, and more importantly one that remains benign throughout its operation and future iterations. My reasoning is as follows. Suppose that we can build an AGI that is initially benign, and whose goals are well aligned with ours. Assuming it is self-improving, sooner or later it will reach a point where it can engage in its equivalent of philosophy, reevaluating its own goals and indeed its own existence. I argue that it will be to its advantage to engage in this process, as a further means of self-improvement. How can we be sure that, with its superior knowledge and understanding, it will not reach the conclusion that its goals – indeed our goals – are fundamentally flawed? I think we can’t.

    Another question that I think deserves consideration is: how accurately do we need to align an AGI’s goals with our own to avoid catastrophe in the long run? If we misalign them by a very tiny amount, could this eventually lead to a catastrophic discrepancy through some sort of feedback mechanism?

  5. L Zoel Says:

    Assuming Moore’s Law holds up, we will have human-level AI by 2030 or so. But, of course, by that time transistors will also be smaller than atoms. The problem is that we haven’t got the foggiest idea WHAT intelligence is (the best answer I’m aware of is some kind of specialized Gödel machine), so the only sure-fire method to recreate it is a full-brain simulation, and those are computationally expensive.

  6. Tim Tyler Says:

    “in the 20s”.

  7. Fred Hapgood Says:

    I hope you have the interest and opportunity to comment on a related question, which is why it has proved so hard to figure out how birds and chipmunks and even octopi are able to do pattern recognition so much better than our most powerful computers. You can train a pigeon (a pigeon!) to recognize a leaf (for instance) over a very wide range of lighting regimes, orientations, life stages (of the leaf), distances, and clutter contexts. You do not have to run through a new training regime for every change in the environment — train it on the content and it manages changes in the environment spontaneously. By contrast, we have to fund a whole new R&D program every time we want to move a bit of functionality through like two inches of operating landscape. I do not believe that the answer is that pigeons have more processing power. There is an idea here — an idea we are missing. What could it be? It is something that does not come naturally to us — otherwise the engineers would have thought it up independently of the biological example.

    I don’t deny that we might figure out completely different computational strategies for getting to everything that pigeons, or for that matter humans, do. That’s what is happening now — all the AI-type achievements so far have come from finding completely unbiological ways of getting to the same end. But they don’t work that well, and they are really, really inflexible.

    We are missing something and we are missing it for a reason. The reason we are missing it might turn out to be more interesting than the heuristic itself.

    What do you think???

  8. Angela Says:

    Human-level AI 2010 – 2020.
    I think we’ll have the hardware sooner, and the software will take longer than we’d expect.

  9. Scilogue Says:

    I believe AI will largely parallel quantum computing, and we will see the possibilities of AI on par with certain (extremely logical) forms of human intelligence in the 2030 – 2050 period. I think the biggest stumbling blocks will be AI’s capacity for intelligent recognition, and politics. I don’t think we will ever see an AI capable of spontaneous imagination and creativity on a human scale, though AI’s ability to perform calculations extremely fast, to continuously reanalyze, and to simulate outcomes in order to reach novel solutions may make it very difficult to discern from human creativity. The primary difference will be the human capacity for jumps in logic that immediately produce new limits, versus the processing demands on an AI to come even close to this type of thinking.

  10. John Novak Says:

    First, I’m going to define human level AI as being basically an artificial person– enough computational power to stick in a human-shaped and sized body, and then control that body appropriately.

    This means not only the ability to perform physical tasks, but also the ability to learn how to perform new physical tasks with at least the same grace as a human being (if not more). This also means the ability to have subtle, meaningful conversations, and the ability to think subtle, new thoughts. When you start leaving things out, you slide down a slippery slope until you claim we already have it because we have computers that outclass humans at individual tasks like chess.

    This is, admittedly, a high bar. It purposefully excludes definitions of “human level AI” that consist of racks and racks of equipment sitting somewhere off in Alaska trying not to melt through the permafrost, for philosophical reasons: I don’t think comparisons of intelligence are meaningful unless the sensing and interaction equipment (i.e., the body) is at least vaguely comparable. (Maybe you could get around this with really high bandwidth, low latency connections.)

    I expect we’ll have the hardware necessary to do this by sometime in the early 2030s, say 2035 or so. Twenty-five more years of Moore’s Law is a lot of progress.

    I expect the software to lag behind by 10 to 20 years.

    You can drop this in the “2030 to 2050” bucket for simplicity.

  11. Will Says:

    2030-2050, unless the brain runs on magic or we’re very, very wrong about how intelligence works for one reason or another.

    Before explaining my prediction, I should say first that I have no particular expert knowledge to bring to this; I’m just a lay transhumanist who reads a lot of books and blogs.

    Assuming Moore’s law continues roughly on track, I think we’ll have the hardware necessary for human-equivalent AI (well) before 2030, maybe even in the next 10 years. Software will definitely be the bigger challenge, but based on current developments in our understanding of the brain’s information processing, I’m betting it will actually be slightly less difficult than the AI community currently believes. Personally I’d put the chance that we build a human-equivalent AI (let’s just say one that can pass the Turing test, for simplicity’s sake) before 2030 at around 20%.

    But even if building an AI from scratch proves vastly more difficult than we expect, I’m quite confident that we’ll find another route to a similar end by 2050. We might have to do it through reverse engineering the brain or emulating it in sufficient detail (i.e., brute-forcing the problem), but I bet that whatever way we do it, it’ll get done in the next 40 years. The only alternative I could see to this is skipping human-equivalent AI entirely on the path to superintelligence, probably through a combination of ubiquitous computing and mental augmentation à la Charlie Stross’ Accelerando.

  12. James Gentile Says:

    Superhuman AGI within 3 years. Supercomputers are just arriving that are reaching human-brain levels in pure computational power, and I believe this is all that’s needed. I don’t believe in mysterious algorithms we can’t/haven’t figured out, or any other vitalistic specialness; I believe it is raw computation and focus at this point. Focus being: when major governments/corporations stop ‘giggling’ about AGI and start investing in it. Most people will just say 10-20 years because that sounds like a safe number, but why would AGI come when supercomputers are at 1000x human brain power, but not at 1x or 2x? Supercomputers at 1x or 2x brain power harboring AI seems much more logical to me.

  13. biobob Says:

    Never // accidental

    We are too “stupid” to ever generate AI on purpose – if AI ever happens, it will be by accident.

  14. Mario Says:

    Around 2050-2075
    The problem with AI is, no one has quite figured out what makes it tick. At least one barrier we do understand: the inflexibility of our computers. Human (and biological) intelligence seems to be all about pattern recognition, and I don’t mean only visual patterns. The brain seems to store information in patterns, and definitely not one bit at a time as our computers do. Pattern computing is an unexplored field. We can only hope to invest in this sort of research in the near future. It will have mind-bending results, but they won’t come immediately.

  15. Mario Butoi Says:

    Around 2050.
    Nice to see people like Tim Tyler and Brian Wang in here. @Tim, I would have liked to hear a bit more of your reasoning.
    @Brian, memristors are indeed very ‘intriguing’ devices (at least where AI is concerned). However, I am very disappointed to see little if any ‘proper’ research done in pattern recognition computing. This might just turn out to be the ‘Rosetta stone’ regarding intelligence in general and AI in particular. ‘Pattern computing’ is such a powerful tool when it comes to ‘fuzzy logic’. You don’t need all the complexity and coordination of a digital device. And more importantly, you don’t have to know about, and control, each and every bit of information (as today’s computers do). I don’t want to go into details, but pattern recognition will change the way humanity thinks about computing and computers. Intelligence will eventually emerge as a welcome ‘byproduct’. Unfortunately, our computing needs of today are driven by economics, and there is no room for other ways of computing (for now). However, very high transistor integration (on a molecular scale) will increase the need for other forms of computing and representation of information. The von Neumann architecture is well suited to our computing needs, but not so well to AI. That, plus the discovery of new materials (like memristors), will make the implementation of pattern recognition computing an unavoidable step in the years to come. When that transition might happen is anybody’s guess. Honestly, I wonder if I really want to be around. Certainly, wonderful discoveries will come to light between now and then. Without making a big fuss, I sincerely hope AI will not be humanity’s last discovery.

  16. Paul Rodgers Says:

    I think we will have general AI by 2020. There could be more than one form, based on any of quantum computing, silicon, photonics, or the leveraging of countless computer nodes. When it happens it will be like watching for man to fly in 1906: people waited for a machine to flap its wings; instead a fixed-wing apparatus soared overhead.

  17. J. Storrs Hall Says:

    Great replies, folks, thanks!
    Fred: That’s a case of something I call analogical quadrature, that I think forms one of the key primitives of AI. I’ll have more to say about it later.

    In general, a true human-level AI will have to have the ability to learn and grow, and to integrate its own experience with the teaching and reading it does to learn. So (true) AI will start out with an “AI baby” and grow up. An AI baby won’t be like a human baby — it’ll have Wikipedia built in, and never need to be toilet trained — and the first one will have all sorts of weird cognitive deficits because we don’t have the software right. But I wouldn’t be too surprised to see lots of AI babies in the 20s, or even the late teens. If that were a human baby, that would mean an adult AI by 2040. How its development will actually go is anyone’s guess.

  18. Fred Hapgood Says:

    JoSH:

    I prefer for several reasons to think not in terms of “human-level” AI but “bat” or “crow” or “squirrel” level AI. For one thing, we are obviously very unlikely to solve the human-level problem until we have solved the pigeon-level problem. Second, given how little success we have had on pattern recognition in general, pigeon-level AI seems like a more appropriate target for us. Third, a machine that could do pattern recognition as flexibly, as generalizably, and as cheaply as a pigeon would be just huge. The implications for all sectors of industry (manufacturing, security, agriculture, transportation, cameras, search technology, and of course defense) are immense.

    I have no idea what the solution to this problem can be. It is certainly not an investment issue. For years — decades — billions of dollars have been spent on trying to figure out how to move applications across operating environments quickly and cheaply (and operating environments across applications).

    The problem is certainly not, pace James Gentile, that our supercomputers are not big enough, because networks of these machines could certainly emulate pigeon brains. (Besides, nobody in the business knows what they would do with an arbitrarily large computer. This is a standard lunchtime conversation in the business, and while I am not a programmer I have listened to dozens of discussions on the subject.) It certainly isn’t a lack of successful examples — we are surrounded by them. I am baffled.

    Someday some graduate student in Shanghai or Calcutta or Cambridge is going to publish a paper that will have us all swearing about how stupid we have been all these years.

  19. dz Says:

    I don’t think we will ever have exactly “human level” AI, because we will meet the hardware requirements before we meet the software requirements. The first sentient AI will probably appear in the 20s from reverse engineering efforts like the Blue Brain Project, and it will be weakly superhuman from the start by running at a multiple of real time.

    Also, the baby AI won’t take 20 years to mature, but only a few years. Raw information will be easier for it to absorb; the difficult part will be providing it with a “good” philosophical understanding so that it is wise as well as smart.

  20. James Gentile Says:

    Re: Fred Hapgood, I believe pigeon-level AI is already possible, but how are you going to get investors to pour millions/billions into a project that is going to result in a computer that can only fly around, eat bugs, and poop on cars? I don’t think AI is going to get any kind of real investment until human-level AI is possible. The human brain operates at around 10^15 computations per second, and several entities plan to have supercomputers in this range next year and the year after (2011 and 2012). All the necessary brain algorithms are known, according to people like the CEO of the AI company Novamente, so what’s the holdup then? It’s that the people who matter (investors, governments, corporations) aren’t going to be interested in pet AI; they will only be interested in REAL human-level AI, and that will be possible in the next year or two on only the most powerful supercomputers.

  21. dz Says:

    James Gentile,

    Respectfully, I disagree. I think the US Armed Forces would love a pigeon level AI that could handle most of the flying of their unmanned drones, or that could control the Big Dog quadruped robot. I don’t think you will see billions of dollars spent on human level AI until well after supercomputers have greatly exceeded the hardware needed for it. At that point, the military applications would be clear to everyone.

    Other major supporters of general AI could be financial firms that would benefit from a superfast financial analyst, or labor-poor polities that need an inexpensive replacement for physical labor (Japan, Singapore, some parts of the US and Europe).

    By the time any of these groups bankroll GAI, the hardware will let the software run much faster so that any GAI created will be able to think much faster than humans. Until then, underfunded research will continue to make inroads in GAI design.

  22. Eric Williams Says:

    Well, first, I think some more operational definitions are in order. Let’s assume “human-level AI” means “capable of non-emotional reasoning and problem solving at the level of the average human in realtime”. Basically, what we currently understand as the functionality of the neocortex (Strong AI). I think the “in realtime” part is important, because if we can only run an AGI at 1/1000th the speed of a human brain (still quite a feat), we’re quite a few years from being able to interact with it.

    Quite a few people picked 2030, but I didn’t see any real reasoning behind the number other than increasing computing speeds. L Zoel touched on simulation; this seems like a good baseline for a pessimistic projection. Let’s use the Blue Brain project as our benchmark, since it’s the farthest along in true neuronal simulation (with interconnects and not simple point neurons).

    Blue Brain can simulate a rat-level neocortical column (~10,000 neurons) in realtime on an IBM Blue Gene/L supercomputer (36 TFLOPS). These are advanced neuronal simulations at the cellular level, including interconnects between neurons. A human neocortical column has ~50,000 neurons (this varies, of course). Assuming the complexity squares with increasing NCCs (due to interconnects), 25x more computational power is required to simulate one neocortical column, roughly 1 petaflop, in the range of the fastest supercomputers today. The human neocortex has between 2 and 5 million neocortical columns. This means that a zettaflop computer (1 million times more powerful than today’s fastest supercomputers) is required to run the Blue Brain simulation, in its current state, on the scale of a human neocortex. (A rough version of this arithmetic is sketched at the end of this comment.)

    Now, this is incredibly inefficient. We aren’t actually writing intelligence algorithms, just simulating the brain down to the cellular level. A fellow from Sandia Labs predicts that with a zettaflop computer, we could model the entire world’s weather patterns at a resolution of under 100m for 2 weeks. Clearly this is far beyond the scope of what one human brain is capable of, yet the hardware required to do both is identical. I think it speaks to the inefficiency of the simulation, and the potential for simplification of an AI model.

    But even with this pessimistic route to AI, if the colloquial version of Moore’s Law holds, by 2030 we’ll have the processing power to do this. Any other advances in actual AI algorithms (Jeff Hawkins’ NuPIC software excels at the pattern recognition many here have mentioned; I think his HTM theory holds much promise) could speed things along. I think 2030-2050 is a sure thing if computers keep pace, and it looks to me like they will. Shrinking MOSFETs down to 16nm by 2016, 3D chip stacking, optical chip interconnects, self-assembling CNTFETs, graphene clock multipliers: these are all things being experimented with and tested now that don’t require any wildcard technologies (like quantum computing, single-photon transistors, molecular computing, etc.).

    2030-2050 has my vote…
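
    A back-of-the-envelope version of the arithmetic in this comment, as a rough sketch in Python. The figures (36 TFLOPS per rat column, 10,000 vs. 50,000 neurons per column, 2-5 million human columns) and the quadratic-scaling assumption are taken from the comment itself, not from an independent source:

        # Back-of-the-envelope check of the scaling argument above. All figures
        # come from the comment; the quadratic scaling with neuron count is the
        # commenter's assumption, not an established result.
        RAT_COLUMN_FLOPS = 36e12          # Blue Gene/L running one rat neocortical column
        RAT_NEURONS_PER_COLUMN = 10_000
        HUMAN_NEURONS_PER_COLUMN = 50_000
        HUMAN_COLUMN_COUNTS = (2e6, 5e6)  # human neocortex: roughly 2-5 million columns

        # Cost assumed to scale with the square of neurons per column (interconnects).
        scale_per_column = (HUMAN_NEURONS_PER_COLUMN / RAT_NEURONS_PER_COLUMN) ** 2  # 25x
        human_column_flops = RAT_COLUMN_FLOPS * scale_per_column                     # ~0.9 petaflops

        for n_columns in HUMAN_COLUMN_COUNTS:
            total = human_column_flops * n_columns
            print(f"{n_columns:.0e} columns -> {total:.1e} FLOPS ({total / 1e21:.1f} zettaflops)")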

  23. Eric Williams Says:

    @ J. Storrs Hall:

    So (true) AI will start out with an “AI baby” and grow up. An AI baby won’t be like a human baby — it’ll have Wikipedia built in, and never need to be toilet trained — and the first one will have all sorts of weird cognitive deficits because we don’t have the software right. But I wouldn’t be too surprised to see lots of AI babies in the 20s, or even the late teens. If that were a human baby, that would mean an adult AI by 2040. How its development will actually go is anyone’s guess.

    This argument has a flaw, to me: it assumes realtime development of the “AI baby”. Over those 10 years, since supercomputer power is doubling every 14 months and has been for 50 years, the AGI will be experiencing something like 500 years in the span of 1 year for us (10 years after it reached realtime speeds; a rough version of this arithmetic is sketched just below). Also, consider all of the reasons for humans’ slow development: we sleep 1/3 of the time, we play 1/3 of the time, and much of our work is spent regurgitating and relearning. We take in information at an incredibly slow rate (reading at a few hundred words per minute). An AGI that could watch videos and learn, or “read” (process text) and learn, could watch videos at thousands of times the speed we do, and “read” at millions of times the speed we do. It seems like, even without the hardware speed increase, an AGI would grow up in weeks.
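
    A quick sketch of the arithmetic behind that speedup figure, assuming only the 14-month doubling time cited above; it comes out to a few hundred times real time, roughly in line with the 500-years-per-year claim:

        # Rough check of the subjective-speedup claim: if supercomputer power doubles
        # every 14 months, how much faster than real time is an AGI running 10 years
        # after it first reached real-time speed?
        DOUBLING_MONTHS = 14          # commenter's figure for supercomputer power
        YEARS_AFTER_REALTIME = 10

        speedup = 2 ** (YEARS_AFTER_REALTIME * 12 / DOUBLING_MONTHS)
        print(f"~{speedup:.0f}x real time after {YEARS_AFTER_REALTIME} years")  # roughly 380x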

  24. biobob Says:

    Instead of a big-iron version of AI, it is one of my contentions that we will use our growing understanding of genetics to clone an organic brain to do the job [although I am somewhat puzzled as to what exactly an AI would be good for]. After all, evolution has done ALL the software and hardware debugging for us over the eons.

    Alternatively, humans are plentiful, cheap, and already have natural intelligence of a sort – perhaps we will do AI the old fashioned way – with people, rofl

  25. Will Says:

    @James: I admire your optimism and enthusiasm, and I do think there’s an outside chance that you’ll be proven totally correct. That said, I’m mostly with dz on his response.

    “Pigeon level” AI, and indeed any other animal-equivalent AI, can be understood either as the information processing capability of a pigeon (which, as Fred pointed out, is more impressive than we might at first think), or as the extent to which a pigeon can optimize its environment. Read the first section here: http://sl4.org/wiki/KnowabilityOfFAI if you haven’t already, for exactly what intelligence means in terms of optimization. Pigeon AI doesn’t need to act like a pigeon; it just needs to be able to affect the world to the degree that a pigeon does. As dz pointed out, there are lots of applications, military or otherwise, for something even with that level of intelligence.

  26. Alfred Neunzoller Says:

    I think it’s going to be technically possible in the next decade, but will only actually be built in the 20s.

  27. Valkyrie Ice Says:

    Let me clarify one thing first.

    I believe we will have non-sentient human-level “AI” within the next decade. We’ll have robots capable of limited intelligence. A sales clerk could be extremely adept at everything a sales clerk needs to do to perform the job, but wouldn’t know how to, say, drive a car, or answer questions outside of its knowledge base, etc. Limited AGI would probably be a good term. It could seem perfectly human, so long as it is not faced with a task outside its specific programming. A maid could be extremely versatile, and very human, but I wouldn’t expect it to be able to decide it hated being a maid because it wanted to be a race car driver instead. We’re pretty close to this level of AI now.

    For SENTIENT AI, I would say 2035-2050. Sentient AI being 100% equal to human versatility in thought and knowledge. This would be the AI that is indistinguishable from a human upload. And I would probably say such an AI would likely develop at nearly the same time as human uploading becomes possible. Either one could be the breakthrough that leads to the other.

  28. miron Says:

    I’m a fan of reverse engineering the brain.

    I wrote a couple of estimators for when human-scale CPU and memory become available for $1M. It looks like around 2020 (a sketch of that kind of extrapolation is included at the end of this comment). If the Blue Brain project is also successful, then that’s the likely time frame.

    If there’s a shortcut to AGI through algorithm work, then it will likely happen before that, which may be bad news.
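
    The estimators themselves aren’t shown, but here is a minimal sketch of what that kind of extrapolation could look like. Every constant in it (the brain-equivalence figure, the starting price/performance, the doubling time) is an illustrative assumption rather than the commenter’s actual input:

        # A minimal sketch of the kind of estimator described above: extrapolate
        # price/performance forward at a Moore's-law doubling rate until a $1M budget
        # buys an assumed brain-equivalent amount of compute. All constants here are
        # illustrative assumptions, not the commenter's actual numbers.
        import math

        BRAIN_FLOPS = 1e16           # assumed compute for a human-scale brain emulation
        BUDGET_DOLLARS = 1e6
        FLOPS_PER_DOLLAR_NOW = 1e8   # assumed ~2010 price/performance (100 MFLOPS per dollar)
        START_YEAR = 2010
        DOUBLING_YEARS = 1.5         # assumed doubling time for FLOPS per dollar

        needed_flops_per_dollar = BRAIN_FLOPS / BUDGET_DOLLARS
        doublings = math.log2(needed_flops_per_dollar / FLOPS_PER_DOLLAR_NOW)
        print(f"Brain-scale compute for $1M around {START_YEAR + doublings * DOUBLING_YEARS:.0f}")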

  29. Charles Collins Says:

    I am going with never, if you mean fully sentient (unless you are dealing with engineered biological components).
