
Nanotech and AI

With the Singularity Summit fast approaching, it’s worth spending a little time pondering the perennial question of nanotechnology vs. AI: which will happen first, will they be independent, symbiotic, or synergetic, and so forth?
I say perennial because this is a question that has been discussed at Foresight meetings ever since the first Conference 20 years ago. AI was mentioned as a potentially disruptive technology in Engines of Creation — even short of autonomous, human-level intelligence, automated design systems would enable the creation of highly complex nanosystems, well beyond the capabilities of mere human designers.
How did that prediction pan out? I would have to say that it was so accurate, and happened so soon, that it’s taken for granted today — human designers with pencil and paper would have no chance of designing any of today’s complex engineered systems. Like many areas, complex design is one that was once considered AI but isn’t any more. In the ’90s, as part of a big AI project at Rutgers, I wrote a program that designed pipelined microprocessors given a description of the desired instruction set. That kind of thing was already beginning to be considered merely “design automation” rather than AI by then, and it certainly would be now.
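To give a flavor of what “design from a specification” means, here is a deliberately toy sketch — entirely hypothetical, not the Rutgers program — in which an instruction-set description (each instruction listing the resources it uses) is reduced automatically to the minimal ordered pipeline covering every instruction. All names and structure are invented for illustration.

```python
# Toy illustration of design automation: derive a pipeline from an
# instruction-set description. Each instruction lists the stages it needs.
INSTRUCTION_SET = {
    "add":   ["fetch", "decode", "execute", "writeback"],
    "load":  ["fetch", "decode", "execute", "memory", "writeback"],
    "store": ["fetch", "decode", "execute", "memory"],
    "jump":  ["fetch", "decode", "execute"],
}

def design_pipeline(isa):
    """Return the minimal ordered list of stages covering every instruction."""
    stage_order = ["fetch", "decode", "execute", "memory", "writeback"]
    needed = {stage for stages in isa.values() for stage in stages}
    return [stage for stage in stage_order if stage in needed]

print(design_pipeline(INSTRUCTION_SET))
# -> ['fetch', 'decode', 'execute', 'memory', 'writeback']
```

A real design-automation tool would, of course, also handle hazards, forwarding, timing, and much more — the point is only that the design falls out mechanically from the specification.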
How about “real” AI, or AGI as it is beginning to be called now?
First of all, it is interesting to note that there are some strong similarities between real AI and real nanotechnology. Historically, they both began with a clear technological vision of great power. Great excitement and intellectual ferment grew up around the ideas. As the fields began to grow, however, the availability of funding attracted many people whose goals had more to do with getting money than following through to the vision. On different timescales, both fields experienced a diaspora in which most of the actual research dealt with near-term applications and did little to advance the central vision.
However, evolution works, and so eventually both fields will break through to their visions by trying everything. We can’t even say which of the sidetracks will turn out to have been dead ends and which ones are essential detours around roadblocks.
Foresight was strongly involved in the Productive Nanosystems Roadmap and we are now a sponsor of the AGI Roadmap. My futurist’s intuition tells me that real AI is about a decade off, and real nanotech a decade after that. But we shall see.
Neo-Luddites tell us that giving humanity whatever powerful new technology they fear would be like “giving loaded guns to children.” And there’s certainly a grain of truth to that — our political process, for example, has managed to take nuclear power, which could have provided clean energy for everyone, and instead created weapons sufficient to wipe us all out, pointed at each other on hair triggers. (Of course the neo-Luddites themselves bear a lot of the blame for that — at least the lack-of-clean-energy part.)
This would doubtless be the case with nanotech too. The power of nanotech, in the sense of the capability to create or destroy, significantly exceeds that of nuclear technology. We could easily wind up in a world of nano-weapons but no nanofactories.
One could even say the same for AI. But AI is capable of being different from other powerful technologies, if we build it right. There are a lot of likely pathways to widespread, useful, non-weaponized AI.
One could argue that if there are children lost in a woods full of wolves and bears, they would be better off with guns than without. But it’s still a tough call. With AI, however, we have an option totally new in the history of powerful technologies. We can give them … an adult.
3 Responses to “Nanotech and AI”

  1. flashgordon Says:

    Intelligence is an attitude; there was a recent article at physorg which mentioned something similar about the correspondence of curiosity and intelligence; those whose personal religion is curiosity become intelligent. Those who don’t; we need to be allowed to leave those who don’t. You people have chosen an extreme mechanical paradigm; and, you’ve chosen to mix yourselves up with the anti-science even though crn has championed ‘the system of three ethics.’ Just read Bill Joy’s ‘why the future doesn’t need us.’

    These robots of yours don’t know the value of human beings for figuring out how the universe works; and, apparently, you guys don’t either! It and you all don’t seem to understand that there’s a vast majority that have chosen the easy way out; believe and you will believe! Believe in me and you will go to heaven! Death is the easy way out for the supernatural religionists! And, they feel they have to drag everyone else with them! They will replicate! They have nothing better to do! What is your robot going to do about that! What is your robot going to do when they claim we should not figure out the universe because they’ve collectively decided that will destroy their religion?

  2. Tim Tyler Says:

    Brains before bodies – but once we have the brains, body development will probably happen rapidly – within a few years.

  3. Nanoman Says:

    Josh are you going to be at the Singularity Summit?
    I wonder if we could build an AI with DNA based nanotech even before diamondoid nanotech? Possible?
