Foresight Background 2, Rev. 1
Accidents, Malice, Progress, and Other Topics
© Copyright 1987-1991, The Foresight Institute.
The novel power of nanotechnology springs from two sources: the ability of assemblers to make almost anything and their corollary ability to make copies of themselves. Self-replicating molecular machines will bring novel power to do good, obviously: they will enable us to produce material abundance using clean, unobtrusive processes, and they will give new tools to science and medicine. Equally obviously, they will bring novel power for doing harm: an abundance of conventional weapons could be destabilizing, and new weapons, such as programmable germs, could make possible whole new kinds of warfare.
A commonly discussed form of harm, though, is quite different from a military threat. Engines of Creation warns that "Tough, omnivorous 'bacteria' could outcompete real bacteria: they could spread like blowing pollen, replicate swiftly, and reduce the biosphere to dust in a matter of days." It refers to this threat as the "gray goo problem" (though the dusty version wouldn't be very gooey) and states that "we cannot afford certain kinds of accidents with replicating assemblers." This may be true, but should this kind of accident be a major focus of concern? Our attention is a limited resource, and we need to allocate it wisely.
Nanotechnology has analogies to genetic engineering: both fields involve self-replicating molecular machines. In the early days of genetic engineering, the scientists involved became so concerned about the possible consequences of the escape of altered, independent replicators that they declared a moratorium on research, to give a chance for thought. Genetic engineers fiddle with natural organisms, that is, with organisms evolved to survive in the world outside the laboratory. This naturally raises concern about what might happen when altered organisms return to the outside world. As descendants of competitive organisms, but different, might they not also compete, and perhaps compete better?
In practice, after consultation with microbial ecologists, genetic engineers realized that success at living in the natural world is no easy thing. Nature had been doing recombinant DNA experiments on bacteria for billions of years, "trying" to make the fiercest, most competitive possible bugs. Genetic engineers would be unlikely to improve on this even if they tried, much less by accident while trying to do something else. Further, laboratory bacteria are adapted to laboratory conditions, weakening them for ordinary life. Just to be sure, genetic engineers typically work with bacteria that are made yet more dependent on laboratory conditions: bacteria lacking the ability to make many compounds, giving them a need for many special "vitamins." Concerns remain. Even dependent, laboratory bacteria can exchange genetic material with independent, wild bacteria. New genes might then find their way into competitive organisms, and maybe (just possibly) do them good instead of simply burdening them with wasted metabolic effort.
What about nanoreplicators? They will differ greatly from existing organisms, and this cuts both ways. It means that they could be built to outcompete any bacterium, spreading vigorously, but it also means that they could be built to be utterly unlike anything that can survive in nature. It was plausible that genetic tinkering might accidentally make a superior wild replicator, because it begins by fiddling with descendants of wild replicators. But for a nanotechnologist, starting from scratch, to make an independent replicator would be no accident; it would be the result of a long and difficult engineering process.
Imagine you are an engineer designing a replicator. Is it easier to design for a single, stable environment, or for a whole set of diverse environments? Is it easier to design for an environment rich in special raw materials, or for one containing some haphazard mix of chemicals? Clearly, design for a single, special, stable environment will be easiest. The best environment will likely be a mix of reactive industrial chemicals of a sort not found in nature. Thus, regardless of concerns for safety, the most straightforward kind of replicator to build would be entirely safe because it would be entirely dependent on an artificial environment.
In reality, there will surely be an active concern for safety. Any replicator developed with these concerns in mind can be checked to ensure that it is, in fact, dependent on a mix of raw materials not found in nature. This will be easy, like checking to make sure that a car won't run on sap, mud, or seawater: that it needs gasoline and oil and transmission fluid and...
If nanoreplicators can be utterly unlike living things, they can be utterly unlike anything that can survive independently. If dependent replicators are easiest to build, then we can easily avoid building anything like a "gray goo" replicator. We need only avoid solving the challenging problems posed by building replicators that can operate in the alien, unpredictable environment of nature. Rather than ensuring that elaborate, reliable safeguards are built into every replicator (to keep danger chained, like the energy of a nuclear reactor), we can simply avoid building in any dangerous capabilities in the first place. And the chance of a dependent replicator somehow accidentally gaining the abilities needed for independent operation would be about the same as the chance of a car accidentally gaining the abilities needed to fly to the Moon.
But what about the advantages of building things using natural raw materials? There will be incentives to build assembler systems that can convert natural raw materials into useful products, but these systems need not build copies of themselves. Dependent replicators in vats of chemicals could build assembler systems able (given enough cleverness in design) to operate in nature, if we want such things. It is hard to imagine a practical application of nanotechnology that would demand independent replicators for its success or practicality.
There are real threats from nanotechnology, but (given a trace of sense and control) these threats have nothing to do with accidentally creating gray goo. The real threats stem not from accidents, but from deliberate abuse. The gray goo threat, though a fine symbol of the destructive power of nanotechnology, is likely to receive too much attention rather than too little. A policy that treats it as primary will be a policy that misdirects efforts and may lead to dangerous decisions, such as attempting to block advances and thus relinquishing leadership to others with fewer concerns.
When asked, "What about accidents with uncontrolled replicators?" the right answer seems to be "Yes, that is a well recognized problem, but easy to avoid. The real problem isn't avoiding accidents, but controlling abuse."
For more on safety and abuse of nanotechnology, see Policy Issues.
Progress in technology (meaning "increasing abilities," for good or ill) is not a one-dimensional thing like time. It has as many dimensions as there are distinct characteristics of materials, devices, and systems. Progress can push back the frontiers of material strength, transistor speed, aircraft fuel efficiency, or any one of a host of other dimensions.
N-dimensional spaces are hard to draw, and progress is often pictured as one-dimensional, so a crude picture showing two dimensions of progress may have something to contribute. The article "Will Change be Abrupt?" (Foresight Background No. 1) shows graphs of a more typical sort, with one dimension depicting technical progress and the other depicting time. Figure 1 of this article shows two dimensions of progress, in design ability and in fabrication ability, and attempts to represent states of technology in terms of these. The point marked "today" represents our current levels of design and fabrication ability, and the block marked "current capabilities" represents everything that can be done within these levels of ability. It includes chipping flint, building supercomputers, and flying to the Moon.
This graph has obvious handicaps. Design is not a single dimension: different abilities are needed in designing supersonic aircraft, software, and molecules. Likewise, different abilities are needed to fabricate ceramics, semiconductors, and molecular assemblers. This means that there is no obvious way to decide how to order different bundles of abilities along either of these axes, much less to decide what a scale might mean. Worse, solving fabrication problems is often a matter of design, muddying the distinction between the axes. Still, as abilities accumulate, we clearly move toward greater design and fabrication abilities, and there are important landmarks in both of these directions.
In moving along the fabrication dimension, we will eventually reach the threshold of assembler technology and move into the domain of nanotechnology beyond. This threshold is fuzzy, since early assemblers may be extremely crude and limited, and since progress in assembler technology will continue long after powerful abilities have been achieved. In practical terms, however, there is reason to think that the threshold will be fairly distinct: assembler technology is highly self-applicable, encouraging the use of assemblers to build better assemblers. This suggests that we can expect to cross this band fairly rapidly, once we enter it.
If we remain limited to using ordinary matter, the fabrication dimension will be limited: one cannot do better than the ability to arrange atoms as one wishes. This limit is represented by a vertical bar to the right.
In moving along the design dimension, we will eventually reach the threshold of artificial intelligence (which is chiefly a software design problem). As usual in these publications, artificial intelligence (AI) refers to what is known in the computer community as "strong AI": systems that can genuinely think and learn, rather than simply following a set of laboriously engineered rules. This threshold is again fuzzy, but again fairly distinct. We can expect to cross it fairly rapidly, once we reach it, because AI will be a highly self-applicable technology. Since design deals with complexity, to which there seem no obvious limits, no bar is shown across the top of the graph.
Figure 1 labels several regions. The region without assemblers or AI (including all of today's impoverished abilities) contains class 1 technologies. Developing assemblers without AI would carry us into the region of class 2 technologies; developing AI without assemblers would carry us into class 3. Eventually, the development of AI and assemblers will carry us into class 4.
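The four classes reduce to a small lookup on two capabilities. This toy sketch (the function name and structure are my own, not from the article) makes the taxonomy explicit:

```python
# Toy sketch of the class taxonomy in Figure 1: two boolean
# capabilities (assemblers, AI) determine the technology class.
def tech_class(has_assemblers: bool, has_ai: bool) -> int:
    """Map a (fabrication, design) capability pair to its class number."""
    if has_assemblers and has_ai:
        return 4   # both assemblers and AI
    if has_assemblers:
        return 2   # assemblers without AI
    if has_ai:
        return 3   # AI without assemblers
    return 1       # today's abilities: neither

print(tech_class(False, False))  # 1
print(tech_class(True, False))   # 2
print(tech_class(False, True))   # 3
print(tech_class(True, True))    # 4
```

The point of writing it out is that the two axes are independent: which class we pass through next depends only on which capability arrives first.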
Figure 2 plots some technologies within this conceptual framework; details of relative location should be taken with a large grain of salt. In class 1, protein machines are within our ability to fabricate (hence they lie to the left of the fabrication-edge of "current capabilities"), but they are currently beyond our ability to design (hence they lie above the design-edge of "current capabilities"). Nonprotein molecular machines (based on novel polymer systems) seem less challenging to design, but their fabrication will require some problem-solving. Improvements in software design and computer design and fabrication will give us powerful computer-aided design (CAD) systems, speeding further progress in design.
Class 2 technologies include all assembler-built systems of a degree of complexity that can be managed by conventional means. Unlike borderline systems (such as crude assemblers), these systems will be built by assemblers, relieving designers of the challenge of designing self-assembling molecular components. Thus, advanced assemblers are shown as being easier to design than crude assemblers, though based on more sophisticated fabrication technologies.
Class 3 technologies include genuine AI systems and anything that requires AI-aided design (but not assemblers). The speed of AI systems will depend on fabrication ability for computers. Since it seems likely that AI-aided design would swiftly lead to improved AI and to assemblers, it is natural to consider this region as no more than a transition to class 4.
Class 4 technologies include very fast AI systems; on physical grounds, it seems that assembler-built AI systems can operate at least a million times faster than brains. Powerful design and fabrication capabilities will make possible such ambitious goals as general-purpose systems for repairing tissues and organs cell-by-cell and molecule-by-molecule. With a large investment in design, gaming, and analysis, they should even enable the construction of lasting active shields, designed with large enough margins of safety to give a good chance of stabilizing peace in the face of any feasible attack.
As technology advances, it may follow any of a number of paths, even in the simplified picture shown here. As shown in Figure 3, advance may lead to genuine AI first, reaching class 4 technologies via class 3, or assemblers may come first, reaching class 4 via class 2. Other (less likely?) paths would lead through both transition zones simultaneously, piling revolution on revolution.
For thinking clearly about nanotechnology, AI, and the future, a mental map like this seems useful. It emphasizes that not all nanotechnologies are in the same category: some will be simple, some will be as complex as computers and automated factories, and others will be more complex and challenging to design than anything yet attempted. Further, it reminds us of the unknowns on the path ahead. Even a knowledge, however imperfect, of what sorts of revolutions lie ahead does not tell us in what order they will occur. The transition to nanotechnology will look different if it is aided by AI designers; the transition to AI will look different if it occurs on a foundation of abundant assembler-built hardware.
by: K. Eric Drexler
Originally published in 1988
When faced with something as novel as nanotechnology, it makes sense to look for familiar analogies. Previous publications have compared nanomachines to conventional macromachines, but in important ways nanomachines more closely resemble software systems. Consider the properties of software and conventional machines, then the parallels with assembler-built nanomachines.
Macromachines are made of parts which contain vast numbers of atoms in ill-defined patterns. Having so many atoms, these parts can be made in what amounts to a continuum of sizes and shapes, formed by continuous, analog techniques: molding, cutting, grinding, etching, and so forth. These parts are always imprecise. Machines are made by fitting parts together; in a good design, imprecisions won't add up to exceed overall tolerances. In operation, parts typically change shape slowly; they wear out and fail.
Software mechanisms differ radically. Their parts consist of discrete bits in defined patterns; they do not form a continuum. There is no need to make bits, as there is to make mechanical parts. The fabrication of bit-patterns is a precise, digital process; it is either entirely correct or clearly wrong, never "just a little off." The position of one bit with respect to another is as precise as the mathematical position of "two" with respect to "three."
The digital mechanisms which underlie this precision are made of imprecise devices, but these devices have distinct patterns of interconnection and distinct "on" and "off" states. Failures in the underlying devices can cause sporadic errors in memory and logic, yet if the devices operate within their design tolerances, errors (give or take an occasional cosmic ray) will be completely absent. Digital precision emerges from imperfect devices through a process like that of the automatic alignment found in many computer graphics programs: a device in any state that is nearly-right snaps into a neighboring state that is entirely-right. Each entirely-right state follows from a previous entirely-right state, with no buildup of small errors in, say, the size or alignment of the bits.
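This snapping principle can be shown numerically. In the sketch below (units and tolerances are illustrative choices of mine, not from the article), one running total accumulates analog imprecision step by step, while another snaps to the nearest allowed state after every step, so its small errors never build up:

```python
import random

GRID = 1.0  # spacing of the "entirely-right" states (arbitrary units)

def snap(x: float) -> float:
    """Restore a nearly-right value to the nearest entirely-right state."""
    return round(x / GRID) * GRID

random.seed(0)
analog = digital = 0.0
for _ in range(1000):
    noise = random.uniform(-0.2, 0.2)       # imprecision within tolerance
    analog += GRID + noise                  # analog: small errors accumulate
    digital = snap(digital + GRID + noise)  # digital: each step snaps back

print("analog drift:", analog - 1000 * GRID)   # nonzero accumulated error
print("digital drift:", digital - 1000 * GRID) # exactly 0.0
```

As long as each step's noise stays within tolerance (here, well under half the grid spacing), the snapped value is exact after every step, which is the sense in which "each entirely-right state follows from a previous entirely-right state."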
Nanomechanisms do have obvious similarities to conventional mechanisms. Unlike software, they will be made of parts having size, shape, mass, strength, stiffness, and so forth. They will often include gears, bearings, shafts, casings, motors, and other familiar sorts of devices designed in accord with familiar principles of mechanical engineering. In most respects, nanomechanical parts will resemble conventional parts, but made with far, far fewer atoms. They will little resemble the algorithms and data structures of software.
And yet their similarity to software and digital mechanisms will be profound. As software consists of discrete patterns of bits, so nanomechanisms will consist of discrete patterns of atoms. Atoms, like bits, need not be made; they are both flawless and available without need for manufacture. The parts of nanomechanisms will not form a continuum of shapes, built by inaccurate analog processes; they will instead be chosen from a discrete set of atom-patterns, and (like bit patterns) these patterns will be either entirely correct or clearly wrong. In stacking part on part, there will be no buildup of small errors, as there is in conventional systems.
As in digital circuits and computer graphics programs, a principle of automatic alignment comes into play. When an assembler arm positions a reactive group against a workpiece, forcing a reaction, imprecision of the arm's alignment won't cause imprecision in the position of the added atoms. In making a well-bonded object, molecular forces will snap the atoms either into the proper position, or into a clearly wrong position. (As Marvin Minsky remarks, quantum mechanics doesn't always make things more uncertainquantum states can be extraordinarily definite and precise.) Assembly can with high reliability yield a perfect result.
And again like software, nanomechanisms won't wear out. So long as all the atoms in a mechanism are present, properly bonded, and not in a distinct, excited state, the mechanism is perfect. If an atom is missing or displaced (say, by radiation damage) the mechanism isn't worn; it is broken.
In their shapes and functions, nanomechanisms will be much like ordinary machines. But in their discreteness of structure and associated perfection (to say nothing of their speed, accuracy, and replicability) nanomechanisms will share some of the fundamental virtues of software.
by: Ralph Merkle
Originally published in 1988
"One may now reasonably ask if it is possible to move and alter matter predictably on an atomic scale...we have evidence that we can remove a portion of a pinned molecule, effectively performing transformations on single molecules using the tunneling microscope," say John S. Foster, Jane E. Frommer, and Patrick C. Arnett of IBM's Almaden Research Center in an article in Nature.
The scanning tunneling microscope, as most of you know, is conceptually quite simple. It uses a sharp, electrically-conductive needle to scan a surface. The position of the tip of the needle is controlled to within 0.1 Angstrom (less than the radius of a hydrogen atom) using a voltage-controlled piezoelectric drive. When the tip is within a few Angstroms of the surface and a small voltage is applied to the needle, a tunneling current flows from the tip to the surface. This tunneling current is then detected and amplified, and can be used to map the shape of the surface, much as a blind man's stick can reveal the shape of an object.
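The reason sub-ångström tip control translates into atomic resolution is that the tunneling current depends exponentially on the tip-surface gap; for typical work functions the current changes by roughly a factor of ten per ångström. The sketch below illustrates this; the decay constant and reference gap are textbook-style assumptions of mine, not figures from the article:

```python
import math

# Assumed decay: tunneling current falls ~10x per angstrom of added gap,
# a typical order of magnitude for metal tips; not a figure from the article.
DECAY_PER_ANGSTROM = math.log(10)

def relative_current(gap_angstrom: float, reference_gap: float = 5.0) -> float:
    """Tunneling current relative to that at the reference gap."""
    return math.exp(-DECAY_PER_ANGSTROM * (gap_angstrom - reference_gap))

# Even a 0.1 angstrom change in gap shifts the current by roughly 20%,
# so the 0.1 angstrom positioning cited above is readily detectable.
print(relative_current(5.1))  # ~0.79
print(relative_current(4.0))  # ~10
```

This extreme sensitivity is why the "blind man's stick" can feel individual atoms: a bump a fraction of an atomic radius high produces a large, easily amplified change in current.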
In the Almaden work, the surface is atomically smooth graphite with a drop of dimethyl phthalate (a liquid) on its surface. (The type of organic liquid does not seem critical; many other compounds have been used.) The needle is electrochemically etched tungsten, and is immersed in the liquid. Not only can the graphite surface be imaged in the normal way, but a voltage pulse applied to the needle (3.7 volts for 100 nanoseconds) can 'pin' one of the organic molecules to the surface, where it can be viewed in the normal fashion. A second voltage pulse applied at the same location can remove the pinned molecule (though it often randomly pins other molecules in an as-yet uncontrollable way). In some cases, the voltage pulse will remove only part of the pinned molecule, leaving behind a molecularly altered fragment.
The first application that comes to mind is a very high density memory. The minimum spot size demonstrated in the new work is 10 Angstroms, though a somewhat larger size might be required in practice. If we assume that a single bit can be read or written in a 10 Angstrom square, then a one square centimeter surface can hold 10^14 bits. That's one hundred terabits, or roughly twelve terabytes. The 100 nanosecond pulse time sets a 10 megabit/second maximum write rate, though this might be degraded for other reasons. At this rate, it would take several months to a year of constant writing to fill a one square centimeter memory. Access times will probably be limited by the time needed to move the needle (which might be a significant fraction of a second to travel one centimeter), giving access times similar to those of current disk drives. The manufacturing cost of such a system is unclear, but the basic components do not seem unduly expensive. It seems safe to predict that someone in the not-too-distant future is going to build a low-cost, very large capacity, secondary storage device (disk replacement) based on this technology.
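The back-of-the-envelope figures above can be checked in a few lines. This sketch simply reproduces the arithmetic from the paragraph (spot size, pulse time, and the resulting capacity, write rate, and fill time):

```python
# Checking the memory-density estimate: 10 angstrom spots on 1 cm^2,
# written at one bit per 100 ns pulse.
CM_IN_ANGSTROM = 1e8            # 1 cm = 10^8 angstroms
SPOT = 10                       # angstroms per side of one bit's square

bits_per_cm2 = (CM_IN_ANGSTROM / SPOT) ** 2
print(f"{bits_per_cm2:.0e} bits/cm^2")   # 1e+14

PULSE_S = 100e-9                # 100 ns per written bit
write_rate = 1 / PULSE_S        # maximum bits per second
print(f"{write_rate:.0e} bits/s")        # 1e+07, i.e. 10 megabits/s

fill_days = bits_per_cm2 / write_rate / 86_400
print(f"{fill_days:.0f} days to fill")   # ~116 days: several months
```

The ideal-case fill time comes to about four months of constant writing; with any practical overhead it stretches toward the "several months to a year" quoted above.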
The larger implication of this work, however, is that it may put us on the threshold of controlled molecular manipulation. While we can easily imagine more powerful techniques than poking at objects with a sharpened stick (we clearly want a pair of molecular-sized hands), the great virtue of this technique is that we need not imagine it at all: it is real and is being pursued in many laboratories. Even better, we can imagine incremental improvements in this technique that ought to be achievable, using, perhaps, two sharpened sticks (chopsticks, anyone?) and shaping the tip of the stick in a more refined and controlled way. The tip, viewed at the atomic scale, is rather rough, and there seems no reason why we cannot do better, perhaps by examining and modifying one stick with the other stick.
These larger implications have not been lost on the scientific community. In an editorial on atomic-scale engineering in the same issue of Nature, J. B. Pethica of the Oxford Department of Materials Science says that the scanning tunneling microscope has "...become one of the principle gedanken tools for nanotechnology - the proposed direct manipulation of matter, especially biological, on the atomic scale," and "The work of Foster et al. represents a significant attempt at the much more important and difficult problem of the direct manipulation of the structure of biological materials."
Dr. Ralph Merkle's interests range from neurophysiology to computer security. He currently works in the latter field at Xerox PARC.
Compiled in 1988
Howard Rheingold, Tools for Thought (Simon & Schuster, 1985)
Fun book covering past, present, and future computer tools for augmenting human mind. Includes hypertext (Nelson, Engelbart, Xanadu), ARPAnet, "epistemological entrepreneurs," future of network culture. Photos, personality profiles. For general readers.
Gerald Feinberg, Solid Clues: Quantum Physics, Molecular Biology, and the Future of Science (Simon & Schuster, 1985)
Feinberg, a Columbia physics professor and Foresight Institute Advisor, covers a broad range of topics in science, focusing on physics and biology, including careful thinking on where science is going. Rarely does a scientist discuss the future of science: a special treat. Glossary. Accessible (but challenging in parts) to non-technical readers.
John Sculley, Odyssey: Pepsi to Apple, a Journey of Adventure, Ideas, and the Future. (Harper & Row, 1987)
A "business" book for those who don't read business books. The story of a top Pepsi executive (read about a strange but amusing corporate domain one hopes never to enter) and how he escaped to Apple Computer to make a difference in the real world. Epilogue on Sculley's dream, the Knowledge Navigator, which sounds like hypertext and hypermedia publishing with some AI.
Ted Nelson, Computer Lib/Dream Machines (Microsoft Press, 1987)
Newly revised version of the classic work which revolutionized the way we see computers, by the man who is widely regarded as the father of hypertext.
Bernardo Huberman (ed.), The Ecology of Computation (Elsevier Science Publishers, 1988)
Open-systems perspective on advanced computing. Includes a set of three papers on agoric market-based computation. For the computer literate.
Proceedings of the IEEE Micro Robots and Teleoperators Workshop: an Investigation of Micromechanical Structures, Actuators, and Sensors. Held November 6-11, 1987. Hyannis, MA
28 papers on topics such as "gnat robots," micromotors, and "Nanomachinery: Atomically Precise Gears and Bearings." Available through technical libraries or call the IEEE in New York City.