Foresight Background 3, Rev. 1
Dialog, Exploratory Engineering, Bioarchive
© Copyright 1988-2000, The Foresight Institute.
by: K. Eric Drexler
Originally published in 1988
Powerful technologies such as nanotechnology and genuine artificial intelligence are prospects that stir different feelings in different people; indeed, they can stir discordant feelings within a single individual. What are the issues, and how might we respond? The urge to debate these questions has brought three persons into imaginary existence:
Foresight Moderator: Welcome. Our topic today is the dangers and benefits of some of the advanced technologies we foresee, and what response we should make. You're both familiar with the anticipated capabilities of nanotechnology, and of genuine AI. Why don't we begin with your views on their likely consequences?
Pro-Progress Advocate: I'm optimistic and excited. The opportunities ahead are simply amazing. With nanotechnology we'll be able to make almost anything we want in any amount we want, and do it cheaply and cleanly. Poverty, homelessness, and starvation can be banished. Pollution can be eliminated. We can finally open the space frontier. With the help of powerful AI systems, we'll be able to tackle more complex applications of nanotechnology, including molecular surgery to repair human tissue. And that can eliminate aging and disease. People everywhere struggle for greater wealth and better health. With these advances, we can have them -- for all of us.
Pro-Caution Advocate: I wish I could believe what my opponent does, but I can't. We face frightening dangers, and I'm afraid we won't survive them. Powerful technologies give people power, and power can be abused. Nanotechnology will be based on self-replicating machines -- imagine using them to build missiles and other automated military equipment. Imagine adapting them to use as programmable germs for germ warfare. The prospect of nanotechnology could easily prompt a preemptive war. The list goes on. And as for real AI systems -- what if they have their own goals, which don't include our survival?
Moderator: I notice that neither of you seems concerned about runaway nanotechnology accidents, the so-called "grey goo threat."
Pro-Progress: If no one builds it, it's not a threat. A replicator designed to build pocket computers in a vat of chemicals isn't much like a replicator that can run wild in nature. You won't get something that dangerous by accident.
Pro-Caution: Accidental grey goo is a red herring. The real threat is deliberate abuse. And if everyone has this technology, then if it can be abused, it will be abused. And if it can destroy the biosphere, that's threat enough for me.
Moderator: I know nanotechnology has gotten unusual reactions from people -- at least, reactions that seem to violate stereotypes. Have you seen this among your friends?
Pro-Caution: Yes. For example, some of my environmentalist friends are pretty enthusiastic about the idea of replacing modern industry with something completely different, and better -- when they're not worried about replacing it with something infinitely worse.
Pro-Progress: Some of my technological friends seem more than a little worried, too. That's not unusual. I agree that there are real dangers, but I think we can handle them when we come to them.
Moderator: Should people be talking about the dangers now? This has been a matter of some disagreement in the Foresight community.
Pro-Progress: I think talk about dangers is premature. These technologies are years away, and they'll have vast human benefits. Talking about dangers today will just bring out opponents and slow progress. And that means slowing progress in medicine, which will cost lives -- maybe many millions of lives.
Pro-Caution: But think of the millions of lives at stake if things go wrong. These are dangerous technologies, and if talking about their dangers slows them down, so much the better. It would be best if they never happened at all.
Pro-Progress: See -- my adversary would condemn millions, even billions, to continuing poverty, starvation, disease...
Pro-Caution: And you'd have us risk the destruction of the world!
Moderator: You both agree that there are real risks that these technologies will be abused, but disagree about the likelihood of disaster. Superpower relations are important here, and we all know that there have been moves toward reform in the Soviet Union. How does this affect our prospects?
Pro-Caution: I think that it's a very favorable development from the point of view of controlling these technologies. The big risk has always been that they would become the focus of an arms race. With reform and lowered tensions, it should be easier to negotiate treaties to ban these technologies entirely.
Pro-Progress: Good luck with that! How would you enforce a bilateral treaty banning secret work on microscopic technologies and software? Satellites won't do it, and I doubt anyone will put up with secret police everywhere. And what about other countries? The world is a big place, and more and more countries are getting into the technology race. Besides, why try to ban technologies of such great benefit, even if it were really possible? If tensions are lower, then the risks are lower, so they seem even more attractive. If the Soviets reform, and stay reformed, I'd say we should cooperate with them.
Moderator: The idea of banning these technologies seems like a natural response to their dangers. If the dangers really are large -- and I know you disagree on this -- should we try to ban them?
Pro-Caution: We can't guarantee that these technologies will be used responsibly, and if they aren't, they could be used to destroy the world. That's a big danger. Some of my friends argue that we have to try to head off this danger by heading off the technology itself. What choice do we have?
Pro-Progress: But what happens if you try to stop the advance of technology in these directions? It's a big world, and a long future we face. Take nanotechnology: there are so many ways it could be developed, and so many reasons for trying, and so many groups moving in that direction... If you try, how likely are you to succeed in stopping everyone?
Pro-Caution: Not very likely, I suppose, but...
Pro-Progress: And if you convince all your friends and sympathizers -- everyone who shares your concerns, and everyone who is within their political control -- then who will take the lead and end up controlling the technology?
Pro-Caution: ...well...err...people who don't share our concerns, I suppose...
Pro-Progress: Precisely. And if abuse is the problem, that's the most dangerous outcome. A moment ago, it sounded like you were asking for a guarantee of safety, but trying to block the technology can't guarantee safety, either. If anything, I'd say it comes closer to guaranteeing the opposite.
Pro-Caution: That's a nasty situation...
Pro-Progress: You're in favor of caution -- in the name of caution itself, will you agree that attempting to block the advance of these powerful technologies is a dangerous course?
Pro-Caution: Yes, it seems attractive, but I guess it's really pretty reckless. If we drive research into hiding, the whole thing could end up being controlled from a secret laboratory somewhere. I agree; it's better to try to guide the technology than to try to stop it, and you can't have both.
Pro-Progress: Good! And if it's dangerous to try to stop it, isn't it likewise dangerous to stir up opposition by focusing on the risks too soon? I think we should hold off on criticism.
Moderator: Should we present just one side, then? That doesn't strike me as very moderate.
Pro-Caution: Certainly not! The dangers are real, and it would be irresponsible to hide them.
Pro-Progress: A lot of my friends argue that we'll do better if we at least try to play them down, for now. We should at least try.
Pro-Caution: This seems somehow familiar... Here's a fine argument! You've persuaded me that it would be dangerous to try to block advances, basically because we would fail, and our attempts would put us in no position to guide advances. And so you ask me to be silent about the dangers. But how likely are you to convince everyone to be silent?
Pro-Progress: It's about as likely as getting the whole world to renounce research, I suppose.
Pro-Caution: Precisely! And if you convince all your friends -- everyone who shares your concerns -- then who will publicize the dangers, what will they say, and what will they say to do?
Pro-Progress: ...well... people who don't share our concerns, I suppose... This sounds unpleasantly familiar...
Pro-Caution: So, in the name of progress itself, will you agree that it would be dangerous to...
Pro-Progress: All right, all right. I'd prefer responsible technology critics who listen to me, just as you'd prefer responsible technology leaders who listen to you. It would be dangerous to present just one side. Who knows what crazies would rush in to fill the vacuum?
Pro-Caution: Welcome to the critic's club!
Moderator: You're both saying "try to guide, don't try to block." And you're both saying that it's not too early to talk about the dangers that motivate efforts to guide these technologies. But what about the issue of speed? Should we attempt to delay these technologies, or promote them?
Pro-Caution: My gut feeling says to delay these technologies: if they're dangerous, we need time to prepare.
Pro-Progress: I'm looking forward to them, so you know my gut position. Besides, even if technology moves rapidly, we're likely to have many years to prepare -- if we use them. And I think promotion fits in here: if we promote work on these technologies today, that will make people think and prepare. Remember, you've convinced me that it makes sense to talk about the dangers, but people won't really listen unless they see real progress toward the technology.
Pro-Caution: But consider how much can be done just by sketching a case. What we need now is advance planning, not mass persuasion. We've seen that just showing that there are paths to nanotechnology and AI is enough to get some people thinking seriously about them. And in nanotechnology, we've seen how much one can learn about what's coming, independent of doing work on the tools to build the tools that will build the actual technology. We can promote understanding without building the actual tools.
Pro-Progress: Yes, I suppose some sorts of work do more for understanding than for technical progress, and others do more for progress than for understanding. But still, each advances the other. And focusing present-day research more tightly on nanotechnology -- since it's going in that direction anyway -- may do what you want. I'd expect it to accelerate technical progress, but also to do a lot for people's understanding of where that progress is leading.
Pro-Caution: But at the cost of bringing dangers sooner.
Pro-Progress: And benefits too, remember. I'm bothered by what you said a moment ago, some remark about wanting to "delay the technology." If that just means you'd be happier if it turns out to be a long, hard process to solve the problems, that's fine. But if you're advocating something like a half-hearted attempt to block the technology, I've got a problem with that. It runs the same sort of risks as real blocking attempts, and it's going to lose allies for your position. You're not saying that there are any special dangers in the preliminary work, are you?
Pro-Caution: No, there's nothing special about it except what it leads to. My concerns are with the powerful technologies that come later.
Pro-Progress: OK. So imagine that you're a researcher, and I come along and try to interfere with your work -- not to make it safer, since what you're actually doing right now is pretty ordinary, but just to make your work slower and harder. How is that going to make you feel?
Pro-Caution: Pretty frustrated. Hostile, I suppose, unless I had the sense to work on something else in the first place.
Pro-Progress: But if we want the research to be done in the open, by people who share our concerns... Well, it would be a sad thing to make people like that our enemies, just to buy a little time. I don't think it's fair to anyone to put off a risk, if that just makes the risk bigger.
Pro-Caution: Yes. And the way to minimize our risks is to look ahead, think things through, and mobilize support for sensible policies when the hard choices have to be made later. I'd like to have the researchers with us when the time comes.
Pro-Progress: And that's a good reason to take a more positive attitude toward their work. Especially since a lot of them will be trying to save lives and put an end to old nightmares.
Pro-Caution: You're not going to get me to change my preferences on this one. This stuff scares me for good reasons, and I'd rather not have it happen too soon. But, yes, the real reason to buy time is to build understanding and consensus, and if a delaying tactic would turn people against each other, it probably isn't worth it. We're going to need allies.
Pro-Progress: If we can get enough allies, there'll be hardly anyone on the other side.
Pro-Caution: Good luck with that! But if that's how you feel, remember this: I and a lot of other people are going to feel a lot better about your "progress" if we see the researchers talking about how to keep these technologies from being misused. If you join the discussion, you'll find your critics are more friendly.
Pro-Progress: OK. If you'll put up with advocates who recognize that technologies have dangers, then I guess I can put up with critics who recognize that trying to stop advances won't work. If we're going to find out what will work, and make it happen, we've got a lot to talk about.
Pro-Caution: OK. But I'm still scared.
Pro-Progress: And I'm still optimistic.
Pro-Caution: I'd be more optimistic if your technical problems were the hard part -- but the real hard problems are going to be political and social. And they're going to be tough.
Moderator: I'd like to thank both of you. There's a tension between "caution" and "progress," but as we've seen they're far from being opposites. The Foresight Institute welcomes both of you, and your friends of varying views, to help us find our way along a path of cautious progress.
Readers' comments on this dialog are welcome. --Editor
by: K. Eric Drexler
Originally published in 1988
In his 1992 book Nanosystems: Molecular Machinery, Manufacturing, and Computation Dr. Drexler introduced the term theoretical applied science to refer to what is discussed here as exploratory engineering. --Editor
To think productively about future technologies (including nanotechnology) is largely a matter of exploratory engineering. To do a better job of understanding the future, we need to do a better job of understanding, practicing, and judging efforts in exploratory engineering. This essay examines this field, comparing it to science and standard engineering.
Exploratory engineering involves designing things that we can't yet build. This may seem a dubious proposition: "If we can't build something, who will pay for designing it? Surely the design will be sketchy and inadequate. And if we can't build it, how will we test it? Surely any conclusions about its workability will be tentative and inadequate." These are natural questions, and the answers to them revolve around the counter-question, "Adequate -- or inadequate -- for what?"
One shouldn't expect the exploratory engineer to concoct specific designs and propose them as the definitive machines for the 2006 model year. Not only are present engineers too ignorant to do so (a basic problem), but we lack the resources. Modern industrial designs are often complex and sophisticated; future designs will likely be more so. On a small budget, one could not possibly design today's machines, much less the future's.
So why bother with exploratory engineering? Because, just as one can have a general knowledge of today's machines and what they can do -- without knowing their detailed designs -- so one can, perhaps, gain a general knowledge of some of tomorrow's machines. A general knowledge can include important facts, and detailed, sophisticated designs are not essential to a general understanding. It is one thing to have a general knowledge of cars, roads, petroleum, and suburbs; it is another thing to understand how the mechanical and chemical details of internal combustion engines affect a car's design, acceleration, and gas mileage.
Likewise, consider nuclear bombs. The key points in a general understanding of them are simple: they work by nuclear reactions, initiated by fission, releasing nuclear levels of energy and producing active nuclear debris. This general understanding (and more) was possible before the Manhattan Project, and hence before the first bomb. Sophistication was of secondary importance: the first, crude bombs were grossly inefficient by today's standards, yet they beat conventional explosives by orders of magnitude. Even the incomplete, exploratory designs for these bombs must have shown the possibility of this, because the basic potential lay not in the details or the sophistication of the designs but in the fundamental principles of the technology.
The lesson in this is simple: in powerful new technologies, even clumsy, conservative designs can sometimes give awesome performance. Exploratory engineering works best in these new domains, where primitive designs can beat the most sophisticated systems possible with present technology. Nanotechnology is, of course, an outstanding example of such a domain.
By its very nature exploratory engineering has nothing to say about the timing of events. There is nothing in the design of a machine that tells how long a community of human beings will take to develop all the technologies needed to build it, or even whether they will try. Dates do not fall out of design calculations.
These uncertainties limit the value of exploratory engineering, if one seeks predictions of future events rather than estimates of future abilities. One might guess at matters of timing, but it is wise to be cautious in these guesses. What "cautious" means, of course, depends on the issue at hand: a technophile's optimism is a technophobe's pessimism, and vice versa. If one considers the unprecedented economic and health care benefits promised by nanotechnology, the cautious assumption is that it will take a long, long time to arrive. But if one considers the unprecedented potential for abuse of nanotechnology, the cautious assumption is that it will arrive with startling speed. For those chiefly concerned with the direction of progress -- with choosing productive lines of research, for example -- uncertainties about the timing of long-term goals are less important.
Science vs. engineering
Exploratory engineering, more than most engineering, builds on science -- yet this does not make it a branch of science, any more than bridge design is a branch of science. And this is important to recognize, because the confusion between science and engineering is fatal to understanding the future of technology. To judge by newspaper and television coverage, spaceflight is a great achievement of science, and "rocket scientists" spend a lot of time trying to make rocket engines work. Any scientist or engineer, of course, will tell you that spaceflight -- though it has benefited from science and in turn yielded scientific knowledge -- is an achievement of engineering. Understanding the difference between these fields is vital: if engineering were a science, then exploratory engineering would be impossible.
Science and engineering build on each other and use similar tools, but they have different goals. Science aims to understand how things work; engineering aims to make things work. Science takes the thing as given and studies its behavior; engineering takes a behavior as given and studies how to make something that will act that way.
This difference makes foresight impossible regarding scientific knowledge, but not regarding engineering ability. The limit on foresight regarding knowledge is simple and logical: if one were to know today what one will "discover" tomorrow, it wouldn't be a discovery. Since engineering is about doing rather than discovering, no such logical problem arises. There is no contradiction in saying, "We know that we will be able to land a man on the Moon," as Kennedy's advisors did in the early 1960s. When scientists do make predictions about their future knowledge, they predict what they will learn about rather than what they will learn. And this is often a matter of engineering: "We will learn about the composition of the lunar surface -- because engineering will take us there."
Confusion about science and engineering hinders understanding of future technologies. If we confuse engineering with science, then we will think that little can be said about its future -- that engineering projections are as poorly founded as scientific speculations. And we will tend to think that scientists (with their proper and ingrained distrust of speculation) are the right experts to ask about the future of technology. Scientists have little reason to ponder the nature of engineering, and many misunderstand it.
"Thus we conclude that engineers can, through the discipline of exploratory engineering, give us a broad survey of the future of technology. We need only ask them and listen to their answers." This is, of course, nonsense.
Standard engineering has a short-term perspective for a simple reason: employers will not pay engineers to think about what can be built in another fifty years, because there is no money in it. In the U.S., companies seldom pay engineers to think about what can be built in ten years. Accordingly, medium- and long-term exploratory engineering are little practiced today. What is more, the discipline of exploratory engineering differs so greatly from that of standard engineering that standard engineers may be excused for doubting whether it even makes sense.
Standard vs. exploratory engineering
Engineering is about designing things -- ordinarily, things that one can build, test, and redesign in the short term. Exploratory engineering is about designing things that can be built, but only with tools that we don't yet have; this makes it a different sort of endeavor.
The differences begin with motive. Standard engineering receives massive funding to help achieve a competitive advantage in the world -- to build a more attractive CD player or a more aggressive jet fighter, and to do it soon. Exploratory engineering, to the extent that it is practiced at all, seeks to construct not a physical artifact but a rough understanding of future technological capabilities.
The exploratory engineer must still do design work of some sort, or there would be nothing to discuss, no real ideas to criticize. But those designs can make a solid case for a future capability while omitting many details. In standard engineering, in contrast, the job isn't done until every detail is specified, since every detail must be built. This makes the exploratory engineer's job simpler.
Since exploratory engineering aims only to build a solid case -- not a competitive piece of hardware -- it need not try to push the limits of the possible. This has profound consequences for the nature of the intellectual enterprise: again, it makes work simpler.
In standard engineering one seeks a net advantage in any way that works, regardless of whether we understand why it works. Engineers must seek lower-cost manufacturing, which forces them to work with all the complexity of factory operations. They must seek better materials, which drives them to confront all the complexity of metallurgy and polymer chemistry, as in manufacturing turbine blades for jet engines and composite materials for wings. They may have to push the limits of precision, cleanliness, purity, and complexity, as in state-of-the-art microprocessor production. And almost any production process is likely to use a big bag of tested, reproducible black-magic tricks: add a pinch of this, a dash of that, and clean the glass with Alconox(TM) detergent before step 5.
Though engineers eagerly use (and produce) scientific knowledge, they no more need to understand how a process works than a bird needs to understand aerodynamics. Cut-and-try works in engineering as it works in other evolutionary systems. Once discovered, a process may work, prove its reliability in testing, and provide a real competitive advantageyet remain utterly beyond analysis and simulation based on current knowledge. Competitive pressures encourage engineers to increase their understanding, but those same pressures do not allow them the intellectual luxury of staying on well-understood ground.
Competition pushes engineers beyond what can be understood, analyzed, and simulated -- success in standard engineering requires experiment, not only to get the details right but to discover valuable-yet-mysterious processes. But exploratory engineering must do without this aid: one can't test and learn from what one can't build. The standard engineer, looking at this situation, has the strong gut feeling that analysis and simulation will prove inadequate -- as indeed they would, if the goals and designs were those of standard engineering. But when a design need not be competitive, then -- at least in some instances -- it need not go beyond what can be understood in terms of well-understood laws of nature and close analogies to known systems.
In these instances, analysis and simulation can give strong reason to think that a rough, exploratory design could in fact be made to work. To make such a case, the designer must pay attention (explicitly or implicitly) to a host of questions. For example, what are the relationships among materials properties, component shapes, strengths, forces, speeds, energies, temperatures, voltages, currents, and chemical reactions? What about radiation damage, electron tunneling, and vibrational frequencies? The list is long, but (for any particular class of physical system) it is still finite.
Confusion about uncertainty
Uncertainties in analysis and simulation pose problems. Different fields suffer different problems from uncertainty, and once again confusion about the differences separating science, standard engineering, and exploratory engineering can impede our efforts to understand the future of technology.
In exploratory engineering, one can't test and measure, so uncertainties may remain large. Some can be dealt with by leaving large margins for error in a design. If you don't know how strong the material will be, assume the worst and beef up the thickness of the part to match. Where a standard engineer would be forced by competitive pressures to leave only an adequate margin for safety (perhaps testing to probe the limits of workability), the exploratory engineer can often design in a huge margin for ignorance, just to make a more solid case.
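The "margin for ignorance" idea can be made concrete with a small, hypothetical sketch. The numbers and the `required_area` helper below are illustrative assumptions, not from the essay: a load-bearing part is sized against the most pessimistic strength estimate, then oversized by a large factor.

```python
# Hypothetical illustration of designing with a "margin for ignorance":
# assume the worst plausible material strength, then oversize the part.

def required_area(load_newtons, worst_case_strength_pa, margin=10.0):
    """Cross-sectional area that holds `load_newtons` even if the material
    is only as strong as the pessimistic estimate, derated by `margin`."""
    return margin * load_newtons / worst_case_strength_pa

# A part carrying 1000 N, with strength known only to lie somewhere between
# 100 and 200 MPa: use the low end, plus a tenfold margin for everything
# we haven't measured.
area = required_area(1000.0, 100e6, margin=10.0)
print(area)  # 0.0001 square meters, i.e. 1 cm^2
```

A standard engineer competing on weight or cost could rarely afford a tenfold margin; an exploratory engineer arguing only for workability can.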
In an unknown environment, uncertainties may also be unknown. Accordingly, exploratory engineering is easier when the designer can assume a simple or well-defined environment. For parts inside a machine, the machine is the environment and is itself a known part of the design.
A more subtle problem arises when a combination of uncertain quantities must yield a precise result. For example, a spinning part may require perfect balance yet be made of two parts of unknown density bolted together. Here one must ask if there are enough "degrees of freedom" to satisfy the constraint. In this case, the unknown density-ratio between the parts need be no problem, so long as we are free to adjust the size of at least one part to bring the assembly into balance. This gives us the degree of freedom we need to compensate for the uncertainty.
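As a toy version of that balance example (a simplified model assuming two point masses at fixed radii on opposite sides of the spin axis; the `balancing_volume` helper is an illustrative invention), the unknown density ratio is absorbed by solving for the size of one part:

```python
# Simplified balance model: two parts as point masses at radii r1 and r2
# on opposite sides of the spin axis. Balance requires equal moments:
#     density1 * volume1 * r1 == density2 * volume2 * r2
# The densities are uncertain, but volume2 is a free design parameter.

def balancing_volume(density1, volume1, r1, density2, r2):
    """Volume of part 2 that balances the assembly, whatever the densities."""
    return density1 * volume1 * r1 / (density2 * r2)

# Whatever the second part's density turns out to be, a balancing size exists:
for density2 in (2.0, 5.0, 11.3):                  # unknown until we build it
    v2 = balancing_volume(8.0, 1.0, 0.05, density2, 0.05)
    moment1 = 8.0 * 1.0 * 0.05
    moment2 = density2 * v2 * 0.05
    assert abs(moment1 - moment2) < 1e-12          # balanced in every case
```

The point is exactly the one in the text: the uncertainty in density creates a matching, compensating uncertainty in size, and the two cancel rather than add.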
This line of reasoning shows how the exploratory engineer can make a solid case for a device despite uncertainties about many of its properties and design details. It also shows how uncertain properties can create compensating uncertainties in design details (as in the example given, where an uncertain density leads to a corresponding uncertain size). This shows that "uncertainties" can come in planned sets that cancel out rather than adding up. The notion of uncertainty in exploratory engineering plays other tricks. Ignorance of these can lead one to confuse confidence-building factors with confidence-eroding factors. We need to avoid confusions about uncertainty, especially in understanding large systems of ideas.
Uncertainties play different roles in science, standard engineering, and exploratory engineering. The usual intuitive rule about uncertainty in large sets of ideas or proposals is simple: if a conclusion or design rests on layer upon layer of shaky premises, it should not be trusted. But this intuition sometimes misleads. To see where it works and where it doesn't, consider an imaginary proposal -- a theory -- in science, and another -- a design -- in engineering. Each proposal will have five essential parts and ten equally plausible possibilities for each part.
The theory might have to explain (1) what something is, (2) where it came from, (3) how it survived the last million years, (4) why it hasn't been detected with x-rays, and (5) what it does when baked. Only one possibility can be right for each part of the theory, so given our assumption of ten equally plausible possibilities for each part, a choice will have only a one-in-ten chance of being right. For a specific version of the theory, the chances of getting all five parts right (assuming no additional data) are no better than 1/10 to the fifth power. For a theory to be true, all its parts must be true, and so for any specific version the odds against it are at least 100,000 to one.
This artificial example shows how uncertainties combine in building real scientific theories: adversely. A scientific theory is a single-stranded chain that can break at any link, and a chain with many dubious links is almost certainly worthless. This shapes the scientist's attitude toward uncertainty.
Uncertainties in exploratory engineering work in a different way. Consider a superficially similar design problem: a mechanism requiring five essential parts, with (again) ten equally plausible possibilities for each part. The design might require (1) a power supply, (2) a motor, (3) a speed controller, (4) a locking device, and (5) an output shaft. But here, more than one possibility may work: unlike theoretical proposals, engineering possibilities are not mutually exclusive. What is more, in exploratory engineering the typical problem is to build a case for the workability of the mechanism, not to specify a detailed, workable designit is enough to show that one working possibility can be found.
In accord with these points, imagine that each of the ten possibilities for a part has a 50-50 chance of working. The chance of all ten possibilities failing, leaving no workable design for the part, is 0.5 to the tenth power: less than one in a thousand. With five parts facing this risk of unworkability, the overall chance that some essential part won't be possible is less than five in a thousand, making the overall probability of a workable design better than 99.5%. Real examples can give even better results: there may be a hundred ways to build each part, and several may be essentially sure bets.
In exploratory engineering, the "uncertainty" resulting from many possibilities may lead to a virtual certainty that at least one will work. An exploratory design concept can be like a massive, braided cable, in which many strands must fail before the link is severed. Uncertainties do not combine adversely, as do the superficially similar uncertainties of science (in science, a closer parallel would be a claim that some correct theory can be found, but even this suffers from the problem that only one choice can be right, and that choice may not be known).
Standard engineering, however, is a bit closer to science in this regard. In an idealized competitive world, only the best design would do. And choosing the one best design, like choosing the one true theory, would mean choosing the uniquely right possibility for each part. In the real world, of course, the best isn't necessary, but competitive pressures still narrow the acceptable choices.
Further, in standard engineering it isn't enough to establish that there is a workable design somewhere in a forest of alternatives: one must propose a specific design, build it, and live with the consequences. Time and budgets are limited, and the failure of a large system may leave no resources for another try. In our model above, this would mean making five choices, each with a 50-50 chance of success, for an overall chance of success of about 3%. (This motivates careful testing of parts before building systems.)
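The contrast with the exploratory case is stark, and easy to verify with the same toy numbers:

```python
# Standard-engineering version of the toy model: a single specific design
# must be committed to, so all five 50-50 part choices must succeed at once.
p_choice = 0.5
num_parts = 5

p_system_works = p_choice ** num_parts
print(p_system_works)    # 0.03125, about 3%
```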
All these factors combine to make exploratory engineering more feasible than it might seem. Designs need not be competitive with other, similar designs; they need only be workable. To make them workable, they can be grossly overdesigned to compensate for uncertainties. Since their purpose is to provide a case for a possibility, not a blueprint to guide manufacture, they can omit details and include room for adjustment. All this makes it easier to build a solid case for specific kinds of mechanisms, yet concepts for whole systems of mechanisms can be solid even if they are built on layer upon layer of shaky cases for specific parts.
In the first half of the twentieth century, work in exploratory engineering persuaded knowledgeable individuals that spaceflight would be possible. Today, the most important field for exploratory engineering is perhaps nanotechnology: it is clearly foreseeable and will be of immense practical importance. It can serve as a prime specimen of the process.
Nanotechnology is (or, rather, will be) a technology based on a general ability to build objects to complex, atomic specifications. We live in a world made out of atoms, and how those atoms are arranged makes a tremendous difference. This is why nanotechnology will make a tremendous difference. Nanotechnology will be based on molecular machines and molecular electronic devices. With computers and robotic arms smaller than a living cell, it will enable the construction of almost anything, building up structures atom by atom. Among the products will be:
- pocket computers with more memory and computational capacity than all the computers in the world today put together;
- large objects, such as spacecraft, made from light, superstrong materials and as cheap (pound for pound) as wood or hay;
- machines able to enter and repair living cells, giving medicine surgical control at the molecular level.
How can one draw such conclusions? A more detailed exposition is spread over several papers and a book (Engines of Creation, Anchor/Doubleday, 1986 [Editor's note: for a more technical analysis, see Dr. Drexler's 1992 book Nanosystems: Molecular Machinery, Manufacturing, and Computation]), but the outlines are straightforward.
The idea of nanotechnology resulted from applying an engineering perspective to the discoveries of molecular biology, and one path to nanotechnology lies through further advances in biotechnology. Regardless of how nanotechnology emerges, however, the facts of molecular biology provide a direct demonstration of principles that can be used by future molecular machines. (A rule of exploratory engineering: if one knows that it happens, one can assume it is possible.)
Multiple paths, including advances in organic chemistry and micromanipulation, lead from present technology toward a technology able to build complex molecular structures, including molecular machines able to build better molecular machines. The conclusion that we can build such machines gains strength from what may be termed our "uncertainty regarding how to proceed", that is, from the presence of many apparently workable options. This uncertainty does not spill over into nanotechnology itself, however, because all these developmental paths lead to the same destination: to molecular assemblers able to manipulate reactive molecules to build complex structures atom by atom.
To explore the domain of nanotechnology means exploring the world of things, especially molecular machines, that can be built using atoms as individually-arranged building blocks. Knowledge of the forces within and between molecules tells us of the forces within and between the parts of molecular machines. The field of "molecular mechanics," developed by chemists, describes these forces and the resulting molecular motions, often quite well. The exploratory engineer can compensate for inaccuracies in modern molecular mechanics descriptions by overdesigning parts, by allowing large margins of safety, and by paying attention to the number of degrees of freedom in a design.
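To give a flavor of what a molecular mechanics description contains, here is one of its simplest ingredients: the Lennard-Jones pair potential, a standard term such models use for the nonbonded attraction and repulsion between atoms. The parameter values below are illustrative (roughly argon-like), not drawn from any particular force field:

```python
# Lennard-Jones pair potential: a standard nonbonded term in
# molecular mechanics force fields.
epsilon = 0.0104  # well depth in eV (illustrative, roughly argon-like)
sigma = 3.40      # zero-crossing distance in angstroms (illustrative)

def lj_energy(r):
    """Pair interaction energy (eV) at separation r (angstroms)."""
    return 4 * epsilon * ((sigma / r) ** 12 - (sigma / r) ** 6)

# The energy minimum falls at r = 2^(1/6) * sigma, where the
# potential equals -epsilon: the bottom of the attractive well.
r_min = 2 ** (1 / 6) * sigma
print(r_min)              # ~3.82 angstroms
print(lj_energy(r_min))   # ~ -0.0104 eV, i.e. -epsilon
```

Descriptions like this, applied atom by atom, are what let the exploratory engineer estimate the stiffness and strength of a proposed molecular part.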
The most fundamental fact about molecular mechanics is that molecules can be thought of as objects. They have size, shape, mass, strength, stiffness, and smooth, soft, slippery surfaces. Large objects are made of many atoms; molecules are objects made of only a few atoms.
Molecular mechanics describes what happens when the atomic bumps of one surface slide over the atomic bumps of another and shows, surprisingly, that the resulting motion can be so smooth as to be almost frictionless, at low speeds. This makes possible good bearings. Molecular mechanics can also describe how friction builds up with speed, but the analysis is complex and has not yet been done. Until it is, the exploratory engineer can design using low sliding speeds, and only then assume low sliding friction.
The story continues through other molecular devices. Meshing atomic bumps can serve as gear teeth. Helical rows of bumps can slide smoothly over other helical rows, serving as threads on nuts and screws. Tightly bonded sets of atoms, like tiny bits of ceramic, diamond, or engineering plastics, can form strong, rigid parts. Rotors, bearings, and electrodes can form electrostatic motors a few tens of billionths of a meter in diameter, producing an incredible amount of power for their size (many trillions of watts per cubic meter).
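The power figure can be put on a back-of-the-envelope footing. A sketch assuming a motor 50 nanometers across and a power density of one trillion watts per cubic meter (both illustrative values within the ranges quoted above):

```python
# Back-of-the-envelope power output for one electrostatic nanomotor,
# treating it as a cube 50 nm on a side (illustrative geometry).
power_density_w_per_m3 = 1e12   # "trillions of watts per cubic meter"
motor_size_m = 50e-9            # a few tens of billionths of a meter

volume_m3 = motor_size_m ** 3
power_w = power_density_w_per_m3 * volume_m3
print(power_w)                  # ~1.25e-10 W, a tenth of a nanowatt
```

A tenth of a nanowatt sounds small, but for an object a billionth the size of a grain of sand it is an enormous power-to-volume ratio; conventional motors deliver orders of magnitude less per unit volume.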
Motors, shafts, gears, bearings, and miscellaneous moving parts built in this way can combine to form robot arms less than a tenth of a millionth of a meter long. Owing to a fundamental law relating size to rate of motion in mechanical systems, a robot arm this size can perform operations in one ten-millionth of the time required for an analogous device a meter long. Equipped with suitable tools, these arms can work as assemblers, building other machines at a rate of millions of molecular operations per second.
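The scaling law invoked above is simple proportionality: operation time scales with characteristic length. A sketch, taking a hypothetical one-second operation for a meter-scale arm as the reference point:

```python
# Scaling of mechanical operation time with size, assuming (as the text
# does) that operation time scales linearly with arm length.
reference_arm_m = 1.0      # a meter-scale robot arm
reference_time_s = 1.0     # hypothetical one second per operation

nano_arm_m = 1e-7          # "less than a tenth of a millionth of a meter"
nano_time_s = reference_time_s * (nano_arm_m / reference_arm_m)

print(nano_time_s)         # 1e-7 s: one ten-millionth of the time
print(1 / nano_time_s)     # 1e7: ten million operations per second
```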
This sketches some of what seems clear from the exploration of nanotechnology. Many uncertainties remain, particularly in molecular electronics. Molecular mechanical devices can be analyzed using molecular mechanics and the familiar, Newtonian laws of motion (augmented by statistical mechanics, to describe thermal vibrations). Molecular electronic devices, in contrast, demand a quantum mechanical analysis, which is far more complex. Until a useful set of devices is designed and subjected to a clear, sound analysis, the exploratory engineer cannot design systems that assume the use of molecular electronics.
This might seem a grave limitation, since computer control has become so important in conventional engineering. Nonetheless, molecular mechanical computers (with properties supporting the above projection of "pocket computers with more memory and computational capacity than all the computers in the world today put together") can readily be designed. Although molecular electronic devices should be orders of magnitude faster, and may even outcompete molecular mechanical computers in all respects, the analyzability of molecular machines gives the mechanical approach a decisive advantage, not to the standard engineer of the future but to the exploratory engineer of today. Analysis shows that molecular mechanical computers can be made to tolerate the jostling of thermal vibrations, and that these computers can run at about a billion cycles per second (somewhat faster than today's electronic machines), while consuming (roughly) tens of billionths of a watt of power. This technology will pack the memory and computational ability of a mainframe computer into the volume of a bacterial cell.
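These figures imply a definite energy budget per computational step, which can be compared with the thermodynamic floor. A sketch, taking "tens of billionths of a watt" as 1e-8 W (an illustrative reading of the range given):

```python
import math

# Energy per cycle implied by the nanocomputer figures above.
power_w = 1e-8             # taking "tens of billionths of a watt" as 10 nW
cycles_per_s = 1e9         # about a billion cycles per second

energy_per_cycle_j = power_w / cycles_per_s
print(energy_per_cycle_j)  # 1e-17 J per cycle

# For scale: kT*ln(2) at room temperature (the Landauer limit on
# erasing one bit) is about 3e-21 J, so this budget sits a few
# thousand times above the thermodynamic floor.
k_boltzmann = 1.380649e-23
landauer_j = k_boltzmann * 300 * math.log(2)
print(energy_per_cycle_j / landauer_j)
```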
These mechanical nanocomputers can control molecular assembler arms, directing their work. If a computer contains instructions (on molecular tape, say) for constructing a copy of an assembler and its raw-materials feed system, a copy of the computer, and a copy of a tape-duplicating machine, then the whole system can build a copy of itself. In short, it could replicate like a bacterium. Calculations indicate that a replicating assembler system of this sort could copy itself in less than an hour (remember the speed of assembler arms), using cheap industrial chemicals as raw materials. It is left as an exercise for the reader to calculate how long it would take one replicator, with a mass of (say) one trillionth of a gram, to convert a million tons of raw materials into a million tons of replicators. (Hint: a ton is a million grams, and the answer is measured in days.) Slower-working replicators could run on air, water, sunlight, and a pinch of minerals. For safety they would need reliable, built-in limits to growth, but that is an issue addressed elsewhere.
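The reader exercise works out as promised. A sketch using the numbers given above, and assuming one doubling per hour (from the "less than an hour" copy time):

```python
import math

# The replication exercise: doublings needed to go from one replicator
# of 1e-12 g to a million tons (1e12 g) of replicators.
replicator_mass_g = 1e-12
target_mass_g = 1e6 * 1e6      # a million tons, at a million grams per ton

doublings = math.ceil(math.log2(target_mass_g / replicator_mass_g))
hours = doublings * 1.0        # assuming one doubling per hour
print(doublings)               # 80 doublings
print(hours / 24)              # ~3.3: the answer is indeed measured in days
```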
These results, plus the observation that molecular machines can build large things, such as redwood trees, lead to the conclusion that teams of replicating assemblers can build large objects. By building atom by atom, they can build these objects from materials that are today impractical for structural engineering, materials such as diamond. This supports the above projection that nanotechnology will enable the construction of "large objects, such as spacecraft, made from light, superstrong materials and as cheap (pound for pound) as wood or hay." This, in turn, will make possible inexpensive housing, consumer goods, spacecraft, and so forth. These can be as inexpensive as other, more complex products of self-reproducing, solar-powered molecular machines: crabgrass is harder to synthesize than diamond, unless one has crabgrass seeds to help. With seeds, making crabgrass is no trouble at all; with suitably programmed replicators, making simple things like spacecraft will be equally convenient.
The last projection listed above, "machines able to enter and repair living cells, giving medicine surgical control at the molecular level," is in a special category of difficulty and importance. The essential argument is simple. We observe molecular machines working within cells and able to build anything found in a cell: this is, after all, how cells replicate themselves. We observe that molecular machines can enter tissues (as white blood cells show), enter cells (as viruses show), and move around within cells (as molecular machines inside cells show). Molecular machines can also recognize other molecules (as antibodies do) and take them apart (as digestive enzymes do). Now combine these abilities, to enter tissues and cells and to recognize, tear down, and build molecular structures, with control by nanocomputers, and the result is a package able to enter and repair living cells. Almost.
The only substantial reservations about this conclusion involve knowledge and software. Knowledge about cells, and the difference between diseased and healthy cells, is not the problem. Though this will be new scientific knowledge, and hence not predictable in its particulars, acquiring that knowledge is a problem amenable to engineering solution. In the early 1960s one could project that we would learn the composition of the lunar surface through rocketry; today, one can project that we will learn the particulars of cell structure through nanotechnology (if not sooner, by other means). Knowledge of how to build software able to perform complex cell diagnosis and repair processes, however, is harder to project. This will involve building software systems of greater complexity than those managed in the past, requiring new techniques. Progress in these aspects of computer science has been swift but is hard to project.
Simple cell repair systems are within the range of confident projection today. Repair systems able to tackle more complex problems (such as repairing severe, long-term, whole-body frostbite) seem likely, and can be analyzed in some detail today, but discussing them appears to involve an element of speculation regarding future progress in software engineering.
The complexity frontier
As the last example shows, on the frontier of the domain of exploratory engineering lie problems characterized more by their complexity than by their physical novelty. These are problems whose solution will demand new design techniques.
Attempts to project new design techniques can run afoul of a problem like that of trying to project future scientific knowledge: new design techniques will often stem from new insights, and if we could say what the insights will be, we would have already had them.
Other attempts pose fewer problems. For example, if faster, cheaper computers are the key to a new design technique, then the possibility of that technique becomes fair game for exploratory engineering. Curiously, the idea of building devices that can think like engineers, but far faster, can be examined in this way. While this capability seems most likely to be achieved in some novel way based on new insights, it could be achieved through the use of nanotechnology to study and model, component by component, the functions of the brain. Then, without necessarily understanding how the brain works, one could build a fast, brain-like device. This would not really be artificial intelligence, however; it would merely provide a new physical embodiment for the intricate patterns of naturally-evolved intelligence.
Issues of complexity are not central to nanotechnology itself. Assemblers need be no more complex than industrial robots. Nanocomputers need be no more complex than conventional computers. Even replicating assembler systems seem no more complex than modern automated factories. The fundamental capabilities of nanotechnology thus entail no more complexity than we have already mastered. Though nanotechnology will permit engineers to build systems of unprecedented complexity in a tiny volume, it does not demand that they do so.
Exploratory engineering has limits, but there is much to be achieved within those limits. Successful exploratory engineering can be of great value from a variety of perspectives. For the technophile, it can reveal directions for research that promise great benefit, increasing the returns on society's investment. For the technophobe, it can reveal some of the dangers for which we must prepare, helping us handle new abilities with greater safety. Success in exploratory engineering, and in heeding its results, may be a matter of life and death. If so, then it seems we should make some effort to get good at it.
The first step is to recognize and criticize it. No field can flourish unless it is recognized as having intellectual standards to uphold; to have a discipline, one must have discipline. Exploratory engineering has too often been seen as not-science and not-(standard)-engineering, and hence lumped together with scientific speculation and science fiction. Since speculation and fiction make no pretence of solidity, they are not subject to rigorous criticism to separate the solid from the erroneous. Exploratory engineering is different, and should be criticized on the basis of its aims. Those aims, again, are not to prophesy new scientific knowledge, not to prognosticate the details of the competitive designs of the future, but to make a solid case for the feasibility of certain classes of future technology.
In the absence of criticism, nonsense flourishes. Where nonsense flourishes, sense is obscured. We need to recognize and criticize work in exploratory engineering, in order to make a bit more sense of our future.
Adapted from "Exploring Future Technologies," in The Reality Club, vol. 1 (October 1988), by Edge Foundation, Inc., © Copyright 1988 by K. Eric Drexler. All rights reserved.
by: Chris Peterson
This article was originally published in Foresight Update 4, 15 October 1988.
Compiled in 1990
Books are listed in order of increasing specialization and level of reading challenge. Your suggestions are welcome. (Editor)
Doris Lessing, Prisons We Choose to Live Inside (Harper & Row, 1987, paper)
A small book with high impact, as asserted on the cover. An eloquent plea for integrating what little we know of the social sciences into education, to help us primates stop repeating Milgram experiment-type horrors.
Henry M. Boettinger, Moving Mountains (Collier Macmillan, 1975, paper)
A practical treatise on convincing others to share your ideas. "The first truly modern and truly searching essay on rhetoric, in the classical meaning of the term, in the last three or four hundred years." (Peter Drucker)
Michael S. Gazzaniga, The Social Brain (Basic Books, 1987, paper)
A neuroscientist argues that the brain is best understood as a social entity: a vast confederacy of relatively independent modules, each of which processes information and activates its own thoughts and actions. This view has some similarity to Minsky's Society of Mind theory. The writing is anecdotal and enjoyable.
Edward R. Tufte, The Visual Display of Quantitative Information (Graphics Press, 1983, hardcover)
A beautiful book explaining the right ways (and ridiculing the wrong ways) to present numerical information. Amusing and visually enjoyable, it inspires the reader to support Tufte's high standards. Fun to browse; makes a great gift.
John C. Burnham, How Superstition Won and Science Lost (Rutgers, 1987, paper)
Tracks the decline in the quality of science popularization by the media over the past century and shows how this has undermined the impact of science and strengthened the forces of irrationalism.
Jonathan Glover, What Sort of People Should There Be? (Pelican, 1984, paper)
An Oxford philosopher looks at the emotional and ethical issues raised by (hypothetical) advanced technologies able to alter the human form, control the brain, and create artificial intelligences. Covers such topics as possible abuse of the technologies, and what people will do once there is no need to "work."
Patricia Smith Churchland, Neurophilosophy: Toward a Unified Science of Mind/Brain (Computational Models of Cognition and Perception) (MIT Press, 1986, hardcover)
Begins with the neurosciences, then proceeds through AI, connectionist research, and philosophy to give a picture of how the brain works. Skillfully written and very readable.
Gerard Radnitzky and W.W. Bartley, III, eds., Evolutionary Epistemology, Theory of Rationality, and the Sociology of Knowledge (Open Court, 1987, paper)
A collection of essays on a powerful theory of how knowledge grows: by evolution through variation and selective retention. Treats knowledge as an objective evolutionary product, and offers insights into evolutionary processes in general. Authors include Sir Karl Popper.
Gerald Edelman, Neural Darwinism: The Theory of Neuronal Group Selection (Basic Books, 1987, hardcover)
Having won a Nobel Prize for his work in immunology, the author now examines how the brain works, presenting his theory of neuronal group selection. A difficult book with significant ideas.
Foresight materials on the Web are ©1986–2017 Foresight Institute. All rights reserved. Legal Notices.