
Self-replicating machines and risk

Engineering and analysis in the field of SRMs is unusual in many ways.  Eric Drexler has posted a paper about differences in evolutionary capacity in mechanical and biological systems that’s worth a look.

Purely coincidentally, we at Foresight have been discussing self-replication in the context of the Feynman Path, and I came up with an example that shows just how counter-intuitive self-replication can be if you try to view it as a capability.

Self-replication is a poor criterion to use to judge risk, either of autonomous runaway or hijackability.  Consider, for example, two versions of the Drexler/Burch nanofactory:

1) as shown: the input is pressurized canisters of fairly pure acetylene and possibly other refined chemical feedstocks.

2) the input is cassettes of nanoblocks, as output by the next-to-last stage in (1).

Now I claim that it isn’t too hard to make (2) self-replicating.  All it does is slap nanoblocks together in the right patterns; maybe 10% of the total functionality of (1).  And it’s a lot more likely you can design a machine that does that entirely out of nanoblocks.  Bingo, a self-replicator.

On the other hand, it’s quite difficult to make a self-replicating version of (1).  From the lowest, mechanosynthetic levels up, (1) is a hardwired, cast-in-concrete gadget that builds nanoblocks.  To build all the gadgetry in (1) as well, it’d probably take 100 times as much mechanism.

Now to us, (1) is the much more capable machine.  After all, look at all it’s doing.  But to the user, (2) is much more capable.  Both machines require the user to go out and buy feedstock containers — pressurized acetylene pods don’t grow on trees.  Cost difference between pressurized cylinders and cassettes would be minimal: given the technology, it would be about as cheap to run the feedstock through a nanoblock maker and packer as to pump it into cylinders.

But machine 2 could make copies of itself and machine 1 could not.

And yet we know that not only does machine 1 do more stuff, but the range of outputs for the two machines is exactly the same! (Note that machine 1 can make a machine 2. Neither can make a machine 1.)
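The asymmetry can be captured in a toy model (my own illustration, not from the post): treat each machine as a set of part types it can produce and a set of part types it is built from. Self-replication is then just a closure condition — a machine can copy itself exactly when everything it is made of falls within its own output range. The part-type names below are hypothetical labels for the sake of the sketch.

```python
def self_replicating(outputs: set, built_from: set) -> bool:
    """A machine can copy itself iff every part it is made of
    is something it can produce (built_from is a subset of outputs)."""
    return built_from <= outputs

# Machine 1: refines acetylene into nanoblocks AND assembles them, but is
# itself built of hardwired mechanosynthetic machinery it cannot produce.
machine1 = {
    "outputs": {"nanoblock_assembly"},     # final products, incl. a machine 2
    "built_from": {"mechanosynth_parts"},
}

# Machine 2: only assembles pre-made nanoblocks -- and is itself
# nothing but a nanoblock assembly.
machine2 = {
    "outputs": {"nanoblock_assembly"},
    "built_from": {"nanoblock_assembly"},
}

# Identical output range, opposite replication status:
assert machine1["outputs"] == machine2["outputs"]
assert not self_replicating(machine1["outputs"], machine1["built_from"])
assert self_replicating(machine2["outputs"], machine2["built_from"])
```

The point of the sketch is that "self-replicating" here is a fact about where a machine sits relative to its own output set, not a measure of how capable that output set is.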

And yet which is more dangerous?  Consider which one would do the most good for the government of, say, Iran, today, in terms of bootstrapping itself to full nanotech capability, if one of each fell into its hands.  Obviously (1).

So I claim that self-replication is essentially worthless as a criterion by which to judge risk of accidents or abuse.


5 Responses to “Self-replicating machines and risk”

  1. JamesG Says:

    Both machines require the user to go out and buy feedstock containers — pressurized acetylene pods don’t grow on trees.

    They don’t? Won’t one of the first things built be something that automatically makes these ‘containers’ and brings them to the user?

  2. KenB Says:

    “Won’t one of the first things built be something that automatically makes these ‘containers’ and brings them to the user?”

    I’m not 100% sure I understand all this, but if I do, that’s the question, isn’t it?  Maybe I’m naive, but it seems to me we’re not in danger of a runaway situation if we have to keep feeding either acetylene or nanoblocks to the machine. But if we set it up to acquire those things on its own, or if it somehow “learns” how to do so, then perhaps we have a problem.

  3. Mark Buehner Says:

    Point is: building a system capable of going out and procuring resources is an order of magnitude more complex and fragile than the nanotech machine itself. That part seems trivial for us because we have about half a billion years of evolution behind us. From a nanobot’s point of view, identifying, finding, acquiring, gathering, and returning resources is about as simple as us collecting interstellar gases. Some hyper-advanced civilization would obviously consider that trivial, but for us it’s unspeakably complex and inefficient.

  4. Kralizec Says:

    Some of the preceding comments make clear that one must discover or make a definition of “self-replication” before determining what risks are associated with it. Some readers may be surprised to learn that Aristotle is quite helpful on this point in his work, On the Soul. The capacity recognized by Aristotle of taking in, digesting, and integrating food into an organism is spoken of as its “nutritive” capability, in some translations. In a second step of his exposition, Aristotle broadens the meaning of “the nutritive” to include reproduction. The nutritive, then, is a single activity of taking in food and bringing forth another of one’s kind.

    Aristotle speaks of all of the capabilities of animals, beyond those of plants, as “nutritive in potency.” He makes a comparison to reduction of polygons to triangles: In much the same way that any polygon can be reduced to triangles, all animal capabilities can be reduced to their capability for nutritive activity. That is, they all relate to the single, comprehensive activity of replication of the animal’s kind. In an especially striking passage, Aristotle, usually understated and sober-seeming, makes his point quite dramatically by speaking of all the ends of nutritive activity as an animal’s gods. The animal stretches itself forth to the divine in all the ways it presents itself to the animal: food, shelter, mates, babies, according to the animal’s capabilities and way of life. The animal reaches out toward “what always is and is divine,” and this is “continuity as one in number” and “continuity as one in kind.” But continuity as one in number can’t be maintained indefinitely; thus, all animate activity tends toward continuity as one in kind, or “self-replication,” as some would have it.

    Aristotle seems to have given us a starting point for the risk assessment we have in mind: A kind of machine becomes dangerous as it becomes recognizably an animal. It becomes recognizably an animal as it orients its capabilities and activity toward ends that tend to its reproduction. It is fully animal when it is fully at work maintaining itself and being what it must be in order to go on being. A machine kind will be most dangerous when it works as if the continuity and well-being of its kind is its religion.

  5. Tim Tyler Says:

    It seems like there’s lots of stuff in the:

    “Foresight Guidelines for Responsible Nanotechnology Development”

    http://www.foresight.org/guidelines/current.html

    …document about not using replicating agents – e.g.:

    “When molecular manufacturing systems are implemented, they use inherently safe system designs with no autonomous replicators.”

    Since this risk is more imagined than real, can we now classify that as a PR exercise?
