Foresight Update 10
A publication of the Foresight Institute
Nanotechnology Progress: Evolution in a Drum
by K. Eric Drexler
Advanced molecular nanotechnology will use strong, rigid
molecular components. These can be assembled to form molecular
systems much like the machinery found in manufacturing plants
today--far from identical, yet much more similar than one might
naively expect. Machines like these will typically be
straightforward to design, confronting the engineer with only the
familiar stubbornness of physical devices. In taking the early
steps toward this technology, however, the problems are messier.
If they weren't, we'd be there by now.
Natural molecular machines are made from proteins and nucleic
acids, long molecular chains which can fold to form knobby,
elastic objects. Protein design has made great strides in the
last ten years, but remains difficult. Yet how can we copy
nature's molecular machines without mastering the art of protein
design?
One answer is to copy nature further, by replacing (or at least
supplementing) design with evolution. Here, technologies have
made great strides in the last ten months.
Evolution works through the variation and selection of
replicators. It works most powerfully when the best results of
one round of selection can be replicated, varied, and selected
again. Molecules lend themselves to evolutionary improvement
because they are cheap to make and handle: a cubic centimeter can
hold well over 10^16 protein molecules. With so many
variations, even one round of selection can often find molecules
that behave as desired. Biomolecules, in particular, lend
themselves to evolutionary improvement because they can be made
by bioreplicators: a cubic centimeter can hold well over 10^10
bacterial cells, programmed by genetic engineering techniques to
make on the order of 10^10 variations on a chosen
molecular theme. All the techniques developed so far produce
molecules having a useful property from a nanotechnological
perspective: they are selected to stick to another, pre-selected
molecule.
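To see why such numbers matter, here is a back-of-envelope sketch (in Python, with a purely hypothetical hit rate) of how even a very rare property can turn up in a single round of selection when the library is this large:

# Back-of-envelope arithmetic for one round of selection over a huge library.
# The library size comes from the text; the hit rate is an assumption made
# only for illustration.
library_size = 10**16   # protein molecules in a cubic centimeter (from the text)
hit_rate = 1e-9         # assumed fraction of variants with the desired stickiness
expected_hits = library_size * hit_rate
print(f"expected binders in one round: {expected_hits:.0e}")   # ~1e7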
An earlier issue of Update reported on work by Huse et
al. [Science, 246:1275, 1989] in which
bacteria were used to make ~10^7 different antibody
fragments. Selection in this system involves growing a film of
bacteria, infecting them with phage particles bearing genes for
the fragments, and sampling from areas where a labeled molecule
is observed to be bound. In only two weeks of work, this
procedure generated many different antibody fragments able to
bind a predesignated small molecule. Standard techniques can be
used to fine-tune protein molecules like these by further
variation and selection.
This approach, as presently implemented, uses human eyes and
hands to do the selection. Two more recent approaches do not.
Scott and Smith [Science, 249:386, 1990] have
made many millions of phage particles having surface proteins
with different short peptide chains dangling from them. These
phage particles can be poured through an affinity purification
column, a tube filled with a porous medium bearing attached
molecules that are sticky for some complementary
sub-population of peptide chains. The phage
particles which display such chains don't wash away with the
rest; they can be recovered and allowed to multiply in a
bacterial culture. Again, further rounds of variation and
selection are feasible, if there is room for improvement in
molecular stickiness. Scott and Smith rapidly found novel
peptides that bound to their molecule of choice.
Tuerk and Gold have developed a procedure they term systematic
evolution of ligands by exponential enrichment (SELEX). They make
a diverse population of RNA molecules, then use an affinity
column to select molecules that bind (at least weakly) to a
target molecule. Those that bind are recovered and enzymatically
replicated via reverse transcription to DNA. The result after
four rounds was a population of RNA molecules with strong,
selective binding to the target molecules.
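As an illustration only, the following Python sketch mimics the logic of such rounds of selection and amplification. The population size, mutation rate, and "binding score" are invented stand-ins, not the published protocol:

# Illustrative sketch of iterated variation-and-selection ("exponential
# enrichment") in the spirit of SELEX; all names and rates here are
# hypothetical placeholders.
import random

ALPHABET = "ACGU"          # RNA bases
POP_SIZE = 10_000          # toy population (real pools are vastly larger)
ROUNDS = 4                 # rounds of select-and-amplify

def random_sequence(length=20):
    return "".join(random.choice(ALPHABET) for _ in range(length))

def mutate(seq, rate=0.01):
    # Copy a sequence with occasional point mutations, a stand-in for
    # error-prone replication.
    return "".join(random.choice(ALPHABET) if random.random() < rate else b
                   for b in seq)

def binding_score(seq):
    # Hypothetical stand-in for affinity to the target: count of a motif.
    return seq.count("GGAC")

pool = [random_sequence() for _ in range(POP_SIZE)]
for r in range(ROUNDS):
    # Selection: keep the sequences that "stick to the column" (top scorers).
    survivors = sorted(pool, key=binding_score, reverse=True)[:POP_SIZE // 100]
    # Amplification with variation: replicate survivors back up to full size.
    pool = [mutate(random.choice(survivors)) for _ in range(POP_SIZE)]
    best = max(pool, key=binding_score)
    print(f"round {r + 1}: best score {binding_score(best)}")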
At the end of their article, they suggest that the same basic
approach be applied to the variation and selection of protein
molecules: it is well known that many proteins fold and achieve
function while still being manufactured by--and hence attached
to--a ribosome, which is in turn still attached to the RNA
molecule which encodes the structure of the protein. By applying
SELEX methods to this protein-translation complex, the selective
stickiness of the protein molecules can be used to recover the
RNA replicators needed for further evolution (and eventual
production). The article ends with the observation that these
methods could be used to generate "nucleic acids and
proteins with any number of targeted functions."
What does this mean for molecular systems engineering on the path
to molecular nanotechnology? To build molecular devices, or to
attach molecular tools to atomic force microscope systems,
researchers will find it useful to make molecules that
spontaneously adhere in a planned manner. This is the basic
requirement for molecular self-assembly. The growing ability to
package the evolutionary process and use it as a reliable,
convenient tool may substantially accelerate progress in
molecular systems engineering.
Market-Based Foresight: a Proposal
by Robin Hanson
We need to evolve better fact-finding institutions to speed
the growth of foresight. A better way for people to "stake
their reputations" might help.
At present, when a technological question becomes a matter of
public concern, advocates often engage in trial by combat in the
media. Opposing experts fling sharp words and accuse each other
of bias and self-interest. Debates quickly descend into hyperbole
and demagoguery. Paralysis or folly often follows, seriously
undermining our ability to deal with important issues like space
development, nuclear energy, pesticides, and the greenhouse
effect. Greater issues lie ahead, such as nanotechnology, where
the consequences of such folly could be devastating.
We want better institutions for dealing with controversies over
science facts, so we can have better inputs for our value-based
policy decisions. Yes, with enough study and time, most specific
science questions seem to get resolved eventually. But we want to
form a consensus more quickly about which facts we're sure of,
and what the chances are for the rest. Rather than depending on
the good nature of the people involved, we want explicit
procedures that provide a clear incentive for participants to be
careful and honest in their contributions. And we want as much
foresight as possible for a given level of effort, with the
temporary consensus now correlating as closely as possible with
the eventual resolution later.
One institution for doing all this is the "fact forum"
(or "science
court"), proposed by Arthur Kantrowitz. In this,
competing sides would agree to an impartial but technically
knowledgeable jury, present their cases, and then submit to
cross-examination. The jury isolates areas of agreement as
specifically as possible, and writes a summary at the end.
Advocates not willing to submit their claims to such criticism
are to be discounted. This is a clever suggestion, worthy of
attention and further exploration.
Even so, I would like to offer an alternative proposal, and
encourage people to think up yet more ideas. Fact forums have
problems which alternative institutions might be able to remedy.
Forum participants have an incentive to avoid making claims that
will look silly under cross-examination, but as every lawyer
knows, that is not the same as trying to get the truth out.
Debates favor articulate intellectuals over those with good
"horse-sense." Who should get to represent the
different sides for questions of wide interest? What if few
potential jurors are both knowledgeable and impartial? These
ambiguities, and the non-trivial costs involved, give excuses for
the insincere to decline participation.
In contrast, the alternative I will describe can, for a
well-posed question, create a consensus that anyone can
contribute to, with less bias against the inarticulate. It offers
a clear incentive for contributors to be careful, honest, and
expert. Such a consensus can come much cheaper than a full
debate, and once created can continuously and promptly adjust to
new information. Any side can start the process, and leave the
resulting consensus as an open challenge for other sides to
either accept or change by participating. And there is reason to
believe that such a consensus will express at least as much
foresight, as defined above, as any competing institution.
You may be skeptical at this point. But, in fact, similar
institutions have functioned successfully for a long time, and
are well-grounded in our best theories of decision. I'm talking
about markets in contingent assets, more commonly known as
"bets." Bets have long been seen as a cure for
excessive verbal wrangling; you "put your money where your
mouth is." I propose we create markets where anyone can bet
on controversial scientific and technological facts, and that we
take the market odds as a consensus for policy decisions.
Can this make any sense? Consider how it might work. Imagine
(hypothetically, of course) that there was a disagreement on
whether a programmable nanoassembler would be developed in the
next twenty years, and that policy makers were in danger of not
taking this possibility seriously enough. What could markets do?
Policy makers could take the position that they don't know much
about technology, or even about who the best experts are. They
would simply use the market odds in making policy decisions. If
some market said there was a 20% chance of nanoassemblers by
2005, policy makers might decide the issue was serious enough for
them to set up their own market. They would carefully form a
claim to bet on, such as:
By 2005, there will be a device, made to atomic specifications,
fitting in less than 1 cubic mm, able to run C programs requiring
10MB memory at 1 MIPS, and able to replicate itself in less than
one year from a bath of molecules, each of which has less than
100 atoms.
They would choose a procedure for selecting judges to decide the
question in 2010, and a financial institution to "hold the
stakes" and invest them prudently. And then a market would
be set up, where offers to trade would be matched; policy makers
could even subsidize the market to encourage participation.
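As a numeric illustration (all prices and beliefs hypothetical), consider an asset that pays $1 if the judges eventually rule the claim true:

# A minimal sketch of how a market price on a claim reads as a consensus
# probability. Assume a contingent asset paying $1 if the judges rule the
# claim true in 2010, and $0 otherwise; prices and beliefs are hypothetical.
price = 0.20          # current market price of the $1-if-true asset
implied_probability = price
print(f"market-implied chance of the claim: {implied_probability:.0%}")

# A trader who privately believes the chance is 40% expects a profit from
# buying at 20 cents; buying (and thereby bidding the price up) is how that
# belief enters the consensus.
belief = 0.40
expected_value = belief * 1.00 + (1 - belief) * 0.00
expected_profit_per_share = expected_value - price
print(f"expected profit per $1 share at a belief of {belief:.0%}: "
      f"${expected_profit_per_share:.2f}")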
Ordinary people could take the attitude that those who claim the
consensus is mistaken should "put up or shut up" and be
willing to accompany claims with at least token bets. (Statistics
about how well people do could then be compiled.) When people on
the "pro" side buy bets, they would drive the consensus
price up, toward what they believe. "Con" people would
have to accept this or buy compensating bets to push the
consensus down; they could not just suppress the idea with
silence.
If the different sides soon came to largely agree, they could
gradually sell and leave the market, leaving the consensus
standing. Judges need only be paid when they actually judge, and
incentives to "settle out of court" (not described
here) can make the need for formal judging rare. Thus, even
obscure questions could afford expensive judging procedures.
Individuals or groups who believe they have special insight could
use it to make money if they were willing to take a risk.
Arbitragers would keep the betting markets self-consistent across
a wide range of issues, and hedgers would correct for various
common human biases, like overconfidence. Traders could base
their trades on the results of other relevant institutions, like
fact forums, and so the markets should reflect the best insights
from all co-existing institutions.
Risk reduction, i.e. insurance, is also possible. Policy bodies
and anyone else could bet that things will go against them, and
so be less sensitive to uncertainties surrounding, for example, a
nanoassembler breakthrough.
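A hypothetical numeric sketch of such a hedge, using the same 20-cent price as above and an invented loss figure:

# Sketch of the insurance use: an organization that expects a $1,000,000 loss
# if the claim comes true buys "pays $1 if true" shares so that its outcome is
# the same in either possible world. The figures are illustrative only.
loss_if_true = 1_000_000   # harm suffered if the claim comes true
price = 0.20               # price of a $1-if-true share
shares = loss_if_true      # buy enough shares to cover the full loss
cost = shares * price

outcome_if_true = -loss_if_true + shares * 1.00 - cost   # loss offset by winnings
outcome_if_false = -cost                                 # only the premium is lost
print(f"net if claim true:  ${outcome_if_true:,.0f}")
print(f"net if claim false: ${outcome_if_false:,.0f}")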
Of course, like fact forums, betting markets have problems and
limitations. There is no escaping the costs of thinking carefully
about what exactly one is claiming; a clearly worded claim is
much easier to judge impartially. Science bets can take longer
than most to resolve, making investments in them less attractive.
There are tradeoffs in how long to wait to resolve a bet, and in
how many variations on a question can be supported.
"Moral hazard," where someone might do harm to prevent
a claim like "A person on Mars by 2030" just to win a
bet, should be avoided. Judges should be kept impartial, though
judging in hindsight should be easier than foresight. Market
procedures should discourage conflict-of-interest cheating, such
as brokers who trade for both others and themselves. Perhaps most
limiting, explicit betting markets on science questions seem to
be legal only in the U.K.
Some apparent problems are really not problems. Markets may look
like opinion polls where any fool can vote and the rich get more
votes, but they are actually quite different. In practice,
markets like corn futures are dominated by those who have managed
to play and not go broke. Explicit betting markets cannot be
cornered or monopolized. So, rich people who bet large sums
carelessly or insincerely give their money away to those with
better information. If word of this behavior gets out, they lose
this money quickly, as anyone can make money by correcting such a
manipulation.
While betting markets may be untried as a way to deal with
policy-related fact disputes, they are not untried as a human
institution. Bets are a long-established reputation mechanism and
phrases like "you bet" are deeply embedded in our
language. Scientists have been informally challenging each other
to reputation bets for centuries, with a recent wave of such bets
about "cold fusion." Illegal sports betting markets are
everywhere, and England has had science betting markets for
decades. Many people there won bets on the unlikely claim that
men would walk on the Moon.
Since June 1988, astrophysicist Piers Corbyn has bet to gain
publicity for his theory of long-term weather prediction, betting
against London bookies who use odds posted by the British
Meteorological Service. Over the last six months alone, there is
less than a one in 10^10 chance of someone randomly
winning his 25 bets a month at his better-than-80% success rate.
Yet the Service still refuses to take Piers seriously, or to bet
against him. Bookies have taken on the bets for the publicity,
but are tired of losing, and have adjusted their odds
accordingly. These are the odds that should be used for official
British agricultural policy.
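As a rough check of that figure, here is a sketch that treats each bet as an even-money coin flip, a deliberate simplification since the article does not give the actual bookmaker odds:

# Six months of 25 bets a month at an 80% success rate means at least 120 wins
# in 150. Under the (assumed) even-money simplification, the binomial tail
# gives the chance of doing that well by luck.
from math import comb

n, k, p = 150, 120, 0.5
tail = sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))
print(f"P(at least {k} wins in {n} fair bets) ~ {tail:.1e}")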
Betting markets are also well established in economic theory. A
standard way to analyze financial portfolios is to break them
into contingent assets, each of which has value in only one
possible world. In fact, stock and other securities markets can
be considered betting markets relative to the return one can get
by buying the "market" asset. A "complete
market"--where one can bet on everything--is best, allowing
investors to minimize risk and maximize expected return. Explicit
betting markets, usually on elections, are widely used to teach
MBA students about markets, and one experimenter claims they
predict the final vote tallies better than national polls.
A famous economics article argues that Eli Whitney would have
benefited more from his cotton gin by speculating in
cotton-bearing land than by trying to enforce his patent.
Finally, in the presence of betting markets, perfect decision
theory agents will take all external actions as if they agreed
with the market odds, making those odds a real
"consensus."
Our biggest problem may be how we solve problems. I suggest that
policy makers would do well to use estimates on technology issues
that are as unbiased and predictive as the odds at most any
racetrack. If you agree, help me find a way to give the idea a
try.
References
K.E. Drexler, Engines of Creation, Doubleday, New
York, 1986.
"Feedback Column," New Scientist, 7/14/90.
See also 2/10, 6/23, 7/28.
R. Forsythe, F. Nelson, G. Neumann, J. Wright, "The
Explanation and Prediction of Presidential Elections: A Market
Alternative to Polls," Economics Working Paper 90-11,
U. of Iowa, Iowa City, 4/12/90.
R. Hanson, paper to
appear in Proc. Eighth International Conference on Risk and
Gambling, 8/90.
J. Kadane, R. Winkler, "Separating Probability Elicitation
from Utilities," Journal of American Statistical
Association, June 1988, 83(402), Theory and
Methods, pp. 357-363.
J. Hirshleifer, "The Private and Social Value of Information
and the Reward to Inventive Activity," American
Economic Review, 61(4), Sept. 1971, pp. 561-74.
W. Sharpe, Investments, 3rd Ed., Prentice Hall, NJ,
1985.
Robin Hanson researches artificial intelligence and Bayesian
statistics at NASA Ames, has master's degrees in physics and
philosophy of science, and has done substantial work on hypertext
publishing. To receive his longer paper on the above topic, send
us a self-addressed envelope (with 45 cents postage within the
US), or send electronic mail to hanson@charon.arc.nasa.gov.
For current information and more on idea futures, see Robin
Hanson's Home Page and Idea
Futures - The Concept.
For
reader response to this article, see Update 11.
Upcoming Events
Economics of the Information Age: the Market Process
Approach, an Agorics Project Seminar Series, Sept.
11-Dec. 9, 1990. Sponsored by the Center for the Study of Market
Processes, George Mason University, Fairfax, VA.
Compcon, Feb. 26-28, 1991, San Francisco,
sponsored by IEEE. Includes plenary talk on nanotechnology, Feb.
26, 9:30 AM. Contact Michelle Aden, 408-276-1105.
Molecular Graphics Society Meeting, May 14-17,
1991, University of North Carolina, Chapel Hill, NC. Interactive
graphics, presentation graphics, interfaces, networking, novel
display techniques; includes vendor exhibition. Contact Molecular
Graphics Conference Office, c/o Dr. Frederick P. Brooks, Jr.,
Dept. of Computer Science, Univ. of NC, Chapel Hill, NC 27599-3175.
Space Development Conference, May 22-27, 1991,
Hyatt Regency, San Antonio, TX, sponsored by National Space
Society, Southwest Research Institute. Cosponsored by Foresight
Institute. Will have a session and possibly a workshop on
nanotechnology. Talk abstracts due Nov. 15 to Bob Blackledge,
719-548-2329. Register before Jan. 1 at cosponsor rate of $60:
contact Beatrice Moreno, 512-522-2260.
STM '91, International Conference on Scanning Tunneling
Microscopy, August 12-16, 1991, Interlaken, Switzerland.
Contact Ch. Gerber, fax (1) 724 31 70.
Second Foresight Conference on Nanotechnology,
Nov. 1991, a technical meeting sponsored by Foresight Institute,
Stanford Dept. of Materials Science and Engineering, and the University
of Tokyo Research Center for Advanced Science and Technology.
Dates and details to be determined; please wait for future
announcements.
Science and Technology at the Nanometer Scale,
American Vacuum Society National Symposium, Nov. 11-15, 1991,
Seattle, WA. See STM '90 article elsewhere in this issue.
From Foresight Update 10, originally
published 30 October 1990.
Foresight thanks Dave Kilbridge for converting Update 10 to
html for this web page.