The road to nanotechnology consists of several converging
paths, each leading independently to the Assembler
Breakthrough--the building of the first general molecular
fabricators. Biotechnology is one of these paths, but not
necessarily the shortest one.
Biotechnology seeks to understand and manipulate the molecules we
have inherited through traditional evolutionary processes,
focusing particularly on two chainlike molecules: proteins
(chains of amino acids) and nucleic acids (chains of sugar and
phosphate molecules with pyrimidine and purine bases).
Nanotechnology, by contrast, deals with any material, chainlike
or not, that can be designed and assembled atom-by-atom. In this
sense nanotechnology is broader than biotechnology.
What materials will form the basis of the Assembler Breakthrough?
One could argue that proteins and nucleic acids have the best (in
fact, the only) "track record" as substrates
for nanomachinery, and that these are therefore the materials of
choice for building nanomachinery. But the qualities that made
nucleic acids and proteins good choices as biological materials
on the Earth several billion years ago are less relevant to
nanotechnology today. Evolution selected them because of their
chainlike structure and the ready availability of their component
parts on prebiotic Earth. Molecular chains are favored over other
structures because they can be copied and repaired by relatively
simple molecular machines; Earth's evolutionary process places a
premium on simplicity by emphasizing individual
self-reliance--each individual organism is forced to contain most
of the machinery needed for its own maintenance and replication.
Nanotechnology presents a very different situation: we do not
want self-reliant assemblers. We will build assemblers that rely
on us for support, and cannot function without externally
supplied information, energy, or assistance in replication. This
freedom from traditional evolutionary constraints opens up design
possibilities that have never been exploited biologically. Even
if, for historical reasons, the easiest route to
nanotechnology turns out to lead through protein-based assemblers
programmed with information conveyed by nucleic acid molecules,
we should expect a rapid transition to better materials.
Let's look at where we stand in understanding and using
traditional nanomachinery, then look at some developments in less traditional materials.
The ability to redesign existing proteins (e.g., enzymes,
regulatory proteins, receptor proteins), or to design new ones,
depends on understanding the detailed relationship between
function and configuration.
The amino acid sequences making up proteins are determined by
direct analysis or from translation of the DNA or RNA sequences
that encode them. These methods generate data rapidly.
On the other hand, 3D maps of proteins in their functional
configurations are obtained by X-ray crystallography, sometimes
with the aid of nuclear magnetic resonance (NMR). These are slow,
labor-intensive methods.
The different rates at which these techniques generate data have
given rise to a growing gap between the availability of sequence
data and its interpretation and application:
Sequence data is available for more than 8000 proteins and is
accumulating at an exponential rate (doubling time about
2 years).
Only about 400 proteins have been spatially mapped. The number of these maps
increases linearly (about 40 proteins per year). [1, 4]
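The widening gap can be illustrated with a toy projection. The starting figures (8000 sequences doubling every 2 years; 400 structures growing by about 40 per year) come from the text above; the projection itself is just an illustration:

```python
# Toy projection of the sequence/structure gap described above.
# Starting figures from the text: ~8000 sequenced proteins (doubling
# every 2 years) vs. ~400 solved structures (growing ~40 per year).

def sequences(years, start=8000, doubling_time=2.0):
    """Exponential growth of sequence data."""
    return start * 2 ** (years / doubling_time)

def structures(years, start=400, rate=40):
    """Linear growth of crystallographic structure data."""
    return start + rate * years

for t in (0, 5, 10):
    ratio = sequences(t) / structures(t)
    print(f"year {t:2d}: {sequences(t):>9.0f} sequences, "
          f"{structures(t):4.0f} structures, ratio {ratio:5.1f}")
```

However the parameters are tuned, an exponential divided by a linear function grows without bound, which is why the gap is structural rather than a temporary backlog.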
Sequence data alone gives little indication of function.
Progress in understanding protein function requires spatial maps,
but proteins are difficult to crystallize in forms suitable for
X-ray crystallography. This obstacle is now being surmounted by
growing protein crystals on a mineral substrate, such as
magnetite. The atomic spacing in the mineral surface seems to
affect the pattern of deposition of protein molecules; the result
has been the ability to grow some protein crystals with
unprecedented ease, and in forms never before seen.
THE FOLDING PROBLEM
Proteins fold up into their functional conformations with
little or no outside help; this implies that the amino acid chain
itself contains the information needed to specify the folding
pattern. A fast way to acquire useful data on protein function
might therefore be to compute the most stable spatial
configuration of protein chains from energy considerations and
sequence data alone. This approach, known as "the folding
problem", has slowly been yielding to efforts to solve it. The general case has proved
too difficult to carry out with present-day computers, but the
problem size can be reduced in several ways [1, 4]:
For proteins with sequences similar to proteins of known
structure, take parts of the known structure as givens.
Statistical properties of a sequence can identify
segments that lie inside or outside the folded protein,
or segments that make contact with a lipid matrix
(suggesting a protein destined for a cell membrane).
NMR data can put constraints on distances between
specific amino acid residues in the folded protein.
Exon shuffling (the swapping of DNA segments within the
genome that is known to occur in genes associated with
the immune system, and may turn out to be a much more
general phenomenon) suggests that proteins are actually
composed of a relatively small number of modular units. A
number of such modules have already been identified, but
it is not known to what extent all proteins are modular
in this sense. To the extent that they are, the folding
problem would reduce to a calculation of the packing
configuration of a given set of prefolded modules.
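The flavor of such energy-based folding calculations can be conveyed by a deliberately tiny sketch: the "HP" lattice model (an illustrative stand-in, not a method named in the text), in which each residue is either hydrophobic (H) or polar (P) and the energy simply counts H-H contacts between residues not adjacent in the chain. Exhaustive enumeration works only for very short chains, which is the scaling problem described above in miniature:

```python
# Minimal 2D HP lattice-model folder: exhaustively enumerate
# self-avoiding walks and score each by -1 per non-bonded H-H contact.
# Feasible only for short chains -- the combinatorial explosion is
# exactly why the general folding problem defeats direct computation.

MOVES = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def fold(seq):
    """Return (min_energy, coordinates) for an HP sequence."""
    best = (0, None)

    def energy(path):
        occupied = {pos: i for i, pos in enumerate(path)}
        e = 0
        for i, (x, y) in enumerate(path):
            if seq[i] != 'H':
                continue
            for dx, dy in MOVES:
                j = occupied.get((x + dx, y + dy))
                # Count each H-H lattice contact once; j > i + 1
                # excludes chain neighbors and double counting.
                if j is not None and j > i + 1 and seq[j] == 'H':
                    e -= 1
        return e

    def extend(path):
        nonlocal best
        if len(path) == len(seq):
            e = energy(path)
            if e < best[0]:
                best = (e, list(path))
            return
        x, y = path[-1]
        for dx, dy in MOVES:
            nxt = (x + dx, y + dy)
            if nxt not in path:  # self-avoiding walk
                path.append(nxt)
                extend(path)
                path.pop()

    extend([(0, 0), (1, 0)])  # fix the first bond to remove symmetry
    return best

e, coords = fold("HPPHHPH")
print("minimum energy:", e)
```

The search space grows roughly as 3^n in chain length, so each shortcut in the list above (homology, inside/outside statistics, NMR constraints, prefolded modules) can be read as a way of pruning this enumeration.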
THE ACTIVITY PROBLEM
Some investigators, ignoring spatial conformation, are trying
to determine the functions of proteins from statistical
properties of their sequences. They have determined, for example,
that antigenic activity correlates with certain periodic
variation of hydrophobic residues along a sequence. 
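One such statistic can be sketched as code: the strength of periodic hydrophobicity variation at a chosen period, measured as the Fourier power of a hydrophobicity profile. The Kyte-Doolittle scale and the test sequence below are standard illustrative choices, not details from the work described above:

```python
import math

# Kyte-Doolittle hydrophobicity scale (standard published values).
KD = {'A': 1.8, 'R': -4.5, 'N': -3.5, 'D': -3.5, 'C': 2.5,
      'Q': -3.5, 'E': -3.5, 'G': -0.4, 'H': -3.2, 'I': 4.5,
      'L': 3.8, 'K': -3.9, 'M': 1.9, 'F': 2.8, 'P': -1.6,
      'S': -0.8, 'T': -0.7, 'W': -0.9, 'Y': -1.3, 'V': 4.2}

def periodicity(seq, period):
    """Fourier power of the hydrophobicity profile at one period."""
    h = [KD[aa] for aa in seq]
    mean = sum(h) / len(h)
    omega = 2 * math.pi / period
    re = sum((x - mean) * math.cos(omega * i) for i, x in enumerate(h))
    im = sum((x - mean) * math.sin(omega * i) for i, x in enumerate(h))
    return (re * re + im * im) / len(h)

# A strictly alternating hydrophobic/polar stretch peaks at period 2:
print(periodicity("LKLKLKLKLKLK", 2.0))
print(periodicity("LKLKLKLKLKLK", 3.6))
```

Scanning such a statistic along a sequence flags candidate segments without any knowledge of the folded structure, which is precisely the appeal of the sequence-only approach.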
THE DESIGN PROBLEM
Despite difficulties with the folding problem and the activity
problem, progress has been made (as predicted in 1981) in solving the design
problem: to design a protein sequence that will give rise to a
given activity. Several approaches are being pursued:
Limit the design to include only those aspects of
protein folding which are already understood.
For example, W.
DeGrado at Du Pont has designed and built a protein
that self-folds into a 4-helix bundle. It might be
modified to incorporate biological functions.
Design a protein from native components.
T. A. Jones of Univ. of Uppsala has used this approach to
build retinol-binding protein by fitting together 22
fragments from other proteins. The resulting protein has
a different amino-acid sequence than the protein it
mimics, but has the same shape. Similar modeling of
triose phosphate isomerase and lactate dehydrogenase has
been done by S. Wodak of l'Universite Libre de Bruxelles.
Modify existing proteins.
One recent effort at protein modification involves a redesign
of the antimicrobial drug trimethoprim (TMP) to make it less
toxic. Toxicity results from TMP attacking human dihydrofolate
reductase (dHFR) in addition to bacterial dHFR, its intended
target. The strategy being taken is to reduce the floppiness of
the TMP molecule, so that it fits only its bacterial target and not the human enzyme.
Another example is a redesign of glucose isomerase (commercially
important in corn syrup production) to improve its efficiency, by
taking cues from the structure of triose phosphate isomerase, an
enzyme that catalyzes a different reaction but does so 10,000
times faster. 
Genex has developed a technique for redesigning antibody
molecules. The result is a much smaller antibody that consists of
a single chain instead of four chains, is much easier to produce
in quantity, elicits fewer side effects when used in patients, is
more stable, and binds better to the target molecules. The trick
is to use computer-designed sequences of amino-acids to link
together binding sites which formerly were located on separate
protein chains. The technique may lend itself to the redesign of
many other useful protein molecules besides antibodies. 
Nucleic acids are sequenced either by chopping them into
pieces of all possible lengths, or by causing them to grow into
such a set of pieces in the first place, and then separating the
pieces by electrophoresis. The sequencing procedure is even
easier than that for proteins, and some of the steps have been automated.
About 20 million nucleotides from hundreds of organisms
have been sequenced and the number is increasing
exponentially. The doubling time, currently 2 to 3 years,
is expected to decrease sharply soon. A sequencing rate
of one million bases per day is anticipated by 1996. 
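The read-out step of such a chain-termination run can be sketched in a few lines: given the fragment lengths observed in each of the four base-specific lanes of a gel, the sequence is recovered by sorting all fragments by length. The lane data below is invented for illustration:

```python
# Reconstruct a sequence from chain-termination fragment lengths.
# Each lane lists the lengths of fragments ending in that base;
# sorting all fragments by length reads the sequence 5'->3'.
# (The lane data here is a made-up example.)

def read_gel(lanes):
    fragments = [(length, base)
                 for base, lengths in lanes.items()
                 for length in lengths]
    return ''.join(base for _, base in sorted(fragments))

lanes = {'A': [1, 4, 7],
         'C': [2, 6],
         'G': [3, 8],
         'T': [5]}
print(read_gel(lanes))  # -> "ACGATCAG"
```

The simplicity of this read-out logic is one reason the step automates so readily: the hard part is the chemistry and electrophoresis, not the interpretation.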
As with proteins, to know the sequence is not to know the
function. Some of the most interesting and useful biological
information resides in the local geometry of nucleic acids:
information about gene boundaries, regulatory binding sites,
polymerase binding sites, ribosomal sites, posttranslational
modification sites, etc. While the average spatial architecture
of nucleic acids is known in detail, local variations in this
architecture are hard to study and data is sparse .
The number of nucleic acid structures known from
crystallographic studies is less than 40.
Statistical analysis of nucleic acid sequences can
identify some of these structures in DNA, and can be done
by computer. Reliability varies greatly, but is as high
as 90% in some cases.
A typical cell membrane is like a sea of inert material
with, here and there, a floating island of protein machinery. The
sea is a mixture of fatty molecules (phospholipids, like
lecithin) and cholesterol molecules, the relative proportions of
which determine how wavy and flexible the surface is. Typically
the protein machines extend all the way through the cell
membrane, providing specialized communications links (or in some
cases pores) between the inside and outside of the cell.
By determining what goes in and comes out of a cell, the cell
membrane defines the relations a cell has with the external
world. It is therefore intriguing to think of what might be
possible if such membranes could be deliberately altered, or if
entirely different kinds of active membranes could be designed.
At the Weizmann Institute of Science a group led by Israel
Rubinstein is making membranes from molecules chosen for their
ability to mimic one function of biological membranes: the
ability to recognize ions in the solution surrounding the cell.
These investigators have found that a mixture of
2,2'-thiobisethyl acetoacetate (TBEA) and n-octadecyl
mercaptan (OM) will spontaneously assemble into a layer one
molecule thick on a gold electrode. TBEA is the active element;
OM plugs gaps between TBEA molecules preventing direct access to
the gold substrate. When the coated electrode is put in a
solution with copper and iron ions, it is found that copper ions
are reduced to elemental copper, whereas iron ions are
unaffected. The mechanism depends on the fact that TBEA molecules
have two arms that open just wide enough for a copper ion to slip
in and bind to four oxygens projecting from the arms. This brings
the copper ion to within 7 angstroms (0.7 nm) of the gold
substrate--close enough for electrons to pass by
quantum-mechanical tunneling from the substrate to the copper.
Because of their geometry, iron ions are not accepted into the
arms of TBEA. [13, 14]
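The sensitivity of tunneling to that 7-angstrom distance can be illustrated with the textbook one-dimensional barrier formula, T ~ exp(-2*kappa*d) with kappa = sqrt(2m*phi)/hbar. The 1 eV barrier height below is an arbitrary assumption for illustration; the text gives only the distance:

```python
import math

# Electron tunneling transmission through a rectangular barrier:
# T ~ exp(-2*kappa*d), kappa = sqrt(2*m*phi)/hbar.
HBAR = 1.0545718e-34   # J*s
M_E = 9.1093837e-31    # electron mass, kg
EV = 1.602176634e-19   # J per eV

def transmission(d_nm, barrier_eV):
    kappa = math.sqrt(2 * M_E * barrier_eV * EV) / HBAR  # 1/m
    return math.exp(-2 * kappa * d_nm * 1e-9)

# Transmission falls by roughly an order of magnitude per angstrom:
for d in (0.7, 0.8, 0.9):
    print(f"d = {d} nm: T ~ {transmission(d, 1.0):.1e}")
```

The exponential distance dependence is why the TBEA arms must bring the copper ion so close: a few extra angstroms of separation would shut the electron transfer off almost entirely.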
A group at UCLA led by Donald J.
Cram has launched a full-scale attack on the problem of
nano-effector design.
Working entirely away from the protein/nucleic acid path blazed
by terrestrial evolution over the past several billion years,
this group has designed hundreds of molecules of varying shapes,
hoping to learn how to make molecules with desired catalytic
properties. Cram's co-workers synthesized more than 75 of these
designed molecules and subjected them to X-ray crystallography to
check the correspondence between design and actual structure. A
series of compounds of gradually increasing complexity was then
tested for the intended activity: in one case the ability to
selectively bind certain ions (lithium, sodium, potassium, and
others). The compounds performed extremely well.
In another set of experiments, the aim was to build molecules
able to discriminate between D- and L-amino acids and ester
salts--a task that seemed intractable earlier in this century. So
successful were their efforts that the investigators were able to
build a machine based on the designed molecules; when a 50-50 D-L
mixture was poured into the machine, the machine delivered two
solutions with 86 to 90% separation of the two substances.
In yet another branch of their work, Cram's group is designing
molecules that mimic the actions of enzymes. Free of the
requirement to build everything out of amino acids, they have
been able to come up with molecules far smaller (though not
easier to make) than the enzymes being imitated. Their mimic for
the enzyme chymotrypsin has been synthesized and tested; it
proved to have some, but not all, of the functionality of the natural enzyme.
Diamond is in the news, and this is good news for
nanotechnology. Diamond is a prime candidate material for
building nanomachines for several reasons: the tetrahedral
geometry of its bonds lets it be shaped in three dimensions
without becoming floppy; it is made of carbon, the chemistry of
which is well understood; and carbon atoms make a variety of
useful bonds with other types of atoms. Diamond research may
therefore advance nanotechnology even when it is pursued for its
short-term commercial potential. Progress in understanding and
making diamonds has been driven mainly by work done in the Soviet
Union [8, 9]:
In the 1930s Soviet scientists calculated a phase diagram
for diamond and began looking for easy ways to synthesize it.
In the 1950s, while American industry started
manufacturing diamonds at 2,000 degrees C and 55,000
atmospheres pressure, Soviet scientists developed a vapor
deposition method for growing diamond fibers at 1,000
degrees C and low pressures.
During the 1960s and 1970s, the Soviet group improved on
this process, aiming to produce diamond films.
The technological implications of diamond films have recently
been realized in Japan and the U.S., and so a race has begun to
develop this technology. Dramatic discoveries are being made:
At the University of Texas 10-nanosecond laser pulses are
being used to vaporize graphite, which then deposits as a
film 20 nm thick over areas as large as 10 square
centimeters. The film is diamond-like, but may turn out
to be something new. 
Soviet researchers report the discovery of a new form of
carbon much harder than diamond, called C8.
They use an ion beam of low energy to produce thin films
of the substance. Carbon atoms in C8 appear to
have tetrahedral bonds, but the lattice is somehow
different than in diamond--it may simply be somewhat
random, resembling a glass rather than a crystal. 
Much of the new interest in diamond is motivated by near-term
commercial applications like diamond-coated razor blades,
scratch-resistant windows and radiation-resistant semiconductors
for nuclear missiles. The C8 results, however, are of
special relevance to nanotechnology, showing us that diamond is
just the default form of more general tetrahedral bonding
patterns for carbon. Choosing from among the many possible departures
from crystalline regularity may turn out to be an important part of nanomachine design.
Speaking of crystallinity ... a "new state of matter"
has been announced, called the nanocrystal. The nanocrystalline state is
one in which roughly half the atoms occupy sites in crystal
grains, while the other half are free to move between and around
the grains. Both populations of atoms have the same chemical
composition (titanium oxide, for example), and atoms are easily
exchanged between the grains and the matrix. The response of such
a material to strain is plastic rather than brittle, because
grains can change shape quickly instead of hammering against each
other or being forced apart (cracking). This flow of atoms and
restructuring of grains does not turn the material into a liquid
or a putty; at macro scales, nanocrystalline materials are as
solid as their ordinary counterparts.
Nanocrystallinity is a function of grain size. In nanocrystals
the grains are about 10 nanometers across--1000 times smaller
than in ordinary materials. Small grain size implies large
surface-to-volume ratio and short diffusion "circuits"
around the grains--hence, rapid response to strain. In the case
of nanocrystalline copper, self-diffusion at 20-120 degrees C is
increased by 19 orders of magnitude over ordinary copper!
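A back-of-envelope estimate shows why 10-nanometer grains put so many atoms at interfaces. Treating a grain as a sphere with a boundary shell of fixed thickness (the 1 nm shell below is an assumed, illustrative value), the interfacial fraction scales inversely with grain size:

```python
# Fraction of a spherical grain's volume lying within a boundary
# shell of thickness t: f = 1 - ((r - t)/r)**3, roughly 3t/r for t << r.
# (Shell thickness of 1 nm is an assumed, illustrative value.)

def boundary_fraction(diameter_nm, shell_nm=1.0):
    r = diameter_nm / 2
    return 1 - ((r - shell_nm) / r) ** 3

for d in (10, 100, 10000):  # nanocrystal vs. ordinary grain sizes
    print(f"grain {d:>6} nm: {boundary_fraction(d):.1%} interfacial")
```

With these assumed numbers, 10-nanometer grains put roughly half their atoms in the boundary region, consistent with the description of the nanocrystalline state above, while micron-scale grains put almost none there.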
Another research group is studying the properties of bulk materials
as one or more dimensions of a system are reduced to the size of a
few molecules or less [10, 11]. Previous work has shown
that some properties remain similar to bulk properties: e.g.,
refractive index, dielectric constant, and surface energy. Now
they have undertaken to measure viscosity in thin films trapped
between two solid surfaces. They report that as the liquid layer
thins to less than 10 molecular diameters the liquid stops acting
like a continuum and comes to resemble a series of layers; the
principles of viscosity no longer describe the relationship
between shear forces and sliding motion.
[Figure: Sliding parts in a mechanical nanocomputer. Above:
mechanism for two nanocomputer gates in initial position; one
control rod with two gate knobs is seen laterally, and two more
rods with knobs are seen end-on. Each rod with its associated
knobs is a single molecule. Below: the lateral rod has been
pulled to the left during computation; one end-on rod is now
blocked and the other unblocked, in mechanical mimicry of
transistor action.]
The amount of force required to initiate sliding (the critical
shear stress) is much greater in such systems than that
predicted by extrapolating from bulk properties. Taken at face
value this suggests that nanomachines with moving parts would get
stuck unless the parts remained in continuous motion, even when
lubricants are present. But a better interpretation is that the
concept of liquid lubrication becomes meaningless at the nanometer scale.
Liquids, the atoms of which are not tied down, evade part of the
design process. This is acceptable in a bulk machine, but not in
a nanomachine, the design of which must specify the behavior of
every atom. "Lubrication" in a nanomachine would
consist of an optimization of the chemical type, location, and
orientation of each atom in the machine; it would inhere in the
design of the solid parts themselves rather than in a separate
liquid substance.
Dr. Mills has a degree in biophysics and runs a business in
Palo Alto. He also assists with the production of Foresight Update.