

Robust Self-Assembly Using Highly Designable Structures

by
Tad Hogg*

Xerox Palo Alto Research Center
Palo Alto, CA 94304

*Corresponding author: hogg@parc.xerox.com
Web page for corresponding author: http://www.parc.xerox.com/hogg

This is a draft paper for the
Sixth Foresight Conference on Molecular Nanotechnology.
The final version has been submitted
for publication in the special Conference issue of
Nanotechnology.


Abstract

Self-assembly creates structures through a statistical exploration of many possibilities. In some cases, these explorations give rise to a few highly designable structures that can be formed in exceptionally many ways. Using such structures for self-assembly tasks is a general approach to improving their reliability. This design principle can be applied to a variety of situations, including molecular devices and coordinated behaviors from collections of autonomous robots.

 

1. Introduction

Manufacturing often builds objects from their components by directly placing them in the necessary arrangements. Common examples include buildings, cars and electronic circuits. This technique requires knowledge of the precise structure needed to serve a desired function, the ability to create the components with the necessary tolerances and the ability to place each component in its proper location in the final structure.

When these requirements are not met, self-assembly offers another approach to building structures from components. This method involves a statistical exploration of many possible structures before settling into the final one. The particular structure produced from given components is determined by biases in the exploration, given by component interactions. These biases arise when the strength of interaction between components depends on their relative locations in the structure. These interactions can reflect constraints on the desirability of a component being near its neighbors in the final structure. For each possible structure, the interactions combine to give a measure of the extent to which the constraints are violated, which can be viewed as a cost or energy for that structure. Through the biased statistical exploration of structures, each set of components tends to assemble into that structure with the minimum energy for that set. In these terms, self-assembly can be viewed as a process using a local specification, in terms of components and their interactions, to produce a resulting global structure. The local specification is, in effect, a set of instructions that implicitly describes the resulting structure.
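This view of self-assembly as energy minimization over candidate structures can be sketched in a few lines of code. The toy below is purely illustrative (the letter "components", the repulsion set, and the string "structures" are invented here, not drawn from any of the systems discussed): the local specification is a set of pairwise constraints, and assembly settles on the structure that violates the fewest of them.

```python
from itertools import permutations

def assemble(structures, components, energy):
    """Idealized self-assembly: the biased statistical exploration
    settles on the global structure with minimum energy for the
    given components (the local specification)."""
    return min(structures, key=lambda G: energy(G, components))

# Toy local specification: components are letters, and the "interactions"
# penalize adjacent pairs listed in a repulsion set.
def energy(G, repel):
    return sum((a, b) in repel for a, b in zip(G, G[1:]))

structures = [''.join(p) for p in permutations('ABC')]
best = assemble(structures, {('A', 'B'), ('B', 'A')}, energy)
print(best)  # an arrangement keeping A and B apart
```

The local specification (the repulsion set) never names the final arrangement explicitly; it only biases the search, which is the sense in which it is an implicit set of instructions.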

Self-assembly can be very precise in spite of the inherently statistical nature of the process. Examples include chemical reactions driven by diffusive mixing of the reactants, such as the creation of polymers [32, 33], proteins [37] and molecular assemblies [26], patterned mesoscale objects [3, 39] and structures consisting of tiny robots [2, 4, 14, 45]. This technique can also automatically reconfigure structures when their environments or task requirements change, or when a few components break.

While self-assembly can create a wide range of structures, it has some basic difficulties. First, the precise set of components and interactions that will construct a given global structure can be difficult to determine, especially if the components themselves have manufacturing defects or the environment surrounding the components is noisy. Second, the statistical exploration of different possibilities provides the power of self-assembly, but can also make it difficult to settle on a single final structure, or to resist continual environmental noise once assembled. Third, the assembly process can become stuck in local minima, thereby requiring a long time to identify the final structure.

To help overcome these difficulties, this paper describes some characteristics of the statistical distributions of self-assembled structures. These characteristics in turn suggest a general principle for designing self-assembly processes that minimizes these problems. This principle relies on the fact that many choices for the components and their interactions can produce the same end result. While such choices are often determined by ease of component construction, they also provide an opportunity to improve the performance of the self-assembly process.

 

2. Examples of Self-Assembly

Self-assembly is useful for a wide range of applications. Some of these are described in this section, giving a concrete basis for the discussion of aggregate statistical properties in Section 3. These examples are also used to suggest a number of extensions to self-assembly beyond the construction of specific structures.

For protein assembly, the components are amino acids arranged in a specific sequence. The global structures are the folded three-dimensional shapes of the proteins. The interactions between amino acids depend on their relative locations in the folded structure. In simple terms, the interactions can be expressed as constraints on the relative affinities of polar and nonpolar amino acids [24, 37]. These interactions are determined by the chemical properties of the components, so the relevant degrees of freedom for the local specification are the choices of the amino acids in the sequence. Engineering applications need not be restricted to naturally occurring amino acids, giving the possibility of using additional components, with different interactions. Various folding polymers offer an even larger range of possibilities for engineered structures [32, 33].

This example suggests a general manufacturing strategy: lay out the components connected in a simple structure, such as a chain or, more generally, a planar graph, and then allow it to fold into the final shape. Because the initial connected structure is much easier to create directly than a complex three-dimensional structure, this technique is a useful combination of directed manufacturing of a relatively simple structure followed by self-assembly of the final, more complex, structure. Alternatively, a nominal initial configuration could be directly constructed while subsequent adjustments to the actual environment or failed components take place through self-assembly [15].

A wider range of interactions is possible with manufactured particles of various sizes. At a molecular scale, suitable interactions can produce self-assembled structures with new properties [40]. At somewhat larger scales, attaching DNA to small particles allows using the selective binding interactions of DNA to create specific structures in response to environmental changes, which can act as sensors [9]. For larger particles, individual shapes and surface properties determine the interactions and hence the resulting structures [3].

The widest range of component interactions arises when the components are complex enough to have computational capabilities, e.g., programmable robots [5, 14, 22, 31, 34, 45, 46]. The interactions between robots are determined both by their physical properties (e.g., their weight) and the choices made in their programs (e.g., whether to hold on to a particular neighbor). The component robots assemble into various overall shapes, or global structures. By disconnecting from their neighbors, collections of robots can explore a wider range of topological possibilities than the fixed sequence of amino acids in the protein example.

Collections of robots offer an interesting contrast between self-assembly and direct construction. For instance, each robot could, in principle, be instructed precisely where to go through a predetermined program. While suitable for relatively small groups of robots and a well-understood environment, this programming task becomes increasingly difficult with larger numbers of robots acting in poorly defined or unpredictably changing environments [19]. An alternate and more robust approach is for the programs to specify only simple local interactions that together produce the final desired structure through an exploration of possibilities, i.e., through self-assembly.

As these examples show, the suitability of a global structure need not just depend on its particular physical shape. Instead, its functional properties may be more important for some applications. For example, a task may require exerting particular forces, e.g., to support or move other objects, rather than a specific shape. Thus, self-assembly can be viewed as a technique for finding combinations of components that satisfy some global constraints on behavior. That is, self-assembly can solve a combinatorial optimization or a constraint satisfaction problem [25] where the resulting structure should satisfy as many of the constraints as possible. In the case of modular robots, techniques such as simulated annealing [21] and genetic algorithms [30] have been applied to identify suitable structures [34, 5]. Finally, the statistical exploration of self-assembly is also useful in organizing purely software systems [35].

As a final observation on the application of self-assembly, complex artifacts often consist of a series of levels, where the components used at one level of the structure are in turn formed from a smaller set of components [38]. In such cases, some levels could be assembled through direct construction while self-assembly is used to create others. Such situations give further opportunities to design the components and levels to exploit statistical properties of self-assembly described in Section 3.

 

3. Statistical Properties of Self-Assembly 

Self-assembly can form precise structures beyond the current capability of direct manufacturing. Choosing appropriate components and interactions to take advantage of this capability can be difficult. The most straightforward technique for designing self-assembly is to examine, with a computer simulation, the neighbors of each component in the desired global structure, and then choose the interactions between components to encourage those neighbors to be close together.

Unfortunately, such a design may inadvertently make some other structure even more stable or introduce numerous local minima in the energies leading to a slow assembly process. Furthermore, the limited range of interactions and component types in some cases, such as for proteins, may not be able to produce the desired interactions among all the components. Even worse, the limited knowledge of the precise global structure desired, environmental noise and component defects may make it impossible to determine the best set of interactions in the first place.

Instead, the statistical nature of self-assembly gives rise to a variety of statistical regularities when the number of components is large. In many cases, these regularities can be exploited to design a robust self-assembly process. Examples of such properties are described in the remainder of this section.

 

3.1 Designability: Resisting Component Defects 

One difficulty with designing self-assembly processes is the indirect, or emergent, connection between the interactions and the properties of the resulting global structures. While this difficulty connecting local behaviors with global results arises in many different contexts, a particular consequence for self-assembly is the possibility of errors due to defective components or environmental noise. To help address this problem, it would be useful to arrange the self-assembly so the desired structures can be formed in many ways, increasing the likelihood they will be correctly constructed even with some unexpected changes in the components or their interactions. That is, the resulting global structure should not be too sensitive to errors that may occur in the local specification.

This notion is formalized by the designability of global structures. Specifically, if G is a global structure and L is a collection of components, or the local specification, then the interactions among the components based on their placement in the global structure can be viewed as defining an energy E(G,L). Small values for the energy correspond to global structures that satisfy many of the constraints. The best global structure for a given set of components L is the one that minimizes this energy. With a sufficient statistical exploration of the possible global structures, the minimum energy structure will be the one eventually assembled from L. Notationally, let L ⇒ G denote that the global structure G is the assembled result from L. That is, among all possible global structures, G has the minimum energy for L.

In this context, we can ask how many different component sets assemble to the same global structure G. We refer to this count as the designability of the structure, i.e.,

   d(G) = |{L : L ⇒ G}|    (1)

where |·| denotes the size of a set and {L : L ⇒ G} is the set of all component collections that assemble to G.

A given assembly process can then be characterized by a distribution of designability, i.e., the number of global structures with various designability values. This distribution is given by the values of |{G : d(G) = x}| over a range of choices for x.

Figure 1: Schematic distribution of designability, i.e., the number of different component configurations producing a given global structure. Each point on the curve indicates the number of global structures with a given designability. The long tail of the distribution indicates a few global structures are highly designable, i.e., far more designable than typical cases.

A schematic example of such a distribution is shown in Fig. 1. Significantly, the distribution of designability illustrated here is extremely skewed: a few structures are much more designable than most others. These highly designable structures can be formed in relatively many ways in response to interactions among the component parts. Thus such structures are relatively more tolerant of errors in the choice of components than is typically the case, producing one form of robustness for self-assembly [36].
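Such a skewed distribution can be observed directly in a small lattice toy in the spirit of the protein-folding models cited above [24, 37], though the specifics below are an illustrative construction, not the model used in those references. The sketch enumerates the compact conformations of an 8-monomer H/P chain on a 4x2 lattice, identifies each global structure with its contact map, and counts how many sequences fold uniquely to each structure:

```python
from collections import Counter
from itertools import product

COLS, ROWS, N = 4, 2, 8   # compact conformations on a 4x2 lattice

def paths(path):
    """Hamiltonian paths of the block: maximally compact conformations."""
    if len(path) == N:
        yield path
        return
    x, y = path[-1]
    for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
        if 0 <= nxt[0] < COLS and 0 <= nxt[1] < ROWS and nxt not in path:
            yield from paths(path + [nxt])

def contact_map(path):
    """Pairs (i, j) adjacent on the lattice but not along the chain."""
    index = {p: i for i, p in enumerate(path)}
    pairs = set()
    for i, (x, y) in enumerate(path):
        for q in ((x + 1, y), (x, y + 1)):
            j = index.get(q)
            if j is not None and abs(i - j) > 1:
                pairs.add((min(i, j), max(i, j)))
    return frozenset(pairs)

# A global structure is identified with its contact map; symmetric
# conformations collapse onto the same map automatically.
structures = {contact_map(p)
              for x in range(COLS) for y in range(ROWS)
              for p in paths([(x, y)])}

designability = Counter()
for seq in product('HP', repeat=N):
    # Energy: -1 for every H-H contact; the ground state minimizes it.
    energy = {c: -sum(seq[i] == 'H' == seq[j] for i, j in c)
              for c in structures}
    e_min = min(energy.values())
    ground = [c for c in energy if energy[c] == e_min]
    if len(ground) == 1:              # count only unique ground states
        designability[ground[0]] += 1

# Histogram of designability values, cf. the distribution of Fig. 1.
print(sorted(Counter(designability.values()).items()))
```

Even at this tiny scale, some contact maps collect many more sequences than others, which is the qualitative shape sketched in Fig. 1.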

A simple generalization of this definition of designability is when a variety of global structures have the same functional properties. For instance, if the global structures are physical shapes, rotated versions of these shapes may be functionally identical. Or in the case of tiny robots, matching a shape to within a specific tolerance or delivering a required force, rather than exactly matching a specific shape, may be sufficient. In such cases, rather than requiring a single specific resulting structure, a self-assembly process that produces any structure with the desired behavior can be viewed as successful. This leads to considering functionally identical global structures as equivalent when defining designability. That is, an equivalence relation, ~, can be defined on the global structures that give rise to identical behaviors for the particular application under consideration. Designability then becomes

   d(G) = |{L : L ⇒ G' for some G' ~ G}|    (2)

Thus an additional ingredient to consider for designing self-assembly processes is the actual performance requirement for the resulting structure, rather than limiting attention just to forming a particular structure.

As a final note, from the viewpoint of constraint satisfaction problems, designability is related to studies of situations that give rise to stable solutions [42], i.e., solutions that remain solutions even with a few changes in the constraints. Failing this, solutions should at least be easy to fix when conditions change [12], corresponding to simple repairs that require only adjusting a few components.

 

3.2 The Energy Gap: Reducing Noise Sensitivity

As self-assembly operates, the components perform a biased statistical sampling of the possible global structures on the way to forming the final structure. This process allows a variety of complex structures to be formed. Unfortunately, this exploration can also lead to continual and undesired changes in the final global structure. In some situations it will be possible to terminate the exploration by an explicit command (e.g., a broadcast signal sent to a collection of tiny robots). In other cases, this will not be possible: either because the continual changes are driven by uncontrolled environmental noise or are needed to keep the system adaptable to further unexpected changes, e.g., additional weight added to a group of robots supporting a structure.

Designability addresses the sensitivity of the assembly process to errors in the local specification, or instructions, as described in Section 3.1. Another important property is the extent to which desired global structures can resist continual environmental noise once formed. This property of self-assembled structures can be formalized by their energy gap, the difference in energy, due to interactions among the components, between the global structure with the smallest energy for those components and that structure with the second smallest energy:

   Δ(L) = E(G_2, L) − E(G_1, L)    (3)

where G_1 and G_2 are the global structures with the smallest and second smallest energies for L, respectively. Structures with relatively large energy gaps will be more robust with respect to environmental noise than those with smaller gaps. One way to characterize this value [24] for a given structure G is with the average of the energy gap associated with all component configurations that produce G, i.e.,

   Δ̄(G) = (1/d(G)) Σ_{L ⇒ G} Δ(L)    (4)

This average will be a useful characterization when most values of Δ(L) are fairly close to the average.

Designability reflects the behavior of a given global structure with respect to different sets of components. Thus it characterizes the effect of errors or other changes in the set of components. By contrast, the energy gap characterizes a given set of components with respect to the different global structures that set could form.

Significantly, self-assembly processes with skewed distributions of designability can also produce relatively large energy gaps for the highly designable structures, as illustrated in Fig. 2. This schematic example illustrates one possibility: an abrupt transition in the energy gap as designability increases. Such a situation would be another example of the transition phenomena commonly found in constrained systems, both physical [43] and computational [18]. Another possibility is for a smooth increase rather than an abrupt change.

Figure 2: Schematic illustration of one way robustness of global structures can vary with their designability.

This association is easily understood since small changes in the components will usually result in only small changes in the energies of new configurations. This is because the overall energy E(G,L) is usually due to the combination of interactions among pairs of components, based on their location in the global structure. For structures consisting of many components, changing a few components will only modify a small fraction of the total interactions, and typically give a relatively small shift to the energy associated with each global structure. That is, a small change from L to L' will usually result in corresponding small changes in energy from E(G,L) to E(G,L'). Such shifts are illustrated in Fig. 3, which shows the energies associated with four global structures for L and L' as the black and gray points, respectively. Fig. 3a shows a large energy gap so small changes in the energies of all the global structures do not change the one with the minimum energy. On the other hand, small changes in a situation with a small gap are likely to change the minimum energy structure, as illustrated in Fig. 3b. Thus large gaps are likely to be associated with structures that are energy minima for many different local specifications, i.e., structures that are highly designable.

Figure 3: Schematic illustration of the association between large energy gaps and high designability. Black points are energies for different global structures for a given set of components. Gray points are the shifted energy values due to a small change in the component set. The arrows indicate the original energy gaps. With a small gap, the shifts are likely to change the minimum energy structure as illustrated here.
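The argument behind Fig. 3 can be checked numerically with a deliberately abstract toy: assign random energies to a set of candidate structures, perturb them slightly (mimicking the change from L to L'), and observe how often the minimum-energy structure survives. The structure count, perturbation scale, and gap threshold below are arbitrary choices for illustration.

```python
import random

random.seed(1)   # fixed seed: deterministic illustrative run

def min_and_gap(E):
    """Index of the minimum-energy structure and the energy gap."""
    order = sorted(range(len(E)), key=E.__getitem__)
    return order[0], E[order[1]] - E[order[0]]

tally = {True: [0, 0], False: [0, 0]}    # keyed by "gap is large"
for _ in range(2000):
    E = [random.random() for _ in range(20)]           # energies for L
    g0, gap = min_and_gap(E)
    E2 = [e + random.uniform(-0.02, 0.02) for e in E]  # small change: L -> L'
    g1, _ = min_and_gap(E2)
    bucket = tally[gap > 0.05]
    bucket[0] += (g0 == g1)   # did the minimum-energy structure survive?
    bucket[1] += 1

large = tally[True][0] / tally[True][1]
small = tally[False][0] / tally[False][1]
print(f"survival with large gap: {large:.3f}, with small gap: {small:.3f}")
```

When the gap exceeds the total perturbation scale, the minimum cannot change, so large-gap cases always keep their structure here, while a sizable fraction of the small-gap cases flip.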

Further insight into the possible changes in energies arises from the nature of the constraints involved in a structure. In many cases these constraints are somewhat frustrated, i.e., they cannot all be simultaneously satisfied completely. For example, we may want a structure that is light and strong. Typically in these cases a small change to a global structure G, producing a neighbor structure G̃, will introduce changes in the constraints involving just a few parts of the structure, giving rise to a change in energy δE characterized by the energies involved in just a few constraints. If G is a low-energy structure, such changes are likely to introduce new frustrations and raise the energy, i.e., G has a lower energy than its neighbors and is a minimum in the energy with respect to small changes. Adjusting to these new frustrations will often require further changes, involving additional parts of the structure, to again find a new structure G' with a relatively low energy and, in particular, lower energy than its neighbors.

This discussion suggests two qualitatively distinct causes of the minimum energy gap. First, if G satisfies the constraints particularly well, its energy will be considerably less than any local minima, so the gap will be determined by the typical energy scale of changes in a few constraints, δE. Second, if there are several structures that adjust reasonably well to the frustrated constraints in different ways, the energy differences among these local minima will determine the gap, which will then be considerably less than δE, as illustrated in Fig. 4.

Figure 4: Schematic illustration of energies associated with various global structures. The horizontal axis is ordered so neighboring structures, i.e., those with similar shapes, are close together. In this case the energy gap is determined by a local minimum shown by the solid arrow. The gray arrow indicates the gap due to a small change in structure.

When this distinction holds, we can expect large gaps, characterized by δE, to arise when some structure can satisfy the constraints particularly well. In other cases, the gap will be smaller and characterized by energy differences among alternate ways of satisfying frustrated constraints. Such situations will result in two distinct regimes for the energy gap as illustrated in Fig. 2.

 

3.3 Diversity: Avoiding Local Minima

Designability and the energy gap determine the robustness of the final structure with respect to variations in the components, interactions or environment. Another important question is how long self-assembly takes. While structures can be explored rapidly, especially for molecular sized components, the number of possible structures often grows so rapidly with the number of components that they cannot all be examined in any reasonable amount of time. Fortunately, the biases introduced in the exploration by the interactions can greatly reduce the number of structures that need to be examined.

In some cases, these interactions can also give rise to local minima, structures with lower energy than all their neighboring structures but still with relatively high energy. Such situations lead to sluggish behavior in both physical [13] and computational [18] systems because the system can remain in a local energy minimum a long time. In such cases, the bias in the exploration toward lower energy structures tends to keep the system in the local minimum.

Models of protein folding suggest this need not be a severe limitation for simple interactions [37]. More generally, the extent to which local minima are a problem depends on several factors. First, for some applications a structure corresponding to a local minimum may be quite adequate, even though it does not satisfy the constraints as well as the true minimum. Second, the number of initial structures leading to the true minimum may be much larger than those leading to local minima. This is often the case when the true minimum is associated with a large energy gap and the individual constraints are relatively weak. Third, if the system allows a fine degree of control over the component behaviors, as with modular robots, it may be possible to explicitly plan a sequence of motions to avoid local minima [23]. Finally, local minima may be fairly rare.

One way to reduce the number of local minima is through another statistical property of systems with many components: with a sufficient number of ways to change a structure, i.e., a sufficiently high dimensional space of configurations, there is likely to be at least one direction in the space allowing the energy to decrease. This observation can be formalized by examining the statistical distribution of stability, particularly the existence of energy minima, in the space of configurations. Specifically, the stability is determined by the eigenvalues of the matrix of derivatives (the Jacobian) of the forces acting on the system, e.g., due to component interactions. Generally, increases in the size of such matrices or the variation in their values lead to instabilities, i.e., the local minimum becomes a saddle point with at least one direction of change giving lower energy [11, 17, 27, 28]. In this case, the variation in the matrix values could arise from a diversity of interactions among the components, e.g., due to a large number of relatively weak constraints rather than a few strong ones. These mathematical properties of large matrices involve systems that can change continuously, which will apply to self-assembly tasks where components can move continuously. For systems with discrete positions, with a sufficiently large number of components the behavior in many cases will be similar to that of a continuous system. For other cases, extensions of these techniques to discrete dynamical systems will be necessary [16].
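The eigenvalue argument can be illustrated with a small random-matrix experiment in the style of the stability analyses cited above. The Gaussian interaction model and the unit self-damping term on the diagonal are assumptions made for this sketch, not details taken from those references.

```python
import numpy as np

rng = np.random.default_rng(0)   # fixed seed for a reproducible sketch

def most_unstable_mode(n, sigma):
    """Largest real part of the eigenvalues of a random Jacobian:
    n components with interaction strengths ~ N(0, sigma^2) plus a
    self-restoring (damping) term of -1 on the diagonal."""
    J = rng.normal(0.0, sigma, (n, n)) - np.eye(n)
    return np.linalg.eigvals(J).real.max()

# Weak, diverse interactions leave the configuration a stable minimum;
# increasing their strength (or the system size) pushes an eigenvalue
# past zero, turning the minimum into a saddle with a descent direction.
print(most_unstable_mode(50, 0.05))   # negative: stable
print(most_unstable_mode(50, 0.30))   # positive: unstable
```

The crossover reflects the spectrum of the random part spreading over a disk whose radius grows with both the matrix size and the variance of its entries, so larger or more strongly interacting systems lose stability first.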

Finally, if local minima are a problem, it will be necessary to reduce the bias in the exploration to make changes that temporarily increase the energy more likely. This can be done through an annealing process of slowly cooling the system from a relatively high temperature or, for programmable interactions among robots, using the software equivalent: simulated annealing [21]. However, because such techniques require the system to wait for a series of improving changes to occur by chance, they can still be quite slow. When the components are fairly sophisticated, e.g., with programmable robots, a more powerful alternative is the use of computational markets [29] that allow the system to exploit locally coordinated groups of changes that move around the energy barriers [20].
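As a concrete sketch of the annealing idea, the toy below anneals a walker on an invented one-dimensional energy landscape with a shallow local minimum at x = 10 and the true minimum at x = 50; the landscape, cooling schedule, and constants are arbitrary illustrative choices, not taken from the cited work.

```python
import math
import random

random.seed(2)   # deterministic illustrative run

# Invented landscape: local minimum at x = 10 (energy 2), global at x = 50.
def energy(x):
    return min(0.005 * (x - 10) ** 2 + 2.0, 0.005 * (x - 50) ** 2)

def anneal(x, steps=5000, t0=3.0):
    """Biased exploration with a slowly decreasing temperature."""
    for k in range(steps):
        T = t0 * (1 - k / steps) + 1e-9
        x2 = max(0, min(60, x + random.choice((-1, 1))))
        dE = energy(x2) - energy(x)
        # Uphill moves are accepted with Boltzmann probability exp(-dE/T),
        # which lets the walker climb out of the local minimum early on.
        if dE <= 0 or random.random() < math.exp(-dE / T):
            x = x2
    return x

runs = [anneal(10) for _ in range(20)]
best = min(runs, key=energy)
print(best, energy(best))
```

A purely greedy walker started at x = 10 would stay there; with a slow cooling schedule, restarts of the annealer typically find their way to the global basin.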

 

4. Decomposability: Enhancing Robustness

The nature of the designability and energy gap distributions affects the robustness of self-assembly. Particularly favorable cases occur when highly designable structures are associated with large energy gaps. While such distributions are seen in models of protein folding [24], an important practical question is how widespread they are.

While the full extent of this question remains to be explored, a particularly simple case is when the system decomposes into several independent parts. That is, suppose the local specification consists of parts L_1, ..., L_n and the global structure consists of corresponding parts G_1, ..., G_n. Independence means the energy is the sum of that from the independent parts, i.e.,

   E(G, L) = Σ_j E_j(G_j, L_j)    (5)

With this decomposition, if G is the minimum energy global structure for L, then each part G_j must be the minimum for L_j, otherwise another choice for the jth structure will reduce the energy further. This observation means the number of local specifications L for which G is the minimum energy configuration, i.e., the designability d(G) of G, is just the product of the designabilities of the individual parts:

   d(G) = Π_j d(G_j)    (6)

On the other hand, the smallest energy gap would be due to a change in just one of the parts, since changes in any other parts will increase the energy further. Thus

   Δ(L) = min_j Δ_j(L_j)    (7)

These relations result in enhanced tails for the distributions of designability and the gap. In particular, for a wide range of distributions for the individual parts, the product in Eq. (6) gives rise to a lognormal distribution [1, 7] for d(G). This distribution has a particularly long tail, giving a few highly designable cases as illustrated schematically in Fig. 1.
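The heavy tail produced by the product in Eq. (6) is easy to see numerically: multiplying independent, modestly distributed part designabilities yields an approximately lognormal total whose maximum sits far above its median. The distribution of part designabilities below is an arbitrary illustrative choice.

```python
import random
import statistics

random.seed(0)

PARTS, SAMPLES = 10, 10_000   # illustrative sizes

products = []
for _ in range(SAMPLES):
    d = 1
    for _ in range(PARTS):
        # designability of one part: a modest, well-behaved value
        d *= random.randint(1, 10)
    products.append(d)   # total designability: product over parts, as in Eq. (6)

median = statistics.median(products)
print(max(products) / median)   # a few cases are dramatically more designable
```

Although each factor varies only over one order of magnitude, the products span many orders of magnitude, giving the long tail sketched in Fig. 1.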

Similarly, Δ(L) will be governed by the extreme value distribution [44], again with a relatively extended tail. Significantly, these relations can also act to enhance the relation between those cases with high designability and relatively large energy gaps. This is because G will be highly designable when most of the parts G_j are so. Further, to have a relatively large gap, the gaps Δ_j(L_j) associated with all of the parts must be large. Thus if designability and gap size are somewhat correlated for the parts, these values can become even more related at the high end of the distributions for the combined structures.

Of course interesting self-assembled structures will not have completely independent parts. Nevertheless, this example provides insight into structures with some dependence between the parts, i.e., situations where the energy is nearly decomposable into a sum of contributions from different parts. In such cases Eq. (6) and (7) will be approximately correct and still lead to distributions with long tails. This extension to nearly decomposable structures is significant because many structures, both natural and engineered, are nearly decomposable [38]. That is, they consist of a set of parts with relatively strong interactions within each part and weaker ones between them. The example of independent parts is the extreme case where there are no interactions between parts. Furthermore, this near decomposability often extends through a series of levels in the structure, giving hierarchical structures where interaction strengths decrease with the distance between components in the hierarchy, i.e., the number of levels up the hierarchy required to find a common ancestor of the components. At each level in such a hierarchy, Eq. (6) and (7) will apply approximately, leading to distributions with enhanced tails, at least provided the interactions between parts are relatively weak. This argument suggests highly designable structures will occur in a variety of self-assembly processes.

 

5. Discussion

Self-assembly of highly designable structures is particularly robust, both with respect to errors in the specification of the components and environmental noise. Thus we have a general design principle for robust self-assembly: select the components, interactions and possible global structures so the types of structures desired for a particular application are highly designable.

Applying this principle requires two capabilities. The first is finding processes leading to highly designable structures of the desired forms. That is, even when a few highly designable structures exist, there remains the question of identifying those structures so as to make use of them for particular applications. Or conversely, finding choices of components and interactions for which desired global structures are highly designable. Identifying such processes uses properties of the tails of statistical distributions, which are more difficult to characterize than those of their central parts. However, some specific examples have been identified, e.g., lattice-based models of protein folding [24] and analyses of the genetic code [41], that suggest evolution has taken advantage of this design principle. Whether or not such simplified models accurately capture the behavior of natural protein folding, they show such distributions exist in systems with fairly simple interactions and components. Furthermore, this framework is well-suited for genetic algorithms [10] to find appropriate processes where possible local specifications and global structures correspond to genotypes and phenotypes, respectively.

The second requirement for applying this design principle is the ability to create the necessary interactions among the components. For simple components, the range of possible interactions may be fairly limited and hence further restrict the search for suitable processes. More complex components, such as tiny robots, can have arbitrary interactions programmed into them, subject only to restrictions on the timely availability of required information, which tends to enforce the use of local interactions. For example, the interactions could include negotiation among the components [46] or arbitrage opportunities in market-based systems [6, 29]. The ability to program desired interactions is particularly significant: it allows designed systems to match the simplifications of the models more closely than naturally occurring assembly processes do. Thus, like the development of computational ecologies [19], designed self-assembly shows how the prescriptive use of simple models relating global to local behaviors can yield more accurate analyses than their approximate descriptive use for naturally existing systems. Moreover, the flexibility of programmed interactions makes this design principle particularly applicable to collections of robots.

Achieving a general understanding of the conditions that give rise to highly designable structures is largely a computational problem that can be addressed before actual implementations become possible. Thus, developing this principle for self-assembly design is particularly appropriate in situations where explorations of design possibilities take place well ahead of the necessary technological capabilities [8]. Even after the development of precise fabrication technologies, principles of robust self-assembly will remain useful for designing and programming structures that robustly adjust to changes in their environments or task requirements.

 

Acknowledgments

I have benefited from discussions with Arancha Casal, Don Kimber, Rajan Lukose and Eric Mao.

 

References

[1] J. Aitchison and J. A. C. Brown. The Log-normal Distribution. Cambridge University Press, Cambridge, 1957.

[2] Andrew A. Berlin and Kaigham J. Gabriel. Distributed MEMS: New challenges for computation. Computational Science and Engineering, 4(1):12-16, January-March 1997.

[3] Ned Bowden, Andreas Terfort, Jeff Carbeck, and George M. Whitesides. Self-assembly of mesoscale objects into ordered two-dimensional arrays. Science, 276:233-235, 1997.

[4] Rodney A. Brooks. New approaches to robotics. Science, 253:1227-1232, September 13, 1991.

[5] I-Ming Chen and Joel W. Burdick. Determining task optimal modular robot assembly configurations. In Proc. of the Conference on Robotics and Automation (ICRA95). IEEE, 1995.

[6] Scott H. Clearwater, editor. Market-Based Control: A Paradigm for Distributed Resource Allocation. World Scientific, Singapore, 1996.

[7] Edwin L. Crow and Kunio Shimizu, editors. Lognormal Distributions: Theory and Applications. Marcel Dekker, New York, 1988.

[8] K. Eric Drexler. Nanosystems: Molecular Machinery, Manufacturing, and Computation. John Wiley, NY, 1992.

[9] Robert Elghanian et al. Selective colorimetric detection of polynucleotides based on the distance-dependent optical properties of gold nanoparticles. Science, 277:1078-1081, 1997.

[10] Stephanie Forrest. Genetic algorithms: Principles of natural selection applied to computation. Science, 261:872-878, August 13, 1993.

[11] M. R. Gardner and W. R. Ashby. Connectance of large dynamic (cybernetic) systems: Critical values for stability. Nature, 228:784-785, 1970.

[12] Matthew L. Ginsberg, Andrew J. Parkes, and Amitabha Roy. Supermodels and robustness. In Proc. of the 15th Natl. Conf. on Artificial Intelligence (AAAI98), pages 334-339, Menlo Park, CA, 1998. AAAI Press.

[13] David L. Goodstein. States of Matter. Prentice-Hall, Englewood Cliffs, NJ, 1975.

[14] J. Storrs Hall. Utility fog: The stuff that dreams are made of. In B. C. Crandall, editor, Nanotechnology, pages 161-184. MIT Press, Cambridge, MA, 1996.

[15] James R. Heath, Philip J. Kuekes, Gregory S. Snider, and R. Stanley Williams. A defect-tolerant computer architecture: Opportunities for nanotechnology. Science, 280:1716-1721, 1998.

[16] T. Hogg and B. A. Huberman. Attractors on finite sets: The dissipative dynamics of computing structures. Phys. Rev. A, 32:2338-2346, 1985.

[17] T. Hogg, B. A. Huberman, and Jacqueline M. McGlade. The stability of ecosystems. Proc. of the Royal Society of London, B237:43-51, 1989.

[18] Tad Hogg, Bernardo A. Huberman, and Colin Williams. Phase transitions and the search problem. Artificial Intelligence, 81:1-15, 1996.

[19] Bernardo A. Huberman and Tad Hogg. The behavior of computational ecologies. In B. A. Huberman, editor, The Ecology of Computation, pages 77-115. North-Holland, Amsterdam, 1988.

[20] Bernardo A. Huberman and Tad Hogg. Communities of practice: Performance and evolution. Computational and Mathematical Organization Theory, 1(1):73-92, 1995.

[21] S. Kirkpatrick, C. D. Gelatt, and M. P. Vecchi. Optimization by simulated annealing. Science, 220:671-680, 1983.

[22] Keith Kotay, Daniela Rus, Marsette Vona, and Craig McGray. The self-reconfiguring robotic molecule. In Proc. of the Conference on Robotics and Automation (ICRA98), page 424. IEEE, 1998.

[23] Jean-Claude Latombe. Robot Motion Planning. Kluwer, Boston, 1991.

[24] Hao Li, Robert Helling, Chao Tang, and Ned Wingreen. Emergence of preferred structures in a simple model of protein folding. Science, 273:666-669, 1996.

[25] Alan Mackworth. Constraint satisfaction. In S. Shapiro, editor, Encyclopedia of Artificial Intelligence, pages 285-293. Wiley, NY, 1992.

[26] Tomas Martin, Ulrike Obst, and Julius Rebek Jr. Molecular assembly and encapsulation directed by hydrogen-bonding preferences and the filling of space. Science, 281:1842-1845, 1998.

[27] R. M. May. Will a large complex system be stable? Nature, 238:413-414, 1972.

[28] Ross E. McMurtrie. Determinants of stability of large randomly connected systems. J. Theor. Biol., 50:1-11, 1975.

[29] Mark S. Miller and K. Eric Drexler. Markets and computation: Agoric open systems. In B. A. Huberman, editor, The Ecology of Computation, pages 133-176. North-Holland, Amsterdam, 1988.

[30] Melanie Mitchell. An Introduction to Genetic Algorithms. MIT Press, Cambridge, MA, 1996.

[31] Satoshi Murata et al. A 3-D self-reconfigurable structure. In Proc. of the Conference on Robotics and Automation (ICRA98), page 432. IEEE, 1998.

[32] M. Muthukumar, C. K. Ober, and E. L. Thomas. Competing interactions and levels of ordering in self-organizing polymeric materials. Science, 277:1225-1232, 1997.

[33] James C. Nelson et al. Solvophobically driven folding of nonbiological oligomers. Science, 277:1793-1796, 1997.

[34] Amit Pamecha, Imme Ebert-Uphoff, and Gregory S. Chirikjian. Useful metrics for modular robot motion planning. IEEE Transactions on Robotics and Automation, 13:531, 1997.

[35] A. J. Perlis. Another view of software. In Proceedings of the 8th International Conference on Software Engineering, 1985.

[36] Victoria A. Russell et al. Nanoporous molecular sandwiches: Pillared two-dimensional hydrogen-bonded networks with adjustable porosity. Science, 276:575-579, 1997.

[37] Andrej Sali, Eugene Shakhnovich, and Martin Karplus. How does a protein fold? Nature, 369:248-251, 1994.

[38] H. Simon. The Sciences of the Artificial. MIT Press, Cambridge, MA, 1969.

[39] Elisabeth Smela, Olle Inganas, and Ingemar Lundstrom. Controlled folding of micrometer-size structures. Science, 268:1735-1738, 1995.

[40] S. I. Stupp et al. Supramolecular materials: Self-organized nanostructures. Science, 276:384-389, 1997.

[41] Gretchen Vogel. Tracking the history of the genetic code. Science, 281:329-331, 1998.

[42] Richard J. Wallace and Eugene C. Freuder. Stable solutions for dynamic constraint satisfaction problems. In Principles and Practice of Constraint Programming (CP98), 1998.

[43] Ken Wilson. Problems in physics with many scales of length. Scientific American, 241:158-179, August 1979.

[44] Michael Woodroofe. Probability with Applications. McGraw-Hill, New York, 1975.

[45] Mark Yim. Locomotion with a Unit-Modular Reconfigurable Robot. PhD thesis, Stanford University, 1994.

[46] Mark Yim, John Lamping, Eric Mao, and J. Geoffrey Chase. Rhombic dodecahedron shape for self-assembling robots. Technical Report P97-10777, Xerox PARC, 1997.



Tad Hogg
Thu Oct 8 10:52:12 PDT 1998


