
Must-see gallery of nanomachine simulations

[Welcome Instapundit readers -- subscribe at "Free Registration" in the right-hand column to get nanotech email news deliveries. --CP]

From Mark Sims of Nanorex we heard about these seven nanomachine simulations, all on one web page and operating successfully despite the jiggling of thermal noise. The eighth graphic is a cutaway showing the internals of a speed reducer gear design.

As I explain in my talks when I show these kinds of things: they are neither artists’ conceptions nor illustrations of nanomachines that have been built. Instead, they are engineering designs made using software based on scientific laws — designs that cannot be built today, but that (as far as we can tell now) would work properly if built.

You can hear more about the Nanorex software at their luncheon seminar “Computational Modeling and Simulation for Nanomechanical Engineering” on October 26 at the Foresight Conference. See also the three research lectures by Nanorex staff and affiliates: a busy group. –CP

9 Responses to “Must-see gallery of nanomachine simulations”

  1. Novak Says:

    I have to ask this question, which I raised on sci.nanotech a few weeks ago:

    If this work is so great – and after reading JoSH's book, it does seem pretty impressive – why is it not finding a venue for publication in scholarly journals? (Which reminds me, I still need to review JoSH's book in detail.)

  2. Mark Sims Says:

    There are many reasons for this. Here are my top four:

    1. Lack of formal research in this specific area of nanotechnology. This is due in large part to the lack of NNI-funded programs studying nanosystems. This is beginning to change, however. Earlier this month, the NSF announced a $42M grant program for the study of "Active Nanostructures and Nanosystems", which specifically names "nanoelectromechanical systems (NEMS)", "nanomachines" and "nanodevices" as primary research targets. It is a very interesting read, which can be found here:

    http://www.fedgrants.gov/Applicants/NSF/OIRM/HQ/05-610/Grant.html

    BTW, the deadline is November 29th, so you still have time to get involved on the ground floor.

    2. Very few scientists understand nanomechanical engineering. This area of research is inherently interdisciplinary, requiring broad and deep knowledge of chemistry, physics, mechanical engineering and computer science.

    3. The atomically precise fabrication technology required to build structures of the type seen in the gallery is beyond the current state of the art. Most scientific articles of the kind you refer to involve experimentation and publication of measurement results, not pure theory.

    4. Current computational chemistry tools are difficult to use and lack basic features required to aid nanomechanical engineering design, modeling, simulation and analysis. nanoENGINEER-1 will help change this.

  3. Adrian Wilkins Says:

    Please, please confirm, Mr Sims, that the statement on your "About" page that nanoENGINEER-1 will be licensed under the GPL is true. I can't wait to have a play with it :-)

  4. John Novak Says:

    I’m afraid I don’t see the same situation, Mark. The journal on the topic I read most regularly is the IEEE Transactions on Nanotechnology. That’s a very explicitly interdisciplinary journal, with some 18 or 20 supporting societies under the IEEE umbrella. It’s also not limited to experimental work. I recall more than a few articles on simulation techniques and architectural investigations. There are Quantum Cellular Automata theory papers at all levels, from basic device simulation to architectural approaches to CAD techniques. I’ve also seen various other nanoelectronic architectures investigated, and various other device-level simulations, without experimental results.

    If nothing else, the recent paper by Freitas and Cavalcanti that they published should indicate that the range of acceptance is pretty broad, even if I found it vague and hand-wavy.

    In fact, the 2006 IEEE Nano Conference has separate calls for: nano-robotics; modelling and simulation; nano-carbon, nano-diamond, and CNT based technologies; and nano-circuits and architectures.

    Now, I’ve mentioned the IEEE Trans. Nano. because that’s what I subscribe to and read on a regular basis. I’d be shocked if there were no other peer-reviewed journals (and I mean peer-reviewed by credentialed scientists and engineers, not credentialed attention-seekers) of similar scope and depth out there. It’s a persistent mystery to me why publications of this type are not sought more often.

  5. Mark Sims Says:

    Adrian, this is true.

  6. Andrey Khavryuchenko Says:

    Dear Mark,

    As far as I can see from your simulations, you're not concerned with the chemical stability of the designed devices. Are you using purely empirical force fields, or Hartree-Fock or DFT approximations?

  7. Damian Allis Says:

    Greetings, Andrey!

    As a theoretician (and having run through your site, I know my answer could be much more specific than what I provide here, but we can save the really in-depth discussion for further communications), you know the importance of your question concerning chemical stability and how computational chemists deal with it (or do not deal with it, as the case may be). I’m going to step back briefly and address some of the basic theory (in case anyone wants a little intro or refresher on the approximate methods) before addressing how, it is expected, nE-1 will allow its users to move between the classical and quantum regimes and address the point you raise.

    Experimental chemists must differentiate between stability and reactivity. A stable system is one that would, in the absence of agents that would overcome thermodynamic or kinetic barriers to some process, remain unchanged. Chemical systems become reactive when agents (other molecules or some energy) are introduced to promote a chemical change. H2 and O2, when mixed in the same container, are stable (kinetically, anyway). These two gases become reactive when something is added to the system to overcome the energy barrier that results in the formation of water (sparks, flames, catalysts, whatever).
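
    (To put a rough number on that kinetic stability, here is a minimal sketch of the Arrhenius picture in Python. The ~200 kJ/mol barrier is an assumed, illustrative value for the uncatalyzed reaction, not a measured one.)

    ```python
    import math

    R = 8.314  # gas constant, J/(mol*K)

    def boltzmann_fraction(barrier_kj_mol, temp_k):
        """Fraction of molecular encounters energetic enough to cross
        a kinetic barrier of the given height (Arrhenius factor)."""
        return math.exp(-barrier_kj_mol * 1000.0 / (R * temp_k))

    Ea = 200.0  # kJ/mol -- assumed, illustrative barrier for H2 + O2

    print(boltzmann_fraction(Ea, 298.0))   # ~1e-35: kinetically stable mixture
    print(boltzmann_fraction(Ea, 1500.0))  # ~1e-7: a spark makes it reactive
    ```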

    Theoretical chemists must differentiate between simulation and modeling. Simulation is the analogue of chemical stability: a simulation is run assuming that components ARE NOT reactive, only interactive. One can answer questions pertaining to structure or electrostatic interactions, but not how molecules come together to form new molecules through chemical reactions (although methods do exist that demonstrate one could do such studies). The presumed persistence of the chemical bond over the course of a simulation is what allows theoreticians to use simple classical approximations of atomic interactions. These classical ball-and-spring approximations greatly reduce the calculation time of large molecules (proteins, small pieces of cell membrane, etc.). The large bearing in the Nanorex gallery takes about 35 seconds to minimize in the nE-1 force field on my laptop. That’s about 1/10th the time it takes to optimize a benzene molecule (with only 12 atoms) at a reasonable level of theory using quantum calculations. These ball-and-spring approximations are what we refer to as force fields, and many flavors exist, usually geared toward answering different types of questions or obtaining different degrees of accuracy.
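
    (To make the ball-and-spring picture concrete, here is a minimal Python sketch of a single harmonic bond term and a crude steepest-descent minimization. The spring constant and equilibrium length are illustrative textbook-style values, not the actual nE-1 force field parameters.)

    ```python
    import numpy as np

    K = 450.0   # kcal/(mol*A^2) -- assumed, illustrative spring constant
    R0 = 1.54   # Angstrom -- equilibrium C-C bond length

    def bond_energy(a, b):
        """Harmonic 'spring' energy of one bond: E = 0.5*K*(r - R0)^2."""
        r = np.linalg.norm(b - a)
        return 0.5 * K * (r - R0) ** 2

    def minimize(a, b, step=1e-3, iters=500):
        """Crude steepest descent: slide both atoms down the energy gradient."""
        for _ in range(iters):
            d = b - a
            r = np.linalg.norm(d)
            grad = K * (r - R0) * (d / r)  # dE/db; dE/da is the negative
            b = b - step * grad
            a = a + step * grad
        return a, b

    a, b = np.zeros(3), np.array([1.8, 0.0, 0.0])  # start with a stretched bond
    a, b = minimize(a, b)
    print(np.linalg.norm(b - a))  # relaxes to ~1.54 A, the spring's rest length
    ```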

    At the core of your question is the validity of the classical approximation not for the simulation of the structures, but for the treatment of the structures as chemically inert objects that will not undergo spontaneous bond breaking and reforming to ruin the mechanical workings of the structures. With rare exceptions (that will require considerable effort to implement if the winds of theoretical popularity blow us all that way), classical treatments (molecular mechanics, molecular dynamics) will never give us that information directly. The only way to (begin to) obtain those answers is through quantum chemistry, where no assumptions of atomic connections are made and the electrons in their orbitals define a spectrum of interactions and interaction energies that range from strongly covalent (carbon bonds in diamond) to weakly electrostatic (dispersion).

    The divide that separates simulation and modeling is artificial, a consequence of a very large gap between our ability to select molecules to study and our ability to find enough computers in the office to perform the study. Over the time I’ve been active in computational chemistry, I’ve found that I’m always waiting the same amount of time for a calculation to finish. I’m just waiting on much larger systems now. The divide between simulation and modeling exists, and CAN always exist, because theoreticians can also differentiate between stability and reactivity in research design. Systems not PREDICTED to be reactive can be adequately handled by classical methods. This is to say, a stable system to a molecular dynamics researcher is one in which the springs connecting atoms never break, only stretch. This is, when the system is known to be stable, a perfectly fine approximation (with the only grievance in the simulation being the limitations of the classical force field you’re using). Theoretical biochemists studying protein dynamics live and die by this approximation, yet their abilities to model structural and mechanical properties of proteins, ion channels, ligand binding, etc., are far from limited.

    That said, it will be a better world when we can study everything quantum mechanically (IMHO. Don’t want to step on toes). So, to finally address your specific questions, the chemical reactivity and stability of these systems are ABSOLUTELY a concern.

    The classical simulations are, obviously, much faster to run and allow for far greater throughput of ideas. In that light, the classical simulations are the second level of abstraction, hovering in their approximations above the quantum calculations. In the near term, classical dynamics leads the way to what we believe to be plausible designs. With a design or, more usefully, a design motif in hand from the classical dynamics treatments (such as the sulfur gears in the speed reducer in the gallery, which we use in many other systems), the procedure is to reduce the nanostructure to the regions where we believe chemical stability is an issue. With those areas identified, we then apply quantum chemical treatments to try to find the holes in our armor. Automating such a process is far from trivial, so we’re doing most of the analysis by hand and using the procedures we develop that way to direct what will ultimately be a (hopefully!) user-friendly interface.

    The quantum has lagged behind the classical, but results are finally starting to trickle in, and we (I) hope to have them, along with the procedure we’ve used to examine the gear systems, available on the http://www.nanoengineer-1.com website in the near future. To date, the quantum calculations have involved only the binding energies of the interfaces between gear teeth at the B3LYP/6-31G(d) level of theory; these are the regions in the designs we believe most likely to sit in environments that could lead to chemical reactions (the point of the quantum work being the identification of reaction barriers, the binding energies of the stationary gears at equilibrium, the repulsive energies from having the gears too closely spaced, and so on).

    It’s important to note that even the quantum chemical methods we (“we” as in “everybody”) use have their limitations (why else would we call them the “approximate methods”?), and we have to match the choice of theory to the problem being addressed. For instance, the sulfur-sulfur interactions in the gears appear entirely repulsive in the density functional theory calculations, but DFT will not give you the binding energies associated with dispersion forces, so the same calculations must be ramped up to Møller-Plesset perturbation theory, which EVERYONE knows (of course!) is a very cruel thing to do to a typical desktop computer.
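
    (For the curious, here is a hedged sketch of that two-step energy screen using the open-source PySCF package rather than Nanorex’s actual toolchain. An H2S dimer stands in for a sulfur-sulfur tooth contact; the geometry and 4 A separation are illustrative, not an optimized gear interface, and a real study would also apply a counterpoise correction for basis set superposition error.)

    ```python
    from pyscf import gto, scf, dft, mp

    # Rough H2S monomer geometries (Angstrom); the second is shifted 4 A in z.
    H2S_A = 'S 0.000 0.000 0.000; H 1.340 0.000 0.000; H -0.047 1.339 0.000'
    H2S_B = 'S 0.000 0.000 4.000; H 1.340 0.000 4.000; H -0.047 1.339 4.000'

    def mol(atoms):
        # '6-31g*' is PySCF's name for the 6-31G(d) basis mentioned above
        return gto.M(atom=atoms, basis='6-31g*', verbose=0)

    def e_b3lyp(m):
        mf = dft.RKS(m)
        mf.xc = 'b3lyp'
        return mf.kernel()

    def e_mp2(m):
        # MP2 recovers the dispersion binding that plain B3LYP misses,
        # at a much higher computational cost
        return mp.MP2(scf.RHF(m).run()).run().e_tot

    # Interaction energy: E(dimer) - E(monomer A) - E(monomer B)
    for label, energy in (('B3LYP', e_b3lyp), ('MP2', e_mp2)):
        e_int = (energy(mol(H2S_A + '; ' + H2S_B))
                 - energy(mol(H2S_A)) - energy(mol(H2S_B)))
        print(f'{label} interaction energy: {e_int * 627.5:.2f} kcal/mol')
    ```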

    To close with our general strategy and thoughts on molecular machine design: we hold very hard to the proof of concept and to what the current state of chemical theory will allow us to do, treating the devices you see as possible structures in their testing phase. The gallery is wonderful eye candy, but we’re not happy with these designs (or any others) until they’ve run the gauntlet. They serve as much as testing grounds for our force field and software as they do as sources of VERY GOOD QUESTIONS we need to address in the design and testing phases. Given what one might call the pre-embryonic state of advanced molecular manufacturing, pushing ideas and designs into the molecular manufacturing discussion that don’t hold up to the highest level of scrutiny modern science allows does nothing to promote the work or defend our arguments; such contributions do far more harm than good. If there are chemical problems in the designs, we definitely want to know where they are so we can design around them or, if necessary, scrap the design for another. Much more to follow.

    sweating the small stuff,
    Damian Allis, Ph.D.

  8. Andrey Khavryuchenko Says:

    Damian,

    Initially I had no intention of following up, but after reading your interview I changed my mind.

    First of all, we’re not theoreticians. Nor are we experimentalists. We’re practitioners – we use computational chemistry methods in tight integration with experiment.

    We don’t find your distinction between simulation and modelling useful (for our purposes), at least not with your definition.

    We apply computational chemistry to real industry problems in the spirit of L. Gribov’s idea of multiple modelling levels, each providing different approximations and, thus, a different understanding of atomic-level processes.

    So you’re not quite correct about my original “core question”. The core question is whether the modelling method used reproduces the properties of interest in the studied system.

    Molecular mechanics and dynamics can represent the spatial structure of many systems well enough, BUT cannot reproduce ANY one-electron properties of the system under study, such as electronic spectra or chemical reactivity.

    As the father of computational quantum chemistry (J. Pople, Nobel laureate) said: “For any quantum chemical method we can find a system which cannot be simulated correctly.” Therefore, realistic simulation at the nanoscale must include verification of the theoretical models using corresponding experimental methods.

    Dealing with vastly different substances, from polysiloxanes to carbon black to metal nanoparticles to pure silicon surfaces, we have adopted a single rule:
    We should compute (mainly) the properties we can measure, and we should measure the properties we can compute. If we can’t yet connect the computed and the measured, we should develop the tool (theory) to link the computational results to the measured ones.

    So, back to my original comment. What caught my eye was that the proposed models may be quite reactive in any realistic environment and may even change their structure when operated as designed, due to mechanochemical reactions. These structures have yet to be synthesized, so no experimental validation can be presented. And since you’ve used only molecular mechanics force fields, you’ve excluded from your modelling all one-electron properties, including the very possibility of a chemical reaction.

    Sincerely,
    Andrey

  9. www.somewhereville.com Says:

    [...] I think AMM is a living, breathing thing reacting to scientific progress. There’s a very solid concept of “self”, but it will ultimately be driven wherever science takes it. A big part of what Nanorex and nE-1 are about is testing the physically realizable parts of AMM originally put forth by Drexler. I made this point on nanodot recently. We AMM proponents need to be as scientifically accurate as possible. By putting forth bad designs or flawed concepts, we would do more harm than good, so we need to make sure that what we think will work really WILL work. In the absence of a wet lab to fabricate this stuff, we have to use theory to test things out, which, despite its many shortcomings (that’s why we call them the “approximate methods!”), manages to get the right answers when the right theory is applied to the right question. We’re stubborn in our belief in the plausibility, but we’re not THAT stubborn. If things aren’t going to work, we want to know so we can address the problems and work around them. As a practicing researcher, I’d much rather have someone criticize a speed reducer gear than the feasibility of molecular manufacturing, because the two parties can sit down and address the gear directly: the design and all the atoms are sitting in front of us. Ideally, the critics would put some energy into constructive alternatives. There is no greater thrill to me than having an idea brainstormed in a room full of people who know their stuff and are interested in finding a way from A to Z. It is my expectation that what Nanorex is doing will change the tone of the discussion in that respect, away from catalytic handwaving and towards critique of potentially realizable systems. [...]
