Nanotechnology Could Speed Internet 100x
Roland Piquepaille writes "Using a new hybrid material made of nanometer-sized 'buckyballs' and a polymer, Canadian researchers have shown that nanotechnology could lead to an Internet based entirely on light and 100 times faster than today's. This material allowed them to use one laser beam to direct another with unprecedented control, a feature needed inside future fiber-optic networks. These future fiber-optic communication systems could relay signals around the global network with picosecond (one trillionth of a second) switching times, resulting in an Internet 100 times faster. Please note this discovery appeared in a lab: we'll have to live with our current networks for some time. This overview contains more details."



August 14th, 2004 at 3:54 PM
Molecular Nanotechnology?
From what I understand, molecular nanotechnology is the ability to create almost any arrangement of atoms allowed by chemistry quickly, inexpensively, and with atomic precision and reasonable efficiency. What puzzles me is why this article and the article "Snapshots of Molecules in Movement" were placed in the category of "Molecular Nanotechnology." It is not very clear to me how these articles are related to MNT as defined above. Nanomaterials and snapshots of molecules seem to be nanoscale bulk technologies. My best guess with regard to the first article is that, perhaps, the femtosecond laser pulses could be used to bootstrap MNT (the article mentions that molecules are moved by the pulses). However, I have no guess as to how the second one is related to MNT. I would appreciate it if someone could please explain this.
August 14th, 2004 at 6:46 PM
Re:Molecular Nanotechnology?
I'm not sure either – what I am sure about however, is that it's going to take a multitude of technologies to ultimately create a working MNT assembler. Who knows? This just might be one of the key components to get the Assembler up and running.
August 15th, 2004 at 4:22 PM
Re:Molecular Nanotechnology?
While this is somewhat offtopic, I am convinced that nanotechnology assembly will be a footnote, yes, a mere footnote, to the "really awesome stuff". Drexler and others here may disagree with me on this, but I believe that direct matter control, through the use of various quantum processes, whether they be entanglement, "scalar electromagnetics", or the "A Field", will enable us to have full-scale, general-purpose NON-mechanical assembler 'replicators', matter/mass converters, teleportation devices, and the ability to directly engineer such fundamental "constants" as the Schroedinger Equation, Planck-scale quanta, and more. Quantum vacuum computers would make nanocomputers second-rate scrap. But MNT should be fully funded, and we can have near-term fruits of MNT, such as diamond composites, convergent assembly factories, and nanocomputers. I sincerely want the corporations, governments, and academia to just accept the MNT proposals that Foresight has laid out already, and stop wasting time on nanostructured bulk technologies.
August 16th, 2004 at 5:51 AM
Re:Molecular Nanotechnology?
You're an idiot who does a great disservice to MNT with your parroting of such quackery as 'scalar electromagnetics'.
Let me guess: you're a sub-20-year-old, uneducated schmuck who has never so much as glanced at any book on real science, who spends his time guzzling Mountain Dew and stuffing his fat face with Doritos, fantasizing about the day his pop sci-fi books will come true and rescue him from the unbearable existence he calls his life.
Here's an idea for you. Take out a life insurance policy for at least a million, naming Foresight as the beneficiary. Wait two years, and throw yourself from the window of a 40 story building. Then, you too will be able to advance science.
August 16th, 2004 at 9:36 AM
Re:Molecular Nanotechnology?
Your ridiculous personal attacks aside, you clearly are the one who understands nothing about the fundamentals of electromagnetism. I would gladly be willing to educate you, if you were not so willfully ignorant and full of hatred against someone who offers some new insights. Let's start with the real basics. The vacuum of space is not empty; it is seething with energy. Copious amounts of energy. Quantum potential. Standing waves. Some try to describe it as a collection of internested folded strings and tangles of vibrating fabric. The energy is not a solid substance, it is not the ancient ether, but rather a dynamic aether full of waves of constantly-changing energy, like a sea, and all matter is a fold, a kink, a temporary swirl in the sea of energy. Want proof? Go check out the Casimir Effect, my friend. Go check out the existence of superfluid helium, and the very fact that it is impossible to freeze a substance to true zero motion, because that zero-point energy is constantly in motion. In fact, let me tell you something that should excite you greatly: nanotechnology and zero-point energy manipulation overlap. How, you may ask? It has been shown that the best way to use the Casimir Effect is to construct diodes at the nanoscale, and in fact Robert Forward once put out a patent for a collection of nanoscale- or microscale-thin metal leaves that could extract some amounts of energy from the zero-point vacuum source. Once we get better microtech and early nanotech rolling, we can build such machines/systems. Let me give you a suggestion, Yoda: stop being so arrogant and willing to use ad hominem attacks against people. That attitude is what does a real disservice to MNT.
August 16th, 2004 at 11:49 AM
Re:Molecular Nanotechnology?
Before assemblers are developed, mass assembly of unlimited-length carbon nanotubes is likely to be an industrial endeavor. Let us examine the world changes that the "mere" mass production of carbon nanotubes will be able to bring, if it happens: materials at least dozens of times stronger than steel at a fraction of the weight. This will bring atmosphere-piercing buildings; conventional steels and alloys being made obsolete (perhaps steel will be as bronze is today, once the main societal material, now a mere footnote); ultra-tough armor and armor-piercing projectiles; woven threads that can be integrated into clothing, making clothing that can last a lifetime and not be torn or wear away easily; molecular nanotube-based computers; needle-free injections that use nanotubes to deliver medications directly through the walls of cells and the skin, without tearing up skin and blood cells; amazingly tough and strong drill bits allowing us to drill deeper and further into the Earth's crust; spacesuits that are thin and flexible but strong and resilient; Earth-to-orbit space planes that use a single stage instead of multiple disposable stages; nano fuel cells, nano batteries, and nano solar cells all based on fullerene; and MUCH MUCH MUCH MUCH MORE! And this is ONLY with the mass assembly of carbon nanotubes! Is that amazing or what? :O)
August 17th, 2004 at 5:12 AM
Re:Molecular Nanotechnology?
Get a clue. I've taken graduate-level courses in electrodynamics, whose mere preface would halt the mental processes of your pea-sized brain.
Type 'static electrodynamics quackery' into Google and educate yourself.
August 17th, 2004 at 7:46 AM
Silly Wabbit, Trix are for kids…
Roland, you should think about a claim before you bother to repost it on nanodot. Let's look at what is being said: "According to Sargent, future fibre-optic communication systems could relay signals around the global network with picosecond (one trillionth of a second) switching times, resulting in an Internet 100 times faster."
Let's match it to the data. I run a traceroute from Seattle to a site in Western Australia (about halfway around the world). It takes 29 hops, but looking at the delays it would appear that the signal may only be converted between light and electricity perhaps 4-5 times. Perhaps between Seattle and Tacoma [WA], Tacoma and San Jose [CA], then Palo Alto(?) [CA] to NZ, from NZ to what is probably Eastern AU, and then from Eastern AU to Western AU (I'm guessing at some of this, but it makes sense from a routing standpoint).
Now the total traceroute delay indicates a transit time of 300 ms (0.3 sec). To be 100 times faster the transit time has to be reduced to 3 ms. The largest single delay in this is when the time jumps from 80 ms to 230 ms (150 ms diff.) going across the Pacific. Let's look at this. The circumference of the Earth is ~40,000 km. Let's assume the CA-NZ distance is ~10,000 km — light travels 300,000 km/sec, so it is going to take a minimum of 33 ms to get the signal across the Pacific (actually longer, since light is slower in a fiber). So cutting 150 ms to 33 ms would only be ~5x faster — not 100x faster. The problem of having to convert long-distance optical signals to electricity and back to amplify the signal every ~100 km or so was solved many years ago with erbium-doped fiber amplifiers (EDFA). According to a Corning publication in fall 2002 [1], "Corning and Ceyba… with Corning LEAF fiber… combined performance delivers error-free transmission over 4,000 km *without* electrical regeneration, on 160 channels extending on both the C- and L-bands…". You still have to amplify the signal (though unamplified signals over 20,000 km are now possible), but that *isn't* the source of the delay in the most up-to-date systems.
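The arithmetic above can be checked in a few lines of Python. This is only a sketch; the ~10,000 km CA-NZ distance and the 150 ms Pacific-hop delay are the comment's own estimates, not measurements:

```python
# Sanity-checking the "100x faster" claim with the figures quoted above.
# Assumptions from the comment (not measurements): ~10,000 km CA-NZ hop,
# 150 ms observed delay across the Pacific.
C_VACUUM_KM_S = 300_000        # speed of light in vacuum, km/s
FIBER_INDEX = 1.5              # light in glass fiber travels roughly c/1.5

def min_transit_ms(distance_km: float, in_fiber: bool = True) -> float:
    """Minimum one-way signal delay over a given distance, in milliseconds."""
    speed = C_VACUUM_KM_S / (FIBER_INDEX if in_fiber else 1.0)
    return distance_km / speed * 1000.0

floor_ms = min_transit_ms(10_000, in_fiber=False)  # ~33 ms, even in vacuum
observed_ms = 150.0
print(f"physical floor: {floor_ms:.0f} ms; best possible speedup: "
      f"{observed_ms / floor_ms:.1f}x, nowhere near 100x")
```

No amount of faster switching gets you below the vacuum-light-travel floor, which is the whole point of the comment.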
The delay time here is *not* the time it takes to convert the signal from electricity to light and back — it is (a) the distance the signal must travel (speed-of-light limit); and (b) the queue time to get onto the fiber. The second delay (b) is helped by significantly increasing the bandwidth of the fiber(s) [generally through wavelength division multiplexing [WDM]]. This however is based on the fiber technology (impurities, wavelength dispersion, etc.) and the lasers and detectors at either end of the fiber — *not* on the time it takes to convert or switch the optical signal.
Conclusion — Sargent's claims regarding the impact on the Internet are either highly misleading or simply incorrect. Another case of "nanohype" perhaps.
Robert
1. http://www.corning.com/opticalfiber/guidelines_magazine/Fall_2002/gl3678.pdf
August 17th, 2004 at 8:17 AM
Re:Molecular Nanotechnology?
Actually, MNT is more about molecular arrangements allowed by physics than by "chemistry". The question of whether or not "classical" MNT (which is usually based on the concept of mechanosynthesis) is or is not "classical" chemistry is one major source of the differences of opinion between Drexler and MNT proponents and MNT naysayers (e.g. Smalley, Church, etc.).
If you consider the proposals for MNT based on mechanosynthesis (not strictly necessary, as you can get MNT based on non-mechanosynthetic assembly methods, particularly classical biochemistry), then one is usually dealing with an SPM/AFM-like device manipulating specific small molecules with device "tips" that are able to control specific assembly operations/reactions. To get to this point you have to solve a lot of positioning/reliability questions that have some similarity to what the semiconductor industry has had to go through. If you misalign a mask over a chip or dope the chip with the wrong atoms during chip manufacture, you get chips that don't work. The same applies to MNT. One has to ask questions like "Is my device positioned over the XYZ tip bin?", "Is my device positioned over the ZYX reactant source?", "Is my device positioned over the location where I want the reaction to occur?", "Did the reaction take place successfully?", etc. You *either* have to assume that you are positioning your manipulators precisely to subatomic accuracy (something that cannot easily and reliably be done at this time) *or* have some type of feedback (such as taking a "snapshot") that allows you to verify and/or manipulate the assembly process.
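That verify-then-act loop can be sketched in code. Every class and method name below is an invented placeholder (no such assembler hardware or API exists); the point is only the control flow: confirm position at each stage, or abort rather than risk forming a bond in the wrong place.

```python
# Toy sketch of the feedback loop for one positional assembly step.
# All names here are hypothetical placeholders, not a real API.
class MockAssembler:
    """Stand-in device that faithfully reports where it was moved."""
    def move_to(self, target): self.at = target
    def position_confirmed(self, target): return self.at == target  # "snapshot" feedback
    def trigger_reaction(self): self.reacted = True
    def reaction_succeeded(self): return getattr(self, "reacted", False)

def assembly_step(device, tip_bin, reactant_source, reaction_site) -> bool:
    # Check positioning at each of the three stages the comment lists.
    for target in (tip_bin, reactant_source, reaction_site):
        device.move_to(target)
        if not device.position_confirmed(target):
            return False               # positioning failed: abort this step
    device.trigger_reaction()
    return device.reaction_succeeded() # verify the reaction actually occurred

print(assembly_step(MockAssembler(), "XYZ tip bin", "ZYX reactant", "work site"))
```

The alternative branch (blind subatomic-precision positioning with no feedback) would simply delete the `position_confirmed` checks, which is exactly what cannot yet be done reliably.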
So the articles on manipulating molecules with light and/or taking pictures of where they are at femtosecond rates would seem to qualify as being important to at least some nanoassembly strategies. Light switching improvements are only important if you believe we are going to have fully optical computers in the near future and such computers will be required for the analysis or management of nanoassembly processes. (One could make the argument that one is going to need much faster computers for the analysis of images taken every few femtoseconds but this is a stretch IMO).
Robert
August 17th, 2004 at 10:28 AM
Re:Silly Wabbit, Trix are for kids…
The benefit is probably vastly overestimated, but an all-optical switching mechanism would surely prove useful. Imagine a warehouse filled with a 3D grid of N x N x N optical computers, where N = 100 (used for protein folding or rational protein design, for example; or even computer-based design and simulation of nanostructures). Each node needs to be able to communicate with 1,000,000 other nodes. Even if each node communicates directly only with those in its immediate vicinity (some algorithms do parallelize well like this), that's still a lot of connections. And at this short scale of distance, the speed of light isn't a limiting factor, it's the speed of switching.
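The node counts above are easy to check. N = 100 comes from the comment; the 6-way nearest-neighbor connectivity below is a simplifying assumption (boundary effects ignored):

```python
# Node and link counts for the hypothetical N x N x N optical cluster.
def total_nodes(n: int) -> int:
    return n ** 3

def neighbor_links(n: int, degree: int = 6) -> int:
    """Approximate directed nearest-neighbor links in an n^3 grid
    (assumed 6-connectivity, ignoring boundary effects)."""
    return total_nodes(n) * degree

print(total_nodes(100))     # 1000000 nodes, as stated in the comment
print(neighbor_links(100))  # 6000000 directed local links to switch
```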
August 17th, 2004 at 4:07 PM
Re:Silly Wabbit, Trix are for kids…
Er, no. Switching time is not really related to end-to-end latency, and they're talking about bandwidth, not path delay.
Sure, you can't reduce total latency below the speed-of-light delay, but you can carry more total data on a fiber if you can switch and sense faster. If, as they claim, you can switch in a picosecond, you can run a data stream on the order of 1 Tbps over a single fiber at a single wavelength. You can't switch that fast in an electro-optical device. You can do a certain amount of WDM instead, but that requires that you replicate a lot of the hardware for every wavelength, which is Not Cheap™.
… but that number does sound like hype. Pulses spread, there are power limits, you'd have to build reasonably complex all-optical switching and buffering systems, especially if you wanted to avoid massive replication of the switching processors. That means that, unless you got really clever, you'd have to build VLSI out of this stuff, which means you may run into optical limitations on device dimensions, the speed of light delay within a chip could constrain your system design, the fabrication is likely to be hard, and so forth.
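The "~1 Tbps at a single wavelength" figure above is just rate arithmetic: one bit per switching interval. This is an idealization that ignores coding overhead, pulse spreading, and the power limits mentioned in the comment:

```python
# Idealized peak bit rate from switching time: rate = 1 / switch_time.
def max_bitrate_bps(switch_time_s: float) -> float:
    return 1.0 / switch_time_s

print(f"{max_bitrate_bps(1e-12):.0e} bps")  # 1 ps optical switch -> 1e+12 bps (~1 Tbps)
print(f"{max_bitrate_bps(1e-10):.0e} bps")  # 100 ps electro-optical switch -> 1e+10 bps
```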
August 17th, 2004 at 6:49 PM
Re:Molecular Nanotechnology?
That was extremely informative; thank you, Mr. Bradbury. I have a question in regard to this. Is it feasible to assemble atoms or molecules using particle streams or waves based on smaller-than-atom particles, such as gamma rays? What would the limits be if one could consistently produce such rays and use them, or attempt to use them, for atomic-precision mechanosynthesis?
August 18th, 2004 at 9:17 AM
Re:Silly Wabbit, Trix are for kids…
Yes, I'm aware that optical switching *would* be useful, for Folding@Home, Nano@Home, and other types of problems that the large supercomputer clusters are now being devoted to. I even discuss the problems of latency and internode communication times a bit [1] with respect to the problem of solar system sized computers (Matrioshka Brains).
But that isn't what the scientists *claimed* it would be useful for. Fully optical N-cube-type grids are most likely some number of years in our future (you need one heck of a benefit to justify even a fraction of the investment that has been made in the semiconductor industry). IMO, the semiconductor industry will have to hit a wall, and it will have to "appear" that nanotech will have some problems continuing current trends, for optical computing to receive serious attention. Otherwise it seems likely to remain an area where huge investments are made only when people *need* the technologies yesterday (e.g. the WWW driving the requirement for WDM).
Robert
1. The LogP Model for Assessment of Parallel Computation
August 18th, 2004 at 9:36 AM
Re:Silly Wabbit, Trix are for kids…
I would agree that total optical switching (really routing) would be faster than electro-optical methods. But when one does a traceroute (or even browses the web) the delays for the most part are *not* due to either a lack of bandwidth (on the trunk lines) or the switching delays (at the routing points).
They are due (IMO) to (a) slow pipes in the "last mile" of the connection; or (b) poorly designed web pages that contain dozens or hundreds of images, requiring browsers to make dozens or hundreds of individual requests to the servers. In that case the apparent latency isn't due to my moderately slow last-mile DSL connection, the bandwidth of the trunk or last-mile lines, or the switching speeds at the routing points; it is instead due to the volume of stupid little requests that the web server has to handle.
Oracle had to solve the data-access-over-networks problem more than 17 years ago. Its first network data access protocols returned query results one row at a time, and the communications protocol latencies and overhead made this a very slow process. When it modified its network transfer protocols to support the return of arrays of row data, things significantly improved. It's too bad that much of the browser/web-server software development community hasn't developed and widely adopted similar solutions to this problem.
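A toy model shows why array fetch beat row-at-a-time transfer: total time is dominated by per-request round trips, not payload size. The 50 ms round-trip time and 0.01 ms per-row cost below are illustrative assumptions, not measurements:

```python
# Total fetch time = (number of round trips * latency) + per-row transfer cost.
def fetch_time_ms(rows: int, batch_size: int,
                  rtt_ms: float = 50.0, per_row_ms: float = 0.01) -> float:
    round_trips = -(-rows // batch_size)        # ceiling division
    return round_trips * rtt_ms + rows * per_row_ms

print(fetch_time_ms(1000, 1))    # one row per round trip: 50010.0 ms
print(fetch_time_ms(1000, 100))  # 100-row arrays: 510.0 ms, ~100x better
```

The same model explains why a page with hundreds of tiny image requests feels slow on any connection: the request count, not the bandwidth, sets the floor.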
Robert
August 18th, 2004 at 11:51 AM
Re:Silly Wabbit, Trix are for kids…
You're only talking about user response times, rather than raw data throughput, either individual or aggregate. Both are important elements of "speed". It's not fair to claim that something that speeds the network up in one sense does not speed it up at all… especially when much of the slowness comes from the end-to-end protocol, not the network.
The Web is not the Internet, and, at the rate we're going, there's a good chance that multimedia streams will use most of the bandwidth in a few years. The streaming protocols are largely unaffected by end-to-end delay.
Having faster trunks is a prerequisite for having faster "last mile" links. Many consumer access lines are artificially bandwidth-limited because of the cost of providing the bandwidth on the back end. Trunk bandwidth is a major expense for an ISP.
Delays, windowing, and protocol turnaround issues were thoroughly analyzed long before Oracle was ever founded. Every set of neophytes who invent a new networking protocol makes the same mistake… I remember spending a boatload of time explaining to people why their Novell networks didn't perform over satellite links. NETBIOS did the same thing. NFS version 3 is still a command-response protocol.
The Web people made a time-honored mistake, although, in their defense, they expected it to be used for monolithic text documents, where such things would have been less important. And, yes, some of the "Web services" people are in the process of making the same mistake again.
The Web people have developed solutions. HTTP 1.1 fixes a lot of this, and current clients and servers do pipeline requests. I suspect they don't do it very well, probably because they already get an adequate user experience without it.
Transmission delay is still a significant element of the delays you see in your traceroute output… you have to clock the last bit in before you can send the first bit out. It's shrinking, though, and I agree that it's not usually significant on high-speed long-distance trunks.
Routing is a special case of switching, except in marketing material. In marketing material, the word "switching" should usually, but not always, be read as "bridging".
August 19th, 2004 at 7:29 AM
Re:Molecular Nanotechnology?
I am not an expert in the topics most important to properly answer this question (it involves the dual wave/particle nature of electromagnetic radiation). I will make a couple of comments though.
The first comment has to do with the energy of the photons involved in UV and higher-frequency radiation. As Eric mentions in Nanosystems, UV photons (and X-ray and gamma-ray photons) have enough energy to break most covalent bonds. (This is the primary reason that exposure to these kinds of radiation is hazardous.) The solution to this (from Eric's perspective) is to coat nanomachinery in a UV-reflective metal (e.g. aluminium) of sufficient thickness (which in fact is pretty thin) so that any incident UV radiation is reflected away. For X-rays and gamma-rays the problem cannot easily be solved at the nanoscale. This means nanomachinery remains vulnerable to these types of radiation, which may be a good thing if active defenses are required against nanorobots running amok.
The problem of UV radiation causing atomic bond breakage is probably one of the reasons that semiconductor manufacturers are having a difficult time developing masks and resists that can shield or coat wafers during the manufacturing process. As the light being used moves to increasingly shorter UV wavelengths, the potential damage it may cause increases. (Some of the lasers being studied for this produce radiation in the 13-15 nm range, I believe — at those wavelengths the damage the photons can cause is quite significant.)
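The photon-energy argument is easy to quantify with E = hc/λ. The constants below are standard physics; the ~4 eV figure is a rough representative covalent bond energy (e.g. C-C), chosen here for illustration:

```python
# Photon energy vs. a typical covalent bond, showing why short wavelengths
# break bonds. E = h*c / wavelength.
H_EV_S = 4.1357e-15   # Planck constant, eV*s
C_M_S = 2.9979e8      # speed of light, m/s

def photon_energy_ev(wavelength_nm: float) -> float:
    return H_EV_S * C_M_S / (wavelength_nm * 1e-9)

BOND_EV = 4.0         # rough C-C covalent bond energy, for comparison
for nm in (400.0, 250.0, 13.5):  # visible, mid-UV, ~13.5 nm lithography
    e = photon_energy_ev(nm)
    verdict = "can break" if e > BOND_EV else "spares"
    print(f"{nm:6.1f} nm -> {e:6.1f} eV ({verdict} a ~4 eV bond)")
```

At ~13.5 nm a single photon carries on the order of 90 eV, more than twenty times a typical bond energy, which is why the damage at those wavelengths is "quite significant".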
The second comment has to do with the inability to focus particle or radiation streams into areas (volumes) small enough to manipulate things at the nanoscale. With radiation streams the problem is that it is simply very difficult to focus X-rays and Gamma-rays. [Very special hardware structures such as those in the Chandra X-ray telescope are required and even their effectiveness is limited.] With particle streams one has the problem that the particles (if similarly charged) repel each other (producing a focusing problem again). One can work around this with things like electron beams (and in fact most work focused on lithography in the 10-20nm range, even the nanoimprint lithography being done at Princeton, uses electron beams at the start of the process). The problem here is with parallelism (E-beams are slow) and a reduction in the costs of large scale manufacture (E-beam machines are expensive). It was thought ~10 years ago that these problems might be solved (Bell Labs was a heavy supporter of E-beam lithography) but to the best of my knowledge these efforts have not worked out.
Even so, it is useful to remember that most current manufacturing methods are "bulk" scale (even if the "bulk" one is dealing with may be 5-10 atoms in thickness). This is quite different from precision atomic bonding and structures that are atomically precise. For these one needs to look to chemistry, biochemistry (enzymes), and eventually mechanosynthesis (and perhaps self-assembly). Lithographic processes are going at things top-down while the other methods are working bottom-up. It should be kept in mind that the semiconductor industry (and many other manufacturing processes), even though it is dealing with raw source materials measured in cm, does in part depend on "self-assembly" — it is an essential aspect of the formation of any crystalline structure, such as the Si or GaAs boules used at the start of semiconductor manufacturing processes.
Robert
August 20th, 2004 at 4:31 AM
Re:Silly Wabbit, Trix are for kids…
I think we are in general agreement. In summary I would suggest that optical switching could be quite useful but perhaps not in the way or to the extent the original article might suggest. There will probably be many more situations like this as people try to sell their 'new' nanoscale 'inventions' without having a detailed understanding of how the real world works.
November 4th, 2004 at 6:57 PM
Re:Molecular Nanotechnology?
…I really have no clue what you people are talking about (trying to learn >. ) but you've got a baaad attitude, Yoda. I'm here trying to learn about where our future is heading.. and I sure as hell hope it's not headed in any direction remotely filled with the arrogance and pig-headed cockiness you possess. All these brilliant men and women are trying to come together as peers to help and support each other. If you aren't going to help the process, then you're only going to hurt it. Debates are cool, but there's no need to start slinging mud..