<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
		>
<channel>
	<title>Comments on: Nanotechnology Could Speed Internet 100x</title>
	<atom:link href="http://www.foresight.org/nanodot/?feed=rss2&#038;p=1599" rel="self" type="application/rss+xml" />
	<link>http://www.foresight.org/nanodot/?p=1599</link>
	<description>examining transformative technology</description>
	<lastBuildDate>Wed, 03 Apr 2013 18:23:47 +0000</lastBuildDate>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=3.0.4</generator>
	<item>
		<title>By: Kyt</title>
		<link>http://www.foresight.org/nanodot/?p=1599#comment-4501</link>
		<dc:creator>Kyt</dc:creator>
		<pubDate>Fri, 05 Nov 2004 01:57:55 +0000</pubDate>
		<guid isPermaLink="false">http://www.foresight.org/nanodot/?p=1599#comment-4501</guid>
		<description>&lt;p&gt;&lt;strong&gt;Re:Molecular Nanotechnology?&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;...I really have no clue what you people are talking about (trying to learn &gt;. ) but you got a baaad attitude Yoda. I&#039;m here trying to learn about where our future is heading.. and I sure as hell hope it&#039;s not headed in any direction remotely filled with the arrogance and pig-headed cockiness you possess. All these brilliant men and women are trying to come together as peers to help and support each other. If you aren&#039;t going to help the process, then you&#039;re only going to hurt it. Debates are cool, but there&#039;s no need to start slinging mud..&lt;/p&gt;

</description>
		<content:encoded><![CDATA[<p><strong>Re:Molecular Nanotechnology?</strong></p>
<p>&#8230;I really have no clue what you people are talking about (trying to learn &gt;. ) but you got a baaad attitude Yoda. I&#39;m here trying to learn about where our future is heading.. and I sure as hell hope it&#39;s not headed in any direction remotely filled with the arrogance and pig-headed cockiness you possess. All these brilliant men and women are trying to come together as peers to help and support each other. If you aren&#39;t going to help the process, then you&#39;re only going to hurt it. Debates are cool, but there&#39;s no need to start slinging mud..</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: RobertBradbury</title>
		<link>http://www.foresight.org/nanodot/?p=1599#comment-4512</link>
		<dc:creator>RobertBradbury</dc:creator>
		<pubDate>Fri, 20 Aug 2004 11:31:06 +0000</pubDate>
		<guid isPermaLink="false">http://www.foresight.org/nanodot/?p=1599#comment-4512</guid>
		<description>&lt;p&gt;&lt;strong&gt;Re:Silly Wabbit, Trix are for kids...&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;I think we are in general agreement. In summary I would suggest that optical switching could be quite useful but perhaps not in the way or to the extent the original article might suggest. There will probably be many more situations like this as people try to sell their &#039;new&#039; nanoscale &#039;inventions&#039; without having a detailed understanding of how the real world works.&lt;/p&gt;

</description>
		<content:encoded><![CDATA[<p><strong>Re:Silly Wabbit, Trix are for kids&#8230;</strong></p>
<p>I think we are in general agreement. In summary I would suggest that optical switching could be quite useful but perhaps not in the way or to the extent the original article might suggest. There will probably be many more situations like this as people try to sell their &#39;new&#39; nanoscale &#39;inventions&#39; without having a detailed understanding of how the real world works.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: RobertBradbury</title>
		<link>http://www.foresight.org/nanodot/?p=1599#comment-4505</link>
		<dc:creator>RobertBradbury</dc:creator>
		<pubDate>Thu, 19 Aug 2004 14:29:15 +0000</pubDate>
		<guid isPermaLink="false">http://www.foresight.org/nanodot/?p=1599#comment-4505</guid>
		<description>&lt;p&gt;&lt;strong&gt;Re:Molecular Nanotechnology?&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;I am not an expert in the topics most important to properly answer this question (it involves the dual wave/particle nature of electromagnetic radiation). I will make a couple of comments though.&lt;/p&gt;
&lt;p&gt;The first comment has to do with the energy of the photons involved in UV and higher frequency radiations. As Eric mentions in &lt;em&gt;Nanosystems&lt;/em&gt;, UV photons (and X-ray &amp; Gamma-ray photons) have enough energy to break most covalent bonds. (This is the primary reason that exposure to these kinds of radiation is hazardous.) The solution to this (from Eric&#039;s perspective) is to coat nanomachinery in a UV-reflective metal (e.g. aluminium) of sufficient thickness (which in fact is pretty thin) so that any incident UV radiation is reflected away. For X-rays and Gamma-rays the problem cannot easily be solved at the nanoscale. This may be a good thing, though, because it means nanomachinery remains vulnerable to these types of radiation should active defenses ever be required against nanorobots running amok.&lt;/p&gt;
&lt;p&gt;The problem of UV radiation causing atomic bond breakage is probably one of the reasons that semiconductor manufacturers are having a difficult time developing masks and resists that can shield or coat wafers during the manufacturing process. As the light being used moves to increasingly shorter UV wavelengths, the potential damage it may cause increases. (Some of the lasers being studied for this produce radiation in the 13-15nm range I believe -- at those wavelengths the damage the photons can cause is quite significant.)&lt;/p&gt;
&lt;p&gt;The second comment has to do with the inability to focus particle or radiation streams into areas (volumes) small enough to manipulate things at the nanoscale. With radiation streams the problem is that it is simply very difficult to focus X-rays and Gamma-rays. [Very special hardware structures such as those in the Chandra X-ray telescope are required and even their effectiveness is limited.] With particle streams one has the problem that the particles (if similarly charged) repel each other (producing a focusing problem again). One can work around this with things like electron beams (and in fact most work focused on lithography in the 10-20nm range, even the nanoimprint lithography being done at Princeton, uses electron beams at the start of the process). The problem here is with parallelism (E-beams are slow) and with reducing the costs of large-scale manufacture (E-beam machines are expensive). It was thought ~10 years ago that these problems might be solved (Bell Labs was a heavy supporter of E-beam lithography) but to the best of my knowledge these efforts have not worked out.&lt;/p&gt;
&lt;p&gt;Even so it is useful to remember that most of the current manufacturing methods are &quot;bulk&quot; scale (even if the &quot;bulk&quot; one is dealing with may be 5-10 atoms in thickness). This is quite different from atomically precise bonding and atomically precise structures. For these one needs to look to chemistry, biochemistry (enzymes) and eventually mechanosynthesis (and perhaps self-assembly). Lithographic processes are going at things top-down while the other methods are working bottom-up. It should be kept in mind that the semiconductor industry (and many other manufacturing processes), even though it is dealing with raw source materials measured in cm, does in part depend on &quot;self-assembly&quot; -- it is an essential aspect of the formation of any crystalline structure such as the Si or GaAs boules that are used at the start of semiconductor manufacturing processes.&lt;/p&gt;
&lt;p&gt;Robert&lt;/p&gt;

</description>
		<content:encoded><![CDATA[<p><strong>Re:Molecular Nanotechnology?</strong></p>
<p>I am not an expert in the topics most important to properly answer this question (it involves the dual wave/particle nature of electromagnetic radiation). I will make a couple of comments though.</p>
<p>The first comment has to do with the energy of the photons involved in UV and higher frequency radiations. As Eric mentions in <em>Nanosystems</em>, UV photons (and X-ray &amp; Gamma-ray photons) have enough energy to break most covalent bonds. (This is the primary reason that exposure to these kinds of radiation is hazardous.) The solution to this (from Eric&#39;s perspective) is to coat nanomachinery in a UV-reflective metal (e.g. aluminium) of sufficient thickness (which in fact is pretty thin) so that any incident UV radiation is reflected away. For X-rays and Gamma-rays the problem cannot easily be solved at the nanoscale. This may be a good thing, though, because it means nanomachinery remains vulnerable to these types of radiation should active defenses ever be required against nanorobots running amok.</p>
<p>The problem of UV radiation causing atomic bond breakage is probably one of the reasons that semiconductor manufacturers are having a difficult time developing masks and resists that can shield or coat wafers during the manufacturing process. As the light being used moves to increasingly shorter UV wavelengths, the potential damage it may cause increases. (Some of the lasers being studied for this produce radiation in the 13-15nm range I believe &#8212; at those wavelengths the damage the photons can cause is quite significant.)</p>
<p>The second comment has to do with the inability to focus particle or radiation streams into areas (volumes) small enough to manipulate things at the nanoscale. With radiation streams the problem is that it is simply very difficult to focus X-rays and Gamma-rays. [Very special hardware structures such as those in the Chandra X-ray telescope are required and even their effectiveness is limited.] With particle streams one has the problem that the particles (if similarly charged) repel each other (producing a focusing problem again). One can work around this with things like electron beams (and in fact most work focused on lithography in the 10-20nm range, even the nanoimprint lithography being done at Princeton, uses electron beams at the start of the process). The problem here is with parallelism (E-beams are slow) and with reducing the costs of large-scale manufacture (E-beam machines are expensive). It was thought ~10 years ago that these problems might be solved (Bell Labs was a heavy supporter of E-beam lithography) but to the best of my knowledge these efforts have not worked out.</p>
<p>Even so it is useful to remember that most of the current manufacturing methods are &quot;bulk&quot; scale (even if the &quot;bulk&quot; one is dealing with may be 5-10 atoms in thickness). This is quite different from atomically precise bonding and atomically precise structures. For these one needs to look to chemistry, biochemistry (enzymes) and eventually mechanosynthesis (and perhaps self-assembly). Lithographic processes are going at things top-down while the other methods are working bottom-up. It should be kept in mind that the semiconductor industry (and many other manufacturing processes), even though it is dealing with raw source materials measured in cm, does in part depend on &quot;self-assembly&quot; &#8212; it is an essential aspect of the formation of any crystalline structure such as the Si or GaAs boules that are used at the start of semiconductor manufacturing processes.</p>
<p>Robert</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: jbash</title>
		<link>http://www.foresight.org/nanodot/?p=1599#comment-4511</link>
		<dc:creator>jbash</dc:creator>
		<pubDate>Wed, 18 Aug 2004 18:51:51 +0000</pubDate>
		<guid isPermaLink="false">http://www.foresight.org/nanodot/?p=1599#comment-4511</guid>
		<description>&lt;p&gt;&lt;strong&gt;Re:Silly Wabbit, Trix are for kids...&lt;/strong&gt;&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;You&#039;re only talking about user response times, rather than raw data throughput, either individual or aggregate. Both are important elements of &quot;speed&quot;. It&#039;s not fair to claim that something that speeds the network up in one sense does not speed it up at all... especially when much of the slowness comes from the end-to-end protocol, not the network.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The Web is not the Internet, and, at the rate we&#039;re going, there&#039;s a good chance that multimedia streams will use most of the bandwidth in a few years. The streaming protocols are largely unaffected by end-to-end delay.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Having faster trunks is a prerequisite for having faster &quot;last mile&quot; links. Many consumer access lines are artificially bandwidth limited because of the cost of providing the bandwidth on the back end. Trunk bandwidth is a major expense for an ISP.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Delays, windowing, and protocol turnaround issues were thoroughly analyzed long before Oracle was ever founded. Every set of neophytes who invent a new networking protocol makes the same mistake... I remember spending a boatload of time explaining to people why their Novell networks didn&#039;t perform over satellite links. NETBIOS did the same thing. NFS version 3 is &lt;em&gt;still&lt;/em&gt; a command-response protocol.&lt;/p&gt;
&lt;p&gt;The Web people made a time-honored mistake, although, in their defense, they expected it to be used for monolithic text documents, where such things would have been less important. And, yes, some of the &quot;Web services&quot; people are in the process of making the same mistake again.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The Web people &lt;em&gt;have&lt;/em&gt; developed solutions. HTTP 1.1 fixes a lot of this, and current clients and servers &lt;em&gt;do&lt;/em&gt; pipeline requests. I suspect they don&#039;t do it very well, probably because they already get an adequate user experience without it.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Transmission delay is still a significant element of the delays you see in your traceroute output... you have to clock the last bit in before you can send the first bit out. It&#039;s shrinking, though, and I agree that it&#039;s not usually significant on high-speed long-distance trunks.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Routing is a special case of switching, except in marketing material. In marketing material, the word &quot;switching&quot; should usually, but not always, be read as &quot;bridging&quot;.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;

</description>
		<content:encoded><![CDATA[<p><strong>Re:Silly Wabbit, Trix are for kids&#8230;</strong></p>
<ol>
<li>
<p>You&#39;re only talking about user response times, rather than raw data throughput, either individual or aggregate. Both are important elements of &quot;speed&quot;. It&#39;s not fair to claim that something that speeds the network up in one sense does not speed it up at all&#8230; especially when much of the slowness comes from the end-to-end protocol, not the network.</p>
</li>
<li>
<p>The Web is not the Internet, and, at the rate we&#39;re going, there&#39;s a good chance that multimedia streams will use most of the bandwidth in a few years. The streaming protocols are largely unaffected by end-to-end delay.</p>
</li>
<li>
<p>Having faster trunks is a prerequisite for having faster &quot;last mile&quot; links. Many consumer access lines are artificially bandwidth limited because of the cost of providing the bandwidth on the back end. Trunk bandwidth is a major expense for an ISP.</p>
</li>
<li>
<p>Delays, windowing, and protocol turnaround issues were thoroughly analyzed long before Oracle was ever founded. Every set of neophytes who invent a new networking protocol makes the same mistake&#8230; I remember spending a boatload of time explaining to people why their Novell networks didn&#39;t perform over satellite links. NETBIOS did the same thing. NFS version 3 is <em>still</em> a command-response protocol.</p>
<p>The Web people made a time-honored mistake, although, in their defense, they expected it to be used for monolithic text documents, where such things would have been less important. And, yes, some of the &quot;Web services&quot; people are in the process of making the same mistake again.</p>
</li>
<li>
<p>The Web people <em>have</em> developed solutions. HTTP 1.1 fixes a lot of this, and current clients and servers <em>do</em> pipeline requests. I suspect they don&#39;t do it very well, probably because they already get an adequate user experience without it.</p>
</li>
<li>
<p>Transmission delay is still a significant element of the delays you see in your traceroute output&#8230; you have to clock the last bit in before you can send the first bit out. It&#39;s shrinking, though, and I agree that it&#39;s not usually significant on high-speed long-distance trunks.</p>
</li>
<li>
<p>Routing is a special case of switching, except in marketing material. In marketing material, the word &quot;switching&quot; should usually, but not always, be read as &quot;bridging&quot;.</p>
</li>
</ol>
]]></content:encoded>
	</item>
	<item>
		<title>By: RobertBradbury</title>
		<link>http://www.foresight.org/nanodot/?p=1599#comment-4510</link>
		<dc:creator>RobertBradbury</dc:creator>
		<pubDate>Wed, 18 Aug 2004 16:36:07 +0000</pubDate>
		<guid isPermaLink="false">http://www.foresight.org/nanodot/?p=1599#comment-4510</guid>
		<description>&lt;p&gt;&lt;strong&gt;Re:Silly Wabbit, Trix are for kids...&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;I would agree that total optical switching (really routing) would be faster than electro-optical methods. But when one does a traceroute (or even browses the web) the delays for the most part are *not* due to either a lack of bandwidth (on the trunk lines) or the switching delays (at the routing points).&lt;/p&gt;
&lt;p&gt;They are due (IMO) to (a) slow pipes in the &quot;last mile&quot; of the connection; or (b) poorly designed web pages that contain dozens or hundreds of images that require browsers to make dozens or hundreds of individual requests to the servers. In that case the apparent latency isn&#039;t due to my moderately slow last-mile DSL connection, the bandwidth of the trunk or last-mile lines, or the switching speeds at the routing points. It is instead due to the volume of stupid little requests that the web server has to handle.&lt;/p&gt;
&lt;p&gt;Oracle had to solve the data access over networks problem more than 17 years ago. Its first network data access protocols returned query results one row at a time, and the communications protocol latencies and overhead made this a very slow process. When it modified its network transfer protocols to support the return of arrays of row data, things significantly improved. It&#039;s too bad that much of the browser/web-server software development community hasn&#039;t developed and widely adopted similar solutions to this problem.&lt;/p&gt;
&lt;p&gt;Robert&lt;/p&gt;

</description>
		<content:encoded><![CDATA[<p><strong>Re:Silly Wabbit, Trix are for kids&#8230;</strong></p>
<p>I would agree that total optical switching (really routing) would be faster than electro-optical methods. But when one does a traceroute (or even browses the web) the delays for the most part are *not* due to either a lack of bandwidth (on the trunk lines) or the switching delays (at the routing points).</p>
<p>They are due (IMO) to (a) slow pipes in the &quot;last mile&quot; of the connection; or (b) poorly designed web pages that contain dozens or hundreds of images that require browsers to make dozens or hundreds of individual requests to the servers. In that case the apparent latency isn&#39;t due to my moderately slow last-mile DSL connection, the bandwidth of the trunk or last-mile lines, or the switching speeds at the routing points. It is instead due to the volume of stupid little requests that the web server has to handle.</p>
<p>Oracle had to solve the data access over networks problem more than 17 years ago. Its first network data access protocols returned query results one row at a time, and the communications protocol latencies and overhead made this a very slow process. When it modified its network transfer protocols to support the return of arrays of row data, things significantly improved. It&#39;s too bad that much of the browser/web-server software development community hasn&#39;t developed and widely adopted similar solutions to this problem.</p>
<p>Robert</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: RobertBradbury</title>
		<link>http://www.foresight.org/nanodot/?p=1599#comment-4508</link>
		<dc:creator>RobertBradbury</dc:creator>
		<pubDate>Wed, 18 Aug 2004 16:17:57 +0000</pubDate>
		<guid isPermaLink="false">http://www.foresight.org/nanodot/?p=1599#comment-4508</guid>
		<description>&lt;p&gt;&lt;strong&gt;Re:Silly Wabbit, Trix are for kids...&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Yes, I&#039;m aware that optical switching *would* be useful for Folding@Home, Nano@Home, and other types of problems that the large supercomputer clusters are now being devoted to. I even discuss the problems of latency and internode communication times a bit [1] with respect to the problem of solar system sized computers (Matrioshka Brains).&lt;/p&gt;
&lt;p&gt;But that isn&#039;t what the scientists *claimed* it would be useful for. Fully optical N-Cube type grids are most likely some number of years in our future (you need one heck of a benefit to justify even a fraction of the investment that has been made in the semiconductor industry). IMO, the semiconductor industry will have to hit a wall, and it will have to &quot;appear&quot; that nanotech will have some problems continuing current trends, before optical receives serious attention. Otherwise it seems likely to remain an area where huge investments will be made only when people *need* the technologies yesterday (e.g. the WWW driving the requirement for WDM).&lt;/p&gt;
&lt;p&gt;Robert&lt;/p&gt;
&lt;p&gt;1. &lt;a href=&quot;http://www.aeiveos.com/~bradbury/MatrioshkaBrains/LogP.html&quot;&gt;The LogP Model for Assessment of Parallel Computation&lt;/a&gt;&lt;/p&gt;

</description>
		<content:encoded><![CDATA[<p><strong>Re:Silly Wabbit, Trix are for kids&#8230;</strong></p>
<p>Yes, I&#39;m aware that optical switching *would* be useful for Folding@Home, Nano@Home, and other types of problems that the large supercomputer clusters are now being devoted to. I even discuss the problems of latency and internode communication times a bit [1] with respect to the problem of solar system sized computers (Matrioshka Brains).</p>
<p>But that isn&#39;t what the scientists *claimed* it would be useful for. Fully optical N-Cube type grids are most likely some number of years in our future (you need one heck of a benefit to justify even a fraction of the investment that has been made in the semiconductor industry). IMO, the semiconductor industry will have to hit a wall, and it will have to &quot;appear&quot; that nanotech will have some problems continuing current trends, before optical receives serious attention. Otherwise it seems likely to remain an area where huge investments will be made only when people *need* the technologies yesterday (e.g. the WWW driving the requirement for WDM).</p>
<p>Robert</p>
<p>1. <a href="http://www.aeiveos.com/~bradbury/MatrioshkaBrains/LogP.html">The LogP Model for Assessment of Parallel Computation</a></p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Anonymous Coward</title>
		<link>http://www.foresight.org/nanodot/?p=1599#comment-4504</link>
		<dc:creator>Anonymous Coward</dc:creator>
		<pubDate>Wed, 18 Aug 2004 01:49:53 +0000</pubDate>
		<guid isPermaLink="false">http://www.foresight.org/nanodot/?p=1599#comment-4504</guid>
		<description>&lt;p&gt;&lt;strong&gt;Re:Molecular Nanotechnology?&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;That was extremely informative, thank you Mr. Bradbury. I have a question in regard to this. Is it feasible to assemble atoms or molecules using particle streams or waves based on smaller-than-atom particles, such as gamma rays? What would the limits be if one could consistently produce such rays and use them, or attempt to use them, for atomically precise mechanosynthesis?&lt;/p&gt;

</description>
		<content:encoded><![CDATA[<p><strong>Re:Molecular Nanotechnology?</strong></p>
<p>That was extremely informative, thank you Mr. Bradbury. I have a question in regard to this. Is it feasible to assemble atoms or molecules using particle streams or waves based on smaller-than-atom particles, such as gamma rays? What would the limits be if one could consistently produce such rays and use them, or attempt to use them, for atomically precise mechanosynthesis?</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: jbash</title>
		<link>http://www.foresight.org/nanodot/?p=1599#comment-4509</link>
		<dc:creator>jbash</dc:creator>
		<pubDate>Tue, 17 Aug 2004 23:07:10 +0000</pubDate>
		<guid isPermaLink="false">http://www.foresight.org/nanodot/?p=1599#comment-4509</guid>
		<description>&lt;p&gt;&lt;strong&gt;Re:Silly Wabbit, Trix are for kids...&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Er, no. Switching time is not really related to end-to-end latency, and they&#039;re talking about bandwidth, not path delay.&lt;/p&gt;
&lt;p&gt;Sure, you can&#039;t reduce total latency below the speed-of-light delay, but you &lt;em&gt;can&lt;/em&gt; carry more total data on a fiber if you can switch and sense faster. If, as they claim, you can switch in a picosecond, you can run a data stream on the order of 1Tbps over a single fiber at a single wavelength. You can&#039;t switch that fast in an electro-optical device. You can do a certain amount of WDM instead, but that requires that you replicate a lot of the hardware for every wavelength, which is Not Cheap (tm).&lt;/p&gt;
&lt;p&gt;... but that number does sound like hype. Pulses spread, there are power limits, you&#039;d have to build reasonably complex all-optical switching and buffering systems, especially if you wanted to avoid massive replication of the switching processors. That means that, unless you got really clever, you&#039;d have to build VLSI out of this stuff, which means you may run into optical limitations on device dimensions, the speed of light delay within a chip could constrain your system design, the fabrication is likely to be hard, and so forth.&lt;/p&gt;

</description>
		<content:encoded><![CDATA[<p><strong>Re:Silly Wabbit, Trix are for kids&#8230;</strong></p>
<p>Er, no. Switching time is not really related to end-to-end latency, and they&#39;re talking about bandwidth, not path delay.</p>
<p>Sure, you can&#39;t reduce total latency below the speed-of-light delay, but you <em>can</em> carry more total data on a fiber if you can switch and sense faster. If, as they claim, you can switch in a picosecond, you can run a data stream on the order of 1Tbps over a single fiber at a single wavelength. You can&#39;t switch that fast in an electro-optical device. You can do a certain amount of WDM instead, but that requires that you replicate a lot of the hardware for every wavelength, which is Not Cheap &#8482;.</p>
<p>&#8230; but that number does sound like hype. Pulses spread, there are power limits, you&#39;d have to build reasonably complex all-optical switching and buffering systems, especially if you wanted to avoid massive replication of the switching processors. That means that, unless you got really clever, you&#39;d have to build VLSI out of this stuff, which means you may run into optical limitations on device dimensions, the speed of light delay within a chip could constrain your system design, the fabrication is likely to be hard, and so forth.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: BuffYoda</title>
		<link>http://www.foresight.org/nanodot/?p=1599#comment-4507</link>
		<dc:creator>BuffYoda</dc:creator>
		<pubDate>Tue, 17 Aug 2004 17:28:25 +0000</pubDate>
		<guid isPermaLink="false">http://www.foresight.org/nanodot/?p=1599#comment-4507</guid>
		<description>&lt;p&gt;&lt;strong&gt;Re:Silly Wabbit, Trix are for kids...&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;The benefit is probably vastly overestimated, but an all-optical switching mechanism would surely prove useful. Imagine a warehouse filled with a 3D grid of N x N x N optical computers, where N = 100 (used for protein folding or rational protein design, for example; or even computer-based design and simulation of nanostructures). Each node needs to be able to communicate with the 999,999 other nodes. Even if each node communicates directly only with those in its immediate vicinity (some algorithms do parallelize well like this), that&#039;s still a lot of connections. And at this short scale of distance, the speed of light isn&#039;t the limiting factor; the speed of switching is.&lt;/p&gt;

</description>
		<content:encoded><![CDATA[<p><strong>Re:Silly Wabbit, Trix are for kids&#8230;</strong></p>
<p>The benefit is probably vastly overestimated, but an all-optical switching mechanism would surely prove useful. Imagine a warehouse filled with a 3D grid of N x N x N optical computers, where N = 100 (used for protein folding or rational protein design, for example; or even computer-based design and simulation of nanostructures). Each node needs to be able to communicate with the 999,999 other nodes. Even if each node communicates directly only with those in its immediate vicinity (some algorithms do parallelize well like this), that&#39;s still a lot of connections. And at this short scale of distance, the speed of light isn&#39;t the limiting factor; the speed of switching is.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: RobertBradbury</title>
		<link>http://www.foresight.org/nanodot/?p=1599#comment-4503</link>
		<dc:creator>RobertBradbury</dc:creator>
		<pubDate>Tue, 17 Aug 2004 15:17:43 +0000</pubDate>
		<guid isPermaLink="false">http://www.foresight.org/nanodot/?p=1599#comment-4503</guid>
		<description>&lt;p&gt;&lt;strong&gt;Re:Molecular Nanotechnology?&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Actually, MNT is about molecular arrangements allowed by physics more than by &quot;chemistry&quot;. The question of whether or not &quot;classical&quot; MNT (which is usually based on the concept of mechanosynthesis) is or is not &quot;classical&quot; chemistry is one major source of the differences of opinion between Drexler and MNT proponents and MNT naysayers (e.g. Smalley, Church, etc.).&lt;br /&gt;
&lt;br /&gt;
If you consider the proposals for MNT based on mechanosynthesis (not strictly necessary as you can get MNT based on non-mechanosynthetic assembly methods, particularly classical biochemistry) then one is usually dealing with an SPM/AFM-like device manipulating specific small molecules with device &quot;tips&quot; that are able to control specific assembly operations/reactions. To get to this point you have to solve a lot of positioning/reliability questions that have some similarity to what the semiconductor industry has had to go through. If you misalign a mask over a chip or dope the chip with the wrong atoms during chip manufacture you get chips that don&#039;t work. The same applies to MNT. One has to ask questions like &quot;Is my device positioned over the XYZ tip bin?&quot;, &quot;Is my device positioned over the ZYX reactant source?&quot;, &quot;Is my device positioned over the location where I want the reaction to occur?&quot;, &quot;Did the reaction take place successfully?&quot;, etc. You *either* have to assume that you are positioning your manipulators precisely to subatomic accuracy (something that cannot easily and reliably be done at this time) or have some type of feedback (such as taking a &quot;snapshot&quot;) that allows you to verify and/or manipulate the assembly process.&lt;br /&gt;
&lt;br /&gt;
So the articles on manipulating molecules with light and/or taking pictures of where they are at femtosecond rates would seem to qualify as being important to at least some nanoassembly strategies. Light switching improvements are only important if you believe we are going to have fully optical computers in the near future and such computers will be required for the analysis or management of nanoassembly processes. (One could make the argument that one is going to need much faster computers for the analysis of images taken every few femtoseconds but this is a stretch IMO).&lt;br /&gt;
&lt;br /&gt;
Robert&lt;/p&gt;

</description>
		<content:encoded><![CDATA[<p><strong>Re:Molecular Nanotechnology?</strong></p>
<p>Actually, MNT is about molecular arrangements allowed by physics more than by &quot;chemistry&quot;. The question of whether or not &quot;classical&quot; MNT (which is usually based on the concept of mechanosynthesis) is or is not &quot;classical&quot; chemistry is one major source of the differences of opinion between Drexler and MNT proponents and MNT naysayers (e.g. Smalley, Church, etc.).</p>
<p>If you consider the proposals for MNT based on mechanosynthesis (not strictly necessary as you can get MNT based on non-mechanosynthetic assembly methods, particularly classical biochemistry) then one is usually dealing with an SPM/AFM-like device manipulating specific small molecules with device &quot;tips&quot; that are able to control specific assembly operations/reactions. To get to this point you have to solve a lot of positioning/reliability questions that have some similarity to what the semiconductor industry has had to go through. If you misalign a mask over a chip or dope the chip with the wrong atoms during chip manufacture you get chips that don&#39;t work. The same applies to MNT. One has to ask questions like &quot;Is my device positioned over the XYZ tip bin?&quot;, &quot;Is my device positioned over the ZYX reactant source?&quot;, &quot;Is my device positioned over the location where I want the reaction to occur?&quot;, &quot;Did the reaction take place successfully?&quot;, etc. You *either* have to assume that you are positioning your manipulators precisely to subatomic accuracy (something that cannot easily and reliably be done at this time) or have some type of feedback (such as taking a &quot;snapshot&quot;) that allows you to verify and/or manipulate the assembly process.</p>
<p>So the articles on manipulating molecules with light and/or taking pictures of where they are at femtosecond rates would seem to qualify as being important to at least some nanoassembly strategies. Light switching improvements are only important if you believe we are going to have fully optical computers in the near future and such computers will be required for the analysis or management of nanoassembly processes. (One could make the argument that one is going to need much faster computers for the analysis of images taken every few femtoseconds but this is a stretch IMO).</p>
<p>Robert</p>
]]></content:encoded>
	</item>
</channel>
</rss>