Nanotech report argues “uploaded” AI, nanobots implausible
from the it's-not-a-nightmare-if-you're-awake dept.
Politech gives a pointer to the final report of an NSF conference on "Societal Implications of Nanoscience and Nanotechnology". Politech's editor said: "This is an important report, though the Viola Vogel comments are from someone who is in part a nanotechnology critic. For instance, Vogel calls simulating a human mind on a computer a 'nightmarish scenario,' though it seems to me to be an inevitable and unobjectionable step."



May 1st, 2001 at 1:02 AM
Questioning nanodogma
I don't think that someone who has helped set up a PhD program in nanotechnology at the University of Washington is a person I would call a "nanotechnology critic". Most of Vogel's comments are about what we can do to prepare people educationally for the needs of the nanotech world. The only critical parts of her article include:
In case you wonder who she specifically has in mind, at least as far as 'optimists' are concerned:
She goes on to give her perspective on why uploading, if it is possible at all, is orders of magnitude more complex than most of its boosters would seem to assume.
She then discusses another problem area, one recently commented on by Richard Smalley and reported on by this site: the questionable abilities of nanobots. I won't quote it here; go read the report. It's well worth it, and not just for Vogel's portion.
Why, then, is this an important report while Vogel's comments are merely those of a "critic"? Should we be True Believers, swallowing Singularity dogma without question? I continue to maintain that this sort of attitude is reminiscent of religious or magical thought. Objections should be dealt with on their merits, not dismissed with hand-waving about how the objectors don't get it, or are just easily shocked.
To some extent, we are all like Renaissance thinkers arguing about whether humans can fly to the moon – we don't have enough data or knowledge of future capabilities to argue convincingly one way or the other. But, no matter how romantic the notion, we aren't getting there in a swan-drawn chariot. Things frequently turn out to be more difficult than we wish them to be.
May 1st, 2001 at 8:05 AM
Re:Questioning nanodogma
Should we be True Believers, swallowing Singularity dogma without question?
That's a good question. There are plenty of interesting questions one could ask: how likely it is to happen, what forces make it more or less likely, whether it will be a good thing or a bad thing for humanity, and whether and how we can influence its degree of goodness. Eli Yudkowsky thinks about this stuff, but at least in public fora like this one, there is little other discussion of these matters.
Until there are real intelligent machines, the only force propelling the Singularity will be human ambition. While the NASDAQ and the computer industry were growing like weeds, the Singularity idea sounded more credible. But the business cycle is a pendulum swinging between greed and fear, and now that we're in the fear phase, greed is on hold. So the economic slowdown is also a slowdown in progress toward the Singularity.
Most Singularity discussions regard machine intelligence as the sole prerequisite for a hard take-off. Computers today are still organized around the idea of waiting for commands from humans. There won't be a real Singularity until machines develop volition. The closest thing to machine volition in my day-to-day experience is probably a cron job.
Volitional machines can be dumb and smart machines can lack volition, so these are really two distinct properties. Much energy has been spent in making machines smart, but outside of a few specific niches, there hasn't been much effort to develop a deep and useful notion of machine volition.
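As a rough illustration of the cron-job point above (a process that acts on its own schedule rather than waiting for a command), here is a minimal sketch in Python; the paths, threshold, and cleanup policy are invented purely for the example:

```python
import glob
import os
import time

THRESHOLD_BYTES = 1 * 1024**3   # hypothetical limit: act once /tmp exceeds 1 GiB
CHECK_INTERVAL_S = 3600         # wake up once an hour, like a cron job

def tmp_usage_bytes():
    """Total size of regular files directly under /tmp."""
    total = 0
    for path in glob.glob("/tmp/*"):
        try:
            if os.path.isfile(path):
                total += os.path.getsize(path)
        except OSError:
            pass
    return total

# The loop decides to act (delete week-old files) on its own schedule, with no
# human issuing the command -- though the "goal" is still one a human gave it.
while True:
    if tmp_usage_bytes() > THRESHOLD_BYTES:
        for path in glob.glob("/tmp/*"):
            try:
                if os.path.isfile(path) and time.time() - os.path.getmtime(path) > 7 * 86400:
                    os.remove(path)   # clean up files older than a week
            except OSError:
                pass
    time.sleep(CHECK_INTERVAL_S)
```

Scheduled, unattended action of this kind is still a long way from volition in any interesting sense, which is rather the point.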
May 1st, 2001 at 8:50 AM
Vogel is simply wrong.
I've reviewed her comments, and they demonstrate, at least to me, that she doesn't have sufficient knowledge about the topics (uploading or nanobots) to comment on them. For example, the comparison of nanobots to viruses is a stupid one. Anyone who has looked at any of Eric's older work, the first couple of Foresight conference proceedings, or the papers by Robert Freitas about nanobots or Nanomedicine knows that the correct comparison is nanobots to bacteria.
I'm in the process of writing a detailed, referenced response to Vogel's commentary. When it's done, I will post the URL here. I'm fairly certain it will demonstrate that she hasn't thought about the topics in sufficient detail to "pass judgement" on them. On the other hand, some of Kurzweil's comments indicate that he, too, hasn't thought about this in depth (you simply don't have enough wireless bandwidth to stream the brain's internal data rate to an external computer).
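A rough back-of-envelope sketch of that bandwidth objection, with every figure an assumed ballpark rather than a measurement, might look like this:

```python
# Illustrative back-of-envelope estimate of the "brain bandwidth" problem.
# All figures below are assumed ballpark values, not data from the report.

neurons = 1e11              # assumed number of neurons in a human brain
synapses_per_neuron = 1e3   # assumed average synapses per neuron (some estimates run to 1e4)
avg_firing_rate_hz = 10     # assumed average firing rate per synapse
bits_per_spike = 1          # assume one bit conveyed per spike (a generous simplification)

internal_rate_bps = neurons * synapses_per_neuron * avg_firing_rate_hz * bits_per_spike
wireless_link_bps = 10e6    # assumed 10 Mbit/s wireless link, optimistic for today

print(f"Internal signalling rate: ~{internal_rate_bps:.1e} bits/s")
print(f"Wireless link:            ~{wireless_link_bps:.1e} bits/s")
print(f"Shortfall factor:         ~{internal_rate_bps / wireless_link_bps:.1e}x")
```

Under these assumptions the internal rate comes out around 10^15 bits/s, roughly eight orders of magnitude beyond the wireless link, which is the gap the parenthetical remark is pointing at.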
What we need is a lot less speculation and hand waving and a lot more serious exploration and work on what kinds of nanobots would be required to accomplish uploading.
May 1st, 2001 at 12:02 PM
Re:Questioning nanodogma
Interesting. Volition, I would think, would have to be a part of self-awareness. Self-awareness (at least the biological kind) comes from a feedback system such as the central nervous system, which allows our brains to define where we leave off and where the rest of the world begins. So my question is: can a computer that has no sensory feedback become self-aware?
May 1st, 2001 at 5:55 PM
Re:Questioning nanodogma
Volition, I would think, would have to be a part of self-awareness [a distinction of self/other]
Not necessarily. Volition implies autonomous action to advance some goal, possibly unrelated to the boundaries of a self. A thermostat regulates the temperature of a house with minimal human intervention, but nowhere in the thermostat is there any internal representation of "thermostat" or "not-thermostat" or "boundary-of-thermostat". If we are to believe the PR of people like Gandhi and Mother Teresa, they pursued goals without regard for the distinction between themselves and others.
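For what it's worth, here is a minimal sketch of such a goal-pursuing loop; the sensor, actuator, and setpoint are stand-ins invented for illustration. It acts autonomously toward a goal, yet contains no representation of a self:

```python
import random
import time

TARGET_TEMP = 20.0   # assumed setpoint in degrees C
HYSTERESIS = 0.5     # dead band to avoid rapid on/off switching

def read_temperature():
    """Stand-in for a real sensor: returns the current room temperature."""
    return 20.0 + random.uniform(-2.0, 2.0)

def set_heater(on):
    """Stand-in for a real actuator: switches the heater on or off."""
    print("heater", "ON" if on else "OFF")

# The control loop pursues a goal (keep the room near TARGET_TEMP) entirely on
# its own, but nothing in it models "self", "other", or a boundary between them.
heater_on = False
for _ in range(10):                      # a real thermostat would loop forever
    temp = read_temperature()
    if temp < TARGET_TEMP - HYSTERESIS and not heater_on:
        heater_on = True
        set_heater(True)
    elif temp > TARGET_TEMP + HYSTERESIS and heater_on:
        heater_on = False
        set_heater(False)
    time.sleep(0.1)
```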
May 2nd, 2001 at 4:08 AM
About uploading
I tend to agree with Vogel's opinion about the feasibility of uploading in the near future. In fact, I think that we will remain mostly biological for the next 200-300 years. My reasons are based on what little we really do know about the neurological mechanisms of memory storage and consciousness, and on what that little tells us.
We know, at this point, that memory involves at least three different mechanisms: long-term potentiation, growth of dendrites, and gene expression within the neuron nuclei themselves. How these mechanisms relate to each other is currently unknown. We also know that memory storage (and, therefore, consciousness) is primarily CHEMICAL in nature, not electronic. There is a growing consensus among researchers that long-term memories are encoded in the dendritic connections between neurons as well as in the chemical type of the synapses between them (there are several different chemical types). How this relates to the exact information storage is totally unknown at this time. The point is that we really don't know that much about it, and what little we do know tells us that it is completely different from current or proposed computer architecture. Two other issues are that our neurophysiology is redundant in nature and that timing is very critical.
However, there are other reasons why uploading is not going to happen soon. Stem-cell regeneration, and biotech in general, is advancing very rapidly right now. If you do not believe me, spend the next few months following events on the two following biotech sites: http://www.biospace.com and http://www.bio.com. In fact, there are now two (yes, two of them!) different methods for reversing brain aging in mice. Check out the aforementioned websites to get the latest scoop on this. My point is that stem-cell-based technology is going to make it a lot easier to regenerate our bodies and brains over the next 20-30 years than any kind of uploading will. Stem cells and other forms of biotech are going to make it a lot easier and cheaper to remain biological than any kind of nanocomputing is going to make us uploads.
Remaining "mostly" biological does not mean that we have to stay with the same biochemistry that we have now. Indeed, there will be many developments in synthetic, novel forms of biochemistry that will give us capabilities we can only dream of right now (like living underwater or in environments radically different from that of Earth).
If you want to find out more information about neuroscience, biotech, and long-term survival (i.e. immortality), I recommend the PERIASTRON newsletter, compiled and written by a long-time cryonics member, Thomas Donaldson. You can find Periastron archives at http://www.cryonet.org, as well as information about subscribing.
May 2nd, 2001 at 4:56 AM
Re:Questioning nanodogma
True, but Gandhi and Mother Teresa had to base their goals on self-awareness: they knew what it was to feel pain and hunger, and they took the logical leap that other humans felt the same way; in other words, they could relate to others. Again, all of this is based on self-awareness. A thermostat has a very narrowly defined ability: it can either heat or cool depending on the temperature around it. There are lots of computers like that; web servers, for one, dish out web pages by the thousands every day without human intervention (unless it's NT, of course).