
Kurzweil vs. Dertouzos debate future technology

from the who-won? dept.
Joseph Sterlynne writes "MIT's Technology Review has printed an exchange between Ray Kurzweil and Michael Dertouzos regarding the latter's recent article on reasonable expectations of technological progress." Kurzweil: "As for nanotechnology-based self-replication, that's further out, but the consensus in that community is this will be feasible in the 2020s, if not sooner." Dertouzos: "We have no basis today to assert that machine intelligence will or will not be achieved…Attention-seizing, outlandish ideas are easy and fun to concoct."

One Response to “Kurzweil vs. Dertouzos debate future technology”

  1. Jon Taylor Says:

    Uninformed, ostrich-head-in-the-sand pomposity

    Taylor:

    I find the level of obvious personal fear and denial, the overt arrogance and pomposity, the extremely uninformed level of basic science knowledge and the mealy-mouthed appeals to humanist irrationality in Dertouzos' reply to be quite offensive. So the reader will perhaps forgive me if I adopt a less than polite tone when tearing this dangerous blather apart. I am really sick and tired of how much airplay this dreck gets in the name of "balanced debate". This is NOT a debate; it is science and rational extrapolation of past trends through the present and into the future.

    People like Dertouzos are unfortunately quite common these days among the opponents of the _real_ scientists and rational thinkers in this debate, and their denial-laden, mystical and profoundly irrational "arguments" MUST be stomped down with as much force as is necessary to clearly drive them out of the arena of serious debate. The most challenging situation that the human race as a whole has ever or probably will ever face is rapidly approaching, and we need exploration of ideas without letting our human fears and limitations of imagination get in the way. With that in mind, here we go:

    Dertouzos: In my column, I observed that we have been incapable of judging where technologies are headed,

    Taylor: This is absurd. We are quite capable of judging this, especially in the short term, simply by taking the conservative stance of projecting past trends forward into the future. Almost by definition, any assumed deviation from this type of projection must be justified, not the projection itself, which should be taken as a conservative default baseline.
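Taylor's "conservative default baseline" amounts to simple exponential extrapolation. A minimal sketch of that arithmetic, using assumed Moore's-law numbers (a ~42-million-transistor chip in 2000 and an 18-month doubling period; neither figure is taken from the exchange):

```python
# Baseline trend projection: extrapolate an observed exponential forward.
# The starting count and doubling period are illustrative assumptions.

def project(value_now: float, doubling_years: float, years_ahead: float) -> float:
    """Extrapolate an exponential trend years_ahead into the future."""
    return value_now * 2 ** (years_ahead / doubling_years)

# Projected transistors per chip, 20 years out from an assumed 42e6 in 2000.
print(f"{project(42e6, 1.5, 20):.2e}")
```

Any forecast that departs from this curve is what, on Taylor's account, carries the burden of justification.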

    Dertouzos: hence we should not relinquish a new technology, based strictly on reason. Ray agrees with my conclusion, but for a different reason: He sees technology growing exponentially, thereby offering us the opportunity to alleviate human distress and hasten future economic gains.

    Taylor: The potential for relieving human suffering, quickly and cheaply, which is offered by these technologies (MNT in particular) is so incredibly great that I cannot conceive of a defensible moral stance against their immediate development and widespread use.

    Dertouzos: From his perspective, my point is "irrelevant," and my views on the future of technology are "skeptical."

    Taylor: Ray was just being polite. Not me. Your points – ALL of them – are *grossly* uninformed and yet presented with a really irritating and condescending pomposity, irrelevant to the point of offensiveness, laden with subjective personal value judgements which are presented as if they were somehow evidence, and in general total crap. This isn't a debate; it is you projecting your rather obvious fears of the coming changes out onto the world. I've seen feminist diatribes which were defended better than your case.

    Dertouzos: Let's punch through to the underlying issues, which are vital, for they point at a fundamental and all-too-often ignored relationship between technology and humanity.

    Taylor: The question is whether the "full-on MNT and superhuman AI will happen and soon" stance is going to be proven out. Your attempts at deflection below are an obvious sign of extreme psychological discomfort and projection on your part, and there's simply no place for that in a debate this important. The only important aspects of "the relationship between technology and humanity", as far as this debate is concerned, have to do with human-generated (i.e., market and political) pressures toward and away from the point at which the technologies in question have been developed.

    Dertouzos: Ray's exponential-growth argument is half the story:

    Taylor: It is the whole story, for it implies a gigantic pressure of historical force and momentum which has resulted in the past in an exponential curve of technological growth. Again, it is on YOU to show what type of force can be brought to bear over the next 50-100 years to almost completely counter this tidal wave of market momentum, AND the basic human competitive urge to dominate via accumulation of material goods, AND the moral need to relieve suffering in the third world, etc., etc. Trust me, you won't be able to even come close.

    Dertouzos: No doubt, the number of transistors on a chip has grown and will continue to grow for a while. But transistors and the systems made with them are used by people. And that's where exponential change stops!

    Taylor: Does it stop with drug use, which is now for better or for worse an integral part of every human society on earth? Does it stop with the internet turning its users into cells in a hive mind? Does it stop with progressively more and more intrusive biomedical engineering? Transistors and microprocessor technology are enablers of all of those technologies, and they will NOT stop growing until people are immortal and eternally happy. That may very well be the end of the exponential growth of technology, but for sure it won't happen before everything around us has changed beyond recognition. We'll almost certainly have mature nanotech around the same time as, or even a bit before, we run out of needed technological modifications to the human organism.

    Dertouzos: Has word-processing software, running on millions of transistors, empowered humans to contribute better writings than Socrates, Descartes or Lao Tzu?

    Taylor: What the HELL does the relative quality of literature have to do with whether or not market pressures will continue their historical growth patterns, or whether various engineering problem domains will or will not be surmounted?? THOSE are the ONLY issues in question here, not warm-and-fuzzy humanist crap. Besides, I have a feeling that you and a lot of other mystical humanists are going to be eating some serious crow in about 15 years or so, when a strong AI with at least several times the human IQ designs literature based on intimate knowledge of the human nervous system and how to stimulate it – literature which, upon being read, will evoke feelings so intense, and content so pertinent and profound, that NOTHING ever written by any human could possibly compare. I find your childlike faith in the transcendent creative capacity of the human brain quite amusing.

    Dertouzos: Technologies have undergone dramatic change in the last few centuries.

    Taylor: Entire technologies have appeared, lived their lives and died out to make way for new technologies. "Dramatic" is a rather meager choice of words here.

    Dertouzos: But people's basic needs for food, shelter, nurturing, procreation and survival have not changed in thousands of years.

    Taylor: Sure, but the secondary psychological mechanisms which are learned by the maturing human organism in response to the physical and social environmental context present are almost infinitely malleable, and it is THOSE "needs" and "desires" which form the organism's interface to the world. People still want mates today, but they can hunt for them on the internet, for example.

    Dertouzos: Nor has the rapid growth of technology altered love, hate, spirituality or the building and destruction of human relationships.

    Taylor: ALL of those you listed have in fact been altered DRAMATICALLY – at the secondary psychological level, which is the only one that counts here. Love? There are more isolated sociopaths today than there ever have been, because they don't die when kicked out of the tribe anymore, they just move to a new town and start exploiting people again. Hate? When one man's insane rage can result in many tens of deaths of innocents due to automatic gunfire? Spirituality? Hell, most people these days live out their lives as spiritually empty shells and never even know what they missed! Building and destruction of human relationships??? Boy it's been a while since you were single, right? |-> Post-traumatic stress disorder is the norm among generation X singles in most urban areas these days, and stories of horrific childhood abuse are astonishingly common. Cynicism, bitterness and resignation are the order of the day. Even between the last generation and the current one, there is a HUGE gap in basic aspects of social interaction. Please come down out of your ivory tower and find out about the way the world really is today.

    Dertouzos: Granted, when we are in the frying pan, surrounded by the sizzling oil of rapidly changing technologies, we feel that everything around us is accelerating.

    Taylor: This acceleration is not put forth by Kurzweil from his own narrow personal perspective; it is based on established facts about human psychology with regard to linear vs. exponential projection. Our projective mechanisms evolved in the stone age, when the curve of social and technological change was so flat it basically WAS linear, to a close enough approximation.

    Dertouzos: But, from the longer range perspective of human history and evolution, change is far more gradual. The novelty of our modern tools is counterbalanced by the constancy of our ancient needs.

    Taylor: The use of "change" in this blatantly vague, wishy-washy manner is entirely inconsistent with the foundations of this debate. You don't get to change the terms around like that. We are specifically discussing the rate of technological growth here, which is established historical FACT and is not open to debate unless you are a historian.

    Dertouzos: As a result, technological growth, regardless of its magnitude, does not automatically empower us.

    Taylor: Who said it did?? Who said that whether it did or didn't on the whole has any bearing whatsoever on whether even a few genuine breakthroughs here and there won't *still* result in the development of full-on MNT? More deflection. All the humanity in the world won't stop this technology from arriving, soon and with all kinds of angels and demons along for the ride.

    Dertouzos: It does so only when it matches our ability to use it for human purposes.

    Taylor: Human purposes? You mean like mate competition via status, with status being defined as control of material wealth and expensive ornamenting objects? The same "human purpose" which is THE primary driving factor behind the exponential growth curve of technology since civilization got started? The same human purpose which is the primary source of irrationality and violent urges? Human purposes are what scare me the most right now!!!

    Dertouzos: And that doesn't happen as often as we'd like.

    Taylor: It is obvious here that what you mean is "it doesn't happen in the _ways_ _I_ would like". The people who make sports cars or first-run next generation handheld computing devices are satisfied, I have a feeling. And they and their high-end market kin are perfectly capable of driving technological growth through to mature MNT and beyond.

    Dertouzos: Just think of the growing millions of AIDS cases in Africa, beyond our control.

    Taylor: What a messed-up bit of evidence to BADLY misuse for your arguments. The problem of third-world AIDS is a problem of first-world political will to spend the money needed to do whatever it takes to clean up the mess. It could be done. Even going beyond basic issues such as safe-sex education and access to condoms: the expensive drug cocktails which keep the disease of most first-world AIDS patients mostly in remission are not available at economical prices in the third world, because the pharmaceutical multinationals want to protect their grossly inflated first-world price margins. What sort of fantasyland do you live in, anyway?

    Dertouzos: Or, in the industrial world, ask yourself whether we are truly better off surrounded by hordes of complex digital devices that force us to serve them rather than the other way around.

    Taylor: Speak for yourself, please. I and most people around me most emphatically do NOT consider ourselves to be servants of our digital devices. Granted, I live and work in Silicon Valley, but I know many people outside the bubble here and they don't really feel that way either! Sure, they get frustrated at badly-designed technology which is a pain to deal with, but that's an issue of human interface optimization.

    Dertouzos: Our humanity meets technology in other ways, too: In forecasting the future of technology, Ray laments that most people use "linear thinking" that builds on existing patterns, thereby missing the big "nonlinear" ideas that are the true drivers of change. Once again, this is only half the story: In the last three decades, as I witnessed the new ideas and the 50-some startups that arose from the MIT Laboratory for Computer Science, I observed a pattern: Every successful technological innovation is the result of two simultaneous forces – a controlled insanity needed to break away from the stranglehold of current reason and ideas, and a disciplined assessment of potential human utility, to filter out the truly absurd.

    Taylor: This is so inflated and pompous that it begs to be pricked with a pin like an overinflated balloon. How can you possibly consider the extremely insular and narrow perspective of ONE department at ONE university, dealing with ONE small subdivisional slice of ONE academic discipline, to be representative of ANYTHING in general? Out here in The Valley(tm), I have seen hundreds of wildly varying types of startups go through their lifecycles, and I can tell you that the notion, implicit in your writing, that more than a VERY few technology start-ups end up doing anything innovative is absurd. Most startups are designed either as a sales pitch to be bought out or as a huge art portfolio for the engineers employed there. The vast bulk of all technological innovation happens in pure research labs – huge companies like IBM and HP, academic labs such as the Media Lab (on occasion, anyway) or in military think tanks. The output of these labs is productized in varying ways, but the rubber meets the road in the labs.

    Dertouzos: Focusing only on the wild part is not enough: Without a check, it often leads to exhibitionistic thinking, calculated to shock.

    Taylor: Describing atomic energy in general and atomic explosives in particular to someone 200 years ago would certainly have shocked them, to put it quite mildly. That has ZERO bearing on the probability that atomic energy would eventually be discovered and put to use in weapons development, which in hindsight we can see is not too far from 100%. Atomic energy was bound to be discovered, and once discovered was bound to be used in weapons development. That's a natural consequence of human nature, and so is the development of MNT, which is driven by market and other social pressures. It WILL happen, and soon. Deal with it.

    Dertouzos: Wild ideas can be great. But I draw a hard line when such ideas are paraded in front of a lay population as inevitable, or even likely.

    Taylor: Oh really? Well, *I* draw a MUCH harder line than you do when prominent "academics" – I use the term loosely here in your case, since you obviously haven't even bothered to read the basic literature in the fields of science relevant to the debate – stand up in public and tell people not to take the most important event in the history of humanity seriously even as a potentiality!!! What you are doing is EXTREMELY socially irresponsible, in my view. The potential consequences of mature nanotech being loosed on the world in twenty to thirty years or so without any prior serious debate are so large that we MUST take this issue as seriously as we can, as long as it is POSSIBLE. Look at how much attention and resources are being put into near earth asteroid tracking, for example. Do you seriously think that the odds of a killer asteroid hitting the earth in the next couple hundred years are greater than the odds of humanity developing mature nanotech, biotech and strong AI??

    Dertouzos: That is the case with much of the futurology in today's media, because of the high value we all place on entertainment.

    Taylor: Forget about the media. The important debate is now taking place among a community of prominent, activist thinkers (and increasingly by the military, which tells you volumes about the true feasibility of nanotechnology) who unanimously agree that the technical feasibility argument is over. Nanotech is *quite* feasible, and frankly anyone who seriously questioned this after '92 or so when Drexler's "Nanosystems" came out in print is either too lazy to do the proper reading or in severe psychological distress and must deny the feasibility of nanotech in order to maintain their sanity. And yes, I have met quite a few people who get full-on panic attacks when confronted with this subject!!

    Dertouzos: With all the talk about intelligent agents, most people think they can go buy them in the corner drugstore.

    Taylor: What a ridiculous assertion. Most people have never heard of intelligent agents in the first place.

    Dertouzos: Ray, too, brings up his experience with speech translation to demonstrate computer intelligence. The Lab for Computer Science is delightfully full of Victor Zue's celebrated systems that can understand spoken English, Spanish and Mandarin, as long as the context is restricted, for example to let you ask about the weather, or to book an airline flight. Does that make them intelligent? No.

    Taylor: Define "intelligence" in purely computational terms. You can't. The functional restrictions are purely an issue of raw computing power, bandwidth, and storage for neural nets to evolve as our brains do. We should be blowing past the human brain's capacity for storage and functional adaptation in about nine years or so, quite possibly sooner. We'll have solved the problems you mention well before then, though. I guess you don't take Hans Moravec seriously? I mean, why should you? He's only the leading roboticist in the world!
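The kind of crossover estimate Taylor is gesturing at can be sketched in a few lines. Every figure below is an assumption for illustration (Moravec-style estimates of brain-equivalent compute vary by orders of magnitude), not a number from the original post:

```python
import math

BRAIN_OPS_PER_SEC = 1e16     # assumed brain-equivalent compute (Moravec-style guess)
MACHINE_OPS_PER_SEC = 1e13   # assumed machine compute at time of writing
DOUBLING_PERIOD_YEARS = 1.5  # assumed Moore's-law doubling period

# Years until machine capacity matches the assumed brain figure.
years_to_parity = math.log2(BRAIN_OPS_PER_SEC / MACHINE_OPS_PER_SEC) * DOUBLING_PERIOD_YEARS
print(f"{years_to_parity:.1f} years")
```

With these particular assumptions the crossover lands around fifteen years out; Taylor's "nine years or so" implies a larger starting machine figure or a faster doubling period.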

    Dertouzos: Conventionally, "intelligence" is centered on our ability to reason, even imperfectly, using common sense.

    Taylor: There is NO such thing as intelligence, only adaptive algorithms our brains have learned and the metarepresentational collection objects into which they are bound. There is no such thing as universal common sense grammars, either. "Common sense is the collection of prejudices acquired by age 18" – Albert Einstein.

    Dertouzos: If we dub as intelligent, often for marketing or wishful-thinking purposes, every technological advance that mimics a tiny corner of human behavior, we will be distorting our language and exaggerating the virtues of our technology.

    Taylor: Pure mysticism. It won't even be that difficult to simulate a human brain once we get past the REAL challenges, which are getting the raw computing power and storage densities and accurately reverse-engineering the invariant functional domains of the brain. Ten years tops, quite likely well before then. Read some of the newest neurobiology literature. The underlying architecture of the human mind is finally yielding its secrets to fMRI scans and functional analysis, and the picture that is appearing is just not that complex and definitely subject to formal analysis, modeling and simulation.

    The human brain is a gigantic array of malleable nodes (unallocated neurons and larger association-sets with attribute tags), a set of between five and seven domain-specific reasoning modules which encode propositional calculi relevant to human needs (social interaction, ballistics simulation, formal logic, hazard management, etc), several intermediate buffers which hold, manipulate, and construct chains of associated patterns and basically define the conscious and unconscious cognitive domains, and stored memories which have retrieval indexes inlined into the structure of the memory database itself.

    All of this is trained into adaptive patterns by genetically hardwired reward-response pattern associations and their resultant downstream metarepresentation patterns, and the rest of the symbolic abstraction inherent in the object-representational formalism used in the human brain comes from sensory input and derived formalisms of increasing complexity (well, increasing in some people anyway |->).

    This is only the cortex, of course – the limbic brain architecture is much less well known and is suspected of being extremely convoluted, due to its much greater age and the resultant higher degree of functional adaptation and optimization inherent in it. Luckily, the limbic brain is not involved in cognition (the domain of "intelligence") but in motivation, emotion, social interactivity and affective state generation.

    The reptilian brain is even more ancient, but it is so functionally basic by this point in evolutionary history that it is nothing more than a simple coprocessor, in charge of maintaining the physical plant operations, so to speak. Actually the cortex is now thought to also be a coprocessor, with the true master seat of top-level lifecycle programming being driven by the limbic midbrain.

    But all we are interested in modeling for AI purposes is the cortex's functional domain structure – the limbic motivations and affective value colorations should be easy to model in an external database and inject into the neural net simulation program. We already know how to train neural nets this way. And if we do end up needing more complete reverse-engineered details of the limbic control architecture, I'm quite certain we'll get it down in at most another couple of years after the cortex.

    This stuff just ain't that hard (not *too* easy though |->), now that we have the necessary tools (computing power and fMRI brain scanners) to do and learn what we need to. As always, the primary impediment to humans doing whatever they can conceive of lies with our tools – our technology. So strong AI will almost certainly be here within ten to fifteen years.
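Taylor's claim that "we already know how to train neural nets this way" can be illustrated with a toy reward-driven learner: a single linear unit nudged toward actions that earn reward. This is purely illustrative; no claim is made that it resembles actual cortical or limbic learning:

```python
import random

random.seed(0)
weights = [0.0, 0.0]

def act(inputs):
    """Fire (1) if the weighted input sum crosses a fixed threshold."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) > 0.5 else 0

# Target behavior: fire only when both inputs are on (logical AND).
examples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]

for _ in range(200):
    inputs, desired = random.choice(examples)
    reward = 1 if act(inputs) == desired else -1  # reward-response signal
    if reward < 0:  # adjust weights only when punished
        direction = 1 if desired == 1 else -1
        weights = [w + 0.1 * direction * x for w, x in zip(weights, inputs)]

print([act(x) for x, _ in examples])
```

After training, the unit reproduces the target pattern: a hardwired reward signal shaping an initially blank set of connection strengths, which is the mechanism Taylor's sketch of the cortex leans on.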

    Dertouzos: We have no basis today to assert that machine intelligence will or will not be achieved.

    Taylor: This is ridiculous. Of course it will be, and this is provable as an absolute! We know it is possible to simulate the operation of a human brain, because that's what human brains do!!! The human brain is an existence proof of the physical possibility of a collection of atoms being arranged in such a way as to produce an entity which can pass the Turing test. Now consider this: what are the odds that our brains are based on the most absolutely optimal physical computing substrate and/or functional design architecture? From what we know of evolutionary history and neuroanatomy, the *reverse* is likely: that our neurology is *extremely* inefficient, and most ways humans would think of to re-architect it once the basic principles are known will almost certainly be more efficient than arrays of cell-based analog switches. We are facing the possibility of AIs with tens of thousands of times our intelligence. If that happens, all bets are off. And it could happen within my lifetime.

    Dertouzos: Stating that it will go one way or the other is to assert a belief, which is fine, as long as we say so.

    Taylor: It most certainly is not, and we have no problem stating that. No one is stating true absolutes here – it is POSSIBLE that something will happen to prevent us from growing any further, technologically. All of the scenarios I can think of for this to be realistic and 100% comprehensive across every single human on earth are considerably more frightening than strong nanotech, biotech and AI all rolled into one, and they should be for anyone who seriously considers the subject. How about a central computer (or cluster) monitoring every thought of every human on the planet via high-bandwidth wireless cortical implants for antisocial impulses? All humans confined to virtual reality worlds 24/7, where they can do no _real_ harm? Highly detailed, AI-directed lobotomy-type operations to remove most of "humanity" from our brains? Massively destructive (nuclear, bio, chem, nano) religious warfare which sweeps the globe, sending us back into a pretechnological dark ages? Can YOU think of a really happy scenario which stops our technological growth dead in its tracks?

    Dertouzos: Does this mean that machine intelligence will never be achieved? Certainly not. Does it mean that it will be achieved? Certainly not. All it means is that we don't know – an exciting proposition that motivates us to go find out.

    Taylor: I find it beyond comprehension that someone in your position is so behind on the literature. It certainly speaks volumes about what activities you choose to devote the bulk of your time to, hm?

    Dertouzos: Attention-seizing, outlandish ideas are easy and fun to concoct.

    Taylor: So easy that it took Richard Feynman, the most brilliant (yes, much more so than Einstein) physicist of the 20th century, to conceive of it (MNT) initially? So easy that it took K. Eric Drexler decades of research in near obscurity and a frankly ridiculous amount of cross-science research integration in order to convince people that the same basic molecular machinery-based mechanisms that all life on this planet is designed around are applicable in a broader engineering sense? Those of us who truly understand the implications of what is about to happen to the human race are more than a little disgusted at the calcification of intellect and the extreme (and sometimes self-serving) narrow-mindedness of a hell of a lot of the scientific community as a whole. We've lost forty years of potential discussion about the implications of nanotech, because it was such a "wild" idea that people refused to take it seriously or preferred not to think about the implications! Maybe our species doesn't deserve to live.

    Dertouzos: Far more difficult is to pick future directions that are likely. My preferred way for doing this, which has served me well, though not flawlessly, for the last 30 years, is this:

    Taylor: OK, we'll play it your way for now.

    Dertouzos: Put in a salad bowl the wildest, most forward-thinking technological ideas that you can imagine. (This is the craziness part.)

    Taylor: OK, I'll throw in mature nanotech and strong AI – MUCH stronger than human intelligence.

    Dertouzos: Then add your best sense of what will be useful to people. (That's the rational part.)

    Taylor: OK, how about the end of all disease and human immortality for starters? That's pretty useful, I would think. Now how about being able to drop a "seed" the size of a golf ball on the ground and have it use a small extremely high density stored power core to begin digesting the soil around it, build and deploy "leaves" (solar power collectors) and then proceed to manufacture dwellings, irrigation equipment, basic appliances and even clear the land for you? Oh yeah, and when all of this is finished, it makes you a couple dozen more seeds! That sort of capability would be possible with fully mature nanotech, and I think that a few (billion) hardscrabble peasant farmers in the third world would find stuff like this very useful, don't you? Even a simple solar collector "nanoplant" would be an enormous help to poor, isolated rural areas. Come up with ANYTHING to match that, I challenge you.

    Dertouzos: Start mixing the salad. If you are lucky, something will pop up that begins to qualify on both counts.

    Taylor: He he he. More like "manna pours down from the heavens" in this case. As I said, MNT will result in such an incredible potential for improving the human condition that I can think of no possible moral argument against actively promoting it as a very realistic development goal.

    Dertouzos: Grab it and run with it, since the best way to forecast the future is to build it. This forecasting approach combines "nonlinear" ideas with the "linear" notion of human utility, and with a hopeful dab of serendipity.

    Taylor: There are tens of thousands of people all over the world who are doing exactly that. Are you genuinely unaware that nanotech is happening NOW, all over the place?? Tour wires? HP's molecular gate arrays? Scanning probe lithography? Fullerene wiring? Self-assembly of DNA-based 3D structures? Protein engineering? Quantum dots? If not, then why do you consider yourself in any way qualified to say what is and is not a "wild" idea? The nerve!

    Dertouzos: Ray observes that technology is a double-edged sword. I agree, but I prefer to think of it as an axe that can be used to build a house or chop the head off an adversary, depending on intentions. The good news is that since the angels and the devils are inside us, rather than within the axe, the ratio of good to evil uses of a technology is the same as the ratio of good to evil people who use that technology…

    Taylor: What an astonishingly naive viewpoint. If someone deploys a "gray goo" nanovirus which lives off sunlight and atmospheric oxygen and eats the entire crust of the earth for raw materials, the fact that most people are good doesn't mean a damn thing. The more powerful a technology is, the fewer psychos have to be around to kill more and more people, until eventually (with MNT, probably) one person will be able to kill EVERYONE. Now, do you agree that there are currently at least several tens of thousands of people alive on the earth who are mentally ill enough that they'd gladly release a gray goo nanoreplicator if they had the chance, knowing that it would kill them and every other life form on the planet? From where I sit, that's a ratio of good to evil *potential* which approaches unity as close as makes no difference. What do we do about this? Hm?
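The doubling arithmetic behind the gray-goo worry is easy to make concrete. The doubling time and masses below are hypothetical assumptions chosen only to show the shape of the calculation, not estimates from the post:

```python
import math

DOUBLING_TIME_HOURS = 1.0   # assumed replication time per generation
REPLICATOR_MASS_KG = 1e-15  # assumed mass of a single nanoreplicator
BIOMASS_KG = 1e15           # rough order of Earth's total biomass

# Number of doublings needed for one replicator to consume the biomass.
doublings = math.log2(BIOMASS_KG / REPLICATOR_MASS_KG)
print(f"{doublings:.0f} doublings, about {doublings * DOUBLING_TIME_HOURS / 24:.1f} days")
```

Roughly a hundred doublings, a few days at one doubling per hour, which is why the replication capacity rather than raw destructiveness is the feature Taylor keeps pointing at.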

    Dertouzos: which stays pretty constant through the ages. Technological progress will not automatically cause us to be engulfed by evil, as some people fear.

    Taylor: This is laughable. Read what I wrote above, and then think about all the easier ways there are to kill millions with replicator-based technologies of any kind (biotech, nanotech, AI+robotics). The replication capacity is what makes the new technologies fundamentally different from anything that has come before – except, of course, life itself. Read "Dome" by Steve Perry if you get the chance, and then think about all the ways in which the scenario in that book is _optimistic_.

    Dertouzos: But for the same reason, potentially harmful uses of technology will always be near us, and we will need to deal with them. I agree with Ray's suggestions that we do so via ethical guidelines, regulatory overviews, immune response and computer-assisted surveillance. These, however, are partial remedies, rooted in reason, which has repeatedly let us down in assessing future technological directions. We need to go further.

    Taylor: Ah, some refreshing rationality.

    Dertouzos: As human beings, we have a rational, logical dimension, but also a physical, an emotional and a spiritual one. We are not fully human unless we exercise all of these capabilities in concert, as we have done throughout the millennia. To rely entirely on reason is to ascribe omniscience to a few ounces of meat, tucked inside the skull bones of antlike creatures roaming a small corner of an infinite universe – hardly a rational proposition! To live in this increasingly complex, awesome and marvelous world that surrounds us, which we barely understand, we need to marshal everything we've got that makes us human.

    Taylor: Damn, there it went. We need to transcend 95% of our humanity quickly if we are to have any hope of living together in groups without fear. Humanism is nice when people do not have the destructive capacity of gods at their fingertips. If we give ourselves the powers of gods, we must shed our humanity and become gods also, or we will kill each other for sure.

    Dertouzos: This brings us back to the point of my column, which is also the main theme of this discussion: When we marvel at the exponential growth of an emerging technology, we must keep in mind the constancy of the human beings who will use it.

    Taylor: No matter how comforting it may be for you to cling to this disturbingly naive humanistic viewpoint, the fact is that humans are irrational, savage animals, and we are about to put ourselves to the ultimate test. Will we survive? Personally, I plan to be at the forefront, one of the first to get mature (enough) nanotech to build a spaceship "ark" of some kind for myself and my loved ones. We will then leave earth and probably the solar system as well, and come back in ten thousand years when we know it is safe to do so. This planet is about to become way too dangerous to be trapped on.

    Dertouzos: When we forecast a likely future direction, we need to balance the excitement of imaginative "nonlinear" ideas with their potential human utility.

    Taylor: I should hope that the astonishingly large potential utility of biotech, nanotech and strong AI/robotics would be taken as a given.

    Dertouzos: And when we are trying to cope with the potential harm of a new technology, we should use all our human capabilities to form our judgment.

    Taylor: More "hiding in humanism". Hopefully you avoidants won't be the death of us all.

    Dertouzos: To render technology useful, we must blend it with humanity. This process will serve us best if, alongside our most promising technologies, we bring our full humanity, augmenting our rational powers with our feelings, our actions and our faith. We cannot do this by reason alone!

    Taylor: How very. What scares me the most is that mystical tripe like this will become VERY attractive to the masses who don't understand what is going on around them, violent religious movements will arise, and the world will dissolve into a seething mass of religious warfare. We (the transhumanists) will win any such wars, of course, but we'd sure hate to have to kill an awful lot of people unnecessarily, just because their crippled, savage, primitive human minds were poisoned by irrational mysticism like yours before they had a chance to be free. The ONLY thing about humanity that is worth keeping is love, and even that might be my crippled, primitive human mind talking. Personally, I say we let the AIs work it out for us – they won't have our limitations and dangerous urges.

    I urge you to take this letter as I intend it – a slap across the face, saying 'Wake up! This isn't a game, this isn't a media op or a fundraising opportunity! Your kids' and grandkids' lives are at stake here! The WHOLE WORLD AND ALL OF HUMANITY are at stake here!!! The "wild" ideas are right as rain!'. And for Pete's sake, _LEARN_ about the details of the technologies you so blithely dismiss as fantasy speculation! You should be embarrassed to have posted such drivel as made up the bulk of your reply to Ray Kurzweil without doing your homework first, Mr. MIT Computer Science lab head Ph.D 'philosopher'! No wonder all the good ones go to Stanford or Berkeley these days.

    Deeply disgusted,
    Jon Taylor
    taylorj@ggi-project.org
