
Tiptoe or dash to the future?

Over at Overcoming Bias, Robin Hanson wonders whether we should go fast or slow with tech development as we move toward a level of development (a solar-system-wide or interstellar civilization) at which we are reasonably unlikely to be wiped out in a single incident.

He bases his analysis on how likely we are to stumble (or be otherwise wiped out) along the way.

I’d personally reject that as a valid concern.  We don’t have a clue what, if anything, is actually going to wipe us out.  If you really wonder how what we think now will look 1000 years from now, consider what the medieval philosophers were worrying about 1000 years ago.  Yep, we’re that clueless.

A better way to look at the problem is to compare what it was like to live in brave (fast-advancing) vs cowardly (slow-advancing) times.  The brave times (e.g. just a century ago) were optimistic times, when people were full of promise and possibilities.  The cowardly times were despondent and depressed.

Shakespeare put it like this:

Cowards die many times before their deaths; the valiant never taste of death but once.

None but the brave deserve nanotechnology.

12 Responses to “Tiptoe or dash to the future?”

  1. Robin Hanson Says:

    Yes, for individuals, being valiant signals better personal qualities than being shy does. But that doesn’t make it a better global strategy for humanity.

  2. mitchell porter Says:

    I am just shaking my head at your attitude. You “personally reject [human extinction through high technology] as a valid concern”, because “we don’t have a clue what, if anything, is actually going to wipe us out”. How can you say that, knowing what you know? Is this the guy who dreamed up utility fog, talking? We may not know exactly what *will* happen, but we do know that new forms of doomsday weapon will be possible, and that they will be capable of being designed and manufactured anywhere in the world. The capacity for countermeasures will advance along with the destructive potential, but the risks are very real. Moreover, most of humanity has never heard of this stuff and wants nothing to do with it, but “brave” or not, “deserving” or not, they live in the same world as the people who are developing it and will live with (and perhaps die from) the consequences.

  3. J. Storrs Hall Says:

    Mitchell: remember that if we DON’T take a fast development route, we’re condemning billions of people to suffering and death who might otherwise not suffer and die. Likely including ourselves. If we have it in our power to develop nanotech (an open question to be sure), we cannot avoid affecting the whole human race deeply, even if what we do is nothing.

    Robin: Eudaimonia, in the sense of Aristotle’s virtue-based ethics, is a distillation of evolutionarily-gained lessons (both bio and cultural). We can think of virtuous character as comprising those things that have tended to work when conscious analysis and prediction failed. I’m not nearly so sure as you seem to imply that we have suddenly arrived at a point in history where hubris, signaling, and self-deception have vanished and pure ratiocination is the best we can do.

  4. flashgordon Says:

    I’ve been arguing with Chris Phoenix and then Eric Drexler that we should allow free expansion out to space to overcome the problems that come from nanomanufacturing; they’ve completely ignored me (which is why I started trying to talk to Eric Drexler, who’s completely ignored me as well). I’ve posted this stuff at crntalk, a Yahoo message board, where nobody seems to have any thoughts.

  5. James Gentile Says:

    Exactly: we may die if we do develop AI and nanorobotics, but if we don’t develop these things, then… well, we’re all dead for certain. Personally, I’ll take my chances with nanorobotics and AI. Now, if we could just get some government, corporation or billionaire to donate more than the pocket change they have left over from throwing money away on garbage like climate simulations…

  6. Toads Says:

    The assumption from Robin Hanson is that altering the pace of technological change is an option. It is not. A semiconductor or software corporation that slows down development will merely be outcompeted by a competitor that can get to faster/cheaper sooner.

  7. Gary Miloglav Says:

    Flash: Fear not. Millions of people share your desire to migrate into space. It will happen sooner rather than later, and I WILL be a part of it, no matter how old I get.
    Mr. Hall: I agree wholeheartedly: we must follow a fast development path. Yes, we may stumble along the way; humans do that. But to hold back and go slow only means we are yielding our future to someone else’s wishes.
    Going slow and taking a very deliberate, safe path, where all possible dangers are carefully eliminated and every decision is made by endless committees, is not how humans “explored and conquered” the Earth. Yes, in the process we have created numerous problems and wars, but we also created endless opportunities, higher standards of living and longer lifespans (with better health) than previously existed. “None but the brave deserve nanotechnology,” but I’m sure it will be shared even with those who sought to delay or deny it.

  8. Valkyrie Ice Says:

    Dash to the future? Hell no. We need to strap a dozen JATO units to the back of the car, and punch it.

    To Robin, K. Eric, Mike, and all the other ultra-cautious “let’s go slow and make a trillion safeguards” people, let me say something I have wanted to say to you all since I first read Engines.

    Slow will kill us.

    Our only hope is to ride the rocket. There are 6 billion people on the planet, and no two of them share exactly the same ethics and morality. One man’s evil is another man’s good. You want to take it slow, make ten million safety checks, make sure that every contingency has been planned for? Well, I pity you when Al Qaeda perfects its nanobot that will kill you for not being a Shi’ite Muslim, or its superbug that will slaughter every non-Arab.

    Simply put, ethics sounds nice. Morality sounds nice. Caution sounds nice. And it will get us all killed by the people whose ethics, morality, and sense of what’s right and wrong are totally different from yours. Banning stem cell research didn’t stop research in the rest of the world. Banning cloning didn’t stop it either.

    You say “let’s all be friends and play nice together”, and they will say “We will bury you.”

    Technology doesn’t wait on consensus. It doesn’t wait for everyone to agree on whether it should or shouldn’t be created. K. Eric had one thing right: you cannot put the genie back in the bottle. The genie is out, and it will serve whoever masters it first. At least if we become its master, there’s a better-than-even chance it will benefit the entire human race; but if we hem and haw and worry about how best to control the genie, you can bet someone else will beat us to it. Japan? Not so worried; giant mecha would be cool. China? Not so certain there. The Taliban? We can kiss our asses goodbye.

    We can’t afford to debate, we can’t afford to slow down, we can’t afford to do anything but full speed ahead and damn the torpedoes.

    The sole consolation I have is the knowledge that for all your worry and caution, for twenty years now I’ve watched the makers and creators of the technology ignore you.

    It may kill us, yes. We may grey-goo ourselves, build Skynet, turn our planet into a new asteroid field, or any number of other horrible things. But it’s the only hope we have of getting out of childhood alive. We’ve been walking a razor’s edge between heaven and hell since Einstein thought up E=mc², and we have had a sword hanging over our heads for all of our existence. Once Drexler proposed a means to create the salvation of our race, it should have been the sole project of all of science to make it happen.

    We’re racing down an ever steeper slope to a future beyond imagining. Between us and that future are a thousand pitfalls, terrorists, Luddites, and crazies of all descriptions. If we slow down for even a fraction of a second, they will tear us from the sled and rip us to pieces. Speed is the only sane course. Some of us are going to die along the way. There’s nothing we can do about that, but the sooner we reach the light at the end of the path, the more of us will survive to enjoy our victory.

  9. mitchell porter Says:

    Robin’s post set this up as a choice between go-fast and go-slow. And I see we have a few more voices in favor of going fast. However, Robin’s post was specifically about economic growth rates. That level of description is highly abstracted from specific future threats, and an economic go-slow strategy – solely for the sake of reducing the risk of high-tech extinction – would be a *very* indirect way of addressing the risks. Needlessly indirect, I would think, and perhaps even counterproductive, for reasons Robin himself listed. So I am not here to advocate that.

    However, there are people here advocating a go-fast, gung-ho attitude specifically towards the technologies which carry an extinction risk, such as artificial intelligence and advanced nanotechnology. That is what I want to address.

    First of all: you should advocate taking an extreme risk only if it is absolutely necessary. I think that the psychological impetus for some of these heroics is, first and foremost, the desire to avoid death, and second, the desire to enjoy the riches of a transhuman world. Well, if what you want is to overcome the ageing process, you can advocate cryonics and SENS-style life extension research specifically. Those goals, and the scientific and technological progress needed to achieve them, are largely separable from the obviously dangerous stuff, and you should make the effort to locate the boundary.

    Second, I wonder if the doomsday outcome is emotionally real to some of the people who talk of being brave, etc. I think many are young, and still feeling the first shock of realizing that they have been brought into being by a species and a society which doesn’t care that that life will later be taken away from them; or perhaps even just optimistically presuming that happy endings must prevail. It certainly took me a few years to admit that the doomsday aspects of advanced technology are not just a thing to acknowledge in passing, in between singing hosannahs to the blissful posthuman future, and urging everyone to cheer up and seize the day. The real turning point came when I saw that the simplest application for what I was working on (or at least thinking about) was in fact a doomsday weapon.

    It is hard to attach justifiable quantitative probabilities to the possible outcomes. But it is at least plausible that the acquisition of a general capacity to make artificial life or artificial intelligence is *overwhelmingly* likely to be lethal to its creators. It may simply be the end, for the majority of intelligent species who get there. No “light at the end of the path” at all, just death. It’s so much easier to just make something out of control than it is to make something powerful yet benign.

    Now in fact we all appear to be pretty powerless anyway. It’s a rare lucky (or unlucky) person who gets to really make a difference in the big picture. So perhaps there’s no point in saying go fast or go slow, and no need to pour cold water on the advocates of go-fast. But I hope at least they understand, *really* understand, that the things we are talking about are capable of killing us all.

  10. Robin Hanson Says:

    Josh, yes of course if we have no idea what to prepare for then pundit foresight isn’t much help, so we might as well dash forward. But that does seem an odd position for the head of the Foresight Institute to take; why not quit trying to see ahead and just wait for whatever the future brings you? Not sure where you get me saying self-deception has vanished.

    Mitchell, yes we might consider fast econ growth but slowing on especially dangerous techs, *if* those techs weren’t yet on the critical path to getting to our desired future.

  11. Valkyrie Ice Says:

    @ Mitchell:

    I probably understand it better than the average debater on the subject. I wish that going slow was an option. It isn’t, and hasn’t ever been.

    I’ve been involved in this debate for nearly 15 years on sci.nanotech, Slashdot, and Nanodot, as well as avidly pursuing information on the state of the art in most forms of technology. I know far too well how dangerous our technology is.

    I am also well aware of the forces arrayed against our success, from the danger of a hostile AI, to the creation of ultra lethal biological agents, to factions of humanity which will protest against advancement for whatever reasons.

    I know far too well how many corpses will line the road to the future. But I also know that EVERY road leads to the same place. We will reach the Singularity one way or another. It’s been too late to turn back since Democritus first proposed that things were made up of atoms. VR is going to drive us there, genetic engineering is going to drive us there, nanotech is going to drive us there, robotics is going to drive us there, and last but not least, human nature itself will drive us there. We have no choice about our destination. The ONLY choice we have is how many bodies will line the road.

    Our economy is suffering because we are feeling the first shockwaves of the radical tsunami of change that is coming. The massive amount of information on the internet, and the advances we have made in computing, made this inevitable. Ultrafast loan approvals, computer-assisted leveraging, high-speed stock trading: all of these contributed. The dinosaur corporations are desperately clinging to life by every means at hand, including trying to secure a government lifeline. Yet they fail to recognize that everything they do is exposed to the public via the internet, and they are counting on the complacency of the American people, who for the first time are seeing their actions in real time, not weeks or months after the fact. More and more people are suffering because the old economic system is collapsing, but the new economy of abundance has not yet arrived.

    What is the solution? Trying to prop up the doomed system of capitalism, wasting trillions, and ultimately failing? Or investing in the future by providing social services to tide our displaced over during this transition? Which is more economically viable: re-educating our displaced, providing them with the necessities of life, and equipping them to live in a quite different future, or trying to drag us back to a past that never was and cannot be sustained? Which is better? Which leads to less suffering and fewer deaths? The only humanitarian choice seems obvious, yet it is not the one currently being pursued.

    We ARE dragging our feet, and it is STILL not slowing the breakneck rate at which we are advancing. Every day we refuse to charge ahead into the future we CAN foresee, more people suffer, more resources are wasted, and those who were once so far behind us catch up further, and may yet surpass us. We should have sunk all of our resources into research 30 years ago, in hopes of shortening this chaotic and ever more dangerous period we are entering. While I do not doubt that humanity stands a very good chance of surviving the next twenty years, there will be MILLIONS of us who will not be so lucky. Millions of us will die because we are dragging our feet trying to slow down the future.

    So which is better? Letting people die now because of what we MIGHT do? Or pursuing the course which tries to save lives now and hope we can foresee and head off the most dangerous of potential futures? Which is more important? The reality of suffering now, or the what ifs of our worst nightmares?

    The internet is already disrupting our world, destroying the tyrannies we have lived under, eroding the barriers of xenophobia, racism, bigotry and intolerance. It will only intensify as we move toward a unified world nervous system and information exchange. VR is going to accelerate this beyond what anyone dares to believe, and shatter our notions of what it means to be human by giving us a taste of what it means to be whatever we wish to be. We are already at the beginning stage of directed mechanosynthesis via DNA and protein, and may achieve true Drexlerian nanotech anywhere from several years to a decade or more from now. We no longer have the luxury of time. Robots are advancing at breakneck speed, with the sophistication of animals already, and the equivalent of human physical ability not that far distant. Cybernetics are becoming more realistic and more integrated with biology every day, with artificial hands that respond to direct neural control and provide touch feedback. Biological breakthroughs in growing organs via printing will make transplants obsolete in just a few years. DIY biologists are making new discoveries in their garages.

    WE HAVE NO MORE TIME FOR THE LUXURY OF GOING SLOW.

    It’s too late. And the only reason that everyone doesn’t know this already is that they have chosen to ignore the forest to look at the trees.

  12. J. Storrs Hall Says:

    Robin: That was tongue-in-cheek. You are of course one of the best pointers-out that our major modes of thought involve signaling, self-deception (because we think we’re not doing things for signaling purposes), and so forth.

    Our concept of the virtues has grown up in the same evolutionary environment as the self-deceptive modes of thought. Self-deceptive modes have certain obvious drawbacks — they keep us from seeing the truth. There may thus have been pressure for a countervailing mechanism: some reason for acting that caused us to do the right thing anyway. The virtues are a possibility.
