
KurzweilAI.Net: new site discusses Singularity

from the tomorrowland dept.
Bryan Hall writes "Raymond Kurzweil, author of 'The Age of Spiritual Machines', has a new website showcasing the ideas of leading visionaries and breakthrough web technologies. The site is hosted by Ramona, a real-time virtual hostess, using natural language processing, real-time facial animation, and other technologies to answer visitors' questions vocally. Ramona is programmed to verbally explain hundreds of 'thoughts' (such as 'artificial intelligence') to visitors as well as provide articles, glossary definitions, links, and other information… A major focus of the site is the exponential growth of technology, leading to the 'Singularity,' which Kurzweil described as 'future accelerated technological change so rapid and profound that it represents a rupture in the fabric of human history.' The site's content includes parts of Kurzweil's forthcoming book, 'The Singularity Is Near.'"

15 Responses to “KurzweilAI.Net: new site discusses Singularity”

  1. PaulKrieger000 Says:

    The Future, Extremists, and representing sane Ntec

    Why does it seem to me that a few people who are heads of successful companies have been going off the deep end lately?

    First candidate for the asylum is Bill Joy with his self-replicating robots. Just, and I mean just, an inch behind him is Ray Kurzweil. These people are noted professionals within their respective fields of computer networks and AI software, but it seems to me that they are stepping outside the bounds of their fields to comment on nanotechnology. Does anyone know whether either of them knows quantum physics and mesoscale physics? I'm sure that they know some physics and that they both received degrees from good or great colleges, but I am not sure they can actually solve Schrödinger's equation.

    The Mechanical Engineering license that I received in my state required me to adhere to specific ethical guidelines. One of those guidelines, under the heading "REGISTRANTS OBLIGATION TO OTHER REGISTRANTS," decrees, and I quote it word for word: Registrants shall not attempt to injure, maliciously or falsely, directly or indirectly, the professional reputation, prospects, practice or employment of other registrants, nor indiscriminately criticize other registrants' work.

    I thought this was a good idea and I signed the paper. There is another part that says, under "REGISTRANTS OBLIGATION TO SOCIETY," and I quote word for word: "Registrants shall express a professional opinion publicly only when it is founded upon an adequate knowledge of the facts and a competent evaluation of the subject matter."

    I know that Kurzweil and Joy are professionals and that doesn't necessarily qualify them as "registrants" and therefore legally I have no recourse as a nano-mechanical Engineer. But I have two questions:

    (1) Why, if we consider them professionals, are they not under this type of code?

    (2) What do you think the public, who cannot tell the difference, is going to do when they hear these "professionals" and cannot clearly distinguish them from nanoscientists and engineers?

    I agree that questions like "Where are we going?", "How do we get there?", "How do we make it safe?", "What about regulation?", "What about defense?", "Who are our allies and enemies, and whom should we trust?", and "Where do we want to be?" ARE IMPORTANT, but aren't all of these better left up to REGISTRANTS or professionals who are working within the field and have some sort of legal "OBLIGATION TO SOCIETY"? And if not, why the heck did I get my degree in engineering and learn all of that science stuff, if I could just read a few books and speak out anyway?

    My ultimate reason for writing this is that I believe, through a "competent evaluation of the subject matter," I can help mankind (including all of mankind and the Earth) through nanotechnology. I could certainly be making more money and getting more respect elsewhere (business owner, medical doctor, artist, Hollywood actor). (And it isn't about the money; I love engineering, science and math.) All of these extremists are starting to make me think the public WOULD be insane if they did not immediately and all at once call their respective governments and ask for a ban on MNT.

    Let's try to find a non-extremist approach to the future, please. I'm not sure I could go back to being a car or general robotics designer or a standard mechanical engineer.

    It also seems that each of their approaches, Kurzweil's and Bill Joy's, leaves out a large portion of society. Joy leaves out the semiconductor companies and the economy. Kurzweil leaves out the people who either don't care much for technology or don't care for being someone else (TED11 conference).

  2. MarkGubrud Says:

    Kurzweil does it again

    Move over SingInst, move over Extropy, move over Foresight Institute: Ray Kurzweil announces the shockwave flashiest, mind-spinningest, most data and image graphickest and fashionably slickiest Singularity website on the net, with no less than 17 employees and 9 consulting firms listed on its masthead, probably not even counting Ray's personal research staff.

    Kurzweil joins the long parade of scientists and inventors whose eccentricity has increased with age. But although he seems to be serious about his message, I also get the sense that he is not hung up on being and proving himself right, but is more interested in showmanship, like a sort of P. T. Barnum of technology.

    I hereby challenge Ray Kurzweil to a written debate to be posted on the home page of his website, or else to post a 10,000-word essay that I will write dissecting his message, much of which I do agree with, but equally much I find abhorrent.

    The centerpiece of the site is Kurzweil's precis for his coming Singularity book, which appears alongside Vernor Vinge's 1993 essay. Here Ray presents a bewildering array of semilog graphs, similar to those he presented at the Foresight conference in November, purporting to demonstrate his "law of accelerating returns", which can be summarized by his statement that "technological change is exponential." It's a pretty impressive collection of data, and the basic point is both correct and important, but it is accompanied by and used to justify the full litany of Technology Cult theology:

    I am not saying that technology will evolve to human levels and beyond simply because it is our destiny…. Rather my projections result from a methodology based on the dynamics underlying the (double) exponential growth of technological processes.

    In other words, this is Science. Except it isn't, not when he's trying to sell the idea that

    nonbiological intelligence should still be considered human as it is fully derivative of the human-machine civilization

    Hey Ray, so is a refrigerator. Are refrigerators therefore human?

    This isn't science, it's moral philosophy, and an evil one. But a well-developed evil philosophy. According to Ray, and most other technology cultists, it's all just evolution:

    Biological evolution is one such evolutionary process. Technological evolution is another such evolutionary process.

    the implication being that biological and technological evolution should be regarded as morally equivalent, with the latter just a continuation of the former.

    With the advent of a technology-creating species, the exponential pace became too fast for evolution through DNA-guided protein synthesis and moved on to human-created technology.

    Later, he makes the (im-)moral dimensions of this conceit explicit:

    Evolution, in my view, represents the purpose of life…. The Singularity then… represents the goal of our civilization.

    And what is this "ultimate goal"?

    Ultimately, nonbiological intelligence will dominate…. By the end of the twenty-first century, nonbiological thinking will be trillions of trillions of times more powerful than that of its biological progenitors, although still of human origin. It will continue to be the human-machine civilization taking the next step in evolution.

    Before the next century is over, the Earth's technology-creating species will merge with its computational technology. There will not be a clear distinction between human and machine. After all, what is the difference between a human brain enhanced a trillion fold by nanobot-based implants, and a computer whose design is based on high resolution scans of the human brain, and then extended a trillion-fold?

    Note that he does not ask what the difference is between either of these two differently described types of objects and a human being, i.e. a member of the species Homo sapiens. One of us, that is. At the end of Kurzweil's "evolution" is nothing that can defensibly be called human. Ray is describing a vision of Apocalypse, the consumption of our species by its own creation. And trying to sell it as sexy and spiritual.

    To his credit, Kurzweil takes a few more peeks into the metaphysical void implied by his dogmas than he has in previous rounds, coming almost close to acknowledging the Gorgon of nothingness that lies within. (Hans Moravec seems to have made contact with the Medusa and gone notably mad somewhere between Mind Children and Robot, the latter book proposing that rainstorms, single bits and everything and nothing must be conscious, in some incommunicably schizophrenic way, in order to salvage the central catechism that a software clone of you is you.)

    Is this really me? For one thing, old biological Ray (that's me) still exists. I'll still be here in my carbon-cell-based brain… If you were to scan my brain and reinstantiate new Ray while I was sleeping, I would not necessarily even know about it…. How could he be me? After all, I would not necessarily know that he even existed.

    Good points, which I have made many times. But for some reason old Ray at this point resorts to the "continuous replacement" argument and then drops the issue. Just a brief peek into the darkness, and then back to the humming artificial lights.

    But I think Ray knows better, knows in his gut that what he is proposing is nothing but the suicide of our species. In his last book, Ray described a "recurrent dream" about wandering through "millions of buildings" with "no one there…. suddenly the dream ends with this feeling of dread…" That's telling it from the heart, Man.

  3. PaulKrieger000 Says:

    Question for the Scorer

    I wanted to make two points and apparently failed to do so.
    One, Ray Kurzweil and Bill Joy both have extreme personalities and points of view.
    Two, these might end up being a problem for future public relations when the press gets hold of what has been said on the subject of nanotechnology. I am for nanotech 150% and I have read both Joy's article/s and "The Age of Spiritual Machines."

    I would like to know why either of these points is wrong or flamebait, or why my essay had a zero point score. This is strictly for my own learning.

    I also think that the post right after mine, "Kurzweil Does it again," makes the same points and is scored a 2. Although, I will admit, he states nothing about Bill Joy.

  4. SkevosMavros Says:

    Seemed Fine To Me

    Personally I found your post interesting and moderated it up accordingly. :-)

    I generally don't moderate anyone down if I can possibly help it (with the notable and, IMHO, justifiable exception of Kadamose).

    I often find myself moderating posts up even if I strongly disagree with their arguments (I often disagree with Mark Gubrud – e.g. calling Hans Moravec "notably mad" is over-egging the pudding I think! – but I probably mod him up more often than anyone else). As long as a post is well written, rational, passionate, and relevant, I will usually mod it up – from what I see on nanodot this seems to be the typical approach to moderation.

    As for why you were modded down – perhaps someone felt you expressed yourself too strongly? Seemed fine to me! :-)

  5. SkevosMavros Says:


    Whoops! I forgot I couldn't post to a thread that I have already moderated. So my positive moderation of you has been undone – you're back to a zero (at this time). Oh well, it's the thought that counts… and at least the system is no longer calling you flamebait! :-)

  6. Practical Transhuman Says:

    Techno-apocalyptic numerology.

    Why does the year 2030 CE keep coming up as some kind of turning point? For example, apparently economist John Maynard Keynes back circa 1930 predicted that the economic problem would be "solved" in about a hundred years, which would make it — 2030!

  7. archinla Says:

    The Horror…

    I wholeheartedly concur with Mark Gubrud's comments on Ray Kurzweil. Kurzweil's vision is very disturbing, as it essentially paints a picture of our extinction. While I welcome the day when I can have this aching spine of mine replaced by advanced technology, or learn Italian simply by downloading it into my enhanced brain, I do not share Kurzweil's unbridled enthusiasm for the day pre-configured manufactured entities are preferred to newborn babies. Has this guy ever had children? Has he ever felt the swell of pride as he held his child in his arms? He seems to be suggesting a world where everything is simulated, but leaves so many seemingly dumb questions unanswered about this cold, dry, super-intelligent world. What happens to friendship? How can he equate intelligence, even a trillion trillionfold, with wisdom? What happens in a world where there's nothing at stake, where you run no risk of physical harm, where there's no bravery, no self-sacrifice? It's all so esoteric, humorless, disembodied. Dead, even. And on the subject of death, he once again gives us that non-answer, with the duplicate Ray no different to the original. I for one look forward to extending my life and those of my loved ones as long as possible. Kurzweil's pontifications on this seem to suggest he can't wait to die and live on only via a duplicate of himself. This I do not understand. Kurzweil is clearly a formidable intellect and a visionary, but I just wish someone else would come up with a plausible future scenario for the post-Singularity and for nanotechnology which doesn't necessitate our extinction.

  8. RobertBradbury Says:

    Re:The Future, Extremists, and representing sane N

    1. Why, if we consider them professionals, are they not under this type of code?

      Because mechanical engineers have had several hundred years of blowing things up (steam and gasoline engines come to mind) to impress people with how destructive their mistakes can be. Presumably a code of ethics discussing a duty to society goes hand in hand with the training and licenses you get to practice that profession. Programmers, until recent times (the Ariane 5 comes to mind), have been less successful at such spectacular failures. But as the e-virus plagues and the trapdoors that surface when the source is opened show, perhaps it is time that similar ethical codes be adopted by software professionals (such as Bill and Ray). If that were so, then the concerns they raise and their attempts to educate (Ray more than Bill) seem to me to be precisely what ethical codes should require of professionals: namely, making people aware of the risks far enough in advance that strategies to avoid them can be developed.

    2. What do you think the public, who can not tell the difference, is going to do when they hear these "professionals," and can not clearly distinguish them from Nano scientists, and engineers?

      Well, the majority of people in the U.S., who still believe that evolution is baloney, will probably think Bill and Ray are full of baloney as well and go back to watching TV. The people who can connect the dots (some greens, bioethicists, scientists and other technically literate people) will hopefully:

      1. Carefully review the data for accuracy. Are Ray's graphs wrong?(!)
      2. Determine what the risks are.
      3. Propose possible solutions.
      4. Select the best paths.

      Which seems to me to be precisely what the Foresight Institute is trying to do. The Senior Associates conferences seem directed towards airing the risks, benefits, paths and tradeoffs in solutions. Calling people like Bill or Ray or Hans "crazy" isn't going to work. You have to explore the ideas carefully and see if and where they may be flawed. The public may think they are crazy, but the public has been wrong before and has, on numerous occasions, paid a dear price for sticking its head in the sand.

    If the data and statistics are incorrect, then you should state precisely where you are in disagreement. Inertia applies to societies and technological development as well as it does to steaming locomotives. The inertia of societies (e.g. the failure to simply believe we could eventually assemble self-replicating systems) is why we don't have robust bio-nanotech today. You only have to look at the entire history of the Genome Project to recognize that we probably could have achieved it 5-10 years earlier if we had simply realized that it was feasible in 1980 or 1985 instead of the early 1990s. The inertia of technological development, in part driven by the competition between individuals and corporations, and now, if we look at the Celera example, between corporations and government, is driving the "Moore's Laws" of computing, communications bandwidth, biotechnology, and eventually nanotechnology. Ray is simply projecting those forward. If you doubt Ray, you should read the paper by Johansen & Sornette (cond-mat/0002075), which says the singularity should occur by 2052 +/- 10 years. (So not everyone says the date is 2030; some people get different numbers. According to some of my graphs, we get desktop-computer human-brain equivalence by 2010.)
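    The kind of back-of-the-envelope extrapolation Bradbury is describing can be sketched in a few lines. All the numbers below are illustrative assumptions, not figures from this thread: a starting desktop compute level, a target of roughly 10^16 ops/s for human-brain equivalence (a Moravec-style estimate), and a Moore's-law doubling time of about 18 months.

```python
import math

def crossover_year(start_year, start_ops, target_ops, doubling_time_years):
    """Year at which exponentially growing compute reaches a target level."""
    doublings_needed = math.log2(target_ops / start_ops)
    return start_year + doublings_needed * doubling_time_years

# Illustrative: a desktop at ~1e9 ops/s in 2001, a brain at ~1e16 ops/s,
# and a doubling every 18 months put the crossover in the mid-2030s.
year = crossover_year(2001, 1e9, 1e16, 1.5)
```

    The spread of dates quoted in the thread (2010, 2030, 2052 +/- 10) is exactly what this sensitivity to the assumed starting point, target, and doubling time would produce.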

    How many people or papers have to tell you it is coming before you will believe it? Or are you content to be a Jewish banker in Poland failing to heed the warnings about the Nazis? (Apologies for the use of stereotypes.)

    Mark and archinla seem to suggest or fear that humans may become extinct. Why do you fear that? Or do you fear you may become obsolete? Or do you fear that the evolution of more advanced life-forms would destroy the "human" spirit or will-to-live? If you think humans are so great, you need to read If Humans Were Built to Last by Olshansky, Carnes & Butler, Scientific American, March 2001 (not on the Web, unfortunately). I just cannot see why anyone would want to preserve themselves in a form that is so fundamentally suboptimal (depending on who you choose to believe, 50-93% of the genome is junk). We aren't even close to the memory, thought capacity or longevity limits. Just what is so f****** great about being a human?

    Just because Ray believes humans and AIs will merge into something greater doesn't require that ALL humans follow this path. Of equal or perhaps greater probability is that the transhumanists and extropians will hop on a Nano-Arc, travel to a nearby brown dwarf (there are dozens within a few dozen light years) and proceed to dismantle it, living trillions of years as part of a Matrioshka Brain. And Homo ludditis will go on its natural course until the sun becomes a red giant in 3-5 billion years and swallows the planet, or the oceans dry up in about a billion years, as a Japanese scientist recently predicted, making the planet pretty unlivable, or the Earth gets smacked by a species-extincting asteroid or comet, which now appears to happen every 100 million years or so. These are the paths that humanity is on unless it decides to embrace the technologies required to alter them.

    As an uploaded sub-mind of a Matrioshka Brain, I'll be perfectly happy to come back to Earth and pick up the remaining matter and energy when Homo ludditis goes extinct. Evolve or perish; it's very simple.

    Now of course, another possible path is that the humans who believe they can only save humanity by eliminating advanced technologies and implementing a police state to prevent their development are likely to doom humanity, because they will force people like myself to develop the technologies in secret and use them as ruthlessly as possible, since we will be very clear that our survival depends upon it.

    I will grant one point to Mark, in that I too believe Hans may have gone over the edge in Robot. But this gets into a discussion of the nature of the Universe, and that rapidly goes beyond our theoretical and experimental knowledge, in my opinion. I'm willing to put Moravec's perspective somewhat after quantum mechanics, but probably way before Tipler's Omega Point and Penrose's microtubule "consciousness," along my personal "fact-or-fiction" credibility scale.

  9. MarkGubrud Says:

    the nature of the Universe and everything

    First, thanks, Robert, for pointing out the Johansen and Sornette paper. It looks very interesting. I haven't read the paper yet, but right away I note that they refine the concept of technological singularity by pointing out that finite-size effects, as in statistical mechanics, would be expected to round off the singular behavior while still allowing the transition to a qualitatively new regime.

    Your comments as to the reality of the trends pointing to singularity, and the need to think and respond seriously, I am all in agreement with. But the points I will reply to here are equally essential.

    I will grant one point to Mark, in that I too believe Hans may have gone over the edge in Robot.

    But apparently you only got half my point. The other half is that Hans is right, that is, he has seen what apparently Ray still refuses to admit to, namely that such an absurd notion (that rainstorms, rocks and absolutely anything must be conscious, since anything at all can be taken, through some possible mapping, as a representation of any "brain pattern" and life history) is an inescapable corollary of the belief that "any representation of my brain pattern is equivalent to me and therefore is me." In other words, if you believe that by creating a physical or logical copy of your brain, "you are uploaded" and that by such a process "you can become" a superintelligent being, then you must as well believe that rocks contain universes of souls. Nonsense implies nonsense.

    Just what is so f****** great about being a human?

    It is what you are, Robert. You cannot be anything else. We are not discussing your options here. We are discussing whether the future belongs to people or to robots.

    Mark and archinla seem to suggest or fear that humans may become extinct. Why do you fear that?

    One clear possibility is mass suicide, as people are enticed to "become" machines. I don't think it is likely that everyone would ever choose that, but another clear possibility is that the machines will choose to polish off the rest (perhaps by forcible "uploading"), and a third is that the humans would try to stop the machines from taking over and the result would be a war ending in human extinction. Actually, I don't think any of these scenarios is likely, but…

    Or do you fear you may become obsolete?

    What I do fear is that the belief implied by this question, that there is no alternative to endless cutthroat competition, will not become obsolete in the "new regime" of technology.

    Or do you fear that the evolution of more advanced life-forms would destroy the "human" spirit or will-to-live?

    What is destroying the human spirit is the failure to recognize that we are our own reason for everything, and do not need to justify our lives in terms of being the most "advanced", smartest, economically productive, or whatever.

    I just cannot see why anyone would want to preserve themselves in a form that is so fundamentaly suboptimal

    Suboptimal for what purposes? For whose purposes?

    Homo ludditis will go on its natural course until the sun becomes a red giant in 3-5 billion years and swallows the planet…. another possible path is that the humans who believe they can only save humanity by eliminating advanced technologies and implementing a police state to prevent their development

    No. The universe belongs to us, and we will use technology to make it ours. We will not be swallowed up by technology, nor will we be afraid to use it. We will need to be an orderly civilization, with laws and police, but we will be a free people, and we will have a future.

  10. archinla Says:

    Re:The Future, Extremists, and representing sane N

    I resent the luddite name-calling. I'm not against evolving. I just don't think Kurzweil paints a compelling picture when he emphasizes that experiences will be virtual. In his desire to dazzle us by talking about the tremendous speed and trillion trillionfold increase in intelligence, he omits to talk about wisdom. Although he may have a clearer idea than other speculators about the post-singularity future, he is nevertheless no different to any other prognosticator in that much of what he predicts will probably be proved wrong. This fear of obsolescence you infer: why don't you address the transition? Because that's where the real upheaval begins; that's where it does get scary. It's far too easy for you or Kurzweil to say "the singularity is near" and don't worry, everything will come out in the wash. That to me is more like blind faith than the supposed ignorant fear of extinction. And Kurzweil's statement that SETI has it all wrong and we're in the lead in the universe is risible. In reference to this part of "The Singularity Is Near," I think the first two posters had it dead on when they posited that he's lost some of his marbles. Just what is so great about being human? Now that's a pretty stupid question. If you don't think it's so great being human, then why haven't you topped yourself already? Like Mark says, it's what you are; you cannot be anything else. And, as I said before, it would be nice to get a new spine, or even a whole new body. I don't fear the future in that respect, other than that I won't survive long enough to see the day. I do fear zealots, though, who are so hell-bent on getting past this hideous, unbearably messy stage we're at now that they can't wait to wipe us all out. Why don't you explain what a Matrioshka Brain is, or make a compelling argument for this Borg-like picture of the future?

  11. RobertBradbury Says:

    Re:The Future, Extremists, and representing sane N

    First let me say that I make comments like Homo ludditis with a wry smile on my face. Since one of my main vocations is biotechnology research, I have to deal with the narrow-mindedness of the Greens, and so I tend to group the many people who show some sign of "future-resistance" into that category. Future-resistance, from my perspective, may be as simple as not "seeing" or "accepting" the singularity. I may sometimes use mild insult as a debating tactic to get us to see the extremes clearly so we can argue back to the middle.

    Let me state my position very clearly so there is no confusion about it. I want to preserve the greatest amount of information that can feasibly be preserved. So by definition, I want to save as many human minds as possible, because I view those minds as having the highest information content. At the same time, I believe in the ownership of one's own information and the right of self-determination. So I would fight tooth and electron against some of the "forced" preservation and/or virtual-enslavement scenarios that Mark has mentioned. So, when I diss the Greens (or other forms of future-resistance), it is because I feel that their agendas will result in a greater loss of information (I mean, really, is there any discussion about saving Monarch butterflies over starving humans?). [Yes, I know we could have an intense discussion about whether GM crops do anything to mitigate human starvation now, but I'm fairly sure I would win the argument, because unless population growth slows radically, they will be required at some point.]

    As to why I don't "top" myself: it would violate my personal prime directive. There are a number of things I'm personally curious to see the resolution of. What is the dark matter? Can we really upload? How do we solve the heat-dissipation problem during planetary dismantlement? Will cryonic reanimation really be feasible? Etc. I've spent about 10 years, almost full time, getting my knowledge level to the point where I can seriously discuss these questions. It would be a huge loss of information to "top" myself. :-)

    I think one of the points we may be getting stuck on is "what is a human?" or "who am I?". My perspective may be different from that of many people because of some 'enlightening' experiences I have had in my life. I do not view "myself" as my body, or my brain, or my position in life. I view myself as an accumulated set of experiences and my stream of consciousness. I do not think anyone with a different perspective on "who" they are would find much appeal in the Kurzweil/Moravec Human-Robot-AI merger or in living a disembodied virtual existence.

    Now, you seem in favor of the life-extending biotechnologies, so we are in no disagreement there. However, even with a completely self-renewing body, your longevity will be limited to around 2,000 years (assuming the current U.S. accident rate). You can push that higher by engineering a safer world, but there are limits (asteroids, for example). I view the transition to a disembodied state as a requirement for exceeding those limits. It's that simple. I have absolutely no problem with people choosing not to make that transition, and I do feel it is essential that we develop a greater tolerance for people choosing their own personal "risk level," from not wearing motorcycle helmets to uploading themselves. The goal should be to allow as much freedom as possible without getting to the point where your freedom in some way limits someone else's. You can use the assembler, but you should only use it safely.
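    The ~2,000-year figure follows from a simple geometric-survival argument: with a constant annual probability p of dying in an accident, the expected number of years survived is 1/p. A minimal sketch (the accident rate used is an illustrative assumption, not necessarily Bradbury's exact input):

```python
def accident_limited_lifespan(annual_fatal_accident_prob):
    # Constant annual risk p -> survival time is geometrically distributed,
    # so the mean number of years survived is 1/p.
    return 1.0 / annual_fatal_accident_prob

# An annual fatal-accident probability of about 5 in 10,000 (roughly the
# order of the late-20th-century U.S. rate) gives ~2,000 years.
lifespan = accident_limited_lifespan(5e-4)  # 2000.0
```

    "Engineering a safer world" in this model just means shrinking p, which raises 1/p, but any residual risk, however small, still caps the expected lifespan.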

    You will note in other posts that I'm very clear about trying to adopt the right speed, for example not enabling self-replicating nanobots until we are sure of the failsafes, and using things like Broadcast Architectures to minimize possible hazards. I'm also realistic about our limited ability to steer this ship, given the inertia it has. Buckminster Fuller used to use the concept of a "trim tab". Though I've never seen one of these, they apparently are mini-rudders on the main rudders of large ships. As the main rudder is turned in one direction, the trim tabs turn in the other direction, apparently to relieve the pressure of the water against the rudder, allowing decreased force to be used during the turning process. What we are all looking for are the "trim tabs" on the singularity.

    I agree that Ray is wrong in some areas. I have previously critiqued the Scientific American article that Ray references. I think this is another classic case of what happens When Scientists Overextend Themselves. You can assume that as soon as someone starts raising any suggestion of "magic physics" (faster-than-light travel, negative mass, etc.), I'm going to groan and either start yelling at them or walk out of the room, depending on whether it's been disproven (as Penrose has been) or is simply highly improbable (in which case I think speculating about it is a poor use of time). There was an interesting comment in one of the SETI conference proceedings from a few years ago: "If you give a theorist long enough, they can explain anything." I pretty much agree with that and prefer to spend my time chewing on sandwiches with real [synthetic] "beef" in them.

    A Matrioshka Brain is a solar system sized supercomputer utilizing all the energy produced by a star and using all the matter in a solar system in some optimal computational architecture. (To have a productive discussion about them would require people go through all the papers that the link points to). I don't feel their development represents an "inherent" conflict with humanity, as I think you or Mark envision it, unless the greens suddenly get it into their head that they want all the inert matter in the solar system left exactly the way it is now.

    As for Mark's question about what I think is "optimal" or "suboptimal", go back to the prime directive: if it isn't storing (or processing) as much information as is possible given the laws of physics, it is suboptimal. So, for example, the solar system is suboptimal unless it's maxed out at 10^45+ bits of "useful" information. Defining "useful" involves philosophical value judgements. The greens might assert that the almost-random disorder of the atoms in the planets is "useful" because it's "natural". I would most likely disagree.

    I'm not going to discuss Mark's points on Moravec and conscious rocks because I thought the idea was strange when I read it and may not have fully understood it. If I don't understand it, I can't really respond to it; I can only state where I filed it on my relative scale of "weird concepts". Giving it proper thought would likely require more time than I have now or in the foreseeable future. I'll simply offer this: more than 20 years ago, Robert Freitas developed the concept of a "sentience quotient" to measure the "thought capacity" of a particular organization of matter. Rocks could be considered "conscious" from Moravec's perspective, but they would have a phenomenally low sentience quotient on Robert's scale.
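    For readers who haven't seen it, Freitas's sentience quotient is just the base-10 logarithm of information-processing rate per unit mass. A minimal sketch below; the brain figures are rough order-of-magnitude estimates, and the rock's processing rate is a purely illustrative guess chosen only to show how far down the scale goes:

    ```python
    import math

    def sentience_quotient(bits_per_second: float, mass_kg: float) -> float:
        """Freitas's SQ: log10 of information-processing rate per unit mass."""
        return math.log10(bits_per_second / mass_kg)

    # Human brain: very roughly 10^13 bits/s in ~1.4 kg of tissue
    print(round(sentience_quotient(1e13, 1.4)))   # ~ +13, Freitas's figure for humans

    # A 1 kg rock: essentially no processing; 1e-20 bits/s is an
    # illustrative assumption, not a measured value
    print(round(sentience_quotient(1e-20, 1.0)))  # ~ -20
    ```

    On this scale a rock sits dozens of orders of magnitude below any neuronal system, which is the sense in which it is "conscious" only trivially.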

    As I've got about a dozen other things on my plate with much more of a present-day trim-tab effect than determining the nature of the universe, I may exit stage left at this point. Good discussion though.

  12. archinla Says:

    Re:The Future, Extremists, and representing sane N

    Thank you for giving such a great – and thorough – response. You've definitely cleared some things up for me. I'm not a scientist, though, and don't have the knowledge to discuss these things on a technical level. I also probably had more imagination for the fantastic worlds of the future when I was a science-fiction-loving teenager than I do now. That's my excuse for not being able to comprehend the disembodiment part of this picture. I still haven't heard anyone tackle the uploading question very well, and I personally think it is the crux of the matter. The other question I have about all this wonderful hardware is the software that governs it. It's the old garbage-in/garbage-out argument; the story of HAL in 2001. Sooner or later our software has to get better, because it is currently terrible. If software doesn't start writing itself along the lines of evolution and natural selection, these nanomachines are going to fail catastrophically due to human error. I may have skipped Kurzweil's view on the matter, but isn't there a bit of a giant leap of faith here when it comes to estimating how near we are to the Singularity? Thanks again.

  13. kurt2100 Says:

    A future that does not necessitate our extinction

    So, you're not into uploading and virtual-reality universes? Neither am I. Probably most other people you and I know are not either. Personally, I think this uploading jazz is a good 100-150 years away. The reason is that human neurology does not work at all like present-day or even planned future computers. Memory storage and processing are based not only on the pattern of dendritic connections between neurons, but also on the chemical type of the synapses. Our brains continually "rewire" themselves (probably in our sleep, when we dream) by deleting existing dendritic connections and establishing new ones. In other words, our memories are our dendritic connections, and those connections are dynamic in nature. No currently planned computer architecture allows for this dynamism. That's one of the reasons why I think uploading will remain a fantasy for a long time to come.
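    The kind of rewiring described above can at least be caricatured in software. The toy model below is purely illustrative (the function name, parameters, and prune fraction are all invented for this sketch, not taken from neuroscience): a "sleep cycle" deletes a fraction of the existing connections and grows an equal number of new ones, so the wiring changes while the connection count stays the same.

    ```python
    import random

    def sleep_cycle(connections, neurons, prune_frac=0.1, rng=None):
        """One toy 'rewiring' pass: delete a fraction of existing dendritic
        connections (directed edges) and grow an equal number of new ones."""
        rng = rng or random.Random(0)
        edges = sorted(connections)
        n_prune = max(1, int(len(edges) * prune_frac))
        for edge in rng.sample(edges, n_prune):   # prune old connections
            connections.discard(edge)
        while n_prune:                            # grow replacements
            a, b = rng.sample(neurons, 2)
            if (a, b) not in connections:
                connections.add((a, b))
                n_prune -= 1
        return connections

    neurons = list(range(100))
    rng = random.Random(42)
    connections = {tuple(rng.sample(neurons, 2)) for _ in range(500)}
    before = len(connections)
    sleep_cycle(connections, neurons, rng=rng)
    print(len(connections) == before)  # prints True: count conserved, wiring changed
    ```

    In software this is trivial, which arguably sharpens rather than settles the point: the hard part isn't dynamic topology per se, but doing it at brain scale with the right chemistry-dependent dynamics.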

    If you browse the net, you will notice that most of the real breakthroughs are occurring in biotech, not nanotech. Just today, you can see on the BioSpace website that a gene therapy for injecting growth factors into the brain has been tested on primates and shown to completely reverse brain aging. On the same site, you will see that human ovaries have been grown in mice. This means that post-menopausal women will be able to grow a new set of ovaries in about 10 years. That doesn't even take into account the developments in stem-cell regenerative medicine. These kinds of biotech breakthroughs are occurring almost daily (I'm not kidding).

    My point is that we will get improved health and vitality, molecular computers, and other breakthroughs in materials science, but we will probably not get the wilder stuff predicted by Kurzweil and others any time soon.

    Don't worry about it; I don't. I have every intention of remaining in protein for some time to come, and no intention of becoming extinct.

  14. ChrisWeider Says:

    Parallels with older technologies

    I think that we see this sort of rampant optimism every time a truly transformational technology comes along, and it consists essentially of linear (in the past) and exponential (now) extrapolation of existing trends. People who applied the same methodologies in the past predicted an air-car in every home… or full-bore virtual reality… or complete colonization of the solar system by 1990. I think it's much more useful to try to determine where the bottlenecks might be rather than just extrapolating wildly.

  15. waynerad Says:

    Re:Kurzweil does it again

    You say, in his last book, Ray described a "recurrent dream" about wandering through "millions of buildings" with "no one there…. suddenly the dream ends with this feeling of dread…"

    I don't remember this. Where is it?

Leave a Reply