
Visualizing the Cosmic All

In E.E. Smith’s famous Lensman series, the galaxy is the battleground between two races of superintelligent beings, the (good) Arisians and the (evil) Eddorians.  When I listen to people who worry that we are about to create a superintelligence which will take over the world, I get the impression they’ve come from reading “Galactic Patrol” and think that we are on the verge of disastrously creating an Eddorian unless we buckle down quick and figure out how to build a friendly Arisian instead.

In the books, the superintellects had lots of ESP powers but we can dismiss those.  The actual intellectual capability they were imputed to have was the ability to predict.  Prediction is of course the sine qua non of intelligence, but the Arisians were able to predict, e.g., five years ahead of time, that a certain man would be sitting in a barber’s chair and a kitten would jump onto his lap, jostling the barber’s arm and giving him a scratch.  All from the laws of physics and the knowledge of initial conditions.

There are many reasons why this is simply, completely, totally, always forever and truly impossible.

First of all, the laws of physics are quantum mechanical and have a built-in probabilistic uncertainty. By the same token, it is impossible to know the initial conditions of any substantial portion of the universe to very high precision: measuring a particle necessarily changes its state in a way we do not completely know.

Second, huge parts of the phenomena of interest, at many levels of ontology, are dynamic systems subject to chaotic behavior.  The Butterfly Effect reigns not only in weather, but in markets and politics and epidemiology and computers (one different bit out of a gigabyte can completely change a program’s behavior) and every human mind.
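The Butterfly Effect is easy to demonstrate. Here is a minimal sketch using the logistic map, a standard one-line chaotic system: two trajectories whose starting points differ by one part in a trillion become completely decorrelated within a few dozen steps.

```python
# Sensitive dependence on initial conditions in the logistic map
# x -> r*x*(1-x) with r = 4 (the fully chaotic regime).

def logistic_trajectory(x0, r=4.0, steps=60):
    """Iterate the logistic map, returning the whole trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.3)
b = logistic_trajectory(0.3 + 1e-12)   # differs by one part in a trillion

# Errors grow roughly like e^(0.69 n), so after ~40 steps the two
# trajectories are fully decorrelated despite the tiny initial offset.
print(max(abs(x - y) for x, y in zip(a, b)))
```

A predictor of this system would need its initial condition to absurd precision just to get a few dozen steps ahead, and each extra step of lead time costs another digit of precision.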

Computers are a particularly hard case of this.  Very basic theorems of computer science tell us that one cannot in general predict what a program will do without actually running it.  This is fine if your superintellect has plenty more processing power than the computer in question, and can emulate it.  But the closer the computer you’re trying to predict comes to having your own processing power, the more likely it will surprise you.
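The theorem in question rests on a diagonalization argument, which can be sketched in a few lines: hand any would-be predictor to a program that consults it and then does the opposite. (The `contrarian` and `confident_predictor` names are illustrative, not from any real library.)

```python
# Toy version of why no fixed predictor can foresee every program's
# behavior: a program can consult the predictor about itself and defy it.

def contrarian(predictor):
    """Ask the predictor what we will return, then return the opposite."""
    prediction = predictor(contrarian)
    return not prediction

def confident_predictor(program):
    # Any fixed guess about contrarian's output is wrong either way;
    # this one confidently predicts True.
    return True

# The prediction and the actual behavior necessarily disagree.
assert contrarian(confident_predictor) != confident_predictor(contrarian)
```

The same self-reference trick underlies the halting problem, and it is exactly the situation of a superintellect trying to predict a machine as powerful as itself.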

A weird special case of this is that you can’t even predict a universe if you yourself are part of it, because you are a computer with processing power equal to yourself.  (This, BTW, is where our notion of free will comes from: our world models must necessarily exempt our self-models from their general basis in determinism.) You could cheat and force yourself to act in the future according to a list of actions you prepared today, but you wouldn’t be acting all that intelligently; and you wouldn’t be acting with free will, either.

A more obvious case is simply a world with two (well-matched) superintellects, in which at least somewhere they are in competition, maybe even just a friendly game of chess.  In a game between two identical chess computers, each gets to see one ply deeper into the future than the other one did.  Neither can know enough to guess what the other one is going to do for sure.

In a world with lots of superintellects, no one will be able to predict any detail on which they compete.

10 Responses to “Visualizing the Cosmic All”

  1. jim moore Says:

    Off current topic:
    Since you are sort of the nanotech balloon guy, what do you think about making something like Bucky Fuller’s Cloud 9 / cities floating in the sky? If you have a 100 meter radius diamondoid sphere filled with hydrogen you get ~4 million kg of lift. I think that might be enough for ~1,000 people and a lightweight infrastructure.
    If you put an array of 1,000 spheres together you have a city of a million.

    Look Mom no eco footprint!
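The lift figure in this comment can be sanity-checked with ordinary buoyancy, assuming sea-level densities of about 1.225 kg/m³ for air and 0.090 kg/m³ for hydrogen:

```python
import math

# Sanity check of the ~4 million kg lift claim for a 100 m radius
# hydrogen-filled sphere, using assumed sea-level densities.
R = 100.0                                  # sphere radius, m
volume = (4.0 / 3.0) * math.pi * R**3      # ~4.19e6 m^3
rho_air, rho_h2 = 1.225, 0.090             # kg/m^3, assumed values
lift_kg = volume * (rho_air - rho_h2)      # displaced air mass minus gas mass
print(f"{lift_kg:.3e} kg")                 # ~4.8e6 kg, consistent with "~4 million"
```

So the quoted figure is the right order of magnitude, before subtracting the mass of the shell and payload.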

  2. J. Storrs Hall Says:

    You could have diamond balloons so thin they would be buoyant at the size of the water droplets making up clouds — 15 microns. So your cloud city could actually look like a cloud! (The fractal netting/strings are left as an exercise :-) )
    (In practice you’d make the balloons bigger than 15 microns, but you could still make the whole thing look like a cloud with, say, 1mm balloons.)

  3. Instapundit » Blog Archive » VISUALIZING the Cosmic All. Says:

    [...] VISUALIZING the Cosmic All. [...]

  4. Gerald Hogan Says:

    If, as you say, the Arisians could not possibly predict the future, how do you explain Hari Seldon’s ability?

  5. kyle Says:

    I love when a scientist says something is impossible, and then a musician, astrologist, or mathematician figures out how to do it just to be antagonistic. Richard Fuller was highly motivated by the type of real life experience they just don’t teach in the school system. It just gets lost when you focus on just the physics and myth busting. But in a world full of buddhas, no buddha can predict what another buddha is going to do. But the buddhas know exactly what a person with just superintellect is going to do. In theory, a theory that has been thoroughly debunked by science, of course ^_^

  6. Warren Bonesteel Says:

    so… Professor Bruce Bueno de Mesquita’s work is a complete waste of time, I suppose.

    Even without mathematics, but with the use of other academic disciplines, I have been able to predict the behavior of societies and nations, even down to the level of ‘predicting’ the rise of certain types of personalities and celebrities in popular culture and politics – in addition to the approximate timing of their appearance on the national and international stage. (President Obama, Joe the Plumber, Paul Potts, Sarah Palin, Susan Boyle…and, in broad strokes, even the T.E.A. Party movement and Scott Brown. Such things are almost as regular and predictable as a metronome or a bar of music…or the movement of a flock of birds or school of fish.)

    With the help of a mathematician and perhaps a physicist, I could be much closer than I have been. (Only three large ‘misses’ in eleven years with perhaps four or five small misses.)

    There also appear to be a couple of big, fat, gaping holes in your logic, Dr. Hall. Could you clarify your point, please?

    I mean, if certain aspects of chemistry and physics weren’t predictable, there wouldn’t be computers chips *or* the ability to program them. If the behavior of both hardware and software is not predictable according to scientific theory, it is no longer science that I am presently using to type this note on this website. It’s magic! I know that your example wasn’t the point of your article. Perhaps it was just a bad analogy on your part?

    The present world isn’t quite full of ‘super intellects,’ but there are lots of them. Numerically, 5% of the world’s population have IQ’s that are at near genius or genius levels. Two percent of 7 billion people is quite a lot of super intellects. In general, the behaviors of such intellects, and their impact upon other people, is predictable, just as the behaviors of crowds is predictable. (‘Ants have algorithms,’ flocks of birds.) Once group behavior is predictable, which it is, individual behaviors become predictable. (system dynamics??)

    There is either something you’re not communicating, or something I am not understanding, here. …and I am not an ignorant man, Professor.

  7. TheAbstractor Says:

    This discussion reminds me of the Dune Chronicles, about how Alia Atreides was losing her mental ability to predict the future because of these other people around her who, well, had the mental ability to predict the future. People can predict the future just fine; we just can’t predict other people.

    (As an aside, the Dune series is probably one of the most depressing storylines of what can only be called “anti-science fiction”. Strong AI ends up being dangerously infeasible. Liberal democracy just leads back to the same old aristocratic circlejerkery. Space travel did nothing more than expand humanity’s age-old problems of Malthusian resource scarcity pressures and Hegelian power struggles to the rest of the galaxy. Religion and faith (which were always a lie after all. Sorry, no Second Coming for you!) ends up being more coercive and reinforcing of class-structure than ever before. Pharmacology offers humanity only a Hobson’s dilemma between escaping through hedonistic oblivion (Semuta music) or seeing but not being able to achieve transcendence before going mad (melange). Only eugenics offers some prospect of trans-human advancement, but only for the One descendant who is the end product of all the genetic engineering. Everyone else just has to fall down and worship him or get massacred by his jihadi worshipers.)

  8. tanstaafl Says:

    “First, there is nothing either intrinsically right or intrinsically wrong about liberty or slavery, democracy or autocracy, freedom of action or complete regimentation. It seems to us, however, that the greatest measure of happiness and of well-being for the greatest number of entities, and therefore the optimum advancement toward whatever sublime Goal it is toward which this cycle of existence is trending in the vast and unknowable Scheme of Things, is to be obtained by securing for each and every individual the greatest amount of mental and physical freedom compatible with the public welfare.”

    E.E. “Doc” Smith
    First Lensman

    not a bad mission statement for the tea party

  9. Peter Says:

    Not sure why you imagine the race of superintellects would reason by the same dimensions / rules. A.I. will not be A.I. for long as it is simply a means to accelerate what is our natural evolution. We still have cerebral capacity to be used.
    When we individually have our Intel chip (whatever) ability to process information wired up, the story will continue; it is simply evolution.

  10. Warren Bonesteel Says:

    Now, we seem to be wandering into social identity theory and worries about depersonalization.

    …I’m still stuck on how quantum entanglement relates to the anthropological postulate of “the psychic unity of mankind.” (There’s a relationship there, but I haven’t had the time to sort it out.)

    If quantum entanglement is a lie, then the author is quite correct in his assertions. If the entire field of science is a lie, then the author is correct in his assertions. i.e. nothing is predictable in mathematics or science.

    (hmmm…How does all of that relate to Couzin’s work on collective behavior?)

    …and if one premise of the author’s statements is true, then why are physicists able to isolate photons and hold them in ‘stasis,’ let alone ‘transport’ them from ‘here’ to ‘there’ in laboratory experiments? If the actions of the smallest observable particles are not predictable, where does that leave us? (Even the results of the dual-slit experiment are predictable.) If those assertions are true, then how can they be observed in laboratories…which are already recording the actions of those particles?

    If other parts of his underlying premise are true, uncertainty is universal and Murphy rules all. Entropy no longer applies and extropy is a fantasy. (Digital Physics and The Holographic Principle become meaningless.)

    …then… there’s that whole Simulation Theory to think about…mirror neurons, brain entrainment and plasticity…

    If you can’t predict a universe you aren’t a part of, doesn’t that make Bohm, Feynman and others complete fools, or possibly even outright liars?

    Just thoughts in passing, here. I’m honestly trying to understand what the Professor is saying, here.

    Perhaps the case is that we don’t know all there is to know about all there is to know? Just because we do not understand how to accurately model certain levels of complexity, doesn’t mean that it can’t be done. It just means that we don’t know how to do it, yet.
