
LA Weekly “disses” Transhumanists

from the reactionary-media dept.
Martin Archer writes "Here's another myopic treatment of tranhumanists, from the LA Weekly, Jan 19-25, 2001. Say what you will about Natasha Vita More and Max More, but personally, I just don't understand the journalist's viewpoint: '…Perhaps one day we'll all be transhumans, or posthuman cyborgs, but since we're not cyborgs now, it's hard to get too worked up about it…' It's difficult bringing the subject of the impending technological life extension 'tsunami' even to close friends. How do you do it without coming across as a weirdo?" CP: Fortunately, with the web we can judge for ourselves, without intervening media bias: see Extropy.org and Extrodot.

31 Responses to “LA Weekly “disses” Transhumanists”

  1. MarkGubrud Says:

    comes with the territory

    This article about M. and N. More is pretty humorless, and the treatment of N. is overtly mean. But they put themselves up before the public and they get what they deserve. I don't think this article is at all unfair or even biased. The author does not agree with their ideology, but he does not distort it in any way that I can detect. He zooms in on their poverty, N's desperate denial of her aging, the awfulness of her art (and all other "transhumanist art"). That's mean, but truthful. The article does not go into much depth about M's philosophy, but gives him enough space to hang himself:

    He foresees a time when "we finally migrate off the biological hardware onto a different platform" and describes himself as being only "accidentally" a human. "That's what I happen to be right now," he says, "but I'm really a process, a personality that exists through time changing, that may or may not have the same hardware x many years from now."

    The guy actually wrote a PhD thesis about this, yet failed to recognize the absurdity of such assertions. "We… migrate"? What, exactly? How could anyone spend so much time thinking about this, and not see through it? A strong will to believe, I suppose, plus an unacknowledged but implicit supernaturalism: "I am not physical, but ethereal. I can survive death."

    Still, this article is not nearly as good as David Gale's hilariously droll Meet the Extropians from GQ in 1993.

  2. Iron Sun Says:

    "Intervening media bias" or "real world"?

    Fortunately, with the web we can judge for ourselves, without intervening media bias

    And avoid coming into contact with dissenting views? Extrodot has its own media bias, after all.

    It sometimes seems to me that far from bringing humanity together, the internet may lead to a sort of memetic ghettoization, with every special interest group killfiling anyone who expresses an opposing view, a mass electronic chorus of "la la la we are not listening". It's all very well and good to have forums in which to discuss stuff with like-minded individuals, but the memetic immune system needs a bit of vaccination from outside sources now and again.

  3. Iron Sun Says:

    Why Transhumanism/Extropianism is a religion

    I just don't understand the journalist's viewpoint: '…Perhaps one day we'll all be transhumans, or posthuman cyborgs, but since we're not cyborgs now, it's hard to get too worked up about it…'

    To quote the bit of the article immediately preceding this comment:

    Speculating about the future is fascinating in its way, but most of us would just as soon wait until we get there. If someone had told a late-19th-century peasant that soon people would be able to fly in gigantic winged steel tubes with numbered seats in them, he'd have been right, but so what? A lot of good it was going to do the peasant, who would have probably hit him over the head with a pitchfork.

    You can attempt to prepare for the future, but beyond a certain point it's just wishful thinking. And the future seldom bears much resemblance to starry-eyed futurist ideals. In the 1920s, visions of the future looked like Metropolis, with biplanes huge and small swooping amongst the skyscrapers, serving as personal and public intracity transport. Of course, some of these visions inspired the next generation of aviators and engineers, but spoilsports who might question the economic, technical or even moral validity of romantic ideas should have their objections dealt with rationally, rather than dismissed as "they just don't get it".

    On to my main point: similarities between extropianism and other "non-traditional" religious beliefs.

    1. Extropians claim to base their philosophy in rationality and technical possibility. So do the Scientologists and Raelians. Just because you claim it to be so doesn't make it so. So many of the central tenets of transhumanism are based on principles just as untested as the Raelians' claims that we are all cloned saucer people. Perhaps Roger Penrose is right, and consciousness arises from quantum effects in the microtubules of neurons, something that will be harder to simulate in a computer than the electrochemical model. Perhaps not. But a belief in uploading is still a leap of faith.

    2. Some Extropians use exclusionary labels like "Mysterians" to identify and ridicule opposing viewpoints. You might as well call them gentiles, or (even better) the Scientologist phrase "suppressive persons". Such handy terms allow easy identification of dangerous outside ideas, all the better to ignore them with.

    3. Transhumans have a tendency to feel that their views make them superior to the common mass of humanity, even if they deny this for the sake of propriety. In this forum, Practical Transhuman (a tautology that rates right up there with civil engineer ;-) has said that he thinks that transhumans are unusually individuated, and that a lot of "ordinary" people are interchangeable sheeple. The feeling that you belong to a special elite with access to a special way of living that sets you apart from the common herd is common to most cults.

    4. In a reply to one of my posts here (which I can't locate, so I can't credit you, sorry!), someone said that if you substituted the word "rapture" for "singularity" at some Senior Associates gatherings, you'd think Jimmy Swaggart was in the house.

    This could go on for a while. I think I'll finish with an exchange from the last such discussion I had on this site. User kurt offered a very religious-type argument for why transhumans can ignore objections to their viewpoint, by way of an analogy about explaining colour to a blind person. I replied that a glib metaphor was not necessarily appropriate, but served as a good way of reinforcing a world view that you already wanted to be true. His response was "Like I say, your blind, so you don't know what the color red is like." Oh, bravo. That's the equivalent of "is too", "are not", "is too, neener, neener", the classic religious debate.

    We are both religious fanatics.

  4. Matthew_Gream Says:

    Why the world needs philosophers?

    Perhaps the journalists should be pointed to a copy of "Why Businessmen Need Philosophy" by Ayn Rand – at least it illustrates generally why philosophy is a practically useful activity for navigating through world problems (and in the same way that one may read Hobsbawm's "On History", the content can be detached from her philosophical position).

    Whether or not the journalists "personally" agree with the idea of being post-human, the reality is that it is more than likely to be an inevitable part of the future – so unquestionably it becomes an issue that needs to be discussed and understood. Perhaps it is a little early for mainstream society, but if the article really wanted to be serious it could have discussed this point.

    A legitimate criticism could be levelled at the extropians for the way in which they promote or discuss transhumanism (I am making no judgement, just pointing out an avenue of debate the journalists could have taken) – and therefore an objective criticism of M and N as the "leaders" of the movement for the way they present and deliver the issue to an ill-informed public. In that approach they are validating the importance of the topic, and being objective critics, which would ultimately be beneficial to all concerned.

    If I remember rightly, one other article in the past on extropians commented on the inside of M and N's apartment – not entirely relevant to the intellectual content of the issue, but as the article was written by the British press, it is typical of the way the British press tries to draw out a "social picture" of the people involved. When Anthony Giddens (he is the Director of the LSE and perhaps the most important sociologist in the world at the moment) was discussed in the Financial Times, they talked about his "beaten up Fiat", and him running around Cambridge in a "leather jacket" – trying to draw out an impression of the personality behind the politic. Arguably an unimportant part of his intellectual output, but then Oscar Wilde said something along the lines of "only superficial people don't judge by appearances".

    Excuse my slight rant :-) [little wonder I fell for Noam Chomsky in my early years :-) ].

  5. Matthew_Gream Says:

    Re:Why Transhumanism/Extropianism is a religion

    Extropianism does have clearly theological undertones in its basic principles – not surprising in American culture, which is highly aspirational and individualised. American society sublimates spirituality throughout its institutions (e.g. business books, branding, corporate cultures, etc).

    It is fair to make an objective criticism – perhaps the concepts of "extropian" are only relevant for particular personalities, and therefore not a generally applicable "way of life" for all types of personalities. Recently there has been a lot of talk about the Singularity, which heads into dangerous territory with its theological undertones (say the words "wisdom" and "externalisation") – in many respects, every individual tends towards intellectual singularity after they begin to spiritualise – exactly the reason you have people spiraling to the top of corporate cultures, or ascending intellectual paths to magnum opuses, or whatever other transcendent world view they are hooked into.

    What I mean to say is that it seems important for any proponents of these "things" to be sure that they have grounded what they say in something that has a reasonable quantitative basis. Perhaps an analogy is with reading pop-futurist books that make simple conceptual extrapolations to predict the future – with few intellectual tools other than the idea of "conceptual extrapolation" – not a very good basis for reasoning.

    I hope I make sense – I am a rather confused boy at the moment.

  6. GReynolds Says:

    Unsurprising

    It is typical that the people who are most open to new ideas are unconventional people. Sometimes those new ideas are right, sometimes they're wrong. The point is, you can't tell whether they're right or wrong by the conventionality of the people who accept them. Relatively conventional people accepted Paul Ehrlich's "new idea" that world starvation was less than a decade away. (Many still do… and will for decades!) On the other hand, the late-1930s British Interplanetary Society contained quite a few, er, unconventional people. They turned out to be right. And many unconventional people thought Marxism was the bee's knees; they were wrong. While I believe that the broad outlines of transhumanism are extremely likely to prove correct, I do sometimes get a feeling that there are people for whom it's a substitute for having a life. But so what? The same is often true of stamp collectors, or environmental activists. I *was* amused to see that now the people who talk about space tourism count as practical. Weren't THEY the wild-eyed dreamers a decade or so ago?

  7. Practical Transhuman Says:

    Natasha-of-Nine

    I found Natasha's "Primo 3M+" unintentionally funny, sort of like an ad in the Borg version of a women's fashion magazine.

  8. MarkGubrud Says:

    Re: Uploading

    Very good points, Mr. Iron Sun. A few comments:

    Perhaps Roger Penrose is right, and consciousness arises from quantum effects in the microtubules of neurons, something that will be harder to simulate in a computer that the electrochemical model. Perhaps not. But a belief in uploading is still a leap of faith.

    You're right, but Penrose isn't. Point-by-point:

    Quantum coherence might be important to dynamics at the molecular scale, but the brain is almost certainly not a quantum computer. I think most physicists would consider this patently obvious, but Max Tegmark took the time to demonstrate it in this paper.

    Even if the brain were a quantum computer, it might be possible to simulate it using another quantum computer, but this would not make it sensible to say that the simulation is the original. For example, both could exist simultaneously. Thus it would not make sense to say that by creating a simulation, one could "migrate to another substrate," or that this would be a way to escape the body and survive its destruction.

    Even assuming the brain is a classical information processing system, its simulation is almost certainly a much more demanding problem than most enthusiasts for "uploading" have assumed. That being said, it remains true that such a simulation is, in principle, possible. And even so, it still would not make sense to say that the simulation is the original, or that making such a simulation would be a way for an individual to escape death.

    It is likely that very convincing simulations of human personalities, even particular personalities, can be implemented in hardware which is logically much simpler and physically much more efficient than that of the human brain. But such simulations would not be the individuals simulated; they would not even behave identically. It is probably relatively easy to make a rough simulation of a particular person's behavior, very hard to make a highly faithful simulation, and impossible to make a simulation that behaves exactly the same as any particular human being in every situation.

    Some Extropians use exclusionary labels like "Mysterians"

    As I understand it, this is supposed to refer to the notion that there is something mysterious about human nature (or nature in general) that cannot be discovered by science or replicated by technology. I agree that this is a false notion. But the extropians fail to acknowledge the mysterianism in their own belief that one can escape death by transferring some immaterial essence of one's self to a machine, that egotistical intelligences occupy a preferred place in the scheme of creation, and in many other aspects of their belief system (some of which you have commented on).

  9. samantha Says:

    Re:comes with the territory

    Surely N's age and various opinions of her art are utterly irrelevant to extropianism as such. Only small monkey brains will pick at nits and ignore the main gist. I will be glad when we graduate from such tactics. Truth is not a sufficient criterion. It is more important that the particular truths are relevant to the topic at hand. Ethereality is not at all necessary to support the possibility of uploading, or of so upgrading the body that current biological limits are transcended. Such possibilities are implied by removing mystical notions of a supernatural soul inhabiting and enlivening the body. If consciousness and memory are a process within given hardware, then it is possible to think of transferring that process to run in a different hardware environment. That implies nothing at all mystical, and the continuous assertions otherwise are quite boring.

  10. MarkGubrud Says:

    Re:comes with the territory

    Surely N's age and various opinions of her art are utterly irrelevant to extropianism as such.

    Not irrelevant, but not central, either. N's age is clearly an important part of her human reality, and her efforts to deny it are indicative, as is the character of her art, of vanity. Which could fairly well be said to characterize the entire Extropian movement. Still, I agree that this is hardly a cogent critique of the extro philosophy.

    If consciousness and memory are a process within given hardware, then it is possible to think of transferring that process to run in a different hardware environment. That implies nothing at all mystical, and the continuous assertions otherwise are quite boring.

    It is not mystical to propose that some kind of facsimile of a brain can be created which would behave similarly to the original. What is mystical is the language used to give meaning to such a proposal: "We… migrate." There is no thing which could migrate. Use of language implying the existence of such an ethereal thing, separable from the physical body and transferable to "different hardware", exposes the mystical, dualistic conceptions underlying the enthusiasm for this scenario. If we are to describe it in terms that do not carry this implication, we have to say "Max More looks forward to the day when we can all be killed and replaced by computers."

    This is a cogent critique of the philosophy, and your use of the word "boring" is irrelevant.

  11. ChrisWeider Says:

    Re: Uploading

    I think the real leap of faith here, as you point out, is transference of the ego (I'm using this sloppy term because I don't know a better one). However, I don't know of any good reason why incremental replacement of neural hardware in situ wouldn't suffice to maintain the ego. But in a very real sense, the whole life-extension movement seems misplaced to me… many on this board have pointed out major threats that are likely to occur way before I even reach my three-score and ten, much less celebrate my Millennium Day :^).

  12. psycho_infomorph Says:

    Re:Why Transhumanism/Extropianism is a religion

    Yeah. I have a form of autism called "Asperger's Syndrome" which prevents me from appreciating the finer nuances of human emotion such as sarcasm, etc. and so you could say that I have a simplified mind in certain ways. I do however recognize the simpler emotions, like bravery. It takes a lot of courage to go out in public and try to be a transhuman. The people you ought to really worry about are the more private transhumans who someday just might find your burial site, gather up remnants of your DNA, clone you, and train you to chant over and over "I will never diss an infomorph".

  13. MarkGubrud Says:

    Re: Uploading

    transference of the ego (I'm using this sloppy term because I don't know a better one)

    The word you're looking for is soul. That is the standard term (in monotheistic civilization, where "spirit" is more often associated with poly- or pantheistic animism) for an ineffable essence which can be separated from the physical body and which itself embodies the true life, consciousness, and identity of the person.

    People who advocate uploading use all sorts of words to try to indicate this idea without revealing the essentially religious character of the dualism it implies. "Neural pattern", "algorithm", "process", "brain-contents", "identity", "your consciousness", "your memories", "YOU". These are all just covert synonyms for "soul."

    The real issue is "ME": If a copy of my brain is made, will I wake up in it? This is what we are all asking. If you believe in dualism, and that your soul (or some equivalent term) will automatically "transfer" to the copy, the answer is a reassuring "yes." But if you stick to a purely physicalistic (materialistic) viewpoint, you are left with the picture of a(n imperfect) copy being made, and you being destroyed (killed).

    I don't know of any good reason why incremental replacement of neural hardware in situ wouldn't suffice

    I don't know of any good reason why it should make any essential difference if the copy is made in situ, nearby, a million miles away, immediately, continuously in small steps, or much later. It is still just a copy, another thing.

    Ask yourself this: If a copy of me were made without my knowledge by some process which I could not detect, would it make any difference to me? Then ask: Would it be okay if someone killed me, as long as they made a copy somewhere? Then ask: If I'm going to die, why would I care if a copy were made or not? Maybe to continue my work and affairs. That might be some consolation, but that is not what I really want. I want, myself, to avoid dying.

    But what does this italicized word mean? Unless you believe in souls, it doesn't mean anything. So you can give up your fear of death, and with it the delusion that "uploading" is a way to avoid death.

  14. ChrisWeider Says:

    Re: Uploading

    If the work is done incrementally, in situ, it *is not* a copy. Unless you wish to argue that I'm a different person because I've replaced all the carbon atoms in my body over my lifetime. I have no truck with the soul, and am fully aware of the religious axioms implicit in uploading. Perhaps we should use a different term, say 'upgrading'. Mark, let's do a thought experiment. I have a neuron which is used in my sense of me (as opposed to long-term storage). I take another neuron and set it up so that it has exactly the same connections, and then switch from one to the other. Am I still me?

  15. archinla Says:

    Re: Original post

    I'm not a scientist, and neither are the Mores, evidently. Perhaps my original post shows that I identify with them for this reason. I don't have the knowledge to participate in the general discussion of nanotechnology at a scientific level, although, courtesy of popularizers like Kurzweil, et al, I'm at least aware of its implications. But yes – at that drop-off point in my knowledge, my own singularity, I turn into a believer, so I accept that criticism. The LA Weekly article doesn't investigate technology at all, and rather, accepts the Mores' philosophy at face value and to my mind, does everyone a disservice. So why write it at all, let alone feature it on the front of LA Weekly (a free rag with an absolutely enormous distribution in Los Angeles)? I just wish public, "non-scientific" discourse of nanotechnology and related subjects was more advanced than the humorless, ignorant scorn aimed at these quite innocent folk.

  16. Matthew_Gream Says:

    Definition of Religion

    FYI

    Out of sheer coincidence, I am reading Clifford Geertz's excellent book "The Interpretation of Cultures", and it has to be one of the top 5 books that I have read in the last year.

    Quoting from p. 90:

    "a religion is:
    (1) a system of symbols which acts to (2) establish powerful, pervasive, and long-lasting moods and motivations in men by (3) formulating conceptions of a general order of existence and (4) clothing these conceptions with such an aura of factuality that (5) the moods and motivations seem uniquely realistic."

  17. MarkGubrud Says:

    Re: Uploading

    If the work is done incrementally, in situ, it *is not* a copy. Unless you wish to argue that I'm a different person because I've replaced all the carbon atoms in my body over my lifetime.

    I used the word "copy" to characterize the type of process contemplated. An original object is to be measured in some way, and another object created to specifications derived from the measurement of the original. We are used to putting a document on top of a photocopier and having a duplicate roll out the side with no alteration of the original. But we can imagine just as well that the duplicate would be made in small pieces, the corresponding pieces removed one at a time from the original, and the duplicate pieces glued in.

    We can argue about the interpretation of bare physical facts, but interpretations are illusory; the only reality is the physical process which we are describing. I object to the language used by people like More, Moravec and Kurzweil because it obscures the reality (the physical process) in order to make the physical destruction of human individuals or perhaps the entire species seem appealing.

    Most people are not likely to wish to have their brains disassembled, no matter how many facsimiles are made. It is pretty clear to me that I could not survive having my brain disassembled. The fact that a replica of some sort might be made would not change the physical fact of what was done to me. The location of the replica relative to the Earth's rotating coordinate system would not change this either.

    But our brains are disassembling and reassembling themselves all the time, exchanging old molecules for new. Isn't this just the same? No, it's not. It is a completely different process, and one which is an essential aspect of human existence. But isn't it morally the same? No, it's not. Having your brain disassembled by a machine and human flesh replaced by artificial neurons, software simulations, or whatever, is not an aspect of human existence. The product thus created, call it an "upload" or a "cyborg", would not be a human being. This use of words is not a matter of interpretation, but a direct reference to physical facts.

    I have no truck with the soul, and am fully aware of the religious axioms implicit in uploading. Perhaps we should use a different term, say 'upgrading'.

    That's even more religious; you've added an element of moral transcendence to the merely superstitious claim of transferability.

    I have a neuron which is used in my sense of me (as opposed to long-term storage). I take another neuron and set it up so that it has exactly the same connections, and then switch from one to the other. Am I still me?

    It's a bit unclear what you mean by "my sense of me (as opposed to long-term storage)." You mean that there may be some neurons involved in immediate consciousness, others which are involved in long-term memory? Perhaps. But surely there are some which are involved in what we call awareness. You ask what happens if one of these is replaced by a functional equivalent of some sort.

    As Moravec points out in his famous scenario, we can confidently predict that a patient undergoing such a procedure would neither perceive nor report any change. What does this prove? Absolutely nothing, except that death is not necessarily perceptible to the one dying. We all experience the death of thousands of neurons every day, or, that is, it happens, but we don't experience it. Nothing new here. Also nothing new in the fact that an entire brain can be destroyed without the person ever being aware of it.

  18. crasch Says:

    Re: Uploading

    The real issue is "ME": If a copy of my brain is made, will I wake up in it? This is what we are all asking. If you believe in dualism, and that your soul (or some equivalent term) will automatically "transfer" to the copy, the answer is a reassuring "yes." But if you stick to a purely physicalistic (materialistic) viewpoint, you are left with the picture of a(n imperfect) copy being made, and you being destroyed (killed).

    The particular atoms that make up your body, including your brain, undergo constant replacement. Do you think that you have been "killed" because your body contains few of the atoms it contained 10 years ago?

    Might it be the case that it is not the atoms themselves, but the pattern that the atoms form that creates your identity? If you agree that it doesn't matter if an atom is replaced with an atom of the same type, perhaps a collection of atoms, such as a neuron, could also be replaced without changing your identity?

    For the purposes of maintaining your identity, do we even need to replace a neuron atom for atom? A neuron communicates with other neurons primarily by emitting various neurotransmitters at the synapses. Might it not be possible to replace a natural neuron with a functionally equivalent artificial neuron, so long as the artificial neuron behaves identically to the biological neuron it replaces? After all, why must a neuron be made out of protein, fat, and water?

    The existing efforts are crude, but researchers have already found ways to interface biological neurons with silicon; see (Stett et al., 1997) and (Kovacs et al., 1994).

    If you grant that one of your neurons could be replaced with an artificial neuron without "killing you" why not all of them? At what point would you be "killed"?

    I'm sure you're aware of Roger Sperry's work with split-brain patients. Sperry's work demonstrated that the functions of the brain are localized to specific areas: the left hemisphere tends to process verbal, analytical information; the right hemisphere tends to process visual information. The corpus callosum allows both halves of the brain to communicate with each other. (A gross simplification, I admit.) Patients with a severed corpus callosum exhibit few obvious neurological deficits – the two halves of the brain appear capable of operating quite well independently of each other. Is a split-brain patient two individuals in a single body? Or a single individual with two spatially separate loci of control? And if you grant that it is a single individual with two spatially separate loci of control, why would additional, physically separate loci of control not be possible? Or loci located outside of the individual's body?

    Note that at no point am I arguing that your identity can exist independent of the physical substrate. Rather, I'm suggesting that your identity need not be embodied in a particular physical substrate (in this case, biological neurons). It is the difference between the blueprint for a house, and the house itself.

  19. MarkGubrud Says:

    Re: Uploading

    Do you think that you have been "killed" because your body contains few of the atoms it contained 10 years ago?

    Clearly I have not been killed, otherwise I wouldn't be able to answer you. But, as you point out, there is quite a difference between the person I am now and the Mark Gubrud of 10 years ago. Ten years stand between us, and that's about 95 trillion kilometers, a lot of distance. Even so, there is a connection between us, the continuity of life.

    These are all physical facts. If I say, "I am the person who was, and will be, in spite of continual renewal and exchange of fundamental particles," that is an interpretation overlaid on the physical facts. It, and the moral significance we all attach to this interpretation (desiring to go on living, and to prosper, to grow, to learn, to "improve" in various ways) are well justified in terms of the biological facts, but only in terms of those facts.

    As soon as we start monkeying with what we are at a fundamental level, we cast doubt on the human meaning of life, identity, and continuity. The notion of uploading destroys this meaning completely, leaving us with no justification for connecting the past with the future. It is thus a moral hazard, a kind of ideological poison.

    Might it be the case that it is not the atoms themselves, but the pattern that the atoms form that creates your identity?

    The language you are using suggests the idea that "identity" is some kind of extra thing "created" by the existence of atoms arranged in a certain pattern, like a radio wave created by the existence of atoms arranged in the pattern of a transmitter. But this is nonsense. The electromagnetic field is a physical quantity, the identity field is not. Neither is "pattern" a thing in itself; "there is no information without representation," i.e. matter, from whose physics the "physics of information" is derived.

    Neither the atoms nor "the pattern" create anything. We create the concept of identity, and justify it in terms of biological facts. But "identity" does not exist in itself.

    perhaps a collection of atoms, such as a neuron, could also be replaced without changing your identity?

    If you change atoms, you change atoms. What does it mean to change "identity"? That is just a matter of interpretation. It is clearly a very important matter for us as human beings, but identity across uploading cannot be given any interpretation satisfying the human need for meaning in a consistent way, as can be seen by considering any number of fairly obvious scenarios.

    For the purposes of maintaining your identity

    For the purposes of maintaining an illusion, all you need is a willing suspension of disbelief. That is what Moravec, Kurzweil, and More are hard at work trying to arrange.

    Might it not be possible to replace a natural neuron with a functionally equivalent artificial neuron, so long as the artificial neuron behaves identically to the biological neuron it replaces?

    It is undoubtedly possible, though probably not as easy as you might imagine. But the artificial neuron would not be human. The system would have been moved one step down the hypothetical road that connects a human being to a non-human android via the route of a part-human, part-machine cyborg. Note that this is not a matter of interpretation, but purely a description of hypothetical physical facts.

    why must a neuron be made out of protein, fat, and water?

    Because that is what neurons are made of, unless you are talking about "artificial neurons," which are not really neurons at all, but merely systems designed to mimic the functioning of neurons.

    If you grant that one of your neurons could be replaced with an artificial neuron without "killing you" why not all of them?

    One of my neurons could be replaced without killing me for the same reason I don't die every time one of my neurons does. But if all my neurons die, then surely I will be dead, i.e., from my point of view, I will no longer be, will no longer have a point of view, or anything.

    At what point would you be "killed"?

    At no particular point. Death is always a process that takes place over time, even if it happens so fast as to be imperceptible to the one dying. It can just as well be imperceptible if it happens slowly. But after all my neurons had been destroyed, you would be safe in saying I was dead.

    Is a split brain patient two individuals in a single body?

    No, by definition. But the split-brain phenomenon is a challenge to our concepts of the self. I tend to think that the resolution of this challenge requires abandoning the ideology of ego that underlies the Extropian delusion.

    if you grant that it is a single individual with two spatially separate loci of control, why would additional physically separate loci of control not be possible? Or loci located outside of the individual's body?

    Split-brain patients are able to go on functioning more or less normally only because the two halves are used to working together and not at cross-purposes. Also, the corpus callosum is not the only integrative mechanism at work. There are right-left connections at lower levels of the brain. Perhaps more importantly, both hemispheres occupy the same body, in the same environment; they hear the same sounds, see the same things, participate in the same activities. So they naturally just do what they have always done in various situations.

    However, if you imagine cutting up the brain into smaller and smaller pieces, I don't think you will be able to go very far before you impose drastic impairments on the person. Similarly, I don't see how you would coordinate multiplying "loci of control", including some "outside of the individual's body".

    I take your point to be that consciousness, identity, or whatever it is you think you are talking about, is not (as demonstrated by split brains) necessarily localized and could perhaps be expanded with the addition of outboard hardware. I have never denied that human beings could be deformed continuously into superintelligent cyborgs and beyond. I don't think it would be quite as straightforward as often depicted, but I believe it is probably possible. What I do take issue with is the notion that it is possibly desirable. Seen clearly, it is a prescription for destroying all that we can and do value in being alive and in being human.

    at no point am I arguing that your identity can exist independent of the physical substrate.

    The problem is your assumption that "the identity" exists in addition to "the physical substrate". The latter is all that does exist.

    I'm suggesting that your identity need not be embodied in a particular physical substrate

    You have not explained what it means for "identity" to "be embodied." A particular physical system can be said to have an identity (although this is not a physical quantity, it is a physical fact) in that it is the particular physical system and no other object is it. Similarly, I have an identity, as a particular physical system; no other physical system is me. The human concept of a single identity over time is more problematic, but we can justify it in terms of biological continuity. However, if we try to extend this to uploads, we run into all sorts of paradoxes which I am betting you don't need me to point out to you.

    It is the difference between the blueprint for a house, and the house itself

    Yes, but you are the one confused; you think the blueprint is more real than the house itself, that it is the real house.

  20. Iron Sun Says:

    Avoiding uncomfortable implications

    One of the bits about the common "gradual replacement" uploading hypothesis that both amuses and irritates me is how it is obviously an attempt to sidestep some of the more disturbing possibilities such a course of action might make possible.

    By being a process of incremental substitution, it mimics the ongoing process of change that occurs in day to day living, and uploading can thus be presented as a continuation of life. It's a nice piece of propaganda designed to assuage fears about loss of identity. It also avoids the whole "Well, I've had my brainscan, so now my meatself can die. Oh dear, it doesn't want to" situation. Few people would want a scenario in which one "self" wakes up in Cyber Happy Land while the other is stuck in the now obsolete human body. As Mark has pointed out, this isn't what would actually happen, but nevertheless, even as a misapprehension of the truth, it wouldn't win many converts to the cause.

    It also directs attention away from one of the other implications of the "Memorex" system of uploading: that of making multiple copies. Some people might find the prospect of numerous examples of their personality and memory existing simultaneously attractive. Such people are probably very scary, and should not be allowed to do so.

    The whole "backup" idea also gets to me. "Okay, I backed up my brain yesterday, so it doesn't matter if I die today". See if you still feel that way as the plane you are in goes down in flames.

  21. crasch Says:

    Re:Avoiding uncomfortable implications

    One of the bits about the common "gradual replacement" uploading hypothesis that both amuses and irritates me is how it is obviously an attempt to sidestep some of the more disturbing possibilities such a course of action might make possible.

    Imagine that you were in the shoes of the person arguing for uploading. Assume that you were addressing a skeptical, possibly hostile audience. Would you immediately jump to the most radical implications of your position? Or would you begin from what you assume to be common ground?

    It has not been my experience that extropians sidestep the more unusual implications of uploading–I encourage anyone who may have questions to read/subscribe to the extropians mailing list. There you will find a number of people quite willing to explore even the most far-reaching ramifications of uploading.

    It also directs attention away from one of the other implications of the "Memorex" system of uploading: that of making multiple copies. Some people might find the prospect of numerous examples of their personality and memory existing simultaneously attractive. Such people are probably very scary, and should not be allowed to do so.

    Well, the lack of redundancy in the human brain has always bothered me. It takes at least a decade or so to acquire competency in most fields. Yet a single careless driver, a fall in the bathtub, a stray bit of fat in the wrong artery and…poof!…in an instant, all of that accumulated knowledge is gone. This seems quite wasteful to me.

    I've always loved to read–I have a great fondness for books. When I learned of the burning of the Library of Alexandria, it seemed like a great tragedy to me, because no duplicates of many of those books existed anywhere else in the world.

    Yet what is a book? Are not most books simply a story that somebody has written down? Most people write down only a few, if any, of the stories that they know. When they die, all of their stories are lost. As a result, every day the equivalent of thousands of Libraries of Alexandria is lost. If saving those people's lives (and their stories) means expanding my concept of self to include duplicates, then I'm quite happy to do so.

    The whole "backup" idea also gets to me. "Okay, I backed up my brain yesterday, so it doesn't matter if I die today". See if you still feel that way as the plane you are in goes down in flames.

    Right now, most people's sense of self is strongly associated with a single, physically localized body. However, I expect that our sense of self will gradually expand to include a network of selves, possibly spread over a large geographical area.

    We already store a part of ourselves external to our body. For example, couldn't the written word be considered an externalized form of long-term memory? Computers and calculators also assist with arithmetic, visualization, long-term memory and other mental functions that prior to the computer's invention were performed primarily by our biological apparatus alone. I expect that as these "external brain assistants" increase in capability, and as the interface between our biological brains and these "brain assistants" improves, our sense of self will also expand to encompass a network of brains, possibly spread over a wide geographical area.

    Would a duplicate trapped on a burning plane feel fear? Probably, if it were an exact duplicate of me as I am now. But as I imply above, I would expect that only part of "myself" would be duplicated in any given node of my "brain network". I also would expect that those partial duplicates would not necessarily experience fear in the same way we do now. We feel fear when endangered because those ancient relatives who felt no fear presumably did not survive to reproduce. From the standpoint of the "brain network" as a whole, the loss of a single node would not be the catastrophic loss that the death of my body would be now. Fear appears to be controlled primarily by the amygdala. Therefore, I expect that when we have advanced enough to create the "brain networks" I describe above, I will also be able to alter the functioning of the amygdala (or its equivalent) in each node of the brain network so that node would feel whatever level of fear I felt appropriate.

    Therefore, the "node" going down in flames will likely feel some fear (I would leave the capability to feel some fear in place, because I wouldn't want my nodes to endanger themselves willy-nilly), but I expect that the "node" would most likely spend most of its attention attempting to transmit by radio as much of its unique memory, skills, and personality as possible to another part of my network, rather than stewing about its own eventual demise. In any case, assuming that you have made appropriate backups, only a small part of "me" (referring to the entire "brain network") would be lost; I would imagine that the loss "I" would feel would be analogous to the loss I would now feel from losing a few days of unsaved work on my computer.

    Many people, if they think about the possibility of "brain networks" now, will find them strange and frightening. But such "brain networks" will evolve gradually over many years (probably decades). At each step, the advances will seem obviously helpful. A prosthetic eye for blind people? Of course. Brain/computer interfaces for the paralyzed? Seems very helpful. Nanometer-scale medical imaging? Marvelous. By itself, no single innovation will suffice, but each advance will build upon the others.

    Eventually I expect that we will become "brain networks" without ever realizing what was happening. Our future selves (or our descendants) will look back on our present crude state (by comparison) and feel the same way we do about our ape-like ancestors ("They only had single node brain networks? How sad.")

  22. transhuman57 Says:

    Those Wacky Transhumanists!

    Brendan Bernhard, we owe you a big debt of gratitude for your probing and insightful article about transhumanists Max More and Natasha Vita-More. Rather than adopt the usual sardonic and patronizing tone that seems to permeate the rest of the L.A. Weekly, you have bravely taken a different approach in exposing this self-proclaimed "movement" as just another California fad that deserves no more than a passing guffaw. I did notice your appreciation of Natasha's physique, as well as the "long legs of the interviewer's 20-something assistant, languorously stretched out on the other side of the living room". You also state that "Natasha calls herself an artist, but she might more accurately be viewed as a symptom." What astounding journalism! You sure put these misguided people in their place. Thank you for also pointing out that "Death is awful, but an endlessly prolonged life span seems unimaginable, even monstrous." How deluded I had been to desire an unlimited lifespan! Now I can degrade, die and decay in peace. Long live regular humanity! Well, not too long…

  23. Iron Sun Says:

    Re:Avoiding uncomfortable implications

    Assume that you were addressing a skeptical, possibly hostile audience. Would you immediately jump to the most radical implications of your position?

    Of course not. If I was a KKK Grand Dragon recruiting a new member I would start by playing on feelings of hostility and alienation before I burned any crosses.

    Yet a single careless driver, a fall in the bathtub, a stray bit of fat in the wrong artery and…poof!…in an instant, all of that accumulated knowledge is gone. This seems quite wasteful to me.

    Perhaps it is that imminent sense of potential loss that lets us savour each moment. In order to truly live, we must accept the total inevitability that one way or another all that we have will pass away. Anything else devalues the human experience far more than you seem willing to admit to yourself at this point.

    At each step, the advances will seem obviously helpful. A prosthetic eye for blind people? Of course. Brain/computer interfaces for the paralyzed? Seems very helpful. Nanometer scale medical imaging? Marvelous. By itself, no single innovation will suffice, but each advance will build upon the others.

    And because it is gradual, that makes it acceptable? Have you ever heard the one about "first they came for the Jews and I did nothing… then they came for the etc etc"? The slow erosion of civil rights, the rise of Nazism, someone becoming addicted to prescription painkillers and sliding into heroin use – all of these things can happen incrementally with justifications each step of the way.

    The problem is that many of the steps may make perfect sense, such as prosthetics for the disabled. But, like the prescription painkillers, some people find the dangerous side effects attractive even when they do not need the therapeutic aspects.

    And before you say "why can't I enhance myself in whatever way I choose", let me give you this analogy: if I live on a farm and need to get rid of vermin, I can get a shotgun. I can also take it down to the shooting range and take some potshots at clay pigeons for fun. That doesn't mean that I need to have a gun grafted to my arm. To make a useful or fun faculty part of my self changes the way that we look at it and the way it affects our behaviour.

  24. kurt2100 Says:

    Color red, religion-like, and mountaineering

    My comment about explaining the color red to a blind person is not religious-like at all, and is entirely appropriate to the issue of living forever. A case in point. I occasionally climb mountains. I have occasionally been asked by people who are not climbers why I like to climb mountains. Some of these people have told me that they think mountain climbing is dangerous, difficult, and a stupid thing to do (especially since I am into life-extension), and they do not understand why I do it. I tell them that I climb because I enjoy it and that I am "into it". I tell them that I cannot explain to them why I like to climb, just that it has to be experienced in order to be understood. You see, they cannot understand why mountain climbing is enjoyable any more than someone who is blind can understand the color red. If you like mountain climbing, you do it. If you don't like it, you don't do it.

    Likewise, with life-extension. You're either "into it", or you're not. Being "into" transhumanism is no more a religion than being "into" mountaineering. It's simply a personal choice. Likewise, to "object" to me being into transhumanism is as silly as to "object" to me going mountain climbing. It's simply not an issue.

  25. Iron Sun Says:

    Re:Color red, religion-like, and mountaineering

    My comment about explaining the color red to a blind person is not religious-like at all

    Yes it is. It implies a special knowledge, enlightenment or state of grace that must be believed or directly experienced rather than be subjected to rational debate.

    A case in point.

    A case against point: You have chosen a daring and yet not beyond the pale activity to illustrate, or perhaps obfuscate, your position. Allow me to present a different angle.

    In my misspent youth, I came into contact with people involved in what is commonly known as the drug culture. I was mainly a spectator, and I certainly never became addicted. But I knew people, several of whom are no longer with us, who were. Young heroin users who are in the initial honeymoon period of using the drug feel good about it for a number of reasons. They think that because they didn't drop dead of an overdose the first time they shot up, smack must be a lot less dangerous than all of the drug war propaganda claims. They feel the illicit thrill of sticking it to the man, and of being part of a subculture. They are convinced that they aren't going to get addicted, and that they can stop whenever they want if they do. They feel they are initiated into a way of life that non-users cannot understand, and they feel contempt or even pity for anyone who tries to talk them out of it. Older, more desperate users will offer naive non-users the drug in an effort to get them addicted in order to support their own habit. Some younger users offer it to their friends because they honestly believe that it is a wonderful experience that should be shared.

    Likewise, with life-extension. You're either "into it", or you're not. Being "into" transhumanism is no more a religion than being "into" mountaineering.

    Mountaineering is an activity, not a lifestyle. It may be a major, defining part of your existence, but it is not a philosophy like transhumanism.

    It's simply a personal choice.

    So, as I have pointed out, is heroin use.

    Likewise, to "object" to me being into transhumanism is as silly as to "object" to me going mountain climbing.

    I do not object to your philosophy in the sense of wishing to compel a change; I simply take issue with a number of your assumptions and try to point out some of the flaws and pitfalls in your reasoning. Again, a hallmark of religious-style closed-mindedness in the face of argument is to retreat to a position of "Well, I don't care what you say, I'm going to believe what makes me happy, bleeah."

    It's simply not an issue.

    'Nuff said.

  26. RobertBradbury Says:

    Uploading As Migration

    Mark is clearly an expert in quantum mechanics and related topics of physics. I notice that he 'disses' Penrose's 'consciousness in the microtubules' perspective and he seems to dislike the idea that there is anything 'mysterious' or 'magical' about the brain.

    If those statements are moderately accurate then I believe that he and I have similar perspectives regarding the physical basis of human minds. What I cannot understand is precisely what he is objecting to in the 'migration' perspective Max proposes.

    Off and on I've devoted a couple of years of thought to the ideas regarding uploading and I find that I would agree completely with the idea of 'uploading' as a 'migration' of our minds onto a different hardware platform.

    There are at least two forms of uploading that one can consider, "destructive readout" and "mental evolution onto different hardware".

    The "destructive readout" method involves freezing the brain, then disassembling it atom by atom while taking careful note of where everything is. Once that is complete, a great deal of computational structural analysis is done to determine precisely what the state of the brain was before the freezing occurred. While we cannot be absolutely certain of being able to "backtrack" the relocation of the molecules to their original states, I believe the largely redundant information in the DNA, protein structures and synaptic connections will allow a reconstruction of a largely equivalent molecular map. Then one can either reassemble a copy of the original brain (molecule by molecule) using highly parallel AFM-directed methods, or simply run an atomic-scale simulation of the brain on a very large supercomputer. The only problem with the latter approach is that even solar-system-sized supercomputers (Matrioshka Brains) currently seem to have insufficient capacity to perform this task given current molecular modeling methods. Most likely, cellular automata architectures highly specialized for running molecular simulations (perhaps derived from IBM's Blue Gene supercomputer) may be able to accomplish this.

    Now, provided the recreated "brain" is sufficiently accurate, I would argue that either it, or the simulation of it, is precisely a migration (in essence a recreation) of the original individual. While I personally, do not like the taste this approach leaves in my mouth with regard to my own personal "indefinite longevity", I would not quibble that the recreation is effectively "me".

    An alternative to this is "mental evolution onto different hardware". In this instance, one would administer to the individual nanobots that are capable of mapping the neural structure of the brain to the fine detail level of the synaptic connections and the relative "strengths" of those connections (by determining how many receptors are in each synaptic cleft, the quantity of neurotransmitters released when the neuron "fires", etc.). These monitoring nanobots would then construct an integrated whole-brain communications network (using methods discussed in Chapter 7 of Nanomedicine). The whole-brain network of nanobots is connected (via fiber optics or very high frequency microwave links) to exo-computers with significantly more capacity than the human brain. The internal nanobot network would monitor all internal brain communications and gradually learn to interpret the meanings of specific signals (as neuroscientists currently do with NMR and PET scans localizing brain functions to various regions, but at a much finer resolution). Our minds and the exo-computer would interact until an effective shorthand is developed between them that allows rapid two-way communication. One should easily be able to accept that our mind would "program" the exo-computer with agents to store and retrieve data (off-loading of our memory). [As an aside, I'll note that a recent news item suggested that people are doing this now with Personal Digital Assistant devices, but the result is that their memories are becoming poorer, presumably due to a lack of exercise.] If, as William Calvin suggests, our "thoughts" are neuronal firing patterns that can be copied from one part of the brain to another, then one can envision moving "thoughts" into the exo-computer as well. I would expect this to be a gradual evolutionary process in which, as time goes by, more and more of one's memory and 'mind' ends up in the exo-computer.
    At that point, because the exo-computer hardware would be designed to allow "copying", you can relocate this part of your mind and execute it anywhere (allowing very lengthy longevities if you have copies in enough 'safe' locations). Because the capacity of the exo-computer can be scaled up (a single 1 cm^3 Drexlerian nanocomputer can probably run 100,000 human 'minds'), people would gradually put more and more of their 'minds' into the computer. Eventually an 'accident' happens to the original body/brain, leaving the majority of the mind disembodied in the exo-computer (or circulating around the net). Now, given molecular-scale manufacturing capabilities, you could 'reconstruct' the original mind, but its capacities would be so limited compared with the evolved (uploaded) mind that there doesn't seem to be much point in doing so. It is likely that the evolved mind would view the loss of the original body/brain as we humans currently view the loss of a finger, or a tooth, or perhaps the millions of cells we lose every day.
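    The "100,000 human minds per 1 cm^3 nanocomputer" figure above reduces to simple arithmetic once per-unit throughputs are assumed. The two throughput numbers below are my own illustrative assumptions (roughly the order-of-magnitude estimates common in this literature), not figures given in the comment itself:

    ```python
    # Sanity-check of the "100,000 minds per cm^3" claim, under assumed figures.
    BRAIN_OPS_PER_SEC = 1e16         # assumed ops/s for one human-brain-equivalent
    NANOCOMPUTER_OPS_PER_SEC = 1e21  # assumed ops/s for a 1 cm^3 nanocomputer

    # Minds per cm^3 is just the ratio of the two throughputs.
    minds_per_cm3 = NANOCOMPUTER_OPS_PER_SEC / BRAIN_OPS_PER_SEC
    print(f"{minds_per_cm3:,.0f} brain-equivalents per cm^3")  # prints "100,000 brain-equivalents per cm^3"
    ```

    Any similar pair of estimates within an order of magnitude yields the same conclusion: the bottleneck in the scenario is not raw compute, but (as noted below) scanning fidelity and brain-to-computer bandwidth.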

    It is worth noting that this scenario is probably not much different from what goes on normally each day, because neurons do die (in large numbers), so over time we are losing memory, or at least the accuracy of it, and we certainly do remodel neuronal connections and perhaps even gradually replace neurons with new ones derived from stem cells. As the pace of biotechnology picks up, we will likely choose to enhance our brains to decrease the rate of neuron cell death and/or increase the rate of neuron replacement. This is likely to increase the pace of our own mental evolution. The nanobot-to-exo-computer approach simply utilizes more sophisticated hardware to further increase the rate at which this process occurs.

    Now, the only "catches" I see in this scenario are whether the nanobots can monitor the brain at a sufficiently fine scale to get "all" of the information that is there, and whether there will be sufficient bandwidth between the human brain and the exo-computer to allow effective integration of the minds (or whether you develop a split-personality disorder).

    There is probably a third approach that involves the gradual replacement of neurons with enhanced bio-engineered neurons (with I/O ports that can be more easily 'tapped' by the nanobots) or even nanobot-based neurons themselves. However, 'true' indefinite longevity will require that you either learn to live with the idea that the death of a mind-instance (due to an accident) and the subsequent activation of a copy is still 'you' (at least up to the last backup point), or else it will be necessary to distribute your mind over a very large volume of space such that local 'accidents' only damage very limited parts of your mind (as a minor stroke might damage our current brains).

    Given these various approaches I fail to see how uploading cannot be viewed as a migration across hardware platforms. It is worth noting that much of my career in the software industry involved taking programs and 'porting' them onto different hardware platforms. Software does migrate across platforms even if it isn't designed for it. The brain is the hardware that supports the human mind. The mind, to me, seems to be intimately enmeshed with the physical structure of the neurons and the molecular architecture of the synapses. Unless one says that it is "impossible" to replace those hardware parts with other effectively equivalent hardware (or software) parts, I fail to see how one can assert that the human mind cannot be migrated.

  27. MarkGubrud Says:

    Re:Uploading As Migration

    Mark is clearly an expert in quantum mechanics and related topics of physics.

    I have some professors who'd chuckle at that, and some who'd howl.

    I would agree completely with the idea of 'uploading' as a 'migration' of our minds onto a different hardware platform.

    This language is clearly derived from usage in the software industry, where "applications migrate from one platform to another". However, even there this is a metaphor. Birds migrate, people migrate; they are actual things that actually move from one place to another. But "software migration" means that you stop using one computer and start using another for the same purpose. So you are saying that a person's brain could be discarded and another one used for the same purpose(s). That might be okay if the purposes were those of another person. If I am an employer using a computer programmer to get a piece of code written, it might suit me just fine to "migrate" the programmer's "software" to another "platform." It would not benefit the programmer, however. Certainly not if I killed her in the process.

    "destructive readout" and "mental evolution onto different hardware"

    It seems to me that the second is not fundamentally different from the first, but is really only a kind of smoke-and-mirrors argument.

    While we cannot be absolutely certain to be able to "backtrack" the relocation of the molecules to their original states, I believe the largely redundant information in the DNA, protein structures and synaptic connections will allow a reconstruction of a largely equivalent molecular map.

    If molecules move or undergo significant changes of state in the freezing or vitrification process, the DNA will be completely useless in reconstructing their prior states at the time of death. Molecular-level details are almost certainly important; the naive idea that memory and personality can be reduced to synaptic connectivity is flatly contradicted by modern neuroscience. It seems likely that much information will in fact be irretrievably lost, although that would perhaps be equivalent to the effects of a major brain trauma which typically causes loss of recent memory and some reversible loss of competency.

    one either can reassemble a copy of the original brain (molecule by molecule) [or] simply run an atomic scale simulation of the brain on a very large supercomputer [but] even solar system sized supercomputers (Matrioshka Brains) currently seem to have insufficient capacity

    It makes no sense to imagine using a computer to do a simulation if building the real thing would be a more efficient way to get a result. You can call it a computer if you like.

    provided the recreated "brain" is sufficiently accurate, I would argue that either it, or the simulation of it, is precisely a migration (in essence a recreation) of the original individual.

    You put the word "brain" in quotes. Why? Perhaps because you recognize that there is something phoney about it. If it were an atomic-level facsimile of the original brain, I would have to agree that it was a brain, no quotes. But it would not be the original brain. If it were a simulation, it would not even be a brain. In either case, the original person would have been destroyed, killed.

    Introducing new terminology at the last minute, you downgrade "precisely a migration" to "in essence a recreation". The latter seems to claim a lot less than the former. But it is not entirely clear what either phrase is claiming. What is apparent is that after our hypothetical procedure we have at best some kind of "recreation," perhaps in the same sense that Disneyland gives us recreations of the Old West and so on. It certainly is not the original, real thing, in this case the human being whose murder you imagined.

    I personally, do not like the taste this approach leaves in my mouth

    Why not? You are admitting there is something wrong with the claim that it amounts to your "own personal 'indefinite longevity'".

    I would not quibble that the recreation is effectively "me".

    What do you mean by the word "effectively"? You are admitting, again, that there is something wrong with the claim that it would really be "me", whatever this means.

    nanobots that are capable of mapping the neural structure of the brain to the fine detail level of the synaptic connections and the relative "strengths" of those connections (by determining how many receptors are in each synaptic cleft, the quantity of neurotransmitters released when the neuron "fires", etc.).

    This is certainly not sufficient to reconstruct a functioning brain or brain simulation. At a minimum, you will also need the complete three-dimensional geometry of the dendritic, somatic and axonal membrane, including, very probably, the distribution if not the particular locations of protein complexes throughout. Otherwise you will lose all information about dendritic computation, timing in multineuron assemblies, and likely chemical and glia-mediated interactions between neighbor neurons. However, even that is not enough. Almost certainly there are internal neuron states involving non-membrane-bound proteins, the cytoskeleton, and nucleic acids, which would have an effect on personality and may even play a role in memory.
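    To make the gap concrete, here is a hypothetical per-neuron record for the synaptic-map proposal quoted above. All class and field names are illustrative assumptions, not any real neuroscience schema; the fields below the comment are the additional state the objection says a connectivity-only readout never captures:

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class Synapse:
    """What the nanobot-mapping proposal records per connection."""
    receptor_count: int          # receptors in the synaptic cleft
    transmitter_release: float   # neurotransmitter quantity per firing
    strength: float              # derived relative "strength"

@dataclass
class Neuron:
    """One node in a connectivity-only brain map."""
    synapses: Dict[int, Synapse] = field(default_factory=dict)
    # State the critique above says is also required:
    membrane_geometry: List[Tuple[float, float, float]] = field(default_factory=list)
    protein_distribution: Dict[str, float] = field(default_factory=dict)
    internal_state: Dict[str, float] = field(default_factory=dict)  # cytoskeleton, nucleic acids, ...

# A connectivity-only map leaves every one of the extra fields empty:
n = Neuron(synapses={42: Synapse(receptor_count=120, transmitter_release=0.8, strength=0.65)})
print(len(n.synapses), len(n.membrane_geometry))  # → 1 0
```

    The point of the sketch is only that the proposal fills in the first field and leaves the rest blank, which is exactly the dispute.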

    In short, the technical challenge of "uploading" has almost certainly been underestimated by most enthusiasts and authors on the subject. However, this is perhaps only an incidental observation. The important point is that the claim that such processes offer a way for the individual to escape death and "migrate to other hardware" is ontological nonsense.

    whole brain network of nanobots is connected (via fiber optics or very high frequency microwave links)

    Any estimates on the amount of disruption and displacement of brain tissue by the required fiber optics, or the heating by the microwaves?

    internal nanobot network would monitor all internal brain communications and gradually learn to interpret the meanings of specific signals (as neuroscientists currently do with NMR and PET scans localizing brain functions to various regions, but at a much finer resolution)

    You evidently want much "finer resolution". We can't rule out that it might be possible, if, as you say, one has complete monitoring of all activity, and a sufficiently powerful computer. But these are not requirements to be underestimated. And even so, it is not clear that such an approach would ever succeed in teasing out "the meanings" of all "specific signals." There may be thoughts that will occur to me only a few times in my life, tied to memories that for the most part remain buried in the tangle. I think this kind of approach might within a reasonable amount of time give an eavesdropper some crude capability to "read" some of my mind, but it is not clear that it would ever give him the ability to reconstruct a truly faithful copy.

    Our minds and the exocomputer would interact until an effective shorthand is developed between our minds and the computer that allows rapid two-way communication

    Now you are talking about interfacing, not "uploading."

    One should easily be able to accept that our mind would "program" the exo-computer with agents to store and retrieve data (off-loading of our memory)

    How is this "off-loading" anything? The brain might learn to use an external tool through a bionic interface, but this does not seem likely to be a seamless integration, much less an "off-loading" of already established memory and personality, much less any sort of "migration" of "consciousness".

    people are doing this now with Personal Digital Assistant devices

    And they can continue to do so, with no need for any bionic implants. As the devices and software get better, they will put more and more information power at our disposal without needing to invade our bodies.

    but the result is that their memories are becoming poorer, presumably due to a lack of exercise.

    I find this news item quite dubious. Of course, people who rely on PDAs might not bother to memorize phone numbers and so forth, but the same would be true of people who used notebooks.

    "thoughts" are neuronal firing patterns that can be copied from one part of the brain to another,

    Different parts of the brain exchange information, but it is extremely unlikely that there is a universal code that can simply be copied from one region to another.

    then one can envision moving "thoughts" into the exo-computer as well.

    I cannot envision "moving 'thoughts'" at all. I know how to move brains, but not thoughts.

    as time goes by, more and more of one's memory and 'mind' ends up in the exo-computer.

    If I keep a diary very religiously, so that after thirty years or so there is much more detailed information about my life contained in the many volumes of my diary (and there are people who have done this) than I could ever recall, then would you say that my "memory and 'mind'" (there you go again with the apologetic quote marks) ended up in the diary? Or would you admit that the diary was just an adjunct record which I could consult if I wanted to recover some lost bit of ephemera from the past?

    At that point, because the exocomputer hardware would be designed to allow "copying", you can relocate this part of your mind and execute it anywhere

    Sounds neat, but again, I can authorize duplication of my diary, and it doesn't make me immortal, at least not literally (though perhaps literarily), and anyway, I allow that by some technology it might be possible to duplicate my brain (make a facsimile) any number of times… so what?

    an 'accident' happens to the original body/brain, leaving the majority of the mind disembodied in the exo-computer

    Here's where you attempt the sleight-of-hand maneuver — but I caught you! This "majority of the mind" you're talking about is just the computerized PDA/diary/adjunct/whatever that the person was supposedly using to expand her capabilities. Now you say "an 'accident' happens", meaning, the person is dead. End of that story. Perhaps there is another story here, about the "disembodied" computer software. Such rogue software could indeed play havoc. Let's make sure it can't, that any "adjunct" software created by a human individual dies with that individual, or is frozen at least so that it cannot cause harm.

    It is likely that the evolved mind would view the loss of the original body/brain as we humans currently view the loss of a finger

    I could not have found words to more effectively express the monstrousness of what you are proposing.

    It is worth noting that this scenario is probably not much different from what goes on normally each day

    Uh….

    neurons do die (in large numbers) so over time we are losing memory, or at least the accuracy of it, and we certainly do neuronal connection remodeling

    Yes, all this is true and is a normal part of human existence. It shows that, over long periods of time, we change substantially, and not only in the exchange of our fundamental particles. Our existence as distinct, unique individuals is local in space and time, and is extended over time only through the continuity of life. This is all part and parcel of the human condition, and it should inspire some humility rather than hubris.

    we will likely choose to enhance our brains to decrease the rate of neuron cell death and/or increase the rate of neuron replacement.

    First of all, why will we "likely" do this? To serve what purpose? Well, I suppose most people as they age are annoyed at the gradual loss of competence, and would like a return to the vigor of youth. So perhaps some biotech interventions might be desirable. But not a runaway cerebral hypertrophy that turns us into Mars creatures. Anyone who wants that needs to have their head examined, not expanded.

    This is likely to increase the pace of our own mental evolution. The nanobot to exo-computer approach simply utilizes more sophisticated hardware to further increase the rate at which this process occurs.

    Has it never occurred to you that an unlimited "mental evolution" might not be a good thing?

    There is probably a third approach that involves the gradual replacement of neurons with enhanced bio-engineered neurons (with I/O ports that can be more easily 'tapped' by the nanobots) or even nanobot-based neurons themselves.

    If a human brain dies naturally or is destroyed artificially, the person is dead, no matter what kind of artifact has been created in the meantime.

    'true' indefinite longevity, will require that you either learn to live with the idea that the death of a mind-instance (due to an accident) and the subsequent activation of a copy is still 'you' (at least up to the last backup point)

    Are you nothing more than a "mind-instance"? And your destruction would be okay as long as some other "copy" would be "activated" afterward? So if I make a "backup," and point a gun at you, you will have no fear? Suppose I even continually update the "backup", so that I can promise that your copy, when activated, will remember every experience, right up to the penetration of the bullet and your slow bleeding to death. Then you would have no problem with being shot? We can even make it painless if you like. But you are going to die. I am going to make a copy and activate it, but you are going to die, sucker.

    Given these various approaches I fail to see how uploading cannot be viewed as a migration

    Try harder to see it.

  28. RobertBradbury Says:

    Re:Uploading As Migration

    So you are saying that a person's brain could be discarded and another one
    used for the same purpose(s). That might be okay if the purposes were those of
    another person.

    I am perhaps not being completely accurate in matching my words with my meanings. I consider the person's "mind" to be what is being migrated. To me the brain is nothing more than the hardware the mind currently runs on. I will freely admit that minds are not "currently" software, and operate more at a "firmware" level, but I believe we can migrate the mind up to the software level given sufficiently advanced technologies.

    I think anyone uploading someone else's mind without their express permission to do so is probably violating the fundamental rights of that individual. But here we get into areas where human rights have not been defined, such as "Do you have a right not to be born genetically enhanced?", or "Do you have a right to prevent the use of your genetic material?" (say someone wants to clone Madonna; getting her DNA is probably not particularly difficult).

    Some of your comments seem to suggest that mind uploading is something employers would do to lower labor costs. If I could "sell" my mind and collect royalties from it, it makes for some interesting scenarios. From my perspective, the technologies required for uploading are not strongly dissimilar from very advanced biotech or true molecular nanotech. I believe that environment eliminates the classical employer/employee relationship because nobody will "have" to work to survive.

    It seems to me that the second is not fundamentally different from the
    first, but is really only a kind of smoke-and-mirrors argument.

    I believe that 'destructive readout' & recreation vs. 'evolved relocation' are fundamentally different because there may be significant temporal discontinuities with the destructive methods. It may take years, from the point where your mind is stopped to the point when your mind may be restarted. Evolved relocation, IMO, may not require long periods of down time.

    If molecules move or undergo significant changes of state in the freezing or
    vitrification process, the DNA will be completely useless in reconstructing
    their prior states at the time of death.

    Unless the molecules are substantially reduced to atoms, the information required for their proper reconstruction is still there. All one needs to do is assemble a complete genome for the individual, and then you know what the atomic structure should be for all of the molecules in the body. Assembling a complete genome from even highly broken pieces of DNA is feasible, because that is exactly what companies like Celera do today. Once functional genomics works out all of the DNA and protein regulatory pathways, we should know where the molecules belong within the cells and their normal physical relationships to all of the other molecules. Keeping in mind that many cell types can be frozen solid and revived today, I believe that all of this information, combined with nanoscale assistive machines or cleverly engineered drugs, should allow cryonic reanimation. That is not, however, "uploading", which is going to require a "readout" in some way of the connection matrix and synapse strengths, and potentially even the concentrations of various molecules within the neurons and maybe even gene activation state information. I do not believe that any of that information is "substantially" lost during freezing. It may however take a very big computer to figure out where it all was before the freezing process started.
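    The fragment-assembly step mentioned above is the one part that is already routine computation. A toy greedy overlap-merge, the simplest cousin of the shotgun-assembly approach Celera-style sequencers use (the function names and the minimum-overlap threshold are illustrative choices, not any real pipeline):

```python
def overlap(a, b, min_len=3):
    """Length of the longest suffix of a that is also a prefix of b."""
    for n in range(min(len(a), len(b)), min_len - 1, -1):
        if a.endswith(b[:n]):
            return n
    return 0

def greedy_assemble(fragments, min_len=3):
    """Repeatedly merge the pair of fragments with the largest overlap."""
    frags = list(fragments)
    while len(frags) > 1:
        best_n, best_i, best_j = 0, None, None
        for i, a in enumerate(frags):
            for j, b in enumerate(frags):
                if i != j:
                    n = overlap(a, b, min_len)
                    if n > best_n:
                        best_n, best_i, best_j = n, i, j
        if best_i is None:  # no overlaps left above the threshold
            break
        merged = frags[best_i] + frags[best_j][best_n:]
        frags = [f for k, f in enumerate(frags) if k not in (best_i, best_j)]
        frags.append(merged)
    return frags

# Overlapping fragments of the made-up sequence ATGGCGTGCAAT:
print(greedy_assemble(["ATGGCGT", "GCGTGC", "TGCAAT"]))  # → ['ATGGCGTGCAAT']
```

    Real assemblers must also cope with sequencing errors, repeats, and millions of reads, so this only shows the principle, not the practice.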

    If you haven't already read it, I would recommend you read Ralph Merkle's paper:
    The molecular repair of the brain
    It is somewhat dated at this point, but it is a good place to start.

    Molecular-level details are almost certainly important; the naive
    idea that memory and personality can be reduced to synaptic
    connectivity is flatly contradicted by modern neuroscience.
    It seems likely that much information will in fact be irretrievably lost,
    although that would perhaps be equivalent to the effects of a major brain
    trauma which typically causes loss of recent memory and some reversible
    loss of competency.

    I would tend to disagree. Medical procedures that require the body temperature to be lowered to the point where the heart stops also lower the brain temperature to the point where neural electrical activity ceases. People are routinely brought back to life following these procedures. The Coma Recovery Association documents that, to be declared brain dead, two EEGs must detect no electrical activity over a 24-hour interval.

    The fact that they require two such scans seems to suggest that the detection of no electrical activity is insufficient to certify the individual is incapable of regaining consciousness. If you have no electrical activity in the brain, that seems to suggest your 'mind' can be rebooted from the structural and molecular material alone.

    You put the word "brain" in quotes. Why? Perhaps because you recognize that
    there is something phoney about it. If it were an atomic-level facsimile of
    the original brain, I would have to agree that it was a brain, no quotes.

    If you produce a wet brain, containing essentially the same molecules with essentially the same organized structure as the original brain, then I would say you have a brain and you should get back a reasonable recreation of the mind contained within it. If you are running a molecular simulation of the brain or a model of the neural network contained within the original brain, then I would say you have a "brain" (in quotes).

    In either case, the original person would have been destroyed, killed.

    I would argue that every individual has a fundamental right to 'migrate' their mind onto whatever hardware he chooses, so long as that does not interfere with anyone else's right to do the same. If you believe that an individual's mind is contained within the specific atoms of the brain of an individual, then I would tend to agree that the original has been destroyed. But if you took all of the atoms in my brain apart and recorded their exact positions with angstrom accuracy, then put all of those very same atoms back in their original locations, I know that that individual would be me. Waking up after that process, though, I might find myself feeling a bit awkward knowing what had been done.

    Introducing new terminology at the last minute, you downgrade "precisely a
    migration" to "in essence a recreation". The latter seems to claim a lot
    less than the former.

    If it looks like a specific duck, walks like a specific duck and quacks like a specific duck, then the recreation is the specific duck, even if it isn't made out of the original atoms the duck is made out of. [This is based on my interpretation that "I" am my mental history & patterning and not the atoms of my brain that are being recycled (with some gain and some loss) every day.]

    Why not? You are admitting there is something wrong with the claim that it
    amounts to your "own personal 'indefinite longevity'".

    A recreation is me; it is just a me that I might rather not be.

    [snip -- discussion regarding brain structural and molecular complexity]

    In short, the technical challenge of "uploading" has almost certainly been
    underestimated by most enthusiasts and authors on the subject.

    I would agree that the complexity is not understood by most people who discuss the topic. I believe that the lowest level details you suggest may not be particularly significant, but it will take another 10-20 years of neuroscience before we are likely to know for sure. However, if and when we get to the point where we understand what causes comas and how to bring people out of them, I think we will be a long way towards understanding exactly what level information must be restored to recreate a mind.

    The important point is that the claim that such processes offer a way for
    the individual to escape death and "migrate to other hardware" is
    ontological nonsense.

    If the individual's mind has been recreated, I consider it a form of migration. The dictionary I've got defines "migrate" as 1. to settle in another country or region; 2. to move to another region with the change in season, as many birds. So long as the mind being recreated is a reasonable facsimile [exact reproduction or copy], then I would consider it to have "migrated". If one attaches the 'mind' to the 'brain' and then to the 'molecules' and even 'atoms', then that would not be the case.

    Any estimates on the amount of disruption and displacement of brain
    tissue by the required fiber optics, or the heating by the microwaves?

    Not at this time. This is where concrete work needs to be done. Nanobots are going to be more efficient at performing specific tasks, such as 'counting' concentrations of neurotransmitter molecules or 'transmitting' information on neuron firing frequencies, spacings and amplitudes. However, nanobots do have heat limits. They may have molecular granularity limits (a limit on the amount of sampling they can do without exceeding heat capacities). Even though they should be the size of mitochondria, they may not be able to go inside axons or dendrites if they interfere excessively with molecular trafficking. Remote operations, sensing, etc. will make information collection more difficult. As surgeons now routinely use fine-bore needles to deliver therapeutics to the brain (e.g. chemotherapies or cells for treating Parkinson's patients), the brain is probably "fairly" tolerant of the physical distortion of such linkages. However, as head trauma injuries show, there are probably limits to this.
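    A hedged back-of-envelope for the heat question, using only rough public figures (about 8.6e10 neurons, a resting brain power budget of roughly 20 W) and an assumed per-bot dissipation that is purely a placeholder:

```python
# All figures are rough assumptions for illustration only.
NEURONS = 8.6e10          # approximate human neuron count
BOTS_PER_NEURON = 1       # assume one monitoring nanobot per neuron
WATTS_PER_BOT = 1e-12     # assumed dissipation per bot: 1 picowatt
BRAIN_BUDGET_W = 20.0     # approximate resting metabolic power of the brain

added_w = NEURONS * BOTS_PER_NEURON * WATTS_PER_BOT
print(f"added heat: {added_w:.3f} W = {added_w / BRAIN_BUDGET_W:.2%} of brain budget")
# → added heat: 0.086 W = 0.43% of brain budget
```

    At one picowatt per bot the added load is under half a percent of the brain's budget; at a nanowatt per bot it would be roughly four times the entire budget, which is presumably the kind of heat limit meant above.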

    But these are not requirements to be underestimated. And even so,
    it is not clear that such an approach would ever succeed in teasing
    out "the meanings" of all "specific signals." There may be thoughts
    that will occur to me only a few times in my life, tied to memories
    that for the most part remain buried in the tangle. I think this kind
    of approach might within a reasonable amount of time give an eavesdropper
    some crude capability to "read" some of my mind, but it is not clear
    that it would ever give him the ability to reconstruct a truly faithful copy.

    Since I know that, given current brain structures, I am probably going to lose memories over time (due to neuronal cell loss and/or incomplete or inaccurate copying to new storage locations), the information losses mentioned go with the territory from my perspective.

    Now you are talking about interfacing, not "uploading."

    Fine. It is part of the technology path I see to 'evolutionary uploading'.

    How is this "off-loading" anything?

    Do you bother to "remember" anything now that you put in your notebook or PDA? Sure you may remember some of it, but if you know where you can store it and get it back easily you don't make a point of remembering it.

    The brain might learn to use an external tool through a bionic interface,
    but this does not seem likely to be a seamless integration, much less an
    "off-loading" of already established memory and personality, much less
    any sort of "migration" of "consciousness".

    Memory is part of who we are. There are lots of cases of people who lost the parts of their brain essential for childhood memories or today's memories. They may be perfectly 'conscious' but not be fully functional.

    And they can continue to do so, with no need for any bionic implants. As the
    devices and software get better, they will put more and more information power
    at our disposal without needing to invade our bodies.

    I would want the interface because I view it as a bandwidth issue. Fingers, listening and speaking are very low bandwidth channels compared to the amount of information we now can access. If you want the "philosophical" side, you have to ask whether non-bionically enhanced individuals would be considered 'disabled' at some point.

    I find this news item quite dubious. Of course, people who rely on PDAs might
    not bother to memorize phone numbers and so forth, but the same would be true
    of people who used notebooks.

    Perhaps. I find that I like to take in information, organize it, then put it someplace where I know where to find it (be it a book or web addresses or phone numbers), then I tend to remember where it is but not always what it is (at least not in any detail).

    Different parts of the brain exchange information, but it is extremely unlikely
    that there is a universal code that can simply be copied from one region to
    another.

    Calvin seems to suggest that there is, in at least some regions of the cortex. He suggests that some of the interesting idea "combinations" may occur when you copy a pattern from one area onto another area that is biased for running a different pattern, or when two patterns run side-by-side and influence each other. I would tend to agree that long distance communications (e.g. memory fetching or physiological controls) are probably hardwired with their own unique codes. If Calvin's concept of 'thought patterns' has some merit and nanobots can read them out (and fetch and restore them to the neurons in some way), then I believe one has thought transference.

    Or would you admit that the diary was just an adjunct record which I could
    consult if I wanted to recover some lost bit of ephemera from the past?

    I think the diary would function much the way good books do now: they recreate in your mind the images the author intended. A good author can probably create such rich mental images the first time you read something that, when you re-read the book, the images created are similar to those that are recreated when you read your diary. I would say that you have transferred some of your 'mind' (static memories of experiences) into your diary.

    Sounds neat, but again, I can authorize duplication of my diary, and it doesn't make me immortal, at least not literally (though perhaps literarily), and anyway, I allow that by some technology it might be possible to duplicate my brain (make a facsimilie) any number of times… so what?

    Here's where you attempt the sleight-of-hand maneuver — but I caught you!
    This "majority of the mind" you're talking about is just the computerized
    PDA/diary/ajunct/whatever that the person was supposedly using to expand her
    capabilties.

    :-) Well, I take the point of view that the human mind is a very sophisticated multi-processor (I can drive and think about a work problem at the same time). If I've offloaded more and more of my sub-conscious thoughts (such as those required for much of the sensory processing in 'driving') into the exo-computer then part of my mind has migrated. One could even view anti-lock brakes and automobile anti-collision radars as first steps along this path. They are offloading and/or improving on the subconscious processing your brain normally does. There isn't a bionic interface yet, but I can recall a time or two when I wish I had one for my anti-lock brakes.

    Now you say "an 'accident' happens", meaning, the person is dead.

    The person is brain-dead, yes. How much of the mind is lost depends on the partitioning between their body and the exo-computer.

    End of that story. Perhaps there is another story here, about the "disembodied"
    computer software. Such rogue software could indeed play havoc. Let's make
    sure it can't, that any "adjunct" software created by a human individual dies
    with that individual, or is frozen at least so that it cannot cause harm.

    Well, since that software and the memories are potentially valuable parts of the estate, I don't think you want to 'dump' them unless the individual has expressly requested this. Yes, I agree they need to be guaranteed as safe (presumably these run in some sort of virtual machine).

    First of all, why will we "likely" do this?

    Because, if one does not gradually replace the neurons that are dying at a greater rate than they are being formed, at some point after hundreds of years, you are dead. Neuronal supplementation will be a requirement for the extension of life for thousands of years (which is what we will probably achieve by moderately advanced applications of biotechnology).

    So perhaps some biotech interventions might be desirable. But not a
    runaway cerebral hypertrophy that turns us into Mars creatures.
    Anyone who wants that needs to have their head examined, not expanded.

    We climbed on the slippery slope when we started cooking our food; is there any reason to stop now? We should do it carefully and responsibly and try to accommodate the desires of as many people as possible, e.g. you shouldn't be 'required' to go bionic. At the same time, disallowing it seems to me like the arguments for not educating women or not allowing blacks to vote. What makes current era human genomes or current era human intelligence the 'right' place to stop?

    Has it never occurred to you that an unlimited "mental evolution" might
    not be a good thing?

    Yes, that's why I say 'carefully' and 'responsibly'. If one reads Robin Hanson's paper If Uploads Come First, then one realizes uploads are not to be taken lightly! There will need to be limits on how many copies a person can make, how they treat uncopied minds, and the resources they require.

    If a human brain dies naturally or is destroyed artificially, the person
    is dead, no matter what kind of artifact has been created in the meantime.

    You seem to be attaching the person to his wet brain. My interest is in preserving the continuity of my consciousness and memories, which I view as my 'mind', and which I am not convinced will always be limited to remaining in my wet brain.

    Are you nothing more than a "mind-instance"? And your destruction would be
    okay as long as some other "copy" would be "activated" afterward?

    "I" am nothing more than a mind-instance until I personally am shown some concrete scientific evidence that something more is involved. (After-death experiences are hearsay evidence in my book.) While my destruction followed by the reactivation of a copy would not be my most desired path, I do take some comfort in knowing that part of me would continue to exist.

    So if I make a "backup," and point a gun at you, you will have no fear?

    Not unless you suppress my adrenaline levels at the same time.

    Suppose I even continually update the "backup", so that I can promise
    that your copy, when activated, will remember every experience, right
    up to the penetration of the bullet and your slow bleeding to death.
    Then you would have no problem with being shot?

    Of course I'm going to have a problem with it, as it's going to hurt like hell.

    We can even make it painless if you like. But you are going to die.
    I am going to make a copy and activate it, but you are going to die, sucker.

    If you make it sufficiently worth my while, and had proven to me that the copy would indeed have a full set of my experiences up until the point of death, I would have no objection to this. The "sufficiently worth my while" would vary inversely with my degree of confidence in the accuracy of the copy. I would predict that if these technologies do become available, you will see many people creating interesting ways in which to kill themselves and be getting paid a lot to have it shown on "Extreme Uploadings".

    For me to claim that I object to being replaced by a copy would require that I invalidate the concept that the copies are indeed identical. Since I'm fairly sure the copies can be made identical to a relatively insignificant degree of difference I cannot raise an objection. Should I value the real Mona Lisa over the atomic resolution copy of the Mona Lisa? Only if I have some romantic attachment to the atoms that Da Vinci used to create the original.

    I found Moravec's discussion of topics related to this useful in creating my position.
    The document is: HERE

  29. MarkGubrud Says:

    Re:Uploading As Migration

    I think anyone uploading someone else's mind without their express permission to do so is probably violating the fundamental rights of that individual.

    Probably? How would you determine whether it is or not?

    Some of your comments seem to suggest that mind uploading is something employers would do to lower labor costs.

    Economics and justice issues are another matter. Here let's stick to uploading. My point was that duplicating a person for the sake of using the duplicates might make some kind of sense, but having one's self killed for the utility of duplicates makes no sense, unless one is some kind of "soldier" willing to make "the supreme sacrifice".

    I believe that 'destructive readout' & recreation vs. 'evolved relocation' are fundamentally different

    It was careless of me to assert that they are not; I don't want to get into arguments about which differences are "fundamental". You are talking about different procedures. However, both destroy the person, while creating some kind of artifact which might be another person, a facsimile of the first, or a nonhuman simulation of some aspect of the person's constitution and behavior. In either case, the original person is dead.

    there may be significant temporal discontinuities with the destructive methods

    Yes, but, so what?

    All one needs to do is assemble a complete genome for the individual and you know what the atomic structure should be for all of the molecules in the body.

    Wrong, and irrelevant. Some molecules certainly have multiple states, including, very importantly, DNA, which is switched on and off by binding proteins. And it is not merely the atomic structure of individual molecules, but the arrangements of molecules and supramolecular assemblies in cells which store the information you are concerned about. You cannot reconstruct these from the genome.

    If you produce a wet brain, containing essentially the same molecules with essentially the same organized structure as the original brain, then I would say you have a brain and you should get back a reasonable recreation of the mind contained within it.

    Your language, "the mind contained within [the brain]", is dualistic, and implies belief in the existence of non-physical entities. A brain is a brain. If it is an atomically precise facsimile of another brain from which it was copied, then it can be called a copy of the original brain, but it is not the original brain. In any case, there will never be any atomically precise facsimiles! At best you might make some kind of approximation that perhaps no one would be able to tell from the original.

    If you are running a molecular simulation of the brain or a model of the neural network contained within the original brain, then I would say you have a "brain" (in quotes).

    So, in your glossary, "brain in quotes" means a simulation of a brain. I suppose by extension a "bomb" is a simulation of a bomb, and a "girl" is a simulation of a girl. This raises a sidebar. I don't have a problem with people who want to be "uploaded," as long as they only want to "live" in "virtual reality". I might try to talk them out of committing suicide, but at least they aren't proposing to create any threats to the rest of humanity. As long as "uploads" will always be confined to ineffectual "cyberspaces," with no hooks into the real world, no harm can be done by their electrons whirling around.

    I would argue that every individual has a fundamental right to 'migrate' their mind onto whatever hardware he chooses, so long as that does not interfere with anyone else's right to do the same.

    Well, I would argue that humanoid artificial intelligences of superhuman capability would be a profound threat to the rights of all humans, and therefore should not be allowed to be created by any means, including "uploading." If, however, they are completely isolated from any possibility of affecting the physical world outside a simulated "reality" (let it be of their own choosing), then such systems would be incapable of doing any mischief. But I don't know if society will ever give up the taboo against suicide, at least of healthy individuals.

    If you believe that an individual's mind is contained within the specific atoms of the brain of an individual,

    Then you are a dualist. I am not a dualist. I do not believe that the brain "contains" anything. It is. So is another brain. Two different brains are two different brains. If you believe anything different, you must believe in the existence of something extraphysical.

    [1] if you took all of the atoms in my brain apart and recorded their exact position with angstrom accuracy, then put all of those very same atoms back in their original locations [2] I know that that individual would be me

    How does [2] modify [1]? You describe in [1] an (almost certainly impossible to carry out exactly) physical procedure. What does [2] claim that could be independently verified?

    If it looks like a specific duck, walks like a specific duck, and quacks like a specific duck, then the recreation is the specific duck, even if it isn't made out of the original atoms the duck is made out of.

    But if we could make one recreation, we could make two, four, a dozen, a billion. Which, then, is the specific duck?

    A recreation is me, it is just a me that I may not rather be.

    You seem to be appealing to some right of privacy, personal choice. That's a cop-out from the discussion, but, okay, no one can make you say what the reasons are why you'd "not rather". But we can infer that they exist, and in the absence of another plausible interpretation of your hesitancy, that you do not really fully believe your assertion that "A recreation is me".

    The dictionary I've got defines "migrate" as 1. to settle in another country or region; 2. to move to another region with the change in season, as many birds. So long as the mind being recreated is a reasonable facsimile [exact reproduction or copy], then I would consider it to have "migrated".

    You quote a dictionary definition and then proceed to use the word in an inconsistent way!

    Since I know that, given current brain structures, I am probably going to lose memories over time (due to neuronal cell loss and/or incomplete or inaccurate copying to new storage locations), the information losses mentioned go with the territory from my perspective.

    The inaccuracy of the brain does argue that an imperfect copy might still be indistinguishable from the original, but it does not change the fact that the copy is not the original; it is a copy. They are two different things. I don't see how anyone can deny this tautological fact. The fact that any copy is going to be imperfect, most likely very imperfect, only further highlights the fact that it is a different thing.

    Fingers, listening and speaking are very low bandwidth channels compared to the amount of information we now can access.

    But not compared with the amount we can handle. Having to formulate spoken or written sentences both forces and permits us to organize our thoughts; the stream of consciousness is of course a cacophony. Let advanced AI systems anticipate what we might be interested in, let them provide us with rapid, easily cueable access to information. But keep the boundaries, and keep us humans firmly in control. To do otherwise is to invite our annihilation.

    you have to ask would non-bionically enhanced individuals be considered 'disabled' at some point.

    We really have to get beyond the notion that the future can hold nothing better than endless cutthroat competition with one another (and with machines). I believe it is clear that there is no need for any information technology more powerful than that which will be achievable without invasion of the body by machinery, without humanoid, self-conscious and self-interested artificial superintelligences, and without biotechnic reengineering of the brain (apart from health maintenance).

    Different parts of the brain exchange information, but it is extremely unlikely that there is a universal code that can simply be copied from one region to another.

    Calvin seems to suggest that there is, in at least some regions of the cortex. He suggests that some of the interesting idea "combinations" may occur when you copy a pattern from one area onto another area that is biased toward running a different pattern, or when two patterns run side by side and influence each other. I would tend to agree that long-distance communications (e.g. memory fetching or physiological controls) are probably hardwired with their own unique codes. If Calvin's concept of 'thought patterns' has some merit and nanobots can read them out (and fetch and restore them to the neurons in some way), then I believe one has thought transference.

    I haven't read Calvin's work, so I don't know exactly what he's saying, but your account of it suggests precisely the kind of mystical, dualistic model of brain function that is almost certainly wrong — the idea that one has certain conscious experiences as a result of the production of certain "firing patterns" or whatever. So if some other "hardware" could produce these same "patterns", then one would just as surely "have the same experience". There are so many errors in this way of thinking that it is hard to know where to begin in criticizing it.

    I'll just say that the notion of a universal code, even one confined to a particular region of the brain, is implausible, because each replication of this code wouldn't be doing anything different from the others. It would be a waste of neurons to have them all beating in synchrony for the benefit of some cosmic observer. This is not to say that different parts of the brain do not have their own local representations of the same content, or that they do not take part in globally coordinated activity (e.g. "consciousness"), with each region registering its take on the "thought". But this cannot be a code that copies from one part to another, because the coding lies in which neurons fire as well as when: the when might be copiable, but the which most certainly is not.

    I would say that you have transferred some of your 'mind' (static memories of experiences) into your diary.

    As a metaphor, you might get away with this, but if you insist you mean it literally, people will think you're going crazy. You're talking about writing in a book.

    If I've offloaded more and more of my sub-conscious thoughts (such as those required for much of the sensory processing in 'driving') into the exo-computer then part of my mind has migrated. One could even view anti-lock brakes and automobile anti-collision radars as first steps along this path.

    These are automation features that make driving easier. There is still a clear boundary between you and the car that you operate with your hands and feet.

    There isn't a bionic interface yet, but I can recall a time or two when I wish I had one for my anti-lock brakes.

    You might end up braking for hallucinations. An auto-driver would be safer.

    Neuronal supplementation will be a requirement for the extension of life for thousands of years

    Maintenance will require stimulating the regrowth of neural tissue at the replacement rate. What I don't see is why we need or are likely to choose a neural hypertrophy which would eventually distort us into no-longer-human creatures.

    We climbed on the slippery slope when we started cooking our food, is there any reason to stop now?

    Cooking our food is not the same as cooking ourselves. You can destroy the human race in any number of ways. "Transhumanism" is yet another, just as much an enemy of humanity as disease, war, pollution or invasion from outer space. If the end is annihilation, the means are to be avoided.

    you shouldn't be 'required' to go bionic. At the same time disallowing it seems to be like the arguments for not educating women or allowing blacks to vote.

    I take it you are neither a woman nor a black, and I hope you would have the good sense not to make such a statement in mixed company, although you never know who might be reading.

    Actually, the case for preventing the creation of technological systems which are self-interested, self-aware, and modeled on humans, with human motivations and claiming human rights, is the same as the case for preventing the creation of any other menace to public safety and the rights of actual human beings.

    What makes current era human genomes or current era human intelligence the 'right' place to stop?

    It's what we are. What makes it "right" not to "stop"? Where are we supposed to be going? Why?

    There will need to be limits on how many copies a person can make

    One person is one person.

    and how they treat uncopied minds and the resources they require.

    The technology cult asserts that the future belongs to aggressive machines — oh, but don't worry, there'll be some nice reservations set aside for those of you who prefer to "remain human" like flocks of sheep. I turn this around — let those enamored of the idea of "uploading" commit suicide if they wish, and let their computer simulations spend the rest of subjective eternity enjoying whatever kind of simulated Valhalla they prefer, in some solar-powered reservation on an asteroid, maintained and watched by the human beings who will be busy low-impact colonizing and enjoying the spectacular natural beauties of the planets and the stars.

    While my destruction followed by the reactivation of a copy would not be my most desired path, I do take some comfort in knowing that part of me would continue to exist.

    What part? What is its mass, its color, its angular momentum? You say it exists, and would continue to exist even if your body was destroyed. What is it?

    So if I make a "backup," and point a gun at you, you will have no fear?

    Not unless you suppress my adrenaline levels at the same time.

    Why? Aren't you admitting here that you don't really believe what you're saying? If you do believe it, why should you not remain calm? Maybe you want to say that not all of you believes it. Your "mind" may claim to believe this stuff, but your body knows better.

    We can even make it painless if you like. But you are going to die. I am going to make a copy and activate it, but you are going to die, sucker.

    If you make it sufficiently worth my while, and had proven to me that the copy would indeed have a full set of my experiences up until the point of death, I would have no objection to this.

    Incredible. I assume by "sufficiently worth my while" you mean that I offer you, or your copy, a lot of money. Why would it need to be a lot, if you are so sure that "A recreation is me"? I promise a very faithful copy, no more than a few insignificant errors. You should be willing to take $10 for the few moments it will take. We could set up copying booths on subway platforms, and induce people to have their bodies disassembled for pocket change while they wait for the train.

    Okay, let's say I offer you a billion dollars, payable to the copy that I will create. Except I'll create him at some randomly chosen point on the surface of the planet. "You" may find "yourself" in Kyrgyzstan or Argentina, but a billion dollars richer, so it ought to be worth your while. But I will also claim the right to make an additional copy, which I will assemble in Baghdad and which will immediately be subjected to excruciating torture, Saddam's personal torturemaster serving as subcontractor.

    There won't be any difference between the copy that wakes up a billionaire and the copy that wakes up in a torture chamber, but what the hell, you already decided you were going to wake up the billionaire, so who cares about the torture victim? Don't worry about the ethics of the deal; a political prisoner will be released to make it an even trade.

    Okay, how do you make sense out of that scenario? Do you look forward to getting a billion bucks, or to dying a slow, agonizing death in Baghdad?

    For me to claim that I object to being replaced by a copy would require that I invalidate the concept that the copies are indeed identical.

    Precisely my point about your "unease".

    Should I value the real Mona Lisa over the atomic resolution copy of the Mona Lisa? Only if I have some romantic attachment to the atoms that Da Vinci used to create the original.

    Well, if you don't have any romantic attachments to anything, I suppose you might as well kill yourself. But in reality, you do have some romantic attachments, in particular to the idea of technological salvation and to certain people who have espoused this idea and whom you admire. But much of this ideology is nonsensical, as I have tried to show you, and a threat to humanity.

    I think that if you can't look at an original object, such as an old painting, and experience the thrill of what that object, uniquely, represents — i.e., the connection to the artist and the moment of creation — then you really are missing out on something.

  30. Anonymous Coward Says:

    Re:comes with the territory

    I happened upon this commentary which caused me to laugh! It seems the person who critiqued the LA Weekly article, meaning Mark Gubrud, has certainly got a chip on his shoulder! First, I have never been in poverty (I own a home and drive a Benz); second, I have never denied my age or been afraid of my age (proud of being in my 50s); third, my art is certainly not "awful" as you put it, but perhaps you are edgy about something other than my work which causes you to be so disdainful … The article, in fact, was fairly complimentary for a leftist, angry, deathist magazine. Natasha Vita-More

  31. kenbeal Says:

    Re:Avoiding uncomfortable implications

    We already store a part of ourselves external to our body. For example, couldn't the written word be considered an externalized form of long-term memory?

    Nice. I remember back in my senior year of college (circa 1991) I told a college secretary that "I like keeping parts of my brain outside my body" when I wrote down the schedule. (Yes, I have always been somewhat geeky. Big deal now, but it cost me lots of beatings as a child.)

    Many people, if they think about the possibility of "brain networks" now, will find them strange and frightening. But such "brain networks" will evolve gradually over many years (decades, probably). At each step, the advances will seem obviously helpful. A prosthetic eye for blind people?

    I cannot see from my left eye (a coloboma, i.e. a crater, in the center of the optic nerve, from birth). I very much look forward to the day I can replace that optic nerve and experience the world in startling 3D, which will allow me to see those funky optical illusions which teased me as a child (hold your fingers separated by an inch or so in front of your eyes, and you see a sausage). I cannot see that, nor do I ever see double, no matter how much I've drunk, which I suppose makes me a better drunk driver ;-).

    Do you know about Matrioshka Brains? That's something our society should strive to build, in the not-too-distant future. A solar system, performing computations to aid us in our domination of the universe.

Leave a Reply