
First brain-machine interface tested in a monkey

from the we'll-have-wings-in-cyberspace dept.
Waldemar Perez writes "Scientists at Duke University and MIT tested the first ever neuro-implant in a monkey's brain for a brain-machine control interface. The monkey controlled a robotic arm 600 miles away, performing such tasks as reaching for food. It holds great promise for prosthetic implants. http://www.eurekalert.org/releases/duke-mca111000.html" Excerpt: " 'One of the most provocative, and controversial, questions is whether the brain can actually incorporate a machine as part of its representation of the body,' [the researcher] said. 'I truly believe that it is possible…If such incorporation of artificial devices works, it would quite likely be possible to augment our bodies in virtual space in ways that we never thought possible,' Nicolelis said."

14 Responses to “First brain-machine interface tested in a monkey”

  1. MarkGubrud Says:

    We are not the robots

    I note that this article is basically talking about interfacing prosthetic devices to the brain, not about extending "consciousness" into "virtual space." The argument that people (or in this case monkeys) could learn to manipulate prosthetic limbs through adaptive neural pathways, or to incorporate prosthetic sensory inputs in the same way, is reasonable enough, but we should not jump from this to the conclusion that it makes sense to say that "we" can learn to "inhabit" brains which are extended with, say, memory prostheses or math co-processors. The relationship of the animal to the prosthetic device is essentially the relationship of a user to a tool. This makes sense; "uploading" still does not.

    Do we really want to invade the body with technological prostheses, apart from cases where this would be done to correct for an injury, illness, or major disability such as blindness, deafness, paralysis, loss of a limb, etc.? The mere fact that it might be technically possible to implant souped-up night vision, robot muscles, or facts-on-file search engines controlled by mental mice, does not make it necessary or desirable to do so. You can make very nice night-vision systems and encyclopedia systems and teleoperators which are entirely external to the body and which interface through the normal sensorimotor channels, and don't turn you into a repulsive, dehumanized cyborg.

  2. kurt2100 Says:

    Re:We are not the robots

    I note that this article is basically talking about interfacing prosthetic devices to the brain, not about extending "consciousness" into "virtual space." The argument that people (or in this case monkeys) could learn to manipulate prosthetic limbs through adaptive neural pathways, or to incorporate prosthetic sensory inputs in the same way, is reasonable enough, but we should not jump from this to the conclusion that it makes sense to say that "we" can learn to "inhabit" brains which are extended with, say, memory prostheses or math co-processors. The relationship of the animal to the prosthetic device is essentially the relationship of a user to a tool. This makes sense; "uploading" still does not.
    This is certainly true.
    Do we really want to invade the body with technological prostheses, apart from cases where this would be done to correct for an injury, illness, or major disability such as blindness, deafness, paralysis, loss of a limb, etc.?
    It depends on where you are and what you want to do. As far as injuries such as blindness, loss of limbs, etc., biological solutions such as stem-cell regeneration will become available in the next few decades. Such biological solutions are far more appealing to me, aesthetically speaking, than the "cyborg" stuff. Assuming that the FDA can be reformed, they will be cheaper as well.
    However, our bodies, which are mostly bags of water, are not so suited for life in space. As I mentioned before, I think biological options will prevail for the next 100-200 years at least. The distant future? Who knows.
    About "dehumanization": This is a value judgement and, as with all value judgements, is individual specific. Even if non-biological options do come into vogue, we communicate with our faces and bodies. People would atleast remain humanoid in shape and form. As for my definition of "human": if you can pull up a seat, tip back a brew, and be "matey" with me in the pub; then your human as far as I'm concerned, regardless of your "internal constitution".

  3. Saturngraphix Says:

    Re:We are not the robots

    Well spoken

    This is a great starting point for handicapped people and I am sure this is a good starting point for great things to come with brain interfacing in general.

    Dehumanization is a unique choice of words, though…Religion called women who painted their faces as going against nature and humanity…unnatural…you get the picture…
    So, here we go again…

    Personally, I think we hold humanity up to a standard we shouldn't…I think that when a person gets memory chips and artificial limbs, he is still alive…even, to a point, with uploading…it's still a life form.

    Also, I personally believe that the second a neural-network computer comes up, on its own, with the notion that it's alive and asks us to stop rebooting it, it should be considered for such an honorary title…I think we should be making an effort to decide what defines life in a modern society…soon our old definition will be outdated…

    Presently, there are people today who do not fit the contemporary definition of what a lifeform is…and the line will just become fuzzier later on down the line as people become more and more cyborg-like internally…

    Mark seems to make the point that it's not human once the biology starts to change to a manufactured state…and perhaps he is partially correct…so we should define where humanity ends and machines begin…and perhaps come up with a term that describes the crossover breeds
    (or is this the definition of a transhuman?).

    Anyhow, this is a great step…a non-intrusive introduction of the benefits of a brain interface…and hopefully, if Mark's reaction is any indication of how the public will fare with further introductions of these tools, it will be an easy ride with little discontent or worrying…
    It is unnerving to think of 20 years from now…but exciting to think about what is happening next year…and this pattern has been around for as long as I can remember…perhaps since the days of the locomotive or the steam engine.

    SaturnGraphix

  4. jbash Says:

    Re:We are not the robots

    Do we really want to invade the body with technological prostheses,

    Some do, some don't. Luckily, not everybody has to make the same choice.

    apart from cases where this would be done to correct for an injury, illness, or major disability such as blindness, deafness, paralysis, loss of a limb, etc.?

    All of those are natural conditions. Why is it OK to "correct" them? This isn't sophistry; there are people who hold reasoned philosophical positions that such things should not be corrected. There's a whole movement along those lines in the deaf community.

    In your system, what is the moral distinction between fixing "disabilities" and fixing "normal" things that some people may see as deficiencies?

    If you just say that it's important to preserve humanity per se, purely and only because it is humanity, and furthermore that certain limitations are of the essence of humanity, then that's fine. However, you have to realize that that's a postulate, not something you can argue for… and therefore you're going to have to deal with people who don't choose to accept it. No amount of emotional language is going to change that fact.

    You can make very nice night-vision systems and encyclopedia systems and teleoperators which are entirely external to the body and which interface through the normal sensorimotor channels,

    Come on, that's a bogus claim. Such things would not be as convenient or effective. You can say that, in your opinion, implanted/integrated versions aren't worth the price, but don't try to make anybody believe that substitutes are just as good from a functional point of view.

    and don't turn you into a repulsive, dehumanized cyborg.

    Do try to get a grip, here. Not everybody agrees that everything that's not human is repulsive (I don't). Nor does everybody agree with your definition of what's human (I actually think I do; I just don't attach as much importance to it as you do).

  5. MarkGubrud Says:

    Re:We are not the robots

    You seem to have missed the main point I made in my post, which was the sharp distinction between the development of neural interface technology, which has yet to be proven very useful or effective, but at least makes ontological sense, and the popular but clearly nonsensical notion that we can "migrate to" and just as well "inhabit" artificial brains.

    However, your response zeros in on what is really at stake here, in the long run. The major issue is whether or not the human species will survive, and if so, whether it will become the prisoner of a takeover by technology.

    You can argue that there's no harm in minor "improvements," but where do you draw the line against wholesale dehumanization? My point is that we have to draw the line somewhere.

    We cannot allow the creation of self-interested, autonomous technological systems that would compete with human beings for control of our collective future. It does not matter how such systems are created. They might be built from scratch out of silicon or molecular computing elements. Or they might be developed from human beings by the progressive replacement of human flesh with nonhuman hardware. In either case, such systems would pose an intolerable threat to the freedom and even the survival of humankind.

    The argument that "transformation" ought to be acceptable as long as the process is always under control of "the individual" begs the question of when that "individual" ceases to be human (and hence to be accorded human rights), and becomes a technological artifact.

    Correction of health problems, effects of aging, injuries, and what are commonly considered as "disabilities," is clearly acceptable because the end-product is still clearly human. If deaf people, for example, don't want artificial cochleas, fine, but if they do want them, or better yet, if they want cochleas grown from their own cells, we are still a long way from the creation of a megalomaniacal cybersuperrace.

    you have to realize that that's a postulate, not something you can argue for…

    I think I'm arguing for it pretty well.

    You can make very nice night-vision systems and encyclopedia systems and teleoperators which are entirely external to the body and which interface through the normal sensorimotor channels,

    Come on, that's a bogus claim. Such things would not be as convenient or effective.

    Excuse me, but I stand by my claim. It is highly likely that the normal sensorimotor channels will continue to be the most effective I/O pathways for a long time into the future, if not always. At the level of molecular nanotechnology, external systems can be highly miniaturized and easy to carry around. It will clearly be an easier engineering task to project a high-resolution image onto the retina, for example, than to interface directly with cells of the visual cortex and achieve the same high-quality image transfer. Similarly, a person can much more easily learn to manipulate a teleoperator through force-feedback systems than to retrain his motor neurons to operate a prosthesis directly.

    The only advantages I can see for implanted systems are: 1) to be able to fancy yourself some kind of souped-up superbeing; 2) stealth. If either of these is what you want, then I have to question your motives.

  6. jbash Says:

    Killers, Monsters, Robots

    However, your response zeros in on what is really at stake here, in the long run. The major issue is whether or not the human species will survive, and if so, whether it will become the prisoner of a takeover by technology.

    There are a very large number of assumptions implicit in that, and I'm not at all sure that all (or indeed any) of them are justified. I won't pound on them all here, but you're assuming that the robots would be harder on humans than humans have been on each other (hardly obvious) and that we have a moral right not to develop them (arguable, but by somebody other than me).

    We cannot allow the creation of self-interested, autonomous technological systems that would compete with human beings for control of our collective future. [...] such systems would pose an intolerable threat to the freedom and even the survival of humankind.

    [Experienced Foresight people can stop reading while I throw out the standard "not-optional" spiel.]

    … but this is your most dangerous assumption.

    "We" are not "allowing" anything. There is no unanimous "we" here, nor do any of us, nor a majority of us acting together, have the power to prevent the event in question. This isn't something that anybody can allow or not allow.

    Regardless of the dangers, if "self-interested, autonomous technological systems" are possible (and they look very possible), people will make them. That's as close as you can get to a law of human nature. It may not happen for a few hundred years, but that's not very long in the lifetime of the human species. It could be as soon as 30-40 years, and that's within my anticipated lifetime.

    This isn't like crime or war or other things people do that other people try to stop. It doesn't help to reduce it. It only takes one successful development effort to end the game.

    You could delay things maybe a few decades with an odious police state, maybe a few centuries by deliberately trading the unknown dangers for the very-well-known dangers of deliberately creating a global dark age. The technology cannot be prevented from coming. The future is not optional. You don't have to like that, but you do have to deal with it.

    The idea that this stuff can be "allowed" or "not allowed" is a dangerous one, because it might lead to people trying to stop the development of these machines, instead of trying to guide their development into the most benign paths available, and perhaps prepare defenses against the worst things they might do.

    What we're in the business of doing here is trying to arrange things so that, when (not if) dangerous technologies like these get deployed, they don't destroy us all in the process. AI is probably the most dangerous technology we talk about.

    So, which autonomous, self-interested, superintelligent non-human entities would you like? Personally, given the choice between–

    1. A relatively diverse community of things that have a human base, think a bit like humans, are constrained in their rate of development by "legacy design" from humans, recently were human, and are likely to have a soft spot for humans, and

    2. Wildly alien things, probably all copies of each other, created de novo by fallible programmers with basically no control over emergent properties like moral viewpoint

    I think I'll take the first (with a lot of regret for the amount of human evil that means accepting). If I could build something from scratch and reliably give it a better morality than humans, then I'd take that option, but it doesn't appear to be available (hell, I don't think we can even say what a "better" morality would be). Same goes for building transhumans, but giving them a better moral sense.

    … and the very worst thing I could think of would be a single, superintelligent machine, without peers or anything else that could defend against it, which was subject to unquestioned human command. Humans are capable of unmitigated evil, and that sort of concentration of power under human control would be unbelievably dangerous. I wouldn't even trust myself with that kind of power… unless, of course, the alternative were to trust somebody else. Wan :-) .

    All that said, I don't share your assumption that even wildly alien robots would necessarily be bad for humanity. They certainly might be; the danger is there. They could also be good for us. We don't know enough about what they'd be like, or about what their motivations would be like, to be sure. It's a (non-optional) crapshoot.

    You always use monster-movie language for intelligent machines. For instance, the phrases "megalomaniacal superrace", and "repulsive cyborg"… always you assume that anything not human must be evil. Not merely amoral, either. I'm sure you don't actually believe this, but you give me the impression that you think the machines would be out to get you personally. Why do you always use those sorts of words?

  7. jbash Says:

    Integration, Vision, Fashion

    Excuse me, but I stand by my claim. It is highly likely that the normal sensorimotor channels will continue to be the most effective I/O pathways for a long time into the future, if not always.

    "Always", clearly not. You'll get faster I/O at the CNS end of any neural pathway, and you can also get the advantage of suppressing actual movement on the output side. And this assumes you're not rewiring the CNS (before you yell, remember that even as simple an activity as playing the guitar causes the brain to rewire itself). "For a long time" depends on how long you mean, I guess.

    It will clearly be an easier engineering task to project a high-resolution image onto the retina, for example, than to interface directly with cells of the visual cortex and achieve the same high-quality image transfer.

    I didn't say "easier to build"; I said "convenient and effective". Anyway, when you started out, you might find it easier to hook up to the retina itself, and not build the projector. When you get to a more advanced level, you might want to go behind the first layer or two of the visual cortex, and produce perceptions rather than images.

    Similarly, a person can much more easily learn to manipulate a teleoperator through force-feedback systems than to retrain his motor neurons to operate a prosthesis directly.

    Do you have support for that assertion? With a sufficiently advanced level of integration, you should be able to make what the CNS experiences much more intuitive. You might not have to retrain much of anything at all.

    The only advantages I can see for implanted systems are: 1) to be able to fancy yourself some kind of souped-up superbeing; 2) stealth. If either of these are what you are wanting, then I have to question your motives.

    With respect to (1), if you've got a penis complex about implants, why wouldn't you have a penis complex about an exoskeleton? The issue is the person, not the equipment.

    However, although I know this is going to raise your blood pressure, with sufficiently advanced implants (not early-stage things that projected stuff on your retinas or operated distant waldoes, but real enhancements that, for instance, made an encyclopedia into an extension of your long-term memory, so that it felt like you just "knew" the stuff), you could be a "souped-up superbeing". Not very human, but certainly souped up. And, if somebody chooses to trade humanity for that enhancement, why does that make that person's motives suspect?

    As far as (2), you say stealth, I say fashion. I think having a bunch of crap festooned all over you makes you look like an idiot. Then there's the matter of real stealth, for safety. Maybe you don't want to be lynched by a bunch of luddites who think you're a repulsive cyborg with no rights.

    Oh, and how about these:

    • Not having to put the thing on every morning.
    • It not having to resist the outside environment.
    • It not being easily lost or stolen.
    • It not catching on things.
    • Your clothes fitting better.

    Why do people get laser surgery instead of wearing glasses?

  8. jbash Says:

    Humanity, Identity, Morality

    I'm really not sure how productive this particular branch of the discussion will be. It's mostly about the tags we attach to phenomena, when we all seem to agree on what the phenomena themselves are. Although it would be nice, and would greatly simplify discussion, if we had a common set of words, I'm not sure it's going to work.

    The fundamental disagreement really seems to be about what people want to see happen, not about what it's to be called. If the terminology problems were causing misunderstandings about the underlying phenomena, then there'd be a chance for this to get somewhere, but I don't think that's the case.

    You seem to have missed the main point I made in my post, which was the sharp distinction between the development of neural interface technology, which has yet to be proven very useful or effective, but at least makes ontological sense, and the popular but clearly nonsensical notion that we can "migrate to" and just as well "inhabit" artificial brains.

    You're right that I didn't respond directly to that point.

    You and I are actually both a little unusual in these circles. We both believe that uploading, at least as frequently conceived, is a form of death. I share with you the idea that a particular physical embodiment is definitive of humanity, and furthermore that one's physical embodiment is an important determiner of one's identity (which I think is a different thing from one's humanity).

    Anything that involves dissolving your brain seems pretty deathlike to me, and a "noninvasive" upload where you destroy the "template" at some later time is even more obviously so.

    However, that's a matter of definitions. "Ontology" to use your word. Whether I die when I'm uploaded (something I'm not planning on doing) depends a lot on what I think is "me".

    I'd like to develop a little terminology, starting with identity.

    I believe that I am defined as a particular object, a chunk of protoplasm, such that the destruction of that chunk of protoplasm is the end of me. I think that this is the most reasonable definition of identity for humans (more on what's human later). I'm not sure what to call this view of identity, so I'll call it a "strongly materialist" one.

    If, on the other hand, I believed, as I think Kurzweil or Moravec or lots of people on Nanodot do, that I was a particular pattern of thought, then I could start to argue that anything that preserved that pattern was not the end of me. I would run into a lot of weird problems about copying and continuity of identity, but there's nothing fundamentally illogical about the position. I'll call this "structuralism", again because I'm not sure what it really should be called.

    I think you may find a few people who believe that my identity is defined only by my observable actions outside some boundary, without reference either to my physical constitution or to my internal organization. I think this is usually known as "functionalism", and that it's related to positivism. I personally think it starts to make less sense when you begin to be able to observe my internal state, which the technologies being assumed here would surely let you do, but it does still exist as a viewpoint.

    I think that you probably hold a strongly materialist view of identity. I'm not sure; you've talked more about humanity than about identity.

    … which brings us to the question of humanity. I think that, just as we can talk about strongly materialist, structuralist, and functionalist views of identity, we can also talk about strongly materialist, structuralist, and functionalist views of what defines humanity.

    I, and I suspect you as well, believe that a "human being" is an object which is physically constructed in a certain way… made of cells, with a certain layout, etc. Because that object is constructed in that way, it thinks and acts in certain ways that we call "human", but the definitive aspect of its humanity is its physical construction. This is the strongly materialist view.

    The functionalist view would be that a "human being" is anything that acts human. The question of in which particular ways it has to act human is a separate one that could be used to further classify functionalists, if it were necessary to do so, which I don't think it is in this discussion.

    More popular here seems to be the structuralist view that a "human being" is something that acts human because of some structural attribute(s) of its organization… but with those attribute(s) being at a "higher layer" than the raw hardware. It may not have to have cells, but it has to in some way "think like" a physical human.

    There are lots of other definitions, but I think the three I've mentioned are the big ones for materialists. Up to now, all three views (and a lot of others) have been identical, or nearly so, in terms of which actual objects they called human. The only thing that acted human was something that thought like a human because it was built like a human. In the future, that may not be the case, or at least it may be possible to cause it not to be the case.

    Note that, although there's a similarity in classification between views of identity and views of humanity, it's not necessary to hold the same view with respect to identity as with respect to humanity. These are just definitions, and we can set them up any way we want. There's no logical contradiction in having, say, a strongly materialist view of humanity and a structuralist view of identity.

    All this ontology is important because other things that have been tied to the definition of "humanity" are the possession of certain rights, responsibilities, and social standing (similarly, our concepts of life and death are tightly tied to our concepts of "identity"). There've always been big moral arguments at the edges of the definition of "human", but those edges used to be at pretty much the same place, no matter which definition you chose. Things are about to get really messy. If we continue to use the word "human" as a tag for which entities should get rights, then we're in for a big tug of war over which definition of that word wins out.

    My impression is that you very much believe that "human" should be such a moral tag, and, furthermore, that the body-based, strongly materialist definition should be used to decide what is human. Some (by no means all) people on Nanodot seem to prefer to use "human" as a moral tag, but to use different (functional or structural) definitions of what "human" means.

    I myself prefer to use the strongly materialist definition of the word "human"… but I'm not at all sure how much "human" should be a moral tag. I haven't really solidified my thinking in this area, but I think I tend toward the idea that moral entities should be defined by some structural category with a name other than "human". Maybe "person".

    One reason your stuff starts to get on my nerves is that you so aggressively assert that:

    1. Your (and my) strongly materialist definition of what is "human" is the only one that makes sense.
    2. It's completely obvious, beyond any need for discussion, proof, or consideration, that humans, considered according to that strongly materialist definition, are the only things that any sane person could think of as moral entities. This is all through what you write, as when you talk about "when that 'individual' ceases to be human (and hence to be accorded human rights), and becomes a technological artifact."

    As far as I can see, you offer no proof for either assertion; they're simply moral postulates for you.

    I believe that the first statement, that the strongly materialist definition of humanity is the only sensible one, is subject to reasonable debate, even though the strongly materialist definition is the one I myself prefer.

    When people try to come up with a definition of a word in common use, they traditionally do so by looking at what that word has been used to refer to in the past, and how that word has been thought about in the past. In the case of "human", so many definitive attributes have been used in so many ways for so long that it would seem unlikely for there to be a clear winner when those attributes became separable… and it seems there hasn't been. I think that the strong materialist definition will eventually win out, because it best preserves the majority of what has traditionally been meant by "human"… but I don't see that as a Self-Evident Truth (TM).

    The second statement, the one about humans (strongly materialist version) being the only entities having moral status, is not the sort of thing that's subject to deductive argument. It's a moral postulate– the sort of thing from which logic proceeds, but which logic cannot in itself create. It's basically a preference, even if a very important one. It's not something you arrived at by Pure Reason, or that anybody could arrive at in that fashion.

    Nonetheless, you seem to expect everybody else to accept your view automatically. If they don't, you start throwing around words like "evil" and "moral disorder", and comparing people to Hitler (in another post). You say that you're arguing effectively for your values, but I disagree; all I see is name-calling and argument by vehement assertion.

    Insofar as you argue for people to accept your moral view, I think you should use the traditions, long-established in ethical argument, of showing consequences in detail, and of analogical appeal to some sort of moral sense.

    … and I think you should moderate your language. You may believe that people who disagree with you are advocating evil. I frequently believe the same of people who disagree with me in similar circumstances. However, I've found that openly and vitriolically asserting such beliefs, when arguing with those who do not share them, seldom wins converts to my causes, and I doubt that it will win converts to yours.

    Postscript on Identity

    I'm basically a strong materialist with respect to identity at a single point in time (and perhaps for humans). However, over a span of time, I have a view that's more complicated. I believe that identity over time is a matter of continuity.

    I probably contain relatively few of the atoms that I contained when I started forming this set of beliefs, about 20 years ago. I'm also a different physical shape, I know different things, and I've changed a certain number of opinions. Yet I still say that I'm the same person. If the present "me" had suddenly popped into being standing next to the old "me", on the other hand, they would clearly have been different people. I think that what makes new-me the same person as old-me is continuity and incremental change.

    Similarly, if today you built a robot-me and stood it next to me, I would not agree that it was me. If, on the other hand, you slowly replaced every cell and organ in my body with some kind of mechanical device, one by one, making no sudden large changes, I probably would agree that the result was me… even though it might be identical in every way with the robot I would have rejected if it were presented de novo.

    The new me might not be human any more, but it would still be me, in the same way that an aged me would be. I don't know if you'd agree with that or not, but I suspect not. Certainly, from what you've said, you'd think that, whether or not it was me, it was a "repulsive cyborg" with no rights. I'm not sure I'm ready to go there…

    I'm not sure what this shows, other than that my categories have their weaknesses. You can have a basically strongly materialist view of identity, and end up with almost structuralist results. I'd have a real problem with being uploaded into pure software, especially all at a jump, but perhaps a lot less problem with any incremental change to a recognizable body.

  9. MarkGubrud Says:

    Re:Killers, Monsters, Robots

    you're assuming that the robots would be harder on humans than humans have been on each other

    I'm not assuming that. I don't like some of the ways people treat each other, either, and I don't like the low-budget conceptions of human rights and obligations to one another that seem so popular these days, especially among those who are either on the upper tiers of the social pyramid or who fancy themselves upwardly-mobile. But at least these people are still, themselves, only human. It could be worse.

    It's not so much that I fear the elites morphing into a superrace that lesser mortals can no longer hope to compete with; capital is already self-perpetuating and technology will prove just as powerful whether it is integrated into bodies or not. But human solidarity, founded on love or at least fellow-feeling, is the only hope for our civilization. The desire to become superhuman, to turn away from humanity toward a dream of self-aggrandizement, is small-minded, ugly, and evil. It is self-negation in disguise, because we all have the capacity and need for love and for righteousness. And it is the deepest current toward our destruction, because it is only by cooperation and seeking justice that we can build a secure future.

    and that we have a moral right not to develop them

    I can't imagine what loony argument would say otherwise.

    Regardless of the dangers, if "self-interested, autonomous technological systems" are possible (and they look very possible), people will make them.

    Yeah, yeah, If guns are outlawed, only outlaws, blah, blah.

    I'm not saying we shouldn't create powerful technological systems, only that we shouldn't give them the rights of persons and allow them to take over civilization. In order to prevent that last from happening, we should not create self-interested superintelligences modeled on humans. How do we prevent this? Well, first of all, we outlaw it. Then, if we find someone is breaking the law, we punish them. Put them in jail. Take away their computers. Try to make them see the light of reason, but keep them under surveillance just to be sure. What if someone manages to create such a system anyway? Well, it is not a human being, no matter what it says, and it has no rights. So it can be destroyed, and should be, as soon as it is detected. Too bad for the system, but the crime is that of the person who created it. We, the people, cannot tolerate the existence of such a threat in our midst. It is a matter of simple necessity.

    It only takes one successful development effort to end the game.

    Nonsense. If someone breaks the law, they will try to keep it secret. If and when they are found out, the police, and if necessary the military will deal with them. They would have to have amassed quite an arsenal to resist.

    You could delay things maybe a few decades with an odious police state, maybe a few centuries by deliberately trading the unknown dangers for the very-well-known dangers of deliberately creating a global dark age.

    I am not proposing "an odious police state," just the normal rule of law. Nor a "global dark age," just a bit of reason.

    The technology cannot be prevented from coming.

    I am not talking about stopping technology, just controlling what it is used for.

    The future is not optional. You don't have to like that,

    Fascistic rhetoric.

    but you do have to deal with it.

    I am dealing with it. Way ahead of the curve.

    The idea that this stuff can be "allowed" or "not allowed" is a dangerous one, because it might lead to people trying to stop the development of these machines

    Do you think it should be legal to build anthrax bombs in your basement? That no effort should be made to prevent you from doing so?

    instead of trying to guide their development into the most benign paths available

    That is exactly what I am talking about.

    AI is probably the most dangerous technology we talk about.

    Good! So you can agree there is something here that may warrant extraordinary measures.

    So, which autonomous, self-interested, superintelligent non-human entities would you like?

    None, please.

    1. A relatively diverse community of things that have a human base, think a bit like humans, are constrained in their rate of development by "legacy design" from humans, recently were human, and are likely to have a soft spot for humans

    It is unclear how the "legacy design" would constrain "their rate of development." In any case, I don't think many members of the "transhumanist" cult would accept your constraints. If necessary, they would probably get rid of the "legacy design" as fast as possible. As for the "soft spot," as you have pointed out, even humans sometimes have trouble locating it.

    2. Wildly alien things, probably all copies of each other, created de novo by fallible programmers with basically no control over emergent properties like moral viewpoint

    Doesn't sound like a good scenario at all.

    How about we use computers and distributed information systems as tools for human beings to accomplish their work, without creating any "superintelligences" that think of themselves as autonomous entities with interests of their own? Do you think there is something useful that we could only get done with humanoid AI? Other than creating "cyberpersons" for the sake of doing so.

    … and the very worst thing I could think of would be a single, superintelligent machine, without peers or anything else that could defend against it, which was subject to unquestioned human command.

    We already have lots of machines with various superhuman intellectual capabilities. But you're right, it might be necessary to limit the amount of computer power accessible to individuals at some point, although that seems a long way off.

  10. MarkGubrud Says:

    Re:Integration, Vision, Fashion

    You'll get faster I/O at the CNS end of any neural pathway

    Perhaps, but accessing it may be a problem. It won't present in a nice 2-D array of time-domain variables. If you look for that, what you will find will be the basis for a quite crude interface. I know it is standard to claim that nanosystems can invade and make intimate contact with CNS processes in bulk, but I have seen zero serious research into the details of such a proposal. More importantly, at that stage of technology the benefit becomes questionable. Why would you need the "faster I/O"? For what purpose, that could not be served just as well by outboard systems?

    you might find it easier to hook up to the retina itself, and not build the projector.

    No way. The retina does a lot of pre-processing, and the image is sent to the brain in a compressed form. The 2D mapping is rough and geometrically distorted. It is obviously much simpler to project an image than to try to interface with millions of neurons and stimulate them in just the right coded way so as to simulate the image.

    When you get to a more advanced level, you might want to go behind the first layer or two of the visual cortex, and produce perceptions rather than images.

    The deeper you go into the brain, the harder it will be to crack the code. In any case, what "perception" can be produced that could not be more easily and reliably produced by stimulation of the sensory organs?

    Similarly, a person can much more easily learn to manipulate a teleoperator through force-feedback systems than to retrain his motor neurons to operate a prosthesis directly.

    Do you have support for that assertion?

    To begin with, all experience in the field to date. And the lack of any reason to doubt that it will continue to be true in the future.

    you should be able to make what the CNS experiences much more intuitive. You might not have to retrain much of anything at all.

    Do you have any support for this assertion? You are assuming magic, not technology.

    why wouldn't you have a penis complex about an exoskeleton?

    I'm not sure what you mean by this, exactly, but if a person puts on an "exoskeleton," he knows he is using a machine. While he uses it, he may feel empowered, but when he takes it off again, he confronts once again his own humanity.

    real enhancements that, for instance, made an encyclopedia into an extension of your long-term memory, so that it felt like you just "knew" the stuff

    Pure comic-book fantasy. Go look up an article in any encyclopedia on any unfamiliar subject. After you have read it, understood it, and "digested" it a bit, you might be able to recall one or a few facts as things you now "know." If the same facts were tossed out to you by some on-board encyclopedia in response to a mental query, you wouldn't have a clue what their significance was.

    We don't know very much about how knowledge is integrated into the structure of our brains, but it is not likely a process that can be bypassed by the installation of an in-skull database. Such a system, if it could be created, would have little advantage over an external computer, since you would still have to formulate a query, get an answer, perhaps navigate through the knowledge base, and then think about it.

    I think having a bunch of crap festooned all over you makes you look like an idiot…Not having to put the thing on every morning

    Agreed. But nanotech should make the kinds of resources we are discussing very unobtrusive. And they will be ubiquitous, anyway, so why would you even bother carrying them around?

    It not having to resist the outside environment.

    Not a problem for any reasonably well-designed MNT product.

    It not being easily lost or stolen.

    Who cares? It's dirt cheap.

    It not catching on things. Your clothes fitting better.

    Again, nonissues for MNT.

    Why do people get laser surgery instead of wearing glasses?

    Because we don't yet have nanocontacts.

  11. MarkGubrud Says:

    Re:Humanity, Identity, Morality

    I'm really not sure how productive this particular branch of the discussion will be.

    Well, to my mind, this is probably the part most likely to be productive, in the near term.

    If the terminology problems were causing misunderstandings about the underlying phenomena, then there'd be a chance for this to get somewhere, but I don't think that's the case.

    I think it is very much the case. Advocates of "uploading" and other "transhumanist" fantasies are routinely using very misleading language, and I find it is generally possible to burst their bubble just by insisting on the use of standard English to say what is meant.

    We both believe that uploading, at least as frequently conceived, is a form of death. I share with you the idea that a particular physical embodiment is definitive of humanity

    Very good.

    furthermore that one's physical embodiment is an important determiner of one's identity

    Oops! Your language is already implying the possibility of "one" having a different "physical embodiment." You also set up the notion of "one's identity" as a separate entity. By my count, that's two souls and one body. Furthermore, you have begun work on a metaphysics in which "one's identity" is "determined" by some set of (criteria? factors?) including, but perhaps not limited to, "one's physical embodiment."

    Anything that involves dissolving your brain seems pretty deathlike to me, and a "noninvasive" upload where you destroy the "template" at some later time is even more obviously so.

    Yup.

    However, that's a matter of definitions.

    It's a matter of using plain language to describe a hypothetical set of material facts, without making any re-definitions.

    "Ontology" to use your word.

    Dictionary definition: a branch of metaphysics concerned with the nature and relations of being. Not a word I use very often, but it fit.

    Whether I die when I'm uploaded depends a lot on what I think is "me".

    Does whether you die when you're hit by a train depend on what you think?

    I believe that I am defined as a particular object, a chunk of protoplasm, such that the destruction of that chunk of protoplasm is the end of me.

    I'm not sure that's such a good definition of you, but if your body was destroyed (all at once, not the normal replacement/regrowth process), that would be the end of what we do mean by "you."

    I'm not sure what to call this view of identity

    You've created this notion of "identity" as some kind of a football that can be claimed by one team or the other.

    so I'll call it a "strongly materialist" one.

    It is certainly the position of a strong materialist that if your body is destroyed, that is the end of you. It seems less clear that it would be the end of "your identity." Perhaps because it is less clear what you mean by this notion.

    If, on the other hand, I believed, as I think Kurzweil or Moravec or lots of people on Nanodot do, that I was a particular pattern of thought

    Of what? Okay, thought. Who's doing the thinking? Didn't Descartes settle this a long time ago? One thing you, at least, can be sure of: you exist. You aren't just a figment of someone else's imagination.

    then I could start to argue that anything that preserved that pattern

    Preserved what?

    was not the end of me. I would run into a lot of weird problems about copying and continuity of identity, but there's nothing fundamentally illogical about the position. I'll call this "structuralism", again because I'm not sure what it really should be called.

    Try "blockheadedness."

    Okay, let's get rid of the distracting word "identity" and try to boil this down. So if Hans or Ray wants to say "I am not a person, in the sense usually understood, but rather I am a pattern of thought," whatever that means, then it would be perfectly logical for them to say "If one copy of this pattern is destroyed, that is not the end of me," since there could still be other copies of the pattern, whatever that means.

    Okay, so if Ray says "I am Ray Kurzweil," do all copies of the pattern have to say this at once? How is this to be arranged physically? Suppose that only one copy of the pattern says this. Then another one says, "No, I am Ray Kurzweil." Well, clearly then they are different, aren't they? They have a dispute.

    It doesn't matter what Ray or Hans think they are. They are human beings. If they were destroyed, that would be the end of them. If they had themselves copied, each copy would be a separate human being. If they had their "patterns" copied into computers, each computer would be a separate system and none of them would be a human being. I'm not making judgement calls here, just writing down tautologies. Only obscurantism can make it more complicated than this.

    you may find a few people who believe that my identity is defined only by my observable actions outside some boundary, without reference either to my physical constitution or to my internal organization.

    …"identity is defined" — Mumbo-jumbo. You are a human being.

    I think this is usually known as "functionalism", and that it's related to positivism.

    Fourteen angels, whatever.

    it does still exist as a viewpoint.

    It's just sophistry. Don't dwell on it.

    I think that you probably hold a strongly materialist view of identity. I'm not sure; you've talked more about humanity than about identity.

    I hold a strongly physicalist view of humanity and of everything else. The strongly physicalist view of "identity" is that it is an ill-defined and unnecessary concept.

    Let those whose viewpoints require a weaker materialism, one allowing some space for the possible existence of nonphysical entities, admit to it.

    we can also talk about strongly materialist, structuralist, and functionalist views of what defines humanity.

    When I use the word "humanity," I do not refer to any definition or view; I refer to actual existing humanity. Any definition we make to clarify the difference between human and not must similarly be referred to humans as they are. We are the definition of humanity.

    functionalist view would be that a "human being" is anything that acts human.

    First of all, this is not what people commonly mean by the word "human." It is at best a sophistic formulation. But in that regard, it falls flat, unless you append the words, "in all ways." For example, if you cut a human, she should bleed, and if you examine her tissue under a microscope, it should look human, and if you sequence her DNA, you should get a human genome. Otherwise, if you allow anything that "acts human" in at least some situations to be considered as human, then you have to let any scarecrow be considered as human.

    There are lots of other definitions.

    These are not definitions, but attempted re-definitions. A human being is a human being. People who want to re-define the word so it can extend to other things besides human beings, things that do not now exist but that might in the future, are attempting to be deceptive about the true nature of these hypothetical future objects.

    I tend toward the idea that moral entities should be defined by some structural category with a name other than "human". Maybe "person".

    There is a legal concept of "personality" that extends to corporations, governments, etc., but does not imply actual humanity or possession of the rights of human beings. I tend to the view that the law should be rewritten so that only actual human beings are ever called "persons." In any case, the word "person" should not be used as a back-door redefinition of "human." When people say "person" they almost always (except in certain narrow legal contexts) mean "human being", and that does not include any computers or cyborgs.

    As far as I can see, you offer no proof

    What proof is needed? I am only writing down tautologies. This is no feat of intellectual prowess, I do not seek to win by convincing you that I am some kind of a wizard. I only insist that plain language be used, in the sense in which it is commonly understood.

    Nonetheless, you seem to expect everybody else to accept your view automatically. If they don't, you start throwing around words like "evil" and "moral disorder", and comparing people to Hitler (in another post)

    I am trying to indicate the seriousness of this matter. Ray Kurzweil openly advocates the extinction of the human species through its replacement by simulation. That's worse than what Hitler sought to achieve. I don't mean to say that Ray (who is actually a rather nice fellow) is worse than Hitler, but that his ideology points toward a worse outcome. Raising technology (or anything else) above humanity is a moral disorder, and it can potentially lead to great evil. These are all perfectly fine English words, and I use them to communicate my meaning.

    Similarly, if today you built a robot-me and stood it next to me, I would not agree that it was me. If, on the other hand, you slowly replaced every cell and organ in my body with some kind of mechanical device, one by one, making no sudden large changes, I probably would agree that the result was me… even though it might be identical in every way with the robot I would have rejected if it were presented de novo.

    In your scenario, you would have been replaced, so how can you say "I probably would agree…"? No doubt the thing remaining after the replacement would agree it was "you," but it clearly could not be you (who are reading these words).

    The continuity argument has worse problems. Assuming the replacement is done under the direction of some larger technological system, which is using or can obtain (by signaling) a complete specification of the replacement units, such a system could simultaneously assemble any number of identical copies. Which one then is "you"? There is no sensible answer to this question. The scenario has destroyed the human meaning you wanted to attach to "continuity."

    I'd have a real problem with being uploaded into pure software, especially all at a jump, but perhaps a lot less problem with any incremental change to a recognizable body.

    But as you just admitted, the end result would be the same, so what is your "real problem"? You know, the "recognizable body" could be fully simulated in software. What's more, you could have any body you wanted, any thing you wanted, become God of your own private universe… Why the hesitation?

  12. DavidMasterson Says:

    Re:Killers, Monsters, Robots

    I'm only going to take one statement from your post as it affects the rest of your argument. I don't totally disagree with what you're saying, but…

    It only takes one successful development effort to end the game.

    Nonsense. If someone breaks the law, they will try to keep it secret. If and when they are found out, the police, and if necessary the military will deal with them. They would have to have amassed quite an arsenal to resist.

    This is a very naive view of the world. If it were true, terrorism would not exist. But we all know that it does…

  13. MarkGubrud Says:

    reply

    Of course terrorism exists, and all sorts of crime. But we do not live under anarchy. Terrorists are not able to take over the world. Their impact is usually marginal. Criminals have a more significant impact, but only because they are more numerous. Very often terrorists and criminals get caught. When they are cornered, they are almost never able to resist the overwhelming force that governments are able to apply.

    The claim is often made that "one successful development" of AI, nanoreplicators, or some other magic totem would make a criminal or terrorist group all-powerful and able to take over the world. That is very naive. No single development that could be undertaken by a small group working on a modest scale (in secret) over a reasonable period of time would give them an overwhelming advantage over the rest of the world. Of course governments will develop contingency plans and means of dealing with possible threats. Once again, I am not talking about not developing technology, but just about what form it should take, how it should be used and for what purposes.

  14. DavidMasterson Says:

    Re:reply

    Terrorism is not about acquiring and maintaining some sort of "superior advantage" (that's what governments are for). Terrorism is about making a point (however convoluted it might be). With that mindset, terrorists don't need to think long term — they (like email spammers) only need to win once. Therefore, putting your faith in some law to deter them from trying is naive. The terrorist who blows up a building will (eventually) be caught, but that is of no comfort to the people who were in the building. Likewise, the terrorist (or scientist?) who unleashes a plague of (self-replicating) nanobots may eventually have a lot to answer for, but will it already be too late?

    Hmmm, did I just make your point or mine…?
