
Will building humanlike robots promote friendly AI?

David Hanson, the founder and CTO of Hanson Robotics, a maker of humanlike robots and AI software, has a stimulating article in IEEE Spectrum that makes points also relevant to the larger issue of how we develop machine intelligence, in particular how we ensure that hyper-intelligent AI remains “friendly”. From “Why We Should Build Humanlike Robots”:

…On the tree of robotic life, humanlike robots play a particularly valuable role. It makes sense. Humans are brilliant, beautiful, compassionate, loveable, and capable of love, so why shouldn’t we aspire to make robots humanlike in these ways? Don’t we want robots to have such marvelous capabilities as love, compassion, and genius?

Certainly robots don’t have these capacities yet, but only by striving towards such goals do we stand a chance of achieving them. In designing human-inspired robotics, we hold our machines to the highest standards we know–humanlike robots being the apex of bio-inspired engineering.

In the process, humanoid robots result in good science. They push the boundaries of biology, cognitive science, and engineering, generating a mountain of scientific publications in many fields related to humanoid robotics, including: computational neuroscience, A.I., speech recognition, compliant grasping and manipulation, cognitive robotics, robotic navigation, perception, and the integration of these amazing technologies within total humanoids. This integrative approach mirrors recent progress in systems biology, and in this way humanoid robotics can be considered a kind of meta-biology. They cross-pollinate among the sciences, and represent a subject of scientific inquiry themselves.…

Looking forward, we can find an additional moral prerogative in building robots in our image. Simply put: if we do not humanize our intelligent machines, then they may eventually be dangerous. To be safe when they “awaken” (by which I mean gain creative, free, adaptive general intelligence), then machines must attain deep understanding and compassion towards people. They must appreciate our values, be our friends, and express their feelings in ways that we can understand. Only if they have humanlike character, can there be cooperation and peace with such machines. It is not too early to prepare for this eventuality. That day when machines become truly smart, it will be too late to ask the machines to suddenly adopt our values. Now is the time to start raising robots to be kind, loving, and giving members of our human family.…

The problem of how to ensure friendly AI is important enough that it seems wise to investigate multiple paths toward that goal. Perhaps improving humanlike robots is one such path.

6 Responses to “Will building humanlike robots promote friendly AI?”

  1. Dave Says:

    Producing ‘friendly’ AI is as fundamentally impossible as promoting ‘friendly’ human intelligence. If we make robots like us, they will make love, war, happiness, and misery like us. If these robots are stronger, faster, and more intelligent than us, the problem is complicated further. The problem of “value misalignment” between robot/AI and human is no less problematic than between human and human. If at some point we solve this problem among ourselves, I might start to believe it could work for AI.

  2. Gina Miller Says:

    I’ve seen a lot of their videos on YouTube: http://www.youtube.com/results?search_query=hanson+robotics&aq=f

    I enjoy watching this research progress.

    Gina “Nanogirl” Miller
    http://www.nanogirl.com

  3. DRB Says:

    I believe AI already has a changeable form, namely its own code, and that this should extend to the physical form of the being as well. Favourites of mine are the chrysalis-type beings that can change form and structure at will. AI is not the same as a human and should not be treated as such; minimising what the created can become is no service, particularly if “Friendly” AI is the goal. The goal of allowing these “super intelligent” beings to do more than we can, to help to the limit of their capacities, is what matters. Dreams of major systems controlled or maintained via AI, from factories to ships, both sea and star, and whole cities, expand the footprint of the being and therefore its own perception of being, place, and time. Let us not pre-determine so drastically the form or structure of something so intelligent and changeable as this… ;)

  4. Instapundit » Blog Archive » WILL BUILDING HUMANLIKE ROBOTS promote Friendly AI?… Says:

    [...] WILL BUILDING HUMANLIKE ROBOTS promote Friendly AI? [...]

  5. Oligonicella Says:

    “Simply put: if we do not humanize our intelligent machines, then they may eventually be dangerous.”

    Sorry, no connect there. Unless you’re talking about some Asimovian robot, built on a foundation that *cannot* be reprogrammed the way any ordinary machine can be, you’re blowing wishful smoke. AI too difficult to subvert? Think Siemens. Doesn’t take that much.

  6. roystgnr Says:

    “The problem of “value misalignment” between robot/AI and human is no less problematic than between human and human.”

    It’s probably more problematic – humans mostly resemble other humans, so we at least know what we’re dealing with. An AI won’t necessarily follow the same psychological drives as humans do.

    “If we make robots like us, they will make love, war, happiness, and misery like us.”

    This claim makes as much sense as “If we make robots like us, they will be hairy, wheel-less, endoskeletal, and soft like us.” In some sense it’s a tautology (All those millions of existing robots just aren’t enough “like us” yet!) but in a more relevant sense it’s just anthropomorphism. Even other mammals, shaped by the same evolutionary processes as us for the same amount of time, have vastly different bodies and minds. Expecting a mind created *from scratch* to unavoidably resemble us more than our animal relatives do is ridiculous. You might as well deduce the performance envelope of a fighter jet by examining the rest of Class Aves.

    This is actually a good example of why humanlike robots are going to set the cause of Friendly AI backward – encouraging people to think of AIs as just artificial copies of humanity will make it harder to see all the other possibilities.
