<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
		>
<channel>
	<title>Comments on: Will building humanlike robots promote friendly AI&#063;</title>
	<atom:link href="http://www.foresight.org/nanodot/?feed=rss2&#038;p=4495" rel="self" type="application/rss+xml" />
	<link>http://www.foresight.org/nanodot/?p=4495</link>
	<description>examining transformative technology</description>
	<lastBuildDate>Wed, 03 Apr 2013 18:23:47 +0000</lastBuildDate>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=3.0.4</generator>
	<item>
		<title>By: roystgnr</title>
		<link>http://www.foresight.org/nanodot/?p=4495#comment-1011436</link>
		<dc:creator>roystgnr</dc:creator>
		<pubDate>Sun, 17 Apr 2011 16:07:58 +0000</pubDate>
		<guid isPermaLink="false">http://www.foresight.org/nanodot/?p=4495#comment-1011436</guid>
		<description>&lt;blockquote&gt;The problem of “value misalignment” between robot/AI and human is no less problematic than between human and human.&lt;/blockquote&gt;

It&#039;s probably &lt;b&gt;more&lt;/b&gt; problematic - humans all mostly resemble other humans, so we at least know what we&#039;re dealing with.  An AI won&#039;t necessarily follow the same psychological drives as humans do.

&lt;blockquote&gt;If we make robots like us, they will make love, war, happiness, and misery like us.&lt;/blockquote&gt;

This claim makes as much sense as &quot;If we make robots like us, they will be hairy, wheel-less, endoskeletal, and soft like us.&quot;  In some sense it&#039;s a tautology (All those millions of existing robots just aren&#039;t enough &quot;like us&quot; yet!) but in a more relevant sense it&#039;s just anthropomorphism.  Even other mammals, shaped by the same evolutionary processes as us for the same amount of time, have vastly different bodies and minds.  Expecting a mind created *from scratch* to unavoidably resemble us more than our animal relatives do is ridiculous.  You might as well deduce the performance envelope of a fighter jet by examining the rest of Class Aves.

This is actually a good example of why humanlike robots are going to set the cause of Friendly AI &lt;b&gt;backward&lt;/b&gt; - encouraging people to think of AIs as just artificial copies of humanity will make it harder to see all the other possibilities.</description>
		<content:encoded><![CDATA[<blockquote><p>The problem of “value misalignment” between robot/AI and human is no less problematic than between human and human.</p></blockquote>
<p>It&#8217;s probably <b>more</b> problematic &#8211; humans all mostly resemble other humans, so we at least know what we&#8217;re dealing with.  An AI won&#8217;t necessarily follow the same psychological drives as humans do.</p>
<blockquote><p>If we make robots like us, they will make love, war, happiness, and misery like us.</p></blockquote>
<p>This claim makes as much sense as &#8220;If we make robots like us, they will be hairy, wheel-less, endoskeletal, and soft like us.&#8221;  In some sense it&#8217;s a tautology (All those millions of existing robots just aren&#8217;t enough &#8220;like us&#8221; yet!) but in a more relevant sense it&#8217;s just anthropomorphism.  Even other mammals, shaped by the same evolutionary processes as us for the same amount of time, have vastly different bodies and minds.  Expecting a mind created *from scratch* to unavoidably resemble us more than our animal relatives do is ridiculous.  You might as well deduce the performance envelope of a fighter jet by examining the rest of Class Aves.</p>
<p>This is actually a good example of why humanlike robots are going to set the cause of Friendly AI <b>backward</b> &#8211; encouraging people to think of AIs as just artificial copies of humanity will make it harder to see all the other possibilities.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Oligonicella</title>
		<link>http://www.foresight.org/nanodot/?p=4495#comment-1011427</link>
		<dc:creator>Oligonicella</dc:creator>
		<pubDate>Sun, 17 Apr 2011 15:38:09 +0000</pubDate>
		<guid isPermaLink="false">http://www.foresight.org/nanodot/?p=4495#comment-1011427</guid>
		<description>&quot;Simply put: if we do not humanize our intelligent machines, then they may eventually be dangerous.&quot;

Sorry, no connect there.  Unless you&#039;re talking about some Asimovian robot, which has as its foundation a machine that *cannot* be reprogrammed like the machine it is, you&#039;re blowing wishful smoke.  AI too difficult to subvert?  Think Siemens.  Doesn&#039;t take that much.</description>
		<content:encoded><![CDATA[<p>&#8220;Simply put: if we do not humanize our intelligent machines, then they may eventually be dangerous.&#8221;</p>
<p>Sorry, no connect there.  Unless you&#8217;re talking about some Asimovian robot, which has as its foundation a machine that *cannot* be reprogrammed like the machine it is, you&#8217;re blowing wishful smoke.  AI too difficult to subvert?  Think Siemens.  Doesn&#8217;t take that much.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Instapundit &#187; Blog Archive &#187; WILL BUILDING HUMANLIKE ROBOTS promote Friendly AI?&#8230;</title>
		<link>http://www.foresight.org/nanodot/?p=4495#comment-1011410</link>
		<dc:creator>Instapundit &#187; Blog Archive &#187; WILL BUILDING HUMANLIKE ROBOTS promote Friendly AI?&#8230;</dc:creator>
		<pubDate>Sun, 17 Apr 2011 14:31:03 +0000</pubDate>
		<guid isPermaLink="false">http://www.foresight.org/nanodot/?p=4495#comment-1011410</guid>
		<description>[...] WILL BUILDING HUMANLIKE ROBOTS promote Friendly AI? [...]</description>
		<content:encoded><![CDATA[<p>[...] WILL BUILDING HUMANLIKE ROBOTS promote Friendly AI? [...]</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: DRB</title>
		<link>http://www.foresight.org/nanodot/?p=4495#comment-1011391</link>
		<dc:creator>DRB</dc:creator>
		<pubDate>Sun, 17 Apr 2011 11:54:54 +0000</pubDate>
		<guid isPermaLink="false">http://www.foresight.org/nanodot/?p=4495#comment-1011391</guid>
		<description>I believe AI already has a changeable form, that being its own code, and this should also include the physical form of the being. Favourites of mine are the chrysalis-type beings which have the ability to change form and structure at will. AI is not the same as being human and should not be treated as such; minimising the creation of the created is of no service, particularly if &quot;Friendly&quot; AI is the goal. The goal of allowing these &quot;super intelligent&quot; beings to do greater than us, to help in their capacities, is of import. Dreams of major systems controlled or maintained via AI, from factories to ships, both sea and star, and cities, expand the footprint of the being and therefore its own perception of being, place, and time. Let us not so drastically pre-determine the form or structure of something so intelligent and changeable as this... ;)</description>
		<content:encoded><![CDATA[<p>I believe AI already has a changeable form, that being its own code, and this should also include the physical form of the being. Favourites of mine are the chrysalis-type beings which have the ability to change form and structure at will. AI is not the same as being human and should not be treated as such; minimising the creation of the created is of no service, particularly if &#8220;Friendly&#8221; AI is the goal. The goal of allowing these &#8220;super intelligent&#8221; beings to do greater than us, to help in their capacities, is of import. Dreams of major systems controlled or maintained via AI, from factories to ships, both sea and star, and cities, expand the footprint of the being and therefore its own perception of being, place, and time. Let us not so drastically pre-determine the form or structure of something so intelligent and changeable as this... <img src='http://www.foresight.org/nanodot/wp-includes/images/smilies/icon_wink.gif' alt=';)' class='wp-smiley' /> </p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Gina Miller</title>
		<link>http://www.foresight.org/nanodot/?p=4495#comment-1010907</link>
		<dc:creator>Gina Miller</dc:creator>
		<pubDate>Fri, 15 Apr 2011 20:51:50 +0000</pubDate>
		<guid isPermaLink="false">http://www.foresight.org/nanodot/?p=4495#comment-1010907</guid>
		<description>I&#039;ve seen a lot of their videos on YouTube: http://www.youtube.com/results?search_query=hanson+robotics&amp;aq=f 

I enjoy watching this research progress. 

Gina &quot;Nanogirl&quot; Miller
www.nanogirl.com</description>
		<content:encoded><![CDATA[<p>I&#8217;ve seen a lot of their videos on YouTube: <a href="http://www.youtube.com/results?search_query=hanson+robotics&#038;aq=f" rel="nofollow">http://www.youtube.com/results?search_query=hanson+robotics&#038;aq=f</a> </p>
<p>I enjoy watching this research progress. </p>
<p>Gina &#8220;Nanogirl&#8221; Miller<br />
<a href="http://www.nanogirl.com" rel="nofollow">http://www.nanogirl.com</a></p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Dave</title>
		<link>http://www.foresight.org/nanodot/?p=4495#comment-1010873</link>
		<dc:creator>Dave</dc:creator>
		<pubDate>Fri, 15 Apr 2011 16:58:12 +0000</pubDate>
		<guid isPermaLink="false">http://www.foresight.org/nanodot/?p=4495#comment-1010873</guid>
		<description>Producing &#039;friendly&#039; AI is as fundamentally impossible as promoting &#039;friendly&#039; human intelligence.  If we make robots like us, they will make love, war, happiness, and misery like us.  If these robots are stronger, faster, and more intelligent than us, the problem is complicated further.  The problem of &quot;value misalignment&quot; between robot/AI and human is no less problematic than between human and human.  If at some point we solve this problem among ourselves, I might start to believe it might work for AI.</description>
		<content:encoded><![CDATA[<p>Producing &#8216;friendly&#8217; AI is as fundamentally impossible as promoting &#8216;friendly&#8217; human intelligence.  If we make robots like us, they will make love, war, happiness, and misery like us.  If these robots are stronger, faster, and more intelligent than us, the problem is complicated further.  The problem of &#8220;value misalignment&#8221; between robot/AI and human is no less problematic than between human and human.  If at some point we solve this problem among ourselves, I might start to believe it might work for AI.</p>
]]></content:encoded>
	</item>
</channel>
</rss>