<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
		>
<channel>
	<title>Comments on: Building Safe AI</title>
	<atom:link href="http://www.foresight.org/nanodot/?feed=rss2&#038;p=3369" rel="self" type="application/rss+xml" />
	<link>http://www.foresight.org/nanodot/?p=3369</link>
	<description>examining transformative technology</description>
	<lastBuildDate>Wed, 03 Apr 2013 18:23:47 +0000</lastBuildDate>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=3.0.4</generator>
	<item>
		<title>By: Tommy</title>
		<link>http://www.foresight.org/nanodot/?p=3369#comment-863537</link>
		<dc:creator>Tommy</dc:creator>
		<pubDate>Fri, 02 Oct 2009 14:56:54 +0000</pubDate>
		<guid isPermaLink="false">http://www.foresight.org/nanodot/?p=3369#comment-863537</guid>
		<description>I think the AI will just happen at some point and we will NOT be ready for it. Asimov warned us, and the movies have shown us a few ways of what can happen when AI comes to life; while the movies do exaggerate things, they have some valid points.

The human body is incredibly fragile, and no doubt accidents or even something more serious will happen. Excellent topic you brought up, by the way. ;)</description>
		<content:encoded><![CDATA[<p>I think the AI will just happen at some point and we will NOT be ready for it. Asimov warned us, and the movies have shown us a few ways of what can happen when AI comes to life; while the movies do exaggerate things, they have some valid points.</p>
<p>The human body is incredibly fragile, and no doubt accidents or even something more serious will happen. Excellent topic you brought up, by the way. <img src='http://www.foresight.org/nanodot/wp-includes/images/smilies/icon_wink.gif' alt=';)' class='wp-smiley' /> </p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Common Sense Guy</title>
		<link>http://www.foresight.org/nanodot/?p=3369#comment-860197</link>
		<dc:creator>Common Sense Guy</dc:creator>
		<pubDate>Thu, 24 Sep 2009 09:50:59 +0000</pubDate>
		<guid isPermaLink="false">http://www.foresight.org/nanodot/?p=3369#comment-860197</guid>
		<description>@JoSH:  You&#039;re equivocating.  You were not talking about &quot;good&quot; and &quot;evil&quot;; you were talking about imitating values.

&quot;Every respect but one, that is. It should be more cooperative than I am, since it’s me that it’s cooperating with.  It should be more even-tempered, more foresightful, more diplomatic, less forgetful, more consistent, and perhaps even a tiny bit more trustworthy. Just a little bit, and in a way that’s just the way I would when I’m at my best.&quot;

A bad person&#039;s robot could be more cooperative in creating bad things.  It could be more even-tempered, more foresightful, et cetera, towards bad ENDS.  

Besides... Good and Bad are relative value judgments.  Good and Evil are maybe not, but there is little consensus on what these terms involve.  Are you suggesting that these robots will be subject to a universalized conception of Good &amp; Evil, in order to correct the moral failings of their owners/creators?  And if so, then who is going to instill that universalized conception, which even ethical philosophers and theologians cannot agree on, into them?  Maybe robots can learn it for themselves, but then... it may be very different from how a human society conceives of it.

You say the robot society will be slightly less evil than ours... well, what is evil?  Who is defining it?  Who is ensuring that these robots imitate &quot;Good&quot; values and not &quot;Evil&quot; ones?

You are a very intelligent scientist... but these questions need to be answered by ethical philosophers, theologians, and sociologists: people who have a grasp on the values of our society and the possible higher-order values of the universe.

Where does this broader consultation factor into your view of AI and robots?  I am very curious.  I would like to see you explore these very important questions in a future post.</description>
		<content:encoded><![CDATA[<p>@JoSH:  You&#8217;re equivocating.  You were not talking about &#8220;good&#8221; and &#8220;evil&#8221;; you were talking about imitating values.</p>
<p>&#8220;Every respect but one, that is. It should be more cooperative than I am, since it’s me that it’s cooperating with.  It should be more even-tempered, more foresightful, more diplomatic, less forgetful, more consistent, and perhaps even a tiny bit more trustworthy. Just a little bit, and in a way that’s just the way I would when I’m at my best.&#8221;</p>
<p>A bad person&#8217;s robot could be more cooperative in creating bad things.  It could be more even-tempered, more foresightful, et cetera, towards bad ENDS.  </p>
<p>Besides&#8230; Good and Bad are relative value judgments.  Good and Evil are maybe not, but there is little consensus on what these terms involve.  Are you suggesting that these robots will be subject to a universalized conception of Good &amp; Evil, in order to correct the moral failings of their owners/creators?  And if so, then who is going to instill that universalized conception, which even ethical philosophers and theologians cannot agree on, into them?  Maybe robots can learn it for themselves, but then&#8230; it may be very different from how a human society conceives of it.</p>
<p>You say the robot society will be slightly less evil than ours&#8230; well, what is evil?  Who is defining it?  Who is ensuring that these robots imitate &#8220;Good&#8221; values and not &#8220;Evil&#8221; ones?</p>
<p>You are a very intelligent scientist&#8230; but these questions need to be answered by ethical philosophers, theologians, and sociologists: people who have a grasp on the values of our society and the possible higher-order values of the universe.</p>
<p>Where does this broader consultation factor into your view of AI and robots?  I am very curious.  I would like to see you explore these very important questions in a future post.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Tim Tyler</title>
		<link>http://www.foresight.org/nanodot/?p=3369#comment-860139</link>
		<dc:creator>Tim Tyler</dc:creator>
		<pubDate>Thu, 24 Sep 2009 06:38:58 +0000</pubDate>
		<guid isPermaLink="false">http://www.foresight.org/nanodot/?p=3369#comment-860139</guid>
		<description>Re: bad people.

Robocorp will not allow bad robots to be constructed.  Think of the headline: &quot;Robocorp robot kills five&quot;.  That sort of marketing sucks.  So: they will build in a &quot;Gandhi&quot; module - as well as probably a &quot;Thou shalt not harm Robocorp&quot; module.</description>
		<content:encoded><![CDATA[<p>Re: bad people.</p>
<p>Robocorp will not allow bad robots to be constructed.  Think of the headline: &#8220;Robocorp robot kills five&#8221;.  That sort of marketing sucks.  So: they will build in a &#8220;Gandhi&#8221; module &#8211; as well as probably a &#8220;Thou shalt not harm Robocorp&#8221; module.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: J. Storrs Hall</title>
		<link>http://www.foresight.org/nanodot/?p=3369#comment-860005</link>
		<dc:creator>J. Storrs Hall</dc:creator>
		<pubDate>Wed, 23 Sep 2009 22:24:18 +0000</pubDate>
		<guid isPermaLink="false">http://www.foresight.org/nanodot/?p=3369#comment-860005</guid>
		<description>@ Tim: Yes.  Properly imitative AIs won&#039;t happen by accident, or in the normal run of events; we must build them that way on purpose.
@CSG, David: The total ratio of evil done by bad people&#039;s robots to good done by good people&#039;s robots would be the same as today with just the people, only with bigger numbers on each side of the ratio.
But since all the robots are a little nicer, it would be better: imagine a world where 50% of people are good and 50% evil, so they just balance out. Now imagine that each person&#039;s robot was 90% like its owner and 10% good. The robot society would be 45% evil and 55% good, tipping the overall balance to the good side.</description>
		<content:encoded><![CDATA[<p>@ Tim: Yes.  Properly imitative AIs won&#8217;t happen by accident, or in the normal run of events; we must build them that way on purpose.<br />
@CSG, David: The total ratio of evil done by bad people&#8217;s robots to good done by good people&#8217;s robots would be the same as today with just the people, only with bigger numbers on each side of the ratio.<br />
But since all the robots are a little nicer, it would be better: imagine a world where 50% of people are good and 50% evil, so they just balance out. Now imagine that each person&#8217;s robot was 90% like its owner and 10% good. The robot society would be 45% evil and 55% good, tipping the overall balance to the good side.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Tim Tyler</title>
		<link>http://www.foresight.org/nanodot/?p=3369#comment-860004</link>
		<dc:creator>Tim Tyler</dc:creator>
		<pubDate>Wed, 23 Sep 2009 21:49:38 +0000</pubDate>
		<guid isPermaLink="false">http://www.foresight.org/nanodot/?p=3369#comment-860004</guid>
		<description>IMO, the first advanced machine intelligences will probably arise on large servers - with their sensors and actuators opening onto the internet. Search oracles, stock-market players - and the like.

They will probably not be much like human beings - since at that stage we will still be building machines to compensate for our own cognitive weaknesses.</description>
		<content:encoded><![CDATA[<p>IMO, the first advanced machine intelligences will probably arise on large servers &#8211; with their sensors and actuators opening onto the internet. Search oracles, stock-market players &#8211; and the like.</p>
<p>They will probably not be much like human beings &#8211; since at that stage we will still be building machines to compensate for our own cognitive weaknesses.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Common Sense Guy</title>
		<link>http://www.foresight.org/nanodot/?p=3369#comment-860002</link>
		<dc:creator>Common Sense Guy</dc:creator>
		<pubDate>Wed, 23 Sep 2009 18:06:17 +0000</pubDate>
		<guid isPermaLink="false">http://www.foresight.org/nanodot/?p=3369#comment-860002</guid>
		<description>What happens if someone has the following values: it is okay to kill someone out of religious conviction, or it is okay to kill someone of a certain ethnic group?

What if this person builds their robot and instills those same values in it, and their robot goes and kills everyone of other religions and of other ethnic groups?

What if someone is a materialist nihilist who hates mankind?  Is it okay then if they build their robot to emulate their values, so that their robot has the value ingrained in it that mankind is bad?

I think you need to give this more thought.

-A concerned layperson.</description>
		<content:encoded><![CDATA[<p>What happens if someone has the following values: it is okay to kill someone out of religious conviction, or it is okay to kill someone of a certain ethnic group?</p>
<p>What if this person builds their robot and instills those same values in it, and their robot goes and kills everyone of other religions and of other ethnic groups?</p>
<p>What if someone is a materialist nihilist who hates mankind?  Is it okay then if they build their robot to emulate their values, so that their robot has the value ingrained in it that mankind is bad?</p>
<p>I think you need to give this more thought.</p>
<p>-A concerned layperson.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: David</title>
		<link>http://www.foresight.org/nanodot/?p=3369#comment-860000</link>
		<dc:creator>David</dc:creator>
		<pubDate>Wed, 23 Sep 2009 17:05:29 +0000</pubDate>
		<guid isPermaLink="false">http://www.foresight.org/nanodot/?p=3369#comment-860000</guid>
		<description>Yes, but what about &quot;bad&quot; people? Surely the problem comes when robots are raised to imitate bad people as defined by society?</description>
		<content:encoded><![CDATA[<p>Yes, but what about &#8220;bad&#8221; people? Surely the problem comes when robots are raised to imitate bad people as defined by society?</p>
]]></content:encoded>
	</item>
</channel>
</rss>