<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
		>
<channel>
	<title>Comments on: Update to Friendly AI theory</title>
	<atom:link href="http://www.foresight.org/nanodot/?feed=rss2&#038;p=1557" rel="self" type="application/rss+xml" />
	<link>http://www.foresight.org/nanodot/?p=1557</link>
	<description>examining transformative technology</description>
	<lastBuildDate>Wed, 03 Apr 2013 18:23:47 +0000</lastBuildDate>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=3.0.4</generator>
	<item>
		<title>By: QuantumG</title>
		<link>http://www.foresight.org/nanodot/?p=1557#comment-4356</link>
		<dc:creator>QuantumG</dc:creator>
		<pubDate>Wed, 16 Jun 2004 00:18:58 +0000</pubDate>
		<guid isPermaLink="false">http://www.foresight.org/nanodot/?p=1557#comment-4356</guid>
		<description>&lt;p&gt;&lt;strong&gt;Evil Robot Army&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;You talk of tool-AI vs mind-AI and say that mind-AI can actually be safer than tool-AI. Tools are neither good nor evil; they can be used for either, but a mind can be constructed with a conscience. People are constructed with a conscience, most of us anyway, yet we&#039;re capable of unspeakable evil. Perhaps that&#039;s human nature, so let&#039;s ignore that. You talk of an AI recognising that information it has received came from humans and therefore it will listen to arguments from humans. Presumably this is intended to make an AI &quot;listen to reason&quot; and prevent runaway harmful acts of stupidity, much like the 15 or so episodes of classic Star Trek where Kirk deals with a robot set on destroying the Enterprise. But what if those whispering into the ear of an AI are not acting in the best interests of humanity? An AI that finds religion could be the most unfriendly of them all.&lt;/p&gt;

</description>
		<content:encoded><![CDATA[<p><strong>Evil Robot Army</strong></p>
<p>You talk of tool-AI vs mind-AI and say that mind-AI can actually be safer than tool-AI. Tools are neither good nor evil; they can be used for either, but a mind can be constructed with a conscience. People are constructed with a conscience, most of us anyway, yet we&#39;re capable of unspeakable evil. Perhaps that&#39;s human nature, so let&#39;s ignore that. You talk of an AI recognising that information it has received came from humans and therefore it will listen to arguments from humans. Presumably this is intended to make an AI &quot;listen to reason&quot; and prevent runaway harmful acts of stupidity, much like the 15 or so episodes of classic Star Trek where Kirk deals with a robot set on destroying the Enterprise. But what if those whispering into the ear of an AI are not acting in the best interests of humanity? An AI that finds religion could be the most unfriendly of them all.</p>
]]></content:encoded>
	</item>
</channel>
</rss>