<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
		>
<channel>
	<title>Comments on: Critique of Josh Hall&#8217;s &#8216;Ethics for Machines&#8217;</title>
	<atom:link href="http://www.foresight.org/nanodot/?feed=rss2&#038;p=200" rel="self" type="application/rss+xml" />
	<link>http://www.foresight.org/nanodot/?p=200</link>
	<description>examining transformative technology</description>
	<lastBuildDate>Wed, 03 Apr 2013 18:23:47 +0000</lastBuildDate>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=3.0.4</generator>
	<item>
		<title>By: PatGratton</title>
		<link>http://www.foresight.org/nanodot/?p=200#comment-380</link>
		<dc:creator>PatGratton</dc:creator>
		<pubDate>Thu, 07 Sep 2000 23:55:30 +0000</pubDate>
		<guid isPermaLink="false">http://www.foresight.org/nanodot/?p=200#comment-380</guid>
		<description>&lt;p&gt;&lt;strong&gt;Evolutionary Analysis&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Peter makes a number of good points about Josh&#039;s paper, particularly those in regard to clarity and apparent internal contradictions. I would like to see these addressed.&lt;/p&gt;
&lt;p&gt;I have additional comments/criticisms which are fairly orthogonal to Peter&#039;s points. These comments can be found &lt;a href=&quot;http://www.grist.org/articles/00.09.04_Machine_Ethics.htm&quot;&gt;here&lt;/a&gt;, with the major points being:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;This isn&#039;t an abstract debate! AI is incredibly dangerous - papers like this ought to outline the severity of the danger.&lt;/li&gt;
&lt;li&gt;An ethical approach to this topic should include personal as well as social ethics.&lt;/li&gt;
&lt;li&gt;&lt;em&gt;There is no progress in ethics!&lt;/em&gt; (What would you judge it by?)&lt;/li&gt;
&lt;li&gt;Our current understanding of the effect of evolution on behavior clearly indicates the opposite of what Josh suggests - specifically, that AIs will very likely evolve into ruthlessly selfish intelligences.&lt;/li&gt;
&lt;/ul&gt;

</description>
		<content:encoded><![CDATA[<p><strong>Evolutionary Analysis</strong></p>
<p>Peter makes a number of good points about Josh&#39;s paper, particularly those in regard to clarity and apparent internal contradictions. I would like to see these addressed.</p>
<p>I have additional comments/criticisms which are fairly orthogonal to Peter&#39;s points. These comments can be found <a href="http://www.grist.org/articles/00.09.04_Machine_Ethics.htm">here</a>, with the major points being:</p>
<ul>
<li>This isn&#39;t an abstract debate! AI is incredibly dangerous &#8211; papers like this ought to outline the severity of the danger.</li>
<li>An ethical approach to this topic should include personal as well as social ethics.</li>
<li><em>There is no progress in ethics!</em> (What would you judge it by?)</li>
<li>Our current understanding of the effect of evolution on behavior clearly indicates the opposite of what Josh suggests &#8211; specifically, that AIs will very likely evolve into ruthlessly selfish intelligences.</li>
</ul>
]]></content:encoded>
	</item>
	<item>
		<title>By: PeterVoss</title>
		<link>http://www.foresight.org/nanodot/?p=200#comment-379</link>
		<dc:creator>PeterVoss</dc:creator>
		<pubDate>Fri, 01 Sep 2000 15:24:19 +0000</pubDate>
		<guid isPermaLink="false">http://www.foresight.org/nanodot/?p=200#comment-379</guid>
		<description>&lt;p&gt;&lt;strong&gt;Re:A bit harsh&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;My purpose was to alert futurists to the dangers of certain common approaches and errors in ethics - particularly, using rationality and science to &lt;strong&gt;describe&lt;/strong&gt; our behavior, but not to develop &lt;strong&gt;prescriptive&lt;/strong&gt; ethics. I hope that Adam Burke will get a chance to read my whole article. More generally, I would be thrilled to see this important subject of transhuman ethics receive increased rational (scientific) attention.&lt;/p&gt;

</description>
		<content:encoded><![CDATA[<p><strong>Re: A bit harsh</strong></p>
<p>My purpose was to alert futurists to the dangers of certain common approaches and errors in ethics &#8211; particularly, using rationality and science to <strong>describe</strong> our behavior, but not to develop <strong>prescriptive</strong> ethics. I hope that Adam Burke will get a chance to read my whole article. More generally, I would be thrilled to see this important subject of transhuman ethics receive increased rational (scientific) attention.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Adam Burke</title>
		<link>http://www.foresight.org/nanodot/?p=200#comment-378</link>
		<dc:creator>Adam Burke</dc:creator>
		<pubDate>Fri, 01 Sep 2000 06:15:42 +0000</pubDate>
		<guid isPermaLink="false">http://www.foresight.org/nanodot/?p=200#comment-378</guid>
		<description>&lt;p&gt;&lt;strong&gt;A bit harsh&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Caveat: I have not read the response in full; apart from the summary, I have not read the Appendix. Although I realise that the major charge laid against &quot;Ethics for machines&quot; is lack of rigour, I think a few comments are worthwhile.&lt;/p&gt;
&lt;p&gt;I think the critique is a bit harsh. &quot;Ethics for machines&quot; does rely on intuitive ethical principles, but as I recall it doesn&#039;t claim they are the pinnacle of ethical achievement. Rather, it points out an evolutionary reason for the development of ethics, and suggests that using the same mechanism to develop ethics for machines would be dangerously slow. It also carefully points out that, evolutionarily, if self-interest could be separated from ethics, that would provide a competitive advantage. I think it&#039;s implicit (or maybe explicit) in the essay, and in currently common &quot;intuitive ethical systems&quot;, that if such a separation were effected it would be a bad thing, ethics-wise.&lt;/p&gt;
&lt;p&gt;As to improving ethical principles by reason and scientific principles, the original essay seemed to me to support such a position, with the author discussing creating super-ethical machines that could make their ethical discoveries known to the other conscious beings about the place, such as humans and post-humans. I also think the description of ethics as a science is too strong: ethics is still firmly a philosophy, and though systematic arguments can be applied, empirical observation is not really possible. The discussion must happen on the level of argument, with arguments being systematically tested out. Good counter-examples to my assertion, showing successful ethical experiments, would be appreciated.&lt;/p&gt;
&lt;p&gt;The technique of assuming the worst case allowable by a particular wording is worthwhile, and I think it&#039;s been used to great effect in the critique; I just think it oversteps the mark by occasionally claiming the intent of the essay was always the worst case. This may be one reason Mr Voss finds the article a little incoherent.&lt;/p&gt;

</description>
		<content:encoded><![CDATA[<p><strong>A bit harsh</strong></p>
<p>Caveat: I have not read the response in full; apart from the summary, I have not read the Appendix. Although I realise that the major charge laid against &quot;Ethics for machines&quot; is lack of rigour, I think a few comments are worthwhile.</p>
<p>I think the critique is a bit harsh. &quot;Ethics for machines&quot; does rely on intuitive ethical principles, but as I recall it doesn&#39;t claim they are the pinnacle of ethical achievement. Rather, it points out an evolutionary reason for the development of ethics, and suggests that using the same mechanism to develop ethics for machines would be dangerously slow. It also carefully points out that, evolutionarily, if self-interest could be separated from ethics, that would provide a competitive advantage. I think it&#39;s implicit (or maybe explicit) in the essay, and in currently common &quot;intuitive ethical systems&quot;, that if such a separation were effected it would be a bad thing, ethics-wise.</p>
<p>As to improving ethical principles by reason and scientific principles, the original essay seemed to me to support such a position, with the author discussing creating super-ethical machines that could make their ethical discoveries known to the other conscious beings about the place, such as humans and post-humans. I also think the description of ethics as a science is too strong: ethics is still firmly a philosophy, and though systematic arguments can be applied, empirical observation is not really possible. The discussion must happen on the level of argument, with arguments being systematically tested out. Good counter-examples to my assertion, showing successful ethical experiments, would be appreciated.</p>
<p>The technique of assuming the worst case allowable by a particular wording is worthwhile, and I think it&#39;s been used to great effect in the critique; I just think it oversteps the mark by occasionally claiming the intent of the essay was always the worst case. This may be one reason Mr Voss finds the article a little incoherent.</p>
]]></content:encoded>
	</item>
</channel>
</rss>