<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
		>
<channel>
	<title>Comments on: Superhuman Psychopaths</title>
	<atom:link href="http://www.foresight.org/nanodot/?feed=rss2&#038;p=3254" rel="self" type="application/rss+xml" />
	<link>http://www.foresight.org/nanodot/?p=3254</link>
	<description>examining transformative technology</description>
	<lastBuildDate>Wed, 03 Apr 2013 18:23:47 +0000</lastBuildDate>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=3.0.4</generator>
	<item>
		<title>By: Graham Rawlinson</title>
		<link>http://www.foresight.org/nanodot/?p=3254#comment-859853</link>
		<dc:creator>Graham Rawlinson</dc:creator>
		<pubDate>Mon, 31 Aug 2009 09:38:45 +0000</pubDate>
		<guid isPermaLink="false">http://www.foresight.org/nanodot/?p=3254#comment-859853</guid>
		<description>Apologies, the last comment on Pirsig&#039;s book should have stated Lila, An Inquiry into Morals not Morality.

Thanks

Graham</description>
		<content:encoded><![CDATA[<p>Apologies, the last comment on Pirsig&#8217;s book should have stated Lila, An Inquiry into Morals not Morality.</p>
<p>Thanks</p>
<p>Graham</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Graham Rawlinson</title>
		<link>http://www.foresight.org/nanodot/?p=3254#comment-859852</link>
		<dc:creator>Graham Rawlinson</dc:creator>
		<pubDate>Mon, 31 Aug 2009 09:23:34 +0000</pubDate>
		<guid isPermaLink="false">http://www.foresight.org/nanodot/?p=3254#comment-859852</guid>
		<description>Conscience, multiplicity, AI and risk.

I wish to enter this discussion and hope that not too many of my thoughts have been offered too many times before! Apologies if so.

My background is as a psychologist and innovation facilitator. More recently I have been working with Rita Carter on the notion of Multiplicity, that we all have multiple minds which each have their own memories, motivations and access to skill sets. I know this concept will not be new to AI people as computers have elements of Multiplicity built in.

It seems to me that the desire or need to place conscience in artificially intelligent machines (though I confess I am not entirely sure I know what non-artificially intelligent machines are) is best based on an assessment of risk.

I start with the proposition that risk lies in handing power to one thing/person or a small group of things/people such that normal corrections against excess cannot be managed without risk and high cost.

Human history is full of attempts to manage that risk, and each of us may have our own view of how successful those attempts have been. So our desire or need to hand conscience over to machines can be set against a more general suggestion: neither machines nor people should be handed power so great that it cannot easily be corrected by other forces.

In that sense the machine element of the problem is not fundamentally different.

Having said that, the attempt to place conscience inside a machine may be very informative in trying to understand how we might do so for human beings also. I am not sure I or other people know how to place conscience inside the mind of another in such a way as to have it rule over decision making.
Many people with the very best of moral upbringing have engaged later in atrocities. We do know how to get people to act morally when they are not tempted by too great a prize or challenged by too great a threat, but after that, well, the jury is surely out.

At the other end there are criminal gangs and terrorist groups who would declare their behaviour in the strongest moral terms. Family/religion is everything.

My third point brings in Multiplicity (see www.ritacarter.co.uk for the book). I think this risk management has many connections to Multiplicity: the reason we have multiple selves, I think, is to help us manage our thinking and decision making and to reduce the risk of over-powerful personas taking actions which are harmful to ourselves or our communities (and of course our genes).

If our high risk decision making has to pass through a parliament of minds (French - Parler), then this is fine whether it is machine or human or, and maybe this is always the best option, human and machine?

As AI gets more powerful there should come a time when we hand one of the keys to decision making to these machines, always ensuring that humans have some keys too and that all are needed for risky decisions. This would have the additional advantage of reducing the risk of relying on the human conscience, which has been at different times and for different peoples a failure and a success.

As a final note, I do think conscience &#039;morality&#039; could be much simpler than we might imagine, and some parts of an understanding of morality may be found in Pirsig&#039;s book, Lila, an enquiry into Morality.

Thanks

Graham Rawlinson
Author of PhD The Significance of Letter Position in Word Recognition, Nottingham University (not Cambridge as was wrongly circulated)</description>
		<content:encoded><![CDATA[<p>Conscience, multiplicity, AI and risk.</p>
<p>I wish to enter this discussion and hope that not too many of my thoughts have been offered too many times before! Apologies if so.</p>
<p>My background is as a psychologist and innovation facilitator. More recently I have been working with Rita Carter on the notion of Multiplicity, that we all have multiple minds which each have their own memories, motivations and access to skill sets. I know this concept will not be new to AI people as computers have elements of Multiplicity built in.</p>
<p>It seems to me that the desire or need to place conscience in artificially intelligent machines (though I confess I am not entirely sure I know what non-artificially intelligent machines are) is best based on an assessment of risk.</p>
<p>I start with the proposition that risk lies in handing power to one thing/person or a small group of things/people such that normal corrections against excess cannot be managed without risk and high cost.</p>
<p>Human history is full of attempts to manage that risk, and each of us may have our own view of how successful those attempts have been. So our desire or need to hand conscience over to machines can be set against a more general suggestion: neither machines nor people should be handed power so great that it cannot easily be corrected by other forces.</p>
<p>In that sense the machine element of the problem is not fundamentally different.</p>
<p>Having said that, the attempt to place conscience inside a machine may be very informative in trying to understand how we might do so for human beings also. I am not sure I or other people know how to place conscience inside the mind of another in such a way as to have it rule over decision making.<br />
Many people with the very best of moral upbringing have engaged later in atrocities. We do know how to get people to act morally when they are not tempted by too great a prize or challenged by too great a threat, but after that, well, the jury is surely out.</p>
<p>At the other end there are criminal gangs and terrorist groups who would declare their behaviour in the strongest moral terms. Family/religion is everything.</p>
<p>My third point brings in Multiplicity (see <a href="http://www.ritacarter.co.uk" rel="nofollow">http://www.ritacarter.co.uk</a> for the book). I think this risk management has many connections to Multiplicity: the reason we have multiple selves, I think, is to help us manage our thinking and decision making and to reduce the risk of over-powerful personas taking actions which are harmful to ourselves or our communities (and of course our genes).</p>
<p>If our high risk decision making has to pass through a parliament of minds (French &#8211; Parler), then this is fine whether it is machine or human or, and maybe this is always the best option, human and machine?</p>
<p>As AI gets more powerful there should come a time when we hand one of the keys to decision making to these machines, always ensuring that humans have some keys too and that all are needed for risky decisions. This would have the additional advantage of reducing the risk of relying on the human conscience, which has been at different times and for different peoples a failure and a success.</p>
<p>As a final note, I do think conscience &#8216;morality&#8217; could be much simpler than we might imagine, and some parts of an understanding of morality may be found in Pirsig&#8217;s book, Lila, an enquiry into Morality.</p>
<p>Thanks</p>
<p>Graham Rawlinson<br />
Author of PhD The Significance of Letter Position in Word Recognition, Nottingham University (not Cambridge as was wrongly circulated)</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Spike McLarty</title>
		<link>http://www.foresight.org/nanodot/?p=3254#comment-859850</link>
		<dc:creator>Spike McLarty</dc:creator>
		<pubDate>Mon, 31 Aug 2009 08:07:52 +0000</pubDate>
		<guid isPermaLink="false">http://www.foresight.org/nanodot/?p=3254#comment-859850</guid>
		<description>OK... how about some pointers to the literature on how to implement this easy little after-the-hard-work-is-done bolt-on conscience?  I&#039;ve ordered my copy of Beyond AI, so maybe this is out of order - but surely if we think AIs could be superhuman sociopaths, their moral sense is not the last problem to solve, nor the last piece of code to debug?  Doesn&#039;t a binding moral imperative necessarily require systematic distortion of some aspect of cognition? That doesn&#039;t sound like something you can just &#039;add&#039;.</description>
		<content:encoded><![CDATA[<p>OK&#8230; how about some pointers to the literature on how to implement this easy little after-the-hard-work-is-done bolt-on conscience?  I&#8217;ve ordered my copy of Beyond AI, so maybe this is out of order &#8211; but surely if we think AIs could be superhuman sociopaths, their moral sense is not the last problem to solve, nor the last piece of code to debug?  Doesn&#8217;t a binding moral imperative necessarily require systematic distortion of some aspect of cognition? That doesn&#8217;t sound like something you can just &#8216;add&#8217;.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: J. Storrs Hall</title>
		<link>http://www.foresight.org/nanodot/?p=3254#comment-859810</link>
		<dc:creator>J. Storrs Hall</dc:creator>
		<pubDate>Sun, 23 Aug 2009 18:03:10 +0000</pubDate>
		<guid isPermaLink="false">http://www.foresight.org/nanodot/?p=3254#comment-859810</guid>
		<description>@Michael: You&#039;re a good man and your moderate tone is appreciated.  
You&#039;ll find that I spend a lot of time in Beyond AI discussing how to build an AI in the first place, since I feel that it is the brittleness and lack of common sense in current AI practice that will be causing most of the AI-related tragedies for the next decade at least.  (Read Tom Godwin&#039;s &quot;The Gulf Between&quot; for which the cover illustration of my book was first painted.)
Once we get that part figured out, which is a really hard problem and will in my estimation take us a full decade to understand, building a proper conscience will be a moderate additional increment.  But from today&#039;s standing start, it&#039;s a huge task, equivalent to the entire language understanding and generation problem, plus.
The reason it&#039;s depressingly simple is that &lt;i&gt;once we know how the conscience works&lt;/i&gt; we only need change the human heuristic of &quot;be as selfish as possible to start (we call this childish) and move to cooperation as the environment demands&quot; to the opposite direction (i.e. start as unselfish as possible, etc).
Eliezer is a brilliant autodidact but an autodidact nonetheless.  Thus those of us who have spent 30 years studying AI in academia and 50 years studying it in science fiction have seen most of the ideas he is known for, well before he appeared. Two examples: &quot;seed AI&quot; and &quot;friendly AI&quot;: The first is essentially what Alan Turing proposed in 1950 in the classic Mind paper (where the Turing test is proposed) under the name of &quot;the child machine&quot;.  The second is very much what Asimov had in mind with his robot series and the Three Laws.  If you doubt he had such a broad conception in mind, read &lt;a href=&quot;http://en.wikipedia.org/wiki/The_Evitable_Conflict&quot; rel=&quot;nofollow&quot;&gt;The Evitable Conflict&lt;/a&gt; (the last story in &quot;I, Robot&quot;), which also appeared in 1950.</description>
		<content:encoded><![CDATA[<p>@Michael: You&#8217;re a good man and your moderate tone is appreciated.<br />
You&#8217;ll find that I spend a lot of time in Beyond AI discussing how to build an AI in the first place, since I feel that it is the brittleness and lack of common sense in current AI practice that will be causing most of the AI-related tragedies for the next decade at least.  (Read Tom Godwin&#8217;s &#8220;The Gulf Between&#8221; for which the cover illustration of my book was first painted.)<br />
Once we get that part figured out, which is a really hard problem and will in my estimation take us a full decade to understand, building a proper conscience will be a moderate additional increment.  But from today&#8217;s standing start, it&#8217;s a huge task, equivalent to the entire language understanding and generation problem, plus.<br />
The reason it&#8217;s depressingly simple is that <i>once we know how the conscience works</i> we only need change the human heuristic of &#8220;be as selfish as possible to start (we call this childish) and move to cooperation as the environment demands&#8221; to the opposite direction (i.e. start as unselfish as possible, etc).<br />
Eliezer is a brilliant autodidact but an autodidact nonetheless.  Thus those of us who have spent 30 years studying AI in academia and 50 years studying it in science fiction have seen most of the ideas he is known for, well before he appeared. Two examples: &#8220;seed AI&#8221; and &#8220;friendly AI&#8221;: The first is essentially what Alan Turing proposed in 1950 in the classic Mind paper (where the Turing test is proposed) under the name of &#8220;the child machine&#8221;.  The second is very much what Asimov had in mind with his robot series and the Three Laws.  If you doubt he had such a broad conception in mind, read <a href="http://en.wikipedia.org/wiki/The_Evitable_Conflict" rel="nofollow">The Evitable Conflict</a> (the last story in &#8220;I, Robot&#8221;), which also appeared in 1950.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Phil Bowermaster</title>
		<link>http://www.foresight.org/nanodot/?p=3254#comment-859804</link>
		<dc:creator>Phil Bowermaster</dc:creator>
		<pubDate>Sat, 22 Aug 2009 20:03:07 +0000</pubDate>
		<guid isPermaLink="false">http://www.foresight.org/nanodot/?p=3254#comment-859804</guid>
		<description>Depending on how powerful the superintelligence is, the more moral we make it, the more control of the world we will likely relinquish to it. Or rather, the more control of the world it will likely &lt;a href=&quot;http://www.blog.speculist.com/archives/002128.html&quot; rel=&quot;nofollow&quot;&gt;take from us.&lt;/a&gt; Which isn&#039;t necessarily a bad thing.</description>
		<content:encoded><![CDATA[<p>Depending on how powerful the superintelligence is, the more moral we make it, the more control of the world we will likely relinquish to it. Or rather, the more control of the world it will likely <a href="http://www.blog.speculist.com/archives/002128.html" rel="nofollow">take from us.</a> Which isn&#8217;t necessarily a bad thing.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Michael Anissimov</title>
		<link>http://www.foresight.org/nanodot/?p=3254#comment-859803</link>
		<dc:creator>Michael Anissimov</dc:creator>
		<pubDate>Sat, 22 Aug 2009 17:51:57 +0000</pubDate>
		<guid isPermaLink="false">http://www.foresight.org/nanodot/?p=3254#comment-859803</guid>
		<description>Josh, maybe I am being unfair to your ideas.  My critical comments stem from various references to your ideas which I read in &quot;Moral Machines&quot; by Wallach and Allen, as well as hearing you speak at Singularity Summit 2007.  I have your most recent book but received it only quite recently and honestly have not read it yet.  I will try to read it before I make any more references to your ideas, so I can be sure I know what I&#039;m talking about.  

To replace &quot;without real work&quot;, maybe I should have said &quot;with relatively little work&quot;, as you imply above with the phrase &quot;depressingly easy&quot;.  That would be accurate, yes?

I only arrived in this community in 2001, so I can&#039;t say for sure whether the ideas you list first appeared from you or Eliezer.  Do you have any references to mailing list posts I might be able to look at to settle the issue?</description>
		<content:encoded><![CDATA[<p>Josh, maybe I am being unfair to your ideas.  My critical comments stem from various references to your ideas which I read in &#8220;Moral Machines&#8221; by Wallach and Allen, as well as hearing you speak at Singularity Summit 2007.  I have your most recent book but received it only quite recently and honestly have not read it yet.  I will try to read it before I make any more references to your ideas, so I can be sure I know what I&#8217;m talking about.  </p>
<p>To replace &#8220;without real work&#8221;, maybe I should have said &#8220;with relatively little work&#8221;, as you imply above with the phrase &#8220;depressingly easy&#8221;.  That would be accurate, yes?</p>
<p>I only arrived in this community in 2001, so I can&#8217;t say for sure whether the ideas you list first appeared from you or Eliezer.  Do you have any references to mailing list posts I might be able to look at to settle the issue?</p>
]]></content:encoded>
	</item>
</channel>
</rss>