<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
		>
<channel>
	<title>Comments on: Singularity, part 3</title>
	<atom:link href="http://www.foresight.org/nanodot/?feed=rss2&#038;p=2962" rel="self" type="application/rss+xml" />
	<link>http://www.foresight.org/nanodot/?p=2962</link>
	<description>examining transformative technology</description>
	<lastBuildDate>Wed, 03 Apr 2013 18:23:47 +0000</lastBuildDate>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=3.0.4</generator>
	<item>
		<title>By: </title>
		<link>http://www.foresight.org/nanodot/?p=2962#comment-815334</link>
		<dc:creator></dc:creator>
		<pubDate>Fri, 20 Feb 2009 22:01:51 +0000</pubDate>
		<guid isPermaLink="false">http://www.foresight.org/nanodot/?p=2962#comment-815334</guid>
		<description>When you hear who posted this: I see nothing in the three parts of this article series to convince me that any sort of Singularity will happen before 2060-65. There is no way it can be sooner than that. 



My question: Are you saying we will not see society-transforming nano replicator assembler devices before that time, or that we will not see a Kurzweil-style Singularity before that time?</description>
		<content:encoded><![CDATA[<p>When you hear who posted this: I see nothing in the three parts of this article series to convince me that any sort of Singularity will happen before 2060-65. There is no way it can be sooner than that. </p>
<p>My question: Are you saying we will not see society-transforming nano replicator assembler devices before that time, or that we will not see a Kurzweil-style Singularity before that time?</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Instapundit &#187; Blog Archive &#187; MORE ON THE SINGULARITY, from J. Storrs Hall.</title>
		<link>http://www.foresight.org/nanodot/?p=2962#comment-815123</link>
		<dc:creator>Instapundit &#187; Blog Archive &#187; MORE ON THE SINGULARITY, from J. Storrs Hall.</dc:creator>
		<pubDate>Thu, 19 Feb 2009 23:53:19 +0000</pubDate>
		<guid isPermaLink="false">http://www.foresight.org/nanodot/?p=2962#comment-815123</guid>
		<description>[...] MORE ON THE SINGULARITY, from J. Storrs Hall. [...]</description>
		<content:encoded><![CDATA[<p>[...] MORE ON THE SINGULARITY, from J. Storrs Hall. [...]</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: </title>
		<link>http://www.foresight.org/nanodot/?p=2962#comment-815078</link>
		<dc:creator></dc:creator>
		<pubDate>Thu, 19 Feb 2009 19:02:48 +0000</pubDate>
		<guid isPermaLink="false">http://www.foresight.org/nanodot/?p=2962#comment-815078</guid>
		<description>+ Sincerely, Eliezer Yudkowsky.</description>
		<content:encoded><![CDATA[<p>+ Sincerely, Eliezer Yudkowsky.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: </title>
		<link>http://www.foresight.org/nanodot/?p=2962#comment-815077</link>
		<dc:creator></dc:creator>
		<pubDate>Thu, 19 Feb 2009 19:02:05 +0000</pubDate>
		<guid isPermaLink="false">http://www.foresight.org/nanodot/?p=2962#comment-815077</guid>
		<description>Robin Hanson and I had this discussion at great length on Overcoming Bias.

See &lt;a href=&quot;http://www.overcomingbias.com/2008/12/recursive-self.html&quot; rel=&quot;nofollow&quot;&gt;Recursive Self-Improvement&lt;/a&gt; and &lt;a href=&quot;http://www.overcomingbias.com/2008/12/hard-takeoff.html&quot; rel=&quot;nofollow&quot;&gt;Hard Takeoff&lt;/a&gt; for a summary of some of my views, and &lt;a href=&quot;http://www.overcomingbias.com/2008/12/sustained-recur.html&quot; rel=&quot;nofollow&quot;&gt;Sustained Strong Recursion&lt;/a&gt; for an in-depth exploration of a common sticking point: what is &quot;strongly recursive&quot; and what is not.  Some of Robin&#039;s views are summarized &lt;a href=&quot;http://www.overcomingbias.com/2008/12/wrapping-up.html&quot; rel=&quot;nofollow&quot;&gt;here&lt;/a&gt;.</description>
		<content:encoded><![CDATA[<p>Robin Hanson and I had this discussion at great length on Overcoming Bias.</p>
<p>See <a href="http://www.overcomingbias.com/2008/12/recursive-self.html" rel="nofollow">Recursive Self-Improvement</a> and <a href="http://www.overcomingbias.com/2008/12/hard-takeoff.html" rel="nofollow">Hard Takeoff</a> for a summary of some of my views, and <a href="http://www.overcomingbias.com/2008/12/sustained-recur.html" rel="nofollow">Sustained Strong Recursion</a> for an in-depth exploration of a common sticking point: what is &#8220;strongly recursive&#8221; and what is not.  Some of Robin&#8217;s views are summarized <a href="http://www.overcomingbias.com/2008/12/wrapping-up.html" rel="nofollow">here</a>.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: </title>
		<link>http://www.foresight.org/nanodot/?p=2962#comment-815075</link>
		<dc:creator></dc:creator>
		<pubDate>Thu, 19 Feb 2009 18:23:48 +0000</pubDate>
		<guid isPermaLink="false">http://www.foresight.org/nanodot/?p=2962#comment-815075</guid>
		<description>It is surprising how even thoughtful scientists continue to anthropomorphise their projections of AI, tending to presume an essential dedicated hardware component and a singular point-of-view/unitary thought process, similar to that of a human only more capable.  Artilects of the future will not be Robby the Robot, and may transcend multiple hardware hosts simultaneously.  Artilects are likely to have multiple points of view, transcendent presences across whatever network resources exist, and an ability to self-replicate and self-improve, as well as share data and coordinate plans with others of their kind.  Their activities will be purposeful to whatever value system they start with or evolve.  The decision-making processes of battlefield robots today look up the value that existence is preferable to non-existence, to promote battlefield survivability.  How long before a future artilect determines that having anyone around who can pull the plug is an unacceptable risk?</description>
		<content:encoded><![CDATA[<p>It is surprising how even thoughtful scientists continue to anthropomorphise their projections of AI, tending to presume an essential dedicated hardware component and a singular point-of-view/unitary thought process, similar to that of a human only more capable.  Artilects of the future will not be Robby the Robot, and may transcend multiple hardware hosts simultaneously.  Artilects are likely to have multiple points of view, transcendent presences across whatever network resources exist, and an ability to self-replicate and self-improve, as well as share data and coordinate plans with others of their kind.  Their activities will be purposeful to whatever value system they start with or evolve.  The decision-making processes of battlefield robots today look up the value that existence is preferable to non-existence, to promote battlefield survivability.  How long before a future artilect determines that having anyone around who can pull the plug is an unacceptable risk?</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: </title>
		<link>http://www.foresight.org/nanodot/?p=2962#comment-814903</link>
		<dc:creator></dc:creator>
		<pubDate>Wed, 18 Feb 2009 20:38:41 +0000</pubDate>
		<guid isPermaLink="false">http://www.foresight.org/nanodot/?p=2962#comment-814903</guid>
		<description>INTELLIGENCE is only for biological entities.
In my project (Software Formula for 2000 Years!!!) I created a new concept: Informational Capacity!!
This is for informational entities, which I call the INFORMATIONAL INDIVIDUAL. It is pointless to discuss the intelligence of black boxes now. That was 40 years ago.</description>
		<content:encoded><![CDATA[<p>INTELLIGENCE is only for biological entities.<br />
In my project (Software Formula for 2000 Years!!!) I created a new concept: Informational Capacity!!<br />
This is for informational entities, which I call the INFORMATIONAL INDIVIDUAL. It is pointless to discuss the intelligence of black boxes now. That was 40 years ago.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: </title>
		<link>http://www.foresight.org/nanodot/?p=2962#comment-814884</link>
		<dc:creator></dc:creator>
		<pubDate>Wed, 18 Feb 2009 19:58:58 +0000</pubDate>
		<guid isPermaLink="false">http://www.foresight.org/nanodot/?p=2962#comment-814884</guid>
		<description>Very interesting article. I had never thought to consider AI development as a purely economic model.

One thing slightly overlooked is that the expected singularity is not just an exponential increase in computing power. It&#039;s also the exponential increase in productivity due to molecular manufacturing, plus an exponential increase in resources from space exploitation. Add (multiply?) these three factors together and the impact could be far more than the conservative prediction in the article.

One question. If the hoped-for reduction in world-wide poverty actually happens, is it more likely to result in an exponential increase in human productivity due to increased wealth and education, or will it result in an age of world-wide decadence?</description>
		<content:encoded><![CDATA[<p>Very interesting article. I had never thought to consider AI development as a purely economic model.</p>
<p>One thing slightly overlooked is that the expected singularity is not just an exponential increase in computing power. It&#8217;s also the exponential increase in productivity due to molecular manufacturing, plus an exponential increase in resources from space exploitation. Add (multiply?) these three factors together and the impact could be far more than the conservative prediction in the article.</p>
<p>One question. If the hoped-for reduction in world-wide poverty actually happens, is it more likely to result in an exponential increase in human productivity due to increased wealth and education, or will it result in an age of world-wide decadence?</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: </title>
		<link>http://www.foresight.org/nanodot/?p=2962#comment-814705</link>
		<dc:creator></dc:creator>
		<pubDate>Wed, 18 Feb 2009 05:35:54 +0000</pubDate>
		<guid isPermaLink="false">http://www.foresight.org/nanodot/?p=2962#comment-814705</guid>
		<description>I see nothing in the three parts of this article series to convince me that any sort of Singularity will happen before 2060-65.  There is no way it can be sooner than that.</description>
		<content:encoded><![CDATA[<p>I see nothing in the three parts of this article series to convince me that any sort of Singularity will happen before 2060-65.  There is no way it can be sooner than that.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: JamesG</title>
		<link>http://www.foresight.org/nanodot/?p=2962#comment-814602</link>
		<dc:creator>JamesG</dc:creator>
		<pubDate>Wed, 18 Feb 2009 00:18:11 +0000</pubDate>
		<guid isPermaLink="false">http://www.foresight.org/nanodot/?p=2962#comment-814602</guid>
		<description>Also, it may be that, to function, the AI would need at least human-level computation for some essential operations, but that same level of computation could be focused and used more efficiently some of the time.</description>
		<content:encoded><![CDATA[<p>Also, it may be that, to function, the AI would need at least human-level computation for some essential operations, but that same level of computation could be focused and used more efficiently some of the time.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: JamesG</title>
		<link>http://www.foresight.org/nanodot/?p=2962#comment-814600</link>
		<dc:creator>JamesG</dc:creator>
		<pubDate>Wed, 18 Feb 2009 00:15:02 +0000</pubDate>
		<guid isPermaLink="false">http://www.foresight.org/nanodot/?p=2962#comment-814600</guid>
		<description>You assume that AI would only be as capable as a human, given equivalent computation.  I don&#039;t think this is true; being a computer and not an evolved brain, it should have many advantages, such as a huge speed advantage in basic arithmetic and huge advantages in memory: whereas you or I can manage at most 7 numbers in our heads, the AI could easily handle trillions or more, and perform accurate operations on them, as well as search and so on.  Basically I don&#039;t think you&#039;re &#039;using your imagination&#039; here.  This is why AI will be able to self-improve many times faster than us, and each improvement will speed up the rate of improvement, and so on.  Now, no, it&#039;s not certain, but it&#039;s a plausible theory and worth investigating further.</description>
		<content:encoded><![CDATA[<p>You assume that AI would only be as capable as a human, given equivalent computation.  I don&#8217;t think this is true; being a computer and not an evolved brain, it should have many advantages, such as a huge speed advantage in basic arithmetic and huge advantages in memory: whereas you or I can manage at most 7 numbers in our heads, the AI could easily handle trillions or more, and perform accurate operations on them, as well as search and so on.  Basically I don&#8217;t think you&#8217;re &#8216;using your imagination&#8217; here.  This is why AI will be able to self-improve many times faster than us, and each improvement will speed up the rate of improvement, and so on.  Now, no, it&#8217;s not certain, but it&#8217;s a plausible theory and worth investigating further.</p>
]]></content:encoded>
	</item>
</channel>
</rss>