<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
		>
<channel>
	<title>Comments on: Smart Cascio article in Atlantic</title>
	<atom:link href="http://www.foresight.org/nanodot/?feed=rss2&#038;p=3077" rel="self" type="application/rss+xml" />
	<link>http://www.foresight.org/nanodot/?p=3077</link>
	<description>examining transformative technology</description>
	<lastBuildDate>Wed, 03 Apr 2013 18:23:47 +0000</lastBuildDate>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=3.0.4</generator>
	<item>
		<title>By: Joe Sampson</title>
		<link>http://www.foresight.org/nanodot/?p=3077#comment-859465</link>
		<dc:creator>Joe Sampson</dc:creator>
		<pubDate>Mon, 29 Jun 2009 06:44:03 +0000</pubDate>
		<guid isPermaLink="false">http://www.foresight.org/nanodot/?p=3077#comment-859465</guid>
		<description>I believe the most likely scenario is a merging of computers and humans, as Kurzweil describes. In fact, you see it today: I am smarter given access to a cell phone and computer. You could, in a way, think of these as extensions of my brain. I can search Google on my phone and look up news at any time; I can send emails, browse social networking sites, and use GPS. This trend will continue, and the connection between human and cell phone (or whatever we start calling it) will become much more intimate. Also, the software available on the phone (and internet) will become more powerful. Theoretically, as computers advance enough, the brain could be fully replaced with a computing substrate; this would happen through cell phone upgrades (assuming the name doesn&#039;t change). Separate AI probably will not be able to pass us, because we will continue upgrading our brains as we do now, by buying the latest tech gadgets.</description>
		<content:encoded><![CDATA[<p>I believe the most likely scenario is a merging of computers and humans, as Kurzweil describes. In fact, you see it today: I am smarter given access to a cell phone and computer. You could, in a way, think of these as extensions of my brain. I can search Google on my phone and look up news at any time; I can send emails, browse social networking sites, and use GPS. This trend will continue, and the connection between human and cell phone (or whatever we start calling it) will become much more intimate. Also, the software available on the phone (and internet) will become more powerful. Theoretically, as computers advance enough, the brain could be fully replaced with a computing substrate; this would happen through cell phone upgrades (assuming the name doesn&#8217;t change). Separate AI probably will not be able to pass us, because we will continue upgrading our brains as we do now, by buying the latest tech gadgets.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Cascio thoughts on augmenting intelligence and nanotechnology marketing &#171; FrogHeart</title>
		<link>http://www.foresight.org/nanodot/?p=3077#comment-859433</link>
		<dc:creator>Cascio thoughts on augmenting intelligence and nanotechnology marketing &#171; FrogHeart</dc:creator>
		<pubDate>Fri, 19 Jun 2009 18:12:22 +0000</pubDate>
		<guid isPermaLink="false">http://www.foresight.org/nanodot/?p=3077#comment-859433</guid>
		<description>[...] For another take on Cascio&#8217;s article, go to the Foresight Institute here. [...]</description>
		<content:encoded><![CDATA[<p>[...] For another take on Cascio&#8217;s article, go to the Foresight Institute here. [...]</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: the Foresight Institute &#187; The Software of Civilization</title>
		<link>http://www.foresight.org/nanodot/?p=3077#comment-859432</link>
		<dc:creator>the Foresight Institute &#187; The Software of Civilization</dc:creator>
		<pubDate>Fri, 19 Jun 2009 16:13:04 +0000</pubDate>
		<guid isPermaLink="false">http://www.foresight.org/nanodot/?p=3077#comment-859432</guid>
		<description>[...] Smart Cascio article in Atlantic [...]</description>
		<content:encoded><![CDATA[<p>[...] Smart Cascio article in Atlantic [...]</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Comment on Smart Cascio article in Atlantic by Michael Anissimov : About Pharmacy Blogs</title>
		<link>http://www.foresight.org/nanodot/?p=3077#comment-859429</link>
		<dc:creator>Comment on Smart Cascio article in Atlantic by Michael Anissimov : About Pharmacy Blogs</dc:creator>
		<pubDate>Fri, 19 Jun 2009 04:13:25 +0000</pubDate>
		<guid isPermaLink="false">http://www.foresight.org/nanodot/?p=3077#comment-859429</guid>
		<description>[...] Jamais Cascio has an article in the current Atlantic about how humans are getting smarter.  Read More here [...]</description>
		<content:encoded><![CDATA[<p>[...] Jamais Cascio has an article in the current Atlantic about how humans are getting smarter.  Read More here [...]</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Michael Anissimov</title>
		<link>http://www.foresight.org/nanodot/?p=3077#comment-859428</link>
		<dc:creator>Michael Anissimov</dc:creator>
		<pubDate>Fri, 19 Jun 2009 02:58:43 +0000</pubDate>
		<guid isPermaLink="false">http://www.foresight.org/nanodot/?p=3077#comment-859428</guid>
		<description>Thanks for the response. Essentially, it sounds like you&#039;re saying, &quot;AI will be kept separate in so many pieces that it will be integrated into society as a whole rather than being an independent force.&quot; I can see this happening for a while, but once you cross the threshold where any one AI can think creatively and quickly without the human bottleneck, doesn&#039;t that have special implications?

You mention that generally intelligent AI would require some pretty brute-force methods. Does this mean you think it will be so computationally costly that it will never accelerate independently ahead of humans, even if it can use additional computing power to accelerate its own thinking speed and humans can&#039;t?

It sounds like you&#039;re saying, &quot;narrow AI will be ubiquitous in the long term, while general AI will start to exist, but it will be so expensive and inconvenient that it will never surpass human civilization.&quot; Wouldn&#039;t there be a powerful incentive to create AI that is entirely independent of human supervision, and once that AI is created, couldn&#039;t it be applied to making itself more efficient and therefore less computation-hungry?

Taking a guess in a different direction: I think a stronger reason why many people are skeptical about the notion of a hard takeoff is that they think the virtual/physical barrier cannot be crossed that easily. (I assume) they believe that AI will never start independently fabricating robotics and controlling them autonomously, because humans will always forbid that from happening. But won&#039;t there be a large incentive to develop such systems, which can fabricate robotics and use them to pursue goals even while humans are asleep or otherwise uninterested in supervising the low-level details?</description>
		<content:encoded><![CDATA[<p>Thanks for the response. Essentially, it sounds like you&#8217;re saying, &#8220;AI will be kept separate in so many pieces that it will be integrated into society as a whole rather than being an independent force.&#8221; I can see this happening for a while, but once you cross the threshold where any one AI can think creatively and quickly without the human bottleneck, doesn&#8217;t that have special implications?</p>
<p>You mention that generally intelligent AI would require some pretty brute-force methods. Does this mean you think it will be so computationally costly that it will never accelerate independently ahead of humans, even if it can use additional computing power to accelerate its own thinking speed and humans can&#8217;t?</p>
<p>It sounds like you&#8217;re saying, &#8220;narrow AI will be ubiquitous in the long term, while general AI will start to exist, but it will be so expensive and inconvenient that it will never surpass human civilization.&#8221; Wouldn&#8217;t there be a powerful incentive to create AI that is entirely independent of human supervision, and once that AI is created, couldn&#8217;t it be applied to making itself more efficient and therefore less computation-hungry?</p>
<p>Taking a guess in a different direction: I think a stronger reason why many people are skeptical about the notion of a hard takeoff is that they think the virtual/physical barrier cannot be crossed that easily. (I assume) they believe that AI will never start independently fabricating robotics and controlling them autonomously, because humans will always forbid that from happening. But won&#8217;t there be a large incentive to develop such systems, which can fabricate robotics and use them to pursue goals even while humans are asleep or otherwise uninterested in supervising the low-level details?</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Smart Cascio article in Atlantic &#124; Everything News Portal!</title>
		<link>http://www.foresight.org/nanodot/?p=3077#comment-859427</link>
		<dc:creator>Smart Cascio article in Atlantic &#124; Everything News Portal!</dc:creator>
		<pubDate>Thu, 18 Jun 2009 22:09:20 +0000</pubDate>
		<guid isPermaLink="false">http://www.foresight.org/nanodot/?p=3077#comment-859427</guid>
		<description>[...] Read more &#8230;  [...]</description>
		<content:encoded><![CDATA[<p>[...] Read more &#8230;  [...]</p>
]]></content:encoded>
	</item>
</channel>
</rss>