<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
		>
<channel>
	<title>Comments on: Singularity Institute releases &#8216;Levels of Organization&#8217;</title>
	<atom:link href="http://www.foresight.org/nanodot/?feed=rss2&#038;p=1123" rel="self" type="application/rss+xml" />
	<link>http://www.foresight.org/nanodot/?p=1123</link>
	<description>examining transformative technology</description>
	<lastBuildDate>Wed, 03 Apr 2013 18:23:47 +0000</lastBuildDate>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=3.0.4</generator>
	<item>
		<title>By: Shadow</title>
		<link>http://www.foresight.org/nanodot/?p=1123#comment-2555</link>
		<dc:creator>Shadow</dc:creator>
		<pubDate>Tue, 30 Apr 2002 17:44:21 +0000</pubDate>
		<guid isPermaLink="false">http://www.foresight.org/nanodot/?p=1123#comment-2555</guid>
		<description>&lt;p&gt;&lt;strong&gt;Re:Is a general theory needed?&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;I don&#039;t think there&#039;s any conceivable &quot;application&quot; for a true AI in the normal sense. Such a technology would by definition be intelligent life. We don&#039;t generally think of intelligent life in terms of its applications. The only realistic impetus for development of true AI is the desire to discover (is it invention or is it discovery? philosophical question, I guess; maybe &quot;meet&quot; would be a more appropriate word) new forms of intelligence. Regardless, I think we will certainly have to know a great deal more about how our own intelligence operates before this would even become vaguely possible. That&#039;s one purpose for research in this area. There are other potential applications for understanding our own intelligence though. If we can get to the point that we have an accurate symbolic understanding of how the human mind works, then we could theoretically synthesize those symbols (at this level of understanding, AI couldn&#039;t be very far away). Given that power, we could possibly devise a method for digitizing our thoughts at a symbolic level. The applications of such technology would be tremendous. Telekinetics and telepathy could become reality. Imagine humans telekinetically controlling fleets of nanites. They&#039;d be like sorcerers. Eh, maybe I&#039;m just dreaming, but the potential is there, however far into the depths of the future it may be. That&#039;s how I&#039;d justify the research.&lt;/p&gt;

</description>
		<content:encoded><![CDATA[<p><strong>Re:Is a general theory needed?</strong></p>
<p>I don&#39;t think there&#39;s any concievable &quot;application&quot; for a true AI in the normal sense. Such a technology would by definition be intelligent life. We don&#39;t generally think of intelligent life in terms of its applications. The only realistic impetus for development of true AI is the desire to discover (is it invention or is it discovery? philosophical question, I guess; maybe &quot;meet&quot; would be a more approriate word) new forms of intelligence. Regardless, I think we will certainly have to know a great deal more about how our own intelligence operates before this would even become vaguely possible. That&#39;s one purpose for research in this area. There are other potential applications for understanding our own intelligence though. If we can get to the point that we have an accurate symbolic understanding of how the human mind works, then we could theoretically synthesize those symbols (at this level of understanding, AI couldn&#39;t be very far away). Given that power, we could possibly devise a method for digitizing our thoughts at a symbolic level. The applications of such technology would be tremendous. Telekinetics and telepathy could become reality. Imagine humans telekinetically controlling fleets of nanites. They&#39;d be like sorcerors. Eh, maybe I&#39;m just dreaming, but the potential is there, however far into the depths of the future it may be. That&#39;s how I&#39;d justify the research.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Shadow</title>
		<link>http://www.foresight.org/nanodot/?p=1123#comment-2558</link>
		<dc:creator>Shadow</dc:creator>
		<pubDate>Tue, 30 Apr 2002 16:47:21 +0000</pubDate>
		<guid isPermaLink="false">http://www.foresight.org/nanodot/?p=1123#comment-2558</guid>
		<description>&lt;p&gt;&lt;strong&gt;Is this really new?&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Didn&#039;t Hofstadter write an entire book -- &lt;em&gt;&quot;Gödel, Escher, Bach&quot;&lt;/em&gt; -- on the exact same subject? What&#039;s the new material in this paper?&lt;/p&gt;

</description>
		<content:encoded><![CDATA[<p><strong>Is this really new?</strong></p>
<p>Didn&#39;t Hoffstadter (I&#39;m sure I&#39;m misspelling his name) write an entire book &#8212; <em>&quot;Godel,Escher,Bach&quot;</em>&#8211; on the exact same subject? What&#39;s the new material in this paper?</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Mr_Farlops</title>
		<link>http://www.foresight.org/nanodot/?p=1123#comment-2554</link>
		<dc:creator>Mr_Farlops</dc:creator>
		<pubDate>Thu, 25 Apr 2002 21:26:04 +0000</pubDate>
		<guid isPermaLink="false">http://www.foresight.org/nanodot/?p=1123#comment-2554</guid>
		<description>&lt;p&gt;&lt;strong&gt;Re:Is a general theory needed?&lt;/strong&gt;&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&quot;Hell we could probably do the domestic servant bit now&quot;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Hmm. I don&#039;t know. That car autopilot that CMU is developing still occasionally mistakes trees for roadways. I don&#039;t want 250 kilos of confused robot butler rampaging through my house, just because I moved some papers or books around!&lt;/p&gt;

</description>
		<content:encoded><![CDATA[<p><strong>Re:Is a general theory needed?</strong></p>
<blockquote>
<p>&quot;Hell we could probably do the domestic servant bit now&quot;</p>
</blockquote>
<p>Hmm. I don&#39;t know. That car autopilot that CMU is developing still occasionally mistakes trees for roadways. I don&#39;t want 250 kilos of confused robot butler rampaging through my house, just because I moved some papers or books around!</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Corwin</title>
		<link>http://www.foresight.org/nanodot/?p=1123#comment-2557</link>
		<dc:creator>Corwin</dc:creator>
		<pubDate>Thu, 25 Apr 2002 15:48:53 +0000</pubDate>
		<guid isPermaLink="false">http://www.foresight.org/nanodot/?p=1123#comment-2557</guid>
		<description>&lt;p&gt;&lt;strong&gt;Re:Is a general theory needed?&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Yeah but you don&#039;t have to water AI.... they can be programmed to plug themselves in... ;)&lt;/p&gt;

</description>
		<content:encoded><![CDATA[<p><strong>Re:Is a general theory needed?</strong></p>
<p>Yeah but you don&#39;t have to water AI&#8230;. they can be programmed to plug themselves in&#8230; <img src='http://www.foresight.org/nanodot/wp-includes/images/smilies/icon_wink.gif' alt=';)' class='wp-smiley' /> </p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Steve_Moniz</title>
		<link>http://www.foresight.org/nanodot/?p=1123#comment-2556</link>
		<dc:creator>Steve_Moniz</dc:creator>
		<pubDate>Thu, 25 Apr 2002 13:45:04 +0000</pubDate>
		<guid isPermaLink="false">http://www.foresight.org/nanodot/?p=1123#comment-2556</guid>
		<description>&lt;p&gt;&lt;strong&gt;Re:Is a general theory needed?&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;I&#039;m not sure I &lt;em&gt;want&lt;/em&gt; to bring strong AI into this world...I can&#039;t even keep house plants alive!&lt;/p&gt;

</description>
		<content:encoded><![CDATA[<p><strong>Re:Is a general theory needed?</strong></p>
<p>I&#39;m not sure I <em>want</em> to bring strong AI into this world&#8230;I can&#39;t even keep house plants alive!</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Corwin</title>
		<link>http://www.foresight.org/nanodot/?p=1123#comment-2553</link>
		<dc:creator>Corwin</dc:creator>
		<pubDate>Thu, 25 Apr 2002 05:07:43 +0000</pubDate>
		<guid isPermaLink="false">http://www.foresight.org/nanodot/?p=1123#comment-2553</guid>
		<description>&lt;p&gt;&lt;strong&gt;Re:Is a general theory needed?&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Hell we could probably do the domestic servant bit now.... AI is far enough along for that. Black and White may have been a lame game.... but when AI at that level can be found in a consumer-level product.... MIT has much better. ;)&lt;/p&gt;

</description>
		<content:encoded><![CDATA[<p><strong>Re:Is a general theory needed?</strong></p>
<p>Hell we could probably do the domestic servant bit now&#8230;. AI is far enough along for that. Black and White may have been a lame game&#8230;. but when AI at that level can be found in a consumer-level product&#8230;. MIT has much better. <img src='http://www.foresight.org/nanodot/wp-includes/images/smilies/icon_wink.gif' alt=';)' class='wp-smiley' /> </p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Mr_Farlops</title>
		<link>http://www.foresight.org/nanodot/?p=1123#comment-2552</link>
		<dc:creator>Mr_Farlops</dc:creator>
		<pubDate>Thu, 25 Apr 2002 01:40:51 +0000</pubDate>
		<guid isPermaLink="false">http://www.foresight.org/nanodot/?p=1123#comment-2552</guid>
		<description>&lt;p&gt;&lt;strong&gt;Re:Is a general theory needed?&lt;/strong&gt;&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&quot;I&#039;m generally quite opposed to tinkering with complicated or emergent systems unless we understand the basic principles involved.&quot;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Yes. You do have a point there; these creatures will be at least as unpredictable as people are. And for that reason we may not want to mess around with that until we are good and ready.&lt;/p&gt;
&lt;p&gt;Also it&#039;s true that a lot of the things we want done don&#039;t really require strong AI. We just need a set of programs robust enough to not get confused by the mildly unpredictable situations we place them in. For example, if the robot butler doesn&#039;t have to spend twelve hours remapping the room after you move the furniture a bit, we are making some progress. That&#039;s just ordinary, weak AI and, Microsoft&#039;s paperclip aside, we are already making progress on that. But will this lead to superhuman intelligence, assuming such a thing is possible? I doubt it. Perhaps these weak AI applications might be superhumanly intelligent (again, define that as you wish) in their very limited fields, Deep Blue and chess for example, but I don&#039;t think most people will really think of these tools as being intelligent or conscious, let alone superhumanly intelligent. Which means they aren&#039;t relevant to Moravec&#039;s brain taping idea.&lt;/p&gt;
&lt;p&gt;And I do agree that we really should understand a lot more about how our brains work before we attempt to recreate them in artificial life or attempt to improve on them. The prospect of psychotics with god-like intelligence is very chilling.&lt;/p&gt;

</description>
		<content:encoded><![CDATA[<p><strong>Re:Is a general theory needed?</strong></p>
<blockquote>
<p>&quot;I&#39;m generally quite opposed to tinkering with complicated or emergent systems unless we understand the basic principles involved.&quot;</p>
</blockquote>
<p>Yes. You do have a point there; these creatures will be at least as unpredictable as people are. And for that reason we may not want to mess around with that until we are good and ready.</p>
<p>Also it&#39;s true that a lot of the things we want done don&#39;t really require strong AI. We just need a set of programs robust enough to not get confused by the mildly unpredictable situations we place them in. For example, if the robot butler doesn&#39;t have to spend twelve hours remapping the room after you move the furniture a bit, we are making some progress. That&#39;s just ordinary, weak AI and, Microsoft&#39;s paperclip aside, we are already making progress on that. But will this lead to superhuman intelligence, assuming such a thing is possible? I doubt it. Perhaps these weak AI applications might be superhumanly intelligent (again, define that as you wish) in their very limited fields, Deep Blue and chess for example, but I don&#39;t think most people will really think of these tools as being intelligent or conscious, let alone superhumanly intelligent. Which means they aren&#39;t relevant to Moravec&#39;s brain taping idea.</p>
<p>And I do agree that we really should understand a lot more about how our brains work before we attempt to recreate them in artificial life or attempt to improve on them. The prospect of psychotics with god-like intelligence is very chilling.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Corwin</title>
		<link>http://www.foresight.org/nanodot/?p=1123#comment-2551</link>
		<dc:creator>Corwin</dc:creator>
		<pubDate>Wed, 24 Apr 2002 19:11:27 +0000</pubDate>
		<guid isPermaLink="false">http://www.foresight.org/nanodot/?p=1123#comment-2551</guid>
		<description>&lt;p&gt;&lt;strong&gt;Re:Is a general theory needed?&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;You know, if you don&#039;t stop this you&#039;re going to go blind, Kad...&lt;/p&gt;

</description>
		<content:encoded><![CDATA[<p><strong>Re:Is a general theory needed?</strong></p>
<p>You know, if you don&#39;t stop this you&#39;re going to go blind, Kad&#8230;</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Kadamose</title>
		<link>http://www.foresight.org/nanodot/?p=1123#comment-2550</link>
		<dc:creator>Kadamose</dc:creator>
		<pubDate>Wed, 24 Apr 2002 19:08:36 +0000</pubDate>
		<guid isPermaLink="false">http://www.foresight.org/nanodot/?p=1123#comment-2550</guid>
		<description>&lt;p&gt;&lt;strong&gt;Re:Is a general theory needed?&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;How can you say that you&#039;re going to live your lives, learn, grow, and progress when you are all simply repeating the same mistakes as your pathetic ancestors? Yes, they did not have the technology that we do now (this only applies to anything after circa 3000 BC), but they did have the same mindset (i.e. Opportunity=$$$=Power=War)&lt;br /&gt;
&lt;br /&gt;
If I were a god, I would find mankind unfit to even live in the first place. Things must change, otherwise, we&#039;re all dead anyway, regardless of how far our technology takes us.&lt;/p&gt;

</description>
		<content:encoded><![CDATA[<p><strong>Re:Is a general theory needed?</strong></p>
<p>How can you say that you&#39;re going to live your lives, learn, grow, and progress when you are all simply repeating the same mistakes as your pathetic ancestors? Yes, they did not have the technology that we do now (this only applies to anything after circa 3000 BC), but they did have the same mindset (i.e. Opportunity=$$$=Power=War)</p>
<p>If I were a god, I would find mankind unfit to even live in the first place. Things must change, otherwise, we&#39;re all dead anyway, regardless of how far our technology takes us.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Corwin</title>
		<link>http://www.foresight.org/nanodot/?p=1123#comment-2549</link>
		<dc:creator>Corwin</dc:creator>
		<pubDate>Wed, 24 Apr 2002 18:55:56 +0000</pubDate>
		<guid isPermaLink="false">http://www.foresight.org/nanodot/?p=1123#comment-2549</guid>
		<description>&lt;p&gt;&lt;strong&gt;Re:Is a general theory needed?&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Well I suspect it&#039;s safe to say that we here would all really prefer it if you didn&#039;t breed anyway. (Not that it sounds like that will be much of an issue...) So why don&#039;t you just go live on a mountaintop in Sri Lanka and masturbate over Zecharia Sitchin books and leave the rest of us alone to live our lives, learn, grow, progress, and yes occasionally get laid. Okie? Okie.&lt;br /&gt;
&lt;br /&gt;
*bubye*&lt;/p&gt;

</description>
		<content:encoded><![CDATA[<p><strong>Re:Is a general theory needed?</strong></p>
<p>Well I suspect it&#39;s safe to say that we here would all really prefer it if you didn&#39;t breed anyway. (Not that it sounds like that will be much of an issue&#8230;) So why don&#39;t you just go live on a mountaintop in Sri Lanka and masturbate over Zecharia Sitchin books and leave the rest of us alone to live our lives, learn, grow, progress, and yes occasionally get laid. Okie? Okie.</p>
<p>*bubye*</p>
]]></content:encoded>
	</item>
</channel>
</rss>