<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
		>
<channel>
	<title>Comments on: Human Level AI</title>
	<atom:link href="http://www.foresight.org/nanodot/?feed=rss2&#038;p=3356" rel="self" type="application/rss+xml" />
	<link>http://www.foresight.org/nanodot/?p=3356</link>
	<description>examining transformative technology</description>
	<lastBuildDate>Wed, 03 Apr 2013 18:23:47 +0000</lastBuildDate>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=3.0.4</generator>
	<item>
		<title>By: complementaire sante</title>
		<link>http://www.foresight.org/nanodot/?p=3356#comment-906058</link>
		<dc:creator>complementaire sante</dc:creator>
		<pubDate>Wed, 28 Jul 2010 16:49:33 +0000</pubDate>
		<guid isPermaLink="false">http://www.foresight.org/nanodot/?p=3356#comment-906058</guid>
		<description>Hello, very interesting. I cover the same subject on my blog. I will allow myself to draw inspiration from your text, citing you, of course, and with your permission. I also discuss topics such as mutual health insurance and hospitalization coverage, or health insurance for young people. Thank you, Alfie</description>
		<content:encoded><![CDATA[<p>Hello, very interesting. I cover the same subject on my blog. I will allow myself to draw inspiration from your text, citing you, of course, and with your permission. I also discuss topics such as mutual health insurance and hospitalization coverage, or health insurance for young people. Thank you, Alfie</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: James Hoppe</title>
		<link>http://www.foresight.org/nanodot/?p=3356#comment-860504</link>
		<dc:creator>James Hoppe</dc:creator>
		<pubDate>Fri, 25 Sep 2009 04:19:28 +0000</pubDate>
		<guid isPermaLink="false">http://www.foresight.org/nanodot/?p=3356#comment-860504</guid>
		<description>A strong AI breakthrough, while inevitable on some timeline, is too much like SETI (the Search for Extra-Terrestrial Intelligence) and research missions in space or the oceans to get the funds.  There does not appear to me to be any hurry to build tools for global good.  These projects exist, but get short shrift in the budgets.  The search for salvation from space, the ocean, or artificial intelligence seems to me to have little relevance in a world where the first, and missing, step is obviously to care very much about saving ourselves, which we seem in no hurry to do.

Who decides what gets built in the world? And when?

Obviously, an energy machine or a computing machine design is the best hope for a technological breakthrough to save us all, but efforts toward &quot;good&quot; problem-solving projects such as global warming, global hunger, and sustainable energy have historically been starved.  Why should AI be any different?

Unless there&#039;s a new way of thinking dawning, there is simply not enough money to build strong AI by 2025.  There is enough science.  There is enough data.  There are enough words.  I bet the money isn&#039;t there.</description>
		<content:encoded><![CDATA[<p>A strong AI breakthrough, while inevitable on some timeline, is too much like SETI (the Search for Extra-Terrestrial Intelligence) and research missions in space or the oceans to get the funds.  There does not appear to me to be any hurry to build tools for global good.  These projects exist, but get short shrift in the budgets.  The search for salvation from space, the ocean, or artificial intelligence seems to me to have little relevance in a world where the first, and missing, step is obviously to care very much about saving ourselves, which we seem in no hurry to do.</p>
<p>Who decides what gets built in the world? And when?</p>
<p>Obviously, an energy machine or a computing machine design is the best hope for a technological breakthrough to save us all, but efforts toward &#8220;good&#8221; problem-solving projects such as global warming, global hunger, and sustainable energy have historically been starved.  Why should AI be any different?</p>
<p>Unless there&#8217;s a new way of thinking dawning, there is simply not enough money to build strong AI by 2025.  There is enough science.  There is enough data.  There are enough words.  I bet the money isn&#8217;t there.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Mario</title>
		<link>http://www.foresight.org/nanodot/?p=3356#comment-859987</link>
		<dc:creator>Mario</dc:creator>
		<pubDate>Mon, 21 Sep 2009 16:13:31 +0000</pubDate>
		<guid isPermaLink="false">http://www.foresight.org/nanodot/?p=3356#comment-859987</guid>
		<description>Let me know when one can buy a microprocessor with a trillion transistors and 3D interconnections no more than a nanometer wide. Then forget about contemporary CPU design. If that day comes, I will tell you what the next step is. To simulate intelligence is one thing; to create intelligence is madness. Who will want to do that? Of course, it will be the ultimate human creation - literally! Will it ever happen? Yes. In the next 15 years? I don&#039;t think so. In my lifetime? (the next 50 years) I hope not. One thing is sure - it will happen. Why? Because no ET has bothered to contact us, although we are beginning to realize they are out there for sure. If you don&#039;t understand how I came to this conclusion, too bad :-) If you are young enough - one day you will . . .</description>
		<content:encoded><![CDATA[<p>Let me know when one can buy a microprocessor with a trillion transistors and 3D interconnections no more than a nanometer wide. Then forget about contemporary CPU design. If that day comes, I will tell you what the next step is. To simulate intelligence is one thing; to create intelligence is madness. Who will want to do that? Of course, it will be the ultimate human creation &#8211; literally! Will it ever happen? Yes. In the next 15 years? I don&#8217;t think so. In my lifetime? (the next 50 years) I hope not. One thing is sure &#8211; it will happen. Why? Because no ET has bothered to contact us, although we are beginning to realize they are out there for sure. If you don&#8217;t understand how I came to this conclusion, too bad <img src='http://www.foresight.org/nanodot/wp-includes/images/smilies/icon_smile.gif' alt=':-)' class='wp-smiley' />  If you are young enough &#8211; one day you will . . .</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Tristan Yates</title>
		<link>http://www.foresight.org/nanodot/?p=3356#comment-859976</link>
		<dc:creator>Tristan Yates</dc:creator>
		<pubDate>Sun, 20 Sep 2009 05:51:51 +0000</pubDate>
		<guid isPermaLink="false">http://www.foresight.org/nanodot/?p=3356#comment-859976</guid>
		<description>When I was 13 I was interested in AI.  I read all of the books and within about six months understood why advanced machine intelligence was impossible.  Bottom line is there&#039;s no functional theory of mind.  I love how in movies AIs are both master problem solvers and relentless automatons, as if there were no conflict between the two modes of operation.  People don&#039;t do what they are told, sometimes for very good reasons, and sometimes for very bad reasons.  Why would we expect AIs to be any more capable and reliable than individual humans?  Will the AI lock its muscles and dream like we do?  Show me the spec and source code for a human brain, all one hundred billion neurons, and maybe I&#039;ll think differently.</description>
		<content:encoded><![CDATA[<p>When I was 13 I was interested in AI.  I read all of the books and within about six months understood why advanced machine intelligence was impossible.  Bottom line is there&#8217;s no functional theory of mind.  I love how in movies AIs are both master problem solvers and relentless automatons, as if there were no conflict between the two modes of operation.  People don&#8217;t do what they are told, sometimes for very good reasons, and sometimes for very bad reasons.  Why would we expect AIs to be any more capable and reliable than individual humans?  Will the AI lock its muscles and dream like we do?  Show me the spec and source code for a human brain, all one hundred billion neurons, and maybe I&#8217;ll think differently.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: TheRadicalModerate</title>
		<link>http://www.foresight.org/nanodot/?p=3356#comment-859973</link>
		<dc:creator>TheRadicalModerate</dc:creator>
		<pubDate>Sat, 19 Sep 2009 15:46:08 +0000</pubDate>
		<guid isPermaLink="false">http://www.foresight.org/nanodot/?p=3356#comment-859973</guid>
		<description>Does &quot;human-level&quot; mean &quot;acts like a human&quot; or does it mean &quot;processes the same amount of data as a human but doesn&#039;t necessarily act human&quot;?  The answer to this question depends on which of the two AI camps you fall into.

On one side, you have the knowledge-based expert system / inference engine / symbolic processing folks, who think that the key to getting a machine to think and act like a human is to model the structure of knowledge by adorning symbols with relationships and properties, then throwing compute cycles at that massive data structure until you can process enough of it to start making human-like responses.  This approach has produced some very useful systems that are in wide use today, but I&#039;m skeptical about it &lt;i&gt;ever&lt;/i&gt; producing anything that acts like a human being.

On the other side, you have the neural networking folks, who care very little about the structure of knowledge and view it as an emergent property of larger and larger networks of self-organizing pattern recognition systems.  The neural network people have the advantage that, to a certain extent, they don&#039;t have to worry about the structure of knowledge.  They merely have to mimic something like the structure of the brain and they&#039;re likely to get interesting results.

The other advantage of the neural approach is that it&#039;s very easy to model when you have the same level of computing power as the brain.  When you can simulate a hundred billion neurons, all connected together via about a hundred trillion to a quadrillion synapses, you&#039;re there, to some degree.

We&#039;re just not that far off from being able to do that simulation.  (In fact, I have a spreadsheet that says that you could probably build such a system today if you were willing to throw hundreds of millions of dollars at the problem and maintain a network of about 100,000 fiber-optic cables.)  But the trick to such systems is that we don&#039;t know quite enough about all the various ways that neurons process synaptic information and, once we know that, we don&#039;t know enough about how the brain accomplishes various functions through different local patterns of connections and, maybe even more important, how those local, limited-function regions connect together to produce the flexible system that is the human brain.

So it&#039;s quite possible that, long before we can produce something that acts like a human, we&#039;ll be able to produce systems that do incredibly useful work, but which behave more like insane humans, or even something that&#039;s completely alien but pretty smart.  Given that proviso, I think that 2025 is an entirely reasonable date.</description>
		<content:encoded><![CDATA[<p>Does &#8220;human-level&#8221; mean &#8220;acts like a human&#8221; or does it mean &#8220;processes the same amount of data as a human but doesn&#8217;t necessarily act human&#8221;?  The answer to this question depends on which of the two AI camps you fall into.</p>
<p>On one side, you have the knowledge-based expert system / inference engine / symbolic processing folks, who think that the key to getting a machine to think and act like a human is to model the structure of knowledge by adorning symbols with relationships and properties, then throwing compute cycles at that massive data structure until you can process enough of it to start making human-like responses.  This approach has produced some very useful systems that are in wide use today, but I&#8217;m skeptical about it <i>ever</i> producing anything that acts like a human being.</p>
<p>On the other side, you have the neural networking folks, who care very little about the structure of knowledge and view it as an emergent property of larger and larger networks of self-organizing pattern recognition systems.  The neural network people have the advantage that, to a certain extent, they don&#8217;t have to worry about the structure of knowledge.  They merely have to mimic something like the structure of the brain and they&#8217;re likely to get interesting results.</p>
<p>The other advantage of the neural approach is that it&#8217;s very easy to model when you have the same level of computing power as the brain.  When you can simulate a hundred billion neurons, all connected together via about a hundred trillion to a quadrillion synapses, you&#8217;re there, to some degree.</p>
<p>We&#8217;re just not that far off from being able to do that simulation.  (In fact, I have a spreadsheet that says that you could probably build such a system today if you were willing to throw hundreds of millions of dollars at the problem and maintain a network of about 100,000 fiber-optic cables.)  But the trick to such systems is that we don&#8217;t know quite enough about all the various ways that neurons process synaptic information and, once we know that, we don&#8217;t know enough about how the brain accomplishes various functions through different local patterns of connections and, maybe even more important, how those local, limited-function regions connect together to produce the flexible system that is the human brain.</p>
<p>So it&#8217;s quite possible that, long before we can produce something that acts like a human, we&#8217;ll be able to produce systems that do incredibly useful work, but which behave more like insane humans, or even something that&#8217;s completely alien but pretty smart.  Given that proviso, I think that 2025 is an entirely reasonable date.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Sean Ryan</title>
		<link>http://www.foresight.org/nanodot/?p=3356#comment-859972</link>
		<dc:creator>Sean Ryan</dc:creator>
		<pubDate>Sat, 19 Sep 2009 15:04:03 +0000</pubDate>
		<guid isPermaLink="false">http://www.foresight.org/nanodot/?p=3356#comment-859972</guid>
		<description>An interesting question is what this does to the market for unskilled and semi-skilled labor. Manufacturing jobs of this kind have largely moved offshore and will ostensibly continue to do so. Unskilled workers have managed to retain some economic bargaining power in jobs that cannot be moved offshore, however. The aforementioned janitor’s duties can’t be done from China, nor can DMV clerks do their jobs from India. Indeed, a huge proportion of government jobs seem designed to fund middle-class lifestyles for those without the skills to earn them.

If AI advances within the next 15 years to the point that a large and increasing proportion of unskilled (and typically unionized) jobs can be done by machines, then the implications are as dire for unskilled workers (and labor unions) as they are positive for society as a whole.

Andy Stern’s SEIU has been the fastest-growing union in the nation for years due primarily to organizing just this class of worker. One wonders: will the purple-shirted thugs at recent health care protests cease to exist, or will they be replaced by robots, too - Andy Stern’s own private army of Obamaist Cylons.

http://www.pecuniarius.com/blog/?p=182</description>
		<content:encoded><![CDATA[<p>An interesting question is what this does to the market for unskilled and semi-skilled labor. Manufacturing jobs of this kind have largely moved offshore and will ostensibly continue to do so. Unskilled workers have managed to retain some economic bargaining power in jobs that cannot be moved offshore, however. The aforementioned janitor’s duties can’t be done from China, nor can DMV clerks do their jobs from India. Indeed, a huge proportion of government jobs seem designed to fund middle-class lifestyles for those without the skills to earn them.</p>
<p>If AI advances within the next 15 years to the point that a large and increasing proportion of unskilled (and typically unionized) jobs can be done by machines, then the implications are as dire for unskilled workers (and labor unions) as they are positive for society as a whole.</p>
<p>Andy Stern’s SEIU has been the fastest-growing union in the nation for years due primarily to organizing just this class of worker. One wonders: will the purple-shirted thugs at recent health care protests cease to exist, or will they be replaced by robots, too &#8211; Andy Stern’s own private army of Obamaist Cylons.</p>
<p><a href="http://www.pecuniarius.com/blog/?p=182" rel="nofollow">http://www.pecuniarius.com/blog/?p=182</a></p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Valerie</title>
		<link>http://www.foresight.org/nanodot/?p=3356#comment-859971</link>
		<dc:creator>Valerie</dc:creator>
		<pubDate>Sat, 19 Sep 2009 15:02:48 +0000</pubDate>
		<guid isPermaLink="false">http://www.foresight.org/nanodot/?p=3356#comment-859971</guid>
		<description>&quot;...everything a janitor can do&quot;?   I question that one, even if it is limited to cleaning, which covers a hell of a lot more than vacuuming and mopping, much less maintenance.</description>
		<content:encoded><![CDATA[<p>&#8220;&#8230;everything a janitor can do&#8221;?   I question that one, even if it is limited to cleaning, which covers a hell of a lot more than vacuuming and mopping, much less maintenance.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Artificial Expectations &#171; Mycroft HOLMES 4</title>
		<link>http://www.foresight.org/nanodot/?p=3356#comment-859970</link>
		<dc:creator>Artificial Expectations &#171; Mycroft HOLMES 4</dc:creator>
		<pubDate>Sat, 19 Sep 2009 14:43:11 +0000</pubDate>
		<guid isPermaLink="false">http://www.foresight.org/nanodot/?p=3356#comment-859970</guid>
		<description>[...] Instapundit, the Foresight Institute  discusses a World Future Society forecast predicting human level AI by [...]</description>
		<content:encoded><![CDATA[<p>[...] Instapundit, the Foresight Institute  discusses a World Future Society forecast predicting human level AI by [...]</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Craig Zimmerman</title>
		<link>http://www.foresight.org/nanodot/?p=3356#comment-859969</link>
		<dc:creator>Craig Zimmerman</dc:creator>
		<pubDate>Sat, 19 Sep 2009 12:30:02 +0000</pubDate>
		<guid isPermaLink="false">http://www.foresight.org/nanodot/?p=3356#comment-859969</guid>
		<description>&quot;...if it has to be able to create art and literature...&quot;   What would be remarkable wouldn&#039;t be the ability to create art and literature; it would be the desire, the overwhelming internal need to create, which would signify a breakthrough.</description>
		<content:encoded><![CDATA[<p>&#8220;&#8230;if it has to be able to create art and literature&#8230;&#8221;   What would be remarkable wouldn&#8217;t be the ability to create art and literature; it would be the desire, the overwhelming internal need to create, which would signify a breakthrough.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Slocum</title>
		<link>http://www.foresight.org/nanodot/?p=3356#comment-859967</link>
		<dc:creator>Slocum</dc:creator>
		<pubDate>Sat, 19 Sep 2009 11:18:57 +0000</pubDate>
		<guid isPermaLink="false">http://www.foresight.org/nanodot/?p=3356#comment-859967</guid>
		<description>&lt;i&gt;&quot;I’m fairly certain that we’ll have AI that’s capable of a wide range of human tasks by 2025 — housemaids, butlers, chauffeurs, police and security guards, lots of desk and sales jobs, etc.&quot; &lt;/i&gt;

And I&#039;m fairly certain that we won&#039;t even be  close in 15 years.  Not only do all those positions require human language and vision capabilities (which are both far, far beyond the abilities of current AI systems), they also require a great deal of human judgement (police?!?  With guns?!?).  

No, the pattern has been, and will likely continue to be, that we will have artificial systems that can do a limited (but useful) subset of tasks that humans can do -- but only under controlled conditions and in a &#039;brittle&#039; way that shows little of human flexibility and robustness.  So, OCR is incredibly useful -- but the scans have to be clean, straight, and high-resolution.  Humans can easily read mildly degraded text that OCR systems fail at.  But that&#039;s OK -- OCR is still extremely useful.  The situation is similar with natural language translation -- artificial systems don&#039;t do it the way humans do, and they make absurd errors that no human translator would make, but they are very useful in providing a rough first cut.</description>
		<content:encoded><![CDATA[<p><i>&#8220;I’m fairly certain that we’ll have AI that’s capable of a wide range of human tasks by 2025 — housemaids, butlers, chauffeurs, police and security guards, lots of desk and sales jobs, etc.&#8221; </i></p>
<p>And I&#8217;m fairly certain that we won&#8217;t even be  close in 15 years.  Not only do all those positions require human language and vision capabilities (which are both far, far beyond the abilities of current AI systems), they also require a great deal of human judgement (police?!?  With guns?!?).  </p>
<p>No, the pattern has been, and will likely continue to be, that we will have artificial systems that can do a limited (but useful) subset of tasks that humans can do &#8212; but only under controlled conditions and in a &#8216;brittle&#8217; way that shows little of human flexibility and robustness.  So, OCR is incredibly useful &#8212; but the scans have to be clean, straight, and high-resolution.  Humans can easily read mildly degraded text that OCR systems fail at.  But that&#8217;s OK &#8212; OCR is still extremely useful.  The situation is similar with natural language translation &#8212; artificial systems don&#8217;t do it the way humans do, and they make absurd errors that no human translator would make, but they are very useful in providing a rough first cut.</p>
]]></content:encoded>
	</item>
</channel>
</rss>