<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
		>
<channel>
	<title>Comments on: Is AI really possible?</title>
	<atom:link href="http://www.foresight.org/nanodot/?feed=rss2&#038;p=3702" rel="self" type="application/rss+xml" />
	<link>http://www.foresight.org/nanodot/?p=3702</link>
	<description>examining transformative technology</description>
	<lastBuildDate>Wed, 03 Apr 2013 18:23:47 +0000</lastBuildDate>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=3.0.4</generator>
	<item>
		<title>By: Charles Collins</title>
		<link>http://www.foresight.org/nanodot/?p=3702#comment-866500</link>
		<dc:creator>Charles Collins</dc:creator>
		<pubDate>Sat, 06 Feb 2010 07:36:44 +0000</pubDate>
		<guid isPermaLink="false">http://www.foresight.org/nanodot/?p=3702#comment-866500</guid>
		<description>I am going with never, if you mean fully sentient (unless you are dealing with engineered biological components).</description>
		<content:encoded><![CDATA[<p>I am going with never, if you mean fully sentient (unless you are dealing with engineered biological components).</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: miron</title>
		<link>http://www.foresight.org/nanodot/?p=3702#comment-866383</link>
		<dc:creator>miron</dc:creator>
		<pubDate>Wed, 03 Feb 2010 08:08:33 +0000</pubDate>
		<guid isPermaLink="false">http://www.foresight.org/nanodot/?p=3702#comment-866383</guid>
		<description>I&#039;m a fan of reverse engineering the brain.

I wrote a &lt;a href=&quot;http://hyper.to/blog/link/human-scale-memory-timelin/&quot; rel=&quot;nofollow&quot;&gt;couple of estimators&lt;/a&gt; for when human-scale CPU and memory become available for $1M.  Looks like around 2020.  If the &lt;a href=&quot;http://thebeautifulbrain.com/2010/02/bluebrain-film-preview/&quot; rel=&quot;nofollow&quot;&gt;Blue Brain project&lt;/a&gt; is also successful, then that&#039;s the likely time frame.

If there&#039;s a shortcut to AGI through algorithm work, then it will likely happen before that, which may be bad news.</description>
		<content:encoded><![CDATA[<p>I&#8217;m a fan of reverse engineering the brain.</p>
<p>I wrote a <a href="http://hyper.to/blog/link/human-scale-memory-timelin/" rel="nofollow">couple of estimators</a> for when human-scale CPU and memory become available for $1M.  Looks like around 2020.  If the <a href="http://thebeautifulbrain.com/2010/02/bluebrain-film-preview/" rel="nofollow">Blue Brain project</a> is also successful, then that&#8217;s the likely time frame.</p>
<p>If there&#8217;s a shortcut to AGI through algorithm work, then it will likely happen before that, which may be bad news.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Valkyrie Ice</title>
		<link>http://www.foresight.org/nanodot/?p=3702#comment-866235</link>
		<dc:creator>Valkyrie Ice</dc:creator>
		<pubDate>Thu, 28 Jan 2010 04:42:38 +0000</pubDate>
		<guid isPermaLink="false">http://www.foresight.org/nanodot/?p=3702#comment-866235</guid>
		<description>Let me clarify one thing first.

I believe we will have non-sentient human level &quot;AI&quot; within the next decade. We&#039;ll have a robot capable of limited intelligence. A sales clerk robot could be extremely adept at every task a sales clerk needs to perform, but wouldn&#039;t know how to, say, drive a car, or answer questions outside of its knowledge base. Limited AGI would probably be a good term. It could seem perfectly human, so long as it is not faced with a task outside its specific programming. A maid could be extremely versatile, and very human, but I wouldn&#039;t expect it to be able to decide it hated being a maid because it wanted to be a race car driver instead. We&#039;re pretty close to this level of AI now.

For SENTIENT AI, I would say 2035-2050, sentient AI meaning one 100% equal to human versatility in thought and knowledge. This would be the AI that is indistinguishable from a human upload. And I would probably say such an AI would likely develop at nearly the same time as human uploading becomes possible. Either one could be the breakthrough that leads to the other.</description>
		<content:encoded><![CDATA[<p>Let me clarify one thing first.</p>
<p>I believe we will have non-sentient human level &#8220;AI&#8221; within the next decade. We&#8217;ll have a robot capable of limited intelligence. A sales clerk robot could be extremely adept at every task a sales clerk needs to perform, but wouldn&#8217;t know how to, say, drive a car, or answer questions outside of its knowledge base. Limited AGI would probably be a good term. It could seem perfectly human, so long as it is not faced with a task outside its specific programming. A maid could be extremely versatile, and very human, but I wouldn&#8217;t expect it to be able to decide it hated being a maid because it wanted to be a race car driver instead. We&#8217;re pretty close to this level of AI now.</p>
<p>For SENTIENT AI, I would say 2035-2050, sentient AI meaning one 100% equal to human versatility in thought and knowledge. This would be the AI that is indistinguishable from a human upload. And I would probably say such an AI would likely develop at nearly the same time as human uploading becomes possible. Either one could be the breakthrough that leads to the other.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Alfred Neunzoller</title>
		<link>http://www.foresight.org/nanodot/?p=3702#comment-866216</link>
		<dc:creator>Alfred Neunzoller</dc:creator>
		<pubDate>Tue, 26 Jan 2010 16:24:36 +0000</pubDate>
		<guid isPermaLink="false">http://www.foresight.org/nanodot/?p=3702#comment-866216</guid>
		<description>I think it&#039;s going to be technically possible in the next decade, but will only actually be built in the 2020s.</description>
		<content:encoded><![CDATA[<p>I think it&#8217;s going to be technically possible in the next decade, but will only actually be built in the 2020s.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Will</title>
		<link>http://www.foresight.org/nanodot/?p=3702#comment-866215</link>
		<dc:creator>Will</dc:creator>
		<pubDate>Tue, 26 Jan 2010 14:37:07 +0000</pubDate>
		<guid isPermaLink="false">http://www.foresight.org/nanodot/?p=3702#comment-866215</guid>
		<description>@James: I admire your optimism and enthusiasm, and I do think there&#039;s an outside chance that you&#039;ll be proven totally correct. That said, I&#039;m mostly with dz on his response.

&quot;Pigeon level&quot; AI, and indeed any other animal-equivalent AI, can either be understood as the information processing capability of a pigeon (which, as Fred pointed out, is more impressive than we might at first think), or the extent to which a pigeon can optimize its environment. If you haven&#039;t already, read the first section at http://sl4.org/wiki/KnowabilityOfFAI for exactly what intelligence means in terms of optimization. Pigeon AI doesn&#039;t need to act like a pigeon; it just needs to be able to affect the world to the degree that a pigeon does. As dz pointed out, there are lots of applications, military or otherwise, for something even with that level of intelligence.</description>
		<content:encoded><![CDATA[<p>@James: I admire your optimism and enthusiasm, and I do think there&#8217;s an outside chance that you&#8217;ll be proven totally correct. That said, I&#8217;m mostly with dz on his response.</p>
<p>&#8220;Pigeon level&#8221; AI, and indeed any other animal-equivalent AI, can either be understood as the information processing capability of a pigeon (which, as Fred pointed out, is more impressive than we might at first think), or the extent to which a pigeon can optimize its environment. If you haven&#8217;t already, read the first section at <a href="http://sl4.org/wiki/KnowabilityOfFAI" rel="nofollow">http://sl4.org/wiki/KnowabilityOfFAI</a> for exactly what intelligence means in terms of optimization. Pigeon AI doesn&#8217;t need to act like a pigeon; it just needs to be able to affect the world to the degree that a pigeon does. As dz pointed out, there are lots of applications, military or otherwise, for something even with that level of intelligence.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: biobob</title>
		<link>http://www.foresight.org/nanodot/?p=3702#comment-866212</link>
		<dc:creator>biobob</dc:creator>
		<pubDate>Tue, 26 Jan 2010 07:46:17 +0000</pubDate>
		<guid isPermaLink="false">http://www.foresight.org/nanodot/?p=3702#comment-866212</guid>
		<description>Instead of a big iron version of AI, it is one of my contentions that we will use our growing understanding of genetics to clone an organic brain to do the job [although I am somewhat puzzled at what exactly an AI would be good for].  After all, evolution has done ALL the software and hardware debugging for us over the eons.

Alternatively, humans are plentiful, cheap, and already have natural intelligence of a sort - perhaps we will do AI the old-fashioned way - with people, rofl

...</description>
		<content:encoded><![CDATA[<p>Instead of a big iron version of AI, it is one of my contentions that we will use our growing understanding of genetics to clone an organic brain to do the job [although I am somewhat puzzled at what exactly an AI would be good for].  After all, evolution has done ALL the software and hardware debugging for us over the eons.</p>
<p>Alternatively, humans are plentiful, cheap, and already have natural intelligence of a sort &#8211; perhaps we will do AI the old-fashioned way &#8211; with people, rofl</p>
<p>&#8230;</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Eric Williams</title>
		<link>http://www.foresight.org/nanodot/?p=3702#comment-866188</link>
		<dc:creator>Eric Williams</dc:creator>
		<pubDate>Tue, 26 Jan 2010 02:05:42 +0000</pubDate>
		<guid isPermaLink="false">http://www.foresight.org/nanodot/?p=3702#comment-866188</guid>
		<description>@ J. Storrs Hall:
&lt;blockquote&gt;So (true) AI will start out with an “AI baby” and grow up. An AI baby won’t be like a human baby — it’ll have wikipedia built in, and never need to be toilet trained — and the first one will have all sorts of weird cognitive deficits because we don’t have the software right. But I wouldn’t be too surprised to see lots of AI babies in the 20s, or even the late teens. If that was a human baby, that means an adult AI by 2040. How its development will actually go is anyone’s guess.&lt;/blockquote&gt;

This argument has a flaw to me: it&#039;s assuming realtime development of the &quot;AI baby&quot;. Over those 10 years, since supercomputer power has been doubling every 14 months for 50 years, the AGI will be experiencing 500 years in the span of 1 year for us (10 years after it reached realtime speeds). Also, consider all of the reasons for humans&#039; slow development: we sleep 1/3 of the time, we play 1/3 of the time, and much of our work is spent regurgitating and relearning. We take in information at an incredibly slow rate (reading at a few hundred words per minute). An AGI that could watch videos and learn, or &quot;read&quot; (process text) and learn, could watch videos at thousands of times the speed we do, and &quot;read&quot; at millions of times the speed that we do. It seems like, even without the hardware speed increase, an AGI would grow up in weeks.</description>
		<content:encoded><![CDATA[<p>@ J. Storrs Hall:</p>
<blockquote><p>So (true) AI will start out with an “AI baby” and grow up. An AI baby won’t be like a human baby — it’ll have wikipedia built in, and never need to be toilet trained — and the first one will have all sorts of weird cognitive deficits because we don’t have the software right. But I wouldn’t be too surprised to see lots of AI babies in the 20s, or even the late teens. If that was a human baby, that means an adult AI by 2040. How its development will actually go is anyone’s guess.</p></blockquote>
<p>This argument has a flaw to me: it&#8217;s assuming realtime development of the &#8220;AI baby&#8221;. Over those 10 years, since supercomputer power has been doubling every 14 months for 50 years, the AGI will be experiencing 500 years in the span of 1 year for us (10 years after it reached realtime speeds). Also, consider all of the reasons for humans&#8217; slow development: we sleep 1/3 of the time, we play 1/3 of the time, and much of our work is spent regurgitating and relearning. We take in information at an incredibly slow rate (reading at a few hundred words per minute). An AGI that could watch videos and learn, or &#8220;read&#8221; (process text) and learn, could watch videos at thousands of times the speed we do, and &#8220;read&#8221; at millions of times the speed that we do. It seems like, even without the hardware speed increase, an AGI would grow up in weeks.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Eric Williams</title>
		<link>http://www.foresight.org/nanodot/?p=3702#comment-866187</link>
		<dc:creator>Eric Williams</dc:creator>
		<pubDate>Tue, 26 Jan 2010 01:59:13 +0000</pubDate>
		<guid isPermaLink="false">http://www.foresight.org/nanodot/?p=3702#comment-866187</guid>
		<description>Well, first, I think some more operational definitions are in order. Let&#039;s assume &quot;human-level AI&quot; means &quot;capable of non-emotional reasoning and problem solving at the level of the average human in realtime&quot;. Basically, what we currently understand as the functionality of the neocortex (Strong AI). I think the &quot;in realtime&quot; is important, because if we can run an AGI at 1/1000th the speed of a human brain (still quite a feat), we&#039;re quite a few years from being able to interact with it.

Quite a few people picked 2030, but I didn&#039;t see any real reasoning behind the number other than increasing computing speeds. L Zoel touched on simulation; this seems like a good baseline for a pessimistic projection. Let&#039;s use the Blue Brain project as our benchmark, since it&#039;s the farthest along in true neuronal simulation (with interconnects and not simple point neurons).

&lt;a href=&quot;http://bluebrain.epfl.ch/page18924.html&quot; rel=&quot;nofollow&quot;&gt;Blue Brain can simulate a rat-level neocortical column (~10,000 neurons) in realtime on an IBM Blue Gene/L supercomputer (36 TFLOPS).&lt;/a&gt; These are advanced neuronal simulations at the cellular level, including interconnects between neurons. A human neocortical column has ~50,000 neurons (varies of course). Assuming the complexity squares with increasing NCCs (due to interconnects), 25x more computational power is required to simulate 1 neocortical column, roughly 1 petaflop, in the range of the fastest supercomputers today. The human neocortex has between 2 and 5 million neocortical columns. This means that a zettaflop computer (1 million times more powerful than today&#039;s fastest supercomputers) is required to run the Blue Brain simulation, in its current state, on the scale of a human neocortex.

Now, this is incredibly inefficient. We aren&#039;t actually writing intelligence algorithms, just simulating the brain down to the cellular level. A researcher from Sandia Labs predicts that &lt;a href=&quot;http://portal.acm.org/citation.cfm?id=1049989&quot; rel=&quot;nofollow&quot;&gt;with a zettaflop computer, we could model the entire world&#039;s weather patterns at a resolution of under 100m for 2 weeks&lt;/a&gt;. Clearly this is far beyond the scope of what 1 human brain is capable of, yet the hardware required to do both is identical. I think it speaks to the inefficiency of the simulation, and the potential for simplification of an AI model.

But even with this pessimistic outcome for AI, if the colloquial version of Moore&#039;s Law holds, by 2030 we will have the processing power to do this. Any other advances in actual AI algorithms (Jeff Hawkins&#039; NuPIC software excels at the pattern recognition many here have mentioned; I think his HTM theory holds much promise) could speed things along. I think 2030-2050 is a sure thing if computers keep pace, and it looks to me like they will. Shrinking MOSFETs to 16nm by 2016, 3D chip stacking, optical chip interconnects, self-assembling CNTFETs, graphene clock multipliers: these are all things being experimented with and tested now that don&#039;t require any wildcard technologies (like quantum computing, single photon transistors, molecular computing, etc.).

2030-2050 has my vote...</description>
		<content:encoded><![CDATA[<p>Well, first, I think some more operational definitions are in order. Let&#8217;s assume &#8220;human-level AI&#8221; means &#8220;capable of non-emotional reasoning and problem solving at the level of the average human in realtime&#8221;. Basically, what we currently understand as the functionality of the neocortex (Strong AI). I think the &#8220;in realtime&#8221; is important, because if we can run an AGI at 1/1000th the speed of a human brain (still quite a feat), we&#8217;re quite a few years from being able to interact with it.</p>
<p>Quite a few people picked 2030, but I didn&#8217;t see any real reasoning behind the number other than increasing computing speeds. L Zoel touched on simulation; this seems like a good baseline for a pessimistic projection. Let&#8217;s use the Blue Brain project as our benchmark, since it&#8217;s the farthest along in true neuronal simulation (with interconnects and not simple point neurons).</p>
<p><a href="http://bluebrain.epfl.ch/page18924.html" rel="nofollow">Blue Brain can simulate a rat-level neocortical column (~10,000 neurons) in realtime on an IBM Blue Gene/L supercomputer (36 TFLOPS).</a> These are advanced neuronal simulations at the cellular level, including interconnects between neurons. A human neocortical column has ~50,000 neurons (varies of course). Assuming the complexity squares with increasing NCCs (due to interconnects), 25x more computational power is required to simulate 1 neocortical column, roughly 1 petaflop, in the range of the fastest supercomputers today. The human neocortex has between 2 and 5 million neocortical columns. This means that a zettaflop computer (1 million times more powerful than today&#8217;s fastest supercomputers) is required to run the Blue Brain simulation, in its current state, on the scale of a human neocortex.</p>
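<p>The back-of-the-envelope arithmetic above can be sketched in a few lines of Python. This is a rough estimate using the figures quoted in this comment and its quadratic-scaling assumption, not measured data:</p>

```python
# Scaling estimate from the comment's own figures (assumptions, not facts):
#  - rat neocortical column: ~10,000 neurons, realtime at 36 TFLOPS (Blue Brain)
#  - human column: ~50,000 neurons; cost assumed to scale with neurons squared
#  - human neocortex: 2-5 million columns, assumed to add linearly

rat_neurons = 10_000
rat_flops = 36e12                      # 36 TFLOPS for one rat column in realtime
human_neurons = 50_000
columns_low, columns_high = 2e6, 5e6   # human neocortical column count range

# Quadratic scaling in neuron count: (50k / 10k)^2 = 25x per column
per_column_flops = rat_flops * (human_neurons / rat_neurons) ** 2
print(f"per human column: {per_column_flops:.1e} FLOPS")   # ~9e14, about 1 petaflop

# Whole neocortex: one column's cost times the column count
total_low = per_column_flops * columns_low
total_high = per_column_flops * columns_high
print(f"whole neocortex: {total_low:.1e} to {total_high:.1e} FLOPS")  # ~2e21-5e21, zettaflop scale
```

<p>The result lands in the low zettaflop range either way, which is where the "1 million times today's fastest supercomputers" figure comes from.</p>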
<p>Now, this is incredibly inefficient. We aren&#8217;t actually writing intelligence algorithms, just simulating the brain down to the cellular level. A researcher from Sandia Labs predicts that <a href="http://portal.acm.org/citation.cfm?id=1049989" rel="nofollow">with a zettaflop computer, we could model the entire world&#8217;s weather patterns at a resolution of under 100m for 2 weeks</a>. Clearly this is far beyond the scope of what 1 human brain is capable of, yet the hardware required to do both is identical. I think it speaks to the inefficiency of the simulation, and the potential for simplification of an AI model.</p>
<p>But even with this pessimistic outcome for AI, if the colloquial version of Moore&#8217;s Law holds, by 2030 we will have the processing power to do this. Any other advances in actual AI algorithms (Jeff Hawkins&#8217; NuPIC software excels at the pattern recognition many here have mentioned; I think his HTM theory holds much promise) could speed things along. I think 2030-2050 is a sure thing if computers keep pace, and it looks to me like they will. Shrinking MOSFETs to 16nm by 2016, 3D chip stacking, optical chip interconnects, self-assembling CNTFETs, graphene clock multipliers: these are all things being experimented with and tested now that don&#8217;t require any wildcard technologies (like quantum computing, single photon transistors, molecular computing, etc.).</p>
<p>2030-2050 has my vote&#8230;</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: dz</title>
		<link>http://www.foresight.org/nanodot/?p=3702#comment-866158</link>
		<dc:creator>dz</dc:creator>
		<pubDate>Mon, 25 Jan 2010 19:49:15 +0000</pubDate>
		<guid isPermaLink="false">http://www.foresight.org/nanodot/?p=3702#comment-866158</guid>
		<description>James Gentile,

Respectfully, I disagree.  I think the US Armed Forces would love a pigeon level AI that could handle most of the flying of their unmanned drones, or that could control the Big Dog quadruped robot.  I don&#039;t think you will see billions of dollars spent on human level AI until well after supercomputers have greatly exceeded the hardware needed for it.  At that point, the military applications would be clear to everyone.

Another major supporter of general AI could be financial firms that would benefit from a superfast financial analyst, or labor poor polities that need an inexpensive replacement for physical labor (Japan, Singapore, some parts of US and Europe).  

By the time any of these groups bankroll GAI, the hardware will let the software run much faster so that any GAI created will be able to think much faster than humans.  Until then, underfunded research will continue to make inroads in GAI design.</description>
		<content:encoded><![CDATA[<p>James Gentile,</p>
<p>Respectfully, I disagree.  I think the US Armed Forces would love a pigeon level AI that could handle most of the flying of their unmanned drones, or that could control the Big Dog quadruped robot.  I don&#8217;t think you will see billions of dollars spent on human level AI until well after supercomputers have greatly exceeded the hardware needed for it.  At that point, the military applications would be clear to everyone.</p>
<p>Another major supporter of general AI could be financial firms that would benefit from a superfast financial analyst, or labor poor polities that need an inexpensive replacement for physical labor (Japan, Singapore, some parts of US and Europe).  </p>
<p>By the time any of these groups bankroll GAI, the hardware will let the software run much faster so that any GAI created will be able to think much faster than humans.  Until then, underfunded research will continue to make inroads in GAI design.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: James Gentile</title>
		<link>http://www.foresight.org/nanodot/?p=3702#comment-866156</link>
		<dc:creator>James Gentile</dc:creator>
		<pubDate>Mon, 25 Jan 2010 18:09:47 +0000</pubDate>
		<guid isPermaLink="false">http://www.foresight.org/nanodot/?p=3702#comment-866156</guid>
		<description>Re: Fred Hapgood, I believe pigeon level AI is already possible, but how are you going to get investors to pour millions or billions into a project that is going to result in a computer that can only fly around, eat bugs, and poop on cars?  I don&#039;t think AI is going to get any kind of real investment until human level AI is possible. The human brain operates at around 10^15 computations per second, and several entities plan to have supercomputers in this range next year and the year after (2011 and 2012). All the necessary brain algorithms are known, according to people like the CEO of AI company Novamente, so what&#039;s the hold-up then? It&#039;s that the people who matter (investors, govt, corps) aren&#039;t going to be interested in pet AI; they will only be interested in REAL human level AI, and that will be possible in the next year or two on only the most powerful supercomputers.</description>
		<content:encoded><![CDATA[<p>Re: Fred Hapgood, I believe pigeon level AI is already possible, but how are you going to get investors to pour millions or billions into a project that is going to result in a computer that can only fly around, eat bugs, and poop on cars?  I don&#8217;t think AI is going to get any kind of real investment until human level AI is possible. The human brain operates at around 10^15 computations per second, and several entities plan to have supercomputers in this range next year and the year after (2011 and 2012). All the necessary brain algorithms are known, according to people like the CEO of AI company Novamente, so what&#8217;s the hold-up then? It&#8217;s that the people who matter (investors, govt, corps) aren&#8217;t going to be interested in pet AI; they will only be interested in REAL human level AI, and that will be possible in the next year or two on only the most powerful supercomputers.</p>
]]></content:encoded>
	</item>
</channel>
</rss>