<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
		>
<channel>
	<title>Comments on: AI: how close are we?</title>
	<atom:link href="http://www.foresight.org/nanodot/?feed=rss2&#038;p=3707" rel="self" type="application/rss+xml" />
	<link>http://www.foresight.org/nanodot/?p=3707</link>
	<description>examining transformative technology</description>
	<lastBuildDate>Wed, 03 Apr 2013 18:23:47 +0000</lastBuildDate>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=3.0.4</generator>
	<item>
		<title>By: Jeremy Roberts</title>
		<link>http://www.foresight.org/nanodot/?p=3707#comment-885711</link>
		<dc:creator>Jeremy Roberts</dc:creator>
		<pubDate>Fri, 04 Jun 2010 10:41:07 +0000</pubDate>
		<guid isPermaLink="false">http://www.foresight.org/nanodot/?p=3707#comment-885711</guid>
		<description>This is a fascinating topic, and I have an optimistic perspective. I agree that we are a miracle of adaptability, but machines could eventually be so too, and maybe they will need time to evolve, if we recreate our own setting that triggered the way we evolved.</description>
		<content:encoded><![CDATA[<p>This is a fascinating topic, and I have an optimistic perspective. I agree that we are a miracle of adaptability, but machines could eventually be so too, and maybe they will need time to evolve, if we recreate our own setting that triggered the way we evolved.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: dz</title>
		<link>http://www.foresight.org/nanodot/?p=3707#comment-866448</link>
		<dc:creator>dz</dc:creator>
		<pubDate>Thu, 04 Feb 2010 22:36:42 +0000</pubDate>
		<guid isPermaLink="false">http://www.foresight.org/nanodot/?p=3707#comment-866448</guid>
		<description>Jared,

100 billion neurons x 10,000 connections = 1 quadrillion connections firing at 200 Hz.  7 neurotransmitters can be represented by 3 bits of data.  So say, 600 quadrillion bits flying around per second.  That&#039;s 600 petaflops, roughly.  10 petaflop supercomputers are under construction today so we will be well within the 600 petaflop range in 10 years.  

Unless you are a dualist, you must concede that an artificial brain can be built at least as a black box, even if we don&#039;t understand how it all works.  Cells in the mouse hypothalamus have been replaced by silicon chips - we aren&#039;t sure what is being done with the signals sent out of the chips, but they mimic exactly what the cells were doing before they were replaced.  The mice function normally.

For 50 years people have made predictions regarding AI, but have not been able to substantiate those claims.  Today we are already able to replace neurons with silicon and fully simulate a neocortical column.  Rather than trying to create an expert system that can manage millions of rules and somehow look intelligent, we are now copying the structure and function of brains directly.

AI researchers are not so much building an airplane as they are building an artificial bird.  Hopefully, the plane will come later, once the bird can fly 30,000 kph :-)</description>
		<content:encoded><![CDATA[<p>Jared,</p>
<p>100 billion neurons x 10,000 connections = 1 quadrillion connections firing at 200 Hz.  7 neurotransmitters can be represented by 3 bits of data.  So say, 600 quadrillion bits flying around per second.  That&#8217;s 600 petaflops, roughly.  10 petaflop supercomputers are under construction today so we will be well within the 600 petaflop range in 10 years.  </p>
<p>Unless you are a dualist, you must concede that an artificial brain can be built at least as a black box, even if we don&#8217;t understand how it all works.  Cells in the mouse hypothalamus have been replaced by silicon chips &#8211; we aren&#8217;t sure what is being done with the signals sent out of the chips, but they mimic exactly what the cells were doing before they were replaced.  The mice function normally.</p>
<p>For 50 years people have made predictions regarding AI, but have not been able to substantiate those claims.  Today we are already able to replace neurons with silicon and fully simulate a neocortical column.  Rather than trying to create an expert system that can manage millions of rules and somehow look intelligent, we are now copying the structure and function of brains directly.</p>
<p>AI researchers are not so much building an airplane as they are building an artificial bird.  Hopefully, the plane will come later, once the bird can fly 30,000 kph <img src='http://www.foresight.org/nanodot/wp-includes/images/smilies/icon_smile.gif' alt=':-)' class='wp-smiley' /> </p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Jared</title>
		<link>http://www.foresight.org/nanodot/?p=3707#comment-866445</link>
		<dc:creator>Jared</dc:creator>
		<pubDate>Thu, 04 Feb 2010 19:52:22 +0000</pubDate>
		<guid isPermaLink="false">http://www.foresight.org/nanodot/?p=3707#comment-866445</guid>
		<description>A very interesting and thought-provoking article. One of many that I have read over the span of my life. Many of the comments are also quite thought-provoking.

It is interesting how closely the debate, research and execution of AI follows the patterns of the debate between free will and determinism (which could also just as easily be stated: the philosophy of consciousness).

However, my one minor quibble is with your final statement where you speculate that things might reach a state by the year 2020 that would enable effective implementation of AI. I find that statement ironic only in the context of the history of AI. About once every 5 or 10 years some expert comes out and says &quot;We&#039;ll see true AI in 10 to 15 years&quot; - and they&#039;ve been saying that since the 50&#039;s.

I think the difficulties are far greater than anyone connected with the field is able to comprehend. Why else is there such optimism over such a long period of time? Technological advances grant us an even deeper sense of optimism, and why not? We are creating things that people wouldn&#039;t have dreamed possible even 10 years ago. But despite the advances, we&#039;re still at least 10 years away from AI - and we always have been (apparently).

It is my opinion that I will not see true AI in my lifetime (which, hopefully, will be at least another 40 years or so). Despite advances in computer technology (even assuming the advent of quantum computers) we are left with a fundamental inability of our programming languages to bridge the gap between &quot;act in accordance with your programming&quot; (which as a previous commenter said can be very clever indeed) and what humans evaluate as &quot;creativity&quot; or &quot;genius&quot;.

Perhaps I will be deemed to fall into the fold of &quot;mysterianism&quot; which holds that human consciousness is unique and has some magical quality that can never be imbued into computers. I favor a much more simplistic definition: we are not binary.

The complexity of the human brain includes: approximately 100 billion neurons, with each neuron intertwined and connected with upwards of 10,000 other neurons, each one capable of using 7 different known neurotransmitters used in solo or combination to convey specific messages across the network. I&#039;m afraid to even try and calculate the math for how that works out in raw computational power.

As soon as you&#039;re able to develop a computer that can even approach that kind of messaging complexity, then I&#039;ll begin to believe that AI is possible.</description>
		<content:encoded><![CDATA[<p>A very interesting and thought-provoking article. One of many that I have read over the span of my life. Many of the comments are also quite thought-provoking.</p>
<p>It is interesting how closely the debate, research and execution of AI follows the patterns of the debate between free will and determinism (which could also just as easily be stated: the philosophy of consciousness).</p>
<p>However, my one minor quibble is with your final statement where you speculate that things might reach a state by the year 2020 that would enable effective implementation of AI. I find that statement ironic only in the context of the history of AI. About once every 5 or 10 years some expert comes out and says &#8220;We&#8217;ll see true AI in 10 to 15 years&#8221; &#8211; and they&#8217;ve been saying that since the 50&#8217;s.</p>
<p>I think the difficulties are far greater than anyone connected with the field is able to comprehend. Why else is there such optimism over such a long period of time? Technological advances grant us an even deeper sense of optimism, and why not? We are creating things that people wouldn&#8217;t have dreamed possible even 10 years ago. But despite the advances, we&#8217;re still at least 10 years away from AI &#8211; and we always have been (apparently).</p>
<p>It is my opinion that I will not see true AI in my lifetime (which, hopefully, will be at least another 40 years or so). Despite advances in computer technology (even assuming the advent of quantum computers) we are left with a fundamental inability of our programming languages to bridge the gap between &#8220;act in accordance with your programming&#8221; (which as a previous commenter said can be very clever indeed) and what humans evaluate as &#8220;creativity&#8221; or &#8220;genius&#8221;.</p>
<p>Perhaps I will be deemed to fall into the fold of &#8220;mysterianism&#8221; which holds that human consciousness is unique and has some magical quality that can never be imbued into computers. I favor a much more simplistic definition: we are not binary.</p>
<p>The complexity of the human brain includes: approximately 100 billion neurons, with each neuron intertwined and connected with upwards of 10,000 other neurons, each one capable of using 7 different known neurotransmitters used in solo or combination to convey specific messages across the network. I&#8217;m afraid to even try and calculate the math for how that works out in raw computational power.</p>
<p>As soon as you&#8217;re able to develop a computer that can even approach that kind of messaging complexity, then I&#8217;ll begin to believe that AI is possible.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: mb</title>
		<link>http://www.foresight.org/nanodot/?p=3707#comment-866428</link>
		<dc:creator>mb</dc:creator>
		<pubDate>Thu, 04 Feb 2010 05:39:38 +0000</pubDate>
		<guid isPermaLink="false">http://www.foresight.org/nanodot/?p=3707#comment-866428</guid>
		<description>AI which can work as a software engineer -- how close are we?

This seems to be the crucial question.</description>
		<content:encoded><![CDATA[<p>AI which can work as a software engineer &#8212; how close are we?</p>
<p>This seems to be the crucial question.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: TMavenger</title>
		<link>http://www.foresight.org/nanodot/?p=3707#comment-866417</link>
		<dc:creator>TMavenger</dc:creator>
		<pubDate>Thu, 04 Feb 2010 00:52:40 +0000</pubDate>
		<guid isPermaLink="false">http://www.foresight.org/nanodot/?p=3707#comment-866417</guid>
		<description>This interesting discussion suffers from the fundamental flaw in the field of Artificial Intelligence: It is focused on the wrong subject. Consequently we have wasted decades groping for an acceptable definition of intelligence when the answer is obvious:

Intelligence is the ability to solve problems through mental effort. 

Unfortunately this definition undermines the entire field of AI, for several reasons:

1. Computers HAVE NO PROBLEMS. They are neither alive nor self-aware. Therefore the concept of &quot;problem&quot; cannot apply to them. Nevertheless, this is not a significant objection, because we are simply trying to get them to solve OUR problems.

2. Use of the word &quot;Intelligence&quot; in AI is a misnomer. Many of the characteristics commonly understood to constitute human intelligence were trivial problems, solved very early in the development of computers (for example, the ability to do complex math in a very short time with absolute accuracy). Or the ability to generate logical results reliably from indefinitely complex premises. Or the ability to play a passable game of chess (substitute any other game with well-defined rules).

On the other hand, we have had very limited success in getting computers to replicate abilities which are trivial for humans, for example the ability to recognize a face, carry on a conversation (pass the Turing Test), or generate a mathematical proof. 

The field of &quot;Artificial Intelligence&quot; is largely concerned with human capabilities that are NOT generally considered characteristics of intelligent humans. Instead it attempts to reproduce those things humans do WITHOUT KNOWING HOW. This is the crux of the problem. Any activity that can be expressed as an algorithm can be coded onto a Turing Machine, but in order to produce an algorithm the coder must first understand how to perform the activity. Thus, complex mathematics was easily accomplished on computers, because WE KNOW EXACTLY HOW WE DO IT. Chess was not difficult to program because it has a small set of rules and a well-known set of strategies that lent themselves to algorithmic programming.  Facial recognition, on the other hand, is something we do WITHOUT KNOWING HOW. The problem, therefore, is figuring out how we accomplish an activity before we can tell a machine how to do it. We have made some progress on some of these problems, but we are very far from answering the fundamental questions such as the nature of consciousness. These problems have resisted understanding by philosophers for at least 2500 years, and are not likely to be solved by computer scientists in 50. For this reason I don&#039;t expect human intelligence to be replicated on machines in the foreseeable future, if ever.

This is also why the field of Artificial Intelligence should more correctly be called Artificial Instinct.</description>
		<content:encoded><![CDATA[<p>This interesting discussion suffers from the fundamental flaw in the field of Artificial Intelligence: It is focused on the wrong subject. Consequently we have wasted decades groping for an acceptable definition of intelligence when the answer is obvious:</p>
<p>Intelligence is the ability to solve problems through mental effort. </p>
<p>Unfortunately this definition undermines the entire field of AI, for several reasons:</p>
<p>1. Computers HAVE NO PROBLEMS. They are neither alive nor self-aware. Therefore the concept of &#8220;problem&#8221; cannot apply to them. Nevertheless, this is not a significant objection, because we are simply trying to get them to solve OUR problems.</p>
<p>2. Use of the word &#8220;Intelligence&#8221; in AI is a misnomer. Many of the characteristics commonly understood to constitute human intelligence were trivial problems, solved very early in the development of computers (for example, the ability to do complex math in a very short time with absolute accuracy). Or the ability to generate logical results reliably from indefinitely complex premises. Or the ability to play a passable game of chess (substitute any other game with well-defined rules).</p>
<p>On the other hand, we have had very limited success in getting computers to replicate abilities which are trivial for humans, for example the ability to recognize a face, carry on a conversation (pass the Turing Test), or generate a mathematical proof. </p>
<p>The field of &#8220;Artificial Intelligence&#8221; is largely concerned with human capabilities that are NOT generally considered characteristics of intelligent humans. Instead it attempts to reproduce those things humans do WITHOUT KNOWING HOW. This is the crux of the problem. Any activity that can be expressed as an algorithm can be coded onto a Turing Machine, but in order to produce an algorithm the coder must first understand how to perform the activity. Thus, complex mathematics was easily accomplished on computers, because WE KNOW EXACTLY HOW WE DO IT. Chess was not difficult to program because it has a small set of rules and a well-known set of strategies that lent themselves to algorithmic programming.  Facial recognition, on the other hand, is something we do WITHOUT KNOWING HOW. The problem, therefore, is figuring out how we accomplish an activity before we can tell a machine how to do it. We have made some progress on some of these problems, but we are very far from answering the fundamental questions such as the nature of consciousness. These problems have resisted understanding by philosophers for at least 2500 years, and are not likely to be solved by computer scientists in 50. For this reason I don&#8217;t expect human intelligence to be replicated on machines in the foreseeable future, if ever.</p>
<p>This is also why the field of Artificial Intelligence should more correctly be called Artificial Instinct.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Alex Kilpatrick</title>
		<link>http://www.foresight.org/nanodot/?p=3707#comment-866416</link>
		<dc:creator>Alex Kilpatrick</dc:creator>
		<pubDate>Thu, 04 Feb 2010 00:38:06 +0000</pubDate>
		<guid isPermaLink="false">http://www.foresight.org/nanodot/?p=3707#comment-866416</guid>
		<description>&quot;I think we have the techniques now to build an AI at the hypo/dia border, equivalent to a dull but functional human.&quot;

I did my PhD research in AI (used Peter Norvig&#039;s book in my graduate studies, by the way).  I wrote a program for my dissertation that &quot;learned&quot; by itself, but ultimately I left the field.  All of the so-called gains in AI are still a million miles away from the &quot;dull but functional human.&quot;  There are some things like playing chess that computers do really well.  And intelligent humans do those things too.  But that in no way means the computer is remotely intelligent.

The whole AI field is nothing but clever programming.  Some of those programs are quite clever indeed, but they represent the intelligence of their creators, not the programs.  Some programs may appear intelligent in very narrow domains, but they are extremely brittle -- they will not be useful at all even on the borders of the domains for which they were designed.  I have yet to see a program that has a modicum of intelligence or adaptability outside of a very, very narrow domain.   They are more like an idiot savant that can add up sums of large numbers but can&#039;t figure out how to open a door.

People really underestimate the magic of human intelligence, even in the dull but functional humans.  Humans are a miracle of adaptability that a computer will never even approach.  It isn&#039;t a question of FLOPS or GPUs.  We have such an incredibly limited understanding of our own intelligence, how can we have the arrogance to think we can make an intelligent computer?</description>
		<content:encoded><![CDATA[<p>&#8220;I think we have the techniques now to build an AI at the hypo/dia border, equivalent to a dull but functional human.&#8221;</p>
<p>I did my PhD research in AI (used Peter Norvig&#8217;s book in my graduate studies, by the way).  I wrote a program for my dissertation that &#8220;learned&#8221; by itself, but ultimately I left the field.  All of the so-called gains in AI are still a million miles away from the &#8220;dull but functional human.&#8221;  There are some things like playing chess that computers do really well.  And intelligent humans do those things too.  But that in no way means the computer is remotely intelligent.</p>
<p>The whole AI field is nothing but clever programming.  Some of those programs are quite clever indeed, but they represent the intelligence of their creators, not the programs.  Some programs may appear intelligent in very narrow domains, but they are extremely brittle &#8212; they will not be useful at all even on the borders of the domains for which they were designed.  I have yet to see a program that has a modicum of intelligence or adaptability outside of a very, very narrow domain.   They are more like an idiot savant that can add up sums of large numbers but can&#8217;t figure out how to open a door.</p>
<p>People really underestimate the magic of human intelligence, even in the dull but functional humans.  Humans are a miracle of adaptability that a computer will never even approach.  It isn&#8217;t a question of FLOPS or GPUs.  We have such an incredibly limited understanding of our own intelligence, how can we have the arrogance to think we can make an intelligent computer?</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: FGH</title>
		<link>http://www.foresight.org/nanodot/?p=3707#comment-866411</link>
		<dc:creator>FGH</dc:creator>
		<pubDate>Wed, 03 Feb 2010 23:37:35 +0000</pubDate>
		<guid isPermaLink="false">http://www.foresight.org/nanodot/?p=3707#comment-866411</guid>
		<description>Harvard economics professor Kenneth Rogoff recently penned an essay on AI. It&#039;s available at the following link:
http://www.project-syndicate.org/commentary/rogoff64</description>
		<content:encoded><![CDATA[<p>Harvard economics professor Kenneth Rogoff recently penned an essay on AI. It&#8217;s available at the following link:<br />
<a href="http://www.project-syndicate.org/commentary/rogoff64" rel="nofollow">http://www.project-syndicate.org/commentary/rogoff64</a></p>
]]></content:encoded>
	</item>
	<item>
		<title>By: glenn</title>
		<link>http://www.foresight.org/nanodot/?p=3707#comment-866405</link>
		<dc:creator>glenn</dc:creator>
		<pubDate>Wed, 03 Feb 2010 21:22:50 +0000</pubDate>
		<guid isPermaLink="false">http://www.foresight.org/nanodot/?p=3707#comment-866405</guid>
		<description>Has anyone considered the difference between an artificial intelligence and an artificial consciousness?</description>
		<content:encoded><![CDATA[<p>Has anyone considered the difference between an artificial intelligence and an artificial consciousness?</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Rich Vail</title>
		<link>http://www.foresight.org/nanodot/?p=3707#comment-866403</link>
		<dc:creator>Rich Vail</dc:creator>
		<pubDate>Wed, 03 Feb 2010 21:00:00 +0000</pubDate>
		<guid isPermaLink="false">http://www.foresight.org/nanodot/?p=3707#comment-866403</guid>
		<description>We&#039;re well past that...just look at Washington DC...congress is a great example of AI...

I think that it will be at least another decade before we&#039;re there...even a supercomputer can&#039;t match the pure computational power of a human brain.</description>
		<content:encoded><![CDATA[<p>We&#8217;re well past that&#8230;just look at Washington DC&#8230;congress is a great example of AI&#8230;</p>
<p>I think that it will be at least another decade before we&#8217;re there&#8230;even a supercomputer can&#8217;t match the pure computational power of a human brain.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: chuckb</title>
		<link>http://www.foresight.org/nanodot/?p=3707#comment-866400</link>
		<dc:creator>chuckb</dc:creator>
		<pubDate>Wed, 03 Feb 2010 20:06:16 +0000</pubDate>
		<guid isPermaLink="false">http://www.foresight.org/nanodot/?p=3707#comment-866400</guid>
		<description>Hope springs eternal. I&#039;ve been following AI advances (??) since I was an engineering student at the University of Illinois in the early 70&#039;s. The one constant has been the claim that we&#039;re on the verge and that any day now (actually, the guess is usually 10 years or so) the required breakthrough(s) will come, whether it is technical or economic. The AGW people learned their lesson. They no longer project 10 to 20 years in the future. They&#039;ve found that it can be unpleasant when your prognostications come back to bite you in the butt. Now they push their projections to 100 or more years.
I&#039;m as fascinated as the next guy with human intelligence. I just wish you guys would admit, at least to yourselves, that we don&#039;t have a clue what it is. We can imitate all kinds of behavior and that will, like so many other technological advancements, help to make the world a better place to live. But the idea that we can create intelligent machines is an exercise in faith and nothing more.</description>
		<content:encoded><![CDATA[<p>Hope springs eternal. I&#8217;ve been following AI advances (??) since I was an engineering student at the University of Illinois in the early 70&#8217;s. The one constant has been the claim that we&#8217;re on the verge and that any day now (actually, the guess is usually 10 years or so) the required breakthrough(s) will come, whether it is technical or economic. The AGW people learned their lesson. They no longer project 10 to 20 years in the future. They&#8217;ve found that it can be unpleasant when your prognostications come back to bite you in the butt. Now they push their projections to 100 or more years.<br />
I&#8217;m as fascinated as the next guy with human intelligence. I just wish you guys would admit, at least to yourselves, that we don&#8217;t have a clue what it is. We can imitate all kinds of behavior and that will, like so many other technological advancements, help to make the world a better place to live. But the idea that we can create intelligent machines is an exercise in faith and nothing more.</p>
]]></content:encoded>
	</item>
</channel>
</rss>