<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
		>
<channel>
	<title>Comments on: A brief history of AI</title>
	<atom:link href="http://www.foresight.org/nanodot/?feed=rss2&#038;p=3705" rel="self" type="application/rss+xml" />
	<link>http://www.foresight.org/nanodot/?p=3705</link>
	<description>examining transformative technology</description>
	<lastBuildDate>Wed, 03 Apr 2013 18:23:47 +0000</lastBuildDate>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=3.0.4</generator>
	<item>
		<title>By: J. Storrs Hall</title>
		<link>http://www.foresight.org/nanodot/?p=3705#comment-866233</link>
		<dc:creator>J. Storrs Hall</dc:creator>
		<pubDate>Wed, 27 Jan 2010 22:15:36 +0000</pubDate>
		<guid isPermaLink="false">http://www.foresight.org/nanodot/?p=3705#comment-866233</guid>
		<description>Radical: have a look at this:
 http://mind.ucsd.edu/papers/intro-emulation/intro-em.pdf</description>
		<content:encoded><![CDATA[<p>Radical: have a look at this:<br />
 <a href="http://mind.ucsd.edu/papers/intro-emulation/intro-em.pdf" rel="nofollow">http://mind.ucsd.edu/papers/intro-emulation/intro-em.pdf</a></p>
]]></content:encoded>
	</item>
	<item>
		<title>By: TheRadicalModerate</title>
		<link>http://www.foresight.org/nanodot/?p=3705#comment-866230</link>
		<dc:creator>TheRadicalModerate</dc:creator>
		<pubDate>Wed, 27 Jan 2010 16:05:15 +0000</pubDate>
		<guid isPermaLink="false">http://www.foresight.org/nanodot/?p=3705#comment-866230</guid>
		<description>When it comes to the neural back channels, I tend to favor the idea I first learned from the Jeff Hawkins book (I won&#039;t pretend to understand its proper academic ontogeny):  The back-channels are used more for prediction than they are for classical control.  If you model the brain as a forward-directed graph of pattern recognizers, the back-channels allow the higher-level entities to tell the lower-level entities things like, &quot;I recognize the stuff you&#039;ve just sent me, things are going well, and based on that recognition, the next thing you should recognize is probably this.&quot;  From an evolutionary standpoint, this is probably a huge energy savings in the brain, since the amount of signaling, and consequent neural activation, between entities is vastly reduced when most of them are merely reporting up the chain, &quot;Yup, what you expected is indeed what I recognized next,&quot; instead of, &quot;Whoa!  something&#039;s way different!  Time to start learning and engaging high-level cognitive functions to figure out what&#039;s going on!&quot;

This isn&#039;t to say that stable control isn&#039;t essential; it obviously is.  However, the control feedback is merely more sensory information that can get mixed into the feed-forward pathways, at every level in the directed graph.  In other words, higher-level areas direct motor functions (essentially a feed-forward operation) while at the same time they watch proprioceptive sensory input (also a feed-forward operation).  The back-channel is then reserved for things like, &quot;Uh-oh!  The arm is out of position and I&#039;m not going to hit the saber-tooth tiger I&#039;m aiming at with the rock that I&#039;m throwing.&quot;  It won&#039;t do you much good for the current rock, but it&#039;ll help you throw the next rock better.  Of course, as you get lower and lower in the cognitive hierarchy, the neural entities become less cognitive and more like real control systems.  But that&#039;s the difference between intelligence and complex control.

I don&#039;t hold out much hope for all the legacies of GOFAI, even for higher-level cognitive functions.  The development that we need to make major progress is the ability to scale our current neural network models from software-driven simulators that can handle 10^5 or 10^6 virtual synapses to hardware-driven systems with massive connectivity, able to handle 10^11 to 10^14 synapses.  This kind of connectivity gives you the opportunity to solve a lot of the control and perceptual problems using genetic algorithms that wire stuff up in progressively more interesting ways, while using learning and plasticity to fine-tune the connectivity.  What we&#039;ll wind up with are pretty smart machines that work, even though we can&#039;t tell you exactly &lt;i&gt;why&lt;/i&gt; they work.</description>
		<content:encoded><![CDATA[<p>When it comes to the neural back channels, I tend to favor the idea I first learned from the Jeff Hawkins book (I won&#8217;t pretend to understand its proper academic ontogeny):  The back-channels are used more for prediction than they are for classical control.  If you model the brain as a forward-directed graph of pattern recognizers, the back-channels allow the higher-level entities to tell the lower-level entities things like, &#8220;I recognize the stuff you&#8217;ve just sent me, things are going well, and based on that recognition, the next thing you should recognize is probably this.&#8221;  From an evolutionary standpoint, this is probably a huge energy savings in the brain, since the amount of signaling, and consequent neural activation, between entities is vastly reduced when most of them are merely reporting up the chain, &#8220;Yup, what you expected is indeed what I recognized next,&#8221; instead of, &#8220;Whoa!  something&#8217;s way different!  Time to start learning and engaging high-level cognitive functions to figure out what&#8217;s going on!&#8221;</p>
<p>This isn&#8217;t to say that stable control isn&#8217;t essential; it obviously is.  However, the control feedback is merely more sensory information that can get mixed into the feed-forward pathways, at every level in the directed graph.  In other words, higher-level areas direct motor functions (essentially a feed-forward operation) while at the same time they watch proprioceptive sensory input (also a feed-forward operation).  The back-channel is then reserved for things like, &#8220;Uh-oh!  The arm is out of position and I&#8217;m not going to hit the saber-tooth tiger I&#8217;m aiming at with the rock that I&#8217;m throwing.&#8221;  It won&#8217;t do you much good for the current rock, but it&#8217;ll help you throw the next rock better.  Of course, as you get lower and lower in the cognitive hierarchy, the neural entities become less cognitive and more like real control systems.  But that&#8217;s the difference between intelligence and complex control.</p>
<p>I don&#8217;t hold out much hope for all the legacies of GOFAI, even for higher-level cognitive functions.  The development that we need to make major progress is the ability to scale our current neural network models from software-driven simulators that can handle 10^5 or 10^6 virtual synapses to hardware-driven systems with massive connectivity, able to handle 10^11 to 10^14 synapses.  This kind of connectivity gives you the opportunity to solve a lot of the control and perceptual problems using genetic algorithms that wire stuff up in progressively more interesting ways, while using learning and plasticity to fine-tune the connectivity.  What we&#8217;ll wind up with are pretty smart machines that work, even though we can&#8217;t tell you exactly <i>why</i> they work.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: J. Storrs Hall</title>
		<link>http://www.foresight.org/nanodot/?p=3705#comment-866218</link>
		<dc:creator>J. Storrs Hall</dc:creator>
		<pubDate>Tue, 26 Jan 2010 17:45:45 +0000</pubDate>
		<guid isPermaLink="false">http://www.foresight.org/nanodot/?p=3705#comment-866218</guid>
		<description>Certainly you can be an intelligent human and still need to go to the doctor from time to time. So intelligence doesn&#039;t require strict long-term autonomy.  
However, I was thinking more along the lines of software.  In the brain, as I&#039;m sure you know, there&#039;s actually more retrograde traffic than feedforward along many of the pathways -- e.g. more proprioceptive and sensory feedback along motor pathways than motor signals going out.  
I would claim that this phenomenon is not limited to distal control and sensing but applies to every aspect of internal computation.  One of the reasons that symbolic AI kept hitting a glass ceiling is that there was no feedback inside the programs&#039; logic, and thus every part of them was always operating by dead reckoning.
Feedback can&#039;t be built into the lowest level of the standard sequential model of computation because the model arranges operations in time -- and there can&#039;t be signals going backwards in time.  Conceiving of the same computation as being as parallel as possible -- which is how the brain does things -- allows for a lot more feedback to be built into the very lowest level primitives.  (A simple example: for every forward signal, there is a feedback signal saying how much precision and bandwidth is needed downstream.)
That doesn&#039;t mean it can&#039;t be emulated in sequential software -- indeed, that&#039;s exactly how it will be implemented in any systems that will be built any time soon.  But doing so requires an extra level of design discipline above the standard feedforward-only, waterfall, fire-and-forget, sequential paradigm.</description>
		<content:encoded><![CDATA[<p>Certainly you can be an intelligent human and still need to go to the doctor from time to time. So intelligence doesn&#8217;t require strict long-term autonomy.<br />
However, I was thinking more along the lines of software.  In the brain, as I&#8217;m sure you know, there&#8217;s actually more retrograde traffic than feedforward along many of the pathways &#8212; e.g. more proprioceptive and sensory feedback along motor pathways than motor signals going out.<br />
I would claim that this phenomenon is not limited to distal control and sensing but applies to every aspect of internal computation.  One of the reasons that symbolic AI kept hitting a glass ceiling is that there was no feedback inside the programs&#8217; logic, and thus every part of them was always operating by dead reckoning.<br />
Feedback can&#8217;t be built into the lowest level of the standard sequential model of computation because the model arranges operations in time &#8212; and there can&#8217;t be signals going backwards in time.  Conceiving of the same computation as being as parallel as possible &#8212; which is how the brain does things &#8212; allows for a lot more feedback to be built into the very lowest level primitives.  (A simple example: for every forward signal, there is a feedback signal saying how much precision and bandwidth is needed downstream.)<br />
That doesn&#8217;t mean it can&#8217;t be emulated in sequential software &#8212; indeed, that&#8217;s exactly how it will be implemented in any systems that will be built any time soon.  But doing so requires an extra level of design discipline above the standard feedforward-only, waterfall, fire-and-forget, sequential paradigm.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Tim Tyler</title>
		<link>http://www.foresight.org/nanodot/?p=3705#comment-866213</link>
		<dc:creator>Tim Tyler</dc:creator>
		<pubDate>Tue, 26 Jan 2010 10:30:38 +0000</pubDate>
		<guid isPermaLink="false">http://www.foresight.org/nanodot/?p=3705#comment-866213</guid>
		<description>Re: that can only come from incorporating feedback and automatic resource management into the basic fabric of its computing platform

What does that mean? In which senses do modern computer systems not do these things already? You mean you want data-centre robots to replace failed hard drives?  That kind of thing is probably not a prerequisite for machine intelligence.</description>
		<content:encoded><![CDATA[<p>Re: that can only come from incorporating feedback and automatic resource management into the basic fabric of its computing platform</p>
<p>What does that mean? In which senses do modern computer systems not do these things already? You mean you want data-centre robots to replace failed hard drives?  That kind of thing is probably not a prerequisite for machine intelligence.</p>
]]></content:encoded>
	</item>
</channel>
</rss>