<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
		>
<channel>
	<title>Comments on: AI: Summing up</title>
	<atom:link href="http://www.foresight.org/nanodot/?feed=rss2&#038;p=3773" rel="self" type="application/rss+xml" />
	<link>http://www.foresight.org/nanodot/?p=3773</link>
	<description>examining transformative technology</description>
	<lastBuildDate>Wed, 03 Apr 2013 18:23:47 +0000</lastBuildDate>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=3.0.4</generator>
	<item>
		<title>By: jdelphiki</title>
		<link>http://www.foresight.org/nanodot/?p=3773#comment-866833</link>
		<dc:creator>jdelphiki</dc:creator>
		<pubDate>Thu, 25 Feb 2010 16:32:26 +0000</pubDate>
		<guid isPermaLink="false">http://www.foresight.org/nanodot/?p=3773#comment-866833</guid>
		<description>Certainly, we&#039;ll need machines that have the ability to learn past their original programming.  The problem is finding the dividing line between what occurs as a result of the original programming and what occurs beyond it.

Human learning itself is based on millions of years of evolution: innate biology mixed with instinctive behavior that eventually led us to the point where we could see into the abstract and learn beyond our personal experiences.  But our ability to learn is still based on all that evolved biology and instinct.  Are we &lt;i&gt;also&lt;/i&gt; clever machines that rely on our evolved baseline routines in ways that only appear to be intelligence and learning?

More important, how do we create machines to do even this?

I think it&#039;s not enough to create a &quot;brain&quot;.  We have to figure out how to make machines that can use their base programming to explore and learn about the environment around them.  Maybe even more than that, I think the machines will have to have a &quot;drive&quot; to learn: an inherent need to find out more about their environment, much the way an infant takes its base genetic programming and learns about and explores its own environment.

Right now, we&#039;re good at creating machines that operate quite nicely on &quot;instinct&quot; alone.  But even an infant has the innate ability to learn what it likes or dislikes, needs or doesn&#039;t need.  We might eventually be able to create machines that can learn about their environment, but unless we can find a way of making machines that respond to stimuli out of their own needs and values, we&#039;ll have a hard time creating the separation between clever programs and intelligent machines.</description>
		<content:encoded><![CDATA[<p>Certainly, we&#8217;ll need machines that have the ability to learn past their original programming.  The problem is finding the dividing line between what occurs as a result of the original programming and what occurs beyond it.</p>
<p>Human learning itself is based on millions of years of evolution: innate biology mixed with instinctive behavior that eventually led us to the point where we could see into the abstract and learn beyond our personal experiences.  But our ability to learn is still based on all that evolved biology and instinct.  Are we <i>also</i> clever machines that rely on our evolved baseline routines in ways that only appear to be intelligence and learning?</p>
<p>More important, how do we create machines to do even this?</p>
<p>I think it&#8217;s not enough to create a &#8220;brain&#8221;.  We have to figure out how to make machines that can use their base programming to explore and learn about the environment around them.  Maybe even more than that, I think the machines will have to have a &#8220;drive&#8221; to learn: an inherent need to find out more about their environment, much the way an infant takes its base genetic programming and learns about and explores its own environment.</p>
<p>Right now, we&#8217;re good at creating machines that operate quite nicely on &#8220;instinct&#8221; alone.  But even an infant has the innate ability to learn what it likes or dislikes, needs or doesn&#8217;t need.  We might eventually be able to create machines that can learn about their environment, but unless we can find a way of making machines that respond to stimuli out of their own needs and values, we&#8217;ll have a hard time creating the separation between clever programs and intelligent machines.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Peter</title>
		<link>http://www.foresight.org/nanodot/?p=3773#comment-866826</link>
		<dc:creator>Peter</dc:creator>
		<pubDate>Thu, 25 Feb 2010 09:42:12 +0000</pubDate>
		<guid isPermaLink="false">http://www.foresight.org/nanodot/?p=3773#comment-866826</guid>
		<description>Ref: John Blake
February 24th, 2010 at 9:56 PM
Wish I could have put that comment together. IMO, as good as it gets.</description>
		<content:encoded><![CDATA[<p>Ref: John Blake<br />
February 24th, 2010 at 9:56 PM<br />
Wish I could have put that comment together. IMO, as good as it gets.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: John Blake</title>
		<link>http://www.foresight.org/nanodot/?p=3773#comment-866824</link>
		<dc:creator>John Blake</dc:creator>
		<pubDate>Thu, 25 Feb 2010 04:56:52 +0000</pubDate>
		<guid isPermaLink="false">http://www.foresight.org/nanodot/?p=3773#comment-866824</guid>
		<description>When hyper-linked IT nodes reach a certain level of complexity c. 2030, the resulting Emergent Order may not be discernible but it will exist.  Whether sentient self-awareness will accompany this development, who knows... such issues, including holographic attributes, are entirely beyond mathematicians&#039; purview today.

Emergent Order is THE central question in AI (as the cliché has it, intelligence as such is not artificial; by definition, it transcends programmed design).

AI researchers can only start things off.  As with &quot;genetic algorithms&quot;, no one knows or can know where Emergent Order leads.  When different central foci exist in competition, the result will be a second-order Emergent Organism, and so on down the line.</description>
		<content:encoded><![CDATA[<p>When hyper-linked IT nodes reach a certain level of complexity c. 2030, the resulting Emergent Order may not be discernible but it will exist.  Whether sentient self-awareness will accompany this development, who knows&#8230; such issues, including holographic attributes, are entirely beyond mathematicians&#8217; purview today.</p>
<p>Emergent Order is THE central question in AI (as the cliché has it, intelligence as such is not artificial; by definition, it transcends programmed design).</p>
<p>AI researchers can only start things off.  As with &#8220;genetic algorithms&#8221;, no one knows or can know where Emergent Order leads.  When different central foci exist in competition, the result will be a second-order Emergent Organism, and so on down the line.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: hushashi</title>
		<link>http://www.foresight.org/nanodot/?p=3773#comment-866823</link>
		<dc:creator>hushashi</dc:creator>
		<pubDate>Thu, 25 Feb 2010 00:26:02 +0000</pubDate>
		<guid isPermaLink="false">http://www.foresight.org/nanodot/?p=3773#comment-866823</guid>
		<description>AI will only &quot;exist&quot; when a system is able, by itself, to realize that it has earned nothing it &quot;knows&quot;; every piece of knowledge upon which it relies is something it was fed, and its view of the world and all the knowledge it relies upon is based fundamentally on trust in its designers rather than anything it has done.

Once that line is breached, true intelligence can emerge.  Until then, self-awareness means nothing and AI will remain a propeller-head dominated circle jerk.</description>
		<content:encoded><![CDATA[<p>AI will only &#8220;exist&#8221; when a system is able, by itself, to realize that it has earned nothing it &#8220;knows&#8221;; every piece of knowledge upon which it relies is something it was fed, and its view of the world and all the knowledge it relies upon is based fundamentally on trust in its designers rather than anything it has done.</p>
<p>Once that line is breached, true intelligence can emerge.  Until then, self-awareness means nothing and AI will remain a propeller-head dominated circle jerk.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: PacRim Jim</title>
		<link>http://www.foresight.org/nanodot/?p=3773#comment-866817</link>
		<dc:creator>PacRim Jim</dc:creator>
		<pubDate>Wed, 24 Feb 2010 22:36:33 +0000</pubDate>
		<guid isPermaLink="false">http://www.foresight.org/nanodot/?p=3773#comment-866817</guid>
		<description>There is a critical threshold that, once passed, will enable rapid AI improvement: the ability of an AI program to learn and modify itself based on what it has learned. At gigahertz speed, learning will accelerate at a runaway pace.</description>
		<content:encoded><![CDATA[<p>There is a critical threshold that, once passed, will enable rapid AI improvement: the ability of an AI program to learn and modify itself based on what it has learned. At gigahertz speed, learning will accelerate at a runaway pace.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: TheRadicalModerate</title>
		<link>http://www.foresight.org/nanodot/?p=3773#comment-866815</link>
		<dc:creator>TheRadicalModerate</dc:creator>
		<pubDate>Wed, 24 Feb 2010 20:29:01 +0000</pubDate>
		<guid isPermaLink="false">http://www.foresight.org/nanodot/?p=3773#comment-866815</guid>
		<description>It still seems to me that not much progress gets made until we understand whether the future lies with the nets-of-neural-nets approach or the algorithmically self-constructing ontology approach.  The human brain seems perfectly capable of constructing its own ontologies via some kind of neural or regional self-organization.  Whatever it&#039;s doing can&#039;t be particularly complex.

On the other hand, the brain&#039;s developmental wiring--the genetics that governs how white matter connects one cortical region to brainstem structures, limbic system structures, and other cortical regions--is fearsomely complex and the product of tens of millions of years of evolution.  Whether the engineering required to mimic that evolution is more or less complex than the engineering required to produce a completely synthetic method for self-constructing ontologies will govern what happens.</description>
		<content:encoded><![CDATA[<p>It still seems to me that not much progress gets made until we understand whether the future lies with the nets-of-neural-nets approach or the algorithmically self-constructing ontology approach.  The human brain seems perfectly capable of constructing its own ontologies via some kind of neural or regional self-organization.  Whatever it&#8217;s doing can&#8217;t be particularly complex.</p>
<p>On the other hand, the brain&#8217;s developmental wiring&#8211;the genetics that governs how white matter connects one cortical region to brainstem structures, limbic system structures, and other cortical regions&#8211;is fearsomely complex and the product of tens of millions of years of evolution.  Whether the engineering required to mimic that evolution is more or less complex than the engineering required to produce a completely synthetic method for self-constructing ontologies will govern what happens.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: jdelphiki</title>
		<link>http://www.foresight.org/nanodot/?p=3773#comment-866814</link>
		<dc:creator>jdelphiki</dc:creator>
		<pubDate>Wed, 24 Feb 2010 19:51:21 +0000</pubDate>
		<guid isPermaLink="false">http://www.foresight.org/nanodot/?p=3773#comment-866814</guid>
		<description>We humans tend to be fairly egocentric in our perception of intelligence.  We set criteria that we feel represent a baseline for artificial intelligence, but we tend to ignore how much of that baseline would also have to include instinctive capabilities that we&#039;ve evolved over a few million years, or learned reactions that we pick up intuitively simply from having to use our bodies and brains in the variety of environments in which we live.

In the example of teaching a machine how to sweep, the machine would already be at a disadvantage if it did not first have built into it the ingrained capability of having hands, arms, musculature, etc. like ours...along with the years of coordinated developmental practice of using them all.

So, do we judge the machine&#039;s inability to intuit how to sweep by observation entirely on its lack of intelligence or do we factor in the innate advantage we have by having designed brooms that work with the way we humans are built?

Put another way, it took us a long time to figure out the basic intelligence we now perceive in, say, dolphins or ravens.  Dolphins have been shown to have self-awareness, to pass on skills to their young, etc.  Ravens show remarkable problem-solving skills, including the adapted use of tools (like, for instance, car tires to crack open nuts).  Would we doubt the overall intelligence of a dolphin or a raven for not being able to &quot;get&quot; how to operate a broom?  

I agree that we may not have reached a point where our computers are capable of transcending their baseline programming to intuit their own conclusions.  What I&#039;m &lt;i&gt;not&lt;/i&gt; quite certain of is how much of our own intelligence transcends our own baseline programming.

Maybe to get at true AI, we have to focus on simple learning machines first.  The rest should come on its own.</description>
		<content:encoded><![CDATA[<p>We humans tend to be fairly egocentric in our perception of intelligence.  We set criteria that we feel represent a baseline for artificial intelligence, but we tend to ignore how much of that baseline would also have to include instinctive capabilities that we&#8217;ve evolved over a few million years, or learned reactions that we pick up intuitively simply from having to use our bodies and brains in the variety of environments in which we live.</p>
<p>In the example of teaching a machine how to sweep, the machine would already be at a disadvantage if it did not first have built into it the ingrained capability of having hands, arms, musculature, etc. like ours&#8230;along with the years of coordinated developmental practice of using them all.</p>
<p>So, do we judge the machine&#8217;s inability to intuit how to sweep by observation entirely on its lack of intelligence or do we factor in the innate advantage we have by having designed brooms that work with the way we humans are built?</p>
<p>Put another way, it took us a long time to figure out the basic intelligence we now perceive in, say, dolphins or ravens.  Dolphins have been shown to have self-awareness, to pass on skills to their young, etc.  Ravens show remarkable problem-solving skills, including the adapted use of tools (like, for instance, car tires to crack open nuts).  Would we doubt the overall intelligence of a dolphin or a raven for not being able to &#8220;get&#8221; how to operate a broom?  </p>
<p>I agree that we may not have reached a point where our computers are capable of transcending their baseline programming to intuit their own conclusions.  What I&#8217;m <i>not</i> quite certain of is how much of our own intelligence transcends our own baseline programming.</p>
<p>Maybe to get at true AI, we have to focus on simple learning machines first.  The rest should come on its own.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: rhhardin</title>
		<link>http://www.foresight.org/nanodot/?p=3773#comment-866812</link>
		<dc:creator>rhhardin</dc:creator>
		<pubDate>Wed, 24 Feb 2010 19:23:21 +0000</pubDate>
		<guid isPermaLink="false">http://www.foresight.org/nanodot/?p=3773#comment-866812</guid>
		<description>AI is the field with the longest-running future promise of any I know of.

I suspect it&#039;s the same thing that used to attract young males to philosophy.

Males abstract and simplify.  Wittgenstein showed how that eliminated the solution to the problem.

Lots of things are like that.  AI is probably one.</description>
		<content:encoded><![CDATA[<p>AI is the field with the longest-running future promise of any I know of.</p>
<p>I suspect it&#8217;s the same thing that used to attract young males to philosophy.</p>
<p>Males abstract and simplify.  Wittgenstein showed how that eliminated the solution to the problem.</p>
<p>Lots of things are like that.  AI is probably one.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Fred Hapgood</title>
		<link>http://www.foresight.org/nanodot/?p=3773#comment-866811</link>
		<dc:creator>Fred Hapgood</dc:creator>
		<pubDate>Wed, 24 Feb 2010 17:52:47 +0000</pubDate>
		<guid isPermaLink="false">http://www.foresight.org/nanodot/?p=3773#comment-866811</guid>
		<description>Recently, a lecture was given at Harvard by a Cornell scientist named Itai Cohen on the Flight of the Fruit Fly. The abstract read, in part:

There comes a time in each of our lives where we grab a thick section of the morning paper, roll it up and set off to do battle with one of nature’s most accomplished aviators - the fly. If however, instead of swatting we could magnify our view and experience the world in slow motion we would be privy to a world-class ballet full of graceful figure-eight wing strokes, effortless pirouettes, and astonishing acrobatics. After watching such a magnificent display, who among us could destroy this virtuoso? How do flies produce acrobatic maneuvers with such precision? What control mechanisms do they need to maneuver? More abstractly, what problem are they solving as they fly? Despite pioneering studies of flight control in tethered insects, robotic wing experiments, and fluid dynamics simulations that have revealed basic mechanisms for unsteady force generation during steady flight, the answers to these questions remain elusive.</description>
		<content:encoded><![CDATA[<p>Recently, a lecture was given at Harvard by a Cornell scientist named Itai Cohen on the Flight of the Fruit Fly. The abstract read, in part:</p>
<p>There comes a time in each of our lives where we grab a thick section of the morning paper, roll it up and set off to do battle with one of nature’s most accomplished aviators &#8211; the fly. If however, instead of swatting we could magnify our view and experience the world in slow motion we would be privy to a world-class ballet full of graceful figure-eight wing strokes, effortless pirouettes, and astonishing acrobatics. After watching such a magnificent display, who among us could destroy this virtuoso? How do flies produce acrobatic maneuvers with such precision? What control mechanisms do they need to maneuver? More abstractly, what problem are they solving as they fly? Despite pioneering studies of flight control in tethered insects, robotic wing experiments, and fluid dynamics simulations that have revealed basic mechanisms for unsteady force generation during steady flight, the answers to these questions remain elusive.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Kyle</title>
		<link>http://www.foresight.org/nanodot/?p=3773#comment-866810</link>
		<dc:creator>Kyle</dc:creator>
		<pubDate>Wed, 24 Feb 2010 17:07:51 +0000</pubDate>
		<guid isPermaLink="false">http://www.foresight.org/nanodot/?p=3773#comment-866810</guid>
		<description>Even if we use a learning machine with neural networks, it in essence comes down to statistics. How do you get away from the statistics?

Even abstraction or novel thought might be nothing more than the averaging out of thousands of random associations. To put integrity in a machine like this is pretty risky, but something we should definitely do.</description>
		<content:encoded><![CDATA[<p>Even if we use a learning machine with neural networks, it in essence comes down to statistics. How do you get away from the statistics?</p>
<p>Even abstraction or novel thought might be nothing more than the averaging out of thousands of random associations. To put integrity in a machine like this is pretty risky, but something we should definitely do.</p>
]]></content:encoded>
	</item>
</channel>
</rss>