<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
		>
<channel>
	<title>Comments on: AGI Roadmap meeting</title>
	<atom:link href="http://www.foresight.org/nanodot/?feed=rss2&#038;p=3457" rel="self" type="application/rss+xml" />
	<link>http://www.foresight.org/nanodot/?p=3457</link>
	<description>examining transformative technology</description>
	<lastBuildDate>Wed, 03 Apr 2013 18:23:47 +0000</lastBuildDate>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=3.0.4</generator>
	<item>
		<title>By: Wolfgang Lorenz</title>
		<link>http://www.foresight.org/nanodot/?p=3457#comment-908497</link>
		<dc:creator>Wolfgang Lorenz</dc:creator>
		<pubDate>Tue, 03 Aug 2010 07:30:17 +0000</pubDate>
		<guid isPermaLink="false">http://www.foresight.org/nanodot/?p=3457#comment-908497</guid>
		<description>Robots with specialized cleaning hardware will perform better in your test. Remember that AI engineers will lie, trick, and cheat whenever they can! So in order to prove intelligence you&#039;ll have to choose your task very carefully. Of course, if you&#039;re a gene marionette trying to defend your genes against the androids, then choosing a fakeable test is the way to go ;-)</description>
		<content:encoded><![CDATA[<p>Robots with specialized cleaning hardware will perform better in your test. Remember that AI engineers will lie, trick, and cheat whenever they can! So in order to prove intelligence you&#8217;ll have to choose your task very carefully. Of course, if you&#8217;re a gene marionette trying to defend your genes against the androids, then choosing a fakeable test is the way to go <img src='http://www.foresight.org/nanodot/wp-includes/images/smilies/icon_wink.gif' alt=';-)' class='wp-smiley' /> </p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Russell Wallace</title>
		<link>http://www.foresight.org/nanodot/?p=3457#comment-876815</link>
		<dc:creator>Russell Wallace</dc:creator>
		<pubDate>Fri, 14 May 2010 19:40:28 +0000</pubDate>
		<guid isPermaLink="false">http://www.foresight.org/nanodot/?p=3457#comment-876815</guid>
		<description>If I were setting up an AI prize, the task I&#039;d pick is cleaning (specifically, start with a building containing the usual large set of cleaning subtasks, rate by percentage accomplished, big penalties for breaking or misplacing things).

Of course it&#039;s not AGI-complete -- there are already big incentives to write a human-level AGI, but that&#039;s moot because nobody knows how to do it. But it is hard enough to advance the state of the art, and it has extraordinary potential leverage. A prize should kickstart a line of development that is expected to be subsequently self-sustaining. The world spends, at a conservative estimate, more than a trillion dollars a year on cleaning (mostly not paid for under that heading, but the cost is the same). Imagine what would happen if even 1% of that could be spent on developing better machines. Imagine what could be accomplished if that much human effort were freed for better purposes.</description>
		<content:encoded><![CDATA[<p>If I were setting up an AI prize, the task I&#8217;d pick is cleaning (specifically, start with a building containing the usual large set of cleaning subtasks, rate by percentage accomplished, big penalties for breaking or misplacing things).</p>
<p>Of course it&#8217;s not AGI-complete &#8212; there are already big incentives to write a human-level AGI, but that&#8217;s moot because nobody knows how to do it. But it is hard enough to advance the state of the art, and it has extraordinary potential leverage. A prize should kickstart a line of development that is expected to be subsequently self-sustaining. The world spends, at a conservative estimate, more than a trillion dollars a year on cleaning (mostly not paid for under that heading, but the cost is the same). Imagine what would happen if even 1% of that could be spent on developing better machines. Imagine what could be accomplished if that much human effort were freed for better purposes.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Wolfgang Lorenz</title>
		<link>http://www.foresight.org/nanodot/?p=3457#comment-872574</link>
		<dc:creator>Wolfgang Lorenz</dc:creator>
		<pubDate>Mon, 03 May 2010 05:58:21 +0000</pubDate>
		<guid isPermaLink="false">http://www.foresight.org/nanodot/?p=3457#comment-872574</guid>
		<description>So Apple sees a market for $1,000,000 coffee machines? ;-)

Well, the Wozniak test is better than the total Turing test because it also measures doing. And it is better than the Nilsson test because it is short and practical.

But it should include a time limit relative to a human test subject. And it does not measure improvement from performing the same task over and over again. It also does not explicitly support the development of learning agents because the top task is fixed. I would prefer a test where an agent has to watch and imitate a human doing some handcraft work. The agent should then be given some time to practice on its own to see if it gets better.</description>
		<content:encoded><![CDATA[<p>So Apple sees a market for $1,000,000 coffee machines? <img src='http://www.foresight.org/nanodot/wp-includes/images/smilies/icon_wink.gif' alt=';-)' class='wp-smiley' /> </p>
<p>Well, the Wozniak test is better than the total Turing test because it also measures doing. And it is better than the Nilsson test because it is short and practical.</p>
<p>But it should include a time limit relative to a human test subject. And it does not measure improvement from performing the same task over and over again. It also does not explicitly support the development of learning agents because the top task is fixed. I would prefer a test where an agent has to watch and imitate a human doing some handcraft work. The agent should then be given some time to practice on its own to see if it gets better.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Chuck Esterbrook</title>
		<link>http://www.foresight.org/nanodot/?p=3457#comment-865246</link>
		<dc:creator>Chuck Esterbrook</dc:creator>
		<pubDate>Sun, 01 Nov 2009 19:25:28 +0000</pubDate>
		<guid isPermaLink="false">http://www.foresight.org/nanodot/?p=3457#comment-865246</guid>
		<description>If we can create narrow AI to drive a car, why can&#039;t we do the same for making coffee? While programming every model of coffee maker might be prohibitive, programming the range of possible controls and behaviors might not be. Also, the task may not be complex enough to *require* natural language processing and generation at a generally intelligent level.

I suspect that if DARPA put up a million for this, we would crack it in a few years sans AGI.

I&#039;ve proposed the Employee Test: Would a business owner be willing to hire your AI/AGI, for the cost of a salary, to replace a valued employee in positions such as software engineering, accounting and project management? Of course, this hits your &quot;(a) too high a bar&quot; but completely avoids &quot;(b) a test of the wrong thing&quot; since we ultimately want AGIs to do various forms of work for us.

Lowering the bar may always be problematic because it increases the probability that the test could be passed with narrow AI.</description>
		<content:encoded><![CDATA[<p>If we can create narrow AI to drive a car, why can&#8217;t we do the same for making coffee? While programming every model of coffee maker might be prohibitive, programming the range of possible controls and behaviors might not be. Also, the task may not be complex enough to *require* natural language processing and generation at a generally intelligent level.</p>
<p>I suspect that if DARPA put up a million for this, we would crack it in a few years sans AGI.</p>
<p>I&#8217;ve proposed the Employee Test: Would a business owner be willing to hire your AI/AGI, for the cost of a salary, to replace a valued employee in positions such as software engineering, accounting and project management? Of course, this hits your &#8220;(a) too high a bar&#8221; but completely avoids &#8220;(b) a test of the wrong thing&#8221; since we ultimately want AGIs to do various forms of work for us.</p>
<p>Lowering the bar may always be problematic because it increases the probability that the test could be passed with narrow AI.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: the Foresight Institute &#187; Robo Habilis</title>
		<link>http://www.foresight.org/nanodot/?p=3457#comment-865242</link>
		<dc:creator>the Foresight Institute &#187; Robo Habilis</dc:creator>
		<pubDate>Thu, 29 Oct 2009 15:16:36 +0000</pubDate>
		<guid isPermaLink="false">http://www.foresight.org/nanodot/?p=3457#comment-865242</guid>
		<description>[...] AGI Roadmap meeting  [...]</description>
		<content:encoded><![CDATA[<p>[...] AGI Roadmap meeting  [...]</p>
]]></content:encoded>
	</item>
</channel>
</rss>