<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
		>
<channel>
	<title>Comments on: Analogical Quadrature</title>
	<atom:link href="http://www.foresight.org/nanodot/?feed=rss2&#038;p=3732" rel="self" type="application/rss+xml" />
	<link>http://www.foresight.org/nanodot/?p=3732</link>
	<description>examining transformative technology</description>
	<lastBuildDate>Wed, 03 Apr 2013 18:23:47 +0000</lastBuildDate>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=3.0.4</generator>
	<item>
		<title>By: Nicole Tedesco</title>
		<link>http://www.foresight.org/nanodot/?p=3732#comment-866492</link>
		<dc:creator>Nicole Tedesco</dc:creator>
		<pubDate>Sat, 06 Feb 2010 00:26:30 +0000</pubDate>
		<guid isPermaLink="false">http://www.foresight.org/nanodot/?p=3732#comment-866492</guid>
		<description>What will help, I believe, is to evolve our current computing architectures from those of simple vector processing to categorized &lt;i&gt;space&lt;/i&gt; processing (yes, I am walking up the topology ladder here).  Brains seem to be really, really good &lt;i&gt;channel&lt;/i&gt; processors.  In any visual image, it seems brains are really good at separating out various &quot;channels&quot; or characteristics of the images presented before them.  Think of every action in Adobe Photoshop being executed on an image, in parallel, such as contour discovery, shadow sampling, noise detection/correction, and so on.  Each of the processed results is a data channel that seems to be managed separately, and in parallel.  For instance, the vertical lines of multiple images may be &quot;remembered&quot; together in their own space.  Of course each of these channels becomes associated (mapped via topological morphisms) with others that have occurred near-simultaneously in time and in conjunction with other related signals.  Navigating a vertical-line-only channel in the &quot;vertical line space&quot; can be quite rapid in the brain &#8212; consider it a particular &quot;index&quot; into the brain&#039;s memories.  Navigating the horizontal-line-only channel is also relatively quick, and of course can happen in parallel.  Finding where these spaces intersect with, let&#039;s say, an additional &quot;red only&quot; channel is not only a rapid method of detecting similarities in existing memories; additional convergence points can also emerge that point the way to new possibilities.

Intersections in this space for existing memories should interfere destructively with the &quot;negative space&quot; patterns of the problem at hand.  The remaining patterns will be those not yet tried.  The best patterns should constructively interfere and, de Broglie/Bohm style, gather enough of the attention energy to point the way to a novel solution to an existing problem.  (In the human brain the striatum would light up after a particular threshold has been passed, triggering the &quot;good feeling&quot; that indicates we may have found our solution.)</description>
		<content:encoded><![CDATA[<p>What will help, I believe, is to evolve our current computing architectures from those of simple vector processing to categorized <i>space</i> processing (yes, I am walking up the topology ladder here).  Brains seem to be really, really good <i>channel</i> processors.  In any visual image, it seems brains are really good at separating out various &#8220;channels&#8221; or characteristics of the images presented before them.  Think of every action in Adobe Photoshop being executed on an image, in parallel, such as contour discovery, shadow sampling, noise detection/correction, and so on.  Each of the processed results is a data channel that seems to be managed separately, and in parallel.  For instance, the vertical lines of multiple images may be &#8220;remembered&#8221; together in their own space.  Of course each of these channels becomes associated (mapped via topological morphisms) with others that have occurred near-simultaneously in time and in conjunction with other related signals.  Navigating a vertical-line-only channel in the &#8220;vertical line space&#8221; can be quite rapid in the brain &mdash; consider it a particular &#8220;index&#8221; into the brain&#8217;s memories.  Navigating the horizontal-line-only channel is also relatively quick, and of course can happen in parallel.  Finding where these spaces intersect with, let&#8217;s say, an additional &#8220;red only&#8221; channel is not only a rapid method of detecting similarities in existing memories; additional convergence points can also emerge that point the way to new possibilities.</p>
<p>Intersections in this space for existing memories should interfere destructively with the &#8220;negative space&#8221; patterns of the problem at hand.  The remaining patterns will be those not yet tried.  The best patterns should constructively interfere and, de Broglie/Bohm style, gather enough of the attention energy to point the way to a novel solution to an existing problem.  (In the human brain the striatum would light up after a particular threshold has been passed, triggering the &#8220;good feeling&#8221; that indicates we may have found our solution.)</p>
]]></content:encoded>
	</item>
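	<!--
	A minimal sketch, in Python, of the channel-processing idea described in the
	comment above: several feature channels are computed for each image (conceptually
	in parallel), each channel files memories in its own "space", and recall is the
	intersection of those spaces. The extractors, thresholds, and memory index are
	illustrative assumptions, not anything specified in the comment.

	import numpy as np

	def vertical_edges(img):
	    # Strong horizontal gradients mark vertical lines.
	    gray = img.mean(axis=2)
	    return (np.abs(np.diff(gray, axis=1)) > 32).mean()

	def horizontal_edges(img):
	    gray = img.mean(axis=2)
	    return (np.abs(np.diff(gray, axis=0)) > 32).mean()

	def redness(img):
	    # Fraction of pixels that are predominantly red.
	    r, g, b = img[..., 0], img[..., 1], img[..., 2]
	    return ((r > 128) & (g < 80) & (b < 80)).mean()

	CHANNELS = {"vert": vertical_edges, "horiz": horizontal_edges, "red": redness}

	class ChannelMemory:
	    def __init__(self, threshold=0.05):
	        self.threshold = threshold
	        # One index ("space") per channel: channel name -> set of memory ids.
	        self.spaces = {name: set() for name in CHANNELS}

	    def store(self, mem_id, img):
	        # Every channel that fires strongly files the memory in its own space.
	        for name, extract in CHANNELS.items():
	            if extract(img) > self.threshold:
	                self.spaces[name].add(mem_id)

	    def recall(self, channel_names):
	        # Intersecting per-channel spaces is the fast "index" into memory,
	        # e.g. recall(["vert", "red"]) finds memories of red vertical lines.
	        sets = [self.spaces[n] for n in channel_names]
	        return set.intersection(*sets) if sets else set()

	# Example: red vertical stripes land in both the "vert" and "red" spaces.
	stripes = np.zeros((64, 64, 3), dtype=np.uint8)
	stripes[:, ::8, 0] = 255
	flat = np.full((64, 64, 3), 128, dtype=np.uint8)
	mem = ChannelMemory()
	mem.store("stripes", stripes)
	mem.store("flat", flat)
	print(mem.recall(["vert", "red"]))  # {'stripes'}
	-->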
	<item>
		<title>By: J. Storrs Hall</title>
		<link>http://www.foresight.org/nanodot/?p=3732#comment-866471</link>
		<dc:creator>J. Storrs Hall</dc:creator>
		<pubDate>Fri, 05 Feb 2010 12:10:20 +0000</pubDate>
		<guid isPermaLink="false">http://www.foresight.org/nanodot/?p=3732#comment-866471</guid>
		<description>Valkyrie:  I expect graphene electronics will basically keep us on the Moore&#039;s Law track.  Remember that we expect a factor of 100 improvement by 2020 anyway.  I&#039;d be surprised if graphene caused a major bump in that already ambitious schedule.  If you gave me a 100 THz CPU today it wouldn&#039;t speed up my whole computer that much, because the von Neumann bottleneck is still the bottleneck.  Ten years is a fairly short time to rearrange as complex a technology as computers to take advantage of a radical improvement in one part.

Alfred: The implementation would be some blend of the simple equation I gave above and an industrial version of Copycat, depending on how close the representation was to n-space or semantic nets, respectively.  In practice, we expect different concepts to have different representations, and thus for there to be several, possibly many, different algorithms for AQ in a full-blown cognitive architecture.  In any case, the really hard part is for the system to come up with new representations, and thus new corresponding implementations for AQ (and every other operation it needs to do), on its own as it learns and invents.  To see how hard the representation problem is, try to define a representation for representations and algorithms such that you can represent old representation X, new representation Y, and old algorithm AQ(X), and, using AQ(your representation), derive AQ(Y).</description>
		<content:encoded><![CDATA[<p>Valkyrie:  I expect graphene electronics will basically keep us on the Moore&#8217;s Law track.  Remember that we expect a factor of 100 improvement by 2020 anyway.  I&#8217;d be surprised if graphene caused a major bump in that already ambitious schedule.  If you gave me a 100 THz CPU today it wouldn&#8217;t speed up my whole computer that much, because the von Neumann bottleneck is still the bottleneck.  Ten years is a fairly short time to rearrange as complex a technology as computers to take advantage of a radical improvement in one part.</p>
<p>Alfred: The implementation would be some blend of the simple equation I gave above and an industrial version of Copycat, depending on how close the representation was to n-space or semantic nets, respectively.  In practice, we expect different concepts to have different representations, and thus for there to be several, possibly many, different algorithms for AQ in a full-blown cognitive architecture.  In any case, the really hard part is for the system to come up with new representations, and thus new corresponding implementations for AQ (and every other operation it needs to do), on its own as it learns and invents.  To see how hard the representation problem is, try to define a representation for representations and algorithms such that you can represent old representation X, new representation Y, and old algorithm AQ(X), and, using AQ(your representation), derive AQ(Y).</p>
]]></content:encoded>
	</item>
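	<!--
	The "simple equation" mentioned above is not quoted in this thread. As a
	hypothetical stand-in for the n-space case only, here is the standard
	vector-offset analogy rule (solve A : B :: C : ? via D = C + (B - A), then snap
	to the nearest known concept); the toy concept table is invented for the
	example. This covers only a fixed representation X; deriving AQ(Y) for a new
	representation Y, the hard part the comment describes, is not attempted here.

	import numpy as np

	def analogical_quadrature(a, b, c, concepts):
	    # Apply the offset b - a to c, then snap to the nearest named concept
	    # by cosine similarity, skipping the three input vectors themselves.
	    target = c + (b - a)
	    names = list(concepts)
	    vecs = np.stack([concepts[n] for n in names])
	    sims = vecs @ target / (
	        np.linalg.norm(vecs, axis=1) * np.linalg.norm(target) + 1e-12
	    )
	    for i in np.argsort(-sims):
	        if not any(np.allclose(vecs[i], v) for v in (a, b, c)):
	            return names[i]

	# Toy 2-D concept space, dimensions (size, ferocity):
	concepts = {
	    "kitten": np.array([0.2, 0.2]),
	    "cat":    np.array([0.4, 0.4]),
	    "cub":    np.array([0.2, 0.8]),
	    "lion":   np.array([0.4, 1.0]),
	}
	# kitten : cat :: cub : ?  ->  "lion"
	print(analogical_quadrature(concepts["kitten"], concepts["cat"],
	                            concepts["cub"], concepts))
	-->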
	<item>
		<title>By: Alfred</title>
		<link>http://www.foresight.org/nanodot/?p=3732#comment-866461</link>
		<dc:creator>Alfred</dc:creator>
		<pubDate>Fri, 05 Feb 2010 06:55:12 +0000</pubDate>
		<guid isPermaLink="false">http://www.foresight.org/nanodot/?p=3732#comment-866461</guid>
		<description>You speak of Analogical Quadrature both here and in your books. Yet you give no clear explanation of how to implement it, or even an idea of how it&#039;s implemented in the human brain. If you have some ideas along these lines, I&#039;m sure many (including me) would be interested in hearing about them, no matter how speculative they are.</description>
		<content:encoded><![CDATA[<p>You speak of Analogical Quadrature both here and in your books. Yet you give no clear explanation of how to implement it, or even an idea of how it&#8217;s implemented in the human brain. If you have some ideas along these lines, I&#8217;m sure many (including me) would be interested in hearing about them, no matter how speculative they are.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Valkyrie Ice</title>
		<link>http://www.foresight.org/nanodot/?p=3732#comment-866458</link>
		<dc:creator>Valkyrie Ice</dc:creator>
		<pubDate>Fri, 05 Feb 2010 04:52:19 +0000</pubDate>
		<guid isPermaLink="false">http://www.foresight.org/nanodot/?p=3732#comment-866458</guid>
		<description>And what if the continuing advances being made in graphene electronics allow us to have 1 to 100 terahertz chips available by 2012-15? How do you see such a massive leap in computing power affecting your predictions?</description>
		<content:encoded><![CDATA[<p>And what if the continuing advances being made in graphene electronics allow us to have 1 to 100 terahertz chips available by 2012-15? How do you see such a massive leap in computing power affecting your predictions?</p>
]]></content:encoded>
	</item>
</channel>
</rss>