
Analogical Quadrature

So far, in making my case that AI is (a) possible and (b) likely in the next decade or two, I’ve focused on techniques which are or easily could be part of a generally intelligent system, and which will clearly be enhanced by the two orders of magnitude increase in processing power we expect from Moore’s Law by 2020.  (Note — we certainly don’t have to wait till 2020 to find out.  Existing hardware is well into the usable range, probably for less than $1M.  But you don’t get too many researchers, and no hobbyists, doing their research on machines like that today. You will in 2020.)

To make a heavier-than-air airplane fly, you need an engine.  If you have an airframe with lift-to-drag ratio r, stall speed s, and weight w, and a propeller with thrust efficiency e, you need an engine with power p=sw/(re) to fly. (Thrust needed to overcome drag is w/r; power is thrust times airspeed, divided by propeller efficiency.) Power&lt;p, no fly. Power&gt;p, fly.
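The formula can be spelled out in a few lines of code. The specific numbers below are illustrative guesses in the rough neighborhood of the 1903 Wright Flyer, not historical data:

```python
def engine_power(stall_speed, weight, lift_to_drag, prop_efficiency):
    """Minimum engine power (watts) for sustained level flight.

    Thrust required to overcome drag is weight / (L/D); power is
    thrust times airspeed, divided by propeller efficiency.
    """
    return stall_speed * weight / (lift_to_drag * prop_efficiency)

# Illustrative numbers: 3400 N weight, L/D of 6,
# 12 m/s stall speed, propeller efficiency 0.65.
p = engine_power(stall_speed=12, weight=3400,
                 lift_to_drag=6, prop_efficiency=0.65)
print(p)  # roughly 10 kW, on the order of the Flyer's ~12 hp engine
```

An engine delivering less than this p cannot sustain flight no matter how good the airframe; delivering more, the problem shifts entirely to control.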

Both of the major American flying machine efforts understood this.  Langley spent huge effort developing light, powerful engines.  The brothers Wright built their own aeroengine from scratch in their bicycle shop.

The difference was, the Wright brothers knew an extra Good Trick, which was how to control the plane in the air once it was flying.

So to develop a working AI, we need the power, which we don’t think is going to be a problem. We need the lift, which is the kind of techniques found in narrow AIs and discussed above. And finally we need the control.

What I just said is an example of reasoning by analogy.  To an extent much greater than usually realized, most cognition and reasoning is based on analogy.  When you perform a physical skill, the specific sequence of sensory and motor signals is never exactly any of the ones that happened during practice; but they’re close enough that the mapping is straightforward.

This is something that is well-known to the AI mainstream:

But “the big feature of human-level intelligence is not what it does when it works but what it does when it’s stuck,” Minsky said. When faced with novelty, Minsky claims, human intelligence applies “reasoning by analogy” to make the most direct tap into the cognitive glue that fuses knowledge domains.
Reasoning by analogy is a way of adapting old knowledge, which almost never perfectly matches the present situation, by following a recipe of detecting differences and tweaking parameters. It all happens so quickly that no “thinking” seems to be involved.  (EE Times)

The particular kind of reasoning by analogy that would make an associative memory machine work well can be called analogical quadrature.  This is the form of problem done most famously by Melanie Mitchell’s Copycat program: you have three things A, B, and C, and you want to find a fourth D such that A:B::C:D.  In the associative memory scheme, you need to do not the actual action you did in the memory, but the action that fits the current situation the way the remembered action fit the remembered situation.

As a simple example, if the remembered action was done by someone else, the parallel could be mapping things so that the action is done by you this time. In other words, analogical quadrature enables imitation.

If you can somehow represent your concepts as points in an n-dimensional space, analogical quadrature is falling-down easy: D=C+B-A in ordinary vector algebra. Of course, sometimes the mapping into n-space is problematic, and we are thrown back on symbolic methods such as those of the FARGitecture.
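Here is a minimal sketch of the vector version. The concepts and their coordinates are entirely made up for illustration; in a real system they would come from a learned mapping into n-space:

```python
import numpy as np

# Toy concept vectors in a hand-made 4-dimensional "semantic space".
# The coordinates are invented for this example, not learned.
concepts = {
    "small": np.array([1.0, 0.0, 0.0, 0.0]),
    "large": np.array([1.0, 1.0, 0.0, 0.0]),
    "quiet": np.array([0.0, 0.0, 1.0, 0.0]),
    "loud":  np.array([0.0, 1.0, 1.0, 0.0]),
}

def quadrature(a, b, c, space):
    """Solve A:B::C:D by D = C + B - A, returning the nearest known concept."""
    target = space[c] + space[b] - space[a]
    return min(space, key=lambda k: np.linalg.norm(space[k] - target))

print(quadrature("small", "large", "quiet", concepts))  # -> loud
```

The hard part, of course, is not the arithmetic but getting a representation in which the differences that matter line up along consistent directions.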

Those have their own problems, essentially the same ones as any symbolic AI: the operations and ontology in, e.g., Copycat are all idiosyncratic and hand-coded, and there’s no clear way to build a learning machine that extends them automatically.

I’ll go out on a limb and guess that the ultimate solution will involve elements of both extremes.  Search will be needed both to find new operations for symbolic formulations, and to find appropriate mappings into n-space for the subsymbolic ones.  A few key insights — new Good Tricks — will be necessary to unify the known methods and give us a solid understanding of, and engine for, analogical quadrature.  That’ll be a huge step towards general AI.

4 Responses to “Analogical Quadrature”

  1. Valkyrie Ice Says:

    And if the continuing advances being made in graphene electronics allow us to have 1 to 100 terahertz chips available by 2012-15? How do you see such a massive leap in computing power affecting your predictions?

  2. Alfred Says:

    You speak of Analogical Quadrature both here and in your books. Yet you give no clear explanation of how to implement it, or even an idea of how it’s implemented in the human brain. If you have some ideas along this route, I’m sure many (including me) would be interested in hearing about them, no matter how speculative they are.

  3. J. Storrs Hall Says:

    Valkyrie: I expect graphene electronics will basically keep us on the Moore’s Law track. Remember that we expect a factor of 100 improvement by 2020 anyway. I’d be surprised if graphene caused a major bump in that already ambitious schedule. If you gave me a 100 THz cpu today it wouldn’t speed up my whole computer that much, because the Von Neumann bottleneck is still the bottleneck. 10 years is a fairly short time to rearrange as complex a technology as computers to take advantage of a radical improvement in one part.

    Alfred: The implementation would be some blend of the simple equation I gave above, and an industrial version of Copycat, depending on how close the representation was to n-space or semantic nets respectively. In practice, we expect different concepts to have different representations, and thus for there to be several, possibly many, different algorithms for AQ in a full-blown cognitive architecture. In any case, the really hard part is for the system to come up with new representations, and thus new corresponding implementations for AQ (and every other operation it needs to do) on its own as it learns and invents. To see how hard the representation problem is, try to define a representation for representations and algorithms such that you can represent old representation X, new representation Y, and old algorithm AQ(X), and using AQ(your representation), derive AQ(Y).

  4. Nicole Tedesco Says:

    What will help, I believe, is to evolve our current computing architectures from those of simple vector processing to categorized space processing (yes, I am walking up the topology ladder here). Brains seem to be really, really good channel processors. In any visual image, it seems brains are really good at separating out various “channels” or characteristics of the images presented before them. Think of every action in Adobe Photoshop being executed on an image, in parallel, such as contour discovery, shadow sampling, noise detection/correction and so on. Each of the processed results is a data channel which seems to be managed separately, and in parallel. For instance, the vertical lines of multiple images may be “remembered” together in their own space. Of course each of these channels becomes associated (mapped via topological morphisms) with others that have occurred near-simultaneously in time and in conjunction with other related signals. Navigating a vertical-line-only channel in the “vertical line space” can be quite rapid in the brain — consider it a particular “index” into the brain’s memories. Navigating the horizontal-line-only channel is also relatively quick, and of course can happen in parallel. To find where these spaces intersect, let’s say, with an additional “red only” channel is not only a rapid method of detecting similarities in existing memories but additional convergence points can emerge which can also point the way to new possibilities.

    Intersections in this space for existing memories should interfere destructively with the “negative space” patterns of the problem at hand. The remaining patterns will be those not yet tried. The best patterns should constructively interfere and, de Broglie/Bohm style, gather enough of the attention energy and point the way to a novel solution to an existing problem. (In the human brain the striatum would light up after a particular threshold has been passed, triggering the “good feeling” that indicates we may have found our solution.)
