
Archive for the 'Machine Intelligence' Category

Off to AGI-10

Posted by J. Storrs Hall on March 3rd, 2010

I’m on my way to AGI-10, the general AI conference, in Lugano.  If any readers are attending, let’s get together. Among other things, we’ll be unveiling a preliminary take on the AGI Roadmap (of which Foresight is a sponsor).

AI: Summing up

Posted by J. Storrs Hall on February 22nd, 2010

Let’s try to pull all the threads together, as futurists — which is the whole point here — and get some idea about when it might be reasonable to expect AI to show up.  When I say AI I want to look at the entire diahuman range, so the answer would still be a range [...]

Stackless brain

Posted by J. Storrs Hall on February 18th, 2010

Why we should suspect that the brain has a limited ability to recurse, but prefers to daisy-chain instead: The house the malt the rat the cat the dog the cow with the crumpled horn the maiden all forlorn the man all tattered and torn the priest all shaven and shorn the cock that crowed in [...]
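The contrast the post draws can be sketched in a few lines. The "House that Jack Built" chain is right-branching: each clause closes before the next opens, so a processor needs only constant working memory, whereas center-embedded sentences ("the rat the cat the dog chased killed ate") force every pending subject onto a stack until its verb arrives. A minimal toy model, with the sentence shapes reduced to streams of subject (N) and verb (V) tokens:

```python
# Toy model: track how deep a parser's stack must grow for the two
# sentence shapes. Token streams are invented for illustration.

def max_stack_center_embedded(n):
    """'The rat [the cat [the dog chased] killed] ate': all n subjects
    arrive before any verb, so each must be held open on the stack."""
    depth = max_depth = 0
    for token in ["N"] * n + ["V"] * n:
        depth += 1 if token == "N" else -1
        max_depth = max(max_depth, depth)
    return max_depth

def max_stack_daisy_chained(n):
    """'...the dog that worried the cat that killed the rat...': each
    clause's subject is met by its verb before the next clause opens."""
    depth = max_depth = 0
    for _ in range(n):
        for token in ["N", "V"]:
            depth += 1 if token == "N" else -1
            max_depth = max(max_depth, depth)
    return max_depth

print(max_stack_center_embedded(5))  # 5: grows with embedding depth
print(max_stack_daisy_chained(5))    # 1: constant, however long the chain
```

The asymmetry is the point: a brain with little or no stack handles arbitrarily long daisy chains but chokes on even modest center-embedding, which matches human performance.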

Ethics for machines

Posted by J. Storrs Hall on February 17th, 2010

… to boldly go where no man has gone before! This final phrase of the classic Star Trek opening spiel had two problems with it, one as seen by people after the fact, and the other as seen by those who had gone before. As seen by earlier generations, the phrase “to boldly go” is [...]

NLP: State of the Art

Posted by J. Storrs Hall on February 15th, 2010

Over the past ten to fifteen years, research in computational linguistics has undergone a dramatic “paradigm shift.” Statistical learning methods that automatically acquire knowledge for language processing from empirical data have largely supplanted systems based on human knowledge engineering. The original success of statistical methods in speech recognition has been particularly influential in motivating the [...]
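The paradigm shift the excerpt describes is from hand-coded linguistic rules to knowledge acquired automatically from counts over data. A minimal sketch of the statistical style, using a bigram model whose only "knowledge" is what it has counted in a tiny invented corpus:

```python
from collections import Counter, defaultdict

# Minimal statistical language model: no hand-engineered rules, only
# counts estimated from training text. The corpus is invented for
# illustration; real systems train on millions of words with smoothing.

corpus = "the cat sat on the mat the dog sat on the rug".split()

bigrams = defaultdict(Counter)
for w1, w2 in zip(corpus, corpus[1:]):
    bigrams[w1][w2] += 1

def p_next(w1, w2):
    """Maximum-likelihood estimate of P(w2 | w1) from the counts alone."""
    total = sum(bigrams[w1].values())
    return bigrams[w1][w2] / total if total else 0.0

print(p_next("the", "cat"))  # 0.25: one of four observed continuations
print(p_next("sat", "on"))   # 1.0: "sat" is always followed by "on"
```

Everything the model "knows" about English came from the data, which is exactly why the approach scales where knowledge engineering stalled.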

Visualizing the Cosmic All

Posted by J. Storrs Hall on February 11th, 2010

In E.E. Smith’s famous Lensman series, the galaxy is the battleground between two races of superintelligent beings, the (good) Arisians and the (evil) Eddorians.  When I listen to people who worry that we are about to create a superintelligence which will take over the world, I get the impression they’ve come from reading “Galactic Patrol” [...]

Natural Language Understanding

Posted by J. Storrs Hall on February 9th, 2010

“It was a true solar-plexus blow, and completely knocked out, Perkins staggered back against the instrument-board. His outflung arm pushed the power-lever out to its last notch, throwing full current through the bar, which was pointed straight up as it had been when they made their landing.” My current research in AI, such as it [...]

Graphene transistor roundup

Posted by J. Storrs Hall on February 8th, 2010

Phaedon Avouris, winner of the Feynman Prize in 1999, is head of the nanoscale science and technology group at IBM, which has recently reported significant advances in synthesizing transistors from graphene using conventional lithography methods.
IBM Demonstrates Graphene Transistor Twice as Fast as Silicon
Graphene transistors promise 100GHz speeds
Graphene Transistors that Can Work at [...]

The first AI blog

Posted by J. Storrs Hall on February 5th, 2010

The first AI blog was written by a major, highly respected figure in the field. It consisted, as a blog should, of a series of short essays on various subjects relating to the central topic. It appeared in the mid-80s, just as the ARPAnet was transforming over into the internet. The only little thing I [...]

Analogical Quadrature

Posted by J. Storrs Hall on February 4th, 2010

So far, in making my case that AI is (a) possible and (b) likely in the next decade or two, I’ve focused on techniques which are or easily could be part of a generally intelligent system, and which will clearly be enhanced by the two orders of magnitude increase in processing power we expect from [...]

Associative memories

Posted by J. Storrs Hall on February 3rd, 2010

AI researchers in the 80s ran into a problem: the more their systems knew, the slower they ran.  Whereas we know that people who learn more tend to get faster (and better in other ways) at whatever it is they’re doing. The solution, of course, is: duh, the brain doesn’t work like a von Neumann [...]
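The contrast the post points at can be illustrated with a toy content-addressable memory: retrieval goes by similarity to the whole stored pattern rather than by walking an ever-growing data structure, so a noisy or partial cue still lands on the right item. The patterns below are invented for illustration:

```python
# Toy associative (content-addressable) memory: look up by pattern
# similarity, not by address. Stored patterns are +/-1 vectors,
# invented purely for illustration.

patterns = {
    "apple":  [1, 1, -1, -1, 1, -1],
    "banana": [-1, 1, 1, -1, -1, 1],
}

def similarity(a, b):
    """Dot product: agreements minus disagreements between +/-1 bits."""
    return sum(x * y for x, y in zip(a, b))

def recall(cue):
    """Return the stored key whose pattern best matches the cue."""
    return max(patterns, key=lambda k: similarity(patterns[k], cue))

noisy_cue = [1, 1, -1, 1, 1, -1]   # 'apple' with one bit flipped
print(recall(noisy_cue))            # apple
```

In a parallel substrate like the brain, every stored pattern is compared at once, so adding knowledge need not slow retrieval down — the opposite of the serial-lookup behavior that plagued 80s knowledge bases.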

Learning and search

Posted by J. Storrs Hall on February 1st, 2010

So we will take it as given, or at least observed in some cases and reasonably likely in general, that AI can, at the current state of the programming art, handle any particular well-specified task, given enough (human) programming effort aimed at that one task. We can be a bit more specific about what “well-specified” [...]

The Sigil of Scoteia

Posted by J. Storrs Hall on January 28th, 2010

At the Foresight conference special-interest lunch on IQ tests for AI, Monica Anderson suggested a test involving separating text which had had spaces and punctuation removed, back into words.  As a somewhat whimsical version of the test, I suggested the Sigil of Scoteia: In case you’re unfamiliar with it, it’s the frontispiece of the novel [...]
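The dictionary-assisted version of Anderson's test is a classic dynamic-programming exercise. A minimal sketch, with an invented vocabulary (the real test is hard precisely because an AI would have to bring its own lexicon and resolve ambiguity):

```python
# Sketch of the word-segmentation test: recover word boundaries from
# text with spaces removed, given a vocabulary. The vocabulary here is
# invented for illustration.

def segment(text, vocab):
    """Return one segmentation of `text` into vocab words, or None."""
    best = {0: []}                     # best[i] = word list covering text[:i]
    for i in range(1, len(text) + 1):
        for j in range(i):
            if j in best and text[j:i] in vocab:
                best[i] = best[j] + [text[j:i]]
                break
    return best.get(len(text))

vocab = {"the", "sigil", "of", "scoteia"}
print(segment("thesigilofscoteia", vocab))  # ['the', 'sigil', 'of', 'scoteia']
```

With a dictionary the mechanical part is easy; what makes it an interesting IQ test is choosing among competing segmentations using context, which is where the intelligence comes in.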

AI: how close are we?

Posted by J. Storrs Hall on January 27th, 2010

In the terminology I introduced in Beyond AI, all the AI we have right now is distinctly hypohuman: The overall question we are considering, “is AI possible?”, can be summed up essentially as “is diahuman AI possible?”  The range of things humans can do, done as flexibly as humans can do them, and learned the [...]

A brief history of AI

Posted by J. Storrs Hall on January 25th, 2010

40s: Cybernetics, the notion the brain did logic in circuits, feedback
50s: the computer, stored programs, Logic Theorist
60s: LISP, semantic nets, GOFAI
70s: SHRDLU, AM
80s: AI winter, expert systems, neural nets
90s: robots, machine learning
00s: DARPA grand challenge level of competence
The main point of this post is to answer any objections [...]

Is AI really possible?

Posted by J. Storrs Hall on January 24th, 2010

I’m about to start a series of posts on the topic of why I think AI is actually possible.  I realize that most of the readers here probably don’t need too much convincing on that subject, but you’d be surprised how many very smart people, many of them professors of computer science, are skeptical to [...]

Last day of free webcast of Foresight Conference on nanotech & AI

Posted by Christine Peterson on January 17th, 2010

Today is the last day of the free webcast of the 2010 Foresight Conference being held now in Palo Alto. The bandwidth coming out of the Sheraton is marginal, so the video may be low-res, but we will be posting high-res videos later, funds permitting (feel free to assist with this goal!). You can also follow the conference on [...]

Civilization, B.S.O.D.

Posted by J. Storrs Hall on January 6th, 2010

The other day I got a worried call from my mother-in-law.  My wife usually calls her during her commute, but that day she neither called nor answered her phone. Turns out my wife’s iPhone had crashed — the software had wedged and there was no way to reboot.  The amusing, if you can call it [...]

Is the brain a reasonable AGI design?

Posted by J. Storrs Hall on December 25th, 2009

Shane Legg seems to think so:  Tick, tock, tick, tock… BING. Having dealt with computation, now we get to the algorithm side of things. One of the big things influencing me this year has been learning about how much we understand about how the brain works, in particular, how much we know that should be [...]

Ray Solomonoff, 1926-2009

Posted by J. Storrs Hall on December 12th, 2009

Ray Solomonoff, inventor of algorithmic probability and one of the founding fathers of AI, died December 7 after a brief illness. I met Ray at the AI@50 conference at Dartmouth, held to celebrate the first AI conference and honor the five then-surviving participants. He was very friendly, still sharp and insightful, and we had [...]