
Archive for the 'Machine Intelligence' Category

The first AI blog

Posted by J. Storrs Hall on February 5th, 2010

The first AI blog was written by a major, highly respected figure in the field. It consisted, as a blog should, of a series of short essays on various subjects relating to the central topic. It appeared in the mid-80s, just as the ARPAnet was transforming into the internet. The only little thing I [...]

Analogical Quadrature

Posted by J. Storrs Hall on February 4th, 2010

So far, in making my case that AI is (a) possible and (b) likely in the next decade or two, I’ve focused on techniques which are or easily could be part of a generally intelligent system, and which will clearly be enhanced by the two orders of magnitude increase in processing power we expect from [...]

Associative memories

Posted by J. Storrs Hall on February 3rd, 2010

AI researchers in the 80s ran into a problem: the more their systems knew, the slower they ran, whereas we know that people who learn more tend to get faster (and better in other ways) at whatever it is they’re doing. The solution, of course, is: duh, the brain doesn’t work like a von Neumann [...]
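
A minimal sketch of the contrast (not from the post itself; the cues and function names below are made up for illustration): retrieval by scanning every stored fact gets slower as the knowledge base grows, while a content-addressed (associative) lookup stays roughly constant.

    # Toy contrast in Python: linear scan vs. content-addressed lookup.
    facts = {}  # cue -> fact; a stand-in for an associative memory

    def remember(cue, fact):
        """Store a fact under a cue."""
        facts[cue] = fact

    def recall(cue):
        """Associative recall: one hash lookup, regardless of how much is stored."""
        return facts.get(cue)

    def recall_by_scan(cue, fact_list):
        """The 'more you know, the slower you run' version: scan every entry."""
        for stored_cue, fact in fact_list:
            if stored_cue == cue:
                return fact
        return None

    remember("capital of France", "Paris")
    remember("inventor of LISP", "John McCarthy")
    print(recall("capital of France"))  # found without scanning everything stored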

Learning and search

Posted by J. Storrs Hall on February 1st, 2010

So we will take it as given, or at least observed in some cases and reasonably likely in general, that AI can, at the current state of the programming art, handle any particular well-specified task, given enough (human) programming effort aimed at that one task. We can be a bit more specific about what “well-specified” [...]
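
A toy illustration of what a well-specified task looks like (not from the post; the puzzle is just a stand-in): once a task has explicit states, legal moves, and a goal test, generic search can grind out a solution. Here breadth-first search solves the old two-jug measuring puzzle.

    # Measure exactly GOAL liters using a 3-liter and a 5-liter jug.
    from collections import deque

    CAPACITIES = (3, 5)   # jug sizes in liters
    GOAL = 4              # amount we want to end up with

    def moves(state):
        """All states reachable in one step: fill, empty, or pour between jugs."""
        a, b = state
        yield (CAPACITIES[0], b)            # fill jug A
        yield (a, CAPACITIES[1])            # fill jug B
        yield (0, b)                        # empty jug A
        yield (a, 0)                        # empty jug B
        pour = min(a, CAPACITIES[1] - b)    # pour A into B
        yield (a - pour, b + pour)
        pour = min(b, CAPACITIES[0] - a)    # pour B into A
        yield (a + pour, b - pour)

    def solve():
        """Breadth-first search from (0, 0) to any state containing GOAL liters."""
        start = (0, 0)
        frontier, seen = deque([[start]]), {start}
        while frontier:
            path = frontier.popleft()
            if GOAL in path[-1]:
                return path
            for nxt in moves(path[-1]):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(path + [nxt])

    print(solve())  # a shortest sequence of (jug A, jug B) states ending with 4 liters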

The Sigil of Scoteia

Posted by J. Storrs Hall on January 28th, 2010

At the Foresight conference special-interest lunch on IQ tests for AI, Monica Anderson suggested a test involving separating text which had had spaces and punctuation removed back into words. As a somewhat whimsical version of the test, I suggested the Sigil of Scoteia: In case you’re unfamiliar with it, it’s the frontispiece of the novel [...]
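
The suggested test is easy to state in code. A minimal sketch (not Anderson's actual test harness; the tiny word list is made up for illustration) that puts the spaces back into despaced text by dynamic programming over a dictionary:

    # Recover word boundaries from text with the spaces removed.
    WORDS = {"the", "sigil", "of", "scoteia", "is", "a", "frontispiece"}

    def segment(text):
        """Return one segmentation of `text` into dictionary words, or None."""
        # best[i] holds a valid segmentation of text[:i], if one exists
        best = [None] * (len(text) + 1)
        best[0] = []
        for i in range(1, len(text) + 1):
            for j in range(i):
                if best[j] is not None and text[j:i] in WORDS:
                    best[i] = best[j] + [text[j:i]]
                    break
        return best[len(text)]

    print(segment("thesigilofscoteiaisafrontispiece"))
    # ['the', 'sigil', 'of', 'scoteia', 'is', 'a', 'frontispiece']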

AI: how close are we?

Posted by J. Storrs Hall on January 27th, 2010

In the terminology I introduced in Beyond AI, all the AI we have right now is distinctly hypohuman. The overall question we are considering, “is AI possible?”, can be summed up essentially as “is diahuman AI possible?” The range of things humans can do, done as flexibly as humans can do them, and learned the [...]

A brief history of AI

Posted by J. Storrs Hall on January 25th, 2010

40s: Cybernetics, the notion the brain did logic in circuits, feedback
50s: the computer, stored programs, Logic Theorist
60s: LISP, semantic nets, GOFAI
70s: SHRDLU, AM
80s: AI winter, expert systems, neural nets
90s: robots, machine learning
00s: DARPA grand challenge level of competence

The main point of this post is to answer any objections [...]

Is AI really possible?

Posted by J. Storrs Hall on January 24th, 2010

I’m about to start a series of posts on the topic of why I think AI is actually possible. I realize that most of the readers here probably don’t need too much convincing on that subject, but you’d be surprised how many very smart people, many of them professors of computer science, are skeptical to [...]

Last day of free webcast of Foresight Conference on nanotech & AI

Posted by Christine Peterson on January 17th, 2010

Today is the last day of the free webcast of the 2010 Foresight Conference being held now in Palo Alto. The bandwidth coming out of the Sheraton is marginal, so the video may be low-res, but we will be posting high-res videos later, funds permitting (feel free to assist with this goal!). You can also follow the conference on [...]

Civilization, B.S.O.D.

Posted by J. Storrs Hall on January 6th, 2010

The other day I got a worried call from my mother-in-law. My wife usually calls her during her commute, but that day she neither called nor answered her phone. Turns out my wife’s iPhone had crashed — the software had wedged and there was no way to reboot. The amusing, if you can call it [...]

Is the brain a reasonable AGI design?

Posted by J. Storrs Hall on December 25th, 2009

Shane Legg seems to think so:  Tick, tock, tick, tock… BING. Having dealt with computation, now we get to the algorithm side of things. One of the big things influencing me this year has been learning about how much we understand about how the brain works, in particular, how much we know that should be [...]

Ray Solomonoff, 1926-2009

Posted by J. Storrs Hall on December 12th, 2009

Ray Solomonoff, inventor of algorithmic probability and one of the founding fathers of AI, died December 7 after a brief illness. I met Ray at the AI@50 conference at Dartmouth, given to celebrate the first AI conference and honor the five then surviving participants. He was very friendly, still sharp and insightful, and we had [...]

Intelligence and the Chinese Room

Posted by J. Storrs Hall on December 9th, 2009

Michael A. writes: I support the consensus science on intelligence for the sake of promoting truth, but I also must admit that it especially concerns me that the modern denial of the reality of different intelligence levels will cause ethicists and the public to ignore the risks from human-equivalent artificial intelligence. After all, if all [...]

Singularity and the codic cortex

Posted by J. Storrs Hall on December 3rd, 2009

Once upon a time, the story goes, there was a programmer.  He was an amazingly productive programmer, producing thousands of working, debugged lines of code every day. Then he learned about DO-loops. One of the foundational concepts behind the idea of Singularity is the notion of self-improving AI.  And one of the key notions behind [...]
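
For readers who haven't heard the old story, the punchline is that counted in lines of code, learning about loops makes a programmer look less productive. A toy illustration (not from the post):

    # The "thousands of lines a day" version:
    print("check sensor 1")
    print("check sensor 2")
    print("check sensor 3")

    # The same output, written after learning about loops:
    for i in range(1, 4):
        print(f"check sensor {i}")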

Cryonics and Philosophy of Mind

Posted by J. Storrs Hall on December 2nd, 2009

There’s an interesting debate between Bryan Caplan and Robin Hanson on their respective blogs. Caplan writes: … Robin didn’t care about biological survival.  He didn’t need his brain implanted in a cloned body.  He just wanted his neurons preserved well enough to “upload himself” into a computer. To my mind, it was ridiculously easy to [...]

Reynolds advocates faster nano/AI R&D for safety reasons

Posted by Christine Peterson on November 19th, 2009

In Popular Mechanics, longtime Foresight friend Prof. Glenn Reynolds looks at the future of nanotech and artificial intelligence, among other things examining safety issues, including one call for potentially dangerous technologies to be relinquished. He takes a counterintuitive stance, which we’ve discussed here at Foresight over the years: But I wonder if that’s such a [...]

The bad robot takeover

Posted by J. Storrs Hall on November 9th, 2009

From the Albany (OR) Democrat Herald:

Phone robots: Let’s all rebel
By Hasso Hering, Columnist | Posted: Saturday, November 7, 2009 11:45 pm

What this country needs – even more than a shorter baseball season so the World Series doesn’t go into November – is a popular uprising against the tyranny of telephone robots. This [...]

Brain mapping and the connectome

Posted by J. Storrs Hall on November 6th, 2009

I’m at the AAAI Fall Symposium session on Biologically Inspired Cognitive Architectures, and there was a really interesting talk by Walter Schneider of Pitt about progress in mapping the nerve bundles that are the “information superhighways” between the various parts of the brain.  You’ll find his slides from last year’s talk on his home page, and [...]

Is Robo Habilis a gateway to Intelligence?

Posted by J. Storrs Hall on November 5th, 2009

In response to my Robo Habilis post, Tim Tyler replied: An intelligence challenge should not involve building mechanical robot controllers – IMO. That’s a bit of a different problem – and a rather difficult one – because of the long build-test cycle involved in such projects. There are plenty of purer tests of intelligence that [...]

More on the AI takeover

Posted by J. Storrs Hall on November 4th, 2009

There are at least four levels of intelligence that AI will have to pass through to get to the take-over-the-world level. In Beyond AI I referred to them as hypohuman, diahuman, epihuman, and hyperhuman; but just for fun let’s use fake species names: Robo insectis: rote, mechanical gadgets (or thinkers) with hand-coded skills, such [...]