
Singularity Institute releases ‘Levels of Organization’

Eliezer Yudkowsky writes: "The Singularity Institute has released a draft of the paper 'Levels of Organization in General Intelligence', to appear as a chapter in 'Real AI: New Approaches to Artificial General Intelligence' (Goertzel and Pennachin, eds., forthcoming). A flat-file version is available (382K).

Everyone has been patiently waiting for science to cough up a general theory of intelligence. This paper contains the Singularity Institute's shot at the problem. The paper's goals are to describe intelligence as a complex supersystem of interdependent, internally specialized subsystems; to structure our understanding of cognition using levels of functional organization; and to integrate our understanding of general intelligence with our understanding of neuroscience, cognitive psychology, and evolutionary theory. The final part of the paper also includes a discussion of recursive self-improvement and seed AI."

13 Responses to “Singularity Institute releases ‘Levels of Organization’”

  1. Mr_Farlops Says:

    Is a general theory needed?

    I got the impression that the reason strong AI failed to materialize in the 80's was that researchers kept looking for some grand unifying concept of the mind. I think the neurologists were probably the most skeptical of this approach, knowing that the mind just emerges from a stupefyingly huge number of fiendishly complex hacks culled by blind selection.

    This is one of the reasons why I think that nano will arrive before strong AI. Nano will give us the details needed to model neurons well. And it will give us the details needed to accurately model the histology of the brain.

    After that it becomes a matter of growing brains that are shaped by experience in the bodies of artificial organisms. In this way we can reach the goal of strong AI without having to understand all the essential yet endlessly trivial details in brain architecture.

  2. Corwin Says:

    Re:Is a general theory needed?

    Is that really a good thing, though?

    I mean… let's look at it this way. Why do we want or need AI? If we just want to 'grow brains shaped by experience…' there's this thing called 'sexual intercourse' or 'sex' for short. Unless proper precautions are taken… this 'sex' thingy achieves exactly this goal. ;)

    Now… if we want to create particular types of artificial or pseudo-intelligence, say for cheap labor… I don't have a problem with that. This also doesn't require full 'AI' per se. Take a domestic servant as an example… does that require full human intelligence? Not really. A domestic servant AI would only have to be smart enough to perform a few basic tasks, and be programmed to simulate enough intelligence to respond to questions. But in order to do that, don't we actually need to understand the underlying properties involved? I'm generally quite opposed to tinkering with complicated or emergent systems unless we understand the basic principles involved. Bill Gates is wrong. We DO need to know 'how it works' and what we're doing.

  3. Kadamose Says:

    Re:Is a general theory needed?

    The only thing that should ever be banned is 'sex'. I'd prefer an AI causing havoc to a new generation of little brats any day.

  4. Corwin Says:

    Re:Is a general theory needed?

    Well, I suspect it's safe to say that we here would all really prefer it if you didn't breed anyway. (Not that it sounds like that will be much of an issue…) So why don't you just go live on a mountaintop in Sri Lanka and pore over Zecharia Sitchin books, and leave the rest of us alone to live our lives, learn, grow, progress, and yes, occasionally get laid. Okie? Okie.

    *bubye*

  5. Kadamose Says:

    Re:Is a general theory needed?

    How can you say that you're going to live your lives, learn, grow, and progress when you are all simply repeating the same mistakes as your pathetic ancestors? Yes, they did not have the technology that we do now (this only applies to anything after circa 3000 BC), but they did have the same mindset (i.e., Opportunity=$$$=Power=War).

    If I were a god, I would find mankind unfit to even live in the first place. Things must change, otherwise, we're all dead anyway, regardless of how far our technology takes us.

  6. Corwin Says:

    Re:Is a general theory needed?

    You know, if you don't stop this you're going to go blind Kad…

  7. Mr_Farlops Says:

    Re:Is a general theory needed?

    "I'm generally quite opposed to tinkering with complicated or emergent systems unless we understand the basic principles involved."

    Yes, you do have a point there; these creatures will be at least as unpredictable as people are. And for that reason we may not want to mess around with that until we are good and ready.

    Also, it's true that a lot of the things we want done don't really require strong AI. We just need a set of programs robust enough not to get confused by the mildly unpredictable situations we place them in. For example, if the robot butler doesn't have to spend twelve hours remapping the room after you move the furniture a bit, we are making some progress. That's just ordinary, weak AI and, Microsoft's paperclip aside, we are already making progress on that. But will this lead to superhuman intelligence, assuming such a thing is possible? I doubt it. Perhaps these weak AI applications might be superhumanly intelligent (again, define that as you wish) in their very limited fields — Deep Blue and chess, for example — but I don't think most people will really think of these tools as being intelligent or conscious, let alone superhumanly intelligent. Which means they aren't relevant to Moravec's brain-taping idea.

    And I do agree that we really should understand a lot more about how our brains work before we attempt to recreate them in artificial life or attempt to improve on them. The prospect of psychotics with god-like intelligence is very chilling.

  8. Corwin Says:

    Re:Is a general theory needed?

    Hell, we could probably do the domestic servant bit now… AI is far enough along for that. Black & White may have been a lame game… but when AI at that level can be found in a consumer-level product… MIT has much better. ;)

  9. Steve_Moniz Says:

    Re:Is a general theory needed?

    I'm not sure I want to bring strong AI into this world…I can't even keep house plants alive!

  10. Corwin Says:

    Re:Is a general theory needed?

    Yeah but you don't have to water AI…. they can be programmed to plug themselves in… ;)

  11. Mr_Farlops Says:

    Re:Is a general theory needed?

    "Hell we could probably do the domestic servant bit now"

    Hmm. I don't know. That car autopilot that CMU is developing still occasionally mistakes trees for roadways. I don't want 250 kilos of confused robot butler rampaging through my house just because I moved some papers or books around!

  12. Shadow Says:

    Is this really new?

    Didn't Douglas Hofstadter write an entire book — "Gödel, Escher, Bach" — on the exact same subject? What's the new material in this paper?

  13. Shadow Says:

    Re:Is a general theory needed?

    I don't think there's any conceivable "application" for a true AI in the normal sense. Such a technology would, by definition, be intelligent life, and we don't generally think of intelligent life in terms of its applications. The only realistic impetus for development of true AI is the desire to discover (is it invention or is it discovery? philosophical question, I guess; maybe "meet" would be a more appropriate word) new forms of intelligence. Regardless, I think we will certainly have to know a great deal more about how our own intelligence operates before this would even become vaguely possible. That's one purpose for research in this area.

    There are other potential applications for understanding our own intelligence, though. If we can get to the point that we have an accurate symbolic understanding of how the human mind works, then we could theoretically synthesize those symbols (at this level of understanding, AI couldn't be very far away). Given that power, we could possibly devise a method for digitizing our thoughts at a symbolic level. The applications of such technology would be tremendous. Telekinesis and telepathy could become reality. Imagine humans telekinetically controlling fleets of nanites — they'd be like sorcerers. Eh, maybe I'm just dreaming, but the potential is there, however far into the depths of the future it may be. That's how I'd justify the research.
