The Singularity Summit is going on in NYC this weekend. This will be an open thread for comments or questions about the talks (or any related subject.)
This entry was posted on Saturday, October 3rd, 2009 at 7:15 AM and is filed under Uncategorized.
Does anyone there, or anyone at Foresight, believe that anything special could happen when supercomputers cross the line into human-brain computational power territory? Supposedly, the human brain operates at between 10 and 20 petaflops, and supercomputers going online soon are supposed to be in that range. Couldn’t these machines support a human-level AI? Is anyone trying to get this done? Isn’t this slightly more important than simulating the weather or nuclear weapons? Surely someone besides me thinks about and wants this…
I am attending the Singularity Summit. The first day went pretty well. I met Ray Kurzweil himself, and we discussed nanotechnology. I asked him which development pathway he thinks is the most promising to take us from where we are now to MNT systems; he seems to think a combination of many, especially things like dip-pen nanolithography.
Regarding AI systems, there were some very interesting proposals, such as “Whole Brain Emulation,” in which a real-time simulation of the human brain would be built from advanced scanning of neural and molecular structure. Some disagreed as to whether this alone could bring about a truly posthuman AI. An interesting question was raised: if we were to make a real-time human brain simulation and it was conscious, should it be protected with the same rights we flesh-and-blood beings get? One researcher said yes.
Kurzweil and some others seem to disagree, believing that molecular technologies combined with the right software will be sufficient to bring about human-level AI.
One issue was brought up time and time again: how do we build AI systems and superintelligences, and reap their benefits, while not allowing them to be abused or to turn on us? This is a big topic and a fear many people have.
One concept, put forth by a Dr. Chalmers, was to create AI systems as virtual software programs confined within a computer, instead of robotic AI systems that interact with our environment; the idea being that if the AI turns bad, we can unplug it. This raises issues such as: (1) should you? and (2) if the system is sufficiently advanced, and can figure out who we are and that it is a being within a computer, it could use deception and trickery to somehow “leak out” into our world. One idea is that we should upload our minds onto non-biological substrates and integrate with advanced AI in order to survive.