Exercise Material: Assumptions for the purpose of this discussion:
AI with human-level intelligence, knowledge, understanding & self-awareness is possible
It is likely to happen soon, within 5 to 30 years, either by design or by accident
Because such an AI would be able to re-design itself (much better than we can), it would be able to improve its intelligence (i.e., bootstrap) and to reprogram its primary goal system. It is hard to see how we could possibly contain, control, or outsmart it. Whether by direct action or by persuading humans, it could pursue its own goals, whatever they may be. Is there any way to predict and/or guide its goals?
It seems to me that the best we can do is to approach this question from a philosophical and psychological perspective: Firstly, to what extent can we put ourselves "in the AI's shoes" to predict what rational value system (morality/ethics) it is likely to develop? Secondly, how will its "view of the world" (and of humans), shaped and colored by its environment, affect its goals? Thirdly, would deeply embedded "emotional"/motivational structures ultimately direct its goal-setting?
Biotechnology: Lessons for Nanotechnology
Biotechnology has molecular mechanisms, working replicators, major investment, growing products, and a worldwide market. There are lessons to learn from the manner of adoption exhibited by the general public. Explore what these might be and which might apply to the development of nanotechnology.
On Change: A Singularity Intro
Catalyst: John Smart (tentative)
One of the biggest barriers today standing in the way of deployment of advanced wireless communications systems turns out not to be the technology, but restrictions related to regulatory policies. This session will discuss the nature of these barriers and how they have affected the development of wireless data systems over the years.
We will also discuss on-going work using advanced wireless technology to deploy multiservice IP systems as part of infrastructure-development projects in the Kingdom of Tonga and with Native American groups in the US, and how such projects are able to deal with the limitations imposed by conventional regulatory barriers.
Collaboration and Activism Online
Catalyst: Jeff "Hemos" Bates (tentative)
The Internet, Slashdot and other emerging systems are providing environments for collaboration and information sharing. Explore the requirements for systems that will encourage communication and collaborative efforts.
Communicating Challenging High-Tech Issues to the Thinking Public
Techniques for making the communication of technical issues more effective.
Catalyst: Neil Jacobstein
Communicating complex technical issues to the thinking public is a particular challenge for several reasons. A large percentage of "the thinking public" is scientifically and technologically illiterate. Complex issues are often torqued by the media into personality-centric "pro vs. con" camps, even though the issues are typically multidimensional. There is increasing media competition for the public's increasingly short attention spans, and complex technical issues are not easily reduced to a few sound bites. Communicating technical issues in simple, clear terms is a skill that must be practiced regularly to be effective. Finally, the technologies that can augment technical communication and understanding are underutilized.
This segment will examine techniques that could make technical communication of complex technical issues more effective. These techniques include: being coached on how to conduct TV and radio interviews, using a specialized editor for written communications, hyperdocument systems with backlinks (Engelbart/Bootstrap), visual argumentation methods (Robert Horn), Science Courts (Arthur Kantrowitz), videos, simulation and animation, and individualized web-based tutoring systems.
http://www.nsf.gov/sbe/srs/seind98/frames.htm is a link to a 1998 NSF Report on Public Understanding of Science. Chapter 7 focuses on "Science & Technology: Public Attitudes & Public Understanding". It is particularly instructive on the importance of the science news media.
http://nasw.org/csn/ is a link to the National Association of Science Writers on "Communicating Science News: A Guide for Public Information Officers, Scientists and Physicians".
In a world of mature nanotechnology, it would be just as feasible for you to build a spaceship in your garage as it was for your great-great-grandfather to build a horse-drawn wagon in his barn. But will you be able to design it?
Catalyst: Josh Hall
Today's microprocessors are already so complex that they could not be designed without substantial help from design and simulation software. Will the same kinds of design techniques scale up to the many-orders-of-magnitude more complex systems we'll be able to build with nanotechnology?
Systems at the biological level of complexity, and higher, are so complex that we can't even imagine the amount of detail. Forget how cells work, forget how the endocrine system interacts with the nervous system, forget how we walk and talk, we can't even specify what someone ought to say (i.e. in order to be considered intelligent).
One way to get around the problem of design is to use evolution in one of a number of possible forms; we don't even have to have a really good idea of what we want the system to be like, we only have to "know it when we see it". Does this leave us unable to verify the systems we need to trust the most, such as cell repair machines, replacement bodies and minds, and so forth?
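The "know it when we see it" approach can be sketched as a toy genetic algorithm: the only thing we supply is a black-box judge of fitness, never a specification of the design itself. (The parameters and the example judge here are illustrative assumptions, not from the session material.)

```python
import random

def evolve(judge, genome_len=20, pop_size=50, generations=100):
    """Evolve bit-string 'designs' using only a black-box judge:
    we never specify the design, we just score what we see."""
    pop = [[random.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=judge, reverse=True)
        survivors = pop[:pop_size // 2]            # selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, genome_len)  # crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.1:              # occasional mutation
                child[random.randrange(genome_len)] ^= 1
            children.append(child)
        pop = survivors + children
    return max(pop, key=judge)

# Hypothetical judge: "I like designs with more 1-bits."
best = evolve(judge=sum)
```

The winning genome arrives with no explanation of why it works, which is exactly the verification gap raised here.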
Another way to have complex systems without rolling your own every time is to use previously tested and understood subparts or organizations of parts. Clearly a market in such subdesigns is a Good Thing; is such a market reachable from today's intellectual property mess?
Finally, if every average Joe can build a spaceship in his garage, what's to keep him from building an ICBM instead?
Computer Security: We're Not Even Close
Where do we stand, and what remains to be done?
Catalyst: Brad Templeton; Memetic Engineer(s): Roger Goun
Creative Systems: Setting Them Up For Yourself
Creative systems can lead to exciting results. What is out there, and what are the considerations in choosing the systems for you?
Catalyst: Robert Grudin (tentative)
Decision Support: Increasing Collective IQ
Software systems can help dispersed groups of individuals to work effectively together. Working on such systems can enable us and our technology to improve, and may launch smarter systems that will ultimately help us learn faster.
Catalyst: Doug Engelbart
Defining Progress: Beyond Today's Economic Measures
Is it everyone's dream to be wealthy in this superheated rat race? What might replace money in the evaluation of progress?
Catalyst: David Friedman (tentative)
Catalyst: Matt Taylor
Iteration 1. A scan of the conditions and opportunities that exist for the Foresight community, and for society in general.
Iteration 2. What are the alternatives? For example, do we encourage and embrace technological advances or do we relinquish the development of certain technologies? Openness versus privacy, etc.
Iteration 3. Testing of the alternatives. What are the implications of the different alternatives? The unintended consequences?
Iteration 4. Develop strategies and policies for Foresight and society that are built upon the memes that have been generated during the first three sessions.
Designing for Space/ Space Infrastructure
"I imagined how wonderful it would be to make some device which had even the possibility of ascending to Mars... Existence at last seemed very purposive." Robert Goddard
Catalyst: Josh Hall; Memetic Engineer(s):
If nanotechnology reduces costs for physical machinery the same way computer processing costs have plummeted, ocean liners will be as affordable as automobiles are now, and the Caribbean will look like Manhattan at rush hour. Perhaps they should buy spaceships instead?
Current technology makes it possible to live in space, but in extremely cumbersome suits. Recycling is primitive at best and in most cases non-existent; fresh food and oxygen must be boosted at $10,000 per pound like everything else. Nanotechnology changes all that.
Nanotechnological accidents may have the potential to seriously impact the habitability of Earth; nano-weapons certainly will. Should we leave all our eggs in one basket? Shouldn't we at least do some of the more dangerous experiments elsewhere?
When everything anybody does affects or can be affected by anyone else, society begins to be choked in a thickening web of regulatory sclerosis. Do we face an inevitable decline, or is there enough elbow room between the stars to restore our pioneer spirit?
Once in orbit, a wide variety of options, from ion engines to Drexler solar sails, are available to go wherever we want; LEO is "halfway to anywhere". Getting to orbit is the hard part: with current technology, it is expensive and dangerous to concentrate the energy needed. Can nanotechnology do better?
Many people are seeking compatible communities. Explore ways to improve outreach and community development as we all struggle with aspects of productivity and happiness.
Catalyst: Jeff "Hemos" Bates (tentative); Memetic Engineer(s): Howard Landman
Emergent Constraints in Complex Systems
Are certain aspects of the human future eminently predictable (e.g., molecular nanotechnology, intelligent machine systems, the Singularity) or are they chaotically uncertain?
Memetic Engineer(s): John Smart
The Constraint of Information Exponentiation. Relevant Book: The Evolutionary Trajectory: The Growth of Information in the History and Future of the Earth, Richard Coren. (Review at http://www.biomednet.com/hmsbeagle/57/viewpts/op_ed, brief registration necessary).
The Constraint of Non-Zero Sum Ethics. Relevant Book: Nonzero: The Logic of Human Destiny, Robert Wright. (Reviews, excerpts and other articles at: http://www.nonzero.org/)
The Constraint of Ever-Expanding Consciousness. Relevant Book: The Global Brain: The Evolution of the Mass Mind, Howard Bloom. (The hardcover will be out in August 2000. A draft version is now available at http://www.heise.de/tp/english/special/glob/default.html as a series of chapters: I to XXI. First and last chapters are especially informative.)
The Constraint of Space-Time Collapse. Relevant Books: Many books touch on this concept, but often stop short of implying it as a Universal constraint of computation, as I wish to suggest. Some classics: Miniaturization (1961), Gilbert, Horace D., Editor.
Are certain aspects of our technological future becoming ever more predictable (e.g., molecular nanotech, intelligent/conscious computers, a Singularity) rather than being chaotically uncertain? If so, which events, and why? Is there, as Holland, Wheeler, and many others now believe, a hidden and emerging order in the Universe as a computational system? Stuart Kauffman states that emergent laws of complex systems must exist, and be every bit as constraining as the simpler laws from which they arise.
If we agree certain major events must occur in the run-up to Singularity, what might be the emergent metalaws (constraints) that are guiding these processes? We can approach these issues with insights from information theory, physics, cog sci, theory of consciousness, computation, and many other disciplines. Several candidates will be proposed and others vigorously solicited. We'll end up with a title and a brief explanatory paragraph for each. Then we will take a poll on the potential validity of each.
Time permitting, we can then briefly ask what these constraints imply with regard to near-future events.
Encrypted Private Currency: Avoiding Government Panic
Will government action (whether intentional or not) exacerbate the negative effects of e-cash on privacy?
Catalyst: Luke Nosek; Memetic Engineer(s): Steve Schear
Game Theory and Idea Futures, Prospects and Problems
After reviewing the state of the idea, including recent developments in theory and application, we will discuss future prospects and strategies.
Catalyst: Robin Hanson; Memetic Engineer(s): Chris Hibbert, Ken Kittlitz
Can a modern, technical society be based on gift-giving? Examples exist, like the potlatch chiefdoms of the American Northwest.
Catalyst: Gayle Pergamit and Eric Raymond (tentatives)
Globalization: Decreasing National Sovereignty
How can global, long-term perspectives be used in decision making strategies? What are the advantages and disadvantages to pursuing globalization in the development of advanced technology, and what steps may be taken to ensure growth and integration across cultural barriers?
Catalyst: Philippe van Nedervelde
Complexity of issues, increasing numbers of people involved in decision making processes, and emerging conditions all impact collaboration efforts. Individuals cross national boundaries electronically, passing information and ideas to all parts of the globe. Developing long-term solutions will require unprecedented flexibility and cooperation that must transcend cultural differences.
Identify advantages that arise from pursuing a global strategy over a local one. Discuss disadvantages and methods for attempting reductions in regional conflict. In what areas will convergence make the most difference?
Incentive Engineering: Rapid Rewards by Design
Algorithms that manage resources in ways that enable both conventional management and market-based decision making will be useful in establishing agoric systems. Explore the boundaries between design and evolution.
Catalyst: Eric Raymond (tentative); Memetic Engineer(s): Chip Morningstar (tentative)
A point often missed in the discussion of intelligent systems is the important difference between what I shall call "knowledge intelligence" (KI) and "learning intelligence" (LI) at the two ends of the spectrum. Consider the differences and similarities in these systems.
Examples of KI include databases, dictionaries, web pages, expert systems and other human-coded abilities. LI is the ability to automatically gather free-form information and integrate it into a (conceptual) knowledge base. Furthermore, LI has the ability to autonomously learn techniques and abilities to achieve its goals. This includes the ability to gather data, and to improve its learning. It must learn how to learn.
I see LI as an important, but neglected, area of research crucial to achieving general purpose AI. The better we are at achieving artificial LI, the less time we need to invest in KI. LI produces KI as a byproduct.
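The KI/LI distinction can be made concrete with a toy sketch (the parsing rule and facts below are invented for illustration): KI is a hand-coded lookup, while an LI agent integrates free-form observations into its own knowledge base, so that KI accumulates as a byproduct of learning.

```python
# KI: a fixed, human-coded knowledge base -- it can only answer
# what a human has already entered.
knowledge_base = {"capital of France": "Paris"}

def ki_answer(question):
    return knowledge_base.get(question, "unknown")

# LI: an agent that builds its own knowledge base from free-form
# observations, producing KI as a byproduct.
class Learner:
    def __init__(self):
        self.kb = {}

    def observe(self, text):
        # Toy integration rule: extract "X is Y" statements.
        if " is " in text:
            subject, fact = text.split(" is ", 1)
            self.kb[subject.strip()] = fact.strip()

    def answer(self, question):
        return self.kb.get(question, "unknown")

agent = Learner()
agent.observe("water is H2O")
agent.observe("the sky is blue")
```

Real LI would of course also have to learn its own integration rules ("learn how to learn"), which this sketch deliberately hard-codes.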
Intellectual Property: Abundance and Abuse
It is becoming increasingly clear that the current patent system generates problems, and sometimes absurd, unanticipated side effects. Consider scenarios of the extremes and the middle ground in an attempt to find a new system that might reduce the errors.
Catalyst: Markus Krummenacker; Memetic Engineer(s): Brian Schar
Intellectual Property: New Foundations
Is it a social necessity for someone to control a specific object? How should such control be managed?
Catalyst: Larry Millstein (tentative); Memetic Engineer(s): John Bashinski
Intellectual Property: Reform
Do we have a solution? If so, how might we get it adopted?
Catalyst: Brad Templeton and Dan Gillmor (tentative); Memetic Engineer(s): Brian Schar
Read Ahead: See above.
Knowledge Augmentation: Transcending the Meat Brain
What computational abilities would you like to have built-in? If your brain was operating at speeds a million times faster than your body, what might be the outcome?
Catalyst: Doug Engelbart; Memetic Engineer(s): Ka-Ping Yee (tentative)
Liberty: Designing Rights
Virtually anything may be called a right. Which ones actually advance a vibrant, healthy and just society toward fulfillment?
Catalyst: Doug Casey (tentative); Memetic Engineer(s): Pierluigi Zappacosta (tentatives)
Life Extension: Let's not be the last generation to die.
Early life-extension techniques might create a wealth-based aging gap. Can progress continue without fostering envy and anger?
Catalyst: Marty Edelstein (tentative); Memetic Engineer(s): Peter Voss
As our understanding of the molecular basis of aging improves we can expect to see a flood of treatments aimed at restoring tissues to their youthful states and conditions. An effective anti-aging protocol might require a very large number of such drugs or procedures, perhaps as many as one for each tissue type. If so, given the cost of bringing drugs to market, it is possible, at least in the early years, that the top 1% of the population might start living considerably longer than the bottom 50%, or even the bottom 90%. Social tensions will arise, and strategies should be developed to deal with these. (http://world.std.com/~fhapgood/nsgdir/97-04-15)
This is for those serious about life extension. Let's look at all of our risk factors: physical health, mental health, financial limitations, living environment & accidents, motivation (incl. lack of happiness/passion for life), risky behavior/character traits, etc. How do we evaluate them? How do we minimize or balance them?
There are many practical steps we can take to optimize our lives and reduce our risks: From philosophical to investment knowledge, from diet & exercise to effective cryonics preparations, from psychological know-how to building a meaningful personal community, from goal-setting to choosing our work & living space. Let's explore.
We expect to see AI's as smart as we are, but can we build AE's (at least) as ethical as we are? Creating hyperintelligent superhuman sociopaths would be a blatantly stupid thing to do.
Memetic Engineer(s): Josh Hall
Isaac Asimov proposed Three Laws of Robotics which unequivocally put our welfare ahead of the robots' and made them our slaves. Hans Moravec relies on the strictures of the welfare state to tax them for our benefit after they have superseded us in every capability.
By the end of his epic saga, Asimov had realized that robots that were really intelligent would reinterpret the Laws, get around them, find loopholes, and so forth. Corporations hire the finest minds available to get around the tax laws. It is silly to think that hyperintelligent AI's would do any different.
Currently, moral philosophers (not to mention psychologists, neurophysiologists, cognitive scientists, and anthropologists) cannot agree on just what ethics is. We think we understand what language is, yet we're having a devil of a time programming it into our machines. How can we hope to do the same with morality, if we don't even know where to start?
There is a new theory of moral epistemology that may help. Can we figure out how to program it into our machines? Can we give them a sense of right and wrong in which they, in their superior wisdom, know that what we said was right really is better than what we said was wrong, not just us programming them to our advantage? Can we make them want to do the right thing?
MEMS: A Gateway to Nanotechnology
Can a top-down approach be bootstrapped with this fast-growing technology?
Memetic Engineer(s): Hank Lederer, David Keenan
We will evaluate in as much detail as we can the prospects that nanotechnology will directly induce much faster economic growth rates.
Catalyst: David Friedman (tentative); Memetic Engineer(s): Robin Hanson
Nano Machine Communication and Control
Systems or material consisting of billions or trillions of separately controllable robots will require new concepts in control. Analogies with ecosystems or markets might help.
Memetic Engineer(s): Bob Fleming and Cherie Kushner (tentatives)
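The market analogy can be sketched with a toy auction (all names, positions, and costs below are invented for illustration): each robot bids its cost to perform a task, and the cheapest bidder wins, so no central planner needs a global picture of the swarm.

```python
def allocate(tasks, robots, cost):
    """Greedy first-price auction: each task goes to the lowest
    bidder still available."""
    assignment = {}
    free = set(robots)
    for task in tasks:
        if not free:
            break
        winner = min(free, key=lambda r: cost(r, task))
        assignment[task] = winner
        free.remove(winner)
    return assignment

# Hypothetical 1-D world: bids are distances from robot to task.
robots = {"r1": 0.0, "r2": 5.0, "r3": 9.0}
tasks = {"t1": 1.0, "t2": 8.0}
cost = lambda r, t: abs(robots[r] - tasks[t])
plan = allocate(tasks, robots, cost)
```

Scaling this to trillions of machines would presumably mean running many such local auctions in parallel, which is where the ecosystem analogy comes in.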
Our bodies are based on molecular machines. Using molecular machines to maintain and fix them may be the most significant advance in medical history.
Catalyst: Robert Bradbury (tentative)
Nano Policy: Preventing Accidents and Abuse
The research and development guidelines adopted for biotechnology have served well. Nanotechnology development may need the same.
Catalyst: Neil Jacobstein
The idea of guidelines for the safe development of MNT (Molecular Nanotechnology) has been discussed within the Foresight/IMM community for over a decade. It is inevitable that any proposals made today will be further discussed and perhaps substantively changed; yet we had to begin somewhere. These guidelines were developed during and after a workshop on Molecular Nanotechnology (MNT) Research Policy Guidelines sponsored by the Foresight Institute and the Institute for Molecular Manufacturing (IMM). The workshop was conducted over the February 19-21, 1999, weekend in Monterey, California. Participants included: James Bennett, Greg Burch, K. Eric Drexler, Neil Jacobstein, Tanya Jones, Ralph Merkle, Mark Miller, Ed Niehaus, Pat Parker, Christine Peterson, Glenn Reynolds, and Philippe van Nedervelde.
The Foresight Guidelines ("the Guidelines") include assumptions, principles, and some specific recommendations intended to provide a basis for responsible development of molecular nanotechnology. The Guidelines were intended as a living document, subject to modification and revision. Early drafts have been reviewed and revised several times since the Monterey workshop, including during Foresight/IMM sponsored discussions led by Neil Jacobstein in May and November of 1999. They were also provided in the attachments to Ralph Merkle's June 1999 Congressional testimony on MNT, and referenced in Neil Jacobstein's presentation on Nanotechnology and Molecular Manufacturing: Opportunities and Risks at Stanford University's Colloquium for Doug Engelbart in January of 2000. The Workshop participants were still debating whether the Guidelines were sufficiently developed for widespread publication when Bill Joy's article "Why the Future Doesn't Need Us" was published in the April 2000 issue of Wired Magazine. This article raised public awareness of the potential dangers of self-replicating technologies, including nanotechnology. Since that time, the Guidelines have been reviewed critically by Robert Freitas, and revised by Ralph Merkle and Neil Jacobstein. The Guidelines will be revised once again following a May 2000 Foresight workshop, and then published for open review on the web. We encourage your ideas, suggestions, and participation in this segment.
Nanotechnology: Breakthroughs in the last 12 months
Where has nanotechnology research been going in the last year? Progress on all fronts continues, including a novel architecture for a limited form of replication using MEMS components from Zyvex.
Catalyst: Ralph Merkle
Control of Nanoweapons
In a nanotech war, perhaps nothing will appear to happen for a week. Then, everyone on the losing side will suddenly disappear.
Catalyst: Neil Jacobstein
Chemical, biological, and nuclear weapons exist, and are under some international regulation, monitoring, and treaty controls. These controls are known to be quite imperfect. Would we be better off without these controls, or do they provide some protection? If nanotechnology could be used to make hard-to-detect and powerful weapons of mass destruction, what controls, if any, might be effective in preventing their use by terrorists, or individuals bent on destruction? Could an outright ban on offensive nanoweapons have any hope of being meaningful, enforceable, or effective? Or would a ban put countries that honored it at greater risk? Are there viable alternatives to banning powerful nanoweapons that could fall into the hands of irresponsible groups or individuals? If so, what safeguards, monitoring, and control methods might actually work?
Catalyst: Glenn Reynolds (tentative)
Open Source: Progress in World Domination
...in which a ragtag band of heroes saves the planet by giving away more good stuff than the world's richest corporation sells.
Catalyst: Eric Raymond (tentative)
Pruning Legal Trees
No human being could read the legal code in a lifetime, much less keep up with what's passed and the court interpretations. Is a sane and simple legal system possible, one where ignorance is really no excuse?
Catalyst: Glenn Reynolds (tentative)
Parallel processing with quantum superposition: theoretically possible; how will it fare in practice? What are the implications of this technology?
Memetic Engineer(s): Alison Chaiken
Reform Tactics: How to Change the World
Work within the system, or around it? Nurse the sick, or do vaccine research?
Memetic Engineer(s): Denis Rice and Karen Breslau (tentatives)
Reputation: Quality not Quantity
Cooperation only evolves when individuals can recognize each other through repeated encounters in a variety of situations.
Catalyst: Jeff "Hemos" Bates (tentative)
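The claim about repeated encounters echoes Axelrod-style iterated prisoner's dilemma results. A minimal simulation (the payoff values are the standard textbook assumptions, not from the session text) shows how reciprocity pays once a player can remember a partner's past moves:

```python
# Row player's payoff for (my_move, their_move); C=cooperate, D=defect.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def tit_for_tat(their_history):
    # Cooperate first, then mirror the partner's last move --
    # possible only because we recognize and remember the partner.
    return their_history[-1] if their_history else "C"

def always_defect(their_history):
    return "D"

def play(strat_a, strat_b, rounds=100):
    score_a = score_b = 0
    moves_a, moves_b = [], []
    for _ in range(rounds):
        a = strat_a(moves_b)
        b = strat_b(moves_a)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        moves_a.append(a)
        moves_b.append(b)
    return score_a, score_b

mutual, _ = play(tit_for_tat, tit_for_tat)          # reciprocators thrive
exploited, exploiter = play(tit_for_tat, always_defect)
```

A population of reciprocators far outscores the defector's meager haul, but only because repeated, recognizable encounters let reputation accumulate.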
Rights and Responsibility
Are rights and responsibilities really two sides of the same coin? Are there ways to encourage personal adoption of ethical codes?
Memetic Engineer(s): Pierluigi Zappacosta and Gayle Pergamit (tentatives)
How long will it be before machines achieve a general-purpose intelligence equal to or greater than that of the human brain? How will molecular nanotechnology hasten the development of machine intelligence?
Catalyst: Marvin Minsky (tentative); Memetic Engineer(s): Nick Bostrom
Effective artificial intelligence requires three things: hardware, "software", and input-output organs. The last is trivial; we already have video cameras, microphones, robot arms, etc. The hardware requirements can be estimated in various ways. Hans Moravec suggests a figure of 10^14 ops, based on the known processing capacity of the retina; others have put the figure at 10^17 ops or greater (depending on assumed levels of optimization). Nanotechnology will make possible processing speeds well in excess of those estimates, suggesting that the hardware problem can be solved. This leaves us with the software problem. Again nanotechnology is relevant, for it will enable powerful new ways of scanning or measuring the activity of biological human brains. How hard will it be to use this information to implement on a computer algorithms and computational architectures similar to those used by the human cortex? And how difficult will it be to radically improve on such artificial intelligence once we have achieved human-equivalence? Will self-enhancing AI result in a runaway process of intelligence enhancements, quickly leading to superintelligence vastly beyond that of human brains, or will the process be one of diminishing returns and slow incremental progress?
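For a rough feel of the hardware gap, one can compute the time for conventional scaling to reach those estimates, assuming a Moore's-law doubling time of 18 months from a baseline of roughly 10^9 ops for a desktop machine circa 2000. Both figures are illustrative assumptions, not from the session text:

```python
import math

def years_until(target_ops, base_ops=1e9, doubling_years=1.5):
    """Years of Moore's-law doublings needed to go from base_ops
    to target_ops (assumed baseline and doubling time)."""
    return math.log2(target_ops / base_ops) * doubling_years

low = years_until(1e14)    # Moravec's retina-based estimate
high = years_until(1e17)   # the more conservative estimate
```

Under these assumptions the two estimates fall roughly 25 and 40 years out, which is why the argument leans on nanotechnology rather than on conventional scaling alone.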
Safe Nanotech Systems through "Technical Fixes"
Replicating assemblers using the broadcast architecture and other methods to make them inherently safe can effectively eliminate the accidental "gray goo" scenarios. But nanotechnology-based weapons systems pose greater uncertainties and will require active countermeasures to control effectively.
Catalyst: Ralph Merkle
Most people still think "self replication" means "biological self replication", project their preconceptions and biases onto "assemblers", and get totally confused without realizing it. We'll talk about biological self replication, assemblers, the broadcast architecture, and other ways of making inherently safe replicative systems for commercial purposes. Then we'll talk about some possible weapons systems and countermeasures. The countermeasures require advance preparation and make heavy use of nanotechnology, so the most prudent course of action seems to be continued research accompanied by some focused (and perhaps not entirely public) analysis of measures, countermeasures, and the like. While still likely a few decades away, at some point we'll need to have the countermeasures ready and waiting for deployment if the need should arise.
Is there such a thing as a safe system, and how would we define one? What are some of the properties that such systems will display?
Can we produce working proto-assemblers by shaking parts in a bag? What about computers with no moving parts? Can we assist another approach with self-assembled subassemblies? Does it work only with machines, or can self-assembly function in other areas as well?
Catalyst: Marty Edelstein (tentative)
After reviewing some history and theory of economic growth we will discuss the prospects for a dramatic acceleration in economic growth rates in the next half century.
Catalyst: Robin Hanson
Sizing the State
Is there a reasonable size to the state? A maximum, a minimum? What are the elements of a smooth running state, and how does size matter?
Catalyst: David Friedman (tentative)
Many kinds of contractual clauses can be embedded in the hardware and software we deal with, in such a way as to make breach of contract expensive for the breacher.
Catalyst: Nick Szabo
Survivability: Protecting Critical Systems
We're becoming reliant on distributed systems which cannot be protected as a whole by classical security measures. How can we ensure that such systems continue to operate despite some inevitable intrusion and compromise?
Catalyst: Ralph Merkle (tentative)
Surviving the Spike
If a technological singularity can happen, what are the likely events and elements leading to it? What might be the physical realities of the transformation, and where would we like to go with our evolution?
Catalyst: Damien Broderick (tentative) and John Smart; Memetic Engineer(s): Jess DeMarco
The average amount of sleep a person now gets is between 5 and 7 hours, where only a few years ago it was 8 or more. The pace of living is accelerating, and the amount and quality of information a person must master to be a so-called "expert" is beyond most people's ability to access. Collaboration and human-machine interfacing are pushing the pace faster and faster. Once machine intelligence derives the ability to "become creative" and interact with itself, the pace will take a giant leap forward. Where do the terms "meaningful work" and "making a living" fit into the picture? How are most people going to be able to cope with the ever-accelerating change?
Group activity: Develop a survival kit for accelerating change. The Singularity survival kit?
Alternate activity: Develop a countdown calendar to the singularity with events and elements that are likely to lead to it, and strategies to cope.
Technanogy.net Nanotech Incubator
Technanogy is an investment company funding businesses devoted to the discovery of nanotechnology breakthroughs.
Catalyst: Larry Welch
Trust and Security
Shouldn't anyone debugging the code of a human-level AI take the Hippocratic Oath? Do we (and should we) trust the people who build the systems we use?
Catalyst: Bill Joy and Karen Breslau (tentatives); Memetic Engineer(s): Roger Goun
Uploading: If you can't beat 'em, join 'em.
One way to avoid being left in the robots' dust is to improve ourselves. First step: move your software onto a faster processor.
Catalyst: Marvin Minsky (tentative); Memetic Engineer(s): Jess DeMarco, Peter Voss
On the assumption that they can "beat us", is "joining them" (as equals, and as individuals) even an option? The evidence strongly suggests that fully-fledged AI will be much easier to achieve than uploading, and will thus happen first. In fact, the complexity of workable uploading will probably require super-human intelligence.
Even if we try the gradualist approach of using the latest AI technology to enhance our own cognition, I think that we will still find that AI's not shackled to a human brain will soon outperform us. The reason this relationship is not readily apparent with today's limited AI applications is that crucial aspects of intelligence (such as self-directed learning and meta-cognition) have not yet been implemented.
If "joining them" is not a likely option, then the discussion of "beating them" (or discovering that they don't need to be "beaten", that they are "benign" or obedient) becomes ever more important.
Catalyst: Steve Jurvetson and Ken Lang (tentatives)
With heads-up displays, unobtrusive input devices, wireless connectivity and more, the wearable computer can act as an intelligent assistant, augment reality, and generally make you more effective.
Memetic Engineer(s): Katrina Barillova and Alex Lightman