Presidential Commission Will Recommend Backing Open Source Development Path
A major article in the New York Times ("Code Name: Mainstream - Can 'Open Source' Bridge the Software Gap?" by Steve Lohr, 28 August 2000) reports that a Presidential commission will recommend backing the Open Source software development model as an alternative path for addressing pressing national needs in the development of new information technologies.
In a report to President Clinton last year, a group of leading computer scientists warned that the nation faced a troubling "software gap."
The group, known as the President's Information Technology Advisory Committee and made up of corporate executives and university researchers, said that programmers simply could not keep pace with exploding demand for high-quality software -- the computer code needed for everything from Internet commerce to nuclear weapons design. To bridge the gap, the group said, the nation must not only train more skilled programmers but also explore fresh, even radical, approaches to developing and maintaining software.
According to the Times article, the group will recommend that the federal government back "open source software as an alternate path for software development." The recommendation appears in a draft copy of the report, which will be sent to the White House and published in a matter of weeks.
"I am increasingly coming to the conclusion that the Internet and open-source initiatives are the free marketplace way of dealing with the extremely complex software issues we are facing," said Irving Wladawsky-Berger, an I.B.M. executive and a member of the presidential advisory committee.
COMMENTARY Approaching 2001: The Perils of Prediction
by J. Storrs Hall, Research Associate
Institute for Molecular Manufacturing
"When an elderly but distinguished scientist declares that something is possible, he is very probably right. When he declares that something is impossible, he is very probably wrong."
Arthur C. Clarke, author of 2001: A Space Odyssey
Predictions are tough to make, as baseball's quipster Yogi Berra put it, "especially about the future." As the Second Millennium draws to a close, however, it is conventional to make at least some attempt to see into the Third.
Earlier this year I was invited to participate in a Foundation For the Future (FFF) workshop on nanotechnology and its implications for the coming millennium. Once I began seriously thinking about it, I found I was stonkered. Any informed predictions I could make based on physical capabilities of the technology, which we can have a reasonable notion of now, take us to a radically different world in 2100, with 900 years to go.
The best I could do was to take a look at the previous millennium and try to draw some parallels.
In the 1890's Octave Chanute published "Progress in Flying Machines." This inspired and helped the Wright brothers, who had a brisk correspondence with Chanute, to make flying machines a reality about a decade later. A century later aviation is one of the major factors determining the world's economic and political structure.
In the 1990's Eric Drexler published Nanosystems. Many more people are working to turn it into reality than were working on flying machines a century ago.
Even though I'm somewhat more of a gradualist in terms of prediction than some of the Singularity fans in the community, from the standpoint of 3000 the difference really won't matter much. By the end of the coming century, we should have a nanotechnology as mature as our aviation is now. Then what?
A quarter millennium ago Benjamin Franklin did experiments with sparks. What, in 2250, will we be doing that bears the same relationship to the RHIC ("melt the vacuum") experiments at Brookhaven National Laboratory as they, or microprocessors, bear to Franklin's kites and Leyden jars? One reasonable guess is that we will be manipulating reality at a level we don't even have descriptions of now, just as the atomic nature of matter was unknown in Franklin's day.
A half millennium ago, there emerged a meme complex that reprogrammed some of us to a new way of thought: reductionist, rational, empirical; today we call it science. It was abetted in its development and spread by the invention of the movable-type printing press. Half a millennium hence, rather than merely reprogramming our minds, we'll very likely have a solid understanding of how they work at every level from neurophysiology to the evolution of cultures, including all of psychology, politics, and economics in between as a unified body of knowledge.
A technology based on the above should be able to create mentalities as complex and competent as current-day nations in seconds, simply to solve minor subproblems of whatever project one happened to be working on at the time. Compare (say) the engineering calculations you can do now by firing up the appropriate CAD and simulation software, with what could be done in 1500. Or note that one person, in a variant of a tank designed for the purpose, could defeat any army fielded in 1500.
It should be clear that just about any technological prediction we can make based on an extrapolation of current capabilities will fall far short of the actuality, barring some catastrophic collapse of civilization.
On the other hand, there's no software or other technology to help you write literature any better than Shakespeare did. A bit faster, perhaps, with "desktop publishing", and looking spiffier on the page; but the essential core of writing hasn't been touched.
What's the prospect of getting minds not just faster, but wiser, more insightful, than our own? Progress in areas surrounding the understanding of the human mind has been moving at least as fast as that in physical science; it seems quite unreasonable to expect that it will either grind to a halt or miss its target entirely.
Thus we should expect a world in which "our" mental capabilities are far beyond our current ones. Singularitarians propose that this makes such a world incomprehensible to us; I disagree to some extent. In a sense our current world is too complex to understand: there is too much knowledge, too many people, for one human to comprehend it all. Yet we break it into parts, and layers of abstraction, and do as well as we can. I claim that a future world would have more parts and more details we'd miss out on, but we could understand it as well as we could understand anything.
I think synthetic intelligences will supersede organizations, from clubs to governments. Humans will be augmented with memory and processing power. Communication will take on forms and bandwidth far exceeding speech. Relationships (of all kinds) between people will get more complex. But many of the essential characteristics of people are an evolved response to the situation of being one of a group of communicating intelligences in an environment each one can affect but none controls completely. As long as that's a valid description of our "mind children", some of the human character should remain.
In any case, what we do now will strongly affect the character of the intelligences that inhabit the universe in 3000. Some of us may even be there in some form. If we want to have a beneficial effect on that world, another saying of Yogi's may be apropos:
A Foresight Special Report: Bioethics and Nanotechnology
by John Papiewski
In July, The Center for Bioethics and Human Dignity sponsored a conference, titled "Bioethics in the New Millennium", which was held in Chicago and covered existing and potential ethical problems in medicine. The audience was largely made up of physicians, nurses, ethicists, and clergy. Since the CBHD is at Trinity International University, a theological university, there was a pervasive theme of Christian evangelism and bible-oriented thought. Dr. C. Christopher Hook, M.D. delivered a talk titled "Cybernetics and Nanotechnology."
Dr. C. Christopher Hook, MD
Dr. Hook holds a number of positions at the Mayo Clinic, Rochester and is Assistant Professor of Medicine in the Mayo Medical School. He is Director of Ethics Education for the Mayo Graduate School of Medicine, and helped create and chairs a number of ethics advisory bodies there. He is a Fellow and member of the international Advisory Board of the Center for Bioethics and Human Dignity.
Going into the seminar I had some concerns about how accurately and in what light nanotechnology would be presented. On that count Dr. Hook did well, describing nanotechnology in terms of molecular design and manufacturing, and mentioning Eric Drexler, Richard Feynman, Robert Freitas and the Foresight Institute and their various significant publications and talks. He rounded out this background with recent developments such as efforts in DNA and RNA computing and work done with fullerenes. The cybernetics side of his seminar covered recent work with wearable computers, the successful implantation of artificial retinas, and growing nerve cells onto electronic interfaces. He pointed out that cybernetics is already commonplace as eyeglasses, hearing aids, and pacemakers. His slides ranged from factual (the cover of Nanosystems), to hopeful (renditions of cell repair robots) to fearful (a Star Trek "Borg" character and an image of a person's head with bar code coming out the back).
Dr. Hook spoke of possible medical consequences of nanotechnology such as cell repair and reversing the aging process. He mentioned the work of Robert Freitas on developing technology to replace blood components. He also briefly touched on out-of-control nanotechnology in the "gray goo" scenario.
In the ethical focus of the talk Dr. Hook discussed Bill Joy's article in Wired Magazine ("Why the Future Doesn't Need Us," April 2000) and Foresight's recent efforts to create guidelines for the safe development of nanotechnological systems. While he wasn't caught up in Bill Joy's anxiety, Hook did call the Foresight guidelines "naive," arguing that they depend too much on an honor system. He expressed grave concerns about a future world of people with electronic implants, inextricably tied to the Internet, saying that would be destructive to human relationships.
Regarding medical applications, he drew a distinct ethical line between correcting injuries and diseases, and enhancing or altering (augmenting) an already healthy/normal person. Using biblical and Christian references to back up this view, he made comparisons between Wired magazine items on transhumanism and Nietzsche's writings on the Superman, again concluding that efforts to fundamentally change human nature would be morally destructive.
At the end, Hook's tone was reservedly optimistic. He agreed with Foresight's view that calling for a stop to nanotech development would be shortsighted. He was hopeful about potential medical uses and the elimination of unnecessary suffering. But given the powerful "temptations" nanotechnology would eventually present, he asserted that Christian beliefs were crucially needed to prevent social degeneration.
The conference organizers offered an opportunity for journalists to talk one-on-one with the speakers. Dr. Hook reiterated his view that Foresight's work on safety standards, while well thought-out and done with the best of intentions, was not enough. He stated that a proactive approach using active measures of different kinds is needed. He was concerned that a stealthy nanotech virus could spread undetected, striking suddenly and with devastating results. I asked him about the possibilities of developing new institutions to deal with hostile nanotech: humanity has been living successfully with nuclear, biological, and chemical weapons for decades largely due to institutional responses to them. He didn't hold out much hope for institutional responses to nanotech. He believed the ubiquity of nanotech and its self-reproducing power would make it difficult or impossible to manage.
I attended two other seminars that I thought Foresight readers would be interested in:
In "Artificial Life and Intelligence," Mitchell Wilkes, Ph.D., of Vanderbilt University, covered the current state of robotics research, and asked: if these things appear to be alive, are they? And if they are alive, do we have any ethical responsibilities toward them? Wilkes maintained that while machines may have degrees of intelligence, they do not have minds or consciousness. He asserted that AIs will always be brittle things, limited to narrow areas of competence, unable to develop intuitive or non-logical responses to problems, or to deal with poorly defined problems.
"Personhood and Artificial Intelligence," by Robert Garcia, drew similar conclusions. He asserted that the cognitive/functional definitions of consciousness underlying AI research create an ethical problem: if an AI that passes a Turing test is thereby declared conscious and a person, then is a human who flunks the test not a person? Garcia held that the Turing test isn't a valid test of consciousness: computational states are blind to meaning and can't produce the semantic understanding or intention necessary for consciousness.
While debates like this about artificial intelligence are nothing new and will continue into the foreseeable future, it's interesting to see that discussion about nanotechnology has moved from technical communities into the general public and has caught the attention of ethicists and moral thinkers.
University of Washington Establishes Ph.D. Track in Nanotechnology
The new program establishes a Ph.D. nanotechnology track tied closely to other science disciplines, and will involve nine departments. Students will earn concurrent degrees in nanotechnology and a discipline of science, engineering or medicine. The effort is being funded by a $2.7 million grant from the National Science Foundation's Integrative Graduate Education Research Training program.
In a UW press release, Dr. Viola Vogel, director of UWCNT, said the need for such programs is critical. "Nanotechnology will be to the 21st century what microelectronics was to the past century," Vogel said. "This field has implications for a wide range of disciplines . . . and it has the potential to totally change almost every aspect of our lives. There will be a great demand for people with proficiency in this field."
The new program will build on the foundation laid by the UWCNT. Founded in 1997 to encourage interdisciplinary educational pursuits, the center maintains close working relationships among otherwise separate departments using nanotechnology as a unifying theme. These include biochemistry, bioengineering, chemistry, chemical and electrical engineering, materials science and engineering, molecular biotechnology, physics, and physiology and biophysics.
As a result, the faculty and academic infrastructure to support the new doctorate already exist, Vogel said. The bulk of the NSF money, awarded in $500,000 increments over five years, will pay for fellowships for Ph.D. candidates to pursue in-depth nanotechnology research. Part of the grant will fund symposia and conferences to encourage collaboration. An additional one-time $200,000 award will fund the purchase of additional equipment for the university's NanoTech User Facility, where graduate and undergraduate students receive hands-on instruction in operating state-of-the-art equipment.
The end goal, according to Vogel, is to broaden students' training and prepare them to work in a field that is increasingly multi-disciplinary, where sciences overlap, often exploring uncharted ground. "The race just gets faster and faster," Vogel said. "If they are interdisciplinary they can communicate better, which means they can recognize the importance of other experiments and incorporate them into their own work. That saves time and makes for good science."
In a related item, Flinders University, located in Adelaide, South Australia (about 750 km northwest of Melbourne), is offering a Bachelor of Science program in Nanotechnology. The BSc degree is an honours specialization of a general science degree program, adding nanotechnology-related courses to a curriculum that includes math, physics, chemistry and biology. A description of the program can be found on the Flinders web site, along with an overview of the coursework required for the program. There are apparently not yet any similar post-graduate programs in place.