“Ethics for Machines” paper: Excellent
from the great-stuff dept.
Strongly recommended by Foresight chairman Eric Drexler is this paper by Senior Associate Josh Hall. Josh writes, "The final version of my ethics for machines paper is now available. Thanks to all those at the 'Confronting Singularity' Gathering who read the draft and discussed the ideas with me." Do you agree with Eric that this work is important and should be expanded into a book?



December 20th, 2001 at 9:05 PM
legalism and humanism have to be challenged
I'd favor developing this material into a book, if only because it would be likely to consider the important challenges to its legalist and humanist point of view. The most significant breakthroughs in ethics since the work that JoSH references have been: 1. a shift from patriarchal moralism ("should") towards dispute resolution in an environment of mutually assured destruction (a "feminist" ethic of de-escalation, some say), and 2. a shift towards the human "ape body" as the absolute reference point not only for ethics but for symbolic reasoning itself (see the work of Lakoff, Nunez, Foucault, Suchman, and the Greens).
If JoSH is prepared to deal seriously with these, and to consider swapping out "human being" in the Asimov First Law with something more biospheric or ape-centric (to be child-centric is to combine the two, some would say), the material should be a book. But if it's going to be a paean to the American concepts of legalism and humanism as they presently stand, arguing that America is a "moral leader" and therefore must be some kind of "technology leader", I'd say the book is not only noise, but the very worst kind of noise.
I don't expect JoSH or Chris Peterson to change their perspective, which I first noticed clashed with mine ten years ago. What I do expect is that if they publish a book, they make an effort to actually investigate and represent points of view outside their pro-tech, pro-human, pro-legal American pseudo-capitalist orthodoxy. That view has so drastically retarded debate on these issues that it has rendered nanodot.org and foresight.org basically useless as debate fora.
Truly "Confronting Singularity" requires more than a bunch of easily-debunked tech weenies like Robert Freitas or Esther Dyson in attendance. It requires a genuine ethical perspective on bodies, what is meaningful about them, and how they relate to our symbol systems, including "ethics".
I find more meaningful debate on this at http://www.csmonitor.com/monitortalk or http://www.greenpeace.org or even http://www.kurzweilai.net or http://globalgreens.org, than I find here.
Maybe the book publishing process would force JoSH and Chris to confront their own seriously flawed premises, if only for fear of brutal reviews from those with a body/Green point of view.
January 22nd, 2002 at 7:15 PM
"Infantile Disrespect"
The following appeared in response to a repost of some of Josh's material at GlobalGreens.org. Quoted from the anonymous author without comment:
"The wise that you so diligently seek already exist; the "morally superior" are a figment of narrow-minded fantasy.
Is this a personal attack? Yes and no. Viewed as an individual post it can be seen so; as a response to another post, it is merely self-defense.
You seem to have fallen under the spells of both democracy and higher education.
The wise that you so diligently seek already exist and have for over 60 million years. They are commonly called Elders. Why don't we hear more of them? They are often frustrated at being outshouted by the false promises of democratic process.
Does an 18-year-old have anywhere near the experience of a 50-year-old from which to speak? Of course not.
Does higher education offer the broad spectrum of experiences that living closer to the wild and to the street does? Does it offer those experiences that it often takes to survive? Of course not.
I suggest that you review your priorities, both your personal and political priorities.
I suggest that you consider, as simply put as I can, that the false promises of eternal nursing that democratic process offers are just that: false promises. I suggest that you consider getting weaned.
Wisdom is not experience nor is it education. Respect of Elders is a deep and ancient part of us. It is natural."
March 2nd, 2002 at 8:41 PM
'ethical architecture' for a Biosecurity Protocol
Biosecurity Protocol discussions at Greenpeace include 'ethical architecture' as one of the elements of a protocol. Perhaps Josh should comment on this. Other elements of that protocol are under discussion.
May 27th, 2002 at 7:57 AM
Machine Ethics paradoxes?
Looking at the "Machine Ethics" article by JoSH (John Storrs Hall), I am surprised that he ends the article with a visionary statement about advanced machine minds being morally superior to human ones! In particular, in talking about such things as generalizations of the biblical Golden Rule ("do unto others" etc.), he seems to be indicating not only superiority for robots in some way, but also some sort of ongoing progress or advancement in ethical systems as such.

One of the things that makes this difficult on the face of it is that JoSH's own theory of morals is essentially a humanist, relativist type of theory, in which moral values are strictly biological and/or cultural adaptations. So the effort being made here is to extract a humanistically defined direction of moral progress from a complicated social-science situation of various players in society interacting with one another. There is the "generalized Golden Rule" notion, for example, but is there any real adaptive value, or even any unambiguous value, to the sort of Golden Rule extension that JoSH is talking about? Considering, as a rule, "treat your inferiors as you would have your superiors treat you", just who would define "superior" and "inferior", and how would you make comparisons between completely different situations anyway?

Granted, JoSH does seem to have a sophisticated argument, grounded in political philosophy, for moral advancement in general (whether that moral advancement happens strictly inside AI machines or not). The argument is that the legalistic or bureaucratic machinery of democratic governments has proven to be generally ethically better than reliance on the ruling intuition of a single dictator or ruling clique. By extension, if machine-like legalisms can be better than a human's judgement, then perhaps an actual thinking machine could be ethically superior to a human as well? For myself, while I find this "extension by analogy" interesting, my sense of skepticism gets roused by it, too.
A significant aspect of law is that common-sense interpretation and human goal-setting get factored into the system in many places, so it's really hard for a skeptic to conclude much about the superiority of machine-like rules! Given my skepticism, I naturally found it interesting to read Peter Voss' sharply dissenting response to JoSH's article, with Voss' emphasis on individualism and scientific self-interest, or "looking out for number one".

What I would note is that pure self-interest doesn't always explain why people feel the sense of mission or of "rightness" that they often do feel in pursuing a particular career, say, or in fulfilling particular duties in life. Projecting this into the future, I think it would be fair to say that it's generally a good idea if advanced entities tend to think of certain virtues and/or duties as essential to worthwhile living; in effect, why live at all, if one can't make oneself useful? Within that kind of self-concept, "looking out for number one" is certainly no contradiction! Remember, even in terms of the old Asimov robot stories, the "look out for number one" thing *was* Asimov's Third Law!

Having mentioned Asimov and his Three Laws, I find it almost paradoxical how the Asimovian ideas managed to do both so well *and* so poorly in defining what the appropriate rules for sentient robots might be. While I suppose we might complain that Asimov's rules are difficult to interpret consistently, I find this a mere quibble compared to the fact that those Laws are seemingly not at all appropriate for certain kinds of human occupations; think of military service, for instance. Also, the three "Laws" probably aren't appropriate for medical "triage" situations either, i.e., deciding who is medically worth spending time to save, on the one hand, and who is so close to death that one must abandon them, on the other.
Following a strict law not to kill anyone, or some law to save *everyone*, scarcely seems helpful in such situations, yet Asimov's robots were supposedly bound by that sort of injunction.

With regard to morally superior AIs: what with all the bemusingly complicated relationships between humans, who can tell how real AIs are going to fit in, let alone whether their morals can be made superior, for goodness' sake? Do we really know that an advanced AI could invent something smarter than itself, or that such AIs could even work as effectively in groups as humans are able to? There are biological and emotional ties between humans that are going to be hard, maybe even near impossible, to simulate, while the long-term emergence of a true AI and/or transhuman society is, well, long term. Again, I'm skeptical about any vision of future minds that touts their inherent moral superiority.

Now, I don't wish to have any lack of vision in my own outlook, so perhaps it would help if I bring to mind a quite impressively visionary SF novel that I read some time ago, namely Greg Egan's _Diaspora_. The major viewpoint character in that novel is an AI called Yatima, and Yatima, in turn, is in some ways quite self-motivated and human-like (unlike most of Asimov's robots, who would gladly trash themselves at any time if their human owners told them to). What occurs to me here is that maybe what JoSH is really saying is that a wise or humanistic society would treat any future "Yatimas" as much like human citizens as possible. In other words, don't delete poor Yatima just because you need some extra disk space, or something! If that is what JoSH is trying to say, then maybe he should try to say it more clearly somehow, without making the future sound like a moralist utopia.
In _Diaspora_, for instance, different peoples clearly have varied ideas about "right" and "wrong", but it sure seems as though the scientifically enlightened wouldn't assume any clear path to increasing their moral superiority! These future visions can get out of hand, it seems, so maybe it would actually be better to build and work with some sentient AIs *before* we reach for a firm vision of their ethical nature, or of whether they need to be treated like humans, or whatever.