
Critique of Josh Hall’s ‘Ethics for Machines’

from the major-disagreement dept.
Senior Associate Peter Voss writes "Josh Hall's Ethics for Machines suffers many of the problems endemic to moral debate: vague and shifting definitions, confusion over 'duty', rejecting the possibility of a rationally derived morality, and confusing description and prescription. Specifically, it fails to clearly define, or justify, its implied meta-ethical goal of 'group dynamism'. Other core problems are: its mischaracterization of 'ethical instinct', its condemnation of self-interest and common sense, and its failure to recognize the importance of high-level intelligence and consciousness to morality. Ethics for Transhumans addresses these points, and sketches an alternative path."

3 Responses to “Critique of Josh Hall’s ‘Ethics for Machines’”

  1. Adam Burke Says:

    A bit harsh

    Caveat: I have not read the response in full; of the Appendix, I have read only the summary. Although I realise that the major charge laid against "Ethics for Machines" is lack of rigour, I think a few comments are worthwhile.

    I think the critique is a bit harsh. "Ethics for Machines" does rely on intuitive ethical principles, but as I recall it doesn't claim they are the pinnacle of ethical achievement. Rather, it points out an evolutionary reason for the development of ethics, and suggests that using the same mechanism to develop ethics for machines would be dangerously slow. It also carefully points out that, evolutionarily, if self-interest could be separated from ethics, that would provide a competitive advantage. I think it's implicit (or maybe explicit) in the essay, and in currently common "intuitive ethical systems", that if such a separation were effected it would be a bad thing, ethics-wise.

    As to improving ethical principles by reason and scientific principles, the original essay seemed to support such a position to me, with the author discussing the creation of super-ethical machines that could make their ethical discoveries known to the other conscious beings about the place, such as humans and post-humans. I also think the description of ethics as a science is too strong: ethics is still firmly a philosophy, and though systematic arguments can be applied, empirical observation is not really possible. The discussion must happen at the level of argument, with arguments being systematically tested out. Good counter-examples to my assertion, showing successful ethical experiments, would be appreciated.

    The technique of assuming the worst case allowable by a particular wording is worthwhile, and I think it's been used to great effect in the critique. I just think it oversteps the mark by occasionally claiming that the intent of the essay was always the worst case. This may be one reason Mr Voss finds the article a little incoherent.

  2. PeterVoss Says:

    Re:A bit harsh

    My purpose was to alert futurists to the dangers of certain common approaches and errors in ethics – particularly, using rationality and science to describe our behavior, but not to develop prescriptive ethics. I hope that Adam Burke will get a chance to read my whole article. More generally, I would be thrilled to see this important subject of transhuman ethics receive increased rational (scientific) attention.

  3. PatGratton Says:

    Evolutionary Analysis

    Peter makes a number of good points about Josh's paper, particularly those in regard to clarity and apparent internal contradictions. I would like to see these addressed.

    I have additional comments/criticisms which are fairly orthogonal to Peter's points. These comments can be found here, with the major points being:

    • This isn't an abstract debate! AI is incredibly dangerous – papers like this ought to outline the severity of the danger.
    • An ethical approach to this topic should include personal as well as social ethics.
    • There is no progress in ethics! (What would you judge it by?)
    • Our current understanding of the effect of evolution on behavior clearly indicates the opposite of what Josh suggests – specifically, that AIs will very likely evolve into ruthlessly selfish intelligences.
