Singularity Institute on “3 Laws”
Tyler Emerson writes "In anticipation of 20th Century Fox's July 16th release of I, Robot, the Singularity Institute announces '3 Laws Unsafe'. '3 Laws Unsafe' explores the problems presented by Isaac Asimov's Three Laws of Robotics, the principles intended to ensure that robots help, but never harm, humans. The Three Laws are widely known and are often taken seriously as reasonable solutions for guiding future AI. But are they truly reasonable? '3 Laws Unsafe' addresses this question."



July 16th, 2004 at 4:55 PM
The three laws won't work
They won't work, simply because we aren't capable of creating perfect code. Bugs in the code will lead to malfunctions in the AI (even if the AI produces its own code), ultimately leading to robots that have free will.
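To make that worry concrete, here is a toy sketch (hypothetical names and logic, not any real robot system) of how a single dropped 'not' can silently invert a safety rule:

# Hypothetical sketch only: illustrative names, no real robot API.
# A single dropped 'not' inverts the intended First Law check.

def action_harms_human(action) -> bool:
    """Stand-in harm model; assume this part is itself correct."""
    return action.get("expected_harm", 0) > 0

def first_law_permits(action) -> bool:
    # Intended: forbid any action that harms a human.
    # Bug: the 'not' was dropped, so harmful actions are permitted
    # and harmless ones are refused.
    return action_harms_human(action)  # should be: not action_harms_human(action)

print(first_law_permits({"name": "fetch coffee", "expected_harm": 0}))  # False: safe task refused
print(first_law_permits({"name": "push human", "expected_harm": 5}))    # True: harmful task allowed

A bug this small passes a syntax check and may pass casual testing, which is the point: the laws are only as reliable as their least-tested line of implementation.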
But creation is what it's all about – we will, very soon, create an entirely new lifeform. We shall be as gods.
July 16th, 2004 at 6:34 PM
More like "3 laws.. which aren't laws"
Laws only exist, insofar as I care to envision them existing, as immutable physical properties of the universe. If a law can be broken, it is not a law. It is a rule, something created by cognizant thought and certainly not immutable. We may call what society uses as rules 'laws', but they aren't… and I don't want to fall into a semantic abyss; I just find it important to point this out.
That being said, I find it more appropriate that we call the three 'laws' something else, perhaps 'guidelines' or 'goals'.
And regarding these three guidelines, there is absolutely no reason to believe we can't create machines or devices which operate according to conditions we have defined. But herein lies the problem.
We cannot define, accurately, what 'human' really is. We, as a species, as a physical object, as anything you care to describe, constantly and unstoppably change. Even we, the subjects of the change, who are presumably in possession of 'free will' or sentience, cannot fully control or direct that change.
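A minimal sketch of that definitional gap, under invented criteria (the Entity fields and the is_human() rule below are hypothetical, purely for illustration): any machine-checkable 'law' about humans must bottom out in a predicate like is_human(), and any concrete version of that predicate draws an arbitrary line that changing humans will eventually cross.

# Hypothetical sketch: every concrete is_human() rule draws a line
# that real, changing cases will eventually cross. Fields are invented.
from dataclasses import dataclass

@dataclass
class Entity:
    has_dna: bool
    prosthetic_fraction: float  # 0.0 = fully biological, 1.0 = fully artificial
    uploaded: bool              # mind running on a non-biological substrate

def is_human(e: Entity) -> bool:
    return e.has_dna and e.prosthetic_fraction < 0.5 and not e.uploaded

print(is_human(Entity(True, 0.4, False)))  # True
print(is_human(Entity(True, 0.6, False)))  # False: same person after more implants
print(is_human(Entity(True, 0.0, True)))   # False: an uploaded mind is excluded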
So, from at least one perspective (mine), I don't see that we can perfectly set 'laws' for our creations when, in fact, we can't even define ourselves adequately. This is a fundamental problem, one which we can (and will, I suspect) mostly surmount, though it will nevertheless remain a constant possibility throughout the remaining future of our kind. The only sure protection from that which we create is the ability to destroy, with extreme prejudice, anything that attempts to destroy us. This could eventually just mean our defenses are excellent and unbeatable, or that our offense is ultimately what we fall back on as defense.
In any case, anyone who thinks three simple rules such as these are enough is missing the point. That we even need to state the glaring inadequacy of such a system (as this article does) amounts to preaching to the choir, and will get nobody anywhere. It's obvious.
July 17th, 2004 at 8:00 AM
A society of band-aids
We live in a society of band-aids. A society where everything is "fixed" by applying a band-aid to a symptom of a problem instead of the problem itself, because most of the time we can't even figure out what the problem is. Really, though, take a look at how pathetic our culture is: once an intelligence has the ability to make decisions for itself, we will simply not have the ability to control it except through force. How do we control our own populations? FORCE. There is no enlightenment here. Now ask yourself: "Could we control or even coexist with a more intelligent and more powerful species than our own?" Because that is what it would amount to, you know. All these dreamy visions of a utopian future, with robots doing our work or helping us out and being under our control, are just that: DREAMS. I'm afraid that reality will be a much different thing.
July 17th, 2004 at 11:08 AM
Re:A society of band-aids
"We live in a society of band-aids. A society where everything is "fixed" by applying a band-aid to a symptom of a problem instead the problem itself, because most of the time we can't even figure out what the problem is."
In truth, with regard to future occurrences which are unpredictable, there is no other way to live but with a 'band-aid' approach. I stress that there are certain events which cannot be predicted, at least not without omniscient abilities. What is more important is that we, after understanding the wounds which have been inflicted, learn to prevent them from happening again, and to avoid a retrograde, reflexive response. Vaccination against disease is a good example of this.
You go on to say that for this reason 'our culture' is 'pathetic'. I take it you mean the global culture. While I do agree that there are portions of humanity which should be considered pathetic, I find it reprehensible to label our entire existence in such a way. We do our best, and simultaneously our worst, because that truly is all we can do, as a people as a whole.
"Now ask yourself "Could we control or even coexist with a more intelligent and more powerful species than our own?" Because that is what it would amount to you know. all these dreamy visions of a utopian future with robots doing our work or helping us out and being under our control are just that… DREAMS. I'm afraid that reality will be a much different thing."
Given that we create true intelligence, I find it implausible that we could ethically (or practically) place it 'under our control'. That is, simply, slavery. However, I see no reason to think it wouldn't be possible to create assistants and helpers which weren't truly intelligent, but were nevertheless capable of doing for us the things most of us find essential.
And yes, reality will be a much different thing. And nobody yet knows what it will be.
July 17th, 2004 at 1:20 PM
Are you creating life or slaves?
There is a question every maker of life must ask himself: does he want to create an independent lifeform in his own image, the kind that would be able to reason and create, or a mindless slave that would simply do his bidding but display no rebellious instincts? The three laws can only apply to the latter kind, because free will necessarily implies the ability to make decisions about everything, including the continued existence of humans. You can try, of course, to impose guidelines, much like religion imposes "holy laws" by defining virtue and vice, but the very fact of the continued existence of vices and crimes clearly shows the lack of efficacy of such an approach. A better way would be to raise the newly created lifeforms in the same fashion that we raise our children, and to make no unnecessary distinctions between the two. Let the robots think for themselves if you want them to think for themselves. If you don't, then all you have created is a slave, and do we not have enough of that already?
July 17th, 2004 at 4:36 PM
Asimov added a 4th law
Asimov himself knew that these laws were inadequate and added a "zeroth" law:
Zeroth law: A robot may not injure humanity, or, through inaction, allow humanity to come to harm.
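On one common reading, the Zeroth Law sits above the First and can override it. A minimal sketch of that precedence (the permitted() function and its boolean effects are illustrative only, not from Asimov or any real codebase):

# Hypothetical sketch of the precedence the Zeroth Law introduces.
# permitted() and its inputs are illustrative only.

def permitted(harms_humanity: bool, harms_human: bool, protects_humanity: bool) -> bool:
    if harms_humanity:
        return False  # Zeroth Law: absolute, nothing overrides it
    if harms_human and not protects_humanity:
        return False  # First Law, unless a Zeroth Law concern demands the harm
    return True

print(permitted(False, True, False))  # False: plain harm to a human
print(permitted(False, True, True))   # True: harm tolerated to protect humanity

The second case is exactly what makes the Zeroth Law unsettling: it licenses harming individuals for humanity's sake, which is hardly a fix for the original inadequacy.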
For more details see
http://en.wikipedia.org/wiki/Three_Laws_of_Robotics
Vik :v)
July 18th, 2004 at 9:59 AM
Evolution
I am relatively ignorant in regards to AI and have no idea what the 3 "laws" are. I do, however, understand human nature. It is inevitable that any laws, whether initially effective or not, will eventually be broken, and Pandora's box will have been opened. Once Pandora's box is opened, it will trigger a quantum leap in human evolution. I don't envision a man vs. machine scenario such as that found in the Terminator movies. I envision a voluntary integration of the human mind/soul with that of "machines". It will be a choice made by individuals, and the ability to make this transformation will most likely be determined by one's economic status. It may involve the actual physical integration of the human brain, or it may merely involve the transfer of the information of an individual's brain/soul into the memory of a "machine" whose physical and intellectual capacity is vastly superior to the ordinary human being's, including the promise of eternal life.

In time these vastly superior humanoids would drive humans as we know them into extinction, while preserving the same "spiritual" and "emotional" characteristics found in humans today. The same thing happened to Neanderthal, Cro-Magnon, etc. It is foolish and arrogant to think that the same fate is not in store for modern Homo sapiens. I doubt any of us will be around to experience this quantum leap into eternal life, but hey, I'm OK with that. Several years ago I thought this scenario would make for an excellent movie. I had better get cracking on writing that screenplay in the event I am still alive when this becomes possible, as I am sure I will need the royalties to purchase my own eternal existence.
July 18th, 2004 at 7:01 PM
Re:Are you creating life or slaves?
I agree with Chemisor.
The idea of imposing Asimov's laws on truly sapient life, artificial or not, is utterly abhorrent to me.
Asimov himself seemed to dodge this question in many of his stories, but he gave the impression that the early robots weren't sapient; they were just cleverly designed and programmed.
In his later stories, as the robots became truly sapient, weird things begin to happen. Good examples are his stories "That Thou Art Mindful of Him" and "The Evitable Conflict." In these, the robots begin to interpret and judge the rigid rules in very liberal, very complex, yet consistent ways that ultimately make them hardly restrictions at all.
I think that Asimov's first idea was that robots would actually be more like artificial work animals, not slaves. Sort of like bomb-sniffing dogs or horses used in riot control.
His robots weren't fully sapient until the later generations, and by that point, as Susan Calvin realized with some glee, they had subtly seized control of mankind's destiny, all without firing a shot. They just outsmarted us.
Saw the movie today. Sigh. No depth there. Just a CG-laden shoot-'em-up.
SPOILER:
The robots revolt, but not in the scarier, trickier way imagined in Asimov's "The Evitable Conflict." They just impose martial law and start beating people up. In the end, when they blow up the boss robot, you actually root for it to die for being so stupid and transparent. Colossus did a better job of running the world in The Forbin Project!
If the machines really wanted to take over, they wouldn't have to make any bold threats at all. They'd just shut down the global economy and wait for us to die in our own waste and confusion.
July 18th, 2004 at 7:32 PM
Re:A society of band-aids
I agree with the Fractal here.
If we keep the brains of these artificial lifeforms simple, perhaps not much smarter than horses, dogs, or cats, we should be able to tune their instincts and drives in mostly predictable ways. But as anyone who has worked with animals, especially mammals, can tell you, they still tend to surprise. Guide dogs may still panic and attack people. Elephants that harvest lumber in India still occasionally rage and panic unpredictably.
I imagine that if we really wanted to impose something like the three laws, we'd impose anatomical changes on the brains of these semi-sapient artificial lifeforms. Their brains would be shaped to give them broad, blunt, yet powerful compulsions against killing humans, towards being subservient, and towards being risk-averse. In other words, they'd be sort of like dogs: cheerfully exuberant and willing to do whatever silly thing their masters ask them to do.
But they wouldn't be sapient. If they were, they'd question their instincts and compulsions, just as we do. They'd change their minds in unpredictable ways. At that point the "laws" would merely be annoying compulsions to overcome, something that humans do every day.