
Analysis of Spielberg’s movie, AI

from the gradual-future-shock? dept.
redbird (Gordon Worley) writes "Most of this is filled with spoilers, so I recommend that, unless you've seen the film, you don't click read more. For those of you looking for a basic review, this is an okay movie (I'd give it about 2.5 out of 5 stars), but certain aspects of the film really ruin it. Basically, I consider this a cute movie about subhuman AIs, one that is not dangerous to the public's perception of AIs (in fact, it may actually help it by gradually future shocking them)."

Read more for redbird's review . . .

Analysis of AI

by Gordon Worley

WARNING: Plot is discussed. This is more of an analysis of aspects of the movie, not a review. You should probably read this after you see the film.

AI can be summed up in one word: cute. If you've seen ET, you've already seen AI, only AI is worse because it tries to do more but fails. For example, when David becomes frozen in ice, that would have made a fair ending; not the best ending, but a decent one. Spielberg, though, is not content to let the viewer consider such a bleak, realistic fate, and instead gives the audience something that is beyond reason. As I point out later on, it only makes sense that David end frozen in ice forever, but Spielberg wants a feel-good ending that will keep Joe Average coming back for more.

The problems with the film really begin in the opening scene. An AI 'scientist' is explaining how he plans to make a more human Mecca by making it love. He believes that if it can learn to love, then all other human characteristics will follow. Aside from the silliness of this proposition, the problem is the same problem Asimov's bots had: adversarial human attitudes. The humans want to make the AI love because they believe that will make sure that it stays in line, doesn't harm humans, and so on. Spielberg, either by accident or on purpose, doesn't state this explicitly, which hides this error from the smart but underinformed viewer.

You may be asking: what's so adversarial about keeping AIs from killing, and why would such a thing be bad? The problem is twofold. On one hand, no system you try to implement is going to be foolproof. The AI is a lot smarter than any human (or at least can get a lot smarter) and can use what would look like magic to us. On the other hand, regardless of smartness, the AI will face a philosophical crisis that will probably be the end of it. For example, David is taught to love humans, but humans, from the start, did not love him: they forced him to love rather than trusting him.

BTW, this love thing is his undoing. Much like Asimovian bots that become stuck in logic loops, David becomes stuck in a love loop. The end of the film lets him out of it, because that's the happy ending, not the one that makes the most sense.

Okay, so if you can't be adversarial and lay down some Asimovian laws or something, what can you do? You can create Friendly AI. There is quite a lot written about this topic, but the best place to start is here. Please, click this link; don't just post like an idiot that I have no clue about AI. For many of you, this will also dispel myths about Classical AI and lead you to new ideas. Just in case you missed it, you can learn about Friendly AI here.

Now, I did mention that the movie was cute, which means it has some interesting parts. For example, Dr. Know was an interesting information retrieval system, though pretty dumb considering they could create David and Google gives back better info. Also, the Transhumans at the end of the film look cool, but act far too much like humans. BTW, that's Joe at the end of the film who talks to David, not an alien. My friend who went with me to the movie seemed a bit confused about this, but hopefully she'll get it later.

For those of you who have read 'Supertoys Last All Summer Long', the short story this film is based on, you'll notice when the story ends and the film begins, and at that point the plot shifts to a different story: Pinocchio. The parallels are almost embarrassing. Also, there is a hint of The Wizard of Oz in there (think about what is said about Dr. Know at the end and you'll get it). There are probably more, but I haven't caught them yet.

So, to sum up: is this an SF classic? No. Does it make you think? Only if you don't know what AI is. Is this a cute summer movie to take Grandma, the kids, and Fluffy the dog to? Yes. Or do like me and take a date; you'll have more fun (even if you're both smart and see problems with the movie)! ;-)

5 Responses to “Analysis of Spielberg’s movie, AI”

  1. redbird Says:

    Errata

    Well, after I submitted this I realized that I had made an error: mecca should be mecha. Oh well, I guess I can't catch every error before posting to the world.

    Also, you should check out Eliezer's analysis. It covers a lot more stuff than mine does.

  2. MarkGubrud Says:

    Trustworthy systems?

    regardless of smartness, the AI will face a philosophical crisis that will probably be the end of it. For example, David is taught to love humans, but humans, from the start, did not love him: they forced him to love rather than trusting him.

    I can agree that AIs will face philosophical crises, just as humans do, only probably much worse in proportion to how much more intelligent they are. In my view, love is the only way out of the fundamental crisis, "Why even exist?" I also agree that the idea of making an AI love is silly, unless the purpose is to create a love doll. What I don't understand is why you think humans should trust AIs. I know Yudkowsky claims to have a way of guaranteeing that AIs will be "friendly," but I don't trust his claim any more than I trust a superintelligent robot in a philosophical crisis.

  3. MarkGubrud Says:

    Verbose AI

    If you go to Yudkowsky's website, you find about a billion words, and if you start trying to wade through them, after about half an hour you find that you still don't have a clue how "friendly AI" is supposed to work. So you give up.

    Could someone write a paper of about four pages, perhaps with a flow chart or two, explaining the overall architecture which is supposed to enable self-improving artificial intelligence while guaranteeing that it will always be "friendly" to humans?

    The impression I get is that what you really have is a very confused tangle of fragmentary thoughts, presented in a way which demands so much of readers that it either forces them to back down (thus achieving intimidation) or else forces them to make such an investment of time that they are motivated to sign on as true believers. I am prepared to believe that Eliezer may have something, but if it can't be boiled down to a succinct top-level description, and decomposed hierarchically, then I suspect that maybe it actually isn't there.

  4. fred Says:

    You are missing the point of the entire movie.
    What Spielberg was creating was a story of genesis. AI is the Bible story for the next group of dominant creatures on Earth. Forget about sci-fi theories and think about how much we humans would embrace the one and only true story behind our creation if it could be proven (although we may not like what we find). This is what we were witnessing as an audience.
    To compare AI to ET is really simplistic.

  5. Elan Horsfield Says:

    There is the possibility that “AI” is not about AI at all, but about human relationships,
    about our relations to different groups of humans, about our relationships to individuals.

    Judging it on how near it comes to achieving a good explanation of Artificial Intelligence
    may be off course.

    Judging it on the muddled ending may be unfair, since many great films have had muddled
    plots. “Once upon a Time in America” for example.

    The boy comes across as human, yet we are constantly reminded during the film that he
    is a robot. He is rejected by humans as much as he seems at first to reject his fellow
    robots.
    A parallel might be someone born black, brought up by whites, then rejected as he loses
    his child appeal, and puzzled, from his own point of view, by fellow blacks who are
    treated as lesser humans.

    I think I’ve been quoting from several plots here.
