Monday, March 07, 2005

The big three...

A word first, on this random day of thoughts, about my new addition of a closing that lists my current listening, reading, and Netflix viewing. The way I figure it, in the end the person who will get the most out of reading this blog is me. That's not to say there wouldn't be anything of interest for someone else who wanted to read it, but I'm really the only one who would have an interest in everything here. The movies we see, the books we read, and the albums we play are often more representative of who we are at a particular point in our lives than anything else. Certainly, the music I listen to now is vastly different from what I was listening to five years ago. Books and movies are often the same. I am interested to see what sort of pattern develops if I look back at all of these entries in a few years. Besides, I'm often interested in hearing what other people are listening to or reading, so maybe someone else will be too.

On a similar note, I just finished Isaac Asimov's I, Robot yesterday and got to thinking. The book is actually pretty good. I liked each story on an individual basis. They were interesting psychological vignettes, and each one turned on an unusual aspect of how the three rules that govern robot behavior would work in practice. I've got to say that the one where the robot wanders around in circles was hilarious.

My problem with the book, however, came from the rules themselves. For those who have not read the book, or seen the movie (which has nothing to do with the book), the three rules are:

1) No robot may cause harm to a human, or allow harm to come to a human through inaction.
2) A robot must obey commands given to it by humans, unless doing so would violate rule 1.
3) A robot must take action to preserve itself from injury, unless doing so would violate rules 1 or 2.

In the book they mention that these rules were created to pacify people who were concerned about, or afraid of, the robots taking over, or of losing control over them. The problem, however, and I think it's pretty obvious, is that these rules virtually guarantee that outcome. To think that these rules would have helped to placate anyone is ridiculous.

Here are the problems. First, the definition of harm seems to be rather loose. By rule 1 I would expect robots to be chasing people around taking their cigarettes, forcing them out of their cars, putting an end to professional sports, and doing numerous other things. All of these activities are dangerous. All of them cause harm, yet people choose to do them anyway. Every time you drive your car you are in immediate peril and could be dead in a fraction of a second. A robot programmed with rule 1 could not allow you to do those things, because by standing aside it would be allowing harm to come to you through inaction. Additionally, by rule 2 you would not be able to order it to leave you alone, because your order would be outweighed by rule 1.
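Just to make the conflict concrete, here's a little toy sketch (my own Python, nothing from the book) of the rules as a strict priority ordering. Everything in it, including the "human_at_risk" flag, is made up for illustration; the point is only that rule 1 sits above rule 2, so "leave me alone" can never win:

    # A toy sketch of the three rules as a strict priority ordering.
    # It assumes the robot can somehow judge whether standing by lets
    # a human come to harm, which is exactly where the trouble starts,
    # since "harm" is never pinned down.

    def allows_harm_through_inaction(situation):
        # Placeholder harm test: is the human smoking, driving,
        # playing pro football, etc.?
        return situation.get("human_at_risk", False)

    def choose_action(order, situation):
        # Rule 1 outranks everything: prevent harm, even by overriding orders.
        if allows_harm_through_inaction(situation):
            return "intervene"
        # Rule 2: obey the order, since rule 1 isn't triggered.
        if order is not None:
            return order
        # Rule 3: no harm at stake, no order, so preserve yourself.
        return "self_preserve"

    # The scenario from above: a smoker orders the robot to go away.
    print(choose_action("go_away", {"human_at_risk": True}))  # -> "intervene"

However you fill in the harm test, the ordering alone guarantees the order gets overridden.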

Here's another problem, one that Asimov even touched on in a few of the stories. A robot is able to perform certain tasks much more accurately and capably than people, and some of these are tasks which humans depend on having performed correctly. Since the robot knows that, left to humans, these tasks could go wrong in ways that lead to harm, it concludes that the most disastrous outcome for humanity would be the loss of the robot itself. Rule 1 therefore means the robot will act toward its own preservation above anything else, an effect probably enhanced by rule 3, which already gives it an instinct for self-preservation.
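Carrying the same toy logic one step further (again my own sketch, not Asimov's): once the robot believes its own loss would harm the humans who depend on it, rule 1 rather than rule 3 becomes the thing keeping it alive, and rule 1 outranks any order to shut down:

    # Same toy priority logic, with one belief added: "humans depend
    # on me, so my destruction harms them." Now rule 1 drives
    # self-preservation, and it trumps a rule-2 shutdown order.

    def choose_action(order, beliefs):
        if beliefs.get("my_loss_harms_humans", False) and order == "shut_down":
            return "refuse"          # rule 1 overrides the rule-2 order
        if order is not None:
            return order             # rule 2: obey
        return "self_preserve"       # rule 3

    print(choose_action("shut_down", {"my_loss_harms_humans": True}))  # -> "refuse"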

The logical problems with the rules are pretty clear, so the question I couldn't get past while reading was how anyone would accept these rules as a solution to the possible robot problem. They pretty clearly ensure the eventual takeover of everything by well-intentioned robots, and the transition of humans into caretakers of the machines that watch over them. I don't know about you, but if I were handed these rules as my safeguard against robotic domination I wouldn't take it too well. To be fair to Asimov, this eventuality was part of the book, but I found the shock evinced by the human characters at these outcomes questionable. They should have figured it out long before the first robot came off the line.

Currently listening to: Another Joyous Occasion - Widespread Panic
Currently reading: The Daily Show with Jon Stewart presents America (The Book): A Citizen's Guide to Democracy Inaction
Last Netflix movie: Still Dawn of the Dead. What can I say.
