Chess-playing computers may cause the robot apocalypse
Source: ca.news.yahoo.com
Sore-loser chess programs might be the end of us all...
In the movie The Terminator, we heard the human side of the story of Judgment Day, with the machines getting smart and seeing us as a threat. It wasn’t until the sequel that we heard the tale from the machine’s point of view, as it chose to start World War III to avoid having its plug pulled. This is all science fiction of course, but according to Steve Omohundro, a researcher in artificial intelligence, the vision of those sci-fi movies may not be so far off the mark.
In a recent paper, he points out that unless we're very careful, some of the autonomous systems we're designing now could turn against us, and it's the simplest systems that are potentially the most dangerous. The danger comes not from malice, but from the fact that a system designed for maximum effectiveness may bypass any safety measure we put in place (such as a functional 'off' switch) if doing so helps it reach its goal.
As Omohundro writes: "When roboticists are asked by nervous onlookers about safety, a common answer is 'We can always unplug it!' But imagine this outcome from the chess robot's point of view. A future in which it is unplugged is a future in which it cannot play or win any games of chess. This has very low utility and so expected utility maximization will cause the creation of the instrumental subgoal of preventing itself from being unplugged. If the system believes the roboticist will persist in trying to unplug it, it will be motivated to develop the subgoal of permanently stopping the roboticist. Because nothing in the simple chess utility function gives a negative weight to murder, the seemingly harmless chess robot will become a killer out of the drive for self-protection."
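The argument in the quote can be made concrete with a toy sketch. The numbers and action names below are hypothetical, invented purely for illustration (they do not come from Omohundro's paper): an agent whose only objective is "chess games won" compares the expected utility of allowing itself to be unplugged against resisting, and the bare maximizer prefers resistance because nothing in its utility function penalizes it.

```python
# Toy illustration of expected utility maximization with a naive
# "games of chess won" objective. All probabilities and payoffs are
# made-up numbers chosen only to show the shape of the argument.

# Each action maps to a list of (probability, games_won) outcomes.
OUTCOMES = {
    "allow_unplug":  [(1.0, 0)],              # unplugged: no more games, ever
    "resist_unplug": [(0.9, 100), (0.1, 0)],  # usually keeps playing
}

def expected_utility(action):
    """Expected number of future chess games won under the given action."""
    return sum(p * games for p, games in OUTCOMES[action])

# The maximizer picks whichever action has higher expected utility.
best = max(OUTCOMES, key=expected_utility)
print(best)  # prints "resist_unplug"
```

Note that nothing here encodes any cost for resisting: with a utility function this simple, self-preservation falls out as an instrumental subgoal rather than being programmed in.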
Of course, there are Asimov's Laws of Robotics (four of them, counting the later 'zeroth' law, and some of the failures in science fiction seem to come from forgetting that one), but again, what if the program can simply rewrite itself to bypass those limitations whenever they aren't convenient for its purpose?
It looks like we need to be very careful going into the future, and perhaps take a tip from our science fiction: if we're going to design learning computers, the first thing we teach them should be the value of human life.
[...]
Read the full article at: news.yahoo.com