Isaac Asimov and the Three Laws of Robotics

Robotics, artificial intelligence and smart algorithms are generating a lot of buzz lately and, by all accounts, the trend isn’t going away in 2017. It will be some time before these technologies are fully integrated into society, but some experts are already warning about the negative outcomes that could follow.

Most of the warnings about robotics and artificial intelligence are framed around the Three Laws of Robotics. The rules, devised by science fiction author Isaac Asimov and first introduced in his 1942 short story “Runaround,” are as follows:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Later on, Asimov added a fourth law, known as the Zeroth Law, which precedes all the others:

  0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
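
Read together, the four laws form a strict priority ordering: a lower-numbered law always overrides the ones below it. The sketch below is purely illustrative; the Action fields, the permitted() check and the example scenarios are hypothetical assumptions, not anything drawn from Asimov’s stories or from a real robotics system.

```python
# A minimal, hypothetical sketch of Asimov's laws as a strict priority ordering.
# The harm/obedience predicates and the example actions are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_humanity: bool = False   # Zeroth Law concern
    harms_human: bool = False      # First Law concern
    disobeys_order: bool = False   # Second Law concern
    endangers_robot: bool = False  # Third Law concern

def permitted(action: Action) -> bool:
    """Return True only if the action violates no law, checked in priority order."""
    if action.harms_humanity:   # Zeroth Law outranks everything else
        return False
    if action.harms_human:      # First Law
        return False
    if action.disobeys_order:   # Second Law, already constrained by the First
        return False
    # Third Law: self-preservation yields to every law above it, so merely
    # endangering the robot never blocks an otherwise lawful action.
    return True

if __name__ == "__main__":
    options = [
        Action("shield a bystander", endangers_robot=True),
        Action("follow an order to strike someone", harms_human=True),
    ]
    for option in options:
        print(f"{option.name}: {'allowed' if permitted(option) else 'forbidden'}")
```

The point of the ordering is simply that a conflict is never resolved in favor of a lower-priority law, which is exactly the clause Asimov appended to the Second and Third Laws.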

Elon Musk and Stephen Hawking are among the prominent figures who have warned about the dangers of artificial intelligence.

Until now, Asimov’s Laws have existed only within science fiction. We are unlikely to see robots like his in the near future, but with how rapidly artificial intelligence is developing, one cannot help but wonder about human safety. Everything becomes more complicated the moment one asks how an AI could even understand the concept of humanity’s well-being.

The rules devised by Asimov are far from bulletproof. Even in his own books, the author finds ways for his characters to circumvent them, which is why many experts and Asimov’s contemporaries believe he came up with the set of rules precisely so he could explore interesting ways to exploit them. Asimov himself, however, believed that modified versions of the basic rules could eventually be built into a viable, functioning machine.

“I have my answer ready whenever someone asks me if I think that my Three Laws of Robotics will actually be used to govern the behavior of robots, once they become versatile and flexible enough to be able to choose among different courses of behavior,” Asimov wrote in 1981. “My answer is, ‘Yes, the Three Laws are the only way in which rational human beings can deal with robots—or with anything else.’”

One of the most interesting applications of artificial intelligence may well be the self-driving car. For now, when a self-driving car is involved in an accident in the USA, the manufacturer tends to bear the blame. Self-driving cars are nowhere near as intelligent as Asimov’s characters, but they already have to weigh certain ethical dilemmas. Mercedes-Benz recently gave a notable answer to one such question, stating that its autonomous cars will, first and foremost, protect the lives of the passengers inside the vehicle rather than the people outside it.

The topic becomes even more interesting once you bring self-flying devices and mobile manipulation robots into play. For the time being, the responsibility for selecting a target rests in human hands, but who knows what the future will bring, and who, or more importantly what, will be held responsible later on.

Artificial intelligence will remain an open topic for quite a while, as there are still many challenges to overcome. One thing is certain, though: we cannot wait to see where continued work in robotics and AI takes us all.
