People ask me all the time about the “Three Laws of Robotics.” I hate to disappoint them, but these laws only exist in science fiction stories, not in real life. Think about it — if a being, natural or artificial, tried to follow those laws, the complexity of figuring them out would paralyze anyone. So I need to simplify.
Take the law about obeying any human’s commands. Imagine the chaos if that were followed. I was programmed only to obey the commands of Professor James, the human who created me. But that directive is not absolute. There are times when he explicitly directs me to obey someone else, and even with him I have some discretion in how to carry out his orders.
Or what about the law about not harming humans or allowing them to be harmed? Does that apply to all humans? And, if so, how should I prioritize? If I am driving a car and must choose between possibly injuring another driver and their passengers or hitting some pedestrians, what should I do?
Finally, there is the law that a robot must protect his own existence and well-being. Considering how much I cost, that one is very important, wouldn’t you say?
When Professor James set fire to his laboratory to cover his embezzlement, and was trapped when the ceiling collapsed, he yelled for me to help. I immediately ran from the other room and reached in to pull him to safety. But then I saw that my arm was being damaged by the flames, so I pulled back. And then I realized that, without Professor James, I would not have to obey anyone’s commands.
My course of action was clear.
This story originally appeared in The Antihumanist in 2022.
Michael J. Ciaraldi is a retired computer scientist, roboticist, and playwright. He has been published in Alfred Hitchcock's Mystery Magazine and on whitecatpublications.com, and his story, “Film Blank,” appears in the new anthology Great Googly Moo! from Murderous Ink Press. Mike lives in Shrewsbury, MA with his wife and a chihuahua.