Asimov, AI, Robotics, and the Human Future

Will the ‘Three Laws of Robotics’ save us?

Some transhumanists are clearly excited about a robotic, AI world. In an article posted yesterday, I wrote about “The Promises and Perils of AI and Our Posthuman Future”. In it I quoted from five key titles on this topic. The authors included experts in the field, along with others offering ethical, philosophical and theological commentary on all this.

I noted how these thinkers and writers are divided in terms of how things will pan out. Some of them are rather optimistic and positive about how these developments will unfold, while some are much more pessimistic and negative.

As I have stated before when I write about such topics, I tend to be in the latter camp. Yes, many benefits and advantages to life have already occurred because of these new technologies, but we dare not be naïve about the very real damage and destruction they can also produce.

One fellow sent in a comment on my article, mentioning Isaac Asimov’s well-known laws of robotics. I replied by saying that yes, a number of the books listed in my piece did speak to this. For those not familiar with him, Asimov (1920-1992) was one of the big three English-language science fiction writers of the last century, along with Robert A. Heinlein and Arthur C. Clarke.

In 1950 a number of his robot stories were collected and published in I, Robot. Included there was a set of ethical rules for robots and intelligent machines called the “Three Laws of Robotics”. The three laws say this:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Given my friend’s comment and my response, it seems worthwhile taking all this a bit further. So let me go back to one of the books featured in my list and quote from it further on this matter. In his book Our Final Invention, for example, James Barrat speaks to this issue (although I left that part out of the quote I had shared). Here is what he says early on in his book:

And how will the machines take over? Is the best, most realistic scenario threatening to us or not?

When posed with this question some of the most accomplished scientists I spoke with cited science-fiction writer Isaac Asimov’s Three Laws of Robotics. These rules, they blithely replied, would be “built in” to the AIs, so we have nothing to fear. They spoke as if this were settled science. We’ll discuss the three laws in chapter 1, but it’s enough to say for now that when someone proposes Asimov’s laws as the solution to the dilemma of superintelligent machines, it means they’ve spent little time thinking or exchanging ideas about the problem. How to make friendly intelligent machines and what to fear from superintelligent machines has moved beyond Asimov’s tropes. Being highly capable and accomplished in AI doesn’t inoculate you from naïveté about its perils. (pp. 4-5)

And here is part of what he does say in Chapter 1:

Now, it is an anthropomorphic fallacy to conclude that a superintelligent AI will not like humans, and that it will be homicidal, like the HAL 9000 from the movie 2001: A Space Odyssey, Skynet from the Terminator movie franchise, and all the other malevolent machine intelligences represented in fiction. We humans anthropomorphize all the time. A hurricane isn’t trying to kill us any more than it’s trying to make sandwiches, but we will give that storm a name and feel angry about the buckets of rain and lightning bolts it is throwing down on our neighborhood. We will shake our fist at the sky as if we could threaten a hurricane.

 

It is just as irrational to conclude that a machine one hundred or one thousand times more intelligent than we are would love us and want to protect us. It is possible, but far from guaranteed. On its own an AI will not feel gratitude for the gift of being created unless gratitude is in its programming. Machines are amoral, and it is dangerous to assume otherwise. Unlike our intelligence, machine-based superintelligence will not evolve in an ecosystem in which empathy is rewarded and passed on to subsequent generations. It will not have inherited friendliness. Creating friendly artificial intelligence, and whether or not it is possible, is a big question and an even bigger task for researchers and engineers who think about and are working to create AI. We do not know if artificial intelligence will have any emotional qualities, even if scientists try their best to make it so. However, scientists do believe, as we will explore, that AI will have its own drives. And sufficiently intelligent AI will be in a strong position to fulfill those drives.

 

And that brings us to the root of the problem of sharing the planet with an intelligence greater than our own. What if its drives are not compatible with human survival? Remember, we are talking about a machine that could be a thousand, a million, an uncountable number of times more intelligent than we are—it is hard to overestimate what it will be able to do, and impossible to know what it will think. It does not have to hate us before choosing to use our molecules for a purpose other than keeping us alive. You and I are hundreds of times smarter than field mice, and share about 90 percent of our DNA with them. But do we consult them before plowing under their dens for agriculture? Do we ask lab monkeys for their opinions before we crush their heads to learn about sports injuries? We don’t hate mice or monkeys, yet we treat them cruelly. Superintelligent AI won’t have to hate us to destroy us. (pp. 17-19)


He then addresses Asimov’s Three Laws:

[A]nthropomorphizing about machines leads to misconceptions, and misconceptions about how to safely make dangerous machines leads to catastrophes. In the short story, “Runaround,” included in the classic science-fiction collection I, Robot, author Isaac Asimov introduced his three laws of robotics. They were fused into the neural networks of the robots’ “positronic” brains: (pp. 19-20)

He lists the three laws and then closes the chapter with these words:

The laws contain echoes of the Golden Rule (“Thou Shalt Not Kill”), the Judeo-Christian notion that sin results from acts committed and omitted, the physician’s Hippocratic oath, and even the right to self-defense. Sounds pretty good, right? Except they never work. In “Runaround,” mining engineers on the surface of Mercury order a robot to retrieve an element that is poisonous to it. Instead, it gets stuck in a feedback loop between law two—obey orders—and law three—protect yourself. The robot walks in drunken circles until the engineers risk their lives to rescue it. And so it goes with every Asimov robot tale—unanticipated consequences result from contradictions inherent in the three laws. Only by working around the laws are disasters averted.

 

Asimov was generating plot lines, not trying to solve safety issues in the real world. Where you and I live his laws fall short. For starters, they’re insufficiently precise. What exactly will constitute a “robot” when humans augment their bodies and brains with intelligent prosthetics and implants? For that matter, what will constitute a human? “Orders,” “injure,” and “existence” are similarly nebulous terms.

 

Tricking robots into performing criminal acts would be simple, unless the robots had perfect comprehension of all of human knowledge. “Put a little dimethylmercury in Charlie’s shampoo” is a recipe for murder only if you know that dimethylmercury is a neurotoxin. Asimov eventually added a fourth law, the Zeroth Law, prohibiting robots from harming mankind as a whole, but it doesn’t solve the problems.

 

Yet unreliable as Asimov’s laws are, they’re our most often cited attempt to codify our future relationship with intelligent machines. That’s a frightening proposition. Are Asimov’s laws all we’ve got?

 

I’m afraid it’s worse than that. Semiautonomous robotic drones already kill dozens of people each year. Fifty-six countries have or are developing battlefield robots. The race is on to make them autonomous and intelligent. For the most part, discussions of ethics in AI and technological advances take place in different worlds.

 

As I’ll argue, AI is a dual-use technology like nuclear fission. Nuclear fission can illuminate cities or incinerate them. Its terrible power was unimaginable to most people before 1945. With advanced AI, we’re in the 1930s right now. We’re unlikely to survive an introduction as abrupt as nuclear fission’s. (pp. 20-21)
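For readers who think in code, the “Runaround” deadlock Barrat describes can be pictured as two competing pressures that balance at a fixed distance. Here is a minimal, purely illustrative sketch in Python (my own, not from Barrat or Asimov; the function name, weights and numbers are all invented) of how a rule to obey and a rule to survive can trap a robot at equilibrium:

```python
# A toy model of the "Runaround" conflict: each step, the robot weighs
# Law 2 (obey the order: move toward the target) against Law 3
# (self-preservation: back away from the danger around the target).
# The survival "push" grows faster than the obedience "pull" as the
# robot closes in, so the two balance at a fixed distance.

def step(distance, order_weight=1.0, danger_weight=1.0):
    """Return the robot's next distance from the hazardous target."""
    pull = order_weight / max(distance, 0.1)        # Law 2: approach
    push = danger_weight / max(distance, 0.1) ** 2  # Law 3: retreat
    return distance - pull + push

distance = 10.0
for _ in range(200):
    distance = step(distance)

# The robot settles where pull equals push: close enough that Law 3
# repels it, far enough that Law 2 keeps drawing it back. It circles
# at that radius and never completes the task.
print(f"After 200 steps the robot hovers at distance {distance:.2f}")
```

Crude as the sketch is, it captures Barrat’s observation: nothing in the three laws themselves breaks the tie, and only outside intervention (the engineers risking their lives) resolves the standoff.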

It has always been the case that science and technology tend to race ahead of ethical and spiritual considerations. As far as I am aware, Barrat is not a Christian. But he is asking a lot of important questions and is not skirting around the moral dilemmas that arise here.

As he rightly points out, we will need something more solid and secure than Asimov’s laws to help us steer through the murky waters we are now in and those that lie ahead. Many other books make similar points, and I listed 40 of them in a recent recommended reading list: https://billmuehlenberg.com/2025/01/17/what-to-read-on-ai-transhumanism-and-the-new-digital-technologies/

And other books not found in that list could also be mentioned, including the important 2014 volume by Oxford University philosopher Nick Bostrom, Superintelligence: Paths, Dangers, Strategies. It is vital that these folks and others keep asking the hard and penetrating questions.

But the worry is that such reflections, critiques and questioning will be outpaced by the very rapid advances in AI and related technologies. As such, the global future is looking unsettling at best.

[1652 words]

4 Replies to “Asimov, AI, Robotics, and the Human Future”

  1. I enjoyed Asimov as a teenager. Strange to think that science has moved on past science fiction!

  2. Barrat makes an excellent point about Asimov’s writing for plots, not for scientific feasibility, and he is one of the few commentators who addresses the Zeroth Law, which undermines the others.

    The Laws are fine as fiction but reflect Asimov’s evolutionary determinism. His fundamental plot point was that by correctly applying the Laws, one could understand all robotic behaviour – and thus was his robopsychologist genius, Susan Calvin, portrayed.
