
The laws of robot ethics
The best-known ideas about robot ethics come from science fiction. Isaac Asimov wrote about three laws:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
These laws have been widely explored in sci-fi. Usually a robot character fails to obey some of them, or a human creator fails to foresee their consequences. In real life, I expect few researchers in robotics, AI and ethics would say these fictional laws are very influential on their work. But ethics and standards bodies are now drafting rules to make sure our artificial creations behave ethically.
The UK’s Engineering and Physical Sciences Research Council (EPSRC) had researchers draft a set of Five Principles for the makers and users of robots. They are less inspiring than Asimov’s laws, but they do set out some important principles around responsibility and accountability. People design, make, own and use robots, so it’ll be people who face the law when a robot does something naughty.
The Zeroth law: robots serving humanity
It gets more interesting when we think about bots interacting with groups of people. To his Three Laws, Asimov added a fourth (the Zeroth law, because it takes precedence over the others):
A robot may not harm humanity, or by inaction, allow humanity to come to harm.
“Four-law compliant” robots are there to serve all humanity, and can hurt an individual person if necessary to protect humanity. Things were simple when a robot only had to put people’s interests ahead of its own. Now we’re dealing with robots making ethical choices about the good of one person vs. the good of another.
Functional imagination: taking sides for a better future
Making these ethical choices relies, first, on a bot being able to predict various possible futures. Then it has to make ethical judgments about which future is most desirable. Finally, it has to act fast enough to make the more desirable future more likely. (Or, of course, you could time-travel back to fix it later.)
I heard this ability referred to as “functional imagination” last week (in a talk by Professor of Robot Ethics Alan Winfield at the New Frontiers in Robotics meeting in Cambridge). He showed an experiment where one robot tries to save two “humans” from falling into a hole. Usually the robot fails because it is racked with indecision and dithers. The robot was not designed to take a side and commit to saving just one person, so it usually ended up with the worst possible outcome both for humanity and for the individual humans. Surely, on occasion, it must be better for a robot to take a side than to dither and help nobody?
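As a toy illustration of that predict-then-judge-then-act loop, and of why committing beats dithering in the hole-rescue experiment, here is a minimal Python sketch. Everything in it (the function names, the scoring rule, the pre-baked predictions) is an assumption made for this post, not Winfield’s actual consequence engine.

```python
import random

def simulate_outcome(world, action):
    """Predict the future state of the world if the robot takes `action`."""
    # In the real experiments this would be a physics/world simulation;
    # here we just look up a pre-baked prediction in a toy world model.
    return world["predictions"][action]

def ethical_score(outcome):
    """Judge how desirable a predicted future is (higher is better)."""
    # A crude metric: minus one point per human harmed.
    return -outcome["humans_harmed"]

def choose_action(world, actions):
    """Commit to the action whose predicted future scores best.

    Ties are broken at random rather than by dithering: committing to
    saving *someone* beats helping nobody.
    """
    scored = [(ethical_score(simulate_outcome(world, a)), a) for a in actions]
    best = max(score for score, _ in scored)
    return random.choice([a for score, a in scored if score == best])

# Toy version of the hole-rescue dilemma: saving either "human" means the
# other falls in, and doing nothing means both do.
world = {
    "predictions": {
        "save_human_A": {"humans_harmed": 1},
        "save_human_B": {"humans_harmed": 1},
        "do_nothing":   {"humans_harmed": 2},
    }
}

print(choose_action(world, ["save_human_A", "save_human_B", "do_nothing"]))
```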
There are no responsible robots
Just before Alan Winfield’s talk, Noel Sharkey made a nice point:
“there are no responsible robots, only responsible people”.
Somebody is responsible for the consequences of their robot taking sides in this dilemma. That’s true even if they’ve set up the bot to make a random decision in those cases.
This can all get hard in practice. Robots are products, and (should) come with the same privacy assurances as non-robotic, non-AI products. Bots clearly need to be conservative and intentional about what information they share. The makers of a bot, and its many users, are responsible for its decisions. But how can they gather enough information about those decisions to stay in control of them?
Taken together, we’re asking:
- Bots to make ethical (or ethics-like) decisions affecting people
- Bots not to divulge the private information they’re using to make those decisions
- People to be responsible for those decisions regardless.
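One way to square those three demands, very loosely sketched in Python below, is an audit log that records each decision and its rationale for the responsible people, while storing only a hash of the private inputs behind it. The class and field names here are assumptions made for illustration, not a reference to any real bot framework.

```python
import hashlib
import json
from datetime import datetime, timezone

class DecisionLog:
    """Accountability without disclosure: decisions stay readable,
    private inputs stay verifiable (via a hash) but not readable."""

    def __init__(self):
        self.entries = []

    def record(self, decision: str, rationale: str, private_inputs: dict):
        digest = hashlib.sha256(
            json.dumps(private_inputs, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append({
            "when": datetime.now(timezone.utc).isoformat(),
            "decision": decision,
            "rationale": rationale,
            "private_inputs_hash": digest,  # provable, but never divulged
        })

log = DecisionLog()
log.record(
    decision="remind Alex about the unfinished task",
    rationale="task is overdue and blocking the team's shared goal",
    private_inputs={"calendar": "Alex has a dentist appointment at 3pm"},
)
print(log.entries[-1]["decision"], log.entries[-1]["private_inputs_hash"][:8])
```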
A collective purpose

Sometimes we can simplify. There are many cases where a bot should serve more than one person, but fewer than 7 billion people! Its creators and users have a clear purpose in mind, and I think that might be what comes to the rescue here.
Bots for teams, household bots for families, self-driving buses… each needs to act in the interests of many people. Each has a collective purpose, which is generally aligned with the interests of everybody in the collective (succeed at work, be comfortable at home, don’t have a car crash).
There are important things to consider in how a “collective-purpose” bot should behave. These bots will sometimes act against the interests of an individual. They might call me out for not finishing a task when I said I would. They might make me late by insisting I buy the milk this morning. Or they might skip my bus stop if it helps avoid a crash.
For these collective-purpose bots to succeed, we’ll need them to:
- Be outrageously clear about what their purpose is, and who they share that collective purpose with.
- Avoid interacting with people outside the collective. If someone’s interests aren’t aligned with the shared purpose, the bot can’t know it’s helping them.
- Be clear when they switch to serving an individual purpose (helping you directly to achieve goals that aren’t in the interests of the collective), and be even clearer when they switch back.
For example, your household bot should be clear that its purpose is to make sure everybody does the chores, not to enforce one person’s house rules. It probably shouldn’t tell off the postman for being late. It should talk to you privately and understandingly about how to make time in your crazy calendar to do the chores; but in public it should keep nagging you to do them.
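To make the household example concrete, here is a toy Python sketch of a collective-purpose bot that declares its purpose, only interacts with members of the collective, and labels each message with whose purpose it is serving. All of the names and behaviour here are illustrative assumptions, not a description of any real product.

```python
from dataclasses import dataclass, field

@dataclass
class CollectivePurposeBot:
    purpose: str                      # declared openly to everyone it serves
    members: set = field(default_factory=set)

    def can_interact_with(self, person: str) -> bool:
        """Only interact with people inside the collective."""
        return person in self.members

    def message(self, person: str, text: str, individual: bool = False) -> str:
        """Label every message with whose purpose is being served."""
        if not self.can_interact_with(person):
            return f"(stays silent: {person} is not part of the collective)"
        mode = ("serving your individual goals" if individual
                else f"serving our shared purpose: {self.purpose}")
        return f"To {person} [{mode}]: {text}"

household_bot = CollectivePurposeBot(
    purpose="make sure everybody does the chores",
    members={"Alex", "Sam"},
)
print(household_bot.message("Alex", "It's your turn to buy the milk."))
print(household_bot.message("Alex", "Shall we find a quieter slot in your calendar?",
                            individual=True))
print(household_bot.message("Postman", "You're late!"))
```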
Some basic principles like these can stop these bots from needing to pick sides. They’ll mean bots can legitimately claim to be acting in the interests of the collective.
Groups of people sharing a purpose need to be responsible for making this happen, and their collective-purpose bots need to be answerable to them. This means bots should help groups to see their collective options and how they might turn out. But ultimately, they’ll leave it to the people to make some of the hard decisions, including whose side to take.
Meet Saberr’s bot: CoachBot, the world’s first digital coach for teamwork. Learn more about it and request a demo at www.saberr.com.