Can You Trust A Robot? Let's Find Out

While Hollywood has firmly planted the idea in our minds that robots may very well turn out to be evil, academic research into dangerous interactions between humans and robots has only just begun.

When they come — and they are coming — will the robots we deploy into human culture be capable of evil? Well, perhaps "evil" is too strong a word. Will they be capable of inflicting harm on human beings in ways that go beyond their programming?

While this may seem like a question for the next installment of The Terminator franchise (or The Matrix or whatever, pick your favorite), it's a serious question in robotics and it's being taken up by researchers now.

Yes, it is a bit early to worry about robots planning to take over the world and enslave their former masters. That would require the development of artificial intelligence (AI) in machines (a milestone which always seems to be about 20 years away, no matter when the question is asked). But it is not too early to ask about the safety of robot-human collaborations, which are already happening on a small scale in areas like manufacturing and health care. And that is why a team of scientists in England has started the Trustworthy Robotic Assistant project.

The goal of the project is to understand not only whether robots can make safe moves in their interactions with humans, but also whether they can knowingly or deliberately make "unsafe moves."

Without AI, robots are, of course, just slaves to their own programming. But given the complexity of those programs, along with the requirement to interact with sometimes-unpredictable, non-artificial intelligences (i.e., you and me), "trust" in working with robots has become an operative concept. Can we safely rely on the robots we will be working with?

As the project's website puts it:

The development of robotic assistants is being held back by the lack of a coherent and credible safety framework. Consequently, robotic assistant applications are confined either to research labs or, in practice, to scenarios where physical interaction with humans is purposely limited, e.g., surveillance, transport or entertainment.

The Trustworthy Robotic Assistant research program's ultimate goal is to get robots out of these cloistered environments so that they can be put to good use out in the world — among us. To make that leap, researchers need to understand what limits both the realities and perceptions of robot behavior.

As Professor Michael Fisher of the University of Liverpool put it:

The assessment of robotic trustworthiness has many facets, from the safety analysis of robot behaviors, through physical reliability of interactions, to human perceptions of such safe operation.

It will be interesting to follow the project's progress, since its results may very well shape the fine-grained texture of our potentially robot-saturated lives a few decades in the future.

At this point it's worth reminding everyone of The Three Laws of Robotics so presciently set down by Isaac Asimov more than 70 years ago:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Let's hope so.

You can keep up with more of what Adam Frank is thinking on Facebook and on Twitter: @AdamFrank4

Copyright 2013 NPR.