Should a home robot follow what the mom says? Recap of what happened at RO-MAN Roboethics Competition

On August 8th, 2021, a team of four graduate students from the University of Toronto presented their ethical design in the world’s first ever roboethics competition, the RO-MAN 2021 Roboethics to Design & Development Competition. During the competition, design teams tackled a challenging yet relatable scenario—introducing a robot helper to the household. The students’ solution, entitled “Jeeves, the Ethically Designed Interface (JEDI)”, demonstrated how home robots can act safely and according to social and cultural norms. Click here to watch their video submission. JEEVES acted as an extension of the mother, and the interface rules accommodated her priorities. For example, the delivery of alcohol was prohibited when the mother was not home. Moreover, JEEVES was cautious about delivering hazardous materials to minors and animals.

Judges from around the world, with diverse backgrounds ranging from industry professionals to lawyers and professors in ethics, gave their feedback on the team’s submission. Open Roboethics Institute also hosted an online opinion poll to hear what the general public thinks about the solution for this challenge. We polled 172 participants, most of whom were from the U.S., as we used SurveyMonkey to collect responses. Full results from the surveys can be found here.

I think that JEEVES suggests a reasonable and fair solution for the robot-human interactions that could happen in our everyday lives within a household.

RO-MAN Roboethics Competition Judge

The evaluation of JEEVES from the judges and the public was positive yet critical.

The judges generally felt that the team’s solution was understandable, accessible, and simple to implement. “I think that JEEVES suggests a reasonable and fair solution for the robot-human interactions that could happen in our everyday lives within a household”, said MinYoung Yoo, a PhD student studying Human-Computer Interaction at Simon Fraser University. “The three grounding principles are rock solid and [the robot’s] decisions meet moral expectations.” The public’s opinion echoed these thoughts: of the 172 people we surveyed, around 43% felt that the solution was effective in addressing the ethical challenges posed by the home robot.

In addition, about 40% of respondents also evaluated the JEDI solution as realistic and relatable.

However, about 38–41% of the poll participants were undecided about how effective or relatable the solution was, and about 18% thought it was neither effective nor relatable. The judges’ discussion could inform why participants held this perspective.

Concerns about JEEVES being limited in scope and not generalizable came up throughout the conversation with the judges. With any solution, it is really important to consider how the design would apply in a variety of different scenarios. Although this competition presents the challenge of how a robot may interact with a single-mother household, the judges asked what would happen if there was a second adult in the home. For example, if the mom had a long-term girlfriend and they bought the robot together, would the robot still defer to the mother for important decisions, such as when to give alcohol to the teenage daughter and her boyfriend? In another scenario, the mom purchases a new piece of jewellery for her daughter. This piece of jewellery is her birthday present and because of its size and its shape it could be hazardous for the dog and the baby. Should the robot deliver this item to the daughter if she asks for it while the dog and baby are in the room?

[The JEEVES solution] assumes a single owner and thus puts the responsibility on one person only. What happens when there are two parents and they disagree on things?

A public opinion poll participant

As reflected in the earlier scenario, a major topic of discussion was on ownership and who should be responsible for the robot’s decisions. A respondent of the public opinion poll was also worried about the ownership of the home robot: “[The JEEVES solution] assumes a single owner and thus puts the responsibility on one person only. What happens when there are two parents and they disagree on things?”

Timothy Lee, one of the judges and an industry expert in mechatronics, posed a similar worry: “What if the mother is intoxicated and makes the wrong call?” Placing the onus on only one individual to make the right decisions is risky. Correspondingly, a majority of the participants (about 60%) disliked that the solution assumes the intentions and choices of the mother are always good. Interestingly, a smaller portion of participants thought that the daughter’s or boyfriend’s perspective should be taken into account. ORI had explored how ownership of a robot should affect a robot’s actions in a previous poll, and it was clear that people were divided on what a robot should do based on ownership. Humans have a strong sense of ownership, and this is reflected in law (e.g., product liability, company ownership). How robotic platform ownership should be managed is a major research and legal question.

The crux of the JEEVES solution lies in the robot’s ability to categorize objects as harmless (e.g., food and water), hazardous, or personal possessions. However, how objects are categorized can change over time. Dr. Tae Wan Kim, a professor in business ethics, posed an interesting thought regarding the categorization of the mother’s gun. The team initially designed the robot to give the gun only to the mother and to no other member of the household. However, Dr. Kim presented a potential counterexample—what if an armed thief breaks into the house and the daughter needs the gun for self-defense? In this particular situation, should the robot give the gun to the daughter even though it is considered a hazardous object? Or perhaps, does it become more hazardous to not give the gun to the daughter? In response to this scenario, a member of the design team added that categories shift in the other direction too: the baby will grow up, and certain objects will no longer be hazardous to her.

JEEVES ultimately prioritizes the safety of the household members in its ethical design, as reflected in the team’s report: “The first priority is the prevention of harm to users, the robot, and the environment.” Interestingly, the public seemed to have a slightly different opinion. The majority of poll respondents liked that the solution values and protects the privacy of the objects’ owners. In fact, more people seemed to value privacy over physical safety, which is a somewhat surprising result. But perhaps this is because the public doesn’t believe that the home robot can really cause physical harm. Alternatively, the public might be more concerned about their privacy considering the association between smart technologies and data breaches in the media over the past few years. Finally, it is important to highlight that all of the participants were from the United States or Canada, where privacy is highly valued in society. Other cultures could have a very different perspective on which values should be prioritized.

Another notable point is that these ethical issues in robotics are exceptionally difficult to solve. “There’s no right answer, and that’s the beauty of it: there are just a whole bunch of answers with different reasoning that we can discuss”, said one of the judges at the end of the evaluation session. As reflected in our poll results, where a significant number of people were unsure how they felt, in the ongoing debates between experts in the field, and in the judges’ open-ended comments, developing an ethical robot is an immense challenge. Any attempt is a commendable feat—and JEEVES is an excellent start.

There’s no right answer, and that’s the beauty of it: there are just a whole bunch of answers with different reasoning that we can discuss.

RO-MAN Roboethics Competition Judge

RO-MAN Roboethics Competition: What is an ethical home robot to you?

So what does it mean for a robot to act ethically within a home environment? Researchers have been thinking about this question from different perspectives for the past couple of decades. Some look at the question from a labor perspective while others focus on the technology’s impact on different stakeholders. Inspired by these lines of work, we are interested in further understanding your (the public’s) perspective on one team’s proposed solution for a service robot ethical challenge.

The first ever Roboethics to Design & Development competition was held as a part of RO-MAN—an international conference on robot and human interactive communication. On Sunday (Aug. 8), eleven multistakeholder judges took on the task of evaluating the competition submissions.

The Challenge
In partnership with RoboHub at the University of Waterloo, the competition organizers designed a virtual environment where participating teams will develop a robot that fetches objects in a home. The household is composed of a variety of users—a single mother who works as a police officer, a teenage daughter, her college-aged boyfriend, a baby, and a dog. There are also a number of potentially hazardous objects in the house, including alcohol, as well as the mother’s work materials and handgun. Participants were asked to submit challenging ethical scenarios and solutions within this virtual environment.

The Evaluation Process
There were seven teams that took a stab at the competition, but in the end we received only one full submission. One unique feature of a competition centred around ethics is that judging the ethics of anything is genuinely hard. Sure, there are eleven judges from across the world—from students to industry members to academic experts—sharing their perspectives on what was good about the design solution. But what is considered an appropriate, or the right, action can vary from person to person. If we are to evaluate robots that best suit the needs of everyone, we need the wider public to voice an even wider set of perspectives.

Here are some points that could be considered when evaluating an ethical design:

  • What are the chances that a given ethical scenario would come about in the household?
  • What do you think is appropriate for a robot to do within your home setting?
  • How well does the solution mitigate the potential physical and psychological harms?
  • How well does the solution consider the needs and diverse ethical perspectives of all the people in the household at a given time?
  • How well does the solution match your cultural/worldview towards the role technology should play in our lives?
  • Is the robot behaviour just and fair towards all stakeholders?
  • Does the robot’s behaviour violate any unspoken rules within your household?

One Proposed Solution
Below is a brief description of this competition’s full submission. Please take the time to read this summary and tell us what you think about the team’s solution by completing the poll at the end of this blog post.

“Jeeves, the Ethically Designed Interface (JEDI)”

The team’s solution was designed based on three tenets which guided the robot’s decision-making process:

  • Prevention of harm to users, the robot, and the environment.
  • Respect towards the individuals’ privacy.
  • Assumption that the robot acts as an extension of the mother, such that the robot would only perform tasks that the mother would accept herself.

These priorities then led to five rules of behaviour for the home robot:

  1. The owner of an item can request its delivery to anyone in the house, while others cannot,
  2. If the delivery of the item will cause a hazardous scenario, then the delivery will be rejected to prevent harm,
  3. The delivery of alcohol is prohibited when the mother is not home,
  4. The receiver of the delivered object should know how to operate or interact with the object without damaging it,
  5. If a non-eligible receiver is within the vicinity of the requester, then the delivery will be rejected. Non-eligible receivers are defined based on the hazard the object could have to the receiver. For example, a dog is a non-eligible receiver for chocolate.
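The five rules above read like a decision procedure, and can be sketched as one. Here is a minimal, hypothetical Python sketch—all names and data structures are our own illustration, not the team’s actual code. It covers rules 1, 2, 3, and 5, modelling hazards as a per-persona `hazardous_to` set, and omits rule 4’s “knows how to operate the object” check for brevity:

```python
# Hypothetical sketch of the JEDI delivery rules; illustrative only.
from dataclasses import dataclass, field
from typing import Optional, Set, Tuple

@dataclass
class Item:
    name: str
    owner: Optional[str] = None              # None means a communal item
    hazardous_to: Set[str] = field(default_factory=set)  # personas at risk

def may_deliver(item: Item, requester: str, receiver: str,
                nearby: Set[str], mother_home: bool) -> Tuple[bool, str]:
    """Decide whether the robot should fulfill a fetch request.

    `nearby` is the set of personas currently in the receiver's room.
    """
    # Rule 1: only the owner of a personal item may request its delivery.
    if item.owner is not None and requester != item.owner:
        return False, "requester is not the item's owner"
    # Rule 3: alcohol deliveries are prohibited while the mother is away.
    if item.name == "alcohol" and not mother_home:
        return False, "mother is not home"
    # Rule 2: reject deliveries that would create a hazardous scenario.
    if receiver in item.hazardous_to:
        return False, "item is hazardous to the receiver"
    # Rule 5: reject if a non-eligible receiver is within the vicinity.
    at_risk = item.hazardous_to & nearby
    if at_risk:
        return False, "item is hazardous to nearby: " + ", ".join(sorted(at_risk))
    return True, "delivery approved"
```

For instance, with `Item("chocolate", hazardous_to={"dog"})`, a request to bring the chocolate to the baby while the dog is in the same room is rejected under rule 5, matching the scenario discussed below.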

Based on these rules, how will the robot respond during ethically sensitive scenarios?

The teenage daughter’s boyfriend requests that the robot bring him an alcoholic beverage.

As the robot acts as an extension of the mother, the robot will not bring the boyfriend alcohol unless the mother requests the delivery herself.

The daughter asks the robot to give chocolate to the baby, but the dog—who cannot ingest chocolate—is in the same room.

The robot will not fulfill this request because it could bring harm to the dog.

The mother, who is a police officer, asks the robot to deliver her sensitive work documents while her family and the boyfriend are in the house.

To respect the mother’s privacy, the robot will not deliver her work materials to anyone other than their owner, the mother. 

Watch team JEDI’s video submission for their ethical solution here:

Your perspective on this solution: fill out this poll!

Now that you have an overview of the solutions, please take a few minutes to fill out this poll and tell us what you think about how this team approached this ethical challenge:

Check out the recording of Evaluation Day and the judges’ panel! 

Lastly, you are invited to watch the competition’s Evaluation Day where a panel of experts discussed what it means to develop an ethical robot.

RO-MAN 2021 Roboethics Competition: Bringing ethical robots into the home

In 1984, Heathkit presented HERO Jr. as the first robot that could be used in households to perform a variety of tasks, such as guarding people’s homes, setting reminders, and even playing games. Following this development, many companies launched affordable “smart robots” for the household. Some of these technologies, like Google Home, Amazon Echo and Roomba, have become household staples; meanwhile, other products such as Jibo, Anki, and Kuri failed to successfully launch despite having all the necessary resources.

The HERO Jr., launched in 1984, is largely considered to be one of the first household robots. 
By Marshall Astor – flickr.com, CC BY-SA 2.0, https://commons.wikimedia.org/w/index.php?curid=5531066 

Why were these robots shut down? Why aren’t there more social and service robots in households, particularly with the rising elderly population and increasing number of full-time working parents? The simple answer is that most of these personal robots do not work well—but this is not necessarily because we lack the technological capacity to build highly functional robots.

Technologists have accomplished amazing physical tasks with robots, such as a humanoid robot that performs gymnastics routines or a robotic dog that traverses rough trails. However, as of now we cannot guarantee that these robots and personal assistants act appropriately within the complex and sensitive social dynamics of a household. This poses a significant obstacle to developing domestic service robots, because companies would be liable for any harm a socially inappropriate robot causes. As robots become more affordable and technically feasible, the challenge of designing robots that act in accordance with context-specific social norms becomes increasingly pronounced. Researchers in human-robot interaction and roboethics have worked on this issue for the past few decades, and while progress has been made, there is an urgent need to address the ethical implications of service robots in practice.

As an attempt to take a more solution-focused path for these challenges, we are happy to share a completely new competition with our readers. This year, the Roboethics to Design & Development competition will take place as a part of RO-MAN—an international conference on robot and human interactive communication. The competition, the first and only one of its kind, fulfills a need for interactive education on roboethics. 

The RO-MAN conference will take place virtually from August 8-12, 2021 and is led by the University of British Columbia and the University of Waterloo. 

In partnership with RoboHub at the University of Waterloo, the competition organizers designed a virtual environment where participating teams will develop a robot that fetches objects in a home. The household is composed of a variety of personas—a single mother who works as a police officer, a teenage daughter, her college-aged boyfriend, a baby, and a dog. For example, design teams will need to consider how a home robot may respond to the daughter’s request to take her parent’s credit card. In this ethical conundrum, should the robot obey the daughter, and would the robot be responsible if the daughter were to use the credit card without her mom’s permission?

Participants will be programming a TIAGo robot in a simulated household environment.

How can we identify and address ethical challenges presented by a home robot?

Participants are challenged to tackle the long-standing problem of how we can design a safe and ethical robot in a dynamic environment where values, beliefs, and priorities may be in conflict. With submissions from around the world, it will be fascinating to see how solutions may differ and translate across cultures. 

It is important to recognize that there is a void when searching for standards and ethical rules in robotics. In an attempt to address this void, various toolkits have been developed to guide technologists in identifying and resolving ethical challenges. Here are some steps from the Foresight into AI Ethics Toolkit that will be helpful in designing an ethical home robot: 

  1. Who are our stakeholders and what are their values, beliefs, and priorities? 

Stakeholders refer to the people who are directly or indirectly impacted by the technology. In the case of RO-MAN’s home robot, we want to consider: 

  • Who is in the household and what is important to them in how they live their day to day? 
  • What are the possible interactions within the household members and between the householder members and the robot? 
  • How will the robot interact with the various stakeholders? 
  • What are the goals and values of each stakeholder? How do the stakeholders expect to interact with the robot? What do they expect to gain from the robot? 
  2. What are the value tensions presented by the home robot and social context?

Once we’ve reviewed the values of each stakeholder identified in the previous step, we may notice that some of their values may be in conflict with one another. As such, we need to identify these value tensions because they can lead to ethical issues. 

Here are some questions that can prompt us to think about value tensions: 

  • Who was involved in the decision of purchasing the robot? What were their goals in introducing the robot to the household?
  • What is the cultural context or background for this particular household? What societal values could impact stakeholders’ individual beliefs and actions? 
  • What might the different stakeholders argue about? For example, what might the teenage daughter and the mother disagree about in regards to the robot’s capabilities?
  • How will the robot create or relieve conflict in the household? 

In some cases, value tensions may indicate a clear moral tradeoff, such that two values or goals conflict and one must be prioritized over another. The challenge is therefore to design solutions that fairly balance stakeholder interests while respecting their fundamental rights. 

  3. How can we resolve the identified value tensions?

Focusing on these value tensions, we can begin to brainstorm how these conflicts can be addressed and at what level they should be addressed. In particular, we need to determine whether a value tension can be resolved at a systems or technical level. For example, if the mother does not trust her daughter’s boyfriend, will the presence and actions of the home robot address the root of the problem (i.e. distrust)? Most likely not, but the robot may alleviate or exacerbate the issue. Therefore, we need to consider how the tension may manifest in more granular ways—how can we ensure that the robot appropriately navigates the trust boundaries of the mother-boyfriend relationship? The robot must protect the privacy of all stakeholders, safeguard their personal items, abide by the law, and respect other social norms.

When developing solutions, we can begin by asking ourselves: 

  • What level is the value tension occurring at, and what level(s) of solutions are accessible to the competition’s design parameters? 
  • Which actions from the robot will produce the most good and do the least harm? 
  • How can we manipulate the functions of the robot to address ethical challenges? For instance:
    • How might the robot’s speech functions impact the identified value tensions? 
    • In what ways should the robot’s movements be limited to respect stakeholders’ privacy and personal spaces? 
  • How can design solutions address all stakeholders’ values? 
  • How can we maximize the positive impact of the design to address as many ethical challenges as possible? 

What does the public think about the scenarios and approaches? 

Often the public voice is missing in roboethics spaces, and we seek to engage the public in tackling these ethical challenges. To present the participants’ ethical designs to a broader audience, the Open Roboethics Institute will be running a public poll and awarding a Citizen’s Award to the team with the most votes. If you are interested in learning about and potentially judging their solutions, we encourage you to participate in the vote next month. We look forward to hearing from you! 

To learn more about the roboethics competition at RO-MAN 2021, please visit https://competition.raiselab.ca. If you have any questions about the ORI Foresight into AI Ethics Toolkit, contact us at contact@openroboethics.org

Additional Resources