Intelligent systems, especially those with an embodied construct, are becoming pervasive in our society. From chatbots to rehabilitation robotics, from shopping agents to robot tutors, people are adopting these systems into their daily life activities. Alas, this increased acceptance brings concern about the ethical ramifications of our growing dependence on these devices [1]. Studies, including our own, suggest that people tend to trust, and in some cases overtrust, the decision-making capabilities of these systems [2]. For high-risk activities, such as in healthcare, where human judgment should at times still take priority, this propensity to overtrust becomes troubling [3]. Methods should thus be designed to examine when overtrust can occur, to model the behavior for future scenarios, and, where possible, to introduce system behaviors that mitigate it. In this talk, we will discuss a number of human-robot interaction studies in which we examined this phenomenon of overtrust, including healthcare-related scenarios with vulnerable populations, specifically children with disabilities.
References
- A. Howard, J. Borenstein, “The Ugly Truth About Ourselves and Our Robot Creations: The Problem of Bias and Social Inequity,” Science and Engineering Ethics, 24(5), pp. 1521–1536, October 2018.
- A. R. Wagner, J. Borenstein, A. Howard, “Overtrust in the Robotic Age: A Contemporary Ethical Challenge,” Communications of the ACM, 61(9), September 2018.
- J. Borenstein, A. Wagner, A. Howard, “Overtrust of Pediatric Healthcare Robots: A Preliminary Survey of Parent Perspectives,” IEEE Robotics and Automation Magazine, 25(1), pp. 46–54, March 2018.
Publication: HRI ’20: Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction | March 2020 | Pages 1 | https://doi.org/10.1145/3319502.3374842