Community, craft, and the vernacular in artificially intelligent systems takes the position that everyone participating in society is an expert in their own experience of the community infrastructures that inform the makeup of robotic entities.
Though we may not be familiar with the jargon used in specialized professional contexts, we share the vernacular of who we are as people and communities, and the intimate sense that we are being learned. We understand that our data and collaboration are valuable, and that our ability to cooperate successfully with the robotic systems proliferating around us is well served by the creation of qualitatively informed systems that understand, and perhaps even share, the aims and values of the humans they work with.
Using her art practice, which interrogates a humanoid robot and seeks to create culturally specific voice-interactive entities, as a case in point, Dinkins examines how interactions between humans and robots are reshaping human-robot and human-human relationships. She ponders these ideas through the lens of race, gender, and aging, and argues that communities on the margins of tech production, code, and the institutions creating the future must work to upend, circumvent, or reinvent the algorithmic systems, including robotics, that increasingly control the world and maintain us.
Wired Magazine recently called for us, post-pandemic, to “ditch our tech-enabled tools of social distancing”. But are our telepresence robots creating emotional distancing, or are they actually improving our emotional lives? This week in our weekly “COVID-19, robots and us” discussion with experts, we look at virtual presence and emotional contact, as well as many other practical ways that robotics can make a difference in pandemic times.
Robin Murphy, Raytheon Professor at Texas A&M University and founder of the field of Rescue Robotics, was involved in the very first use of robots in a disaster scenario, at 9/11. Since then she has been involved in multiple disaster responses worldwide, including the 2014–2016 Ebola outbreak. During the US Ebola response, the White House Office of Science and Technology Policy, and later the NSF, ran a series of workshops; Murphy and Ken Goldberg were among those who participated, working with various public health officials and groups such as Doctors Without Borders.
“Some of the lessons learned about infectious diseases in general, and COVID in particular, are that there are really five big categories of uses of robots. Most everybody immediately thinks of clinical applications, or things that are directly related to the health care industry, but the role of robots is far broader than that. It’s not always about replacing doctors. It’s about how robots can assist in any way they can in this whole large, complex enterprise of a disaster.” [Robin Murphy, March 31 2020]
Ross Mead’s company Semio develops robot-operating software focused on how people will live, work, and play with robots in their everyday lives. “We’re building an operating system and app ecosystem for in-home personal robots. And all of our software relies on natural language user interfaces: just speech and body language or other social cues.”
“Right now, as it pertains to COVID-19, we are working with a team of content creators from a company called Copy to develop conversational content, similar to chatbots or voice-based skills, that is geared towards informing users about, or helping mitigate, the spread of COVID-19. We’re also developing socially aware navigation systems for mobile robots in natural human environments. I would love to talk about use cases for social robots, even telepresence robots, as well as the impacts of social isolation in these times.” [Ross Mead, March 31 2020]
Therapeutic robot seal Paro in an aged care facility.
Wendy Ju studies people’s interactions with autonomous cars, systems, and robots, focusing on nonverbal cues and the things people do to coordinate and collaborate with one another. Her PhD dissertation was on the design of implicit interactions, something a lot of us take for granted or treat as fixed rather than dynamic. Through her work on autonomous cars, she has been exploring the subtle cues that convey critical information.
“If we get these things wrong, it’s life or death. I think we’re starting to understand that a lot of the things that we think of as interaction are only the top layer of what we’re actually doing all the time with other people. And if we don’t understand those lower layers, it could kill us.”
Pedestrians and driverless car interaction
“So last week, I put together a proposal to study how people are interacting with one another in the city around the social distancing policy. And I agree the name is not perfect, but I think it also gets to the heart of what’s important to do. In this epidemic, there’s a halo effect around our social interactions, because we know they’re necessary and good for us. And so people think, well, I shouldn’t go to the grocery store or the hospital, but surely it can’t be bad to go visit my neighbor, or surely it can’t be bad to go see my grandmother. These kinds of inclinations will kill us when taken to scale.”
“When we say social distancing, we’re saying, yes, school is good, but school is bad in this situation; church is good, but church is bad in this situation. We’re really getting at the thing that we are so tempted to do, which is literally the thing we’re trying to stop right now. I think that’s why they call it social distancing, and it definitely has a physical corollary. I’m interested to see, afterwards, whether the places where people were playing basketball or soccer are places where people got more sick or not. We don’t actually know all the different mechanisms of disease transmission, and I think later on we’ll be able to figure it out.” [Wendy Ju, March 31 2020]
Cory Kidd is the CEO and founder of Catalia Health, which uses social robots for medical care management. Catalia Health ran extensive clinical trials prior to commercial roll-out and is a leader in understanding robots for medical care.
“The concept of chronic disease management is of course not new; it’s just that the usual model is very human-powered. We do it in clinical settings, in the doctor’s office and in the hospital; we send people out to homes; and a lot of the work is done by calling patients on the phone to check in on them. We replace all of those by putting an actual physical robot in the patient’s home to talk to them.
Catalia Health’s Mabu at home with Michelle Chin
“So what we’re doing on the AI side is generating conversations for patients on the fly, for whatever condition they’re dealing with, and we build these around specific conditions. The robotic piece of it is really driven by the psychology around why we would rather all be in a room together, as opposed to gathering around our computers and staring into the screen on Zoom. We intuitively get that physical presence is different.”
“When we’re face to face with someone, they’re more engaged, we create stronger relationships and a number of other things. Research showed that those differences actually carry over into the future. When you put a cute little robot in front of someone that can look them in the eyes while it’s talking to them, we actually get a lot of the effects of face to face interaction. And so we’ve leveraged that to build chronic disease care management programs. Over the last couple of years, we’ve been rolling these out largely in specialty pharmacy, so we work with some of the largest pharma manufacturers in the world, like Pfizer. We’re helping patients across a number of different conditions really keep track of how they’re doing, to stay on therapy and stay out of the hospital using our AI and robotics platform.”
“The current situation around the world is really highlighting the need for more of this kind of technology.” [Cory Kidd March 31 2020]
Medical personnel work inside one of the emergency structures that were set up to ease procedures outside the hospital of Brescia, Northern Italy, Tuesday, March 10, 2020. For most people, the new coronavirus causes only mild or moderate symptoms, such as fever and cough. For some, especially older adults and people with existing health problems, it can cause more severe illness, including pneumonia. (Claudio Furlan/LaPresse via AP)
“There’s this fundamental issue that I’ve been thinking a lot about, which is protecting the health care workers, especially now that they’re having to provide these tests for huge numbers of people. Swabbing is quite an uncomfortable and invasive process. Is there any way that we might be able to automate that at some point? I don’t think that’s going to happen anytime soon. But it’s an interesting goal, and we could move in that direction.”
“The other is intubation, which is an incredibly difficult process and very risky because a lot of droplet vaporization happens. That’s another area where it would be very helpful if the procedure could be teleoperated. Right now, the state of the art in telemedicine, and in telesurgery in particular, is not ready for the situation we’re facing now. We are nowhere near capable of doing that. And so I think this is a really important wake-up call to start developing these technologies.”
Also in the discussion, Jessica Armstrong, a mechanical engineer at SuitX and local coordinator for Open Source COVID-19 Medical Supplies, gave us updates on local PPE activities and on how community grassroots initiatives like OSCMS and Helpful Engineering have helped catalyze networks of people to sew masks and gowns, laser-cut face shields, 3D print parts for PPE and medical equipment, and develop new designs for emergency ventilators and respirators, while we are still waiting for manufacturers and the supply chain to meet demand.
Perhaps most critically, groups like OSCMS and Helpful Engineering validate and share designs for PPE so that people aren’t wasting time designing their own solutions or putting health care workers at risk with badly designed homemade PPE.
Our second weekly discussion about “COVID-19, robots and us” from March 31 is now available online and as a podcast. You can sign up to join the audience for the next episodes here.
Special guests were Robin Murphy, Raytheon Professor at Texas A&M University and founder of the field of Rescue Robotics, Ross Mead, CEO of Semio and VP of AI LA, Wendy Ju, Interaction Design Professor at Cornell, Cory Kidd, CEO of Catalia Health – maker of medical social robots, Ken Goldberg, Director of CITRIS People and Robots Initiative and Jessica Armstrong, mechanical engineer at SuitX and local coordinator for Open Source COVID-19 Medical Supplies. Moderated by Andra Keay, Managing Director of Silicon Valley Robotics, with extra help from Erin Pan, Silicon Valley Robotics, and Beau Ambur from Kickstarter.
The capabilities of the UVH-170 unmanned helicopter address many of the social (medical, pharmaceutical, remote communities, humanitarian aid, etc.) and economic (mining, oil & gas, courier, etc.) use cases requested by customers.
Singapore researchers have invented a disinfecting robot with an arm that mimics human movement, to help take the load off overworked cleaners during the coronavirus pandemic.
In research published in the Journal of Economic Psychology, scientists explore whether people trust robots as they do fellow humans. These interactions are important to understand because trust-based interactions with robots are increasingly common in the marketplace, in the workplace, on the road, and in the home. Results show that people extend trust to robots much as they do to humans, but their emotional reactions in trust-based interactions vary depending on partner type.
More and more industrial tasks are being performed by robots, but human operators are still needed for the more complex manipulation actions, such as handling and processing food products.
For all of the potential that robotics and automation have brought to humanity’s table, there are still concerns surrounding liability and robotics that will likely continue for years to come.
The second article of the GANs in computer vision series - looking deeper in generative adversarial networks, mode collapse, conditional image synthesis, and 3D object generation, paired and unpaired image to image generation.
One of the most often overlooked parameters of direct-current motors (both brushed and brushless) is Km, the motor constant. The motor constant quantifies a motor’s ability to transform electrical power into mechanical power.
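As a rough illustration (the numbers and helper function below are hypothetical, not from the article): Km is commonly computed as the torque constant Kt divided by the square root of the winding resistance R, so it expresses torque produced per square root of watt dissipated in the windings.

```python
import math

def motor_constant(kt: float, resistance: float) -> float:
    """Motor constant Km = Kt / sqrt(R).

    kt         -- torque constant in N*m/A
    resistance -- winding resistance in ohms
    Returns Km in N*m/sqrt(W): torque produced per square root
    of the resistive power dissipated in the windings.
    """
    return kt / math.sqrt(resistance)

# Example: a motor with Kt = 0.1 N*m/A and R = 4.0 ohm
km = motor_constant(0.1, 4.0)  # -> 0.05 N*m/sqrt(W)
```

Because Km depends only on the winding geometry and magnetics, not on the supply voltage, it is a convenient figure of merit for comparing motors of different voltage ratings.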
In this episode, Lilly interviews Kajal Gada on her work at BrainCorp, the San Diego-based company behind BrainOS, a technology stack for autonomous solutions. Kajal discusses BrainCorp’s cloud-connected operating system and their floor cleaning, vacuuming, and warehouse delivery robots. She also articulates some of the challenges in becoming a software engineer and developing commercial solutions.
Kajal Gada
Kajal Gada is a software engineer at BrainCorp. She works on developing algorithms for realizing robotic operations in the real world. Kajal received her Masters in Robotics from University of Maryland – College Park, and also has an MBA in Technology Management from NMIMS, Mumbai. She is an advocate for women in the workplace and has spoken at GHC in 2018. In her free time, she loves to read mystery novels and work on puzzles.
Checking for potential issues during production allows manufacturers to scrap or rework unacceptable parts at the beginning of a run, and correct issues before a lot of parts are produced – this saves a significant amount of time and expense.
By Tim Sullivan, Spaulding Rehabilitation Network Communications
Many of us aren’t spending much time outside lately, but there are still many obstacles for us to navigate as we walk around: the edge of the coffee table, small children, the family dog. How do our brains adjust to changes in our walking strides? Researchers at the Wyss Institute for Biologically Inspired Engineering at Harvard University and the Motion Analysis Laboratory at Spaulding Rehabilitation Hospital used robots to try to answer that question, and discovered that mechanisms in both the cerebellum and the spinal cord determine how the nervous system responds to robot-induced changes in step length. The new study is published in the latest issue of Scientific Reports, and points the way toward improving robot-based physical rehabilitation programs for patients.
Using a robot to disrupt the gait cycle of participants, researchers discovered that feedforward mechanisms controlled by the cerebellum and feedback mechanisms controlled at the spinal level determine how the nervous system responds to robot-induced changes in step length. Credit: Wyss Institute at Harvard University
“Our understanding of the neural mechanisms underlying locomotor adaptation is still limited. Specifically, how behavioral, functional, and physiological processes work in concert to achieve adaptation during locomotion has remained elusive to date,” said Paolo Bonato, Ph.D., an Associate Faculty member of the Wyss Institute and Director of the Spaulding Motion Analysis Lab who led the study. “Our goal is to create a better understanding of this process and hence develop more effective clinical interventions.”
For the study, the team used a robot to induce two opposite unilateral mechanical perturbations to human subjects as they were walking that affected their step length over multiple gait cycles. Electrical signals recorded from muscles were collected and analyzed to determine how muscle synergies (the activation of a group of muscles to create a specific movement) change in response to perturbation. The results revealed a combination of feedforward control signals coming from the cerebellum and feedback-driven control signals arising in the spinal cord during adaptation. The relative side-specific contributions of the two processes to motor-output adjustments, however, depended on which type of perturbation was delivered. Overall, the observations provide evidence that, in humans, both descending and afferent drives project onto the same spinal interneuronal networks that encode locomotor muscle synergies.
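The article does not name the decomposition method, but muscle synergies are conventionally estimated from rectified, smoothed EMG envelopes with non-negative matrix factorization (NMF). A minimal sketch assuming that approach (the function name and data shapes here are illustrative, not from the study), using the classic Lee–Seung multiplicative updates:

```python
import numpy as np

def extract_synergies(emg, n_synergies, n_iter=200, seed=0):
    """Factor a non-negative EMG envelope matrix (muscles x samples)
    into synergy weights W (muscles x synergies) and activation
    coefficients H (synergies x samples) using Lee-Seung
    multiplicative updates, which keep both factors non-negative."""
    rng = np.random.default_rng(seed)
    n_muscles, n_samples = emg.shape
    W = rng.random((n_muscles, n_synergies))
    H = rng.random((n_synergies, n_samples))
    eps = 1e-9  # guard against division by zero
    for _ in range(n_iter):
        H *= (W.T @ emg) / (W.T @ W @ H + eps)
        W *= (emg @ H.T) / (W @ H @ H.T + eps)
    return W, H
```

Perturbation-induced changes in the activation time courses H, relative to baseline gait cycles, would then be the quantity compared when attributing adjustments to feedforward versus feedback control.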
Researchers study how our brains adjust to changes in our walking strides, gaining insights that could be used to develop better physical rehabilitation programs. Credit: Wyss Institute.
These results mirror previous observations from animal studies, strongly suggesting the presence of a defined population of spinal interneurons regulating muscle coordination that can be accessed by both cortical and afferent drives in humans. “Our team hopes to build on this work to develop new approaches to the design of robot-assisted gait rehabilitation procedures targeting specific descending- and afferent-driven responses in muscle synergies in the coming year,” said Bonato.
Intelligent systems, especially those with an embodied construct, are becoming pervasive in our society. From chatbots to rehabilitation robotics, from shopping agents to robot tutors, people are adopting these systems into their daily life activities. Alas, associated with this increased acceptance is a concern about the ethical ramifications as we become more dependent on these devices [1]. Studies, including our own, suggest that people tend to trust, and in some cases overtrust, the decision-making capabilities of these systems [2]. For high-risk activities, such as in healthcare, where human judgment should still have priority at times, this propensity to overtrust becomes troubling [3]. Methods should thus be designed to examine when overtrust can occur, model the behavior for future scenarios, and, if possible, introduce system behaviors to mitigate it. In this talk, we will discuss a number of human-robot interaction studies in which we examined this phenomenon of overtrust, including healthcare-related scenarios with vulnerable populations, specifically children with disabilities.
References
A. Howard, J. Borenstein, “The Ugly Truth About Ourselves and Our Robot Creations: The Problem of Bias and Social Inequity,” Science and Engineering Ethics, 24(5), pp. 1521–1536, October 2018.
A. R. Wagner, J. Borenstein, A. Howard, “Overtrust in the Robotic Age: A Contemporary Ethical Challenge,” Communications of the ACM, 61(9), Sept. 2018.
J. Borenstein, A. Wagner, A. Howard, “Overtrust of Pediatric Healthcare Robots: A Preliminary Survey of Parent Perspectives,” IEEE Robotics and Automation Magazine, 25(1), pp. 46–54, March 2018.