Archive 13.04.2020


#307: Commercializing Robot Brains, with Kajal Gada

In this episode, Lilly interviews Kajal Gada on her work at BrainCorp, the San Diego-based company behind BrainOS, a technology stack for autonomous solutions. Kajal discusses BrainCorp’s cloud-connected operating system and their floor cleaning, vacuuming, and warehouse delivery robots. She also articulates some of the challenges in becoming a software engineer and developing commercial solutions.

Kajal Gada

Kajal Gada is a software engineer at BrainCorp, where she develops algorithms for bringing robotic operations into the real world. Kajal received her Master's in Robotics from the University of Maryland – College Park, and also holds an MBA in Technology Management from NMIMS, Mumbai. She is an advocate for women in the workplace and spoke at GHC in 2018. In her free time, she loves to read mystery novels and work on puzzles.

Links


New study uses robots to uncover the connections between the human mind and walking control

Using a robot to disrupt the gait cycle of participants, researchers discovered that feedforward mechanisms controlled by the cerebellum and feedback mechanisms controlled at the spinal level determine how the nervous system responds to robot-induced changes in step length. Credit: Wyss Institute at Harvard University

By Tim Sullivan, Spaulding Rehabilitation Network Communications

Many of us aren’t spending much time outside lately, but there are still many obstacles for us to navigate as we walk around: the edge of the coffee table, small children, the family dog. How do our brains adjust to changes in our walking strides? Researchers at the Wyss Institute for Biologically Inspired Engineering at Harvard University and the Motion Analysis Laboratory at Spaulding Rehabilitation Hospital used robots to try to answer that question, and discovered that mechanisms in both the cerebellum and the spinal cord determine how the nervous system responds to robot-induced changes in step length. The new study is published in the latest issue of Scientific Reports, and points the way toward improving robot-based physical rehabilitation programs for patients.


“Our understanding of the neural mechanisms underlying locomotor adaptation is still limited. Specifically, how behavioral, functional, and physiological processes work in concert to achieve adaptation during locomotion has remained elusive to date,” said Paolo Bonato, Ph.D., an Associate Faculty member of the Wyss Institute and Director of the Spaulding Motion Analysis Lab who led the study. “Our goal is to create a better understanding of this process and hence develop more effective clinical interventions.”

For the study, the team used a robot to induce two opposite unilateral mechanical perturbations to human subjects as they were walking that affected their step length over multiple gait cycles. Electrical signals recorded from muscles were collected and analyzed to determine how muscle synergies (the activation of a group of muscles to create a specific movement) change in response to perturbation. The results revealed a combination of feedforward control signals coming from the cerebellum and feedback-driven control signals arising in the spinal cord during adaptation. The relative side-specific contributions of the two processes to motor-output adjustments, however, depended on which type of perturbation was delivered. Overall, the observations provide evidence that, in humans, both descending and afferent drives project onto the same spinal interneuronal networks that encode locomotor muscle synergies.
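Muscle synergies of the kind described above are commonly extracted by non-negative matrix factorization (NMF) of rectified, low-pass-filtered EMG envelopes. The study's exact pipeline is not given here, so the following is an illustrative sketch only; the function name, update scheme, and toy data are all hypothetical.

```python
import numpy as np

def extract_synergies(emg, n_synergies, n_iter=500, seed=0):
    """Factor a non-negative EMG envelope matrix (time x muscles) as
    emg ~= W @ H, where rows of H are muscle synergies (weightings
    across muscles) and columns of W are their activation time
    courses. Uses classic multiplicative-update NMF."""
    rng = np.random.default_rng(seed)
    n_t, n_m = emg.shape
    W = rng.random((n_t, n_synergies)) + 1e-6
    H = rng.random((n_synergies, n_m)) + 1e-6
    for _ in range(n_iter):
        # Multiplicative updates keep W and H non-negative throughout.
        H *= (W.T @ emg) / (W.T @ W @ H + 1e-12)
        W *= (emg @ H.T) / (W @ H @ H.T + 1e-12)
    return W, H

# Toy check: mix 2 known synergies across 8 muscles over 200 samples,
# then recover a rank-2 factorization from the mixture.
rng = np.random.default_rng(1)
emg = rng.random((200, 2)) @ rng.random((2, 8))
W, H = extract_synergies(emg, n_synergies=2)
rel_err = np.linalg.norm(emg - W @ H) / np.linalg.norm(emg)
```

In a perturbation study like the one above, one would compare how the recovered synergy weightings and activation profiles change across baseline, perturbation, and washout phases of walking.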

Researchers study how our brains adjust to changes in our walking strides, gaining insights that could be used to develop better physical rehabilitation programs. Credit: Wyss Institute.

These results mirror previous observations from animal studies, strongly suggesting the presence of a defined population of spinal interneurons regulating muscle coordination that can be accessed by both cortical and afferent drives in humans. “Our team hopes to build on this work to develop new approaches to the design of robot-assisted gait rehabilitation procedures targeting specific descending- and afferent-driven responses in muscle synergies in the coming year,” said Bonato.

HRI 2020 Keynote: Ayanna Howard

Intelligent systems, especially those with an embodied construct, are becoming pervasive in our society. From chatbots to rehabilitation robotics, from shopping agents to robot tutors, people are adopting these systems into their daily life activities. Alas, associated with this increased acceptance is a concern with the ethical ramifications as we start becoming more dependent on these devices [1]. Studies, including our own, suggest that people tend to trust, in some cases overtrusting, the decision-making capabilities of these systems [2]. For high-risk activities, such as in healthcare, when human judgment should still have priority at times, this propensity to overtrust becomes troubling [3]. Methods should thus be designed to examine when overtrust can occur, to model the behavior for future scenarios, and, if possible, to introduce system behaviors that mitigate it. In this talk, we will discuss a number of human-robot interaction studies conducted where we examined this phenomenon of overtrust, including healthcare-related scenarios with vulnerable populations, specifically children with disabilities.

References

  1. A. Howard, J. Borenstein, “The Ugly Truth About Ourselves and Our Robot Creations: The Problem of Bias and Social Inequity,” Science and Engineering Ethics, 24(5), pp. 1521–1536, October 2018.
  2. A. R. Wagner, J. Borenstein, A. Howard, “Overtrust in the Robotic Age: A Contemporary Ethical Challenge,” Communications of the ACM, 61(9), Sept. 2018.
  3. J. Borenstein, A. Wagner, A. Howard, “Overtrust of Pediatric Healthcare Robots: A Preliminary Survey of Parent Perspectives,” IEEE Robotics and Automation Magazine, 25(1), pp. 46–54, March 2018.

Publication: HRI ’20: Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction | March 2020 | Pages 1 | https://doi.org/10.1145/3319502.3374842

 


COVID-19 robotics resources: ideas for roboticists, users, and educators

Thessaloniki, Greece – April 6, 2020: Drone with recorded message informs citizens of Thessaloniki to stay home to be protected from the coronavirus.

Robots could have a role to play in the COVID-19 response, whether it’s automating laboratory research, helping with logistics, disinfecting hospitals, supporting education, or allowing carers, colleagues, or loved ones to connect using telepresence. Yet many of these solutions are still in development or early deployment. The hope is that accelerating their translation into practice could make a difference.

This page aims to compile some resources for roboticists who are able to help, users who need robots for COVID-19 applications, and people who want to learn about robotics while on lockdown.


This is not an exhaustive resource page, and we will regularly be updating the content. Please send pointers to sabine.hauert@robohub.org. Check AIhub.org for a similar resource page related to AI.

HRI 2020 Online Day One

HRI 2020 had already kicked off with workshops and the Industry Talks Session on April 3; however, the first release of videos has only just gone online, with the welcome from General Chairs Tony Belpaeme (ID Lab, University of Ghent) and James Young (University of Manitoba).

https://youtu.be/Fkg3YvA5n5o

There is also a welcome from Program Chairs Hatice Gunes from the University of Cambridge and Laurel Riek from UC San Diego, requesting that we all engage with the participants’ papers and videos.

https://youtu.be/_74udxMmGJw

The theme of this year’s conference is “Real World Human-Robot Interaction,” reflecting on recent trends in our community toward creating and deploying systems that can facilitate real-world, long-term interaction. This theme also reflects a new theme area we have introduced at HRI this year, “Reproducibility for Human Robot Interaction,” which is key to realizing this vision and helping further our scientific endeavors. This trend was also reflected across our other four theme areas, including “Human-Robot Interaction User Studies,” “Technical Advances in Human-Robot Interaction,” “Human-Robot Interaction Design,” and “Theory and Methods in Human-Robot Interaction.”

The conference attracted 279 full paper submissions from around the world, including Asia, Australia, the Middle East, North America, South America, and Europe. Each submission was overseen by a dedicated theme chair and reviewed by an expert group of program committee members, who worked together with the program chairs to define and apply review criteria appropriate to each of the five contribution types. All papers went through a strict double-blind review process, followed by a rebuttal period and, where the program committee deemed it appropriate, shepherding. Ultimately the committee selected 66 papers (23.6%) for presentation as full papers at the conference. As the conference is jointly sponsored by ACM and IEEE, papers are archived in the ACM Digital Library and IEEE Xplore.

Along with the full papers, the conference program and proceedings include Late Breaking Reports, Videos, Demos, a Student Design Competition, and an alt.HRI section. Out of 183 total submissions, 161 (88%) Late Breaking Reports (LBRs) were accepted and will be presented as posters at the conference. A full peer-review and meta-review process ensured that authors of LBR submissions received detailed feedback on their work. Nine short videos were accepted for presentation during a dedicated video session. The program also includes 12 demos of robot systems that participants will have an opportunity to interact with during the conference. We continue to include an alt.HRI session in this year’s program, consisting of 8 papers (selected out of 43 submissions, 19%) that push the boundaries of thought and practice in the field. We are also continuing the Student Design Competition with 11 contenders, to encourage student participation in the conference and enrich the program with design inspiration and insights developed by student teams. The conference will include 6 full-day and 6 half-day workshops on a wide array of topics, in addition to the selective Pioneers Workshop for burgeoning HRI students.

Keynote speakers will reflect the interdisciplinary nature and vigour of our community: Ayanna Howard, the Linda J. and Mark C. Smith Professor and Chair of the School of Interactive Computing at the Georgia Institute of Technology, speaking on ‘Are We Trusting AI Too Much? Examining Human-Robot Interactions in the Real World’; Stephanie Dinkins, a transmedia artist who creates platforms for dialog about artificial intelligence (AI) as it intersects race, gender, aging, and our future histories; and Dr Lola Canamero, Reader in Adaptive Systems and Head of the Embodied Emotion, Cognition and (Inter-)Action Lab in the School of Computer Science at the University of Hertfordshire in the UK, speaking on ‘Embodied Affect for Real-World HRI’.

The Industry Talks Session was held on April 3, and we are particularly grateful to the sponsors who stayed with HRI 2020 as it transitioned to a virtual format. Karl Fezer from ARM, Chris Roberts from Cambridge Consultants, Ker-Jiun Wang from EXGWear, and Tony Belpaeme from IDLab at University of Ghent were able to join me for the first Industry Talks Session at HRI 2020 – a very insightful discussion!

The HRI2020 proceedings are available from the ACM digital library.

Full papers:
https://dl.acm.org/doi/proceedings/10.1145/3319502

Companion Proceedings (alt.HRI, Demonstrations, Late-Breaking Reports, Pioneers Workshop, Student Design Competitions, Video Presentations, Workshop Summaries):
https://dl.acm.org/doi/proceedings/10.1145/3371382
