Planetary exploration rover avoids sand traps with ‘rear rotator pedaling’
Flexible Conveyor Manufacturer Glide-Line Overcomes Space, Size, & Product Handling Constraints for Aerospace Industry Integrator
From SLAM to Spatial AI
You can watch this seminar here at 1PM EDT (10AM PDT) on May 15th.
Andrew Davison (Imperial College London)
Abstract: To enable the next generation of smart robots and devices which can truly interact with their environments, Simultaneous Localisation and Mapping (SLAM) will progressively develop into a general real-time geometric and semantic 'Spatial AI' perception capability. I will give many examples from our work on gradually increasing visual SLAM capability over the years. However, much research must still be done to achieve true Spatial AI performance. A key issue is how estimation and machine learning components can be used and trained together as we continue to search for the best long-term scene representations to enable intelligent interaction. Further, to enable the performance and efficiency required by real products, computer vision algorithms must be developed together with the sensors and processors which form full systems, and I will cover research on vision algorithms for non-standard visual sensors and graph-based computing architectures.
Biography: Andrew Davison is Professor of Robot Vision and Director of the Dyson Robotics Laboratory at Imperial College London. His long-term research focus is on SLAM (Simultaneous Localisation and Mapping) and its evolution towards general 'Spatial AI': computer vision algorithms which enable robots and other artificial devices to map, localise within and ultimately understand and interact with the 3D spaces around them. With his research group and collaborators he has consistently developed and demonstrated breakthrough systems, including MonoSLAM, KinectFusion, SLAM++ and CodeSLAM, and recent prizes include Best Paper at ECCV 2016 and Best Paper Honourable Mention at CVPR 2018. He has also had strong involvement in taking this technology into real applications, in particular through his work with Dyson on the design of the visual mapping system inside the Dyson 360 Eye robot vacuum cleaner and as co-founder of applied SLAM start-up SLAMcore. He was elected Fellow of the Royal Academy of Engineering in 2017.
Robotics Today Seminars
“Robotics Today – A series of technical talks” is a virtual robotics seminar series. The goal of the series is to bring the robotics community together during these challenging times. The seminars are scheduled on Fridays at 1PM EDT (10AM PDT) and are open to the public. The format of the seminar is a technical talk, live captioned and streamed via the web and Twitter (@RoboticsSeminar), followed by an interactive discussion between the speaker and a panel of faculty, postdocs, and students who moderate audience questions.
Stay up to date with upcoming seminars with the Robotics Today Google Calendar (or download the .ics file) and view past seminars on the Robotics Today YouTube channel. And follow us on Twitter!
Upcoming Seminars
Seminars will be broadcast at 1PM EDT (10AM PDT) here.
22 May 2020: Leslie Kaelbling (MIT)
29 May 2020: Allison Okamura (Stanford)
12 June 2020: Anca Dragan (UC Berkeley)
Past Seminars
We’ll post links to the recorded seminars soon!
Organizers
Contact
Velodyne Lidar Sensors Power 3D Data Capture in New NavVis VLX Mapping System
Business Perspectives and the Impacts of COVID-19 – Q&A with BitFlow Inc.
Soft robotic exosuit makes stroke survivors walk faster and farther
Researchers develop real-time physics engine for soft robotics
‘Data clouds fusion’ helps robots work as a team in hazardous situations
#309: Learning to Grasp, with Jeannette Bohg
In this episode, Lilly Clark interviews Jeannette Bohg, Assistant Professor at Stanford, about her work in interactive perception and robot learning for grasping and manipulation tasks. Bohg discusses how robots and humans are different, the challenge of high dimensional data, and unsolved problems including continuous learning and decentralized manipulation.
Jeannette Bohg is an Assistant Professor of Computer Science at Stanford University. She was a group leader at MPI until September 2017 and remains affiliated as a guest researcher. Her research focuses on perception for autonomous robotic manipulation and grasping. She is specifically interested in developing methods that are goal-directed, real-time and multi-modal such that they can provide meaningful feedback for execution and learning.
Before joining the Autonomous Motion lab in January 2012, Jeannette Bohg was a PhD student at the Computer Vision and Active Perception lab (CVAP) at KTH in Stockholm. She wrote her thesis on multi-modal scene understanding for robotic grasping under the supervision of Prof. Danica Kragic. She studied at Chalmers in Gothenburg and at the Technical University in Dresden, where she received her Master's in Art and Technology and her Diploma in Computer Science, respectively.
Links
Audience Choice HRI 2020 Demo
Welcome to the voting for the Audience Choice Demo from HRI 2020. Each of these demos showcases an aspect of Human-Robot Interaction research, and alongside the “Best Demo” award, we’re offering an “Audience Choice” award. You can see the video and abstract from each demo here, with voting at the bottom. One vote per person. The deadline is May 14 at 11:59 PM BST. You can also register for the online HRI 2020 Demo Discussion and Award Presentation on May 21 at 4:00 PM BST.
1. Demonstration of A Social Robot for Control of Remote Autonomous Systems José Lopes, David A. Robb, Xingkun Liu, Helen Hastie
Abstract: There are many challenges when it comes to deploying robots remotely, including a lack of situation awareness for the operator, which can lead to decreased trust and lack of adoption. For this demonstration, delegates interact with a social robot who acts as a facilitator and mediator between them and the remote robots running a mission in a realistic simulator. We will demonstrate how such a robot can use spoken interaction and social cues to facilitate teaming between itself, the operator and the remote robots.
2. Demonstrating MoveAE: Modifying Affective Robot Movements Using Classifying Variational Autoencoders Michael Suguitan, Randy Gomez, Guy Hoffman
Abstract: We developed a method for modifying emotive robot movements with a reduced dependency on domain knowledge by using neural networks. We use hand-crafted movements for a Blossom robot and a classifying variational autoencoder to adjust affective movement features by using simple arithmetic in the network’s learned latent embedding space. We will demonstrate the workflow of using a graphical interface to modify the valence and arousal of movements. Participants will be able to use the interface themselves and watch Blossom perform the modified movements in real time.
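The latent-space editing idea in this abstract can be illustrated with a short sketch. The snippet below is not the authors’ MoveAE code; it is a minimal, hypothetical example (placeholder encode/decode functions and invented latent codes) showing how a movement might be nudged along a “valence direction” estimated from the centroids of labelled latent codes.

```python
# Minimal, hypothetical sketch of latent-space arithmetic for affective
# movement editing. The encoder/decoder are placeholders standing in for a
# trained classifying variational autoencoder; all data here is invented.
import numpy as np

rng = np.random.default_rng(0)
LATENT_DIM = 8

def encode(movement: np.ndarray) -> np.ndarray:
    """Placeholder encoder: movement frames -> latent vector."""
    return movement.mean(axis=0)[:LATENT_DIM]

def decode(z: np.ndarray, n_frames: int = 50) -> np.ndarray:
    """Placeholder decoder: latent vector -> movement frames."""
    return np.tile(z, (n_frames, 1))

# Latent codes of movements previously labelled high- and low-valence.
happy_z = rng.normal(0.5, 0.1, size=(20, LATENT_DIM))
sad_z = rng.normal(-0.5, 0.1, size=(20, LATENT_DIM))

# A "valence direction": the difference between the two class centroids.
valence_dir = happy_z.mean(axis=0) - sad_z.mean(axis=0)

# Edit an existing movement: encode, shift along the direction, decode.
movement = rng.normal(size=(50, LATENT_DIM))   # placeholder joint trajectories
z = encode(movement)
z_happier = z + 0.5 * valence_dir              # slider-style valence adjustment
edited_movement = decode(z_happier)
print(edited_movement.shape)                   # (50, 8)
```

In the demo itself, the scaling factor would be driven by the graphical valence/arousal sliders and the decoded trajectory would be played back on the Blossom robot.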
3. An Application of Low-Cost Digital Manufacturing to HRI Lavindra de Silva, Gregory Hawkridge, German Terrazas, Marco Perez Hernandez, Alan Thorne, Duncan McFarlane, Yedige Tlegenov
Abstract: Digital Manufacturing (DM) broadly refers to applying digital information to enhance manufacturing processes, supply chains, products and services. In past work we proposed a low-cost DM architecture, supporting flexible integration of legacy robots. Here we discuss a demo of our architecture using an HRI scenario.
4. Comedy by Jon the Robot John Vilk, Naomi T. Fitter
Abstract: Social robots might be more effective if they could adapt in playful, comedy-inspired ways based on social cues heard from users. Jon the Robot, a robotic stand-up comedian from the Oregon State University CoRIS Institute, showcases how this type of ability can lead to more enjoyable interactions with robots. We believe conference attendees will be both entertained and informed by this novel demonstration of social robotics.
5. CardBot: Towards an affordable humanoid robot platform for Wizard of Oz Studies in HRI Sooraj Krishna, Catherine Pelachaud
Abstract: CardBot is a cardboard-based programmable humanoid robot platform designed for inexpensive and rapid prototyping of Wizard of Oz interactions in HRI, incorporating technologies such as Arduino, Android and Unity3d. The table demonstration showcases the design of the CardBot and its wizard controls, such as animating movements and coordinating speech and gaze, for orchestrating an interaction.
6. Towards Shoestring Solutions for UK Manufacturing SMEs Gregory Hawkridge, Benjamin Schönfuß, Duncan McFarlane, Lavindra de Silva, German Terrazas, Liz Salter, Alan Thorne
Abstract: In the Digital Manufacturing on a Shoestring project we focus on low-cost digital solution requirements for UK manufacturing SMEs. This paper shows that many of these requirements fall in the HRI domain and presents the use of low-cost, off-the-shelf technologies in two demonstrators based on voice-assisted production.
7. PlantBot: A social robot prototype to help with behavioral activation in young people with minor depression Max Jan Meijer, Maaike Dokter, Christiaan Boersma, Ashwin Sadananda Bhat, Ernst Bohlmeijer, Jamy Li
Abstract: The PlantBot is a home device that shows iconographic or simple lights to depict actions that it requests a young person (its user) to do as part of Behavioral Activation therapy. In this initial prototype, a separate conversational speech agent (i.e., Amazon Alexa) is wizarded to act as a second system the user can interact with.
8. TapeBot: The Modular Robotic Kit for Creating the Environments Sonya S. Kwak, Dahyun Kang, Hanbyeol Lee, JongSuk Choi
Abstract: Various types of modular robotic kits, such as LEGO Mindstorms [1], the edutainment robot kit by ROBOTIS [2], and the interactive face components of FacePartBot [3], have been developed to foster children’s creativity and to help them learn robotic technologies. By adopting a modular design scheme, these robotic kits enable children to design various robotic characters with plenty of flexibility and creativity, such as humanoids, robotic animals, and robotic faces. However, because a robot is an artifact that perceives an environment and responds to it accordingly, it can also be characterized by the environment it encounters. Thus, in this study, we propose a modular robotic kit aimed at creating an interactive environment to which a robot produces various responses.
We chose intelligent tapes to build the environment for the following reasons. First, we presume that lowering consumers’ expectations of a robotic product’s functionality may increase their acceptance of the product, because it reduces the mismatch between the functions expected from its appearance and its actual functions [4]. We believe that tape, a material found in everyday life, is well suited to lowering consumers’ expectations of the product and will therefore help with its acceptance. Second, tape is a familiar and enjoyable material for children, and it can be used as a flexible module that users can cut to whatever size they want and attach and detach with ease.
In this study, we developed a modular robotic kit for creating an interactive environment, called the TapeBot. The TapeBot is composed of the main character robot and the modular environments, which are the intelligent tapes. Although previous robotic kits focused on building a robot, the TapeBot allows its users to focus on the environment that the robot encounters. By reversing the frame of thinking, we expect that the TapeBot will promote children’s imagination and creativity by letting them develop creative environments to design the interactions of the main character robot.
9. A Gesture Control System for Drones used with Special Operations Forces Marius Montebaur, Mathias Wilhelm, Axel Hessler, Sahin Albayrak
Abstract: Special Operations Forces (SOF) face extreme risks when prosecuting crimes in uncharted environments such as buildings. Autonomous drones could potentially save officers’ lives by assisting in those exploration tasks, but an intuitive and reliable way of communicating with autonomous systems is yet to be established. This paper proposes a set of gestures designed to be used by SOF during operations to interact with autonomous systems.
10. CoWriting Kazakh: Learning a New Script with a Robot – Demonstration Bolat Tleubayev, Zhanel Zhexenova, Thibault Asselborn, Wafa Johal, Pierre Dillenbourg, Anara Sandygulova
Abstract: This interdisciplinary project aims to assess and manage the risks relating to the transition of the Kazakh language from Cyrillic to Latin script in Kazakhstan, in order to address the challenges of a) teaching and motivating children to learn a new script and its associated handwriting, and b) training and providing support for all demographic groups, in particular the senior generation. We present a system demonstration that proposes to assist and motivate children to learn a new script with the help of a humanoid robot and a tablet with stylus.
11. Voice Puppetry: Towards Conversational HRI WoZ Experiments with Synthesised Voices Matthew P. Aylett, Yolanda Vazquez-Alvarez
Abstract: In order to research conversational factors in robot design, Wizard of Oz (WoZ) experiments, where an experimenter plays the part of the robot, are common. However, for conversational systems using a synthetic voice, it is extremely difficult for the experimenter to choose open-domain content and enter it quickly enough to retain conversational flow. In this demonstration we show how voice puppetry can be used to control a neural TTS system in almost real time. The demo hopes to explore the limitations and possibilities of such a system for controlling a robot’s synthetic voice in conversational interaction.
12. Teleport – Variable Autonomy across Platforms Daniel Camilleri, Michael Szollosy, Tony Prescott
Abstract: Robotics is a very diverse field with robots of different sizes and sensory configurations created with the purpose of carrying out different tasks. Different robots and platforms each require their own software ecosystem and are coded with specific algorithms which are difficult to translate to other robots.
CAST YOUR VOTE FOR “AUDIENCE CHOICE”
VOTING CLOSES ON THURSDAY MAY 14 AT 11:59 PM BST [British Summer Time]