Archive 23.07.2020


OmniTact: a multi-directional high-resolution touch sensor

Human thumb next to our OmniTact sensor, and a US penny for scale.

By Akhil Padmanabha and Frederik Ebert

Touch has been shown to be important for dexterous manipulation in robotics. Recently, the GelSight sensor has attracted significant interest in learning-based robotics due to its low cost and rich signal. For example, GelSight sensors have been used for learning to insert USB cables (Li et al. 2014), roll a die (Tian et al. 2019), and grasp objects (Calandra et al. 2017).

Learning-based methods work well with GelSight sensors because the sensors output high-resolution tactile images from which a variety of features, such as object geometry, surface texture, and normal and shear forces, can be estimated, and these features often prove critical to robotic control. The tactile images can be fed into standard CNN-based computer vision pipelines, allowing the use of a variety of different learning-based techniques: in Calandra et al. 2017, a grasp-success classifier is trained on GelSight data collected in a self-supervised manner; in Tian et al. 2019, Visual Foresight, a video-prediction-based control algorithm, is used to make a robot roll a die purely based on tactile images; and in Lambeta et al. 2020, a model-based RL algorithm is applied to in-hand manipulation using GelSight images.
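
As a concrete, purely illustrative example of such a pipeline, the sketch below shows how a tactile image could be fed to a small CNN to predict grasp success, in the spirit of Calandra et al. 2017. The architecture, image size, and training data here are assumptions for illustration, not the published model.

```python
import torch
import torch.nn as nn

class TactileGraspClassifier(nn.Module):
    """Toy CNN mapping a 64x64 RGB tactile image to a grasp-success probability."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(64 * 8 * 8, 1))

    def forward(self, x):
        return torch.sigmoid(self.head(self.features(x)))

# One training step on a batch of (tactile image, grasp outcome) pairs,
# e.g. collected self-supervised by recording whether each grasp succeeded.
model = TactileGraspClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
images = torch.rand(8, 3, 64, 64)             # placeholder tactile images
labels = torch.randint(0, 2, (8, 1)).float()  # 1 = grasp succeeded
loss = nn.functional.binary_cross_entropy(model(images), labels)
loss.backward()
optimizer.step()
```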

Unfortunately, applying GelSight sensors in practical real-world scenarios remains challenging due to their large size and the fact that they are sensitive on only one side. Here we introduce a new, more compact tactile sensor design based on GelSight that allows for omnidirectional sensing, i.e. making the sensor sensitive on all sides like a human finger, and we show how this opens up new possibilities for sensorimotor learning. We demonstrate this by teaching a robot to pick up electrical plugs and insert them purely based on tactile feedback.

GelSight Sensors

A standard GelSight sensor, shown in the figure below on the left, uses an off-the-shelf webcam to capture high-resolution images of deformations on the silicone gel skin. The inside surface of the gel skin is illuminated with colored LEDs, providing sufficient lighting for the tactile image.


Comparison of GelSight-style sensor (left side) to our OmniTact sensor (right side).

Existing GelSight designs are either flat, have small sensitive fields, or provide only low-resolution signals. For example, prior versions of the GelSight sensor provide high-resolution (400×400 pixel) images but are large and flat, providing sensitivity on only one side, while the commercial OptoForce sensor (recently discontinued by OnRobot) is curved but provides only a single 3-dimensional force vector.

The OmniTact Sensor

Our OmniTact sensor design aims to address these limitations. It provides both multi-directional and high-resolution sensing on its curved surface in a compact form factor. Similar to GelSight, OmniTact uses cameras embedded into a silicone gel skin to capture deformation of the skin, providing a rich signal from which a wide range of features such as shear and normal forces, object pose, geometry and material properties can be inferred. OmniTact uses multiple cameras giving it both high-resolution and multi-directional capabilities. The sensor itself can be used as a “finger” and can be integrated into a gripper or robotic hand. It is more compact than previous GelSight sensors, which is accomplished by utilizing micro-cameras typically used in endoscopes, and by casting the silicone gel directly onto the cameras. Tactile images from OmniTact are shown in the figures below.


Tactile readings from OmniTact with various objects. From left to right: M3 Screw Head, M3 Screw Threads, Combination Lock with numbers 4 3 9, Printed Circuit Board (PCB), Wireless Mouse USB. All images are taken from the upward-facing camera.


Tactile readings from the OmniTact being rolled over a gear rack. The multi-directional capabilities of OmniTact keep the gear rack in view as the sensor is rotated.

Design Highlights

One of our primary goals throughout the design process was to make OmniTact as compact as possible. To accomplish this, we used micro-cameras with large viewing angles and a small focus distance. Specifically, we picked cameras that are commonly used in medical endoscopes, measuring just 1.35 × 1.35 × 5 mm with a focus distance of 5 mm. These cameras were arranged in a 3D-printed camera mount, as shown in the figure below, which allowed us to minimize blind spots on the surface of the sensor and reduce the diameter (D) of the sensor to 30 mm.


This image shows the fields of view and arrangement of the 5 micro-cameras inside the sensor. Using this arrangement, most of the fingertip can effectively be made sensitive. In the vertical plane, shown in A, we obtain α = 270 degrees of sensitivity. In the horizontal plane, shown in B, we obtain 360 degrees of sensitivity, except for small blind spots between the fields of view.

Electrical Connector Insertion Task

We show that OmniTact’s multi-directional tactile sensing capabilities can be leveraged to solve a challenging robotic control problem: Inserting an electrical connector blindly into a wall outlet purely based on information from the multi-directional touch sensor (shown in the figure below). This task is challenging since it requires localizing the electrical connector relative to the gripper and localizing the gripper relative to the wall outlet.


To learn the insertion task, we used a simple imitation learning algorithm that estimates the end-effector displacement required to insert the plug into the outlet based on the tactile images from the OmniTact sensor. Our model was trained with just 100 insertion demonstrations, collected by teleoperating the robot with a keyboard. Successful insertions obtained by running the trained policy are shown in the gifs below.
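
For intuition, here is a minimal behavioral-cloning sketch of what such an algorithm could look like; it is not the paper's implementation. It assumes the top and side tactile images are stacked along the channel dimension and that a small CNN regresses a 3-DoF end-effector displacement, trained with a mean-squared-error loss on the demonstration data.

```python
import torch
import torch.nn as nn

class InsertionPolicy(nn.Module):
    """Toy CNN: two stacked 64x64 tactile images -> 3-DoF end-effector displacement."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 32, 5, stride=2, padding=2), nn.ReLU(),  # 6 channels = top + side RGB
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 128), nn.ReLU(),
            nn.Linear(128, 3),                                    # (dx, dy, dz) displacement
        )

    def forward(self, tactile):
        return self.net(tactile)

policy = InsertionPolicy()
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)
# Each demonstration pairs tactile images with the displacement the operator commanded.
tactile_batch = torch.rand(16, 6, 64, 64)
displacement_batch = torch.rand(16, 3)
loss = nn.functional.mse_loss(policy(tactile_batch), displacement_batch)
loss.backward()
optimizer.step()
```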




As shown in the table below, using the multi-directional capabilities of our sensor (both the top and side cameras) allowed for the highest success rate (80%), compared to using just one camera from the sensor, indicating that multi-directional touch sensing is indeed crucial for solving this task. We additionally compared performance with another multi-directional tactile sensor, the OptoForce sensor, which achieved a success rate of only 17%.


What’s Next?

We believe that compact, high-resolution, and multi-directional touch sensing has the potential to transform the capabilities of current robotic manipulation systems. We suspect that multi-directional tactile sensing could be an essential element of general-purpose robotic manipulation, as well as of applications such as robotic teleoperation in surgery and in sea and space missions. In the future, we plan to make OmniTact cheaper and more compact, allowing it to be used in a wider range of tasks. Our team additionally plans to conduct more robotic manipulation research that will inform future generations of tactile sensors.

This blog post is based on our paper of the same name, which will be presented at the International Conference on Robotics and Automation (ICRA) 2020.

We would like to thank Professor Sergey Levine, Professor Chelsea Finn, and Stephen Tian for their valuable feedback when preparing this blog post.

This article was initially published on the BAIR blog, and appears here with the authors’ permission.

How COVID-19 is Accelerating Robot and Drone Technology for use in Everyday Activities

The COVID-19 pandemic has forced changes to our daily lives, but when it comes to technology, not all changes have been unwelcome. In many ways, shifting the way we live has accelerated the adoption of technology to a level we were only beginning to dip our toes into.

Using drones to reduce disease-spreading mosquito populations

Vector-borne diseases are illnesses that can be transmitted to humans by blood-feeding insects, such as mosquitoes, ticks and fleas. Mosquitoes are known to contribute to the spread of a number of vector-borne diseases, including malaria, dengue, yellow fever and Zika.

RSS 2020 – all the papers and videos!

RSS 2020 was held virtually this year, from the RSS Pioneers Workshop on July 11 to the Paper Awards and Farewell on July 16. Many talks are now available online, including 103 accepted papers, each presented as an online Spotlight Talk on the RSS Youtube channel, and of course the plenaries and much of the workshop content as well. We’ve tried to link here to all of the goodness from RSS 2020.

The RSS Keynote on July 15 was delivered by Josh Tenenbaum, Professor of Computational Cognitive Science at MIT in the Department of Brain and Cognitive Sciences and CSAIL. It was titled “It’s all in your head: Intuitive physics, planning, and problem-solving in brains, minds and machines”.

Abstract: I will overview what we know about the human mind’s internal models of the physical world, including how these models arise over evolution and developmental learning, how they are implemented in neural circuitry, and how they are used to support planning and rapid trial-and-error problem-solving in tool use and other physical reasoning tasks. I will also discuss prospects for building more human-like physical common sense in robots and other AI systems.

RSS 2020 introduces the new RSS Test of Time Award, given to the highest-impact papers published at RSS (and potentially journal versions thereof) at least ten years ago. Impact may mean that a paper changed how we think about problems or about robotic design, that it brought fully new problems to the attention of the community, or that it pioneered a new approach to robotic design or problem solving. With this award, RSS wants to foster discussion of the long-term development of our field. The award is an opportunity to reflect on and discuss the past, which is essential to making progress in the future. The awardee’s keynote is therefore complemented with a Test of Time Panel session devoted to this important discussion.

This year’s Test of Time Award goes to the pair of Square Root SAM papers, for pioneering an information-smoothing approach to the SLAM problem via square root factorization, its interpretation as a graphical model, and the widely used GTSAM free software repository.

Abstract: Many estimation, planning and optimal control problems in robotics have an optimization problem at their core. In most of these optimization problems, the objective function is composed of many different factors or terms that are local in nature, i.e., they only depend on a small subset of the variables. 10 years ago the Square Root SAM papers identified factor graphs as a particularly insightful way of modeling this locality structure. Since then we have realized that factor graphs can represent a wide variety of problems across robotics, expose opportunities to improve computational performance, and are beneficial in designing and thinking about how to model a problem, even aside from performance considerations. Many of these principles have been embodied in our evolving open source package GTSAM, which puts factor graphs front and central, and which has been used with great success in a number of state of the art robotics applications. We will also discuss where factor graphs, in our opinion, can break in
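
To give a flavor of what modeling a problem with factor graphs looks like in practice, here is a minimal pose-graph smoothing example using GTSAM's Python bindings (the 4.x API is assumed): a prior and two odometry measurements each become a local factor, and the whole trajectory is optimized jointly, exploiting the graph's sparsity.

```python
import numpy as np
import gtsam

# Build a tiny 2D pose graph: a prior on the first pose plus two odometry factors.
graph = gtsam.NonlinearFactorGraph()
prior_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.1, 0.1, 0.05]))
odom_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.2, 0.2, 0.1]))

graph.add(gtsam.PriorFactorPose2(1, gtsam.Pose2(0.0, 0.0, 0.0), prior_noise))
graph.add(gtsam.BetweenFactorPose2(1, 2, gtsam.Pose2(1.0, 0.0, 0.0), odom_noise))
graph.add(gtsam.BetweenFactorPose2(2, 3, gtsam.Pose2(1.0, 0.0, 0.0), odom_noise))

# Initial guesses can be rough; the optimizer refines them jointly.
initial = gtsam.Values()
initial.insert(1, gtsam.Pose2(0.1, -0.1, 0.02))
initial.insert(2, gtsam.Pose2(1.2, 0.1, -0.05))
initial.insert(3, gtsam.Pose2(2.1, 0.05, 0.01))

result = gtsam.LevenbergMarquardtOptimizer(graph, initial).optimize()
print(result)
```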

The RSS 2020 Plenary Sessions highlighted the Early Career Award recipients: Byron Boots, Luca Carlone, and Jeannette Bohg. Byron Boots is an Associate Professor in the Paul G. Allen School of Computer Science and Engineering at the University of Washington. Luca Carlone is the Charles Stark Draper Assistant Professor in the Department of Aeronautics and Astronautics at the Massachusetts Institute of Technology, and a Principal Investigator in the Laboratory for Information & Decision Systems (LIDS). Jeannette Bohg is an Assistant Professor of Computer Science at Stanford University.

Title: Perspectives on Machine Learning for Robotics

Abstract: Recent advances in machine learning are leading to new tools for designing intelligent robots: functions relied on to govern a robot’s behavior can be learned from a robot’s interaction with its environment rather than hand-designed by an engineer. Many machine learning methods assume little prior knowledge and are extremely flexible; they can model almost anything! But this flexibility comes at a cost. The same algorithms are often notoriously data-hungry and computationally expensive, two problems that can be debilitating for robotics. In this talk I’ll discuss how machine learning can be combined with prior knowledge to build effective solutions to robotics problems. I’ll start by introducing an online learning perspective on robot adaptation that unifies well-known algorithms and suggests new approaches. Along the way, I’ll focus on the use of simulation and expert advice to augment learning. I’ll discuss how imperfect models can be leveraged to rapidly update simple control policies and how imitation can accelerate reinforcement learning. I will also show how we have applied some of these ideas to an autonomous off-road racing task that requires impressive sensing, speed, and agility to complete.

Title: The Future of Robot Perception: Certifiable Algorithms and Real-time High-level Understanding

Abstract: Robot perception has witnessed unprecedented progress in the last decade. Robots are now able to detect objects and create large-scale maps of an unknown environment, which are crucial capabilities for navigation, manipulation, and human-robot interaction. Despite these advances, both researchers and practitioners are well aware of the brittleness of current perception systems, and a large gap still separates robot and human perception.

This talk discusses two efforts targeted at bridging this gap. The first focuses on robustness. I present recent advances in the design of certifiable perception algorithms that are robust to extreme amounts of noise and outliers and afford performance guarantees. I present fast certifiable algorithms for object pose estimation: our algorithms are “hard to break” (e.g., are robust to 99% outliers) and succeed in localizing objects where an average human would fail. Moreover, they come with a “contract” that guarantees their input-output performance. I discuss the foundations of certifiable perception and motivate how these foundations can lead to safer systems.

The second effort targets high-level understanding. While humans are able to quickly grasp geometric, semantic, and physical aspects of a scene, high-level scene understanding remains a challenge for robotics. I present our work on real-time metric-semantic understanding and 3D Dynamic Scene Graphs. I introduce the first generation of Spatial Perception Engines, which extend the traditional notions of mapping and SLAM and allow a robot to build a “mental model” of the environment, including spatial concepts (e.g., humans, objects, rooms, buildings) and their relations at multiple levels of abstraction.
Certifiable algorithms and real-time high-level understanding are key enablers for the next generation of autonomous systems that are trustworthy, understand and execute high-level human instructions, and operate in large dynamic environments over an extended period of time.

Title: A Tale of Success and Failure in Robotics Grasping and Manipulation

Abstract: In 2007, I was a naïve grad student and started to work on vision-based robotic grasping. I had no prior background in manipulation, kinematics, dynamics or control. Yet, I dove into the field by re-implementing and improving a learning-based method. While making some contributions, the proposed method also had many limitations partly due to the way the problem was framed. Looking back at the entire journey until today, I find that I have learned the most about robotic grasping and manipulation from observing failures and limitations of existing approaches – including my own. In this talk, I want to highlight how these failures and limitations have shaped my view on what may be some of the underlying principles of autonomous robotic manipulation. I will emphasise three points. First, perception and prediction will always be noisy, partial and sometimes just plain wrong. Therefore, one focus of my research is on methods that support decision-making under uncertainty due to noisy sensing, inaccurate models and hard-to-predict dynamics. To this end, I will present a robotic system that demonstrates the importance of continuous, real-time perception and its tight integration with reactive motion generation methods. I will also talk about work that funnels uncertainty by enabling robots to exploit contact constraints during manipulation.

Second, a robot has many more sensors than just cameras, and they all provide complementary information. Therefore, one focus of my research is on methods that can exploit multimodal information such as vision and touch for contact-rich manipulation. It is non-trivial to manually design a manipulation controller that combines modalities with very different characteristics. I will present work that uses self-supervision to learn a compact and multimodal representation of visual and haptic sensory inputs, which can then be used to improve the sample efficiency of policy learning. And third, choosing the right robot action representation has a large influence on the success of a manipulation policy, controller or planner. Having believed for many years that inferring contact points for robotic grasping is futile, I will present work that convinced me otherwise. Specifically, this work uses contact points as an abstraction that can be re-used by a diverse set of robot hands.

Inclusion@RSS is excited to host a panel “On the Future of Robotics” to discuss how we can have an inclusive robotics community and its impact on the future of the field. Moderator: Matt Johnson-Roberson (University of Michigan) with Panelists: Tom Williams (Colorado School of Mines), Eduard Fosch-Villaronga (Leiden University), Lydia Tapia (University of New Mexico), Chris Macnab (University of Calgary), Adam Poulsen (Charles Sturt University), Chad Jenkins (University of Michigan), Kendall Queen (University of Pennsylvania), Naveen Kuppuswamy (Toyota Research Institute).

The RSS community is committed to increasing the participation of groups traditionally underrepresented in robotics (including but not limited to: women, LGBTQ+, underrepresented minorities, and people with disabilities), especially people early in their studies and career. Such efforts are crucial for increasing research capacity, creativity, and broadening the impact of robotics research.

The RSS Pioneers Workshop for senior Ph.D. students and postdocs was modelled on the highly successful HRI Pioneers Workshop and took place on Saturday, July 11. The goal of RSS Pioneers is to bring together a cohort of the world’s top early-career researchers to foster creativity and collaborations surrounding challenges in all areas of robotics, as well as to help young researchers navigate their next career stages. The workshop included a mix of research and career talks from senior scholars in the field, from both academia and industry, research presentations from attendees, and networking activities, with a poster session where Pioneers got a chance to externally showcase their research.

Content from the various workshops on July 12 and 13 may be available through the various workshop websites.

RSS 2020 Accepted Workshops

Sunday, July 12

WS Title Organizers Virtual Session Link
WS1-2 Reacting to contact: Enabling transparent interactions through intelligent sensing and actuation Ankit Bhatia
Aaron M. Johnson
Matthew T. Mason
[Session]
WS1-3 Certifiable Robot Perception: from Global Optimization to Safer Robots Luca Carlone
Tat-Jun Chin
Anders Eriksson
Heng Yang
[Session]
WS1-4 Advancing the State of Machine Learning for Manufacturing Robotics Elena Messina
Holly Yanco
Megan Zimmerman
Craig Schlenoff
Dragos Margineantu
[Session]
WS1-5 Advances and Challenges in Imitation Learning for Robotics  Scott Niekum
Akanksha Saran
Yuchen Cui
Nick Walker
Andreea Bobu
Ajay Mandlekar
Danfei Xu
[Session]
WS1-6 2nd Workshop on Closing the Reality Gap in Sim2Real Transfer for Robotics Sebastian Höfer
Kostas Bekris
Ankur Handa
Juan Camilo Gamboa
Florian Golemo
Melissa Mozifian
[Session]
WS1-7 ROS Carpentry Workshop Katherine Scott
Mabel Zhang
Camilo Buscaron
Steve Macenski
N/A
WS1-8 Perception and Control for Fast and Agile Super-Vehicles II Varun Murali
Phillip Foehn
Davide Scaramuzza
Sertac Karaman
[Session]
WS1-9 Robotics Retrospectives  Jeannette Bohg
Franziska Meier
Arunkumar Byravan
Akshara Rai
[Session]
WS1-10 Heterogeneous Multi-Robot Task Allocation and Coordination  Harish Ravichandar
Ragesh Ramachandran
Sonia Chernova
Seth Hutchinson
Gaurav Sukhatme
Vijay Kumar
[Session]
WS1-11 Learning (in) Task and Motion Planning  Danny Driess
Neil T. Dantam
Lydia E. Kavraki
Marc Toussaint
[Session]
WS1-12 Performing Arts Robots & Technologies, Integrated (PARTI)  Naomi Fitter
Heather Knight
Amy LaViers
[Session]
WS1-13 Robots in the Wild: Challenges in Deploying Robust Autonomy for Robotic Exploration Hannah Kerner
Amy Tabb
Jnaneshwar Das
Pratap Tokekar
Masahiro Ono
[Session]
WS1-14 Emergent Behaviors in Human-Robot Systems  Erdem Bıyık
Minae Kwon
Dylan Losey
Noah Goodman
Stefanos Nikolaidis
Dorsa Sadigh
[Session]

Monday, July 13

WS Title Organizers Virtual Session Link
WS2-1 Interaction and Decision-Making in Autonomous Driving  Rowan McAllister
Liting Sun
Igor Gilitschenski
Daniela Rus
[Session]
WS2-2 2nd RSS Workshop on Robust Autonomy: Tools for Safety in Real-World Uncertain Environments Andrea Bajcsy
Ransalu Senanayake
Somil Bansal
Sylvia Herbert
David Fridovich-Keil
Jaime Fernández Fisac
[Session]
WS2-3 AI & Its Alternatives in Assistive & Collaborative Robotics  Deepak Gopinath
Aleksandra Kalinowska
Mahdieh Nejati
Katarina Popovic
Brenna Argall
Todd Murphey
[Session]
WS2-4 Benchmarking Tools for Evaluating Robotic Assembly of Small Parts Adam Norton
Holly Yanco
Joseph Falco
Kenneth Kimble
[Session]
WS2-5 Good Citizens of Robotics Research Mustafa Mukadam
Nima Fazeli
Niko Sünderhauf
[Session]
WS2-6 Structured Approaches to Robot Learning for Improved Generalization  Arunkumar Byravan
Markus Wulfmeier
Franziska Meier
Mustafa Mukadam
Nicolas Heess
Angela Schoellig
Dieter Fox
[Session]
WS2-7 Explainable and Trustworthy Robot Decision Making for Scientific Data Collection Nisar Ahmed
P. Michael Furlong
Geoff Hollinger
Seth McCammon
[Session]
WS2-8 Closing the Academia to Real-World Gap in Service Robotics  Guilherme Maeda
Nick Walker
Petar Kormushev
Maru Cabrera
[Session]
WS2-9 Visuotactile Sensors for Robust Manipulation: From Perception to Control  Alex Alspach
Naveen Kuppuswamy
Avinash Uttamchandani
Filipe Veiga
Wenzhen Yuan
[Session]
WS2-10 Self-Supervised Robot Learning  Abhinav Valada
Anelia Angelova
Joschka Boedecker
Oier Mees
Wolfram Burgard
[Session]
WS2-11 Power On and Go Robots: ‘Out-of-the-Box’ Systems for Real-World Applications Jonathan Kelly
Stephan Weiss
Paolo Robuffo Giordano
Valentin Peretroukhin
[Session]
WS2-12 Workshop on Visual Learning and Reasoning for Robotic Manipulation  Kuan Fang
David Held
Yuke Zhu
Dinesh Jayaraman
Animesh Garg
Lin Sun
Yu Xiang
Greg Dudek
[Session]
WS2-13 Action Representations for Learning in Continuous Control  Tamim Asfour
Miroslav Bogdanovic
Jeannette Bohg
Animesh Garg
Roberto Martín-Martín
Ludovic Righetti
[Session]

RSS 2020 Accepted Papers

Paper ID Title Authors Virtual Session Link
1 Planning and Execution using Inaccurate Models with Provable Guarantees Anirudh Vemula (Carnegie Mellon University)*; Yash Oza (CMU); J. Bagnell (Aurora Innovation); Maxim Likhachev (CMU) Virtual Session #1
2 Swoosh! Rattle! Thump! – Actions that Sound Dhiraj Gandhi (Carnegie Mellon University)*; Abhinav Gupta (Carnegie Mellon University); Lerrel Pinto (NYU/Berkeley) Virtual Session #1
3 Deep Visual Reasoning: Learning to Predict Action Sequences for Task and Motion Planning from an Initial Scene Image Danny Driess (Machine Learning and Robotics Lab, University of Stuttgart)*; Jung-Su Ha (); Marc Toussaint () Virtual Session #1
4 Elaborating on Learned Demonstrations with Temporal Logic Specifications Craig Innes (University of Edinburgh)*; Subramanian Ramamoorthy (University of Edinburgh) Virtual Session #1
5 Non-revisiting Coverage Task with Minimal Discontinuities for Non-redundant Manipulators Tong Yang (Zhejiang University)*; Jaime Valls Miro (University of Technology Sydney); Yue Wang (Zhejiang University); Rong Xiong (Zhejiang University) Virtual Session #1
6 LatticeNet: Fast Point Cloud Segmentation Using Permutohedral Lattices Radu Alexandru Rosu (University of Bonn)*; Peer Schütt (University of Bonn); Jan Quenzel (University of Bonn); Sven Behnke (University of Bonn) Virtual Session #1
7 A Smooth Representation of Belief over SO(3) for Deep Rotation Learning with Uncertainty Valentin Peretroukhin (University of Toronto)*; Matthew Giamou (University of Toronto); W. Nicholas Greene (MIT); David Rosen (MIT Laboratory for Information and Decision Systems); Jonathan Kelly (University of Toronto); Nicholas Roy (MIT) Virtual Session #1
8 Leading Multi-Agent Teams to Multiple Goals While Maintaining Communication Brian Reily (Colorado School of Mines)*; Christopher Reardon (ARL); Hao Zhang (Colorado School of Mines) Virtual Session #1
9 OverlapNet: Loop Closing for LiDAR-based SLAM Xieyuanli Chen (Photogrammetry & Robotics Lab, University of Bonn)*; Thomas Läbe (Institute for Geodesy and Geoinformation, University of Bonn); Andres Milioto (University of Bonn); Timo Röhling (Fraunhofer FKIE); Olga Vysotska (Autonomous Intelligent Driving GmbH); Alexandre Haag (AID); Jens Behley (University of Bonn); Cyrill Stachniss (University of Bonn) Virtual Session #1
10 The Dark Side of Embodiment – Teaming Up With Robots VS Disembodied Agents Filipa Correia (INESC-ID & University of Lisbon)*; Samuel Gomes (IST/INESC-ID); Samuel Mascarenhas (INESC-ID); Francisco S. Melo (IST/INESC-ID); Ana Paiva (INESC-ID U of Lisbon) Virtual Session #1
11 Shared Autonomy with Learned Latent Actions Hong Jun Jeon (Stanford University)*; Dylan Losey (Stanford University); Dorsa Sadigh (Stanford) Virtual Session #1
12 Regularized Graph Matching for Correspondence Identification under Uncertainty in Collaborative Perception Peng Gao (Colorado school of mines)*; Rui Guo (Toyota Motor North America); Hongsheng Lu (Toyota Motor North America); Hao Zhang (Colorado School of Mines) Virtual Session #1
13 Frequency Modulation of Body Waves to Improve Performance of Limbless Robots Baxi Zhong (Georgia Tech)*; Tianyu Wang (Carnegie Mellon University); Jennifer Rieser (Georgia Institute of Technology); Abdul Kaba (Morehouse College); Howie Choset (Carnegie Mellon University); Daniel Goldman (Georgia Institute of Technology) Virtual Session #1
14 Self-Reconfiguration in Two-Dimensions via Active Subtraction with Modular Robots Matthew Hall (The University of Sheffield)*; Anil Ozdemir (The University of Sheffield); Roderich Gross (The University of Sheffield) Virtual Session #1
15 Singularity Maps of Space Robots and their Application to Gradient-based Trajectory Planning Davide Calzolari (Technical University of Munich (TUM), German Aerospace Center (DLR))*; Roberto Lampariello (German Aerospace Center); Alessandro Massimo Giordano (Deutches Zentrum für Luft und Raumfahrt) Virtual Session #1
16 Grounding Language to Non-Markovian Tasks with No Supervision of Task Specifications Roma Patel (Brown University)*; Ellie Pavlick (Brown University); Stefanie Tellex (Brown University) Virtual Session #1
17 Fast Uniform Dispersion of a Crash-prone Swarm Michael Amir (Technion – Israel Institute of Technology)*; Freddy Bruckstein (Technion) Virtual Session #1
18 Simultaneous Enhancement and Super-Resolution of Underwater Imagery for Improved Visual Perception Md Jahidul Islam (University of Minnesota Twin Cities)*; Peigen Luo (University of Minnesota-Twin Cities); Junaed Sattar (University of Minnesota) Virtual Session #1
19 Collision Probabilities for Continuous-Time Systems Without Sampling Kristoffer Frey (MIT)*; Ted Steiner (Charles Stark Draper Laboratory, Inc.); Jonathan How (MIT) Virtual Session #1
20 Event-Driven Visual-Tactile Sensing and Learning for Robots Tasbolat Taunyazov (National University of Singapore); Weicong Sng (National University of Singapore); Brian Lim (National University of Singapore); Hian Hian See (National University of Singapore); Jethro Kuan (National University of Singapore); Abdul Fatir Ansari (National University of Singapore); Benjamin Tee (National University of Singapore); Harold Soh (National University Singapore)* Virtual Session #1
21 Resilient Distributed Diffusion for Multi-Robot Systems Using Centerpoint Jiani Li (Vanderbilt University)*; Waseem Abbas (Vanderbilt University); Mudassir Shabbir (Information Technology University); Xenofon Koutsoukos (Vanderbilt University) Virtual Session #1
22 Pixel-Wise Motion Deblurring of Thermal Videos Manikandasriram Srinivasan Ramanagopal (University of Michigan)*; Zixu Zhang (University of Michigan); Ram Vasudevan (University of Michigan); Matthew Johnson Roberson (University of Michigan) Virtual Session #1
23 Controlling Contact-Rich Manipulation Under Partial Observability Florian Wirnshofer (Siemens AG)*; Philipp Sebastian Schmitt (Siemens AG); Georg von Wichert (Siemens AG); Wolfram Burgard (University of Freiburg) Virtual Session #1
24 AVID: Learning Multi-Stage Tasks via Pixel-Level Translation of Human Videos Laura Smith (UC Berkeley)*; Nikita Dhawan (UC Berkeley); Marvin Zhang (UC Berkeley); Pieter Abbeel (UC Berkeley); Sergey Levine (UC Berkeley) Virtual Session #1
25 Provably Constant-time Planning and Re-planning for Real-time Grasping Objects off a Conveyor Belt Fahad Islam (Carnegie Mellon University)*; Oren Salzman (Technion); Aditya Agarwal (CMU); Maxim Likhachev (Carnegie Mellon University) Virtual Session #1
26 Online IMU Intrinsic Calibration: Is It Necessary? Yulin Yang (University of Delaware)*; Patrick Geneva (University of Delaware); Xingxing Zuo (Zhejiang University); Guoquan Huang (University of Delaware) Virtual Session #1
27 A Berry Picking Robot With A Hybrid Soft-Rigid Arm: Design and Task Space Control Naveen Kumar Uppalapati (University of Illinois at Urbana Champaign)*; Benjamin Walt ( University of Illinois at Urbana Champaign); Aaron Havens (University of Illinois Urbana Champaign); Armeen Mahdian (University of Illinois at Urbana Champaign); Girish Chowdhary (University of Illinois at Urbana Champaign); Girish Krishnan (University of Illinois at Urbana Champaign) Virtual Session #1
28 Iterative Repair of Social Robot Programs from Implicit User Feedback via Bayesian Inference Michael Jae-Yoon Chung (University of Washington)*; Maya Cakmak (University of Washington) Virtual Session #1
29 Cable Manipulation with a Tactile-Reactive Gripper Siyuan Dong (MIT); Shaoxiong Wang (MIT); Yu She (MIT)*; Neha Sunil (Massachusetts Institute of Technology); Alberto Rodriguez (MIT); Edward Adelson (MIT, USA) Virtual Session #1
30 Automated Synthesis of Modular Manipulators’ Structure and Control for Continuous Tasks around Obstacles Thais Campos de Almeida (Cornell University)*; Samhita Marri (Cornell University); Hadas Kress-Gazit (Cornell) Virtual Session #1
31 Learning Memory-Based Control for Human-Scale Bipedal Locomotion Jonah Siekmann (Oregon State University)*; Srikar Valluri (Oregon State University); Jeremy Dao (Oregon State University); Francis Bermillo (Oregon State University); Helei Duan (Oregon State University); Alan Fern (Oregon State University); Jonathan Hurst (Oregon State University) Virtual Session #1
32 Multi-Fidelity Black-Box Optimization for Time-Optimal Quadrotor Maneuvers Gilhyun Ryou (Massachusetts Institute of Technology)*; Ezra Tal (Massachusetts Institute of Technology); Sertac Karaman (Massachusetts Institute of Technology) Virtual Session #1
33 Manipulation Trajectory Optimization with Online Grasp Synthesis and Selection Lirui Wang (University of Washington)*; Yu Xiang (NVIDIA); Dieter Fox (NVIDIA Research / University of Washington) Virtual Session #1
34 VisuoSpatial Foresight for Multi-Step, Multi-Task Fabric Manipulation Ryan Hoque (UC Berkeley)*; Daniel Seita (University of California, Berkeley); Ashwin Balakrishna (UC Berkeley); Aditya Ganapathi (University of California, Berkeley); Ajay Tanwani (UC Berkeley); Nawid Jamali (Honda Research Institute); Katsu Yamane (Honda Research Institute); Soshi Iba (Honda Research Institute); Ken Goldberg (UC Berkeley) Virtual Session #1
35 Spatial Action Maps for Mobile Manipulation Jimmy Wu (Princeton University)*; Xingyuan Sun (Princeton University); Andy Zeng (Google); Shuran Song (Columbia University); Johnny Lee (Google); Szymon Rusinkiewicz (Princeton University); Thomas Funkhouser (Princeton University) Virtual Session #2
36 Generalized Tsallis Entropy Reinforcement Learning and Its Application to Soft Mobile Robots Kyungjae Lee (Seoul National University)*; Sungyub Kim (KAIST); Sungbin Lim (UNIST); Sungjoon Choi (Disney Research); Mineui Hong (Seoul National University); Jaein Kim (Seoul National University); Yong-Lae Park (Seoul National University); Songhwai Oh (Seoul National University) Virtual Session #2
37 Learning Labeled Robot Affordance Models Using Simulations and Crowdsourcing Adam Allevato (UT Austin)*; Elaine Short (Tufts University); Mitch Pryor (UT Austin); Andrea Thomaz (UT Austin) Virtual Session #2
38 Towards Embodied Scene Description Sinan Tan (Tsinghua University); Huaping Liu (Tsinghua University)*; Di Guo (Tsinghua University); Xinyu Zhang (Tsinghua University); Fuchun Sun (Tsinghua University) Virtual Session #2
39 Reinforcement Learning based Control of Imitative Policies for Near-Accident Driving Zhangjie Cao (Stanford University); Erdem Biyik (Stanford University)*; Woodrow Wang (Stanford University); Allan Raventos (Toyota Research Institute); Adrien Gaidon (Toyota Research Institute); Guy Rosman (Toyota Research Institute); Dorsa Sadigh (Stanford) Virtual Session #2
40 Deep Drone Acrobatics Elia Kaufmann (ETH / University of Zurich)*; Antonio Loquercio (ETH / University of Zurich); Rene Ranftl (Intel Labs); Matthias Müller (Intel Labs); Vladlen Koltun (Intel Labs); Davide Scaramuzza (University of Zurich & ETH Zurich, Switzerland) Virtual Session #2
41 Active Preference-Based Gaussian Process Regression for Reward Learning Erdem Biyik (Stanford University)*; Nicolas Huynh (École Polytechnique); Mykel Kochenderfer (Stanford University); Dorsa Sadigh (Stanford) Virtual Session #2
42 A Bayesian Framework for Nash Equilibrium Inference in Human-Robot Parallel Play Shray Bansal (Georgia Institute of Technology)*; Jin Xu (Georgia Institute of Technology); Ayanna Howard (Georgia Institute of Technology); Charles Isbell (Georgia Institute of Technology) Virtual Session #2
43 Data-driven modeling of a flapping bat robot with a single flexible wing surface Jonathan Hoff (University of Illinois at Urbana-Champaign)*; Seth Hutchinson (Georgia Tech) Virtual Session #2
44 Safe Motion Planning for Autonomous Driving using an Adversarial Road Model Alex Liniger (ETH Zurich)*; Luc Van Gool (ETH Zurich) Virtual Session #2
45 A Motion Taxonomy for Manipulation Embedding David Paulius (University of South Florida)*; Nicholas Eales (University of South Florida); Yu Sun (University of South Florida) Virtual Session #2
46 Aerial Manipulation Using Hybrid Force and Position NMPC Applied to Aerial Writing Dimos Tzoumanikas (Imperial College London)*; Felix Graule (ETH Zurich); Qingyue Yan (Imperial College London); Dhruv Shah (Berkeley Artificial Intelligence Research); Marija Popovic (Imperial College London); Stefan Leutenegger (Imperial College London) Virtual Session #2
47 A Global Quasi-Dynamic Model for Contact-Trajectory Optimization in Manipulation Bernardo Aceituno-Cabezas (MIT)*; Alberto Rodriguez (MIT) Virtual Session #2
48 Vision-Based Goal-Conditioned Policies for Underwater Navigation in the Presence of Obstacles Travis Manderson (McGill University)*; Juan Camilo Gamboa Higuera (McGill University); Stefan Wapnick (McGill University); Jean-François Tremblay (McGill University); Florian Shkurti (University of Toronto); David Meger (McGill University); Gregory Dudek (McGill University) Virtual Session #2
49 Spatio-Temporal Stochastic Optimization: Theory and Applications to Optimal Control and Co-Design Ethan Evans (Georgia Institute of Technology)*; Andrew Kendall (Georgia Institute of Technology); Georgios Boutselis (Georgia Institute of Technology ); Evangelos Theodorou (Georgia Institute of Technology) Virtual Session #2
50 Kernel Taylor-Based Value Function Approximation for Continuous-State Markov Decision Processes Junhong Xu (INDIANA UNIVERSITY)*; Kai Yin (Vrbo, Expedia Group); Lantao Liu (Indiana University, Intelligent Systems Engineering) Virtual Session #2
51 HMPO: Human Motion Prediction in Occluded Environments for Safe Motion Planning Jaesung Park (University of North Carolina at Chapel Hill)*; Dinesh Manocha (University of Maryland at College Park) Virtual Session #2
52 Motion Planning for Variable Topology Truss Modular Robot Chao Liu (University of Pennsylvania)*; Sencheng Yu (University of Pennsylvania); Mark Yim (University of Pennsylvania) Virtual Session #2
53 Emergent Real-World Robotic Skills via Unsupervised Off-Policy Reinforcement Learning Archit Sharma (Google)*; Michael Ahn (Google); Sergey Levine (Google); Vikash Kumar (Google); Karol Hausman (Google Brain); Shixiang Gu (Google Brain) Virtual Session #2
54 Compositional Transfer in Hierarchical Reinforcement Learning Markus Wulfmeier (DeepMind)*; Abbas Abdolmaleki (Google DeepMind); Roland Hafner (Google DeepMind); Jost Tobias Springenberg (DeepMind); Michael Neunert (Google DeepMind); Noah Siegel (DeepMind); Tim Hertweck (DeepMind); Thomas Lampe (DeepMind); Nicolas Heess (DeepMind); Martin Riedmiller (DeepMind) Virtual Session #2
55 Learning from Interventions: Human-robot interaction as both explicit and implicit feedback Jonathan Spencer (Princeton University)*; Sanjiban Choudhury (University of Washington); Matt Barnes (University of Washington); Matthew Schmittle (University of Washington); Mung Chiang (Princeton University); Peter Ramadge (Princeton); Siddhartha Srinivasa (University of Washington) Virtual Session #2
56 Fourier movement primitives: an approach for learning rhythmic robot skills from demonstrations Thibaut Kulak (Idiap Research Institute)*; Joao Silverio (Idiap Research Institute); Sylvain Calinon (Idiap Research Institute) Virtual Session #2
57 Self-Supervised Localisation between Range Sensors and Overhead Imagery Tim Tang (University of Oxford)*; Daniele De Martini (University of Oxford); Shangzhe Wu (University of Oxford); Paul Newman (University of Oxford) Virtual Session #2
58 Probabilistic Swarm Guidance Subject to Graph Temporal Logic Specifications Franck Djeumou (University of Texas at Austin)*; Zhe Xu (University of Texas at Austin); Ufuk Topcu (University of Texas at Austin) Virtual Session #2
59 In-Situ Learning from a Domain Expert for Real World Socially Assistive Robot Deployment Katie Winkle (Bristol Robotics Laboratory)*; Severin Lemaignan (); Praminda Caleb-Solly (); Paul Bremner (); Ailie Turton (University of the West of England); Ute Leonards () Virtual Session #2
60 MRFMap: Online Probabilistic 3D Mapping using Forward Ray Sensor Models Kumar Shaurya Shankar (Carnegie Mellon University)*; Nathan Michael (Carnegie Mellon University) Virtual Session #2
61 GTI: Learning to Generalize across Long-Horizon Tasks from Human Demonstrations Ajay Mandlekar (Stanford University); Danfei Xu (Stanford University)*; Roberto Martín-Martín (Stanford University); Silvio Savarese (Stanford University); Li Fei-Fei (Stanford University) Virtual Session #2
62 Agbots 2.0: Weeding Denser Fields with Fewer Robots Wyatt McAllister (University of Illinois)*; Joshua Whitman (University of Illinois); Allan Axelrod (University of Illinois); Joshua Varghese (University of Illinois); Girish Chowdhary (University of Illinois at Urbana Champaign); Adam Davis (University of Illinois) Virtual Session #2
63 Optimally Guarding Perimeters and Regions with Mobile Range Sensors Siwei Feng (Rutgers University)*; Jingjin Yu (Rutgers Univ.) Virtual Session #2
64 Learning Agile Robotic Locomotion Skills by Imitating Animals Xue Bin Peng (UC Berkeley)*; Erwin Coumans (Google); Tingnan Zhang (Google); Tsang-Wei Lee (Google Brain); Jie Tan (Google); Sergey Levine (UC Berkeley) Virtual Session #2
65 Learning to Manipulate Deformable Objects without Demonstrations Yilin Wu (UC Berkeley); Wilson Yan (UC Berkeley)*; Thanard Kurutach (UC Berkeley); Lerrel Pinto (); Pieter Abbeel (UC Berkeley) Virtual Session #2
66 Deep Differentiable Grasp Planner for High-DOF Grippers Min Liu (National University of Defense Technology)*; Zherong Pan (University of North Carolina at Chapel Hill); Kai Xu (National University of Defense Technology); Kanishka Ganguly (University of Maryland at College Park); Dinesh Manocha (University of North Carolina at Chapel Hill) Virtual Session #2
67 Ergodic Specifications for Flexible Swarm Control: From User Commands to Persistent Adaptation Ahalya Prabhakar (Northwestern University)*; Ian Abraham (Northwestern University); Annalisa Taylor (Northwestern University); Millicent Schlafly (Northwestern University); Katarina Popovic (Northwestern University); Giovani Diniz (Raytheon); Brendan Teich (Raytheon); Borislava Simidchieva (Raytheon); Shane Clark (Raytheon); Todd Murphey (Northwestern Univ.) Virtual Session #2
68 Dynamic Multi-Robot Task Allocation under Uncertainty and Temporal Constraints Shushman Choudhury (Stanford University)*; Jayesh Gupta (Stanford University); Mykel Kochenderfer (Stanford University); Dorsa Sadigh (Stanford); Jeannette Bohg (Stanford) Virtual Session #2
69 Latent Belief Space Motion Planning under Cost, Dynamics, and Intent Uncertainty Dicong Qiu (iSee); Yibiao Zhao (iSee); Chris Baker (iSee)* Virtual Session #2
70 Learning of Sub-optimal Gait Controllers for Magnetic Walking Soft Millirobots Utku Culha (Max-Planck Institute for Intelligent Systems); Sinan Ozgun Demir (Max Planck Institute for Intelligent Systems); Sebastian Trimpe (Max Planck Institute for Intelligent Systems); Metin Sitti (Carnegie Mellon University)* Virtual Session #3
71 Nonparametric Motion Retargeting for Humanoid Robots on Shared Latent Space Sungjoon Choi (Disney Research)*; Matthew Pan (Disney Research); Joohyung Kim (University of Illinois Urbana-Champaign) Virtual Session #3
72 Residual Policy Learning for Shared Autonomy Charles Schaff (Toyota Technological Institute at Chicago)*; Matthew Walter (Toyota Technological Institute at Chicago) Virtual Session #3
73 Efficient Parametric Multi-Fidelity Surface Mapping Aditya Dhawale (Carnegie Mellon University)*; Nathan Michael (Carnegie Mellon University) Virtual Session #3
74 Towards neuromorphic control: A spiking neural network based PID controller for UAV Rasmus Stagsted (University of Southern Denmark); Antonio Vitale (ETH Zurich); Jonas Binz (ETH Zurich); Alpha Renner (Institute of Neuroinformatics, University of Zurich and ETH Zurich); Leon Bonde Larsen (University of Southern Denmark); Yulia Sandamirskaya (Institute of Neuroinformatics, University of Zurich and ETH Zurich, Switzerland)* Virtual Session #3
75 Quantile QT-Opt for Risk-Aware Vision-Based Robotic Grasping Cristian Bodnar (University of Cambridge)*; Adrian Li (X); Karol Hausman (Google Brain); Peter Pastor (X); Mrinal Kalakrishnan (X) Virtual Session #3
76 Scaling data-driven robotics with reward sketching and batch reinforcement learning Serkan Cabi (DeepMind)*; Sergio Gómez Colmenarejo (DeepMind); Alexander Novikov (DeepMind); Ksenia Konyushova (DeepMind); Scott Reed (DeepMind); Rae Jeong (DeepMind); Konrad Zolna (DeepMind); Yusuf Aytar (DeepMind); David Budden (DeepMind); Mel Vecerik (Deepmind); Oleg Sushkov (DeepMind); David Barker (DeepMind); Jonathan Scholz (DeepMind); Misha Denil (DeepMind); Nando de Freitas (DeepMind); Ziyu Wang (Google Research, Brain Team) Virtual Session #3
77 MPTC – Modular Passive Tracking Controller for stack of tasks based control frameworks Johannes Englsberger (German Aerospace Center (DLR))*; Alexander Dietrich (DLR); George Mesesan (German Aerospace Center (DLR)); Gianluca Garofalo (German Aerospace Center (DLR)); Christian Ott (DLR); Alin Albu-Schaeffer (Robotics and Mechatronics Center (RMC), German Aerospace Center (DLR)) Virtual Session #3
78 NH-TTC: A gradient-based framework for generalized anticipatory collision avoidance Bobby Davis (University of Minnesota Twin Cities)*; Ioannis Karamouzas (Clemson University); Stephen Guy (University of Minnesota Twin Cities) Virtual Session #3
79 3D Dynamic Scene Graphs: Actionable Spatial Perception with Places, Objects, and Humans Antoni Rosinol (MIT)*; Arjun Gupta (MIT); Marcus Abate (MIT); Jingnan Shi (MIT); Luca Carlone (Massachusetts Institute of Technology) Virtual Session #3
80 Robot Object Retrieval with Contextual Natural Language Queries Thao Nguyen (Brown University)*; Nakul Gopalan (Georgia Tech); Roma Patel (Brown University); Matthew Corsaro (Brown University); Ellie Pavlick (Brown University); Stefanie Tellex (Brown University) Virtual Session #3
81 AlphaPilot: Autonomous Drone Racing Philipp Foehn (ETH / University of Zurich)*; Dario Brescianini (University of Zurich); Elia Kaufmann (ETH / University of Zurich); Titus Cieslewski (University of Zurich & ETH Zurich); Mathias Gehrig (University of Zurich); Manasi Muglikar (University of Zurich); Davide Scaramuzza (University of Zurich & ETH Zurich, Switzerland) Virtual Session #3
82 Concept2Robot: Learning Manipulation Concepts from Instructions and Human Demonstrations Lin Shao (Stanford University)*; Toki Migimatsu (Stanford University); Qiang Zhang (Shanghai Jiao Tong University); Kaiyuan Yang (Stanford University); Jeannette Bohg (Stanford) Virtual Session #3
83 A Variable Rolling SLIP Model for a Conceptual Leg Shape to Increase Robustness of Uncertain Velocity on Unknown Terrain Adar Gaathon (Technion – Israel Institute of Technology)*; Amir Degani (Technion – Israel Institute of Technology) Virtual Session #3
84 Interpreting and Predicting Tactile Signals via a Physics-Based and Data-Driven Framework Yashraj Narang (NVIDIA)*; Karl Van Wyk (NVIDIA); Arsalan Mousavian (NVIDIA); Dieter Fox (NVIDIA) Virtual Session #3
85 Learning Active Task-Oriented Exploration Policies for Bridging the Sim-to-Real Gap Jacky Liang (Carnegie Mellon University)*; Saumya Saxena (Carnegie Mellon University); Oliver Kroemer (Carnegie Mellon University) Virtual Session #3
86 Manipulation with Shared Grasping Yifan Hou (Carnegie Mellon University)*; Zhenzhong Jia (SUSTech); Matthew Mason (Carnegie Mellon University) Virtual Session #3
87 Deep Learning Tubes for Tube MPC David Fan (Georgia Institute of Technology )*; Ali Agha (Jet Propulsion Laboratory); Evangelos Theodorou (Georgia Institute of Technology) Virtual Session #3
88 Reinforcement Learning for Safety-Critical Control under Model Uncertainty, using Control Lyapunov Functions and Control Barrier Functions Jason Choi (UC Berkeley); Fernando Castañeda (UC Berkeley); Claire Tomlin (UC Berkeley); Koushil Sreenath (Berkeley)* Virtual Session #3
89 Fast Risk Assessment for Autonomous Vehicles Using Learned Models of Agent Futures Allen Wang (MIT)*; Xin Huang (MIT); Ashkan Jasour (MIT); Brian Williams (Massachusetts Institute of Technology) Virtual Session #3
90 Online Domain Adaptation for Occupancy Mapping Anthony Tompkins (The University of Sydney)*; Ransalu Senanayake (Stanford University); Fabio Ramos (NVIDIA, The University of Sydney) Virtual Session #3
91 ALGAMES: A Fast Solver for Constrained Dynamic Games Simon Le Cleac’h (Stanford University)*; Mac Schwager (Stanford, USA); Zachary Manchester (Stanford) Virtual Session #3
92 Scalable and Probabilistically Complete Planning for Robotic Spatial Extrusion Caelan Garrett (MIT)*; Yijiang Huang (MIT Department of Architecture); Tomas Lozano-Perez (MIT); Caitlin Mueller (MIT Department of Architecture) Virtual Session #3
93 The RUTH Gripper: Systematic Object-Invariant Prehensile In-Hand Manipulation via Reconfigurable Underactuation Qiujie Lu (Imperial College London)*; Nicholas Baron (Imperial College London); Angus Clark (Imperial College London); Nicolas Rojas (Imperial College London) Virtual Session #3
94 Heterogeneous Graph Attention Networks for Scalable Multi-Robot Scheduling with Temporospatial Constraints Zheyuan Wang (Georgia Institute of Technology)*; Matthew Gombolay (Georgia Institute of Technology) Virtual Session #3
95 Robust Multiple-Path Orienteering Problem: Securing Against Adversarial Attacks Guangyao Shi (University of Maryland)*; Pratap Tokekar (University of Maryland); Lifeng Zhou (Virginia Tech) Virtual Session #3
96 Eyes-Closed Safety Kernels: Safety of Autonomous Systems Under Loss of Observability Forrest Laine (UC Berkeley)*; Chih-Yuan Chiu (UC Berkeley); Claire Tomlin (UC Berkeley) Virtual Session #3
97 Explaining Multi-stage Tasks by Learning Temporal Logic Formulas from Suboptimal Demonstrations Glen Chou (University of Michigan)*; Necmiye Ozay (University of Michigan); Dmitry Berenson (U Michigan) Virtual Session #3
98 Nonlinear Model Predictive Control of Robotic Systems with Control Lyapunov Functions Ruben Grandia (ETH Zurich)*; Andrew Taylor (Caltech); Andrew Singletary (Caltech); Marco Hutter (ETHZ); Aaron Ames (Caltech) Virtual Session #3
99 Learning to Slide Unknown Objects with Differentiable Physics Simulations Changkyu Song (Rutgers University); Abdeslam Boularias (Rutgers University)* Virtual Session #3
100 Reachable Sets for Safe, Real-Time Manipulator Trajectory Design Patrick Holmes (University of Michigan); Shreyas Kousik (University of Michigan)*; Bohao Zhang (University of Michigan); Daphna Raz (University of Michigan); Corina Barbalata (Louisiana State University); Matthew Johnson Roberson (University of Michigan); Ram Vasudevan (University of Michigan) Virtual Session #3
101 Learning Task-Driven Control Policies via Information Bottlenecks Vincent Pacelli (Princeton University)*; Anirudha Majumdar (Princeton) Virtual Session #3
102 Simultaneously Learning Transferable Symbols and Language Groundings from Perceptual Data for Instruction Following Nakul Gopalan (Georgia Tech)*; Eric Rosen (Brown University); Stefanie Tellex (Brown University); George Konidaris (Brown) Virtual Session #3
103 A social robot mediator to foster collaboration and inclusion among children Sarah Gillet (Royal Institute of Technology)*; Wouter van den Bos (University of Amsterdam); Iolanda Leite (KTH) Virtual Session #3

The RSS Foundation is the governing body behind the Robotics: Science and Systems (RSS) conference. The foundation was started and is run by volunteers from the robotics community who believe that an open, high-quality, single-track conference is an important component of an active and growing scientific discipline.

Letting robots manipulate cables


By Rachel Gordon
The opposing fingers are lightweight and quick moving, allowing nimble, real-time adjustments of force and position.
Photo courtesy of MIT CSAIL.

For humans, it can be challenging to manipulate thin flexible objects like ropes, wires, or cables. But if these problems are hard for humans, they are nearly impossible for robots. As a cable slides between the fingers, its shape is constantly changing, and the robot’s fingers must be constantly sensing and adjusting the cable’s position and motion.

Standard approaches have used a series of slow and incremental deformations, as well as mechanical fixtures, to get the job done. Recently, a group of researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) pursued the task from a different angle, in a manner that more closely mimics us humans. The team’s new system uses a pair of soft robotic grippers with high-resolution tactile sensors (and no added mechanical constraints) to successfully manipulate freely moving cables.

One could imagine using a system like this for both industrial and household tasks, to one day enable robots to help us with things like tying knots, wire shaping, or even surgical suturing. 

The team’s first step was to build a novel two-fingered gripper. The opposing fingers are lightweight and quick moving, allowing nimble, real-time adjustments of force and position. On the tips of the fingers are vision-based “GelSight” sensors, built from soft rubber with embedded cameras. The gripper is mounted on a robot arm, which can move as part of the control system.

The team’s second step was to create a perception-and-control framework to allow cable manipulation. For perception, they used the GelSight sensors to estimate the pose of the cable between the fingers, and to measure the frictional forces as the cable slides. Two controllers run in parallel: one modulates grip strength, while the other adjusts the gripper pose to keep the cable within the gripper.
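
A rough sketch of that two-controller loop appears below. It is not the CSAIL team's code: the sensor estimates, gripper interface, gains, and target force are all hypothetical placeholders meant only to illustrate how the two controllers run side by side.

```python
def cable_following_step(tactile, gripper, kp_force=0.5, kp_pose=1.0,
                         target_force=0.3):
    """One iteration of the two parallel controllers described above.

    `tactile` is assumed to provide estimates of the cable pose in the gripper
    frame and of the friction force as the cable slides; `gripper` is assumed
    to expose width and lateral-position commands. All names are illustrative.
    """
    # Controller 1: modulate grip strength so the cable slides without escaping.
    friction = tactile["friction_force"]      # estimated from GelSight shear
    gripper["width_cmd"] -= kp_force * (target_force - friction)

    # Controller 2: adjust gripper pose to keep the cable centered in the fingers.
    cable_offset = tactile["cable_y_offset"]  # lateral offset of cable from center
    gripper["y_cmd"] -= kp_pose * cable_offset
    return gripper

# Example: one control step with made-up sensor readings.
state = cable_following_step(
    {"friction_force": 0.25, "cable_y_offset": 0.004},
    {"width_cmd": 0.02, "y_cmd": 0.0},
)
print(state)
```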

When mounted on the arm, the gripper could reliably follow a USB cable starting from a random grasp position. Then, in combination with a second gripper, the robot could move the cable “hand over hand” (as a human would) in order to find the end of the cable. It could also adapt to cables of different materials and thicknesses.

As a further demo of its prowess, the robot performed an action that humans routinely do when plugging earbuds into a cell phone. Starting with a free-floating earbud cable, the robot was able to slide the cable between its fingers, stop when it felt the plug touch its fingers, adjust the plug’s pose, and finally insert the plug into the jack. 

“Manipulating soft objects is so common in our daily lives, like cable manipulation, cloth folding, and string knotting,” says Yu She, MIT postdoc and lead author on a new paper about the system. “In many cases, we would like to have robots help humans do this kind of work, especially when the tasks are repetitive, dull, or unsafe.” 

String me along 

Cable following is challenging for two reasons. First, it requires controlling both the “grasp force” (to enable smooth sliding) and the “grasp pose” (to prevent the cable from falling from the gripper’s fingers).

Second, the necessary information about the cable’s state is hard to capture with conventional vision systems during continuous manipulation: it is usually occluded, expensive to interpret, and sometimes inaccurate.

What’s more, this information can’t be directly observed with vision sensors alone, hence the team’s use of tactile sensors. The gripper’s joints are also flexible, protecting them from potential impact.

The algorithms can also be generalized to different cables with various physical properties like material, stiffness, and diameter, and also to those at different speeds. 

When comparing different controllers applied to the team’s gripper, their control policy could retain the cable in hand for longer distances than the three alternatives. For example, with the “open-loop” controller the gripper followed only 36 percent of the total length, easily lost the cable when it curved, and needed many regrasps to finish the task.

Looking ahead 

The team observed that it was difficult to pull the cable back when it reached the edge of the finger, because of the convex surface of the GelSight sensor. Therefore, they hope to improve the finger-sensor shape to enhance the overall performance. 

In the future, they plan to study more complex cable manipulation tasks such as cable routing and cable inserting through obstacles, and they want to eventually explore autonomous cable manipulation tasks in the auto industry.

Yu She wrote the paper alongside MIT PhD students Shaoxiong Wang, Siyuan Dong, and Neha Sunil; Alberto Rodriguez, MIT associate professor of mechanical engineering; and Edward Adelson, the John and Dorothy Wilson Professor in the MIT Department of Brain and Cognitive Sciences.

#ICRA2020 workshops on robotics and learning

This year the International Conference on Robotics and Automation (ICRA) is being run as a virtual event. One interesting feature of this conference is that it has been extended to run from 31 May to 31 August. A number of workshops were held on the opening day and here we focus on two of them: “Learning of manual skills in humans and robots” and “Emerging learning and algorithmic methods for data association in robotics”.

Learning of manual skills in humans and robots

This workshop was organised by Aude Billard (EPFL) and Dagmar Sternad (Northeastern University). It brought together researchers from human motor control and from robotics to answer questions such as: How do humans achieve manual dexterity? What kind of practice schedules can shape these skills? Can some of these strategies be transferred to robots? To what extent is robot manual skill limited by the hardware, and what can and cannot be learned?

The third session of the workshop focussed on “Learning skills” and you can watch the two talks and the discussions below:

Jeannette Bohg – Learning to scaffold the development of robotic manipulation skills

Dagmar Sternad – Learning and control in skilled interactions with objects: A task-dynamic approach

Discussion with Jeannette Bohg and Dagmar Sternad

Emerging learning and algorithmic methods for data association in robotics

This workshop covered emerging algorithmic methods based on optimization and graph-theoretic techniques, learning and end-to-end solutions based on deep neural networks, and the relationships between these techniques.
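
As a deliberately simple illustration of the optimization-based side of data association, the classical formulation assigns new detections to tracked landmarks by minimizing a total matching cost, for example with the Hungarian algorithm. The sketch below uses SciPy and made-up 2D positions; it stands in for the far richer learned and graph-theoretic methods discussed in the workshop.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Made-up 2D positions: rows = tracked landmarks, columns = new detections.
landmarks = np.array([[0.0, 0.0], [2.0, 1.0], [5.0, 5.0]])
detections = np.array([[2.1, 0.9], [0.2, -0.1], [4.8, 5.2]])

# Cost matrix: Euclidean distance between every landmark/detection pair.
cost = np.linalg.norm(landmarks[:, None, :] - detections[None, :, :], axis=-1)

# The Hungarian algorithm returns the assignment with minimum total cost.
rows, cols = linear_sum_assignment(cost)
for r, c in zip(rows, cols):
    print(f"landmark {r} <-> detection {c} (distance {cost[r, c]:.2f})")
```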

You can watch the workshop in full here:

Below is the programme with the times indicating the position of that talk in the YouTube video:
11:00 Ayoung Kim – Learning motion and place descriptor from LiDARs for long-term navigation
34:11 Xiaowei Zhou – Learning correspondences for 3D reconstruction and pose estimation
51:30 Florian Bernard – Higher-order projected power iterations for scalable multi-matching
1:11:24 Cesar Cadena – High level understanding in the data association problem
1:34:55 Spotlight talk 1: Daniele Cattaneo – CMRNet++: map and camera agnostic monocular visual localization in LiDAR maps
1:50:45 Nicholas Roy – The role of semantics in perception
2:11:12 Kostas Daniilidis – Learning representations for matching
2:33:26 Jonathan How – Consistent multi-view data association
2:51:40 John Leonard – A research agenda for robust semantic SLAM
3:17:58 Luca Carlone – Towards certifiably robust spatial perception
3:39:36 Roberto Tron – Fast, consistent distributed matching for robotics applications
3:59:22 Randal Beard – Tracking moving objects from a moving camera in 3d environments
4:18:49 Nikolay Atanasov – A unifying view of geometry, semantics, and data association in SLAM
4:39:03 Spotlight talk 2: Nathaniel Glaser – Enhancing multi-robot perception via learned data association

Researchers create new model that aims to give robots human-like perception of their physical environments

Wouldn't we all appreciate a little help around the house, especially if that help came in the form of a smart, adaptable, uncomplaining robot? Sure, there are the one-trick Roombas of the appliance world. But MIT engineers are envisioning robots more like home helpers, able to follow high-level, Alexa-type commands, such as "Go to the kitchen and fetch me a coffee cup."