Archive 29.04.2022

An easier way to teach robots new skills

MIT researchers have developed a system that enables a robot to learn a new pick-and-place task based on only a handful of human examples. This could allow a human to reprogram a robot to grasp never-before-seen objects, presented in random poses, in about 15 minutes. Courtesy of the researchers

By Adam Zewe | MIT News Office

With e-commerce orders pouring in, a warehouse robot picks mugs off a shelf and places them into boxes for shipping. Everything is humming along, until the warehouse's processes change and the robot must now grasp taller, narrower mugs that are stored upside down.

Reprogramming that robot involves hand-labeling thousands of images that show it how to grasp these new mugs, then training the system all over again.

But a new technique developed by MIT researchers would require only a handful of human demonstrations to reprogram the robot. This machine-learning method enables a robot to pick up and place never-before-seen objects that are in random poses it has never encountered. Within 10 to 15 minutes, the robot would be ready to perform a new pick-and-place task.

The technique uses a neural network specifically designed to reconstruct the shapes of 3D objects. With just a few demonstrations, the system uses what the neural network has learned about 3D geometry to grasp new objects that are similar to those in the demos.

In simulations and using a real robotic arm, the researchers show that their system can effectively manipulate never-before-seen mugs, bowls, and bottles, arranged in random poses, using only 10 demonstrations to teach the robot.

“Our major contribution is the general ability to much more efficiently provide new skills to robots that need to operate in more unstructured environments where there could be a lot of variability. The concept of generalization by construction is a fascinating capability because this problem is typically so much harder,” says Anthony Simeonov, a graduate student in electrical engineering and computer science (EECS) and co-lead author of the paper.

Simeonov wrote the paper with co-lead author Yilun Du, an EECS graduate student; Andrea Tagliasacchi, a staff research scientist at Google Brain; Joshua B. Tenenbaum, the Paul E. Newton Career Development Professor of Cognitive Science and Computation in the Department of Brain and Cognitive Sciences and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL); Alberto Rodriguez, the Class of 1957 Associate Professor in the Department of Mechanical Engineering; and senior authors Pulkit Agrawal, a professor in CSAIL, and Vincent Sitzmann, an incoming assistant professor in EECS. The research will be presented at the International Conference on Robotics and Automation.

Grasping geometry

A robot may be trained to pick up a specific item, but if that object is lying on its side (perhaps it fell over), the robot sees this as a completely new scenario. This is one reason it is so hard for machine-learning systems to generalize to new object orientations.

To overcome this challenge, the researchers created a new type of neural network model, a Neural Descriptor Field (NDF), that learns the 3D geometry of a class of items. The model computes the geometric representation for a specific item using a 3D point cloud, which is a set of data points or coordinates in three dimensions. The data points can be obtained from a depth camera that provides information on the distance between the object and a viewpoint. While the network was trained in simulation on a large dataset of synthetic 3D shapes, it can be directly applied to objects in the real world.
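To make the input format concrete, here is a minimal sketch of how a depth image is commonly back-projected into such a point cloud using pinhole camera intrinsics. The helper name, the intrinsic values, and the synthetic depth image are illustrative assumptions, not the researchers' released code.

    import numpy as np

    def depth_to_point_cloud(depth, fx, fy, cx, cy):
        """Back-project a depth image (in meters) into a 3D point cloud
        using pinhole camera intrinsics (fx, fy, cx, cy)."""
        h, w = depth.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        z = depth
        x = (u - cx) * z / fx  # horizontal pixel offset scaled by depth
        y = (v - cy) * z / fy  # vertical pixel offset scaled by depth
        points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
        return points[points[:, 2] > 0]  # drop pixels with no depth reading

    # Example: a synthetic 4x4 depth image, everything 0.5 m from the camera
    depth = np.full((4, 4), 0.5)
    cloud = depth_to_point_cloud(depth, fx=600.0, fy=600.0, cx=2.0, cy=2.0)
    print(cloud.shape)  # (16, 3): one 3D point per valid pixel

A cloud like this, captured by the depth camera, is what the NDF consumes to compute an object's geometric representation.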

The team designed the NDF with a property known as equivariance. With this property, if the model is shown an image of an upright mug, and then shown an image of the same mug on its side, it understands that the second mug is the same object, just rotated.

“This equivariance is what allows us to much more effectively handle cases where the object you observe is in some arbitrary orientation,” Simeonov says.
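The published model builds this property into the network itself, but the intuition can be shown with a toy descriptor: if a descriptor depends only on relative geometry, such as distances between a query point and the object's points, then rotating the object and the query point together leaves it unchanged. Everything below, including the stand-in point cloud and the toy descriptor, is an illustrative assumption rather than the actual NDF architecture.

    import numpy as np

    def toy_descriptor(query, cloud, k=5):
        """A toy, rotation-invariant descriptor: sorted distances from a query
        point to its k nearest object points. The real NDF uses a learned,
        equivariant neural network instead."""
        d = np.linalg.norm(cloud - query, axis=1)
        return np.sort(d)[:k]

    rng = np.random.default_rng(0)
    mug = rng.normal(size=(200, 3))    # stand-in point cloud for a mug
    query = np.array([0.1, 0.2, 0.3])  # e.g., a point near the handle

    # Rotate the whole scene 90 degrees about the z-axis (same mug, new orientation)
    R = np.array([[0., -1., 0.],
                  [1.,  0., 0.],
                  [0.,  0., 1.]])
    mug_rotated, query_rotated = mug @ R.T, R @ query

    d_upright = toy_descriptor(query, mug)
    d_rotated = toy_descriptor(query_rotated, mug_rotated)
    print(np.allclose(d_upright, d_rotated))  # True: the rotation does not change it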

As the NDF learns to reconstruct shapes of similar objects, it also learns to associate related parts of those objects. For instance, it learns that the handles of mugs are similar, even if some mugs are taller or wider than others, or have smaller or longer handles.

“If you wanted to do this with another approach, you’d have to hand-label all the parts. Instead, our approach automatically discovers these parts from the shape reconstruction,” Du says.

The researchers use this trained NDF model to teach a robot a new skill with only a few physical examples. They move the hand of the robot onto the part of an object they want it to grip, like the rim of a bowl or the handle of a mug, and record the locations of the fingertips.

Because the NDF has learned so much about 3D geometry and how to reconstruct shapes, it can infer the structure of a new shape, which enables the system to transfer the demonstrations to new objects in arbitrary poses, Du explains.
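In spirit, the transfer amounts to finding the spot on a new object whose local geometry best matches the spot the human demonstrated on the old one. The sketch below mimics that idea with the same kind of toy, pose-invariant descriptor; the actual method optimizes a full 6-DoF gripper pose against learned neural descriptors, so treat this purely as a simplified illustration with made-up data.

    import numpy as np

    def toy_descriptor(query, cloud, k=5):
        """Sorted distances to the k nearest object points (a pose-invariant toy)."""
        d = np.linalg.norm(cloud - query, axis=1)
        return np.sort(d)[:k]

    def transfer_grasp_point(demo_cloud, demo_grasp, new_cloud, k=5):
        """Pick the point on the new object whose descriptor best matches the
        descriptor recorded at the demonstrated grasp point."""
        target = toy_descriptor(demo_grasp, demo_cloud, k)
        scores = [np.linalg.norm(toy_descriptor(p, new_cloud, k) - target)
                  for p in new_cloud]
        return new_cloud[int(np.argmin(scores))]

    rng = np.random.default_rng(1)
    demo_mug = rng.normal(size=(150, 3))  # stand-in cloud for the demo mug
    demo_grasp = demo_mug[0]              # where the fingertips were recorded

    # A "new" mug: here, the same shape dropped into an arbitrary new pose
    R = np.array([[0., 0., 1.],
                  [1., 0., 0.],
                  [0., 1., 0.]])
    new_mug = demo_mug @ R.T

    grasp = transfer_grasp_point(demo_mug, demo_grasp, new_mug)
    print(np.allclose(grasp, demo_grasp @ R.T))  # True: same spot, new pose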

Picking a winner

They tested their model in simulations and on a real robotic arm using mugs, bowls, and bottles as objects. Their method had a success rate of 85 percent on pick-and-place tasks with new objects in new orientations, while the best baseline was only able to achieve a success rate of 45 percent. Success means grasping a new object and placing it on a target location, like hanging mugs on a rack.

Many baselines use 2D image information rather than 3D geometry, which makes it more difficult for these methods to integrate equivariance. This is one reason the NDF technique performed so much better.

While the researchers were happy with its performance, their method only works for the particular object category on which it is trained. A robot taught to pick up mugs won’t be able to pick up boxes or headphones, since these objects have geometric features that are too different from what the network was trained on.

“In the future, scaling it up to many categories or completely letting go of the notion of category altogether would be ideal,” Simeonov says.

They also plan to adapt the system for nonrigid objects and, in the longer term, enable the system to perform pick-and-place tasks when the target area changes.

This work is supported, in part, by the Defense Advanced Research Projects Agency, the Singapore Defense Science and Technology Agency, and the National Science Foundation.

Developing a better ionic skin

In the quest to build smart skin that mimics the sensing capabilities of natural skin, ionic skins have shown significant advantages. They're made of flexible, biocompatible hydrogels that use ions to carry an electrical charge. In contrast to smart skins made of plastics and metals, the hydrogels have the softness of natural skin. This offers a more natural feel to the prosthetic arm or robot hand they are mounted on, and makes them comfortable to wear.

New jumping device achieves the tallest height of any known jumper, engineered or biological

A mechanical jumper developed by UC Santa Barbara engineering professor Elliot Hawkes and collaborators is capable of achieving the tallest height—roughly 100 feet (30 meters)—of any jumper to date, engineered or biological. The feat represents a fresh approach to the design of jumping devices and advances the understanding of jumping as a form of locomotion.

Electronic skin anticipates and perceives touch from different directions for the first time

A research team from Chemnitz and Dresden has taken a major step forward in the development of sensitive electronic skin (e-skin) with integrated artificial hairs. E-skins are flexible electronic systems that try to mimic the sensitivity of their natural human skin counterparts. Applications range from skin replacement and medical sensors on the body to artificial skin for humanoid robots and androids. Tiny surface hairs can perceive and anticipate the slightest tactile sensation on human skin and even recognize the direction of touch. Modern electronic skin systems lack this capability and cannot gather this critical information about their vicinity.

A newcomer’s guide to #ICRA2022: A primer

Dear robotics graduate students and newcomers to robotics,

If your experience of delving into robotics today is anything like what I imagine, the majority of your time is spent as follows:

  • navigating Slack channels while tuning into some online lectures,
  • trying to figure out whether you should be reading more papers, coding more, or if you are just a slow reader/coder/etc,
  • and if you are a grad student, in particular, having a never-ending cycle of self-doubt questions that seem super important, such as “what is a research question anyway? And how do you find one that no one has tackled before — and not because it’s a dumb question? Oh wait… is my question dumb? Can I find out without asking my prof?”

Many of the questions I had as a grad student stemmed from my lack of knowledge about what is considered to be normal by academia, the robotics community, and more narrowly the subdomain of robotics I belonged to. The friends/colleagues/people I met along the way helped me fill that much-needed knowledge gap about the norm because many of us were on the same journey with similar struggles/questions.

As an assistant professor who has spent the majority of my professorship in COVID-19 pandemic mode, I worry that my students’ grad school journey has not offered the same kind of shared experience and camaraderie with people in the domain, the kind whose huge benefit I am now seeing.

The upcoming IEEE International Conference on Robotics and Automation 2022 will be the first robotics conference that many of you attend in person since the pandemic (I’m in this category). For many of you, it may be your first time attending an academic conference. For even more of you, this may be your first virtual attendance at ICRA.

ICRA is a multi-track, week-long robotics festivity that draws in thousands. It can pass you by in a blink.

So, in a series of short blog posts (because, who has the time these days), I am going to highlight a few things in the form of a millennial’s guide to ICRA.

I’m assuming that you, the reader, may be as impatient a reader as I am, who likes information presented in a short, snappy, and organized way. The more bullet points the better.

So let’s get started.

** Full disclosure, I’m one of the two publicity co-chairs for the ICRA conference. If you want to be on the grounds of ICRA as a student science communicator, reach out to us. **

The ALLOMAN hexapod robot is a novel multifunctional platform with leg-arm integration

A research group from the Robotics Institute of Beihang University, China, has developed a novel multifunctional hexapod robot with leg-arm integration, named ALLOMAN (Arm-Leg Locomotion and Manipulation). Besides locomotion, the robot can perform various fixed-base manipulation functions, and the researchers have also achieved mobile manipulation, which is difficult for legged robots. The study was published in the journal Frontiers of Mechanical Engineering on April 8, 2022.

Scientists to develop electronic noses to track down body odors

In April 2022, the project "Smart Electronic Olfaction for Body Odor Diagnostics"—SMELLODI for short—started with its kick-off meeting. The objective of the seven partners from Germany, Israel and Finland is to develop intelligent electronic sensor systems that can distinguish between healthy body odors and those altered by disease, and transmit them digitally. Over a period of three years and with funding of almost 3 million euros, the partners aim to develop technology that paves the way for digitizing the sense of smell.

A robot called Lyra is helping transform nuclear infrastructure inspection

A robot named Lyra has been used to inspect a ventilation duct in Dounreay's redundant nuclear laboratories and map radioactive materials. Lyra traversed 140 meters of duct from a single entry point and provided operators with detailed radiological characterization information that can now be used to help plan safe and efficient decommissioning of the laboratories.

Microrobot collectives display versatile movement patterns

Researchers at the Max Planck Institute for Intelligent Systems (MPI-IS), Cornell University and Shanghai Jiao Tong University have developed collectives of microrobots which can move in any desired formation. The miniature particles are capable of reconfiguring their swarm behavior quickly and robustly. Floating on the surface of water, the versatile microrobotic disks can go round in circles, dance the boogie, bunch up into a clump, spread out like gas or form a straight line like beads on a string.

A model to improve robots’ ability to hand over objects to humans

For decades, researchers worldwide have been trying to develop robots that can efficiently assist humans and work alongside them as they tackle a variety of everyday tasks. To do this effectively, however, the robots should be able to interact naturally with humans, including handing objects to them and receiving objects from them.