Artificial skin could give robots a sense of touch similar to humans
Soft assistive robotic wearables get a boost from rapid design tool
What’s the Difference Between AGVs and AMRs?
Automation for All
A newcomer’s guide to #ICRA2022: Tutorials
I believe that one of the best ways to get the training you need for the robotics job market is to attend tutorials at conferences like ICRA. Unlike workshops, where you might listen to work-in-progress talks, workshop paper presentations and panel discussions, tutorials are exactly what they sound like: hands-on learning sessions on technical tools and skills, with specific learning objectives.
As such, most tutorials expect you to come prepared to actively participate and follow along. For instance, the “Tools for Robotic Reinforcement Learning” tutorial expects you to arrive knowing how to code in Python and with basic knowledge of reinforcement learning, because you’ll be using those skills in the hands-on sessions.
There are seven tutorials this year.
Those interested in the intersection of machine learning and robotics (yes, yes, robotics and AI don’t refer to the same thing, even if many people use the terms interchangeably) might find the following tutorials interesting.
- Tools for Robotic Reinforcement Learning
- Data-driven revolution in autonomous driving
- Learning Motion Control for Mobile Robot Navigation – A Tutorial
For those who are more into robots that navigate around our environment, the NavAbility Tutorial Workshop on Non-Gaussian SLAM and Computation is a series of hands-on tutorials that should be highly interesting. Bring your own laptop and be prepared to go from getting to know non-Gaussian SLAM to solving your own SLAM problems.
Meanwhile, roboticists who are more hardware- and design-oriented might find the Jamming in Robotics: From Fundamental Building Blocks to Robotic Applications tutorial useful. By “jamming”, they don’t mean musicians coming together to create cool music — I only found this out through the tutorial website. It refers to the way robots can grab items without needing traditional, fingered grippers.
The Tutorial on Koopman Operator and Lifting Linearization: Emerging Theory and Applications of Exact Global Linearization would be interesting for anyone drawn to the mathy/control/theory side of robotics. Koopman operators have been the buzz in the robotics community recently, and the tutorial is sure to give you an in-depth look at what the buzz is all about.
Lastly, the How to write an R-article and benchmark your results tutorial is one to watch out for. It will tell you all about publishing reproducibility-friendly articles and emphasize the usefulness of doing research in reproducible ways.
How to compete with robots
When it comes to the future of intelligent robots, the first question people ask is often: how many jobs will they make disappear? Whatever the answer, the second question is likely to be: how can I make sure that my job is not among them?
In a study just published in Science Robotics, a team of roboticists from EPFL and economists from the University of Lausanne offers answers to both questions. By combining the scientific and technical literature on robotic abilities with employment and wage statistics, they have developed a method to calculate which of the currently existing jobs are more at risk of being performed by machines in the near future. Additionally, they have devised a method for suggesting career transitions to jobs that are less at risk and require the smallest retraining effort.
“There are several studies predicting how many jobs will be automated by robots, but they all focus on software robots, such as speech and image recognition, financial robo-advisers, chatbots, and so forth. Furthermore, those predictions wildly oscillate depending on how job requirements and software abilities are assessed. Here, we consider not only artificial intelligence software, but also real intelligent robots that perform physical work and we developed a method for a systematic comparison of human and robotic abilities used in hundreds of jobs”, says Prof. Dario Floreano, Director of EPFL’s Laboratory of Intelligent Systems, who led the study at EPFL.
The key innovation of the study is a new mapping of robot capabilities onto job requirements. The team looked into the European H2020 Robotic Multi-Annual Roadmap (MAR), a strategy document by the European Commission that is periodically revised by robotics experts. The MAR describes dozens of abilities that are required from current robots or may be required by future ones, organised in categories such as manipulation, perception, sensing, and interaction with humans. The researchers went through research papers, patents, and descriptions of robotic products to assess the maturity level of robotic abilities, using a well-known scale for measuring the level of technology development: the “technology readiness level” (TRL).
For human abilities, they relied on the O*net database, a widely used database on the US job market that classifies approximately 1,000 occupations and breaks down the skills and knowledge that are most crucial for each of them.
After selectively matching the human abilities from O*net list to robotic abilities from the MAR document, the team could calculate how likely each existing job occupation is to be performed by a robot. Say, for example, that a job requires a human to work at millimetre-level precision of movements. Robots are very good at that, and the TRL of the corresponding ability is thus the highest. If a job requires enough such skills, it will be more likely to be automated than one that requires abilities such as critical thinking or creativity.
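As a rough illustration of that kind of matching (not the study’s exact formula), a job’s risk could be computed as an importance-weighted average of the TRLs of the robotic abilities corresponding to its required skills. The sketch below uses made-up skill names, weights, and TRL values purely for illustration.

```python
# Hypothetical sketch of an automation-risk score: not the paper's exact
# formula, just an illustration of matching a job's skill importances
# (as in O*NET) to the technology readiness level (TRL, 1-9) of the
# corresponding robotic abilities (as in the MAR document).

def automation_risk(skill_importance, ability_trl):
    """Importance-weighted mean of normalised TRLs; closer to 1 means more
    of the job's important skills are already mature in robots."""
    total = sum(skill_importance.values())
    score = sum(weight * ability_trl.get(skill, 1) / 9.0
                for skill, weight in skill_importance.items())
    return score / total

# Illustrative numbers only.
job = {"fine manipulation": 0.8, "critical thinking": 0.5, "creativity": 0.3}
trl = {"fine manipulation": 9, "critical thinking": 2, "creativity": 1}
print(f"risk = {automation_risk(job, trl):.2f}")
```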
The result is a ranking of the 1,000 jobs, with “Physicists” being the ones who have the lowest risk of being replaced by a machine, and “Slaughterers and Meat Packers”, who face the highest risk. In general, jobs in food processing, building and maintenance, construction and extraction appear to have the highest risk.
“The key challenge for society today is how to become resilient against automation” says Prof. Rafael Lalive, who co-led the study at the University of Lausanne. “Our work provides detailed career advice for workers who face high risks of automation, which allows them to take on more secure jobs while re-using many of the skills acquired on the old job. Through this advice, governments can support society in becoming more resilient against automation.”
The authors then created a method to find, for any given job, alternative jobs that have a significantly lower automation risk and are reasonably close to the original one in terms of the abilities and knowledge they require – thus keeping the retraining effort minimal and making the career transition feasible. To test how that method would perform in real life, they used data from the US workforce and simulated thousands of career moves based on the algorithm’s suggestions, finding that it would indeed allow workers in the occupations with the highest risk to shift towards medium-risk occupations, while undergoing a relatively low retraining effort.
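A minimal sketch of that transition logic, under the simplifying assumption that each occupation is a vector of required-ability levels and that retraining effort is approximated by the distance between vectors (the occupation names, vectors, and risk threshold below are illustrative, not the study’s actual data or parameters):

```python
# Illustrative sketch: suggest the closest occupation (in ability space)
# whose automation risk is sufficiently lower than the current one.
import numpy as np

def suggest_transition(current, jobs, risk, min_risk_drop=0.2):
    """jobs: name -> ability vector; risk: name -> risk in [0, 1]."""
    origin = jobs[current]
    candidates = [(np.linalg.norm(vec - origin), name)
                  for name, vec in jobs.items()
                  if name != current and risk[name] <= risk[current] - min_risk_drop]
    # Smallest ability-vector distance stands in for smallest retraining effort.
    return min(candidates)[1] if candidates else None

jobs = {"meat packer": np.array([0.9, 0.2, 0.1]),
        "machine operator": np.array([0.7, 0.4, 0.2]),
        "physicist": np.array([0.1, 0.9, 0.9])}
risk = {"meat packer": 0.9, "machine operator": 0.6, "physicist": 0.1}
print(suggest_transition("meat packer", jobs, risk))  # -> "machine operator"
```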
The method could be used by governments to measure how many workers could face automation risks and adjust retraining policies, by companies to assess the costs of increasing automation, by robotics manufacturers to better tailor their products to market needs, and by the public to identify the easiest route to reposition themselves on the job market.
Finally, the authors translated the new methods and data into an algorithm that predicts the risk of automation for hundreds of jobs and suggests resilient career transitions at minimal retraining effort, publicly accessible at http://lis2.epfl.ch/resiliencetorobots.
This research was funded by the CROSS (Collaborative Research on Science and Society) Program in EPFL’s College of Humanities; by the Enterprise for Society Center at EPFL; as part of NCCR Robotics, a National Centre of Competence in Research funded by the Swiss National Science Foundation (SNSF grant number 51NF40_185543); by the European Commission through the Horizon 2020 projects AERIAL-CORE (grant agreement no. 871479) and MERGING (grant agreement no. 869963); and by SNSF grant no. 100018_178878.
6 Emerging Trends in Marine Robotics
An easier way to teach robots new skills
By Adam Zewe | MIT News Office
With e-commerce orders pouring in, a warehouse robot picks mugs off a shelf and places them into boxes for shipping. Everything is humming along, until the warehouse processes a change and the robot must now grasp taller, narrower mugs that are stored upside down.
Reprogramming that robot involves hand-labeling thousands of images that show it how to grasp these new mugs, then training the system all over again.
But a new technique developed by MIT researchers would require only a handful of human demonstrations to reprogram the robot. This machine-learning method enables a robot to pick up and place never-before-seen objects that are in random poses it has never encountered. Within 10 to 15 minutes, the robot would be ready to perform a new pick-and-place task.
The technique uses a neural network specifically designed to reconstruct the shapes of 3D objects. With just a few demonstrations, the system uses what the neural network has learned about 3D geometry to grasp new objects that are similar to those in the demos.
In simulations and using a real robotic arm, the researchers show that their system can effectively manipulate never-before-seen mugs, bowls, and bottles, arranged in random poses, using only 10 demonstrations to teach the robot.
“Our major contribution is the general ability to much more efficiently provide new skills to robots that need to operate in more unstructured environments where there could be a lot of variability. The concept of generalization by construction is a fascinating capability because this problem is typically so much harder,” says Anthony Simeonov, a graduate student in electrical engineering and computer science (EECS) and co-lead author of the paper.
Simeonov wrote the paper with co-lead author Yilun Du, an EECS graduate student; Andrea Tagliasacchi, a staff research scientist at Google Brain; Joshua B. Tenenbaum, the Paul E. Newton Career Development Professor of Cognitive Science and Computation in the Department of Brain and Cognitive Sciences and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL); Alberto Rodriguez, the Class of 1957 Associate Professor in the Department of Mechanical Engineering; and senior authors Pulkit Agrawal, a professor in CSAIL, and Vincent Sitzmann, an incoming assistant professor in EECS. The research will be presented at the International Conference on Robotics and Automation.
Grasping geometry
A robot may be trained to pick up a specific item, but if that object is lying on its side (perhaps it fell over), the robot sees this as a completely new scenario. This is one reason it is so hard for machine-learning systems to generalize to new object orientations.
To overcome this challenge, the researchers created a new type of neural network model, a Neural Descriptor Field (NDF), that learns the 3D geometry of a class of items. The model computes the geometric representation for a specific item using a 3D point cloud, which is a set of data points or coordinates in three dimensions. The data points can be obtained from a depth camera that provides information on the distance between the object and a viewpoint. While the network was trained in simulation on a large dataset of synthetic 3D shapes, it can be directly applied to objects in the real world.
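For readers unfamiliar with point clouds, here is a minimal sketch of how a depth image can be back-projected into a 3D point cloud using the standard pinhole camera model; the camera intrinsics (fx, fy, cx, cy) are assumed to be known, and this is generic geometry rather than the paper’s own code.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (metres, H x W) into an N x 3 point cloud
    in the camera frame, using the pinhole camera model."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop pixels with no depth reading
```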
The team designed the NDF with a property known as equivariance. With this property, if the model is shown an image of an upright mug, and then shown an image of the same mug on its side, it understands that the second mug is the same object, just rotated.
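A toy example of the underlying idea (not the actual NDF network): a descriptor built only from distances between a query point and the object’s point cloud gives the same answer when both are rigidly rotated and translated, so the same part of the object is described identically in any pose.

```python
# Toy illustration of the equivariance/invariance idea behind NDFs:
# a distance-based descriptor is unchanged under a rigid transform applied
# to both the query point and the object's point cloud.
import numpy as np

def toy_descriptor(query, cloud):
    return np.sort(np.linalg.norm(cloud - query, axis=1))[:16]

rng = np.random.default_rng(0)
cloud = rng.normal(size=(200, 3))      # stand-in for a mug point cloud
query = np.array([0.1, 0.2, 0.0])      # stand-in for "the handle"

R, _ = np.linalg.qr(rng.normal(size=(3, 3)))   # random orthogonal rotation
t = np.array([0.5, -1.0, 2.0])                 # random translation
d_before = toy_descriptor(query, cloud)
d_after = toy_descriptor(R @ query + t, cloud @ R.T + t)
print(np.allclose(d_before, d_after))  # True: same description in any pose
```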
“This equivariance is what allows us to much more effectively handle cases where the object you observe is in some arbitrary orientation,” Simeonov says.
As the NDF learns to reconstruct shapes of similar objects, it also learns to associate related parts of those objects. For instance, it learns that the handles of mugs are similar, even if some mugs are taller or wider than others, or have smaller or longer handles.
“If you wanted to do this with another approach, you’d have to hand-label all the parts. Instead, our approach automatically discovers these parts from the shape reconstruction,” Du says.
The researchers use this trained NDF model to teach a robot a new skill with only a few physical examples. They move the hand of the robot onto the part of an object they want it to grip, like the rim of a bowl or the handle of a mug, and record the locations of the fingertips.
Because the NDF has learned so much about 3D geometry and how to reconstruct shapes, it can infer the structure of a new shape, which enables the system to transfer the demonstrations to new objects in arbitrary poses, Du explains.
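A heavily simplified sketch of that transfer step: record the descriptor at the demonstrated fingertip location on the demo object, then pick the point on the new object whose descriptor matches it most closely. The real method optimises a full gripper pose against the learned neural field; the distance-based toy descriptor below is only illustrative.

```python
# Simplified transfer sketch: nearest-descriptor matching instead of the
# full pose optimisation used in the actual NDF pipeline.
import numpy as np

def toy_descriptor(query, cloud):
    return np.sort(np.linalg.norm(cloud - query, axis=1))[:16]

def transfer_grasp_point(demo_cloud, demo_grasp_point, new_cloud):
    target = toy_descriptor(demo_grasp_point, demo_cloud)
    errors = [np.linalg.norm(toy_descriptor(p, new_cloud) - target)
              for p in new_cloud]
    return new_cloud[int(np.argmin(errors))]  # best-matching point on new object
```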
Picking a winner
They tested their model in simulations and on a real robotic arm using mugs, bowls, and bottles as objects. Their method had a success rate of 85 percent on pick-and-place tasks with new objects in new orientations, while the best baseline was only able to achieve a success rate of 45 percent. Success means grasping a new object and placing it on a target location, like hanging mugs on a rack.
Many baselines use 2D image information rather than 3D geometry, which makes it more difficult for these methods to integrate equivariance. This is one reason the NDF technique performed so much better.
While the researchers were happy with its performance, their method only works for the particular object category on which it is trained. A robot taught to pick up mugs won’t be able to pick up boxes or headphones, since these objects have geometric features that are too different from what the network was trained on.
“In the future, scaling it up to many categories or completely letting go of the notion of category altogether would be ideal,” Simeonov says.
They also plan to adapt the system for nonrigid objects and, in the longer term, enable the system to perform pick-and-place tasks when the target area changes.
This work is supported, in part, by the Defense Advanced Research Projects Agency, the Singapore Defense Science and Technology Agency, and the National Science Foundation.