UNIVERSITY PARK, Pa. — Repeated activity wears on soft robotic actuators, whose moving parts need to be reliable and easily fixed. Now a team of researchers has developed a self-healing, biodegradable biosynthetic polymer patterned after squid ring teeth, a material suited not only to actuators but also to hazmat suits and other applications where tiny holes could pose a danger.
“Current self-healing materials have shortcomings that limit their practical application, such as low healing strength and long healing times (hours),” the researchers report in today’s (July 27) issue of Nature Materials.
The researchers produced high-strength synthetic proteins that mimic those found in nature. Like the creatures they are patterned on, the proteins can self-heal both minute and visible damage.
“Our goal is to create self-healing programmable materials with unprecedented control over their physical properties using synthetic biology,” said Melik Demirel, professor of engineering science and mechanics and holder of the Lloyd and Dorothy Foehr Huck Chair in Biomimetic Materials at Penn State.
Robotic machines such as industrial robotic arms and prosthetic legs have joints that move and require a soft material to accommodate that movement. So do ventilators and many kinds of personal protective equipment. But all materials under continual repetitive motion develop tiny tears and cracks and eventually break. With a self-healing material, these initial tiny defects can be repaired before catastrophic failure ensues.
Demirel’s team creates the self-healing polymer using a series of DNA tandem repeats, produced by gene duplication, that encode repeating sequences of amino acids. Tandem repeats are usually short series of molecules arranged to repeat themselves any number of times. The researchers manufacture the polymer in standard bacterial bioreactors.
“We were able to reduce a typical 24-hour healing period to one second so our protein-based soft robots can now repair themselves immediately,” said Abdon Pena-Francesch, lead author of the paper and a former doctoral student in Demirel’s lab. “In nature, self-healing takes a long time. In this sense, our technology outsmarts nature.”
The self-healing polymer heals with the application of water and heat, although Demirel said that it could also heal using light.
“If you cut this polymer in half, when it heals it gains back 100% of its strength,” said Demirel.
Metin Sitti, director of the Physical Intelligence Department at the Max Planck Institute for Intelligent Systems, Stuttgart, Germany, and his team were working with the polymer, creating holes and healing them. They then created soft actuators that, through use, cracked and then healed in real time — about one second.
“Self-repairing, physically intelligent soft materials are essential for building robust and fault-tolerant soft robots and actuators in the near future,” said Sitti.
By adjusting the number of tandem repeats, Demirel’s team created a soft polymer that heals rapidly and retains its original strength. The polymer is also 100% biodegradable and 100% recyclable into the same, original polymer.
“We want to minimize the use of petroleum-based polymers for many reasons,” said Demirel. “Sooner or later we will run out of petroleum and it is also polluting and causing global warming. We can’t compete with the really inexpensive plastics. The only way to compete is to supply something the petroleum-based polymers can’t deliver and self-healing provides the performance needed.”
Demirel explained that while many petroleum-based polymers can be recycled, they are recycled into something different. For example, polyester t-shirts can be recycled into bottles, but not into polyester fibers again.
Just as the squid that the polymer mimics biodegrades in the ocean, the biomimetic polymer will biodegrade. With the addition of an acid, such as vinegar, the polymer can also be recycled into a powder that can be manufactured again into the same soft, self-healing polymer.
“This research illuminates the landscape of material properties that become accessible by going beyond proteins that exist in nature using synthetic biology approaches,” said Stephanie McElhinny, biochemistry program manager in the Army Research Office, an element of the U.S. Army Combat Capabilities Development Command’s Army Research Laboratory. “The rapid and high-strength self-healing of these synthetic proteins demonstrates the potential of this approach to deliver novel materials for future Army applications, such as personal protective equipment or flexible robots that could maneuver in confined spaces.”
Also working on this project was Huihun Jung, postdoctoral fellow in engineering science and mechanics, Penn State.
The Max Planck Society, the Alexander von Humboldt Foundation, the Federal Ministry for Education and Research of Germany, the U.S. Army Research Office, and the Huck Endowment of the Pennsylvania State University supported this work.
Originally posted as “Soft robotic actuators heal themselves” at Penn State on July 27, 2020.
In this episode, Lauren Klein interviews Muralidharan Arikara, CEO of Xarpie Labs. Xarpie Labs creates visualization and simulation experiences for the retail, healthcare, and defense industries. Arikara describes how Xarpie Labs grew as part of the Machani Group, which has decades of experience in automotive manufacturing, into an innovator in virtual and augmented reality. He elaborates on the role of Xarpie Labs’ virtual reality and augmented reality experiences in giving real estate customers a view of properties from a distance. Arikara also paints a picture of Xarpie Labs’ augmented reality tools, including a project for a museum to visualize old gramophones for visitors and an air conditioning troubleshooting tool for technicians.
Muralidharan Arikara is an entrepreneur and the CEO of Xarpie Labs. Arikara is an engineer by training, and worked in the clean energy industry prior to his current role. In addition to his role as CEO at Xarpie Labs, he is also a member of the advisory board of several startups in the energy and consumer segments.
The DARPA Subterranean (SubT) Challenge aims to develop innovative technologies that would augment operations underground. On July 20, Dr. Timothy Chung, the DARPA SubT Challenge Program Manager, joined Silicon Valley Robotics to discuss the upcoming Cave Circuit and Subterranean Challenge Finals, and the opportunities that still exist for individual and team entries in both the Virtual and Systems Challenges, as covered in the video below.
The SubT Challenge allows teams to demonstrate new approaches for robotic systems to rapidly map, navigate, and search complex underground environments, including human-made tunnel systems, urban underground, and natural cave networks.
The SubT Challenge is organized into two Competitions (Systems and Virtual), each with two tracks (DARPA-funded and self-funded).
SYSTEMS COMPETITION RESULTS
Teams in the Systems Competition completed four total runs, two 60-minute runs on each of two courses, Experimental and Safety Research. The courses varied in difficulty and included 20 artifacts each. Teams earned points by correctly identifying artifacts within a five-meter accuracy. The final score was a total of each team’s best score from each of the courses. In instances of a points tie, team rank was determined by (1) earliest time the last artifact was successfully reported, averaged across the team’s best runs on each course; (2) earliest time the first artifact was successfully reported, averaged across the team’s best runs on each course; and (3) lowest average time across all valid artifact reports, averaged across the team’s best runs on each course.
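The tie-breaking rules above amount to a lexicographic comparison over a team's best runs. Here is a minimal sketch in Python; the team names, run data, and field names are hypothetical, chosen only to illustrate the ranking logic:

```python
from dataclasses import dataclass

@dataclass
class Run:
    score: int                # points earned on this run
    last_report_time: float   # minutes: when the last artifact was reported
    first_report_time: float  # minutes: when the first artifact was reported
    mean_report_time: float   # minutes: average across all valid reports

def team_rank_key(best_runs):
    """Ranking key for one team, given its best run on each course.

    Higher total score ranks first; ties are broken by the earliest
    averaged last-report time, then the earliest averaged first-report
    time, then the lowest averaged report time overall.
    """
    n = len(best_runs)
    total = sum(r.score for r in best_runs)
    avg_last = sum(r.last_report_time for r in best_runs) / n
    avg_first = sum(r.first_report_time for r in best_runs) / n
    avg_all = sum(r.mean_report_time for r in best_runs) / n
    # sorted() is ascending, so negate the score to put high scores first
    return (-total, avg_last, avg_first, avg_all)

# Two hypothetical teams tied on total points (20 each)
teams = {
    "Team A": [Run(11, 55.0, 5.0, 30.0), Run(9, 50.0, 6.0, 28.0)],
    "Team B": [Run(11, 52.0, 4.0, 27.0), Run(9, 51.0, 7.0, 29.0)],
}
ranking = sorted(teams, key=lambda t: team_rank_key(teams[t]))
```

With the sample data, both teams total 20 points, so the tie is broken by the averaged last-report time (51.5 minutes for Team B versus 52.5 for Team A), putting Team B first.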
The Tunnel Circuit final scores were as follows:
- 11 points: CoSTAR (Collaborative SubTerranean Autonomous Resilient Robots), DARPA-funded
- 10 points: CTU-CRAS, self-funded winner of the $200,000 Tunnel Circuit prize
- 9 points: MARBLE (Multi-agent Autonomy with Radar-Based Localization for Exploration), DARPA-funded
- 7 points: CSIRO Data61, DARPA-funded
- 5 points: CERBERUS (CollaborativE walking & flying RoBots for autonomous ExploRation in Underground Settings), DARPA-funded
- 2 points: NCTU (National Chiao Tung University), self-funded
- 1 point: CRETISE (Collaborative Robot Exploration and Teaming In Subterranean Environments), DARPA-funded
- 1 point: PLUTO (Pennsylvania Laboratory for Underground Tunnel Operations), DARPA-funded
- 0 points: Coordinated Robotics, self-funded
The Urban Circuit final scores were as follows:
- 16 points: CoSTAR (Collaborative SubTerranean Autonomous Resilient Robots), DARPA-funded
- 10 points: CTU-CRAS-NORLAB (Czech Technical University in Prague – Center for Robotics and Autonomous Systems – Northern Robotics Laboratory), self-funded winner of the $500,000 first place prize
- 9 points: CSIRO Data61, DARPA-funded
- 7 points: CERBERUS (CollaborativE walking & flying RoBots for autonomous ExploRation in Underground Settings), DARPA-funded
- 4 points: Coordinated Robotics, self-funded winner of the $250,000 second place prize
- 4 points: MARBLE (Multi-agent Autonomy with Radar-Based Localization for Exploration), DARPA-funded
- 2 points: NCTU (National Chiao Tung University), self-funded
- 1 point: NUS SEDS (National University of Singapore Students for Exploration and Development of Space), self-funded
VIRTUAL COMPETITION RESULTS
The Virtual competitors developed advanced software for their respective teams of virtual aerial and wheeled robots to explore tunnel environments, with the goal of finding various artifacts hidden throughout the virtual environment and reporting their locations and types to within a five-meter radius during each 60-minute simulation run. A correct report is worth one point and competitors win by accruing the most points across multiple, diverse simulated environments.
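The scoring rule described above can be sketched as a simple distance-and-type check; the artifact types and coordinates below are hypothetical, used only to illustrate the five-meter criterion:

```python
import math

def score_reports(reports, artifacts, radius=5.0):
    """Count correct artifact reports: matching type, within `radius` meters.

    `reports` and `artifacts` are lists of (type, (x, y, z)) tuples;
    each artifact can be credited at most once.
    """
    remaining = list(artifacts)
    points = 0
    for rtype, rpos in reports:
        for art in remaining:
            atype, apos = art
            if rtype == atype and math.dist(rpos, apos) <= radius:
                points += 1
                remaining.remove(art)  # an artifact scores only once
                break
    return points

# Hypothetical example: one report lands within 5 m, the other misses by 6 m
artifacts = [("backpack", (0, 0, 0)), ("survivor", (10, 0, 0))]
reports = [("backpack", (3, 4, 0)), ("survivor", (10, 0, 6))]
score = score_reports(reports, artifacts)
```

Here the backpack report is 5.0 meters from the artifact (just inside the radius) and scores, while the survivor report is 6 meters away and does not.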
The Tunnel Circuit final scores were as follows:
- 50 points: Coordinated Robotics, self-funded
- 14 points: SODIUM-24 Robotics, self-funded
- 1 point: Flying Fitches, self-funded
The Urban Circuit final scores were as follows:
- 150 points: BARCS (Bayesian Adaptive Robot Control System), DARPA-funded
- 115 points: Coordinated Robotics, self-funded winner of the $250,000 first place prize
- 21 points: Robotika, self-funded winner of the $150,000 second place prize
- 17 points: COLLEMBOLA (Communication Optimized, Low Latency Exploration, Map-Building and Object Localization Autonomy), DARPA-funded
- 7 points: Flying Fitches, self-funded winner of the $100,000 third place prize
- 7 points: SODIUM-24 Robotics, self-funded
2020 Cave Circuit and Finals
The Cave Circuit, the final of three Circuit events, is planned for later this year. The Final Event, planned for summer 2021, will put both Systems and Virtual teams to the test with courses that incorporate diverse elements from all three environments. Teams will compete for up to $2 million in the Systems Final Event and up to $1.5 million in the Virtual Final Event, with additional prizes.
Learn more about the opportunities to participate as either a Virtual or Systems team: https://www.subtchallenge.com/
Dr. Timothy Chung joined DARPA’s Tactical Technology Office as a program manager in February 2016. He serves as the Program Manager for the OFFensive Swarm-Enabled Tactics Program and the DARPA Subterranean (SubT) Challenge.
Prior to joining DARPA, Dr. Chung served as an Assistant Professor at the Naval Postgraduate School and Director of the Advanced Robotic Systems Engineering Laboratory (ARSENL). His academic interests included modeling, analysis, and systems engineering of operational settings involving unmanned systems, combining collaborative autonomy development efforts with an extensive live-fly field experimentation program for swarm and counter-swarm unmanned system tactics and associated technologies.
Dr. Chung holds a Bachelor of Science in Mechanical and Aerospace Engineering from Cornell University. He also earned Master of Science and Doctor of Philosophy degrees in Mechanical Engineering from the California Institute of Technology.
Learn more about DARPA here: www.darpa.mil
By Akhil Padmanabha and Frederik Ebert
Touch has been shown to be important for dexterous manipulation in robotics. Recently, the GelSight sensor has attracted significant interest for learning-based robotics due to its low cost and rich signal. For example, GelSight sensors have been used for learning to insert USB cables (Li et al., 2014), roll a die (Tian et al., 2019), and grasp objects (Calandra et al., 2017).
The reason learning-based methods work well with GelSight sensors is that they output high-resolution tactile images from which a variety of features, such as object geometry, surface texture, and normal and shear forces, can be estimated, and these features often prove critical to robotic control. The tactile images can be fed into standard CNN-based computer vision pipelines, allowing the use of many different learning-based techniques: in Calandra et al. 2017, a grasp-success classifier is trained on GelSight data collected in a self-supervised manner; in Tian et al. 2019, Visual Foresight, a video-prediction-based control algorithm, is used to make a robot roll a die purely based on tactile images; and in Lambeta et al. 2020, a model-based RL algorithm is applied to in-hand manipulation using GelSight images.
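To make the idea of feeding tactile images into a vision pipeline concrete, here is a minimal sketch of a CNN-style forward pass mapping a tactile image to a grasp-success probability. The architecture, image size, and randomly initialized parameters are placeholders for illustration, not the networks used in the cited papers:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(image, kernels):
    """Valid-mode 2D convolution: (H, W) image, (K, kh, kw) kernels -> (K, H', W')."""
    K, kh, kw = kernels.shape
    H, W = image.shape
    out = np.empty((K, H - kh + 1, W - kw + 1))
    for k in range(K):
        for i in range(H - kh + 1):
            for j in range(W - kw + 1):
                out[k, i, j] = np.sum(image[i:i + kh, j:j + kw] * kernels[k])
    return out

def grasp_success_probability(tactile_image, kernels, weights, bias):
    """Tiny CNN: convolution -> ReLU -> global average pool -> linear -> sigmoid."""
    features = np.maximum(conv2d(tactile_image, kernels), 0.0)  # ReLU
    pooled = features.mean(axis=(1, 2))                         # one value per kernel
    logit = pooled @ weights + bias
    return 1.0 / (1.0 + np.exp(-logit))                         # sigmoid

# Hypothetical 32x32 grayscale tactile image and untrained parameters
image = rng.random((32, 32))
kernels = rng.standard_normal((4, 3, 3))
weights = rng.standard_normal(4)
p = grasp_success_probability(image, kernels, weights, 0.0)
```

In practice the classifier's parameters would be trained on labeled grasp outcomes, as in the self-supervised setup of Calandra et al. 2017; the sketch only shows the shape of the image-in, probability-out pipeline.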
Unfortunately, applying GelSight sensors in practical real-world scenarios is still challenging due to their large size and the fact that they are only sensitive on one side. Here we introduce a new, more compact tactile sensor design based on GelSight that allows for omnidirectional sensing, i.e. making the sensor sensitive on all sides like a human finger, and show how this opens up new possibilities for sensorimotor learning. We demonstrate this by teaching a robot to pick up electrical plugs and insert them purely based on tactile feedback.
A standard GelSight sensor, shown in the figure below on the left, uses an off-the-shelf webcam to capture high-resolution images of deformations on the silicone gel skin. The inside surface of the gel skin is illuminated with colored LEDs, providing sufficient lighting for the tactile image.
Existing GelSight designs are either flat, have small sensitive fields, or only provide low-resolution signals. For example, prior versions of the GelSight sensor provide high-resolution (400×400 pixel) images but are large and flat, providing sensitivity on only one side, while the commercial OptoForce sensor (recently discontinued by OnRobot) is curved but only provides force readings as a single three-dimensional force vector.
The OmniTact Sensor
Our OmniTact sensor design aims to address these limitations. It provides both multi-directional and high-resolution sensing on its curved surface in a compact form factor. Similar to GelSight, OmniTact uses cameras embedded into a silicone gel skin to capture deformation of the skin, providing a rich signal from which a wide range of features such as shear and normal forces, object pose, geometry and material properties can be inferred. OmniTact uses multiple cameras giving it both high-resolution and multi-directional capabilities. The sensor itself can be used as a “finger” and can be integrated into a gripper or robotic hand. It is more compact than previous GelSight sensors, which is accomplished by utilizing micro-cameras typically used in endoscopes, and by casting the silicone gel directly onto the cameras. Tactile images from OmniTact are shown in the figures below.
One of our primary goals throughout the design process was to make OmniTact as compact as possible. To accomplish this goal, we used micro-cameras with large viewing angles and a small focus distance. Specifically we picked cameras that are commonly used in medical endoscopes measuring just (1.35 x 1.35 x 5 mm) in size with a focus distance of 5 mm. These cameras were arranged in a 3D printed camera mount as shown in the figure below which allowed us to minimize blind spots on the surface of the sensor and reduce the diameter (D) of the sensor to 30 mm.
Electrical Connector Insertion Task
We show that OmniTact’s multi-directional tactile sensing capabilities can be leveraged to solve a challenging robotic control problem: Inserting an electrical connector blindly into a wall outlet purely based on information from the multi-directional touch sensor (shown in the figure below). This task is challenging since it requires localizing the electrical connector relative to the gripper and localizing the gripper relative to the wall outlet.
To learn the insertion task, we used a simple imitation learning algorithm that estimates the end-effector displacement required for inserting the plug into the outlet based on the tactile images from the OmniTact sensor. Our model was trained with just 100 demonstrations of insertion by controlling the robot using keyboard control. Successful insertions obtained by running the trained policy are shown in the gifs below.
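As a rough illustration of this kind of imitation learning, the sketch below fits a policy mapping tactile features to end-effector displacements by least squares on demonstration data. The dataset is synthetic and the policy is linear for simplicity; the paper's actual model is a learned network trained on real OmniTact images, so this conveys only the structure of the approach:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical dataset: 100 demonstrations, each pairing flattened tactile
# features with the 3-D end-effector displacement the operator commanded.
n_demos, n_features = 100, 64
tactile = rng.random((n_demos, n_features))
true_map = rng.standard_normal((n_features, 3))
displacements = tactile @ true_map  # (dx, dy, dz) labels per demonstration

# Behavioral cloning reduced to its simplest form: fit a linear policy
# by least squares over the demonstration pairs.
policy, *_ = np.linalg.lstsq(tactile, displacements, rcond=None)

# At test time, the policy maps a fresh tactile reading to a corrective motion.
new_reading = rng.random(n_features)
predicted_move = new_reading @ policy  # estimated (dx, dy, dz)
```

Because the synthetic labels are exactly linear in the features and there are more demonstrations than features, least squares recovers the underlying mapping here; with real tactile images the relationship is nonlinear, which is why a neural network is used instead.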
As shown in the table below, using the multi-directional capabilities (both the top and side camera) of our sensor allowed for the highest success rate (80%) in comparison to using just one camera from the sensor, indicating that multi-directional touch sensing is indeed crucial for solving this task. We additionally compared performance with another multi-directional tactile sensor, the OptoForce sensor, which only had a success rate of 17%.
We believe that compact, high resolution and multi-directional touch sensing has the potential to transform the capabilities of current robotic manipulation systems. We suspect that multi-directional tactile sensing could be an essential element in general-purpose robotic manipulation in addition to applications such as robotic teleoperation in surgery, as well as in sea and space missions. In the future, we plan to make OmniTact cheaper and more compact, allowing it to be used in a wider range of tasks. Our team additionally plans to conduct more robotic manipulation research that will inform future generations of tactile sensors.
This blog post is based on the following paper which will be presented at the International Conference on Robotics and Automation 2020:
- OmniTact: A Multi-Directional High-Resolution Touch Sensor
Akhil Padmanabha, Frederik Ebert, Stephen Tian, Roberto Calandra, Chelsea Finn, Sergey Levine
Paper Link: https://arxiv.org/abs/2003.06965
Research Website: https://sites.google.com/berkeley.edu/omnitact/home
We would like to thank Professor Sergey Levine, Professor Chelsea Finn, and Stephen Tian for their valuable feedback when preparing this blog post.
This article was initially published on the BAIR blog, and appears here with the authors’ permission.