Category: Robots in Business


A rubber computer eliminates the last hard components from soft robots

The toggle gripper holding a screwdriver. Credit: Daniel Preston / Harvard University
By Caitlin McDermott-Murphy, Harvard University Department of Chemistry and Chemical Biology

A soft robot, attached to a balloon and submerged in a transparent column of water, dives and surfaces, then dives and surfaces again, like a fish chasing flies. Soft robots have performed this kind of trick before. But unlike most soft robots, this one is made and operated with no hard or electronic parts. Inside, a soft, rubber computer tells the balloon when to ascend or descend. For the first time, this robot relies exclusively on soft digital logic.

In the last decade, soft robots have surged into the metal-dominated world of robotics. Grippers made from rubbery silicone materials are already used on assembly lines: Cushioned claws handle delicate items like tomatoes, celery, and sausage links, or extract bottles and sweaters from crates. In laboratories, the grippers can pick up slippery fish, live mice, and even insects, reducing the need for direct human handling.

Soft robots already require simpler control systems than their hard counterparts. The grippers are so compliant that they simply cannot exert enough pressure to damage an object, and without the need to calibrate pressure, a simple on-off switch suffices. But until now, most soft robots have still relied on some hardware: Metal valves open and close the channels of air that operate the rubbery grippers and arms, and a computer tells those valves when to move.

Now, researchers have built a soft computer using just rubber and air. “We’re emulating the thought process of an electronic computer, using only soft materials and pneumatic signals, replacing electronics with pressurized air,” says Daniel J. Preston, first author on a paper published in PNAS and a postdoctoral researcher working with George Whitesides, a Founding Core Faculty member of Harvard’s Wyss Institute for Biologically Inspired Engineering, and the Woodford L. and Ann A. Flowers University Professor at Harvard University’s Department of Chemistry and Chemical Biology.

To make decisions, computers use digital logic gates, electronic circuits that receive messages (inputs) and determine reactions (outputs) based on their programming. Our circuitry isn’t so different: When a doctor strikes a tendon below our kneecap (input), the nervous system is programmed to jerk (output).

Preston’s soft computer mimics this system using silicone tubing and pressurized air. To achieve the minimum types of logic gates required for complex operations—in this case, NOT, AND, and OR—he programmed the soft valves to react to different air pressures. For the NOT logic gate, for example, if the input is high pressure, the output will be low pressure. With these three logic gates, Preston says, “you could replicate any behavior found on any electronic computer.”
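A rough way to picture the pneumatic gates is to treat high pressure as True and low pressure as False. The Python sketch below models NOT, AND, and OR as threshold functions on pressure values; the pressures and threshold are illustrative assumptions rather than figures from the paper, and the real gates are physical soft valves, not software.

    # Hypothetical sketch: pneumatic logic modeled as Boolean functions of pressure.
    # The pressure values and threshold are illustrative, not taken from the paper.
    HIGH_KPA = 200.0       # "high" supply pressure
    LOW_KPA = 0.0          # "low" (atmospheric) pressure
    THRESHOLD_KPA = 100.0  # switching threshold of a soft valve

    def is_high(pressure_kpa):
        return pressure_kpa > THRESHOLD_KPA

    def p_not(a):
        # NOT gate: a high-pressure input yields a low-pressure output, and vice versa.
        return LOW_KPA if is_high(a) else HIGH_KPA

    def p_and(a, b):
        # AND gate: the output is high only when both inputs are high.
        return HIGH_KPA if is_high(a) and is_high(b) else LOW_KPA

    def p_or(a, b):
        # OR gate: the output is high when either input is high.
        return HIGH_KPA if is_high(a) or is_high(b) else LOW_KPA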

The bobbing fish-like robot in the water tank, for example, uses an environmental pressure sensor (a modified NOT gate) to determine what action to take. The robot dives when the circuit senses low pressure at the top of the tank and surfaces when it senses high pressure at depth. The robot can also surface on command if someone pushes an external soft button.
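Using the gates sketched above, the described behavior can be written as a small decision rule. This is only a behavioral illustration of the article's description, not the actual circuit, which the researchers built around a modified NOT gate acting as the pressure sensor.

    # Hypothetical decision rule for the bobbing robot, reusing the gates above.
    def balloon_should_inflate(ambient_pressure_kpa, button_pressure_kpa):
        # Surface (inflate the balloon) when ambient pressure is high, meaning the
        # robot is at depth, or when the external soft button supplies a high-pressure
        # signal; otherwise keep the balloon deflated and dive.
        return is_high(p_or(ambient_pressure_kpa, button_pressure_kpa))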

Robots built with only soft parts have several benefits. In industrial settings, like automobile factories, massive metal machines operate with blind speed and power. If a human gets in the way, a hard robot could cause irreparable damage. But if a soft robot bumps into a human, Preston says, “you wouldn’t have to worry about injury or a catastrophic failure.” They can only exert so much force.

But soft robots are more than just safer: They are generally cheaper and simpler to make, lightweight, resistant to damage and corrosive materials, and durable. Add intelligence and soft robots could be used for much more than just handling tomatoes. For example, a robot could sense a user’s temperature and deliver a soft squeeze to indicate a fever, alert a diver when the water pressure rises too high, or push through debris after a natural disaster to help find victims and offer aid.

Soft robots can also venture where electronics struggle: high-radiation environments, like those produced after a nuclear malfunction or in outer space, and the insides of magnetic resonance imaging (MRI) machines. In the wake of a hurricane or flooding, a hardy soft robot could manage hazardous terrain and noxious air. “If it gets run over by a car, it just keeps going, which is something we don’t have with hard robots,” Preston says.

Preston and colleagues are not the first to control robots without electronics. Other research teams have designed microfluidic circuits, which can use liquid and air to create nonelectronic logic gates. One microfluidic oscillator helped a soft octopus-shaped robot flail all eight arms.

Yet microfluidic logic circuits often rely on rigid materials like glass or hard plastics, and they use such thin channels that only small amounts of air can move through at a time, slowing the robot’s motion. In comparison, Preston’s channels are larger—close to one millimeter in diameter—which enables much faster airflow rates. His air-based grippers can grasp an object in a matter of seconds.
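A back-of-the-envelope way to see why channel diameter matters so much: for laminar flow through a round channel, volumetric flow rate grows with the fourth power of the radius (Hagen-Poiseuille). The numbers in the sketch below are illustrative only and ignore the compressibility and turbulence of real pneumatic flow.

    # Illustrative only: Hagen-Poiseuille scaling, Q = pi * r^4 * dP / (8 * mu * L).
    # Real pneumatic behavior is more complicated, but the r^4 term shows why a ~1 mm
    # channel passes vastly more air than a typical ~0.1 mm microfluidic channel.
    import math

    def volumetric_flow(radius_m, delta_p_pa, viscosity_pa_s, length_m):
        return math.pi * radius_m**4 * delta_p_pa / (8 * viscosity_pa_s * length_m)

    AIR_VISCOSITY = 1.8e-5  # Pa*s, approximate value for air at room temperature
    q_micro = volumetric_flow(radius_m=50e-6, delta_p_pa=50e3,
                              viscosity_pa_s=AIR_VISCOSITY, length_m=0.1)
    q_soft = volumetric_flow(radius_m=500e-6, delta_p_pa=50e3,
                             viscosity_pa_s=AIR_VISCOSITY, length_m=0.1)
    print(q_soft / q_micro)  # (500/50)**4 = 10,000x more flow for the wider channel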

Microfluidic circuits are also less energy efficient. Even at rest, the devices use a pneumatic resistor, which continuously passes air from the atmosphere to either a vacuum or a pressure source to maintain stasis. Preston’s circuits require no energy input when dormant. Such energy conservation could be crucial in emergency or disaster situations where the robots travel far from a reliable energy source.

The soft, pneumatic robots also offer an enticing possibility: Invisibility. Depending on which material Preston selects, he could design a robot that is index-matched to a specific substance. So, if he chooses a material that camouflages in water, the robot would appear transparent when submerged. In the future, he and his colleagues hope to create autonomous robots that are invisible to the naked eye or even sonar detection. “It’s just a matter of choosing the right materials,” he says.

For Preston, the right materials are elastomers or rubbers. While other fields chase higher power with machine learning and artificial intelligence, the Whitesides team turns away from the mounting complexity. “There’s a lot of capability there,” Preston says, “but it’s also good to take a step back and think about whether or not there’s a simpler way to do things that gives you the same result, especially if it’s not only simpler, it’s also cheaper.”

“Particle robot” works as a cluster of simple units


By Rob Matheson
Researchers have developed computationally simple robots, called particles, that cluster and form a single “particle robot” that moves around, transports objects, and completes other tasks. The work hails from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), Columbia University, and elsewhere.
Image: Felice Frankel

Taking a cue from biological cells, researchers from MIT, Columbia University, and elsewhere have developed computationally simple robots that connect in large groups to move around, transport objects, and complete other tasks.

This so-called “particle robotics” system — based on a project by MIT, Columbia Engineering, Cornell University, and Harvard University researchers — comprises many individual disc-shaped units, which the researchers call “particles.” The particles are loosely connected by magnets around their perimeters, and each unit can only do two things: expand and contract. (Each particle is about 6 inches in its contracted state and about 9 inches when expanded.) That motion, when carefully timed, allows the individual particles to push and pull one another in coordinated movement. On-board sensors enable the cluster to gravitate toward light sources.

In a Nature paper published today, the researchers demonstrate a cluster of two dozen real robotic particles and a virtual simulation of up to 100,000 particles moving through obstacles toward a light bulb. They also show that a particle robot can transport objects placed in its midst.

Particle robots can form into many configurations and fluidly navigate around obstacles and squeeze through tight gaps. Notably, none of the particles directly communicate with or rely on one another to function, so particles can be added or subtracted without any impact on the group. In their paper, the researchers show particle robotic systems can complete tasks even when many units malfunction.

The paper represents a new way to think about robots, which are traditionally designed for one purpose, comprise many complex parts, and stop working when any part malfunctions. Robots made up of these simplistic components, the researchers say, could enable more scalable, flexible, and robust systems.

“We have small robot cells that are not so capable as individuals but can accomplish a lot as a group,” says Daniela Rus, director of the Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science. “The robot by itself is static, but when it connects with other robot particles, all of a sudden the robot collective can explore the world and control more complex actions. With these ‘universal cells,’ the robot particles can achieve different shapes, global transformation, global motion, global behavior, and, as we have shown in our experiments, follow gradients of light. This is very powerful.”

Joining Rus on the paper are: first author Shuguang Li, a CSAIL postdoc; co-first author Richa Batra and corresponding author Hod Lipson, both of Columbia Engineering; David Brown, Hyun-Dong Chang, and Nikhil Ranganathan of Cornell; and Chuck Hoberman of Harvard.

At MIT, Rus has been working on modular, connected robots for nearly 20 years, including an expanding and contracting cube robot that could connect to others to move around. But the square shape limited the robots’ group movement and configurations.

In collaboration with Lipson’s lab, where Li was a graduate student until coming to MIT in 2014, the researchers went for disc-shaped mechanisms that can rotate around one another. They can also connect and disconnect from each other, and form into many configurations.

Each unit of a particle robot has a cylindrical base, which houses a battery, a small motor, sensors that detect light intensity, a microcontroller, and a communication component that sends out and receives signals. Mounted on top is a children’s toy called a Hoberman Flight Ring — its inventor is one of the paper’s co-authors — which consists of small panels connected in a circular formation that can be pulled to expand and pushed back to contract. Two small magnets are installed in each panel.

The trick was programming the robotic particles to expand and contract in an exact sequence to push and pull the whole group toward a destination light source. To do so, the researchers equipped each particle with an algorithm that analyzes broadcast information about light intensity from every other particle, without the need for direct particle-to-particle communication.

The sensors of a particle detect the intensity of light from a light source; the closer the particle is to the light source, the greater the intensity. Each particle constantly broadcasts a signal that shares its perceived intensity level with all other particles. Say a particle robotic system measures light intensity on a scale of levels 1 to 10: Particles closest to the light register a level 10 and those furthest will register level 1. The intensity level, in turn, corresponds to a specific time that the particle must expand. Particles experiencing the highest intensity — level 10 — expand first. As those particles contract, the next particles in order, level 9, then expand. That timed expanding and contracting motion happens at each subsequent level.

“This creates a mechanical expansion-contraction wave, a coordinated pushing and dragging motion, that moves a big cluster toward or away from environmental stimuli,” Li says. The key component, Li adds, is the precise timing from a shared synchronized clock among the particles that enables movement as efficiently as possible: “If you mess up the synchronized clock, the system will work less efficiently.”
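A simplified way to express that timing rule in code: each particle discretizes its own sensed intensity into a level, and the shared clock gives each level its own expansion slot in a repeating cycle. The sketch below is an illustration of the described scheme, not the authors' implementation; the number of levels and the slot assignment are assumptions.

    # Illustrative sketch of the level-based expansion schedule (not the authors' code).
    NUM_LEVELS = 10

    def intensity_level(sensed, max_broadcast):
        # Discretize a particle's sensed light intensity into levels 1..10.
        return max(1, min(NUM_LEVELS, round(NUM_LEVELS * sensed / max_broadcast)))

    def expansion_slot(my_sensed, all_broadcast):
        # Every particle hears every broadcast, so each one derives the same schedule.
        level = intensity_level(my_sensed, max(all_broadcast))
        return NUM_LEVELS - level  # level 10 gets slot 0 and expands first

    def particle_action(clock_tick, my_sensed, all_broadcast):
        # The shared synchronized clock divides each cycle into NUM_LEVELS time slots;
        # a particle expands only during its own slot and contracts otherwise.
        if clock_tick % NUM_LEVELS == expansion_slot(my_sensed, all_broadcast):
            return "expand"
        return "contract"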

In videos, the researchers demonstrate a particle robotic system comprising real particles moving and changing direction toward different light bulbs as they’re flicked on, and working its way through a gap between obstacles. In their paper, the researchers also show that simulated clusters of up to 10,000 particles maintain locomotion, at half their speed, even when up to 20 percent of the units have failed.

“It’s a bit like the proverbial ‘gray goo,’” says Lipson, a professor of mechanical engineering at Columbia Engineering, referencing the science-fiction concept of a self-replicating robot that comprises billions of nanobots. “The key novelty here is that you have a new kind of robot that has no centralized control, no single point of failure, no fixed shape, and its components have no unique identity.”

The next step, Lipson adds, is miniaturizing the components to make a robot composed of millions of microscopic particles.

Manipulation by feel

By Frederik Ebert and Stephen Tian

Guiding our fingers while typing, enabling us to nimbly strike a matchstick, and inserting a key in a keyhole all rely on our sense of touch. It has been shown that the sense of touch is very important for dexterous manipulation in humans. Similarly, for many robotic manipulation tasks, vision alone may not be sufficient – often, it may be difficult to resolve subtle details such as the exact position of an edge, shear forces or surface textures at points of contact, and robotic arms and fingers can block the line of sight between a camera and its quarry. Augmenting robots with this crucial sense, however, remains a challenging task.

Our goal is to provide a framework for learning how to perform tactile servoing, which means precisely relocating an object based on tactile information. To provide our robot with tactile feedback, we utilize a custom-built tactile sensor, based on similar principles as the GelSight sensor developed at MIT. The sensor is composed of a deformable, elastomer-based gel, backlit by three colored LEDs, and provides high-resolution RGB images of contact at the gel surface. Compared to other sensors, this tactile sensor naturally provides rich geometric information in visual form, from which attributes such as force can be inferred. Previous work has leveraged this kind of tactile sensor on tasks such as learning how to grasp, improving success rates when grasping a variety of objects.

Below is the real-time raw sensor output as a marker cap is rolled along the gel surface:

Hardware Setup & Task Definition

For our experiments, we use a modified 3-axis CNC router with a tactile sensor mounted face-down on the end effector of the machine. The robot moves by changing the X, Y, and Z position of the sensor relative to its working stage, driving each axis with a separate stepper motor. Because of the precise control of these motors, our setup can achieve a resolution of roughly 0.04 mm, which is helpful for the careful movements needed in fine manipulation tasks.


The robot setup, prepared for the die rolling task. The tactile sensor is mounted on the end effector at the top left of the image, facing downward.
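As a concrete illustration of that resolution figure, the hypothetical helper below converts a commanded displacement into whole stepper steps, assuming roughly 0.04 mm per step; the machine's actual firmware and step settings are not described in the post, so this is only a sketch.

    # Hypothetical helper: convert a commanded X/Y/Z displacement into whole stepper
    # steps, assuming the ~0.04 mm per-step resolution quoted above.
    STEP_MM = 0.04

    def displacement_to_steps(dx_mm, dy_mm, dz_mm):
        return tuple(round(d / STEP_MM) for d in (dx_mm, dy_mm, dz_mm))

    print(displacement_to_steps(1.0, -0.2, 0.0))  # (25, -5, 0)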

We demonstrate our method through three representative manipulation tasks:

  1. Ball repositioning task: The robot moves a small metal ball bearing to a target location on the sensor surface. This task can be difficult because coarse control will often apply too much force to the ball, causing it to slip out from under the sensor and shoot away with any movement.

  2. Analog stick deflection task: When playing video games, we often rely solely on our sense of touch to manipulate an analog stick on a game controller. This task is of particular interest because deflecting the analog stick often requires an intentional break and return of contact, creating a partial observability situation.

  3. Die rolling task: The robot rolls a 20-sided die from one face to another. Here, the risk of the die slipping out from under the sensor is even greater, making this the hardest of the three tasks. An advantage of this task is that it provides an intuitive success metric – when the robot has finished manipulating, is the correct number showing face up?


From left to right: The ball repositioning, analog stick, and die rolling tasks.

Each of these control tasks is specified in terms of goal images directly in tactile space; that is, the robot aims to manipulate the objects so that they produce a particular imprint on the gel surface. These goal tactile images can be more informative and natural to specify than, say, a 3D pose for an object or a desired force vector.

Deep Tactile Model-Predictive Control

How can we utilize our high-dimensional sensory information to accomplish these control tasks? All three manipulation tasks can be solved using the same model-based reinforcement learning algorithm, which we call tactile model-predictive control (tactile MPC), built on top of visual foresight. It is important to note that we can use the same set of hyperparameters for each task, eliminating manual hyperparameter tuning.


A summary of deep tactile model predictive control.

The tactile MPC algorithm works by training an action-conditioned visual dynamics, or video-prediction, model on autonomously collected data. This model learns from raw sensory data, such as image pixels, and directly predicts future images, taking as input hypothetical future actions by the robot and starting tactile images that we call context frames. No other information, such as the absolute position of the end effector, is specified.


Video-prediction model architecture.
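The interface of such a model can be sketched as follows; the class and method names, tensor shapes, and signature are assumptions for illustration, not the actual visual foresight implementation.

    # Illustrative interface for an action-conditioned tactile video-prediction model;
    # in practice this is a learned video-prediction network, not a stub.
    class TactileDynamicsModel:
        def predict(self, context_frames, actions):
            """Predict future tactile frames.

            context_frames: array of shape (T_context, H, W, 3), starting tactile images
            actions:        array of shape (T_horizon, action_dim), hypothetical actions
            returns:        array of shape (T_horizon, H, W, 3), predicted tactile images
            """
            raise NotImplementedError("stands in for the trained network")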

In tactile MPC, as shown in the figure above, a large number of action sequences (200 in this case) are sampled at test time, and the resulting hypothetical trajectories are predicted by the model. The trajectory that is predicted to most closely reach the goal is selected, and the first action in that sequence is executed in the real world by the robot. To allow for recovery from small errors in the model, the planning procedure is repeated at every step.
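The planning loop just described can be summarized in a short sketch; the cost function, action bounds, and sampling scheme below are simplified assumptions (the paper builds on the visual foresight planner), so treat this as an outline rather than the exact method.

    # Sketch of the sampling-based planning step described above (simplified).
    import numpy as np

    def tactile_mpc_step(model, context_frames, goal_image, horizon=15,
                         num_samples=200, action_dim=3):
        best_cost, best_first_action = np.inf, None
        for _ in range(num_samples):
            # Sample one hypothetical action sequence over the planning horizon.
            actions = np.random.uniform(-1.0, 1.0, size=(horizon, action_dim))
            predicted = model.predict(context_frames, actions)
            # Cost: pixel distance between the final predicted frame and the goal image.
            cost = float(np.mean((predicted[-1] - goal_image) ** 2))
            if cost < best_cost:
                best_cost, best_first_action = cost, actions[0]
        # Only the first action is executed on the robot; planning repeats next step.
        return best_first_action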

This control scheme has previously been applied and found success at enabling robots to lift and rearrange objects, even generalizing to previously unseen objects. If you’re interested in reading more about this, details are available in the paper.

To train the video-prediction model, we need to collect diverse data that will allow the robot to generalize to tactile states it has not seen before. While we could sit at the keyboard and tell the robot how to move for every step of each trajectory, it is much nicer to give the robot a general idea of how to collect the data and let it do its thing while we catch up on homework or sleep. With a few simple reset mechanisms ensuring that things on the stage do not get out of hand over the course of data collection, we are able to collect data in a fully self-supervised manner by executing randomized action sequences. During these trajectories, the robot records tactile images from the sensor as well as the randomized actions it takes at each step. Each task required about 36 hours of wall-clock time of data collection to train the respective predictive model, with no human supervision necessary.


Randomized data collection for the analog stick task (video sped up).
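A minimal sketch of that self-supervised collection loop is below; robot_step and read_tactile_image are hypothetical stand-ins for the real hardware interface, and the trajectory length and action bounds are illustrative.

    # Sketch of self-supervised data collection with randomized actions (illustrative).
    import numpy as np

    def collect_trajectory(robot_step, read_tactile_image, steps=30, action_dim=3):
        images, actions = [read_tactile_image()], []
        for _ in range(steps):
            action = np.random.uniform(-1.0, 1.0, size=action_dim)  # random X/Y/Z move
            robot_step(action)                    # command the displacement
            actions.append(action)
            images.append(read_tactile_image())   # record the resulting tactile image
        return np.stack(images), np.stack(actions)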

For each of the three tasks, we present representative examples of plans and rollouts:


Ball rolling task – The robot rolls the ball along the target trajectory.


Analog stick task – To reach the target goal image, the robot breaks and re-establishes contact with the object.


Die task – The robot rolls the die from the starting face labeled 20 (as seen in the prediction frames with red borders, which indicate context frames fed into the video-prediction model) to the one labeled 2.

As can be seen in these example rollouts, using the same framework and model settings, tactile MPC is able to perform a variety of manipulation tasks.

What’s Next?

We have presented a touch-based control method, tactile MPC, based on learning forward predictive models for high-resolution tactile sensors, which is able to reposition objects based on user-provided goals. The use of this combination of algorithms and sensors for control seems promising, and more difficult tasks may be within reach with the use of combined vision and touch sensing. However, our control horizon remains relatively short, in the tens of timesteps, which may not be sufficient for the more complex manipulation tasks that we hope to achieve in the future. In addition, substantial improvements are needed in methods for specifying goals to enable more complex tasks such as general-purpose object positioning or assembly.


This blog post is based on the following paper, which will be presented at the International Conference on Robotics and Automation (ICRA) 2019:

  • Manipulation by Feel: Touch-Based Control with Deep Predictive Models
  • Stephen Tian*, Frederik Ebert*, Dinesh Jayaraman, Mayur Mudigonda, Chelsea Finn, Roberto Calandra, Sergey Levine
  • Paper link, video link

We would like to thank Sergey Levine, Roberto Calandra, Mayur Mudigonda, and Chelsea Finn for their valuable feedback when preparing this blog post. This article was initially published on the BAIR blog, and appears here with the authors’ permission.

50,000 Warehouses to Use Robots by 2025 as Barriers to Entry Fall and AI Innovation Accelerates

Flexibility and efficiency have become primary differentiators in the e-commerce fulfillment market as retailers and third-party logistics providers (3PLs) struggle to cope with volatile product demand, seasonal peaks, and rising consumer delivery expectations.

Researchers create new kind of robot composed of many simple particles with no centralized control or single point of failure

In a new study published today in Nature, researchers at Columbia Engineering and MIT Computer Science & Artificial Intelligence Lab (CSAIL) demonstrate for the first time a way to make a robot composed of many loosely coupled components, or “particles.”

More robots, more work

Robots will take over all jobs, or so it is often thought. On the contrary, say Charissa Freese and Ton Wilthagen: robots will create jobs. It’s just that these new jobs will be different, and the challenge is to anticipate which jobs will disappear, which ones will change, and what the new ones will look like – and when. Tilburg University aims to prepare employers and employees for the labor market of the near future.