

Flexible yet sturdy robot is designed to “grow” like a plant


The new “growing robot” can be programmed to grow, or extend, in different directions, based on the sequence of chain units that are locked and fed out from the “growing tip,” or gearbox.
Image courtesy of researchers, edited by MIT News

In today’s factories and warehouses, it’s not uncommon to see robots whizzing about, shuttling items or tools from one station to another. For the most part, robots navigate pretty easily across open layouts. But they have a much harder time winding through narrow spaces to carry out tasks such as reaching for a product at the back of a cluttered shelf, or snaking around a car’s engine parts to unscrew an oil cap.

Now MIT engineers have developed a robot designed to extend a chain-like appendage flexible enough to twist and turn in any necessary configuration, yet rigid enough to support heavy loads or apply torque to assemble parts in tight spaces. When the task is complete, the robot can retract the appendage and extend it again, at a different length and shape, to suit the next task.

The appendage design is inspired by the way plants grow, which involves the transport of nutrients, in a fluidized form, up to the plant’s tip. There, they are converted into solid material to produce, bit by bit, a supportive stem.

Likewise, the robot consists of a “growing point,” or gearbox, that pulls a loose chain of interlocking blocks into the box. Gears in the box then lock the chain units together and feed the chain out, unit by unit, as a rigid appendage.

The researchers presented the plant-inspired “growing robot” this week at the IEEE International Conference on Intelligent Robots and Systems (IROS) in Macau. They envision that grippers, cameras, and other sensors could be mounted onto the robot’s gearbox, enabling it to meander through an aircraft’s propulsion system and tighten a loose screw, or to reach into a shelf and grab a product without disturbing the organization of surrounding inventory, among other tasks.

“Think about changing the oil in your car,” says Harry Asada, professor of mechanical engineering at MIT. “After you open the engine roof, you have to be flexible enough to make sharp turns, left and right, to get to the oil filter, and then you have to be strong enough to twist the oil filter cap to remove it.”

“Now we have a robot that can potentially accomplish such tasks,” says Tongxi Yan, a former graduate student in Asada’s lab, who led the work. “It can grow, retract, and grow again to a different shape, to adapt to its environment.”

The team also includes MIT graduate student Emily Kamienski and visiting scholar Seiichi Teshigawara, who presented the results at the conference.

The last foot

The design of the new robot is an offshoot of Asada’s work in addressing the “last one-foot problem” — an engineering term referring to the last step, or foot, of a robot’s task or exploratory mission. While a robot may spend most of its time traversing open space, the last foot of its mission may involve more nimble navigation through tighter, more complex spaces to complete a task.

Engineers have devised various concepts and prototypes to address the last one-foot problem, including robots made from soft, balloon-like materials that grow like vines to squeeze through narrow crevices. But Asada says such soft extendable robots aren’t sturdy enough to support “end effectors,” or add-ons such as grippers, cameras, and other sensors that would be necessary in carrying out a task, once the robot has wormed its way to its destination.

“Our solution is not actually soft, but a clever use of rigid materials,” says Asada, who is the Ford Foundation Professor of Engineering.

Chain links

Once the team defined the general functional elements of plant growth, they looked to mimic them, in a general sense, in an extendable robot.

“The realization of the robot is totally different from a real plant, but it exhibits the same kind of functionality, at a certain abstract level,” Asada says.

The researchers designed a gearbox to represent the robot’s “growing tip,” akin to the bud of a plant, where, as more nutrients flow up to the site, the tip feeds out more rigid stem. Within the box, they fit a system of gears and motors, which works to pull up a fluidized material — in this case, a bendy sequence of 3-D-printed plastic units interlocked with each other, similar to a bicycle chain.

As the chain is fed into the box, it turns around a winch, which feeds it through a second set of motors programmed to lock certain units in the chain to their neighboring units, creating a rigid appendage as it is fed out of the box.

The researchers can program the robot to lock certain units together while leaving others unlocked, to form specific shapes, or to “grow” in certain directions. In experiments, they were able to program the robot to turn around an obstacle as it extended or grew out from its base.
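To make the shape-programming idea concrete, here is a minimal sketch (not the researchers' code) of how a pattern of per-unit lock choices could determine the appendage's shape, treating the locked chain as a planar kinematic chain. The unit length and bend angle are invented purely for illustration:

```python
import math

UNIT_LENGTH = 0.03             # hypothetical length of one chain unit, meters
BEND_ANGLE = math.radians(15)  # hypothetical bend introduced by a "turning" lock

def tip_pose(lock_pattern):
    """Compute the 2-D tip position for a sequence of unit lock choices.

    lock_pattern is a string with one character per fed-out unit:
    'S' locks the unit straight to its neighbor, 'L'/'R' lock it
    with a fixed left/right bend.
    """
    x, y, heading = 0.0, 0.0, 0.0
    for choice in lock_pattern:
        if choice == "L":
            heading += BEND_ANGLE
        elif choice == "R":
            heading -= BEND_ANGLE
        # 'S' leaves the heading unchanged
        x += UNIT_LENGTH * math.cos(heading)
        y += UNIT_LENGTH * math.sin(heading)
    return x, y, heading

# Grow straight for 5 units, curve left around an obstacle, then continue.
print(tip_pose("SSSSS" + "LLLL" + "SSS"))
```

In this toy model, choosing which units to lock straight and which to lock at an angle plays the same role as the gearbox's lock pattern: the same chain, fed out with different choices, traces a different path.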

“It can be locked in different places to be curved in different ways, and have a wide range of motions,” Yan says.

When the chain is locked and rigid, it is strong enough to support a one-pound weight. If a gripper were attached to the robot’s growing tip, or gearbox, the researchers say the robot could potentially grow long enough to meander through a narrow space, then apply enough torque to loosen a bolt or unscrew a cap.

Auto maintenance is a good example of tasks the robot could assist with, according to Kamienski. “The space under the hood is relatively open, but it’s that last bit where you have to navigate around an engine block or something to get to the oil filter, that a fixed arm wouldn’t be able to navigate around. This robot could do something like that.”

This research was funded, in part, by NSK Ltd.

Predicting people’s driving personalities

In lane-merging scenarios, a system developed at MIT could distinguish between altruistic and egoistic driving behavior.
Image courtesy of the researchers.

Self-driving cars are coming. But for all their fancy sensors and intricate data-crunching abilities, even the most cutting-edge cars lack something that (almost) every 16-year-old with a learner’s permit has: social awareness.


While autonomous technologies have improved substantially, they still ultimately view the drivers around them as obstacles made up of ones and zeros, rather than human beings with specific intentions, motivations, and personalities.

But recently a team led by researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) has been exploring whether self-driving cars can be programmed to classify the social personalities of other drivers, so that they can better predict what different cars will do — and, therefore, be able to drive more safely among them.

In a new paper, the scientists integrated tools from social psychology to classify driving behavior with respect to how selfish or selfless a particular driver is.

Specifically, they used something called social value orientation (SVO), which represents the degree to which someone is selfish (“egoistic”) versus altruistic or cooperative (“prosocial”). The system then estimates drivers’ SVOs to create real-time driving trajectories for self-driving cars.
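In the SVO literature, the orientation is often expressed as an angle that trades off a driver's own reward against another's. Below is a hedged sketch of that weighting; the reward values are invented for illustration and are not taken from the paper:

```python
import math

def svo_utility(own_reward, other_reward, svo_angle_deg):
    """Weighted utility under a social value orientation angle.

    svo_angle_deg = 0   -> purely egoistic (only own reward matters)
    svo_angle_deg = 45  -> prosocial (own and other's reward weighted equally)
    svo_angle_deg = 90  -> purely altruistic (only the other's reward matters)
    """
    phi = math.radians(svo_angle_deg)
    return math.cos(phi) * own_reward + math.sin(phi) * other_reward

# An egoistic driver values cutting in (gains 2 s, costs the other 3 s) ...
print(svo_utility(2.0, -3.0, 0))    # 2.0 -> merges aggressively
# ... while a prosocial driver sees the same maneuver as a net loss.
print(svo_utility(2.0, -3.0, 45))   # about -0.71 -> yields instead
```

Under this kind of weighting, the same maneuver can look attractive to an egoistic driver and unattractive to a prosocial one, which is what lets an estimated angle predict behavior.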

Testing their algorithm on the tasks of merging lanes and making unprotected left turns, the team showed that the system could predict the behavior of other cars 25 percent better. For example, in the left-turn simulations their car knew to wait when the approaching car had a more egoistic driver, and then to make the turn when the other car was more prosocial.

While not yet robust enough to be implemented on real roads, the system could have some intriguing use cases, and not just for the cars that drive themselves. Say you’re a human driving along and a car suddenly enters your blind spot — the system could give you a warning in the rear-view mirror that the car has an aggressive driver, allowing you to adjust accordingly. It could also allow self-driving cars to actually learn to exhibit more human-like behavior that will be easier for human drivers to understand.

“Working with and around humans means figuring out their intentions to better understand their behavior,” says graduate student Wilko Schwarting, who was lead author on the new paper that will be published this week in the latest issue of the Proceedings of the National Academy of Sciences. “People’s tendencies to be collaborative or competitive often spill over into how they behave as drivers. In this paper, we sought to understand if this was something we could actually quantify.”

Schwarting’s co-authors include MIT professors Sertac Karaman and Daniela Rus, as well as research scientist Alyssa Pierson and former CSAIL postdoc Javier Alonso-Mora.

A central issue with today’s self-driving cars is that they’re programmed to assume that all humans act the same way. This means that, among other things, they’re quite conservative in their decision-making at four-way stops and other intersections.

While this caution reduces the chance of fatal accidents, it also creates bottlenecks that can be frustrating for other drivers, not to mention hard for them to understand. (This may be why the majority of traffic incidents involving self-driving cars have involved the cars getting rear-ended by impatient drivers.)

“Creating more human-like behavior in autonomous vehicles (AVs) is fundamental for the safety of passengers and surrounding vehicles, since behaving in a predictable manner enables humans to understand and appropriately respond to the AV’s actions,” says Schwarting.

To try to expand the car’s social awareness, the CSAIL team combined methods from social psychology with game theory, a theoretical framework for conceiving social situations among competing players.

The team modeled road scenarios where each driver tried to maximize their own utility and analyzed their “best responses” given the decisions of all other agents. Based on that small snippet of motion from other cars, the team’s algorithm could then predict the surrounding cars’ behavior as cooperative, altruistic, or egoistic — grouping the first two as “prosocial.” People’s scores for these qualities rest on a continuum with respect to how much a person demonstrates care for themselves versus care for others.
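Here is a hedged sketch of that estimation idea: hypothesize a few candidate SVO angles, compute which action each candidate driver would choose as a best response in a toy merge game, and keep the angles consistent with the motion actually observed. The rewards and actions are invented placeholders, not the paper's actual game formulation:

```python
import math

CANDIDATE_ANGLES = [0, 30, 45, 60]  # degrees, from egoistic toward altruistic

def best_response(svo_angle_deg, actions):
    """Pick the action maximizing SVO-weighted utility.

    actions maps an action name to (own_reward, other_reward).
    """
    phi = math.radians(svo_angle_deg)
    return max(
        actions,
        key=lambda a: math.cos(phi) * actions[a][0] + math.sin(phi) * actions[a][1],
    )

# Hypothetical merge game: cutting in gains me 2 s but costs the other car
# 3 s; yielding costs me 1 s but saves the other car 2 s.
merge_game = {"cut_in": (2.0, -3.0), "yield": (-1.0, 2.0)}

observed_action = "yield"  # the snippet of motion we actually saw
consistent = [a for a in CANDIDATE_ANGLES
              if best_response(a, merge_game) == observed_action]
print(consistent)  # the prosocial angles (45, 60) explain the observed yield
```

Inverting observed behavior this way, rather than asking drivers survey questions in a lab, is what lets the system update its estimates in real time from brief snippets of motion.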

In the merging and left-turn scenarios, the two outcome options were to either let somebody merge into your lane (“prosocial”) or not (“egoistic”). The team’s results showed that, not surprisingly, merging cars are deemed more competitive than non-merging cars.

The system was trained to try to better understand when it’s appropriate to exhibit different behaviors. For example, even the most deferential of human drivers knows that certain types of actions — like making a lane change in heavy traffic — require a moment of being more assertive and decisive.

For the next phase of the research, the team plans to work to apply their model to pedestrians, bicycles, and other agents in driving environments. In addition, they will be investigating other robotic systems acting among humans, such as household robots, and integrating SVO into their prediction and decision-making algorithms. Pierson says that the ability to estimate SVO distributions directly from observed motion, instead of in laboratory conditions, will be important for fields far beyond autonomous driving.

“By modeling driving personalities and incorporating the models mathematically using the SVO in the decision-making module of a robot car, this work opens the door to safer and more seamless road-sharing between human-driven and robot-driven cars,” says Rus.

The research was supported by the Toyota Research Institute for the MIT team. The Netherlands Organization for Scientific Research provided support for the specific participation of Alonso-Mora.

Very Narrow Aisle (VNA) Inventory Counts using Drones in Warehouses & Distribution Centers

This, in turn, is driving inventory stakeholders to move their top-tier sites from majority bulk storage to majority racking, and from traditional aisles to very narrow aisles (VNAs). Rack heights have steadily increased from 25 feet, on average, to 35 feet or more.

#297: Using Natural Language in Human-Robot Collaboration, with Brad Hayes


In this episode, we hear from Brad Hayes, Assistant Professor of Computer Science at the University of Colorado Boulder, who directs the university’s Collaborative AI and Robotics lab. The lab’s work focuses on developing systems that can learn from and work with humans—from physical robots or machines, to software systems or decision support tools—so that together, the human and system can achieve more than each could achieve on their own.

Our interviewer Audrow caught up with Dr. Hayes to discuss why collaboration may at times be preferable to full autonomy and automation, how human narration can be used to help robots learn from demonstration, and the challenges of developing collaborative systems, including the importance of shared models and safety for the future adoption of such technologies.

