
AIA Announces The Vision Show Startup Competition Finalists and Judges

"The startups that have made it to the final competition are indicative of the levels of innovation in the machine vision and imaging industries," according to Jeff Burnstein, President, AIA. "I don't envy the job in front of our judges having to select only one winner."

#257: Learning Robot Objectives from Physical Human Interaction, with Andrea Bajcsy and Dylan P. Losey



In this interview, Audrow Nash speaks with Andrea Bajcsy and Dylan P. Losey about a method that allows robots to infer a human’s objective through physical interaction. They discuss their approach, the challenges of learning complex tasks, and their experience collaborating across different universities.

Some examples of people working with the more typical impedance control (left) and Bajcsy and Losey’s learning method (right).


To learn more, see this post on Robohub from the Berkeley Artificial Intelligence Research (BAIR) Lab.
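The update at the heart of their approach treats the human’s physical correction as evidence about the right objective: the robot compares the features of its planned trajectory with the features of the trajectory the human deformed it into, and shifts its estimated objective weights toward the difference. Below is a minimal sketch of that style of online update; the feature functions, learning rate, and toy trajectories are illustrative stand-ins, not the ones from their paper.

```python
import numpy as np

def features(trajectory):
    """Illustrative trajectory features: path length and mean height.
    (The paper's features, such as distance to the human or cup
    orientation, are task-specific.)"""
    path_length = np.sum(np.linalg.norm(np.diff(trajectory, axis=0), axis=1))
    mean_height = np.mean(trajectory[:, 2])
    return np.array([path_length, mean_height])

def update_objective(theta, planned, corrected, alpha=0.1):
    """Online update: move the estimated objective weights toward what
    the human's correction expressed, relative to the robot's plan."""
    return theta + alpha * (features(corrected) - features(planned))

# Toy example: a straight-line plan, and a physical correction that
# pushes the end-effector 20 cm lower (e.g., keeping a cup near the table).
planned = np.linspace([0.0, 0.0, 0.5], [1.0, 0.0, 0.5], 20)
corrected = planned.copy()
corrected[:, 2] -= 0.2

theta = np.zeros(2)  # initially indifferent to both features
theta = update_objective(theta, planned, corrected)
print(theta)  # the height weight goes negative: lower paths now score higher
```

After a few such corrections, replanning under the updated weights produces trajectories that already respect the human’s preference, which is the behavior shown on the right side of the figure above.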

Andrea Bajcsy
Andrea Bajcsy is a Ph.D. student in Electrical Engineering and Computer Sciences at the University of California, Berkeley. She received her B.S. degree in Computer Science at the University of Maryland and was awarded the NSF Graduate Research Fellowship in 2016. At Berkeley, she works in the Interactive Autonomy and Collaborative Technologies Laboratory researching physical human-robot interaction.

Dylan P. Losey


Dylan P. Losey received the B.S. degree in mechanical engineering from Vanderbilt University, Nashville, TN, USA, in 2014, and the M.S. degree in mechanical engineering from Rice University, Houston, TX, USA, in 2016.

He is currently working towards the Ph.D. degree in mechanical engineering at Rice University, where he has been a member of the Mechatronics and Haptic Interfaces Laboratory since 2014.  In addition, between May and August 2017, he was a visiting scholar in the Interactive Autonomy and Collaborative Technologies Laboratory at the University of California, Berkeley.  He researches physical human-robot interaction; in particular, how robots can learn from and adapt to human corrections.

Mr. Losey received an NSF Graduate Research Fellowship in 2014, and the 2016 IEEE/ASME Transactions on Mechatronics Best Paper Award as a first author.


Robots in Depth with Franziska Kirstein

In this episode of Robots in Depth, Per Sjöborg speaks with Franziska Kirstein, Human-Robot Interaction Expert and Project Manager at Blue Ocean Robotics, about her experience as a linguist working with human-robot interaction. We get to hear about what works and what doesn’t when non-engineer users are tasked with teaching robots different movements. Franziska also describes some of the challenges with kinesthetic guidance and alternative methods that can be used. She then presents some of the projects she is involved in, including one in robot-assisted health care and one involving social robots.

Thousands jam to see Jen-Hsun Huang’s keynote at GPU Developers Conference

In a 2+ hour talk that filled the Keynote Hall and spillover rooms at the San Jose McEnery Convention Center, with thousands of people in line for hours beforehand, Nvidia CEO Jen-Hsun Huang, in characteristic jeans, leather jacket, and humble humor, described the world of graphics processing units (GPUs) with brilliant images and memorable one-liners:

  • “Data is the new source code”
  • “Simulation is the key to learning”
  • “AI is the turbocharger of software and deep learning is the catalyst for AI”
  • “Everything that moves will be autonomous”
  • “Robotics boosts every industry”

Most of Nvidia’s revenue comes from GPUs for gaming, super-capable ray-tracing professional graphics, and extraordinarily powerful supercomputers for data centers. Most of its current research and development goes into AI-ready chips that let clients develop machine learning and deep learning models and apps. Nvidia is banking on these development chips becoming the chips of the future.

In AI-focused healthcare, this covers CLARA, a deep learning engine that takes present-day black-and-white 2D scans (sonogram, PET, and MRI) and enhances the data into 3D, color renderings. In one demonstration, a flat black-and-white ultrasound sonogram was enhanced into a fully rendered 3D picture of a baby.

In the area of robotics this covers cramming AI, deep learning and real-time control and simulation into chips for autonomous vehicles of all types (cars, trucks, mobile robots).

Nvidia boasts 370 partners in the autonomous vehicle market, including developers of cars, trucks, mobility services, mapping, LiDAR, and camera/radar systems, plus startups and suppliers, all using various Nvidia GPUs.

Ever the salesman, Huang introduced the Isaac SDK for robotics, meant to accelerate the creation of autonomous machines built on the Nvidia Jetson embedded platform, with support for simulation.

Finally, in a very convincing demonstration, Huang showed a virtual-reality car on the screen, then a very real human nearby at a control console. That person placed his virtual avatar behind the wheel and proceeded to remotely drive a real car outside the convention center, steering it around an obstacle and over to a parking lot, where he parked it. Very impressive.

$1 million in awards

Nvidia encouraged 200 startups to participate in a three-segment contest to share $1 million in awards. All received hardware grants, training with deep learning experts, and marketing support. Two finalists were picked from each of three categories: healthcare, enterprise, and autonomous systems. Kinema Systems, the Silicon Valley material handling company whose robots can depalletize a mixed-case pallet at full speed, won the autonomous systems award and received $333,333.

Cheetah III robot preps for a role as a first responder

Associate professor of mechanical engineering Sangbae Kim and his team at the Biomimetic Robotics Lab developed the quadruped robot, the MIT Cheetah.
Photo: David Sella

By Eric Brown

If you were to ask someone to name a new technology that emerged from MIT in the 21st century, there’s a good chance they would name the robotic cheetah. Developed by the MIT Department of Mechanical Engineering’s Biomimetic Robotics Lab under the direction of Associate Professor Sangbae Kim, the quadruped MIT Cheetah has made headlines for its dynamic legged gait, speed, jumping ability, and biomimetic design.

The dog-sized Cheetah II can run on four articulated legs at up to 6.4 meters per second, make mild running turns, and leap to a height of 60 centimeters. The robot can also autonomously determine how to avoid or jump over obstacles.
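As a back-of-the-envelope check on what that leap requires, the ballistic relation v = sqrt(2gh) gives the vertical takeoff velocity the legs must generate (ignoring crouch depth and air resistance):

```python
import math

g = 9.81            # gravitational acceleration, m/s^2
jump_height = 0.60  # m, the Cheetah II's reported leap

# From v^2 = 2*g*h: vertical velocity needed at takeoff.
takeoff_velocity = math.sqrt(2 * g * jump_height)
print(f"takeoff velocity: {takeoff_velocity:.2f} m/s")  # about 3.43 m/s
```

Accelerating the robot’s whole body to roughly 3.4 m/s within a single leg stroke is part of what drives the demanding actuator requirements discussed below.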

Kim is now developing a third-generation robot, the Cheetah III. Instead of improving the Cheetah’s speed and jumping capabilities, Kim is converting the Cheetah into a commercially viable robot with enhancements such as a greater payload capability, a wider range of motion, and a dexterous gripping function. The Cheetah III will initially act as an inspection robot in hazardous environments such as a compromised nuclear plant or chemical factory. It will then evolve to serve other emergency response needs.

“The Cheetah II was focused on high speed locomotion and agile jumping, but was not designed to perform other tasks,” says Kim. “With the Cheetah III, we put a lot of practical requirements on the design so it can be an all-around player. It can do high-speed motion and powerful actions, but it can also be very precise.”

The Biomimetic Robotics Lab is also finishing up a smaller, stripped-down version of the Cheetah, called the Mini Cheetah, designed for robotics research and education. Other projects include a teleoperated humanoid robot called the Hermes that provides haptic feedback to human operators. There’s also an early-stage investigation into applying Cheetah-like actuator technology to address mobility challenges among the disabled and elderly.

Conquering mobility on the land

“With the Cheetah project, I was initially motivated by copying land animals, but I also realized there was a gap in ground mobility,” says Kim. “We have conquered air and water transportation, but we haven’t conquered ground mobility because our technologies still rely on artificially paved roads or rails. None of our transportation technologies can reliably travel over natural ground or even man-made environments with stairs and curbs. Dynamic legged robots can help us conquer mobility on the ground.”

One challenge with legged systems is that they “need high torque actuators,” says Kim. “A human hip joint can generate more torque than a sports car, but achieving such condensed high torque actuation in robots is a big challenge.”

Robots tend to achieve high torque at the expense of speed and flexibility, says Kim. Factory robots use high torque actuators, but they are rigid and cannot absorb the impact energy that results from climbing steps. Hydraulically powered, dynamic legged robots, such as the larger, higher-payload quadruped BigDog from Boston Dynamics, can achieve very high force and power, but at the expense of efficiency. “Efficiency is a serious issue with hydraulics, especially when you move fast,” he adds.

A chief goal of the Cheetah project has been to create actuators that can generate high torque in designs that imitate animal muscles while also achieving efficiency. To accomplish this, Kim opted for electric rather than hydraulic actuators. “Our high torque electric motors have exceeded the efficiency of animals with biological muscles, and are much more efficient, cheaper, and faster than hydraulic robots,” he says.
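The trade-off Kim describes can be made concrete with simple gearbox math: gearing a motor up by a ratio N multiplies its torque by N, but multiplies the rotor inertia reflected at the joint by N squared, which is what makes highly geared joints rigid and bad at absorbing impacts. Here is a minimal sketch with made-up numbers, not the Cheetah’s actual specifications:

```python
def joint_properties(motor_torque, rotor_inertia, gear_ratio, efficiency=0.9):
    """Reflected properties at the output of a geared electric motor.
    Torque scales with N; reflected inertia scales with N**2, which is
    why low gear ratios keep a leg backdrivable."""
    return {
        "output_torque_Nm": motor_torque * gear_ratio * efficiency,
        "reflected_inertia_kgm2": rotor_inertia * gear_ratio**2,
    }

# The same hypothetical high-torque motor behind two different ratios.
motor_torque = 25.0   # Nm peak (illustrative)
rotor_inertia = 5e-4  # kg*m^2 (illustrative)

print(joint_properties(motor_torque, rotor_inertia, gear_ratio=6))
# ~135 Nm output, reflected inertia 0.018 kg*m^2
print(joint_properties(motor_torque, rotor_inertia, gear_ratio=100))
# ~2250 Nm on paper, but reflected inertia of 5.0 kg*m^2

# The 100:1 joint looks far stronger, yet its ~280x higher reflected
# inertia makes it stiff: impact energy loads the gearbox instead of
# backdriving the motor the way an animal's limb gives way.
```

This is the design logic behind pairing high-torque-density motors with modest gear ratios rather than chasing torque through large reductions.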

Cheetah III: More than a speedster

Unlike the earlier versions, the Cheetah III design was motivated more by potential applications than pure research. Kim and his team studied the requirements for an emergency response robot and worked backward.

“We believe the Cheetah III will be able to navigate in a power plant with radiation in two or three years,” says Kim. “In five to 10 years it should be able to do more physical work like disassembling a power plant by cutting pieces and bringing them out. In 15 to 20 years, it should be able to enter a building fire and possibly save a life.”

In situations such as the Fukushima nuclear disaster, robots or drones are the only safe choice for reconnaissance. Drones have some advantages over robots, but they cannot apply large forces necessary for tasks such as opening doors, and there are many disaster situations in which fallen debris prohibits drone flight.

By comparison, the Cheetah III can apply human-level forces to the environment for hours at a time. It can often climb or jump over debris, or even move it out of the way. Compared to a drone, it’s also easier for a robot to closely inspect instrumentation, flip switches, and push buttons, says Kim. “The Cheetah III can measure temperatures or chemical compounds, or close and open valves.”

Advantages over tracked robots include the ability to maneuver over debris and climb stairs. “Stairs are some of the biggest obstacles for robots,” says Kim. “We think legged robots are better in man-made environments, especially in disaster situations where there are even more obstacles.”

The Cheetah III was slowed down a bit compared to the Cheetah II, but also given greater strength and flexibility. “We increased the torque so it can open the heavy doors found in power plants,” says Kim. “We increased the range of motion to 12 degrees of freedom by using 12 electric motors that can articulate the body and the limbs.”

This is still far short of the flexibility of animals, which have over 600 muscles. Yet, the Cheetah III can compensate somewhat with other techniques. “We maximize each joint’s work space to achieve a reasonable amount of reachability,” says Kim.

The design can even use the legs for manipulation. “By utilizing the flexibility of the limbs, the Cheetah III can open the door with one leg,” says Kim. “It can stand on three legs and equip the fourth limb with a customized swappable hand to open the door or close a valve.”

The Cheetah III has an improved payload capability to carry heavier sensors and cameras, and possibly even to drop off supplies to disabled victims. However, it’s a long way from being able to rescue them. The Cheetah III is still limited to a 20-kilogram payload, and can travel untethered for four to five hours with a minimal payload.

“Eventually, we hope to develop a machine that can rescue a person,” says Kim. “We’re not sure if the robot would carry the victim or bring a carrying device,” he says. “Our current design can at least see if there are any victims or any more potentially dangerous events.”

Experimenting with human-robot interaction

The semiautonomous Cheetah III can make ambulatory and navigation decisions on its own. However, for disaster work, it will primarily operate by remote control.

“Fully autonomous inspection, especially in disaster response, would be very hard,” says Kim. Among other issues, autonomous decision making often takes time, and can involve trial and error, which could delay the response.

“People will control the Cheetah III at a high level, offering assistance but not handling every detail,” says Kim. “People could tell it to go to a specific location on the map, find this place, and open that door. When it comes to hand action or manipulation, the human will take over more control and tell the robot what tool to use.”
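In software terms, this is a shared-autonomy split: coarse navigation goals are delegated to the robot’s own planners, while manipulation steps hand authority back to the operator. Here is a hypothetical sketch of how such a mission might be scripted; the command names and structure are invented for illustration, not taken from the lab’s software.

```python
from dataclasses import dataclass
from enum import Enum, auto

class ControlLevel(Enum):
    """Who is in charge of executing a step."""
    AUTONOMOUS = auto()    # robot plans gait, footsteps, obstacle avoidance
    TELEOPERATED = auto()  # operator directly commands the limb or tool

@dataclass
class Command:
    description: str
    level: ControlLevel

# Coarse goals are delegated; hand actions are taken over by the human,
# mirroring the division of labor Kim describes.
mission = [
    Command("go to valve room marked on the map", ControlLevel.AUTONOMOUS),
    Command("find the door", ControlLevel.AUTONOMOUS),
    Command("grasp handle with the swappable hand", ControlLevel.TELEOPERATED),
    Command("open the door", ControlLevel.TELEOPERATED),
]

for cmd in mission:
    who = "operator" if cmd.level is ControlLevel.TELEOPERATED else "robot"
    print(f"{cmd.description}: {who} in control")
```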

Humans may also be able to assist with more instinctive controls. For example, if the Cheetah uses one of its legs as an arm and then applies force, it’s hard to maintain balance. Kim is now investigating whether human operators can use “balanced feedback” to keep the Cheetah from falling over while applying full force.

“Even standing on two or three legs, it would still be able to perform high force actions that require complex balancing,” says Kim. “The human operator can feel the balance, and help the robot shift its momentum to generate more force to open or hammer a door.”
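One generic way to build such a cue, sketched below, is to measure how far the robot’s center of mass has drifted from the centroid of its supporting feet and render that offset as a force at the operator’s interface. The balance metric and gains here are illustrative assumptions, not the lab’s actual controller.

```python
import numpy as np

def balance_feedback(com_xy, support_feet_xy, gain=40.0, max_force=20.0):
    """Map the robot's balance state to a haptic force for the operator.

    Balance proxy: horizontal offset of the center of mass (CoM) from
    the centroid of the feet in contact. The operator feels a pull in
    the direction the robot is tipping and can shift weight to react."""
    centroid = np.mean(support_feet_xy, axis=0)
    offset = np.asarray(com_xy) - centroid
    force = gain * offset
    norm = np.linalg.norm(force)
    if norm > max_force:  # saturate so the cue stays comfortable and safe
        force *= max_force / norm
    return force

# Three feet on the ground while the fourth limb pushes on a door;
# the CoM drifts toward the door as the push force ramps up.
feet = np.array([[0.3, 0.2], [0.3, -0.2], [-0.3, 0.2]])
com = [0.15, 0.05]
print(balance_feedback(com, feet))  # cue: shift weight back to compensate
```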

The Biomimetic Robotics Lab is exploring balanced feedback with another robot project called Hermes (Highly Efficient Robotic Mechanisms and Electromechanical System). Like the Cheetah III, it’s a fully articulated, dynamic legged robot designed for disaster response. Yet the Hermes is bipedal, and completely teleoperated by a human who wears a telepresence helmet and a full body suit rigged with sensors and haptic feedback devices.

“The operator can sense the balance situation and react by using body weight or directly implementing more forces,” says Kim.

The latency required for such intimate real-time feedback is difficult to achieve with Wi-Fi, even when it’s not blocked by walls, distance, or wireless interference. “In most disaster situations, you would need some sort of wired communication,” says Kim. “Eventually, I believe we’ll use reinforced optical fibers.”

Improving mobility for the elderly

Looking beyond disaster response, Kim envisions an important role for agile, dynamic legged robots in health care: improving mobility for the fast-growing elderly population. Numerous robotics projects are targeting the elderly market with chatty social robots. Kim is imagining something more fundamental.

“We still don’t have a technology that can help impaired or elderly people seamlessly move from the bed to the wheelchair to the car and back again,” says Kim. “A lot of elderly people have problems getting out of bed and climbing stairs. Some elderly with knee joint problems, for example, are still pretty mobile on flat ground, but can’t climb down the stairs unassisted. That’s a very small fraction of the day when they need help. So we’re looking for something that’s lightweight and easy to use for short-time help.”

Kim is currently working on “creating a technology that could make the actuator safe,” he says. “The electric actuators we use in the Cheetah are already safer than other machines because they can easily absorb energy. Most robots are stiff, which would cause a lot of impact forces. Our machines give a little.”

By combining such safe actuator technology with some of the Hermes technology, Kim hopes to develop a robot that can help elderly people in the future. “Robots can not only address the expected labor shortages for elder care, but also the need to maintain privacy and dignity,” he says.

European digital innovation hub strengthens robotics development

Digital innovation hubs are vital for creating innovative solutions and employment. ROBOTT-NET is an example of such a hub: four leading Research Technology Organizations (RTOs) in Europe aiming to strengthen robotics development and the competitiveness of European manufacturing.

The main objective of ROBOTT-NET is to create a sustainable European infrastructure to support novel robotic technologies on their path to market.

ROBOTT-NET includes the Danish Technological Institute (DTI, DK), Fraunhofer IPA (DE), Tecnalia (ES), and The Manufacturing Technology Centre (MTC, UK).

The initiative combines European competencies in state-of-the-art applied robotics and enables companies to benefit from Danish, German, Spanish, and British expertise, says project coordinator Kurt Nielsen of DTI’s Centre for Robot Technology.

By offering highly qualified consulting services, ROBOTT-NET provides easy access to specialist expertise and helps companies of all sizes bring their ideas to market and optimize their production.

During the project, ROBOTT-NET has arranged Open Labs in Denmark, Germany, England, and Spain, where companies can learn about opportunities in robotics.

Any European company that wants to use or produce robots has been invited to apply for a voucher, the backbone of the program. Big manufacturers, garage start-ups, and everything in between were eligible, as long as the idea was concrete enough.

Sixty-four projects were selected and received a voucher, entitling each company to approximately 400 hours of free consulting with robotics experts from across Europe at the four partner locations.

Among these 64 projects, ROBOTT-NET has assisted Orifarm in developing a vision system for identifying new medicine boxes, helped Air Liquide develop an autonomous mobile robot for handling high-pressure containers, and supported Trumpf in developing a robot gripper that can perform unstructured bin picking of large metal sheets.

Additionally, eight projects will be selected for a ROBOTT-NET pilot, which will help the companies develop their voucher work through the proof-of-concept level and accelerate it toward commercialization.

Check out the voucher page on ROBOTT-NET.eu and get inspired by the work that has already been done between European companies and Research Technology Organizations.
