

#279: Safe Robot Learning on Hardware, with Jaime Fernández Fisac



In this episode, Audrow Nash interviews Jaime Fernández Fisac, a PhD student at the University of California, Berkeley, in Anca Dragan’s InterACT Lab. Fisac is interested in ensuring that autonomous systems such as self-driving cars, delivery drones, and home robots can operate and learn in the world while satisfying safety constraints. Towards this goal, he discusses several examples of his work with unmanned aerial vehicles and talks about safe robot learning in general, including the curse of dimensionality and how it impacts control problems (and how some systems can be decomposed into simpler control problems), how simulation can be leveraged before learning is attempted on a physical robot, safe sets, and how a robot can modify its behavior based on how confident it is that its model is correct.

Below are two videos of work that was discussed during the interview.  The top video is on a framework for learning-based control, and the bottom video discusses adjusting the robot’s confidence about a human’s actions based on how predictably the human is behaving.

Jaime Fernández Fisac

Jaime Fernández Fisac is a final-year Ph.D. candidate in Electrical Engineering and Computer Sciences at the University of California, Berkeley. He received a B.S./M.S. degree in Electrical Engineering from the Universidad Politécnica de Madrid, Spain, in 2012, and an M.Sc. in Aeronautics from Cranfield University, U.K., in 2013. He is a recipient of the La Caixa Foundation fellowship. His research interests lie between control theory and artificial intelligence, with a focus on safety assurance for autonomous systems. He works to enable AI systems to reason explicitly about the gap between their models and the real world, so that they can safely interact with uncertain environments and human beings, even under inaccurate assumptions.

Links

Mouser Electronics – TE Connectivity HDC Dynamic Module

TE Connectivity’s HDC Dynamic Module combines the Dynamic series of flexible signal and power solutions with the HDC Heavy Duty Connector series to form a family of connectors for harsh environments. The module offers the top features of both series: it uses the contact concept of the Dynamic series, with its proven performance in industrial applications and its cost effectiveness compared to legacy cutting contacts, while the HDC connectors make it a reliable solution for harsh environments. TE’s HDC Dynamic Module supports performance from 2 A/32 V to 40 A/300 V and from 3 to 48 positions.

SICK is taking safety to the next level with collaborative robot systems

There is an increasing demand for collaborative robots that can work autonomously and adapt to changing production conditions. This requires reliable sensors that detect human presence and that can keep pace with the challenges posed by the ongoing development of collaborative technologies.

A new machine learning based intention detection method using first-person-view camera for Exo Glove Poly II

A Korean research team has proposed a new paradigm for a wearable hand robot that can aid people who have lost hand mobility. The robot collects data on the user’s behavior and uses a machine learning algorithm to determine the user’s intention.

ASU’s Southwest Robotics Symposium previews the new technology guiding the next wave of human-robot interaction

“We can rely on the brain of the human and the muscles, eyes and sense of touch of the robot in places where humans cannot, or should not, be,” said Khatib. “For example, we will be able to safely repair underwater structures

A robot recreates the walk of a fossilized animal

OroBOT – Credit: Maxime Marendaz

Using the fossil and fossilized footprints of a 300-million-year-old animal, scientists from EPFL and Humboldt-Universität zu Berlin have identified the most likely gaits of extinct animals and designed a robot that can recreate an extinct animal’s walk. This study can help researchers better understand how vertebrate locomotion evolved over time.

How did vertebrates walk 300 million years ago? Could they already stand upright on their legs? Did they move in a balanced, energy-efficient way? Scientists at EPFL’s Biorobotics Laboratory – supported by the National Center of Competence in Research (NCCR) Robotics – and the Interdisciplinary Laboratory Image Knowledge Gestaltung at Humboldt-Universität zu Berlin set out to answer these questions. Using the fossilized skeleton and footprints of Orobates pabsti – a vertebrate that, on the evolutionary tree, comes between amphibians on one hand and reptiles and mammals on the other – the scientists created computer simulations and a robot. Drawing on experimental studies of four living amphibian and reptile species, they used these tools to gauge how plausible different ways of walking were for the fossilized animal.

“Orobates is an ideal candidate for understanding how land vertebrates evolved because it is in the lineage leading to modern amniotes. These animals developed in eggs laid on land and became largely independent of water,” says John Nyakatura, a professor at Humboldt-Universität. What’s more, Orobates is the oldest-known vertebrate for which scientists have been able to link a fossil with its fossilized footprints. “This combination is what enabled us to carry out our unique quantitative study, which paves the way to replicating the walk of other fossilized animals,” says NCCR Robotics Professor Auke Ijspeert. The researchers’ findings appear in Nature.

A motion-based model and then a robotic one

To better understand how Orobates walked and pinpoint just how advanced its locomotion was, the scientists at Humboldt-Universität developed a digital model of its skeleton based on the animal’s fossil and the biomechanics of modern animals with sprawling postures. They used this model to carry out the first kinematic computer simulation of Orobates’ gait as it walks along its digitized footprints. This simulation focuses on movements (rather than forces) and identifies gaits in which the animal’s bones do not collide or come out of their joints.
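
To make this kinematic filtering step concrete, here is a minimal sketch of how such a feasibility check could look; the data layout, the joint-limit table, and the collision-check callable are assumptions for illustration, not the study’s actual code:

```python
def kinematically_feasible(poses, joint_limits, collides):
    """Minimal feasibility filter in the spirit described above (assumed data
    layout, not the study's code): keep a candidate gait only if every pose
    in the cycle is collision-free and keeps each joint within its range.

    poses:        list of {joint_name: angle_in_degrees} dicts over one cycle
    joint_limits: {joint_name: (min_deg, max_deg)}
    collides:     callable(pose) -> True if any bones intersect in that pose
    """
    for pose in poses:
        if collides(pose):
            return False
        for joint, angle in pose.items():
            lo, hi = joint_limits[joint]
            if not lo <= angle <= hi:
                return False
    return True

# Toy usage with a stubbed collision check.
limits = {"shoulder": (-40, 60), "elbow": (0, 120)}
cycle = [{"shoulder": 10, "elbow": 45}, {"shoulder": 55, "elbow": 90}]
print(kinematically_feasible(cycle, limits, collides=lambda pose: False))
```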

In parallel, two scientists at EPFL’s Biorobotics Laboratory – post-doctoral researcher Kamilo Melo and PhD student Tomislav Horvat, both members of NCCR Robotics at the time of the study – used the fossilized animal anatomy to build a robot called OroBOT. Designed and scaled to match the shape and movements of the extinct animal, OroBOT was used to calculate the physics of how Orobates walked. “We tested our hypotheses about the animal’s locomotion dynamics with our robotic model, which factors in the real-world physics of the animal’s gait,” says Melo.

Tomislav Horvat and Kamilo Melo – Credit: Maxime Marendaz

Testing hundreds of different gaits, based on contemporary animals

The interdisciplinary team of scientists tested hundreds of different gaits with their robot in order to determine which ones Orobates could have used – and those that it clearly did not. The gaits they tested were based on biomechanical principles extracted from similar modern-day animals such as caimans, salamanders, iguanas and skinks, which they analyzed through X-ray videos and force measurements. “We studied the biomechanics of their movements and determined which mechanical principles they all followed”, says Nyakatura. The research team looked at three features in particular: how erect the animal stood on its legs; how its backbone bent; and how much its elbow or shoulder joints bent as it walked. These three features determine what the researchers call the animal’s “sprawling gait space”. They created a powerful interactive website where fellow scientists – and the wider public – can explore the universe of movements that Orobates could have used.

With these results, they came up with the most likely ways that Orobates may have walked. They scored the gaits based on how much energy was required, how stable the movements were, how the leg forces compared with those of other sprawling animals, and how closely the movements aligned with the fossilized footprints. The gaits with good scores appear quite athletic and most closely resemble the movements of caimans. This suggests that Orobates probably already held itself a little upright on its legs – unlike salamanders and skinks. Its locomotion was thus more advanced — more upright, balanced and mechanically power-saving — than had been previously thought.
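
As a rough illustration of this kind of multi-criterion scoring, the sketch below combines the four factors named above into a single weighted score; the metric names, the normalisation to [0, 1], and the equal weights are illustrative assumptions, not the values used in the study:

```python
def score_gait(gait, weights=None):
    """Hypothetical multi-criterion score for a candidate gait.

    `gait` is assumed to provide four metrics already normalised to [0, 1],
    where 1 is better (low energy use, high stability, force patterns close
    to those of sprawling animals, close match to the fossil trackway). The
    equal weights are purely illustrative.
    """
    weights = weights or {"energy": 0.25, "stability": 0.25,
                          "force_similarity": 0.25, "footprint_match": 0.25}
    return sum(weights[k] * gait[k] for k in weights)

# Candidate gaits, e.g. sampled from the "sprawling gait space"
# (body height, spine bending, limb-joint flexion).
candidates = [
    {"energy": 0.7, "stability": 0.8, "force_similarity": 0.6, "footprint_match": 0.9},
    {"energy": 0.4, "stability": 0.5, "force_similarity": 0.7, "footprint_match": 0.5},
]
print(max(candidates, key=score_gait))  # the caiman-like, more upright gait wins
```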

The study concludes that advanced locomotion, as in Orobates, may have evolved before the common ancestor of reptiles and mammals lived. The novel approach developed for this study can be applied by other scientists in their work, and it could be modified to study other evolutionary transitions, such as the origins of flight or galloping gaits in mammals. Last but not least, being able to select the most efficient gait for any given morphology is of fundamental importance for the walking robots that NCCR Robotics researchers are developing, in particular for search-and-rescue applications.

Orobates fossil – Credit: Maxime Marendaz


Literature

John A. Nyakatura, Kamilo Melo, Tomislav Horvat, Kostas Karakasiliotis, Vivian R. Allen, Amir Andikfar, Emanuel Andrada, Patrick Arnold, Jonas Lauströer, John R. Hutchinson, Martin S. Fischer and Auke J. Ijspeert. “Reverse-engineering the locomotion of a stem amniote”, Nature, 17 January 2019.

From robotic companions to third thumbs, machines can change the human brain

Cozmo robots and their corresponding tablets are being distributed to participants to take home so that they can interact with them for a week for an experiment being carried out by social robotics professor Emily Cross. Image credit – Ruud Hortensius and Emily Cross
By Frieda Klotz

People’s interactions with machines, from robots that throw tantrums when they lose a colour-matching game against a human opponent to the bionic limbs that could give us extra abilities, are not just revealing more about how our brains are wired – they are also altering them.

Emily Cross is a professor of social robotics at the University of Glasgow in Scotland who is examining the nature of human-robot relationships and what they can tell us about human cognition.

She defines social robots as machines designed to engage with humans on a social level – from online chatbots to machines with a physical presence, for example, those that check people into hotel rooms.

According to Prof. Cross, because robots can be programmed to perform and replicate specific behaviours, unlike humans, whose behaviour varies, they make excellent tools for shedding light on how our brains work.

‘The central tenets to my questions are, can we use human-robot interaction to better understand the flexibility and fundamental mechanisms of social cognition and the human brain,’ she said.

Brain imaging shows that a sad, happy or neutral robotic expression will engage the same parts of the brain as a human face with similar expressions.

Through their project called Social Robots, Prof. Cross and her team are using neural decoding techniques to probe the extent to which human feelings towards a robot change depending on how it behaves.

Tantrums

When the robots used in the project lose a game, they alternate between throwing tantrums and appearing dejected. ‘So far, people actually find it really funny when the robot gets angry,’ she said. ‘But people do respond to them quite strongly and that’s really interesting to see.’

Having robots as colleagues has been shown to affect humans in complex ways. Researchers at the University of Washington found that when soldiers used robots in bomb disposal, they developed emotional attachments towards them and felt frustration, anger or sadness if their robot was destroyed.

Prof. Cross says that from an evolutionary perspective, this doesn’t make sense. ‘We care about people and perhaps animals that might help us or hurt us,’ she said. ‘But with machines it’s a bit more of a mystery and understanding how far we can push that (to develop social relationships with machines) is a really, really fascinating question.’

It’s important to understand these dynamics since, as she points out, robots are already working as companions in nursing homes or even as tutors in early childhood education. Home care and education are prime areas of social robotics research, with R&D efforts focusing on adults suffering from dementia and young children.

Ten-hour rule

Typically, studies on such groups observe interactions over a relatively short time-span. They rarely exceed what Prof. Cross describes as a ten-hour rule, beyond which study participants tend to get bored of their robotic toys. But her team is looking at how feelings towards robots evolve over time.

As part of the project, the researchers send a palm-sized Cozmo robot home with study participants and instruct them to interact with it every day for a week by playing games or introducing it to their friends and pets. The participants’ brains are imaged at the start and end of that period to track changes.

‘If we’re going to have robots in our home environment, if they’re going to be in our schools teaching our kids across weeks, if not years, if they’re going to be people’s social companions, we want to know a lot more than just what happens after ten hours (of exposure),’ she said.

‘We want to know how people’s social bonds and relationships to robots change across many, many more hours.’

With such technologies set to become a bigger part of our future, other studies are investigating how the brain reacts to a different kind of robot – wearable robotic limbs that augment the body, providing extra abilities.

Wearables could have social and healthcare benefits. For instance, a third arm could help surgeons carry out procedures more safely without relying on human assistants, enable people to complete their household chores much faster, or help construction workers.

But even as the technology’s capabilities develop apace, Dr Tamar Makin, a neuroscientist at University College London, UK, is exploring what it would take for the brain to accept and operate a robotic appendage as part of the body, through a five-year project called Embodied Tech.

Additional thumb

In order to understand how the brain deals with an extra body part, Dr Makin’s team asks participants to wear an additional opposable thumb for a week. Created by a designer named Dani Clode, the thumb is controlled by pressure sensors worn on the big toes.

Product designer Dani Clode created a prosthetic opposable thumb for people to wear as an extra digit. Video credit: Dani Clode

With the additional thumb, the augmented hand almost has the capabilities of two hands, giving people extra capacity to carry out actions. The question is what effect that has on the brain.

The study is still underway but preliminary results indicate that the presence of an extra thumb alters the brain’s internal map of what the biological hand looks like. Scans show that the brain represents the fingers as collapsing onto each other, away from the thumb and index finger.

This mirrors what happens in diseases like dystonia, in which the representation of the fingers begins to merge – for instance, when musicians use their fingers excessively – causing cramp-like pain. The same effect could theoretically cause pain in the wearer of an extra thumb.

‘One important interim message we have is that there are potential costs, not just benefits, to using augmentation technology,’ said Dr Makin.

She believes that the newness of human augmentation means there are lots of unanswered questions but it’s vital to explore the challenges of wearable robotics in order to fully realise the promises, such as multitasking or safer working conditions.

‘I feel like we have a responsibility to gain a much better understanding of how having good control of an additional body part is going to change the representation of the body parts you already have.’

The research in this article was funded by the European Research Council.

The news from CES is there wasn’t much news from CES

The Bell Helicopter tilt-rotor, ducted-fan hybrid aircraft drew a giant crowd when the hall was open.

My feet are aching, as usual, after 3 days on the CES show floor, and the question people always ask others there is “what have you seen that was interesting?”

I won’t say I didn’t see anything interesting, and I had a large number of rewarding conversations with all sorts of companies, making the trip very worthwhile, but I will say I saw less that was new and exciting than ever before. This may be a result of the show’s constant growth: in 3 days, focusing on cars as I usually do, I still did not manage to get to 3 1/2 of the show’s major halls.

A larger fraction of CES is not even remotely consumer electronics. In fact, the reason I go is largely the robocar-related companies, none of which are selling to consumers at this time, and most of which never plan to. And yes, the amazing new TVs from Samsung, LG and others are always a draw — LG always creates an astonishing video wall, and all the companies were now displaying 8K TVs as a real product (even if a very expensive one for which no video sources are available, but which will be fantastic for still photos).

My favourite part of the show, Eureka Park, disappointed this year, even though I gave it most of a day. The proportion of products that are poorly thought through is always high, but you search for gems that might someday change the world. I saw few Eureka moments.

Several fields, such as IoT/smart home, were growing and maturing, with lots more companies, but at the same time, very little in smart home moves beyond “Oh, that sounds nice” into the truly appealing and useful. I’ve had computer-controlled lights for decades. They are handy, and I would rather have them than not have them, but they aren’t essentials the way my phone and a dozen apps within it are, things I can’t do without any more.

There were, again, a ton of different LIDAR companies. What’s interesting about the LIDAR crowd is that they are almost all quite different, in many cases taking entirely different approaches to the technology — the output, the steering, the sensing, the band and even the core technology. I’ll have more coverage of that later. Of course these are not for consumers.

Other press were equally unexcited — I saw far less coverage of the “hot new thing” in the media this year as the show opened and progressed. This was the year of mostly incremental improvements.

Flying-car-style vehicles did generate excitement, but they were barely there. The most buzz was about Bell Helicopter’s “Nexus” tilt-rotor hybrid vehicle on display. It’s a high-end vehicle, with 6 giant ducted-fan tilt-rotors, and very much a prototype. Bell knows a lot about tilt-rotors from its work on the V-22 Osprey, a vehicle with a rather checkered history, so it’s not clear how this will go. The few other companies doing flying vehicles (for people) had tiny booths showing nothing.

The Nexus is a hybrid-hybrid: it combines multirotor and fixed-wing flight, and draws on both electric batteries and a gasoline-powered generator for power.

There was some interesting battery technology, including a small startup named GBatteries which claimed that by pulsing the charge voltage they could charge at vastly faster speeds — 5C, or 10 minutes from empty to 80%, and 5 minutes to 50%, without any more heat or battery damage than today’s fast charging (which takes about an hour.) Their demo was just on a drill, so it is yet to be proven in a car, but it would change some of the usability of battery devices and electric cars.
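
For context, a quick back-of-the-envelope check shows those numbers are roughly what a sustained 5C rate implies (real charging curves taper off, so this is only an approximation):

```python
# A C-rate of 5 means charging at five times the battery's capacity per hour,
# i.e. a nominal full charge in 60 / 5 = 12 minutes at a constant rate.
c_rate = 5
minutes_full = 60 / c_rate
print(f"~{0.8 * minutes_full:.0f} min to 80%")   # ~10 min
print(f"~{0.5 * minutes_full:.0f} min to 50%")   # ~6 min
```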

I saw other interesting battery tech, but there is some irony: because of all the investment made by Tesla and others in standard lithium-ion cells, these cells are now better than other, more innovative technologies, thanks to their mass production (driving down the cost) and heavy R&D (improving charge times and reducing battery degradation). The lithium-ion cell was created to be lightweight for laptops, where weight is absolutely crucial, but now it’s everywhere, from cars to scooters to even grid storage.

There was an uptick in mobility products beyond cars (or at least I am paying the most attention to them), including scooters, delivery, bike riding and more.

Identifying artificial intelligence “blind spots”

By Rob Matheson

A novel model developed by MIT and Microsoft researchers identifies instances in which autonomous systems have “learned” from training examples that don’t match what’s actually happening in the real world. Engineers could use this model to improve the safety of artificial intelligence systems, such as driverless vehicles and autonomous robots.

The AI systems powering driverless cars, for example, are trained extensively in virtual simulations to prepare the vehicle for nearly every event on the road. But sometimes the car makes an unexpected error in the real world because an event occurs that should, but doesn’t, alter the car’s behavior.

Consider a driverless car that wasn’t trained, and more importantly doesn’t have the sensors necessary, to differentiate between distinctly different scenarios, such as large, white cars and ambulances with red, flashing lights on the road. If the car is cruising down the highway and an ambulance flicks on its sirens, the car may not know to slow down and pull over, because it does not perceive the ambulance as different from a big white car.

In a pair of papers — presented at last year’s Autonomous Agents and Multiagent Systems conference and the upcoming Association for the Advancement of Artificial Intelligence conference — the researchers describe a model that uses human input to uncover these training “blind spots.”

As with traditional approaches, the researchers put an AI system through simulation training. But then, a human closely monitors the system’s actions as it acts in the real world, providing feedback when the system makes, or is about to make, any mistakes. The researchers then combine the training data with the human feedback data, and use machine-learning techniques to produce a model that pinpoints situations where the system most likely needs more information about how to act correctly.

The researchers validated their method using video games, with a simulated human correcting the learned path of an on-screen character. But the next step is to incorporate the model, along with human feedback, into traditional training and testing approaches for autonomous cars and robots.

“The model helps autonomous systems better know what they don’t know,” says first author Ramya Ramakrishnan, a graduate student in the Computer Science and Artificial Intelligence Laboratory. “Many times, when these systems are deployed, their trained simulations don’t match the real-world setting [and] they could make mistakes, such as getting into accidents. The idea is to use humans to bridge that gap between simulation and the real world, in a safe way, so we can reduce some of those errors.”

Co-authors on both papers are: Julie Shah, an associate professor in the Department of Aeronautics and Astronautics and head of the CSAIL’s Interactive Robotics Group; and Ece Kamar, Debadeepta Dey, and Eric Horvitz, all from Microsoft Research. Besmira Nushi is an additional co-author on the upcoming paper.

Taking feedback

Some traditional training methods do provide human feedback during real-world test runs, but only to update the system’s actions. These approaches don’t identify blind spots, which could be useful for safer execution in the real world.

The researchers’ approach first puts an AI system through simulation training, where it will produce a “policy” that essentially maps every situation to the best action it can take in the simulations. Then, the system will be deployed in the real world, where humans provide error signals in regions where the system’s actions are unacceptable.

Humans can provide data in multiple ways, such as through “demonstrations” and “corrections.” In demonstrations, the human acts in the real world, while the system observes and compares the human’s actions to what it would have done in that situation. For driverless cars, for instance, a human would manually control the car while the system produces a signal if its planned behavior deviates from the human’s behavior. Matches and mismatches with the human’s actions provide noisy indications of where the system might be acting acceptably or unacceptably.
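
A minimal sketch of how such demonstration feedback could be turned into noisy labels is shown below; the scalar action, the tolerance threshold, and the helper names are hypothetical, chosen only to illustrate the comparison the authors describe:

```python
def label_from_demonstration(policy, trajectory, tolerance=0.1):
    """Hypothetical sketch: compare the system's planned action against what
    a human demonstrator actually did in each observed state, and record a
    noisy acceptable/unacceptable label per state.

    policy:     callable mapping a state to the action the system would take
    trajectory: list of (state, human_action) pairs from one demonstration
    tolerance:  how far the planned action may deviate before it is flagged
    """
    feedback = []
    for state, human_action in trajectory:
        planned = policy(state)
        # A mismatch is only a *noisy* hint that the policy is wrong here:
        # the human may simply have picked a different but acceptable action.
        mismatch = abs(planned - human_action) > tolerance
        feedback.append((state, "unacceptable" if mismatch else "acceptable"))
    return feedback

# Toy usage with a scalar "steering" action and a made-up learned policy.
policy = lambda state: 0.0                    # always plans "go straight"
demo = [(s, 0.05 * s) for s in range(5)]      # the human gently steers away
print(label_from_demonstration(policy, demo))
```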

Alternatively, the human can provide corrections, with the human monitoring the system as it acts in the real world. A human could sit in the driver’s seat while the autonomous car drives itself along its planned route. If the car’s actions are correct, the human does nothing. If the car’s actions are incorrect, however, the human may take the wheel, which sends a signal that the system was acting unacceptably in that specific situation.

Once the feedback data from the human is compiled, the system essentially has a list of situations and, for each situation, multiple labels saying its actions were acceptable or unacceptable. A single situation can receive many different signals, because the system perceives many situations as identical. For example, an autonomous car may have cruised alongside a large car many times without slowing down and pulling over. But, in only one instance, an ambulance, which appears exactly the same to the system, cruises by. The autonomous car doesn’t pull over and receives a feedback signal that the system took an unacceptable action.

“At that point, the system has been given multiple contradictory signals from a human: some with a large car beside it, and it was doing fine, and one where there was an ambulance in the same exact location, but that wasn’t fine. The system makes a little note that it did something wrong, but it doesn’t know why,” Ramakrishnan says. “Because the agent is getting all these contradictory signals, the next step is compiling the information to ask, ‘How likely am I to make a mistake in this situation where I received these mixed signals?’”

Intelligent aggregation

The end goal is to have these ambiguous situations labeled as blind spots. But that goes beyond simply tallying the acceptable and unacceptable actions for each situation. If the system performed correct actions nine times out of 10 in the ambulance situation, for instance, a simple majority vote would label that situation as safe.

“But because unacceptable actions are far rarer than acceptable actions, the system will eventually learn to predict all situations as safe, which can be extremely dangerous,” Ramakrishnan says.

To that end, the researchers used the Dawid-Skene algorithm, a machine-learning method commonly used in crowdsourcing to handle label noise. The algorithm takes as input a list of situations, each with a set of noisy “acceptable” and “unacceptable” labels. It then aggregates all the data and uses probability calculations to identify patterns in the labels of predicted blind spots and patterns for predicted safe situations. Using that information, it outputs a single aggregated “safe” or “blind spot” label for each situation, along with its confidence level in that label. Notably, the algorithm can learn that, even in a situation where the system may have, for instance, performed acceptably 90 percent of the time, the situation can still be ambiguous enough to merit a “blind spot.”
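
Below is a simplified, generic Dawid-Skene-style EM sketch of this aggregation step, assuming binary “acceptable”/“unacceptable” labels pooled into a single confusion matrix; the synthetic data and initial parameters are made up for illustration, and the researchers’ actual formulation adds further machinery (for example, to keep rare “unacceptable” labels from being drowned out by a majority of “acceptable” ones):

```python
import numpy as np

def aggregate_blind_spots(label_lists, n_iter=50):
    """Simplified Dawid-Skene-style EM over noisy binary feedback.

    label_lists: one list per situation, containing 0 ("acceptable") or
                 1 ("unacceptable") labels gathered from human feedback.
    Returns the posterior probability that each situation is a blind spot.
    """
    # counts[i, l] = how many times situation i received label l
    counts = np.array([[labels.count(0), labels.count(1)]
                       for labels in label_lists], dtype=float)

    pi = np.array([0.5, 0.5])           # prior over {safe, blind spot}
    theta = np.array([[0.8, 0.2],       # theta[k, l] = P(label l | true class k)
                      [0.2, 0.8]])

    for _ in range(n_iter):
        # E-step: posterior over each situation's true class
        log_post = np.log(pi) + counts @ np.log(theta).T
        log_post -= log_post.max(axis=1, keepdims=True)
        post = np.exp(log_post)
        post /= post.sum(axis=1, keepdims=True)

        # M-step: re-estimate the class prior and the label-noise rates
        pi = post.mean(axis=0)
        theta = post.T @ counts
        theta /= theta.sum(axis=1, keepdims=True)

    return post[:, 1]

# Made-up feedback: situation 0 was flagged in most encounters, situation 1
# was never flagged, and situation 2 (the ambulance-style case) was flagged
# once in ten otherwise-identical encounters.
feedback = [[1, 1, 1, 1, 1, 0], [0] * 10, [0] * 9 + [1]]
print(aggregate_blind_spots(feedback))
```

The returned per-situation probabilities play the role of the low-to-high blind-spot likelihoods in the “heat map” described next.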

In the end, the algorithm produces a type of “heat map,” where each situation from the system’s original training is assigned low-to-high probability of being a blind spot for the system.

“When the system is deployed into the real world, it can use this learned model to act more cautiously and intelligently. If the learned model predicts a state to be a blind spot with high probability, the system can query a human for the acceptable action, allowing for safer execution,” Ramakrishnan says.
