Archive 30.06.2018


Robots in Depth with Spring Berman

In this episode of Robots in Depth, Per Sjöborg speaks with Spring Berman about her extensive experience in the field of swarm robotics.

One of the underlying ideas of the field is designing robot controllers similar to those used in nature by different kinds of animal swarms: systems that work without a leader. We get to hear how large numbers of robots can be used together to handle tasks that would not be possible with one robot or only a few. We are also introduced to the opportunities of mixing artificial animals with real ones.

Spring describes some of the challenges within swarm robotics, which can be as diverse as mathematical modelling and regulatory issues. She also comments on the next frontier in swarm robotics and the different research areas that are needed to make progress.

This interview was recorded in 2016.

One-shot imitation from watching videos

By Tianhe Yu and Chelsea Finn

Learning a new skill by observing another individual, the ability to imitate, is a key part of intelligence in humans and animals. Can we enable a robot to do the same, learning to manipulate a new object by simply watching a human manipulate that object, as in the video below?


The robot learns to place the peach into the red bowl after watching the human do so.


Personalized “deep learning” equips robots for autism therapy

An example of a therapy session augmented with the humanoid robot NAO [SoftBank Robotics], which was used in the EngageMe study. Tracking of limbs/faces was performed using the CMU Perceptual Computing Lab’s OpenPose utility.
Image: MIT Media Lab

By Becky Ham

Children with autism spectrum conditions often have trouble recognizing the emotional states of people around them — distinguishing a happy face from a fearful face, for instance. To remedy this, some therapists use a kid-friendly robot to demonstrate those emotions and to engage the children in imitating the emotions and responding to them in appropriate ways.

This type of therapy works best, however, if the robot can smoothly interpret the child’s own behavior — whether he or she is interested and excited or paying attention — during the therapy. Researchers at the MIT Media Lab have now developed a type of personalized machine learning that helps robots estimate the engagement and interest of each child during these interactions, using data that are unique to that child.

Armed with this personalized “deep learning” network, the robots’ perception of the children’s responses agreed with assessments by human experts, with a correlation score of 60 percent, the scientists report June 27 in Science Robotics.

It can be challenging for human observers to reach high levels of agreement about a child’s engagement and behavior. Their correlation scores are usually between 50 and 55 percent. Rudovic and his colleagues suggest that robots that are trained on human observations, as in this study, could someday provide more consistent estimates of these behaviors.
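As a rough illustration of what such a correlation score measures (the article does not specify the exact agreement statistic, and the ratings below are made up), a Pearson correlation between two continuous engagement ratings can be computed like this:

# Illustrative only: made-up continuous engagement ratings (0 = disengaged,
# 1 = fully engaged) from a human coder and from the robot's estimate,
# one value per time window.
import numpy as np

human_rating   = np.array([0.2, 0.4, 0.5, 0.7, 0.6, 0.3, 0.8, 0.9, 0.5, 0.4])
robot_estimate = np.array([0.4, 0.3, 0.6, 0.6, 0.4, 0.5, 0.7, 0.7, 0.7, 0.3])

# Pearson correlation: 1.0 is perfect linear agreement, 0.0 is none.
# A "correlation score of 60 percent" corresponds to r = 0.6 on this scale.
r = np.corrcoef(human_rating, robot_estimate)[0, 1]
print(f"correlation between the two ratings: r = {r:.2f}")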

“The long-term goal is not to create robots that will replace human therapists, but to augment them with key information that the therapists can use to personalize the therapy content and also make more engaging and naturalistic interactions between the robots and children with autism,” explains Oggi Rudovic, a postdoc at the Media Lab and first author of the study.

Rosalind Picard, a co-author on the paper and professor at MIT who leads research in affective computing, says that personalization is especially important in autism therapy: A famous adage is, “If you have met one person with autism, you have met one person with autism.”

“The challenge of creating machine learning and AI [artificial intelligence] that works in autism is particularly vexing, because the usual AI methods require a lot of data that are similar for each category that is learned. In autism where heterogeneity reigns, the normal AI approaches fail,” says Picard. Rudovic, Picard, and their teammates have also been using personalized deep learning in other areas, finding that it improves results for pain monitoring and for forecasting Alzheimer’s disease progression.  

Meeting NAO

Robot-assisted therapy for autism often works something like this: A human therapist shows a child photos or flash cards of different faces meant to represent different emotions, to teach them how to recognize expressions of fear, sadness, or joy. The therapist then programs the robot to show these same emotions to the child, and observes the child as she or he engages with the robot. The child’s behavior provides valuable feedback that the robot and therapist need to go forward with the lesson.

The researchers used SoftBank Robotics’ NAO humanoid robots in this study. Almost 2 feet tall and resembling an armored superhero or a droid, NAO conveys different emotions by changing the color of its eyes, the motion of its limbs, and the tone of its voice.

The 35 children with autism who participated in this study, 17 from Japan and 18 from Serbia, ranged in age from 3 to 13. They reacted in various ways to the robots during their 35-minute sessions, from looking bored and sleepy in some cases to jumping around the room with excitement, clapping their hands, and laughing or touching the robot.

Most of the children in the study reacted to the robot “not just as a toy but related to NAO respectfully as if it was a real person,” especially during storytelling, where the therapists asked how NAO would feel if the children took the robot for an ice cream treat, according to Rudovic.

One 4-year-old girl hid behind her mother while participating in the session but became much more open to the robot and ended up laughing by the end of the therapy. The sister of one of the Serbian children gave NAO a hug and said “Robot, I love you!” at the end of a session, saying she was happy to see how much her brother liked playing with the robot.

“Therapists say that engaging the child for even a few seconds can be a big challenge for them, and robots attract the attention of the child,” says Rudovic, explaining why robots have been useful in this type of therapy. “Also, humans change their expressions in many different ways, but the robots always do it in the same way, and this is less frustrating for the child because the child learns in a very structured way how the expressions will be shown.”

Personalized machine learning

The MIT research team realized that a kind of machine learning called deep learning would be useful for the therapy robots to have, to perceive the children’s behavior more naturally. A deep-learning system uses multiple, hierarchical layers of data processing to improve its performance on a task, with each successive layer amounting to a slightly more abstract representation of the original raw data.

Although the concept of deep learning has been around since the 1980s, says Rudovic, it’s only recently that there has been enough computing power to implement this kind of artificial intelligence. Deep learning has been used in automatic speech and object-recognition programs, making it well-suited for a problem such as making sense of the multiple features of the face, body, and voice that go into understanding a more abstract concept such as a child’s engagement.

“In the case of facial expressions, for instance, what parts of the face are the most important for estimation of engagement?” Rudovic says. “Deep learning allows the robot to directly extract the most important information from that data without the need for humans to manually craft those features.”
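As a loose sketch of that layered idea (this is not the study’s actual architecture, and the layer sizes are invented), a small network that maps a vector of low-level face/body/voice features to a single engagement score might look like this in PyTorch:

# Minimal sketch, not the study's model: each layer turns the previous layer's
# output into a slightly more abstract representation, ending in one engagement score.
import torch
import torch.nn as nn

engagement_net = nn.Sequential(
    nn.Linear(128, 64),  # raw per-frame features -> mid-level representation
    nn.ReLU(),
    nn.Linear(64, 32),   # mid-level -> more abstract representation
    nn.ReLU(),
    nn.Linear(32, 1),    # abstract representation -> engagement estimate
    nn.Sigmoid(),        # squash the estimate into [0, 1]
)

frame_features = torch.randn(1, 128)           # placeholder for one frame's features
print(engagement_net(frame_features).item())   # a single engagement score in [0, 1]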

For the therapy robots, Rudovic and his colleagues took the idea of deep learning one step further and built a personalized framework that could learn from data collected on each individual child. The researchers captured video of each child’s facial expressions, head and body movements, poses and gestures, audio recordings and data on heart rate, body temperature, and skin sweat response from a monitor on the child’s wrist.

The robots’ personalized deep learning networks were built from layers of these video, audio, and physiological data, information about the child’s autism diagnosis and abilities, their culture and their gender. The researchers then compared their estimates of the children’s behavior with estimates from five human experts, who coded the children’s video and audio recordings on a continuous scale to determine how pleased or upset, how interested, and how engaged the child seemed during the session.

Trained on these personalized data coded by the humans, and tested on data not used in training or tuning the models, the networks significantly improved the robot’s automatic estimation of the child’s behavior for most of the children in the study, beyond what would be estimated if the network combined all the children’s data in a “one-size-fits-all” approach, the researchers found.
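The paper’s precise personalization scheme is not reproduced here, but the general idea of “personalized versus one-size-fits-all” can be sketched as a shared model that is copied and then fine-tuned on each child’s own human-coded data; everything below, including fusing the modalities into a single feature vector, is a simplifying assumption:

# Sketch of the personalization idea only, not the study's implementation:
# copy a model trained on all children, then fine-tune the copy on one child's data.
import copy
import torch
import torch.nn as nn

def fine_tune_for_child(shared_model: nn.Module,
                        child_features: torch.Tensor,  # [n_samples, n_features] fused multimodal data
                        child_labels: torch.Tensor,    # [n_samples, 1] human-coded engagement
                        steps: int = 200) -> nn.Module:
    personalized = copy.deepcopy(shared_model)         # leave the shared model untouched
    optimizer = torch.optim.Adam(personalized.parameters(), lr=1e-4)
    loss_fn = nn.MSELoss()
    for _ in range(steps):
        optimizer.zero_grad()
        loss = loss_fn(personalized(child_features), child_labels)
        loss.backward()
        optimizer.step()
    return personalized

# Usage with made-up data for one child (128 features per sample, as in the sketch above):
# personalized_net = fine_tune_for_child(engagement_net, torch.randn(500, 128), torch.rand(500, 1))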

Rudovic and colleagues were also able to probe how the deep learning network made its estimations, which uncovered some interesting cultural differences between the children. “For instance, children from Japan showed more body movements during episodes of high engagement, while in Serbs large body movements were associated with disengagement episodes,” Rudovic says.

The study was funded by grants from the Japanese Ministry of Education, Culture, Sports, Science and Technology; Chubu University; and the European Union’s HORIZON 2020 grant (EngageME).

Takeaways from Automatica 2018

Automatica 2018 is one of Europe’s largest robotics and automation-related trade shows and a destination for global roboticists and business executives to view new products. It was held June 19-22 in Munich and had 890 exhibitors and 46,000 visitors (up 7% from the previous show).

The International Symposium on Robotics (ISR) was held in conjunction with Automatica with a series of robotics-related keynotes, poster presentations, talks and workshops.

The ISR also had an awards dinner in Munich on June 20th at the Hofbräuhaus, a touristy beer hall and garden with big steins of beer, plates full of Bavarian food and oompah bands on each floor.

Awards presented:

  • The Joseph Engelberger Award was given to International Federation of Robotics (IFR) General Secretary Gudrun Litzenberger and also to Universal Robots CTO and co-founder Esben Østergaard.
  • The IFR Innovation and Entrepreneurship in Robotics and Automation (IERA) Award went to three recipients for their unique robotic creations:
    1. Lely Holding, the Dutch manufacturer of milking robots, for their Discovery 120 Manure Collector (pooper scooper)
    2. KUKA Robotics, for their new LBR Med medical robot, a lightweight robot certified for integration into medical products
    3. Perception Robotics, for their Gecko Gripper, which uses a biomimetic grasping technology inspired by geckos

IFR CEO Roundtable and President’s Message

From left: Stefan Lampa, CEO, KUKA; Prof Dr Bruno Siciliano, Dir ICAROS and PRISMALab, U of Naples Federico II; Ken Fouhy, Moderator, Editor in Chief, Innovations & Trend Research, VDI News; Dr. Kiyonori Inaba, Exec Dir, Robot Business Division, FANUC; Markus Kueckelhaus, VP Innovations & Trend Research, DHL; and Per Vegard Nerseth, Group Senior VP, ABB.

In addition to the CEO roundtable discussion, IFR President Junji Tsuda previewed the statistics that will appear in this year’s IFR Industrial Robots Annual Report covering 2017 sales data. He reported that 2017 turnover was about $50 billion, that 381,000 robots were sold, a 29% increase over 2016, and that China, which deployed 138,000 robots, was the main driver of 2017’s growth with a 58% increase over 2016 (the US rose only 6% by comparison).

Tsuda attributed the 2017 results – and a 15% CAGR forecast for the next few years (25% for service robots) – to the growing simplification (ease of use) of training robots; collaborative robots; progress in overall digitalization; and AI enabling greater vision and perception.
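As a back-of-the-envelope check on those figures (my arithmetic, assuming the 29% and 15% rates apply to annual unit sales):

# Back-of-the-envelope arithmetic on the reported IFR figures; illustrative only.
units_2017 = 381_000
units_2016 = units_2017 / 1.29                 # a 29% rise implies roughly 295,000 units sold in 2016
print(f"implied 2016 unit sales: {units_2016:,.0f}")

cagr = 0.15                                    # forecast growth rate for the next few years
projection = units_2017
for year in (2018, 2019, 2020):
    projection *= 1 + cagr
    print(f"projected {year} unit sales: {projection:,.0f}")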

During the CEO Roundtable discussion, panel moderator Ken Fouhy asked where each CEO thought we (and his company) would be five years from now.

  • Kuka’s CEO said we would see a big move toward mobile manipulators doing multiple tasks
  • ABB’s Sr VP said that programming robots would become as easy and intuitive as using today’s iPhones
  • Fanuc’s ED said that future mobile robots wouldn’t have to wait for work as current robots often do because they would become more flexible
  • DHL’s VP forecast that perception would have access to more physics and reality than today
  • The U of Naples professor said that the tide has turned and that more STEM kids are coming into the realm of automation and robotics

In relation to jobs, all panel members remarked that the next 30 years would see dramatic changes, with new jobs not yet defined emerging as present labor retires and skilled-labor shortages force governments to invest in retraining.

In relation to AI, panel members said that major impact would be felt in the following ways:

  • In logistics, particularly in the combined activities of mobility and grasping
  • In the increased use of sensors which enable new efficiencies particularly in QC and anomaly detection
  • In clean room improvements
  • And in in-line improvements, e.g., spray painting

The panel members also outlined current challenges for AI:

  • Navigation perception for yard management and last-mile delivery
  • Selecting the best grasping method for quick manipulation
  • Improving human-machine interaction via speech and general assistance

Takeaways

I was at Automatica from start to finish, seeing all aspects of the show, attending a few ISR keynotes, and holding interviews and talks with some very informative industry executives. Here are some of my takeaways from this year’s Automatica and those conversations:

  • Co-bots were touted throughout the show
    • Universal Robots, the originator of the co-bot, had a mammoth booth which was always jammed with visitors
    • New vendors displayed new co-bots – often very stylish – but none with the mechanical prowess of the Danish-manufactured UR robots
    • UR robots were used in many, many non-UR booths all over Automatica to demonstrate other vendors’ products or services, indicating UR’s acceptance within the industry
    • ABB and Kawasaki announced a common interface for their two-armed co-bots, with the hope that other companies will join and use the interface and that the group will soon add single-arm robots to the software. The move highlights the problem that each robot maker has its own proprietary training method
  • Bin-picking, which had as much presence and hype 10 years ago as co-bots had 5 years ago and as IoT and AI had this year, now seems blasé because the technology has finally become widely deployed and almost matches the original hype
  • AI and Internet-of-Things were the buzzwords for this show and vendors that offered platforms to stream, store, handle, combine, process, analyze and make predictions were plentiful
  • Better programming solutions for co-bots and even industrial robots are appearing, but still better ones are needed
  • 24/7 robot monitoring is gaining favor, but access to company systems and equipment is still mostly withheld for security reasons
  • Many special-purpose exoskeletons were shown that help factory workers do their jobs
  • The Danish robotics cluster is every bit as good, comprehensive, supportive and successful as clusters in Silicon Valley, Boston/Cambridge and Pittsburgh
  • Vision and distancing systems – plus standards for same – are enabling cheaper automation
  • Grippers are improving (but see below for discussion of end-of-arm devices)
  • Promises (hype) about digitalization, data and AI, IoT, and machine (deep) learning were everywhere

End-of-arm devices

Plea from Dr. Michael Zürn, Daimler AG

Dr. Michael Zürn of Daimler AG gave a talk about Mercedes-Benz’s use of robotics. He said that they have 50 models and at least 500 different grippers, yet humans with two hands could do every one of those tasks, albeit with superhuman strength in some cases. He welcomed the years of testing of the two-armed YuMi robot because it is the closest to what they need, although still nowhere near what a two-handed person can do. Hence his plea to gripper makers: offer two hands in a flexible device that performs like a two-handed person and is intuitive in how it learns to do its various jobs.

OnRobot’s goals

Enrico Krog Iversen was the CEO of Universal Robots from 2008 until 2016, when it was sold to Teradyne. Since then he has invested in and cultivated three companies (OnRobot, Perception Robotics and OptoForce), which he merged to form OnRobot A/S; Iversen is CEO of the new entity. With this foundation of sensors, a growing business in grippers that integrate with UR and MiR systems, and a promise to acquire a vision and perception component, Iversen foresees building an entity where everything that goes on a robot can be acquired from his company, all with a single intuitive user interface. This latter aspect, a single intuitive interface for everything, is a very convenient feature that users request but can’t often find.

Fraunhofer’s Hägele’s thesis

Martin Hägele, Head of the Robotics and Assistive Systems Department at Fraunhofer IPA in Stuttgart, argued that a transformation is coming in which end-of-arm devices will increasingly include advanced sensing, more actuation, and user interaction. It seems logical: the end of the robot arm is where all the action is — the sensors, cameras, handling devices and the item to be processed. Times have changed from when robots were blind and fed by expensive positioning systems.

Moves by market-leader Schunk

“We are convinced that industrial gripping will change radically in the coming years,” said Schunk CEO Henrik Schunk. “Smart grippers will interact with the user and their environment. They will continuously capture and process data and independently develop the gripping strategy in complex and changing environments and do so faster and more flexibly than man ever could.”

“As part of our digitalization initiative, we have set ourselves the target of allowing systems engineers and integrators to simulate entire assembly systems in three-dimensional spaces and map the entire engineering process from the design through to the mechanics, electrics and software right up to virtual commissioning in digitalized form, all in a single system. Even experienced designers are amazed at the benefits and the efficiency effects afforded by engineering with Mechatronics Concept Designer,” said Henrik Schunk, referring to the company’s OEM partnership with Siemens PLM Software, the provider of the simulation software.

Internet-of-Things

Microsoft CEO Satya Nadella said: “The world is in a massive transformation which can be seen as an intelligent cloud and an intelligent edge. The computing fabric is getting more distributed and more ubiquitous. Micro-controllers are appearing in everything from refrigerators to drills – every factory is going to have millions of sensors – thus computing is becoming ubiquitous and that means data is getting generated in large amounts. And once you have that, you use AI to reason over that data to give yourself predictive power – analytical power – power to automate things.”

Certainly the first or second thing salespeople talked about at Automatica was AI, IoT and Industry 4.0. “It’s all coming together in the next few years,” they said. But they didn’t say whether businesses would open their systems to the cloud, stream data to somebody else’s processor, connect to an offsite analytics platform, or do it all onboard and post-process the analytics.

Although the strategic goals for implementing IoT differ country by country (as an accompanying Forbes chart showed), there’s no doubt that businesses plan to spend on adding IoT; in the spending charts, discrete manufacturing, transportation and logistics stood out as the biggest categories.

Silly Stuff

As at any show, there were pretty girls flaunting products they knew nothing about, giveaways of snacks, food, coffees and gimmicks, and loads of talk about deep learning and AI for products not yet available for viewing or fully understood by the speaker.

Kuka, in a booth far, far away from their main booth (where they were demonstrating their industrial, mobile and collaborative robotics product line, including their award-winning LBR Med robot), was showing a 5′ high concept humanoid robot with a big screen and a stylish 18″ silver cone behind the screen. It looked like an airport or store guide. When I asked what it did, I was told that the cone was the woofer for the sound system and that the robot didn’t do anything – it was one of many concept devices they were reviewing.

Nevertheless, Kuka had a 4′ x 4′ brochure which didn’t show or even refer to any of the concept robots they showed. Instead it was all hype about what it might do sometime in the future: purify air, be a gaming console, have an “underhead projector”, HiFi speaker, camera, coffee and wellness head and “provide robotic intelligence that will enrich our daily lives.”

Front and back of 4 foot by 4 foot brochure (122cm x 122cm)

 

Don’t watch TV while safety driving

The Tempe police released a detailed report on their investigation of Uber’s fatality. I am on the road and have not had time to read it, but the big point, reported widely in the press, was that the safety driver was, according to logs from her phone accounts, watching the show “The Voice” via Hulu on her phone shortly before the incident.

This is at odds with earlier statements in the NTSB report, that she had been looking at the status console of the Uber self-drive system, and had not been using her phones. The report further said that Uber asked its safety drivers to observe the console and make notes on things seen on it. It appears the safety driver lied, and may have tried to implicate Uber in doing so.

Obviously attempting to watch a TV show while you are monitoring a car is unacceptable, presumably negligent behaviour. More interesting is what this means for Uber and other companies.

The first question — did Uber still instruct safety drivers to look at the monitors and make note of problems? That is a normal instruction for a software operator when there are two crew in the car, as most companies have. At first, we presumed that perhaps Uber had forgotten to alter this instruction when it went from 2 crew to 1. Perhaps the safety driver just used that as an excuse for looking down, since she felt she could not admit to watching TV. (She probably didn’t realize police would get logs from Hulu.)

If Uber still did that, it’s an error on their part, but now seems to play no role in this incident. That’s positive legal news for Uber.

It is true that if you had two people in the car, it’s highly unlikely the safety driver behind the wheel would be watching a TV show. It’s also true that if Uber had attention monitoring on the safety driver, it also would have made it harder to pull a stunt like that. Not all teams have attention monitoring, though after this incident I believe that most, including Uber, are putting it in. It might be argued that if Uber did require drivers to check the monitors, this might have somehow encouraged the safety driver’s negligent decision to watch TV, but that’s a stretch. I think any reasonable person is going to know this is not a job where you do that.

There may be some question regarding whether a person with such bad judgement should have been cleared to be a safety driver. Uber may face some scrutiny for that bad choice. They may also face scrutiny if their training and job-evaluation process for the safety drivers was clearly negligent. On the other hand, human employees are human, and if there’s not a pattern, it is less likely to create legal trouble for Uber.

From the standpoint of the Robocar industry, it makes the incident no less tragic, but less informative about robocar accidents. Accidents are caused every day because people allow themselves ridiculously unsafe distractions on their phones. This one is still special, but less so than we thought. While the issue of whether today’s limited systems (like the Tesla) generate too much driver complacency is still there, this was somebody being paid not to be complacent. The lessons we already knew — have 2 drivers, have driver attention monitoring — are still the same.

“Disabled the emergency braking.”

A number of press stories on the event have said that Uber “disabled” the emergency braking, and this also played a role in the fatality. That’s partly true but is very misleading vocabulary. The reality appears to be that Uber doesn’t have a working emergency braking capability in their system, and as such it is not enabled. That’s different from the idea that they have one and disabled it, which sounds much more like an ill act.

Uber’s system, like all systems, sometimes decides suddenly that there is an obstacle in front of the car for which it should brake when that obstacle is not really there. This is called a “false positive” or “ghost.” When this happens well in advance, it’s OK to have the car apply the brakes in a modest way, and then release them when it becomes clear it’s a ghost. However, if the ghost is so close that it would require full-hard braking, this creates a problem. If a car frequently does full-hard braking for ghosts, it is not only jarring, it can be dangerous, both for occupants of the car, and for cars following a little too closely behind — which sadly is the reality of driving.

As such, an emergency braking decision algorithm which hard-brakes for ghosts is not a working system. You can’t turn it on safely, and so you don’t. Which is different from disabling it. While the Uber software did decide 2 seconds out that there was an obstacle that required a hard brake, it decides that out of the blue too often to be trusted with that decision. The decision is left to the safety driver — who should not be watching TV.

That does not mean Uber could not have done this much better. The car should still have done moderate braking, which would reduce the severity of any real accident and also wake up any inattentive safety driver. An audible alert should also have been present. Earlier, I speculated that if the driver was looking at the console, this sort of false positive incident would very likely have been there, so it was odd she did not see it, but it turns out she was not looking there.
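To make that trade-off concrete, here is a purely hypothetical decision sketch (not Uber’s software, and the threshold is invented) reflecting the argument above: a detector that frequently hallucinates obstacles should not be trusted with full-hard braking on its own, but it can still brake moderately and sound an alert instead of doing nothing.

# Hypothetical arbitration policy, not Uber's system; thresholds are invented.
def braking_response(hard_brake_needed: bool, detector_false_alarm_rate: float) -> str:
    """Pick an action for a detected obstacle under this illustrative policy."""
    if not hard_brake_needed:
        # Early detection: modest braking costs little and can be released
        # if the "obstacle" turns out to be a ghost.
        return "moderate_brake"
    if detector_false_alarm_rate < 0.001:
        # Only a detector that almost never cries wolf could be trusted to
        # slam on the brakes by itself.
        return "full_hard_brake"
    # Late detection from an unreliable detector: slow the car and wake the
    # safety driver rather than doing nothing at all.
    return "moderate_brake_and_audible_alert"

print(braking_response(hard_brake_needed=True, detector_false_alarm_rate=0.05))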

The Volvo also has an emergency braking system. That system was indeed disabled — it is normal for any ADAS functions built into the cars to be disabled when they are used as prototype robocars. You are building something better, and you can’t have them competing. The Volvo system does not brake too often for ghosts, but that’s because it also fails to brake for real obstacles far too often to serve a robocar’s needs. Any ADAS system will be tuned that way because the driver is still responsible for driving. Teslas have notoriously been plowing into road barriers and trucks due to this ADAS style of tuning. It’s why a real robocar is much harder than the Tesla autopilot.

Other news

I’ve been on the road, so I have not reported on it, but the general news has been quite impressive. In particular, Waymo announced an order of up to 62,000 Chrysler Pacifica minivans of the type they use in their Phoenix-area tests. They are going beyond a pilot project to real deployment, and soon. Nobody else is close. This will add to around 20,000 Jaguar electric vehicles presumably aimed at a more luxurious ride — though I actually think the minivan, with its big doors, large interior space and high ride, may well be more pleasant for most trips. The electric Jaguar will be more efficient.

Growing Confidence in Robotics Led to US$2.7 Billion in VC Funding in 2017, With Appetite for More

In analyzing the geography of the 152 companies that received investment, there were striking differences from 2016, even though the number of individual investments was largely similar, with the United States retaining over half of them.

Machine learning will redesign, not replace, work

The conversation around artificial intelligence and automation seems dominated by either doomsayers who fear robots will supplant all humans in the workforce, or optimists who think there's nothing new under the sun. But MIT Sloan professor Erik Brynjolfsson and his colleagues say that debate needs to take a different tone.