
Takeaways from Automatica 2018

Automatica 2018 is one of Europe’s largest robotics and automation-related trade shows and a destination for global roboticists and business executives to view new products. It was held June 19-22 in Munich and had 890 exhibitors and 46,000 visitors (up 7% from the previous show).

The International Symposium on Robotics (ISR) was held in conjunction with Automatica with a series of robotics-related keynotes, poster presentations, talks and workshops.

The ISR also had an awards dinner in Munich on June 20th at the Hofbräuhaus, a touristy beer hall and garden with big steins of beer, plates full of Bavarian food and oompah bands on each floor.

Awards presented:

  • The Joseph Engelberger Award went to International Federation of Robotics (IFR) General Secretary Gudrun Litzenberger and to Universal Robots CTO and co-founder Esben Østergaard.
  • The IFR Innovation and Entrepreneurship in Robotics and Automation (IERA) Award went to three recipients for their unique robotic creations:
    1. Lely Holding, the Dutch manufacturer of milking robots, for its Discovery 120 Manure Collector (a pooper scooper)
    2. KUKA Robotics, for its new LBR Med, a lightweight robot certified for integration into medical products
    3. Perception Robotics, for its Gecko Gripper, which uses a biomimetic grasping technology inspired by gecko feet

IFR CEO Roundtable and President’s Message

From left: Stefan Lampa, CEO, KUKA; Prof Dr Bruno Siciliano, Dir ICAROS and PRISMALab, U of Naples Federico II; Ken Fouhy, Moderator, Editor in Chief, Innovations & Trend Research, VDI News; Dr. Kiyonori Inaba, Exec Dir, Robot Business Division, FANUC; Markus Kueckelhaus, VP Innovations & Trend Research, DHL; and Per Vegard Nerseth, Group Senior VP, ABB.

In addition to the CEO roundtable discussion, IFR President Junji Tsuda previewed the statistics that will appear in this year’s IFR Industrial Robots Annual Report covering 2017 sales data. He reported that 2017 turnover was about $50 billion, that 381,000 robots were sold, a 29% increase over 2016, and that China, which deployed 138,000 robots, was the main driver of 2017’s growth with a 58% increase over 2016 (the US rose only 6% by comparison).

Tsuda attributed the 2017 results – and a 15% CAGR forecast for the next few years (25% for service robots) – to growing simplification (ease of use) in training robots; collaborative robots; progress in overall digitalization; and AI enabling better vision and perception.
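As a quick illustration of what that forecast implies, a CAGR compounds multiplicatively year over year. The sketch below (my own illustrative projection, not figures from the IFR report) extends the 381,000-unit 2017 figure at the forecast 15% rate:

```python
def project_units(base_units: int, cagr: float, years: int) -> int:
    """Project annual unit sales forward at a constant compound annual growth rate."""
    return round(base_units * (1 + cagr) ** years)

# 381,000 industrial robots sold in 2017, compounded at the forecast 15% CAGR
# (hypothetical projection for illustration only):
for year in range(1, 4):
    print(2017 + year, project_units(381_000, 0.15, year))
```

At that rate, annual sales would pass half a million units within three years.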

During the CEO Roundtable discussion, panel moderator Ken Fouhy asked where each CEO thought the industry – and his own company – would be five years from now.

  • KUKA’s CEO said we would see a big move toward mobile manipulators doing multiple tasks
  • ABB’s Sr VP said that programming robots would become as easy and intuitive as using today’s iPhones
  • FANUC’s ED said that future mobile robots wouldn’t have to wait for work, as current robots often do, because they would be more flexible
  • DHL’s VP forecast that perception systems would capture far more of physical reality than they do today
  • The U of Naples professor said that the tide has turned and that more STEM students are entering automation and robotics

In relation to jobs, all panel members remarked that the next 30 years would see dramatic change: new jobs not yet defined will emerge as present labor retires and skilled-labor shortages force governments to invest in retraining.

In relation to AI, panel members said that major impact would be felt in the following ways:

  • In logistics, particularly in the combined activities of mobility and grasping
  • In the increased use of sensors which enable new efficiencies particularly in QC and anomaly detection
  • In clean room improvements
  • And in in-line process improvements, e.g., spray painting

The panel members also outlined current challenges for AI:

  • Navigation perception for yard management and last-mile delivery
  • Selecting the best grasping method for quick manipulation
  • Improving human-machine interaction via speech and general assistance

Takeaways

I was at Automatica from start to finish, seeing all aspects of the show, attending a few ISR keynotes, and interviewing some very informative industry executives. Here are some of my takeaways from this year’s Automatica and those conversations:

  • Co-bots were touted throughout the show
    • Universal Robots, the originator of the co-bot, had a mammoth booth which was always jammed with visitors
    • New vendors displayed new co-bots – often very stylish – but none with the mechanical prowess of the Danish-manufactured UR robots
    • UR robots were used in many, many non-UR booths all over Automatica to demonstrate their product or service thereby indicating UR’s acceptance within the industry
    • ABB and Kawasaki announced a common interface for their two-armed co-bots, hoping that other companies will join and adopt it, and that single-arm robots will soon be added to the software. The move highlights a persistent problem in training robots: each vendor currently has its own proprietary training method
  • Bin-picking, which had as much presence and hype 10 years ago as co-bots had 5 years ago and IoT and AI had this year, is blasé now because the technology has finally become widely deployed and almost matches the original hype
  • AI and Internet-of-Things were the buzzwords for this show and vendors that offered platforms to stream, store, handle, combine, process, analyze and make predictions were plentiful
  • Better programming solutions for co-bots and even industrial robots are appearing, but better-still are needed
  • 24/7 robot monitoring is gaining favor, but access to company systems and equipment is still mostly withheld for security reasons
  • Many special-purpose exoskeletons were shown to help factory workers do their jobs
  • The Danish robotics cluster is every bit as good, comprehensive, supportive and successful as clusters in Silicon Valley, Boston/Cambridge and Pittsburgh
  • Vision and distancing systems – plus standards for same – are enabling cheaper automation
  • Grippers are improving (but see below for discussion of end-of-arm devices)
  • And promises (hype) about digitalization, data and AI, IoT, and machine (deep) learning were everywhere

End-of-arm devices

Plea from Dr. Michael Zürn, Daimler AG

Dr. Michael Zürn of Daimler AG gave a talk about Mercedes-Benz’s use of robotics. He said the company has 50 models and at least 500 different grippers, yet humans with two hands could perform every one of those tasks, albeit with superhuman strength in some cases. He welcomed the years of testing of ABB’s two-armed YuMi robots because they are the closest to what Daimler needs, yet still nowhere near what a two-handed person can do. Hence his plea to gripper makers: offer two hands in a flexible device that performs like a two-handed person and learns its various jobs intuitively.

OnRobot’s goals

Enrico Krog Iversen was the CEO of Universal Robots from 2008 until 2016, when it was sold to Teradyne. Since then he has invested in and cultivated three companies – OnRobot, Perception Robotics and OptoForce – which he merged to become OnRobot A/S, with himself as CEO. With this foundation of sensors, a growing business in grippers that integrate with UR and MiR systems, and a promise to acquire a vision and perception component, Iversen foresees building a company from which everything that goes on a robot arm can be acquired, all with a single intuitive user interface. That latter aspect – a single intuitive interface for everything – is a feature users request but can’t often find.

Fraunhofer’s Hägele’s thesis

Martin Hägele, Head of the Robotics and Assistive Systems Department at Fraunhofer IPA in Stuttgart, argued that a transformation is coming in which end-of-arm devices will increasingly include advanced sensing, more actuation, and user interaction. It seems logical: times have changed from when blind robots were fed by expensive positioning systems. Today the end of the arm – the sensors, cameras, handling devices and the item being processed – is where all the action is.

Moves by market-leader Schunk

“We are convinced that industrial gripping will change radically in the coming years,” said Schunk CEO Henrik Schunk. “Smart grippers will interact with the user and their environment. They will continuously capture and process data and independently develop the gripping strategy in complex and changing environments and do so faster and more flexibly than man ever could.”

“As part of our digitalization initiative, we have set ourselves the target of allowing systems engineers and integrators to simulate entire assembly systems in three-dimensional spaces and map the entire engineering process from the design through to the mechanics, electrics and software right up to virtual commissioning in digitalized form, all in a single system. Even experienced designers are amazed at the benefits and the efficiency effects afforded by engineering with Mechatronics Concept Designer,” said Schunk, referring to the company’s OEM partnership with Siemens PLM Software, provider of the simulation software.

Internet-of-Things

Microsoft CEO Satya Nadella said: “The world is in a massive transformation which can be seen as an intelligent cloud and an intelligent edge. The computing fabric is getting more distributed and more ubiquitous. Micro-controllers are appearing in everything from refrigerators to drills – every factory is going to have millions of sensors – thus computing is becoming ubiquitous and that means data is getting generated in large amounts. And once you have that, you use AI to reason over that data to give yourself predictive power – analytical power – power to automate things.”

Certainly the first or second thing salespeople talked about at Automatica was AI, IoT and Industry 4.0. “It’s all coming together in the next few years,” they said. But they didn’t say whether businesses would open their systems to the cloud, stream data to somebody else’s processor, connect to an offsite analytics platform, or do it all onboard and post-process the analytics.

Although the strategic goals for implementing IoT differ country by country (as can be seen in the interesting chart above from Forbes), there’s no doubt that businesses plan to spend on adding IoT. This can be seen in the black and blue chart on the right, where the three big vertical bars on the left denote Discrete Manufacturing, Transportation and Logistics.

Silly Stuff

As at any show, there were pretty girls flaunting products they knew nothing about; giveaways of snacks, food, coffee and gimmicks; and loads of talk about deep learning and AI for products not yet available for viewing or fully understood by the speaker.

KUKA, in a booth far, far away from their main booth (where they were demonstrating their industrial, mobile and collaborative robotics product line, including their award-winning LBR Med robot), was showing a 5′-high concept humanoid robot with a big screen and a stylish 18″ silver cone behind the screen. It looked like an airport or store guide. When I asked what it did, I was told that the cone was the woofer for the sound system and the robot didn’t do anything – it was one of many concept devices they were reviewing.

Nevertheless, KUKA had a 4′ x 4′ brochure which didn’t show or even refer to any of the concept robots on display. Instead it was all hype about what such a robot might do sometime in the future: purify air, be a gaming console, have an “underhead projector”, HiFi speaker, camera, coffee and wellness head, and “provide robotic intelligence that will enrich our daily lives.”

Front and back of 4 foot by 4 foot brochure (122cm x 122cm)

 

Don’t watch TV while safety driving

The Tempe police released a detailed report on their investigation of Uber’s fatality. I am on the road and have not had time to read it, but the big point, reported widely in the press, was that the safety driver was, according to logs from her phone accounts, watching the show “The Voice” via Hulu on her phone shortly before the incident.

This is at odds with earlier statements in the NTSB report, that she had been looking at the status console of the Uber self-drive system, and had not been using her phones. The report further said that Uber asked its safety drivers to observe the console and make notes on things seen on it. It appears the safety driver lied, and may have tried to implicate Uber in doing so.

Obviously attempting to watch a TV show while you are monitoring a car is unacceptable, presumably negligent behaviour. More interesting is what this means for Uber and other companies.

The first question — did Uber still instruct safety drivers to look at the monitors and make note of problems? That is a normal instruction for a software operator when there are two crew in the car, as most companies have. At first, we presumed that perhaps Uber had forgotten to alter this instruction when it went from two crew to one. Perhaps the safety driver just used that as an excuse for looking down, since she felt she could not admit to watching TV. (She probably didn’t realize police would get logs from Hulu.)

If Uber still did that, it’s an error on their part, but now seems to play no role in this incident. That’s positive legal news for Uber.

It is true that if you had two people in the car, it’s highly unlikely the safety driver behind the wheel would be watching a TV show. It’s also true that if Uber had attention monitoring on the safety driver, it also would have made it harder to pull a stunt like that. Not all teams have attention monitoring, though after this incident I believe that most, including Uber, are putting it in. It might be argued that if Uber did require drivers to check the monitors, this might have somehow encouraged the safety driver’s negligent decision to watch TV, but that’s a stretch. I think any reasonable person is going to know this is not a job where you do that.

There may be some question as to whether a person with such bad judgement should have been cleared to be a safety driver. Uber may face some scrutiny for that choice. They may also face scrutiny if their training and job-evaluation process for the safety drivers was clearly negligent. On the other hand, human employees are human, and if there’s not a pattern, it is less likely to create legal trouble for Uber.

From the standpoint of the Robocar industry, it makes the incident no less tragic, but less informative about robocar accidents. Accidents are caused every day because people allow themselves ridiculously unsafe distractions on their phones. This one is still special, but less so than we thought. While the issue of whether today’s limited systems (like the Tesla) generate too much driver complacency is still there, this was somebody being paid not to be complacent. The lessons we already knew — have 2 drivers, have driver attention monitoring — are still the same.

“Disabled the emergency braking.”

A number of press stories on the event have said that Uber “disabled” the emergency braking, and this also played a role in the fatality. That’s partly true but is very misleading vocabulary. The reality appears to be that Uber doesn’t have a working emergency braking capability in their system, and as such it is not enabled. That’s different from the idea that they have one and disabled it, which sounds much more like an ill act.

Uber’s system, like all systems, sometimes decides suddenly that there is an obstacle in front of the car for which it should brake when that obstacle is not really there. This is called a “false positive” or “ghost.” When this happens well in advance, it’s OK to have the car apply the brakes in a modest way, and then release them when it becomes clear it’s a ghost. However, if the ghost is so close that it would require full-hard braking, this creates a problem. If a car frequently does full-hard braking for ghosts, it is not only jarring, it can be dangerous, both for occupants of the car, and for cars following a little too closely behind — which sadly is the reality of driving.

As such, an emergency braking decision algorithm which hard-brakes for ghosts is not a working system. You can’t turn it on safely, and so you don’t. Which is different from disabling it. While the Uber software did decide 2 seconds out that there was an obstacle requiring a hard brake, it makes that decision out of the blue too often to be trusted with it. The decision is left to the safety driver — who should not be watching TV.

That does not mean Uber could not have done this much better. The car should still have done moderate braking, which would reduce the severity of any real accident and also wake up an inattentive safety driver. An audible alert should also have been present. Earlier, I speculated that if the driver had been looking at the console, this sort of false-positive event would very likely have appeared there, so it was odd she did not see it; but it turns out she was not looking there.
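The trade-off described above can be sketched as a simple decision policy. This is an illustrative toy, not Uber’s actual software; the function names and thresholds are all invented:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    confidence: float        # classifier confidence that the obstacle is real
    time_to_impact_s: float  # seconds until a collision if nothing changes

# Invented thresholds, for illustration only.
HARD_BRAKE_WINDOW_S = 2.0  # inside this window, stopping requires full-hard braking
GHOST_THRESHOLD = 0.95     # confidence required before trusting a sudden detection

def braking_action(d: Detection) -> str:
    """Return a braking decision for a suddenly detected obstacle."""
    if d.time_to_impact_s > HARD_BRAKE_WINDOW_S:
        # Far enough out: moderate braking costs little even if it is a ghost.
        return "moderate_brake"
    if d.confidence >= GHOST_THRESHOLD:
        return "hard_brake"
    # Too close, but not confident enough: a full-hard brake for a ghost is
    # itself dangerous, so brake moderately and alert the safety driver.
    return "alert_driver_and_moderate_brake"
```

The point of the sketch is the middle branch: a system that takes the `hard_brake` path on too many ghosts cannot be trusted with that decision, which is why the responsibility falls back on an attentive human.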

The Volvo also has an emergency braking system. That system was indeed disabled — it is normal for any ADAS functions built into a car to be disabled when it is used as a prototype robocar. You are building something better, and you can’t have the two competing. The Volvo system does not brake too often for ghosts, but only because it also fails to brake for real obstacles far too often to serve a robocar system. Any ADAS system is tuned that way because the driver is still responsible for driving. Teslas have notoriously plowed into road barriers and trucks due to this ADAS style of tuning. It’s why a real robocar is much harder than the Tesla autopilot.

Other news

I’ve been on the road, so I have not reported on it, but the general news has been quite impressive. In particular, Waymo announced an order of 63,000 Chrysler minivans of the type it uses in its Phoenix-area tests. Waymo is going beyond a pilot project to real deployment, and soon; nobody else is close. These will add to around 20,000 Jaguar electric vehicles presumably aimed at a more luxurious ride — though I actually think the minivan, with its big doors, large interior space and high ride, may well be more pleasant for most trips. The electric Jaguar will be more efficient.

Growing Confidence in Robotics Led to US$2.7 Billion in VC Funding in 2017, With Appetite for More

In analyzing the geography of the 152 companies that received investment, there were striking differences from 2016, although the number of individual investments was largely similar, with the United States retaining over half of them.

Machine learning will redesign, not replace, work

The conversation around artificial intelligence and automation seems dominated by either doomsayers who fear robots will supplant all humans in the workforce, or optimists who think there's nothing new under the sun. But MIT Sloan professor Erik Brynjolfsson and his colleagues say that debate needs to take a different tone.

#263: ICRA 2018 Exhibition, with Juxi Leitner, Nicholas Panitz, Ben Wilson and James Brett

In this episode, Audrow Nash interviews Juxi Leitner, a Postdoctoral Research Fellow at QUT; and Nicholas Panitz, Ben Wilson, and James Brett, from CSIRO.

Leitner speaks about the Amazon Picking Challenge, a competition to advance the state of robotic grasping, and his team’s robot, which won the challenge in 2017. The robot is similar in form to a Cartesian 3D printer and uses either a suction cup or a pinch gripper to grab objects. It has a depth camera and uses a digital scale to determine whether an object has been picked up successfully. Leitner discusses what their team did differently from other teams that helped them win the competition.
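The scale-based pick check mentioned above can be sketched as a simple weight-delta test. This is my own illustrative sketch, not the team’s actual code; the function name and tolerance are invented:

```python
def pick_succeeded(bin_weight_before_g: float, bin_weight_after_g: float,
                   expected_item_g: float, tolerance_g: float = 5.0) -> bool:
    """A pick likely succeeded if the bin lost roughly the target item's weight."""
    removed_g = bin_weight_before_g - bin_weight_after_g
    return abs(removed_g - expected_item_g) <= tolerance_g
```

The appeal of this kind of check is that it needs no vision at all: the scale under the bin confirms the grasp even when the gripper occludes the camera.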

Panitz, Wilson, and Brett speak about their hexapod robots. Their hexapods are for several purposes, such as environmental monitoring and remote inspection. They choose to use hexapods because they are statically stable. They discuss the design of their hexapods and how research works at Commonwealth Scientific and Industrial Research Organization, or CSIRO.


A video of the robot Leitner discusses, #Cartman:

 

An example of CSIRO’s hexapod robots for inspection:

 

Links
