Archive 08.07.2018


#264: Bio-inspired Soft Robots for Healthcare, with Yong-Lae Park

In this episode, Marwa Mohammed Alaa Eldean Eldiwiny interviews Yong-Lae Park, Associate Professor at Seoul National University in South Korea, about the bio-inspired design and manufacture of soft robots and microrobots for healthcare. Park’s research goal is to analyze the design and dynamics of biological systems and transform them into robotic and mechatronic systems that benefit human life. Some of his projects include the development of artificial skin sensors, soft muscle actuators, and wearable robots for human rehabilitation.

Yong-Lae Park

Yong-Lae Park is an Associate Professor in the Department of Mechanical and Aerospace Engineering at Seoul National University. Previously, Park was an Assistant Professor in the Robotics Institute and the School of Computer Science at Carnegie Mellon University. Park received his Doctoral and Master’s degrees in Mechanical Engineering from Stanford. He earned a Bachelor’s degree in Manufacturing Systems Engineering from Kansas State University and a Bachelor of Engineering in Industrial Engineering from Korea University.

Links

Illinois’ crop-counting robot earns top recognition at leading robotics conference

Today's crop breeders are trying to boost yields while also preparing crops to withstand severe weather and changing climates. To succeed, they must locate genes for high-yielding, hardy traits in crop plants' DNA. A robot developed by the University of Illinois to find these proverbial needles in the haystack was recognized with the Best Systems Paper Award at Robotics: Science and Systems, the preeminent robotics conference, held last week in Pittsburgh.

Musica Automata

Musica Automata is my new project and upcoming album, containing music written for the biggest robot orchestra in the world: more than sixty robotic acoustic instruments (part of the Logos Foundation) that receive digital MIDI messages containing precise performance information.
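
To make that concrete, here is a minimal sketch of how a score might be streamed to one robot instrument over MIDI, using the Python mido library. The port name, notes and timing are invented; the Logos Foundation robots' actual port names and note mappings will differ.

```python
import time
import mido

# Open the MIDI output a robot instrument listens on. The port name here
# is made up; the Logos robots' real port names and mappings will differ.
port = mido.open_output("Robot Orchestra 1")

# A short phrase: (MIDI note number, velocity, duration in seconds).
phrase = [(60, 90, 0.4), (64, 80, 0.4), (67, 100, 0.8)]

for note, velocity, duration in phrase:
    port.send(mido.Message("note_on", note=note, velocity=velocity))
    time.sleep(duration)  # hold; a real score would use precise scheduling
    port.send(mido.Message("note_off", note=note))

port.close()
```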

We’re aiming to fund the project through Kickstarter, where you can buy the album on vinyl, on CD, or as a digital download. A ticket for an exclusive concert with the robot orchestra is also available to pre-order.

Musica Automata is a multi-sensory experience in which the listener can hear and see where the sounds come from; one can observe the robots and find a precise correlation between the movement of an instrument and the sound perceived. The robot music performance, due to its extreme precision, can often surpass the ability of a human performer and therefore introduce completely new expressive possibilities. Yet alongside the precise digital control, the real performance and acoustic sound of instruments playing in a real acoustic space is preserved. Despite this particular instrumentation, it’s the music that leads the instruments and not the opposite; the emotional impact of the music is still there thanks to the versatility of the robots, which are not limited to mere artificial reproduction. It’s a performance that comes from a human idea, yet devoid of direct human contact with the instrument. This means that the conceived musical idea, once processed and translated into MIDI, is executed by robots without losing its artistic value in any way.

Here’s an audio preview of the music.

Fundings, acquisitions and IPOs, June 2018

Twenty-seven startups raised money in June to the tune of $2.1 billion, another great month for robotics! June also saw ten acquisitions and two IPOs. See below for details.

The top fundings were:

  1. Rockwell (NYSE:ROK) made a $1 billion equity investment in PTC (NASDAQ:PTC), an automation control software provider to government and industry.
  2. Google (NASDAQ:GOOG) is investing $500 million in JD (JingDong) (NASDAQ:JD), the Chinese equivalent to Amazon and China’s 2nd largest e-commerce provider.
  3. Yitu Technology, a Chinese vision systems and AI startup, raised $200 million in a Series C funding.
  4. CMR Surgical, the Cambridge, UK developer of the Versius surgical robotic system, raised $100 million in a Series B funding.

This month’s $2.1 billion in fundings brings the year-to-date total to $7.1 billion!

Fundings

  1. PTC (NASDAQ:PTC), an IoT, Industry 4.0 and control software provider to government and industry, has partnered with Rockwell Automation (NYSE:ROK), the world’s largest company dedicated to industrial automation and information, “to accelerate growth for both companies and enable them to be the partner of choice for customers around the world who want to transform their physical operations with digital technology.” Rockwell is making a $1 billion equity investment in PTC as part of the deal, in which the two agreed to align their respective smart factory technologies and industrial automation platforms.
  2. JD (JingDong) (NASDAQ:JD), the Chinese equivalent to Amazon and China’s 2nd largest e-commerce provider, raised $500 million from Google/Alphabet (NASDAQ:GOOG). Like Amazon, JD is heavy in the automated logistics business and now, with this investment, JD plans to open restaurants staffed by robots starting this August and ramping up to 1,000 by 2020. (The restaurants will serve 40 or fewer dishes with customers ordering and paying by smartphone.) JD is also involved in last mile deliveries and just launched 20 mobile robot carts in the Beijing area (each robot can deliver up to 30 different parcels). Google has been partnering and investing in smart logistics, online grocery shopping, virtual assistant shopping and same day delivery and this investment in JD adds to that effort.
  3. Yitu Technology, a Chinese vision systems and AI startup, raised $200 million in a Series C funding from ICBC International Holdings, SPDB International and Gaocheng Capital.
  4. CMR Surgical, the Cambridge, UK developer of the Versius surgical robot system, raised $100 million in a Series B funding round led by Zhejiang Silk Road Fund and included existing investors Escala Capital Investments, LGT, Cambridge Innovation Capital and Watrium. CMR employs over 200 people and is close to submitting its surgical robotic system for regulatory approval.
  5. Quantum Surgical, a French surgical robotics startup, raised $50 million in a Series A funding round led by Ally Bridge Group with participation from China’s Lifetech Scientific through its joint venture with Ally Bridge Group.
  6. Bossa Nova Robotics, a San Francisco maker of AI-enhanced inventory-management robots, raised $29 million in a Series B-1 round led by Cota Capital, with participation from China Walden Ventures, LG Electronics, Intel Capital, Lucas Venture Group, and WRV Capital.
  7. Ceres Imaging, an Oakland, CA-based aerial spectral imagery and analytics company for the ag industry, raised $25 million in a Series B funding led by Insight Venture Partners and joined by Romulus Capital.
  8. DroneDeploy, a San Francisco provider of software solutions for drones (automated safety checks, workflows and real-time mapping), raised $25 million in a Series C round led by Invenergy Future Fund with participation by Scale Venture Partners, AngelPad, Emergence Capital, AirTree Ventures and Uncork Capital.
  9. Starship Technologies, the Estonian mobile robot delivery startup, raised $25 million in a seed round from existing investors Matrix Partners and Morpheus Ventures, along with extra funding from Airbnb co-founder Nathan Blecharczyk and Skype founding engineer Jaan Tallinn. The money will go toward expanding the fleet which they forecast to exceed 1,000 robots across 20 work and academic campuses, as well as various neighborhoods, in the next year.
  10. Silexica, a provider of AI-on-a-chip for ADAS vehicle applications, raised $18 million in a Series B round led by EQT Ventures Fund with previous investors Merus Capital, Paua Ventures, Seed Fonds Aachen, and DSA Invest.
  11. Verity Studios, a Swiss indoor entertainment drone startup from one of the co-founders of Kiva Systems, raised $18 million in a Series A round led by Fontinalis Partners with Airbus Ventures, Sony Innovation Fund, and Kitty Hawk.
  12. Matternet, a Silicon Valley developer of drone logistics solutions, raised $16 million in Series A funding. Boeing HorizonX Ventures led the round and was joined by Swiss Post, Sony Innovation Fund and Levitate Capital.
  13. Andrew Alliance, a Swiss developer of a line of bench-top lab pipetting robots, raised $14 million in a Series C funding round from Tecan Group, the Waters Corporation, Inpeco, Rancilio Cube, Sam Eletr Trust and Omega Funds. Andrew Alliance has supplied 18 of the top 20 pharmaceutical companies, the top four diagnostic companies, and 16 of the top 20 of the world’s leading academic research institutions with lab robots.
  14. Savioke, the Silicon Valley hospitality robot maker, raised $13.4 million in a Series B funding from Brain Corp, Swisslog Healthcare, NESIC and Recruit. The addition of Swisslog as an investor opens a new market for Savioke: hospital point-to-point delivery helping nurses, lab techs and other healthcare workers deliver essential items throughout the hospital.
  15. NextInput, a Silicon Valley developer of MEMS-based force-sensor solutions, raised $13 million in a Series B funding round from Sierra Ventures, Cota Capital and UMC Capital.
  16. Hailo Technologies, an Israeli chip-making startup developing deep learning capabilities for edge devices, raised $12.5 million in a Series A round from Ourcrowd.com, Maniv Mobility, Next Gear, Zohar Zisapel and Gil Agmon.
  17. Sphero (Orbotix), a Colorado-based robotic toymaker (think Star Wars, Spider-Man and Lightning McQueen), raised $12.1 million as the first part of a $20 million fundraising led by Mercato Partners. Sphero spun out Misty Robotics to handle its new robot business, readjusted its staff after a lackluster holiday sales season, and is remaking itself into an education-first robotics company.
  18. WaterBit, a Silicon Valley precision ag irrigation system provider, raised $11.4 million in Series A funding led by New Enterprise Associates and including TJ Rodgers and Heuristic Capital.
  19. Chowbotics, the Silicon Valley salad-making robot, raised $11 million in a Series A-1 funding round led by the Foundry Group and Techstars. They will use the funding to develop grain, breakfast, poke, açai and yogurt bowls.
  20. Box Bot, a Berkeley-based developer of autonomous delivery robots, raised $7.5 million in seed funding. Artiman Ventures led the round and was joined by Pear Ventures, Afore Capital, Ironfire Ventures and The House Fund.
  21. Kittyhawk, a San Francisco-based drone innovation company, raised $5 million in a Series A funding round led by Bonfire Ventures and joined by Boeing HorizonX Ventures and Freestyle Capital.
  22. Smart Ag, an Iowa developer of an aftermarket kit for driverless tractors and of AutoCart, a plug-and-play system that automates existing grain cart tractors, raised $5 million from Stine Seed Farm.
  23. CyPhy Works, sans founder Helen Greiner, raised $4.5 million from unknown sources in a Series C funding round. CyPhy provides “persistent” tethered drone platforms for defense and public safety. The company previously raised ~$35 million. Its backers include Bessemer Venture Partners, Draper Nexus, Lux Capital, and investment arms of UPS and Motorola. Greiner is now working with the U.S. Army on robotics and artificial intelligence initiatives.
  24. Chasing Innovation Technology, a Shenzhen startup making underwater drone products, raised $3 million in a Seed round from Shenzhen Capital Group.
  25. InterModalics/Pick-it, a Belgian vision system provider for co-bots, raised $2.9 million from Urbain Vandeurzen and PMV to provide growth capital for the Pick-it vision and distancing device.
  26. Acryl, a Korean voice and emotion recognition AI startup, raised around $934,000 from LG Electronics (which equated to a 10% stake in the venture).
  27. Centaurs Tech (Chewrobot), a Chinese and American voice processing startup, raised an undisclosed Series A amount from Zongton Capital, Leaguer Venture Capital and Boyaa Interactive.

Acquisitions

  1. Bonsai AI, a Berkeley software and AI startup, was acquired by Microsoft (in what might be called an acqui-hire) for an undisclosed amount. Bonsai’s 45+ employees will be used by Microsoft to build the machine learning model for autonomous systems of all types.
  2. Carter Control Systems, a Maryland integrator of material handling logistics systems for high-volume mail handlers and postal automation, was acquired by Systems Solutions of Kentucky, a wholly-owned subsidiary of Lummus, a legacy provider of machinery and parts for cotton gin companies, for an undisclosed amount. Systems Solutions is an integrator of letter, parcel, baggage and cargo sortation devices and conveyor equipment. Carter offers a full range of robotic solutions for picking, packing, machine tending, assembly and palletizing.
  3. ESYS Automation, a Michigan industrial robotics integrator, was acquired for an undisclosed amount by JR Automation Technologies, also an industrial robotics integrator.
  4. FFT Production Systems, a German integrator of industrial robots, was acquired by Chinese conglomerate Fosun International for an undisclosed amount. FFT provides complete vehicles and production plants for Tier 1 equipment makers in Germany, the USA, Japan, China and other countries. FFT recorded revenues of over $984 million in 2017 and employs over 2,600 people.
  5. HEXAGON (STO:HEXA-B), the Swedish conglomerate integrating sensors and software into precision measuring technologies, acquired American AutonomouStuff, a developer and supplier of autonomous vehicle solutions, for an undisclosed amount estimated to be around $160 million. Hexagon has ~18,000 employees and net sales of ~$4.2 billion. During 2017 Hexagon acquired MSC, Vires, Catavolt and Luciad to enhance their autonomous, visualization and mobile capabilities. AutonomouStuff joined Baidu’s Apollo project team working on autonomous vehicle solutions earlier this year and had 2017 sales of $45 million.
  6. MyStemKits, an Atlanta-based STEM learning kit that uses 3D printed items, was acquired by Robo 3D, a San Diego 3D printing equipment and supplies provider, for an undisclosed amount.
  7. OnFarm Systems, a Fresno, CA-based SaaS for farmers, was acquired by Swiim, a Denver, CO irrigation system provider, for an undisclosed amount. The plan for the acquisition is to integrate SWIIM’s water balance monitoring and reporting data into the OnFarm dashboard thereby creating a more user-friendly product for SWIIM’s clients.
  8. On Robot, a Danish gripper maker startup, has become the surviving name in the three-way merger/acquisition of On Robot, OptoForce, a Hungarian force sensor provider, and Perception Robotics, a Los Angeles gripper and tactile sensor developer. No financial information was provided.
  9. RedZone Robotics, a Pittsburgh-based multi-sensor inspection provider for wastewater pipeline systems founded 30 years ago by famed roboticist “Red” Whittaker, was acquired by a group of investors led by Milestone Partners and including ABS Capital Partners, for an undisclosed purchase price.
  10. Scott Technology (NZ:SCT), a NZ-based food handling robotics provider, has acquired the assets and IP of Transbotics (OTCMKTS:TNSB), an American AGV manufacturer. No financial details were provided.

IPOs

  1. Albert Analytical Technology (TYO:3906), a Japanese analytics firm developing AI for self-driving vehicles, issued shares to Toyota Motor in return for $3.6 million of cash “For technological innovation as in the development of automated driving technologies with advanced analytical capacity centered on AI and machine learning.”
  2. Odico Formwork Robotics (CPH:ODICO), a Danish construction robotics provider, issued 10 million shares to trade on the Nasdaq First North Denmark Exchange.

A robotics roadmap for Australia

VISION: Robots as a tool to unlock human potential, modernise the economy, and build national health, well-being and sustainability.

Australia has released its first Robotics Roadmap following the example of many other countries. The roadmap, launched at Australia’s Parliament House on June 18, is a guide to how Australia can harness the benefits of a new robot economy.

Building on Australia’s strengths in robot talent and technologies in niche application areas, the roadmap acts as a guide to how Australia can support a vibrant robotics industry underpinning automation across all sectors of the Australian economy, and it is here that it shows some differences from other roadmaps. While many of the recommendations are similar to those of peer nations’ roadmaps, the drivers of the Australian economy are unique, and we set the foundations of the roadmap on five key principles:

To develop the roadmap, the Australian Centre for Robotic Vision, an ARC Centre of Excellence, partnered with industry, researchers and government agencies across the country. Our national consultation process was modelled on Professor Henrik Christensen’s US Robotics Roadmap process and during late 2017 and early 2018 we held a series of workshops, in different Australian capital cities, focussing on areas of economic significance to Australia (see Figure).

Australia’s continued standard of living depends on improving our productivity by 2.5% every year. This is impossible to achieve through labour productivity alone, which over the five years to 2015-16 remained at 1.8%. According to Australia’s Productivity Commission, the productivity gap can be narrowed by new technology: robotics and automation. Automation is thought to be able to boost Australia’s productivity and national income by up to $AU2.2 trillion by 2030 (AlphaBeta report, 2017): $AU1 trillion from accelerating the rate of automation and $AU1.2 trillion from transitioning our workforce to higher-skilled occupations. Yet Australia currently lags peer nations in robotics and automation, ranking 18th in the International Federation of Robotics’ 2017 assessment of industrial robot density.

To encourage both the uptake and development of robotics and automation technologies, we developed 18 key recommendations, which can be broadly grouped into 5 categories:

The Australian robotics industry is diverse and hard to define, existing either as service businesses within major corporations or as small-to-medium-sized enterprises meeting niche market needs. There is no industry association that collects data and represents the interests of robotics and related companies. Through the roadmapping process we discovered many great examples of Australian companies either developing robotic technologies or implementing them. We conservatively estimate that there are more than 1,100 robotics companies in Australia. These companies employ at least 50,000 people and generate more than $AU12b in revenue. It is an industry worthy of recognition in its own right.

If Australia can implement the roadmap’s recommendations, we believe that robotics and automation will maintain our living standards, help protect the environment, provide services to remote communities, reduce healthcare costs, provide safer more fulfilling jobs, prepare the next generation for the future, encourage investment and reshore jobs back to Australia.

Australia’s first robotics roadmap is a living document, symbiotic with a dynamic industry. Its emphasis will shift as the industry develops but always with the intention of navigating a path to prosperity for our nation. By describing what is possible and what is desirable, the roadmap aims to create the grounds for the necessary co-operation to allow robots to help unlock human potential, modernise the economy and build national health, well-being and sustainability.

Robots in Depth with Spring Berman

In this episode of Robots in Depth, Per Sjöborg speaks with Spring Berman about her extensive experience in the field of swarm robotics.

One of the underlying ideas of the field is designing robot controllers similar to those used in nature by different kinds of animal swarms: systems that work without a leader. We get to hear how many robots can be used together to handle tasks that would not be possible using one robot or a small number of robots. We also get introduced to the opportunities of mixing artificial animals with real ones.

Spring describes some of the challenges within swarm robotics, which can be as diverse as mathematical modelling and regulatory issues. She also comments on the next frontier in swarm robotics and the different research areas that are needed to make progress.

This interview was recorded in 2016.

One-shot imitation from watching videos

By Tianhe Yu and Chelsea Finn

Learning a new skill by observing another individual, the ability to imitate, is a key part of intelligence in humans and animals. Can we enable a robot to do the same, learning to manipulate a new object by simply watching a human manipulate the object, as in the video below?


The robot learns to place the peach into the red bowl after watching the human do so.

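The method behind this builds on meta-learning: adapt the policy’s weights with a gradient step computed from the single demonstration, then meta-train so that the adapted policy succeeds. Below is a simplified MAML-style sketch in PyTorch. It is only the skeleton; the actual approach must learn its adaptation loss from raw video, since a human demonstration carries no robot action labels, and every dimension and tensor here is a placeholder.

```python
import torch
import torch.nn as nn
from torch.func import functional_call

# Tiny stand-in policy: maps a 16-dim observation to a 4-dim action.
policy = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 4))
meta_opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
inner_lr = 0.01

# Placeholder data for one task: a "human demo" and a "robot demo".
# (The real method has no action labels for the human demo and must
# learn its inner loss from video; labeled tensors stand in here.)
human_obs, human_act = torch.randn(8, 16), torch.randn(8, 4)
robot_obs, robot_act = torch.randn(8, 16), torch.randn(8, 4)

params = dict(policy.named_parameters())

# Inner step: adapt the policy with one gradient step on the human demo.
inner_loss = loss_fn(functional_call(policy, params, (human_obs,)), human_act)
grads = torch.autograd.grad(inner_loss, list(params.values()), create_graph=True)
adapted = {n: p - inner_lr * g for (n, p), g in zip(params.items(), grads)}

# Outer step: the adapted policy should reproduce the robot demo;
# backpropagating through the adaptation updates the meta-parameters.
outer_loss = loss_fn(functional_call(policy, adapted, (robot_obs,)), robot_act)
meta_opt.zero_grad()
outer_loss.backward()
meta_opt.step()
```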

Personalized “deep learning” equips robots for autism therapy

An example of a therapy session augmented with the humanoid robot NAO (SoftBank Robotics), which was used in the EngageMe study. Tracking of limbs and faces was performed using the CMU Perceptual Computing Lab’s OpenPose utility.
Image: MIT Media Lab

By Becky Ham

Children with autism spectrum conditions often have trouble recognizing the emotional states of people around them — distinguishing a happy face from a fearful face, for instance. To remedy this, some therapists use a kid-friendly robot to demonstrate those emotions and to engage the children in imitating the emotions and responding to them in appropriate ways.

This type of therapy works best, however, if the robot can smoothly interpret the child’s own behavior — whether he or she is interested and excited or paying attention — during the therapy. Researchers at the MIT Media Lab have now developed a type of personalized machine learning that helps robots estimate the engagement and interest of each child during these interactions, using data that are unique to that child.

Armed with this personalized “deep learning” network, the robots’ perception of the children’s responses agreed with assessments by human experts, with a correlation score of 60 percent, the scientists report June 27 in Science Robotics.

It can be challenging for human observers to reach high levels of agreement about a child’s engagement and behavior; their correlation scores are usually between 50 and 55 percent. Rudovic and his colleagues suggest that robots trained on human observations, as in this study, could someday provide more consistent estimates of these behaviors.
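
For intuition, an agreement score of this kind can be computed as a correlation between two raters’ continuous ratings. A quick numpy illustration follows; the ratings are invented, and the paper’s exact agreement metric may differ from the plain Pearson correlation used here.

```python
import numpy as np

# Invented continuous engagement ratings from two human experts for the
# same session segments; the real study's data and metric may differ.
expert_a = np.array([0.2, 0.5, 0.7, 0.4, 0.9, 0.3, 0.6, 0.8])
expert_b = np.array([0.3, 0.4, 0.8, 0.5, 0.7, 0.2, 0.5, 0.9])

r = np.corrcoef(expert_a, expert_b)[0, 1]
print(f"Pearson correlation: {r:.2f}")  # agreement score between raters
```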

“The long-term goal is not to create robots that will replace human therapists, but to augment them with key information that the therapists can use to personalize the therapy content and also make more engaging and naturalistic interactions between the robots and children with autism,” explains Oggi Rudovic, a postdoc at the Media Lab and first author of the study.

Rosalind Picard, a co-author on the paper and professor at MIT who leads research in affective computing, says that personalization is especially important in autism therapy: A famous adage is, “If you have met one person with autism, you have met one person with autism.”

“The challenge of creating machine learning and AI [artificial intelligence] that works in autism is particularly vexing, because the usual AI methods require a lot of data that are similar for each category that is learned. In autism where heterogeneity reigns, the normal AI approaches fail,” says Picard. Rudovic, Picard, and their teammates have also been using personalized deep learning in other areas, finding that it improves results for pain monitoring and for forecasting Alzheimer’s disease progression.  

Meeting NAO

Robot-assisted therapy for autism often works something like this: A human therapist shows a child photos or flash cards of different faces meant to represent different emotions, to teach them how to recognize expressions of fear, sadness, or joy. The therapist then programs the robot to show these same emotions to the child, and observes the child as she or he engages with the robot. The child’s behavior provides valuable feedback that the robot and therapist need to go forward with the lesson.

The researchers used SoftBank Robotics NAO humanoid robots in this study. Almost 2 feet tall and resembling an armored superhero or a droid, NAO conveys different emotions by changing the color of its eyes, the motion of its limbs, and the tone of its voice.

The 35 children with autism who participated in this study, 17 from Japan and 18 from Serbia, ranged in age from 3 to 13. They reacted in various ways to the robots during their 35-minute sessions, from looking bored and sleepy in some cases to jumping around the room with excitement, clapping their hands, and laughing or touching the robot.

Most of the children in the study reacted to the robot “not just as a toy but related to NAO respectfully as if it was a real person,” especially during storytelling, where the therapists asked how NAO would feel if the children took the robot for an ice cream treat, according to Rudovic.

One 4-year-old girl hid behind her mother while participating in the session but became much more open to the robot and ended up laughing by the end of the therapy. The sister of one of the Serbian children gave NAO a hug and said “Robot, I love you!” at the end of a session, saying she was happy to see how much her brother liked playing with the robot.

“Therapists say that engaging the child for even a few seconds can be a big challenge for them, and robots attract the attention of the child,” says Rudovic, explaining why robots have been useful in this type of therapy. “Also, humans change their expressions in many different ways, but the robots always do it in the same way, and this is less frustrating for the child because the child learns in a very structured way how the expressions will be shown.”

Personalized machine learning

The MIT research team realized that a kind of machine learning called deep learning would be useful for the therapy robots to have, to perceive the children’s behavior more naturally. A deep-learning system uses hierarchical, multiple layers of data processing to improve its tasks, with each successive layer amounting to a slightly more abstract representation of the original raw data.

Although the concept of deep learning has been around since the 1980s, says Rudovic, it’s only recently that there has been enough computing power to implement this kind of artificial intelligence. Deep learning has been used in automatic speech and object-recognition programs, making it well-suited for a problem such as making sense of the multiple features of the face, body, and voice that go into understanding a more abstract concept such as a child’s engagement.

“In the case of facial expressions, for instance, what parts of the face are the most important for estimation of engagement?” Rudovic says. “Deep learning allows the robot to directly extract the most important information from that data without the need for humans to manually craft those features.”

For the therapy robots, Rudovic and his colleagues took the idea of deep learning one step further and built a personalized framework that could learn from data collected on each individual child. The researchers captured video of each child’s facial expressions, head and body movements, poses and gestures, along with audio recordings and data on heart rate, body temperature, and skin sweat response from a monitor on the child’s wrist.

The robots’ personalized deep learning networks were built from layers of these video, audio, and physiological data, information about the child’s autism diagnosis and abilities, their culture and their gender. The researchers then compared their estimates of the children’s behavior with estimates from five human experts, who coded the children’s video and audio recordings on a continuous scale to determine how pleased or upset, how interested, and how engaged the child seemed during the session.

Trained on these personalized data coded by the humans, and tested on data not used in training or tuning the models, the networks significantly improved the robot’s automatic estimation of the child’s behavior for most of the children in the study, beyond what would be estimated if the network combined all the children’s data in a “one-size-fits-all” approach, the researchers found.
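
As a rough picture of what such a personalized multimodal network might look like, here is a compressed sketch: per-modality encoders fused with demographic inputs, with personalization done by fine-tuning a copy of the shared network on one child’s expert-coded data. All layer sizes, feature dimensions and the fine-tuning recipe are invented for illustration and are not taken from the paper.

```python
import copy
import torch
import torch.nn as nn

class EngagementNet(nn.Module):
    def __init__(self, d_video=128, d_audio=64, d_physio=16, d_demo=8):
        super().__init__()
        self.video = nn.Sequential(nn.Linear(d_video, 64), nn.ReLU())
        self.audio = nn.Sequential(nn.Linear(d_audio, 32), nn.ReLU())
        self.physio = nn.Sequential(nn.Linear(d_physio, 16), nn.ReLU())
        # Demographic context (diagnosis, culture, gender) joins the fusion.
        self.head = nn.Sequential(
            nn.Linear(64 + 32 + 16 + d_demo, 64), nn.ReLU(), nn.Linear(64, 1)
        )

    def forward(self, video, audio, physio, demo):
        feats = torch.cat(
            [self.video(video), self.audio(audio), self.physio(physio), demo],
            dim=-1,
        )
        return self.head(feats)  # continuous engagement estimate

shared = EngagementNet()             # trained on all children ("one size fits all")
child_model = copy.deepcopy(shared)  # personalized copy, fine-tuned on one
                                     # child's expert-coded sessions

# Smoke test with random stand-in features.
est = child_model(
    torch.randn(1, 128), torch.randn(1, 64), torch.randn(1, 16), torch.randn(1, 8)
)
print(est.shape)  # torch.Size([1, 1])
```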

Rudovic and colleagues were also able to probe how the deep learning network made its estimations, which uncovered some interesting cultural differences between the children. “For instance, children from Japan showed more body movements during episodes of high engagement, while in Serbs large body movements were associated with disengagement episodes,” Rudovic says.

The study was funded by grants from the Japanese Ministry of Education, Culture, Sports, Science and Technology; Chubu University; and the European Union’s HORIZON 2020 grant (EngageME).

Takeaways from Automatica 2018

Automatica 2018 is one of Europe’s largest robotics and automation-related trade shows and a destination for global roboticists and business executives to view new products. It was held June 19-22 in Munich and had 890 exhibitors and 46,000 visitors (up 7% from the previous show).

The International Symposium on Robotics (ISR) was held in conjunction with Automatica with a series of robotics-related keynotes, poster presentations, talks and workshops.

The ISR also had an awards dinner in Munich on June 20th at the Hofbräuhaus, a touristy beer hall and garden with big steins of beer, plates full of Bavarian food and oompah bands on each floor.

Awards were given to:

  • The Joseph Engelberger Award was given to International Federation of Robotics (IFR) General Secretary Gudrun Litzenberger and also to Universal Robots CTO and co-founder Esben Østergaard.
  • The IFR Innovation and Entrepreneurship in Robotics and Automation (IERA) Award went to three recipients for their unique robotic creations:
    1. Lely Holding, the Dutch manufacturer of milking robots, for their Discovery 120 Manure Collector (pooper scooper)
    2. KUKA Robotics, for their new LBR Med medical robot, a lightweight robot certified for integration into medical products
    3. Perception Robotics, for their Gecko Gripper, which uses a biomimetic grasping technology inspired by geckos

IFR CEO Roundtable and President’s Message

From left: Stefan Lampa, CEO, KUKA; Prof Dr Bruno Siciliano, Dir ICAROS and PRISMALab, U of Naples Federico II; Ken Fouhy, Moderator, Editor in Chief, Innovations & Trend Research, VDI News; Dr. Kiyonori Inaba, Exec Dir, Robot Business Division, FANUC; Markus Kueckelhaus, VP Innovations & Trend Research, DHL; and Per Vegard Nerseth, Group Senior VP, ABB.

In addition to the CEO roundtable discussion, IFR President Junji Tsuda previewed the statistics that will appear in this year’s IFR Industrial Robots Annual Report covering 2017 sales data. He reported that 2017 turnover was about $50 billion, that 381,000 robots were sold, a 29% increase over 2016, and that China, which deployed 138,000 robots, was the main driver of 2017’s growth with a 58% increase over 2016 (the US rose only 6% by comparison).

Tsuda attributed the drivers of the 2017 results – and a 15% CAGR forecast for the next few years (25% for service robots) – to the growing simplification (ease of use) of robot training, collaborative robots, progress in overall digitalization, and AI enabling greater vision and perception.
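
For a sense of scale, here is what that 15% CAGR implies for unit sales if applied to the 2017 figure (a back-of-the-envelope projection of mine, not IFR’s published forecast):

```python
# Back-of-the-envelope projection of annual unit sales from the 2017
# figure under the quoted 15% CAGR. Illustrative only, not IFR's forecast.
units = 381_000  # industrial robots sold in 2017
for year in range(2018, 2022):
    units *= 1.15
    print(f"{year}: ~{units:,.0f} units")
```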

During the CEO Roundtable discussion, panel moderator Ken Fouhy asked where each CEO thought we (and his company) would be five years from now.

  • Kuka’s CEO said we would see a big move toward mobile manipulators doing multiple tasks
  • ABB’s Sr VP said that programming robots would become as easy and intuitive as using today’s iPhones
  • Fanuc’s ED said that future mobile robots wouldn’t have to wait for work as current robots often do because they would become more flexible
  • DHL’s VP forecast that perception would have access to more physics and reality than today
  • The U of Naples professor said that the tide has turned and that more STEM kids are coming into the realm of automation and robotics

In relation to jobs, all panel members remarked that the next 30 years would see dramatic changes, with new jobs not yet defined emerging as the present labor force retires and skilled labor shortages force governments to invest in retraining.

In relation to AI, panel members said that major impact would be felt in the following ways:

  • In logistics, particularly in the combined activities of mobility and grasping
  • In the increased use of sensors which enable new efficiencies particularly in QC and anomaly detection
  • In clean room improvements
  • And in in-line improvements, e.g., spray painting

The panel members also outlined current challenges for AI:

  • Navigation perception for yard management and last-mile delivery
  • Selecting the best grasping method for quick manipulation
  • Improving human-machine interaction via speech and general assistance

Takeaways

I was at Automatica from start to finish, seeing all aspects of the show, attending a few ISR keynotes, and holding interviews and talks with some very informative industry executives. Here are some of my takeaways from this year’s Automatica and those conversations:

  • Co-bots were touted throughout the show
    • Universal Robots, the originator of the co-bot, had a mammoth booth which was always jammed with visitors
    • New vendors displayed new co-bots – often very stylish – but none with the mechanical prowess of the Danish-manufactured UR robots
    • UR robots were used in many, many non-UR booths all over Automatica to demonstrate their products or services, indicating UR’s acceptance within the industry
    • ABB and Kawasaki announced a common interface for their two-armed co-bots, hoping that other companies will join and use the interface and that the group will soon add single-arm robots to the software. The move underscores a persistent problem in training robots: each vendor has its own proprietary training method (see the sketch after this list)
  • Bin-picking, which had as much presence and hype 10 years ago as co-bots had 5 years ago and IoT and AI had this year, is blasé now because the technology has finally become widely deployed and almost matches the original hype
  • AI and Internet-of-Things were the buzzwords for this show and vendors that offered platforms to stream, store, handle, combine, process, analyze and make predictions were plentiful
  • Better programming solutions for co-bots and even industrial robots are appearing, but better ones still are needed
  • 24/7 robot monitoring is gaining favor, but access to company systems and equipment is still mostly withheld for security reasons
  • Many special-purpose exoskeletons were shown, designed to help factory workers do their jobs
  • The Danish robotics cluster is every bit as good, comprehensive, supportive and successful as clusters in Silicon Valley, Boston/Cambridge and Pittsburgh
  • Vision and distancing systems – plus standards for same – are enabling cheaper automation
  • Grippers are improving (but see below for discussion of end-of-arm devices)
  • and promises (hype) about digitalization, data and AI, IoT, and machine (deep) learning were everywhere
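
The common-interface idea above is essentially an adapter pattern: define one shared teaching protocol and let each vendor wrap its proprietary API behind it. Below is a toy Python illustration; the interface, class and method names are all invented, since the actual ABB/Kawasaki specification is not described here.

```python
from typing import Protocol, Sequence

class CobotInterface(Protocol):
    """One shared teaching protocol; names are invented for illustration."""
    def move_to(self, joint_angles: Sequence[float]) -> None: ...
    def record_waypoint(self) -> Sequence[float]: ...
    def set_gripper(self, closed: bool) -> None: ...

class VendorACobot:
    """Adapter wrapping one vendor's proprietary API (stubbed with prints)."""
    def move_to(self, joint_angles: Sequence[float]) -> None:
        print(f"vendor-A motion command: {list(joint_angles)}")

    def record_waypoint(self) -> Sequence[float]:
        return [0.0] * 6  # pretend the arm reports its current joint angles

    def set_gripper(self, closed: bool) -> None:
        print(f"vendor-A gripper {'close' if closed else 'open'}")

def teach_pick(robot: CobotInterface) -> None:
    # The same teaching routine runs unchanged on any compliant robot.
    home = robot.record_waypoint()
    robot.move_to([0.1, -0.5, 0.3, 0.0, 0.2, 0.0])
    robot.set_gripper(closed=True)
    robot.move_to(home)

teach_pick(VendorACobot())
```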

End-of-arm devices

Plea from Dr. Michael Zürn, Daimler AG

An exec from Daimler AG gave a talk about Mercedes-Benz’s use of robotics. He said that they have 50 models and at least 500 different grippers, yet humans with two hands could do every one of those tasks, albeit with superhuman strength in some cases. He welcomed the years of testing behind ABB’s two-armed YuMi robot because it is the closest to what they need, yet it is still nowhere near what a two-handed person can do. Hence his plea to gripper makers: offer two hands in a flexible device that performs like a two-handed person and learns its various jobs intuitively.

OnRobot’s goals

Enrico Krog Iversen was the CEO of Universal Robots from 2008 until 2016, when it was sold to Teradyne. Since then he has invested in and cultivated three companies (OnRobot, Perception Robotics and OptoForce), which he merged to form OnRobot A/S; Iversen is the CEO of the new entity. With this foundation of sensors, a growing business in grippers, integration with UR and MiR systems, and a promise to acquire a vision and perception component, Iversen foresees building an entity where everything that goes on a robot arm can be acquired from his company, all with a single intuitive user interface. This latter aspect, a single intuitive interface for everything, is a feature users request but seldom find.

Fraunhofer’s Hägele’s thesis

Martin Hägele, Head of the Robotics and Assistive Systems Department at Fraunhofer IPA in Stuttgart, argued that a transformation is coming in which end-of-arm devices will increasingly include advanced sensing, more actuation, and user interaction. It seems logical: the end of the robot arm is where all the action is — the sensors, cameras, handling devices and the item to be processed. Times have changed from when robots were blind and fed by expensive positioning systems.

Moves by market-leader Schunk

“We are convinced that industrial gripping will change radically in the coming years,” said Schunk CEO Henrik Schunk. “Smart grippers will interact with the user and their environment. They will continuously capture and process data and independently develop the gripping strategy in complex and changing environments and do so faster and more flexibly than man ever could.”

“As part of our digitalization initiative, we have set ourselves the target of allowing systems engineers and integrators to simulate entire assembly systems in three-dimensional spaces and map the entire engineering process from the design through to the mechanics, electrics and software, right up to virtual commissioning in digitalized form, all in a single system. Even experienced designers are amazed at the benefits and the efficiency effects afforded by engineering with Mechatronics Concept Designer,” said Schunk of the company’s OEM partnership with Siemens PLM Software, the provider of the simulation software.

Internet-of-Things

Microsoft CEO Satya Nadella said: “The world is in a massive transformation which can be seen as an intelligent cloud and an intelligent edge. The computing fabric is getting more distributed and more ubiquitous. Micro-controllers are appearing in everything from refrigerators to drills – every factory is going to have millions of sensors – thus computing is becoming ubiquitous and that means data is getting generated in large amounts. And once you have that, you use AI to reason over that data to give yourself predictive power – analytical power – power to automate things.”

Certainly the first or second thing salespeople talked about at Automatica was AI, IoT and Industry 4.0. “It’s all coming together in the next few years,” they said. But they didn’t say whether businesses would open their systems to the cloud, stream data to somebody else’s processor, connect to an offsite analytics platform, or do it all onboard and post-process the analytics.

Although the strategic goals for implementing IoT differ from country to country, there’s no doubt that businesses plan to spend on adding IoT, with discrete manufacturing, transportation and logistics leading that spending.

Silly Stuff

As at any show, there were pretty girls flaunting products they knew nothing about, giveaways of snacks, food, coffees and gimmicks, and loads of talk about deep learning and AI for products not yet available for viewing or fully understood by the speaker.

Kuka, in a booth far, far away from its main booth (where it was demonstrating its industrial, mobile and collaborative robotics product line, including the award-winning LBR Med robot), was showing a 5′ high concept humanoid robot with a big screen and a stylish 18″ silver cone behind the screen. It looked like an airport or store guide. When I asked what it did, I was told that the cone was the woofer for the sound system and that the robot didn’t do anything; it was one of many concept devices they were reviewing.

Nevertheless, Kuka had a 4′ x 4′ brochure which didn’t show or even refer to any of the concept robots on display. Instead it was all hype about what such a robot might do sometime in the future: purify air, serve as a gaming console, have an “underhead projector,” a HiFi speaker, a camera, and a coffee and wellness head, and “provide robotic intelligence that will enrich our daily lives.”

Front and back of 4 foot by 4 foot brochure (122cm x 122cm)

Don’t watch TV while safety driving

The Tempe police released a detailed report on their investigation of Uber’s fatality. I am on the road and have not had time to read it, but the big point, reported in many press accounts, was that the safety driver was, according to logs from her phone accounts, watching the show “The Voice” via Hulu on her phone shortly before the incident.

This is at odds with earlier statements in the NTSB report, that she had been looking at the status console of the Uber self-drive system, and had not been using her phones. The report further said that Uber asked its safety drivers to observe the console and make notes on things seen on it. It appears the safety driver lied, and may have tried to implicate Uber in doing so.

Obviously attempting to watch a TV show while you are monitoring a car is unacceptable, presumably negligent behaviour. More interesting is what this means for Uber and other companies.

The first question — did Uber still instruct safety drivers to look at the monitors and make note of problems? That is a normal instruction for a software operator when there are two crew in the car, as most companies have. At first, we presumed that perhaps Uber had forgotten to alter this instruction when it went from 2 crew to 1. Perhaps the safety driver just used that as an excuse for looking down, since she felt she could not admit to watching TV. (She probably didn’t realize police would get logs from Hulu.)

If Uber still did that, it’s an error on their part, but now seems to play no role in this incident. That’s positive legal news for Uber.

It is true that if you had two people in the car, it’s highly unlikely the safety driver behind the wheel would be watching a TV show. It’s also true that if Uber had attention monitoring on the safety driver, it also would have made it harder to pull a stunt like that. Not all teams have attention monitoring, though after this incident I believe that most, including Uber, are putting it in. It might be argued that if Uber did require drivers to check the monitors, this might have somehow encouraged the safety driver’s negligent decision to watch TV, but that’s a stretch. I think any reasonable person is going to know this is not a job where you do that.

There may be some question regarding whether a person with such bad judgement should have been cleared to be a safety driver. Uber may face some scrutiny for that bad choice. They may also face scrutiny if their training and job evaluation process for the safety drivers was clearly negligent. On the other hand, human employees are human, and if there’s not a pattern, it is less likely to create legal trouble for Uber.

From the standpoint of the Robocar industry, it makes the incident no less tragic, but less informative about robocar accidents. Accidents are caused every day because people allow themselves ridiculously unsafe distractions on their phones. This one is still special, but less so than we thought. While the issue of whether today’s limited systems (like the Tesla) generate too much driver complacency is still there, this was somebody being paid not to be complacent. The lessons we already knew — have 2 drivers, have driver attention monitoring — are still the same.

“Disabled the emergency braking.”

A number of press stories on the event have said that Uber “disabled” the emergency braking, and this also played a role in the fatality. That’s partly true but is very misleading vocabulary. The reality appears to be that Uber doesn’t have a working emergency braking capability in their system, and as such it is not enabled. That’s different from the idea that they have one and disabled it, which sounds much more like an ill act.

Uber’s system, like all systems, sometimes decides suddenly that there is an obstacle in front of the car for which it should brake when that obstacle is not really there. This is called a “false positive” or “ghost.” When this happens well in advance, it’s OK to have the car apply the brakes in a modest way, and then release them when it becomes clear it’s a ghost. However, if the ghost is so close that it would require full-hard braking, this creates a problem. If a car frequently does full-hard braking for ghosts, it is not only jarring, it can be dangerous, both for occupants of the car, and for cars following a little too closely behind — which sadly is the reality of driving.

As such, an emergency braking decision algorithm which hard-brakes for ghosts is not a working system. You can’t turn it on safely, and so you don’t. That is different from disabling it. While the Uber software did decide, 2 seconds out, that there was an obstacle that required a hard brake, it decides that out of the blue too often to be trusted with that decision. The decision is left to the safety driver — who should not be watching TV.

That does not mean Uber could not have done this much better. The car should still have done moderate braking, which would reduce the severity of any real accident and also wake up an inattentive safety driver. An audible alert should also have been present. Earlier, I speculated that if the driver had been looking at the console, this sort of false-positive incident would very likely have appeared there, so it was odd that she did not see it; but it turns out she was not looking there.
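
Here is a minimal sketch of the fallback behavior argued for above: moderate braking plus an audible alert for detections the system does not fully trust, with full-hard braking reserved for high-confidence obstacles. The thresholds and decision structure are invented for illustration and are not Uber’s (or anyone’s) actual logic.

```python
# Invented thresholds illustrating the argued-for fallback policy; this is
# not any company's actual emergency braking logic.
HARD_BRAKE_CONF = 0.95  # detection trusted enough for full deceleration
SOFT_BRAKE_CONF = 0.50  # plausible detection, not yet trusted

def braking_command(obstacle_confidence: float, time_to_impact_s: float):
    """Return a (braking, alert) pair for one perception frame."""
    if obstacle_confidence >= HARD_BRAKE_CONF and time_to_impact_s < 3.0:
        return ("full_brake", "audible_alarm")
    if obstacle_confidence >= SOFT_BRAKE_CONF:
        # Moderate braking sheds speed before a possible impact and wakes
        # an inattentive safety driver, without hard stops for every ghost.
        return ("moderate_brake", "audible_alarm")
    return ("none", "quiet")

print(braking_command(0.70, 2.0))   # ghost-like detection: moderate brake
print(braking_command(0.98, 1.5))   # high-confidence obstacle: full brake
```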

The Volvo also has an emergency braking system. That system was indeed disabled — it is normal for the ADAS functions built into a car to be disabled when it is used as a prototype robocar. You are building something better, and you can’t have the two competing. The Volvo system does not brake too often for ghosts, but that’s because it also fails to brake for real obstacles far too often by robocar standards. Any ADAS system will be tuned that way because the driver is still responsible for driving. Teslas have been notoriously plowing into road barriers and trucks due to this ADAS style of tuning. It’s why a real robocar is much harder than the Tesla autopilot.

Other news

I’ve been on the road, so I have not reported on it, but the general news has been quite impressive. In particular, Waymo announced an order for 62,000 Chrysler Pacifica minivans of the type it uses in its Phoenix-area tests. They are going beyond a pilot project to real deployment, and soon; nobody else is close. These will add to around 20,000 Jaguar electric vehicles presumably aimed at a more luxurious ride — though I actually think the minivan, with its big doors, large interior space and high ride, may well be more pleasant for most trips. The electric Jaguar will be more efficient.
