Fundings
- LeddarTech, the Canadian developer of sensors and LiDAR distancing systems for ADAS and other mobile systems, raised $101 million in a Series C funding led by Osram with participation by Delphi, Magneti Marelli, Integrated Device Technology, Fonds de solidarité FTQ, BDC Capital and GO Capital. This round of funding will allow LeddarTech to enhance its ASIC development efforts, expand its R&D team, and accelerate ongoing LiDAR development programs with select Tier-1 automotive customers for rapid market deployment.
- Innoviz Technologies, the Israeli solid-state LiDAR startup, raised $65 million in a Series B funding. Delphi Automotive PLC and Magna International participated in the round, along with additional new investors including 360 Capital Partners, Glory Ventures, Naver and others. All Series A investors also participated in the round.
- Roobo, the Chinese startup and manufacturer of the Domgy consumer robot, raised $53 million in a Series B round led by Seven Seas Partners and IFlyTek, a Chinese developer of self-driving technologies, speech recognition for human-machine and human-human communication and related software and chips.
- JingChi, a Sunnyvale self-driving car vision systems startup, raised $52 million in a seed round. Although the lead investor was Qiming Venture Partners, the company did not disclose the identity of any additional investors in the round.
- Five AI, a Bristol, UK self-driving technology and ride-sharing startup, raised $35 million in a Series A funding round led by Lakestar Capital, with Amadeus Capital Partners, Notion Capital and Kindred (which all previously invested in its seed round) also participating.
- Airobotics, the Israeli autonomous drone platform for the mining, utilities and gas industries, raised $32.5 million in a Series C funding round led by BlueRun Ventures. With the funding, Airobotics is starting a new Homeland Security and Defense division, as well as the “Airobotics Safe Cities” initiative, which uses fully automated drones to perform emergency operations in cities.
- Cambridge Medical Robotics, a UK startup developing a next-generation robotic surgical system, closed a $26 million Series A funding round from Watrium and existing investors Cambridge Innovation Capital, LGT Global Invest, Escala Capital and ABB Technology Ventures.
- Kinova Robotics, a Canadian provider of robotics for the disabled, raised $20 million to transition into three new areas of service robotics: collaborative robots for inspection and pick-and-place operations, manipulators for mobile platforms, and medical robots for research and therapies. Funding came from four major contributors: lead investor Fonds Manufacturier Québécois, KTB Network (South Korea), Foxconn (Taiwan) and BDC Capital (Canada).
- Humatics, a Cambridge, Mass.-based developer of sensors, software, and control systems that enable robots to work within human environments, raised $18 million in a Series A funding round. Fontinalis Partners led the round, and was joined by investors including Airbus Ventures, Lockheed Martin Ventures, Intact Ventures, Tectonic Ventures, Presidio Ventures, Blue Ivy Ventures, Ray Stata, and Andy Youmans.
- Lighthouse AI, a Silicon Valley startup developing a deep learning, 3D sensing, interactive home assistant, raised $17 million (in May) in a round led by Eclipse, Felicis Ventures, Andy Rubin’s Playground Ventures, SignalFire and StartX. Its new home security device can accurately distinguish between adults, children, pets and objects, as well as known and unknown faces and actions, and can report on and play back what it finds.
- Tonbo Imaging, an Indian defense vision systems startup, raised $17 million in a Series B funding round led by Walden Riverwood Ventures with Artiman Ventures, Edelweiss, and Qualcomm Ventures.
- Drive.AI, a Silicon Valley self-driving startup, raised another $15 million (after their $50 million Series B round earlier this year) from Grab, the Asian on-demand transportation and mobile payments platform that rivals Uber, and unnamed others. Drive CEO Sameep Tandon said: “We look at Singapore as a technological juggernaut. When innovations happen in the region, basically they start in Singapore and then move out to other places within the region, whether it’s Indonesia, Vietnam or China. What’s also really interesting to us about Singapore is they have this sort of existential problem here – for them autonomous driving is not a matter of ‘if,’ it’s a matter of ‘when.’”
- Ushr Inc., a Livonia, Mich.-based startup developing high-definition mapping technology and software for autonomous and semi-autonomous vehicles, raised $10 million in a Series A funding round led by Forte Ventures and including EnerTech Capital, Emerald Technology Ventures, and GM Ventures.
- Agrible, an Illinois startup offering a suite of software tools for connected farmers, raised $9.7 million of a $15.7 million Series B round of funding led by Maumee Ventures, iSelect Fund, and existing investors Flyover Capital, Archer Daniels Midland, and Serra Ventures.
- Bonsai AI, a Berkeley, CA AI startup, raised $7.6 million (in May) in a Series A round led by Microsoft Ventures and NEA, with participation from Samsung, Siemens, and ABB Technology Ventures.
- Metawave, a Palo Alto self-driving perception spin-off from PARC, raised $7 million in seed funding. Backers included Khosla Ventures, Motus Ventures, and Thyra Global Management.
- Ori Systems, a Boston startup with a novel interior space robotic furniture system, raised $6 million in a Series A funding round led by Khosla Ventures.
- Specim Spectral Imaging, the Finnish company providing imaging systems to Zen Robotics for waste sorting and management, raised $4.2 million from Bocap SME Achievers Fund II Ky.
- OpenSpace, a San Francisco machine vision startup, raised $3 million in seed funding. Lux Capital led the round, and was joined by investors including Foundation Capital, the National Science Foundation, the Box Group, AngelList, Goldcrest, Sterling Capital and Comet Labs.
- Furhat Robotics, the Swedish startup developing social robots, raised $2.5 million in a seed funding round from Balderton Capital and LocalGlobe. The company is currently working with Swedish public services as well as companies like Honda, Intel, Merck, Toyota, and KPMG to develop apps on its platform. For example, a Swedish employment agency is using the conversational robot to prepare people for job interviews and to train teachers; Honda is using Furhat to develop a conversational tool for the elderly in a smart home setting; and KPMG is designing a Furhat-enabled financial advisor interface. A recent Forbes article reports that both Disney and Intel are customers of this 50-person startup. Watch this fascinating Bloomberg video:
- Reactive Robotics, a Munich startup developing rehab robotics for hospitals with ICUs for mechanically ventilated, neurological or trauma patients, raised an amount estimated at around $2.5 million in a round led by MTIP MedTech Innovation Partners AG, High-Tech Gründerfonds, Bayern Kapital, TQ-Group, and Dr. Doll Holding GmbH. Reactive Robotics said it expects to deliver its first clinical test product by the first quarter of 2018.
- Betterview, a San Francisco-based software startup that can analyze detailed aerial footage captured by drones, raised $2 million. Compound Venture Capital led the round, and was joined by investors Maiden Re, 645 Ventures, Arab Angel, Winklevoss Capital, Chestnut Street Ventures, Pierre Valade, Haystack and MetaProp.
- Sea Machines Robotics, a Boston startup developing unmanned marine systems, raised $1.5 million (in May) in a round led by Connecticut-based LaunchCapital with participation from Cambridge-based venture capital firm Accomplice, Techstars, LDV Capital, and the Geekdom Fund. Sea Machines provides software and hardware to turn existing boats into autonomous vehicles.
Fundings (amount unknown)
- SharkNinja, a home products distributor, raised an undisclosed sum from CDH Investments, a large private equity fund, which said it purchased “a significant equity interest.” SharkNinja launched a Roomba-like robot vacuum to its line of products — at half the price of iRobot’s Roomba. Analysts are saying that SharkNinja “is a credible threat to iRobot” given its knack for marketing, as well as engineering high-quality products at value price points — two strengths that helped it successfully take market share from Dyson in the upright-vacuum market in recent years.
- Acutronic Robotics, a Swiss company providing multi-axis motion simulators, has received Series A funding from the Sony Innovation Fund. No financial details were given. Funds will be used to enable Acutronic to accelerate the development of its Hardware Robot Operating System (H-ROS), to compete with ROS-I and legacy software from robot manufacturers. “H-ROS aims to change the landscape of robotics by creating an ecosystem where hardware components can be reused among different robots, regardless of the original manufacturer. We strongly believe that the future of robotics will be about modular robots that can be easily repaired and reconfigured. H-ROS aims to shape this future.”
- Ocean Aero, a San Diego unmanned marine systems startup, raised an undisclosed amount from Lockheed Martin Ventures. “Ocean Aero represents the next generation of environmentally powered, autonomous ocean systems. Our investment will allow us to better respond to customers’ maritime needs with technology solutions for a diverse set of missions,” said Chris Moran, ED and GM of Lockheed Martin Ventures.
Acquisitions
- John Deere, the farm equipment manufacturer, acquired Blue River Technology, a Silicon Valley AI and farm equipment startup, for $305 million. Blue River has honed its See & Spray and Sense & Decide devices to analyze every plant in a field and apply herbicides only to weeds and overly crowded plants needing thinning, thereby dramatically reducing the amount of chemicals used. Its robots are towed behind a tractor like conventional spraying equipment, but Blue River’s towed implements have onboard cameras that use machine-learning software to distinguish between crops and weeds, and automated sprayers to target and spray the unwanted plants. Further, Blue River devices have a second set of cameras to automatically check their work as they operate and to gather data on the tens of thousands of plants in each field, so that its analytics software can continue improving the devices and the process. Daniel Theobald, Founder and Chief Innovation Officer at Vecna, a Cambridge, MA provider of mobile robots, said: “It’s a smart move by Deere. They realize the time window in which ag industry execs will continue to buy dumb equipment is rapidly coming to a close. The race to automate is on, and traditional equipment manufacturers who don’t embrace automation will face extinction. Agriculture is ripe for the benefits that robotics has to offer. Automation allows farmers to decrease water use, reduce the use of pesticides and other methods that are no longer sustainable, and helps solve ever-worsening labor shortages.”
- OMRON, the Japanese company that acquired robot maker Adept Technology last year, has just acquired Microscan Systems, the Renton, WA-based barcode reading and machine vision systems company, for $157 million. Microscan was a wholly owned subsidiary of UK-based Spectris Plc.
- Neato Robotics, the California maker of home robot vacuums, was acquired by German appliance maker Vorwerk. Financial terms were not disclosed. Vorwerk first invested in Neato back in 2010 but has now acquired Neato outright, fully owning its business and technology, which could help the international operation expand into the growing robotic vacuum industry.
- Siemens, the German conglomerate, acquired Tass International for an undisclosed amount. Tass develops software that simulates traffic scenarios, validates autonomous driving and replicates ADAS (advanced driver assistance systems) in crash testing. It has 200 employees and annual revenue of around $32 million.
- Precision Planting, a developer and reseller of mechanical, monitoring and control systems for precision ag applications, was acquired by AGCO, a global manufacturer and distributor of ag equipment, for an undisclosed amount. Precision Planting was a subsidiary of The Climate Corporation (a subsidiary of Monsanto).
- Nabors Industries, an oil and gas drilling company, has acquired Robotic Drilling Systems, a Norwegian provider of a system for unmanned drill-floor operations. No figures were disclosed regarding the transaction.
IPOs
- Restoration Robotics, a Silicon Valley FDA-approved robotic hair transplant startup, has filed to be listed on NASDAQ under the symbol HAIR. They plan to offer 3.125 million shares priced at around $8 per share — a $25 million IPO. It is expected to price during the week of October 9, 2017. If that price holds, it would establish a market value of $225 million for the company.
RoboBusiness 2017: What’s cooking in robotics?
Mike Toscano, the former president of the Association for Unmanned Vehicle Systems International, emphatically declared at the September RobotLab forum that “anyone who claims to know the future of the [robotics] industry is lying; I mean, no one could’ve predicted the mobile computing revolution.” These words served as a guiding principle as I walked around RoboBusiness in Silicon Valley last week.
The many keynotes, pitches and exhibits in the Santa Clara Convention Center had the buzz of an industry racing towards mass adoption, similar to the early days of personal computing. The inflection point for the invention that changed the world, the PC, came in 1995. That year, Sun Microsystems released Java to developers with the promise of “write once, run anywhere,” followed weeks later by Microsoft’s consumer software package, Windows 95. Greater accessibility led to full ubiquity and to applications unthinkable by the original engineers. In many ways, the robot market is standing a few years before its own watershed moment.
In my last post, I highlighted mechanical musicians and painters; this week it is time to see what is cooking, literally, in robotics. Next year, startup Moley plans to introduce the “first fully-automated and integrated intelligent cooking robot,” priced under $100,000. It already has a slick video reminiscent of Lily Robotics’ rise to the headlines; needless to say, Moley has created quite a stir in the culinary community.
Austin Gresham, executive chef at The Kitchen by Wolfgang Puck, is very skeptical: “Professional chefs have to improvise constantly as they prepare dishes. If a recipe says to bake a potato for 25 minutes and the potatoes are more or less dense than the previous batch, then cooking times will vary. I would challenge any machine to make as good a mashed potato (from scratch).” Gresham’s challenge is really the crux of the matter: creativity is driven by humans’ desire for food. Without taste, could a robot chef have the intuition to improvise?
Acting as a judge of the RoboBusiness Pitch Fire Competition, I met entrepreneurs undeterred by the market challenges ahead. In addition, throughout my Valley visit, I encountered five startups building commercial and consumer culinary applications. Any time this happens within such a short timespan, I stop and take notice. Automated restaurants seem to be a growing trend across the nation, with a handful of upstarts on both coasts. Eatsa is a chain of quinoa-salad restaurants sans cashiers and servers. Customers order via mobile devices or on-site kiosks, picking up their ready dishes through an automated floor-to-ceiling lockbox fixture. However, behind the wall, Eatsa has hourly workers manually preparing the salad bowls. Cafe X in San Francisco offers a completely automated experience, with a robot-arm barista preparing, brewing and serving espressos, cappuccinos, and Americanos. After raising $5 million from venture investors, Cafe X plans to expand with robot kiosks throughout the city. Probably the most end-to-end automated restaurant concept I visited is BBox by Nourish, tucked away on UC Berkeley’s Global Campus. BBox is currently running a trial on campus and plans to open its first store next year to conquer the multi-billion-dollar breakfast market with egg sandwiches and gourmet coffee (see video below).
According to Nourish’s CEO Greg Becker, BBox will “reengineer the food ecosystem, from farm to mouth.” Henry Hu, Cafe X’s founder, also aims to revolutionize “the supply chain, recipes, maintenance, and customer support.” To date, the most successful robotic concept is Zume Pizza. Founder Julia Collins made headlines last year with her groundbreaking spin on the traditional pizzeria. Today she is taking on Domino’s dollar for dollar in the San Francisco area, delivering pies in under 22 minutes. Collins, a former Chief Financial Officer of a Mexican restaurant chain, challenges the food industry: “Why don’t we just re-write the rules— forget about everything we learned about running a restaurant?” Already, Zume is serving hundreds of satisfied customers daily, proving that, at least with pizza, it is possible to innovate.
“We realized we could automate more of the unsafe repetitive tasks of operating a kitchen using flexible, dynamic robots,” explains Collins, who currently employs over 50 human workers who do everything from software engineering to supervising the robots to delivering the pizza. “The humans that work at Zume are making dough from scratch, working with farmers to source products, recipe development—more collaborative, creative human tasks. [We have] lower rent costs because we don’t have a storefront; delivery only and lower labor costs. We reinvest those savings into locally sourced, responsibly farmed food.” Collins also boasts that her human workforce has access to free vision, dental, and health insurance thanks to the cost savings.
Even Shake Shack could have competition very soon, as Google Ventures-backed Momentum Machines is launching an epicurean robot bistro in San Francisco’s chic SoMa district late next year. The machine, which has been clocked at 400 burgers an hour, guarantees “to slice toppings, grill a patty, assemble, and bag the burger without any help from humans,” at prices that “everyone can afford.” Momentum’s proposition prompted former McDonald’s CEO Ed Rensi to controversially state that “it’s cheaper to buy a $35,000 robotic arm than it is to hire an employee who’s inefficient making $15 an hour bagging french fries.” Comments like Rensi’s do not further the industry; in fact, they probably fed the controversy last month around the launch of Bodega, an automated convenience store that even enraged Lin-Manuel Miranda (below).
The bad press was multiplied further by Elizabeth Segran’s article in Fast Company, which read, “the major downside to this concept — should it take off — is that it would put a lot of mom-and-pop stores out of business.” Founder Paul McDonald responded on Medium: “Rather than take away jobs, we hope Bodega will help create them. We see a future where anyone can own and operate a Bodega — delivering relevant items and a great retail experience to places no corner store would ever open.” While Bodega is not exactly a robotic concept, it is similar to the automated Amazon Go marketplace, with 10 computer vision sensors tracking the consumer and inventory managed via a mobile checkout app. “We’re shrinking the store and putting it in a box,” said McDonald. The founder has publicly declared war on 7-Eleven’s 5,000 stores, in addition to the 4 million vending machines across the US. Recognizing the pressure to innovate, last year 7-Eleven made history with the first drone Slurpee delivery. “Drone delivery is the ultimate convenience for our customers and these efforts create enormous opportunities to redefine convenience,” said Jesus H. Delgado-Jenkins, 7-Eleven EVP and Chief Merchandising Officer. “This delivery marks the first time a retailer has worked with a drone delivery company to transport immediate consumables from store to home. In the future, we plan to make the entire assortment in our stores available for delivery to customers in minutes. Our customers have demanding schedules, are on-the-go 24/7 and turn to us to help navigate the challenges of their daily lives. We look forward to working with Flirtey to deliver to our customers exactly what they need, whenever and wherever they need it.”
As mom & pop stores compete for market share, one wonders whether, with more Kitchen OS concepts, home-cooked meals will join the list of outdated cultural trends. Serenti Kitchen in San Francisco plans to bring the Keurig pod revolution to food with its proprietary machine, which uses prepared culinary recipe pods that are dropped into a bowl and whipped to perfection by a robotic arm (see above). Serenti founder Tim Chen was featured last year at the Smart Kitchen Summit, which reconvenes later this month in Seattle. Chen said, “We’re building something that’s quite hard, mechanically, so it’s more from a vision where we wanted to initially develop a machine that could cook, and make cooking easier and automate cooking for the home.” Initially Chen plans to target business catering, “In the near term, we need to focus on placing these machines where there’s the highest amount of density, which is right in the offices,” but long-term Serenti plans to join the appliance counter. Chen explained his inspiration: “Our Mom is a great cook, so they’ve watched her execute the meals. Then realized a lot of it is repetitive, and what recipes are, is essentially just a machine language.” Chen’s observations are shared by many in the IoT and culinary space, as this year’s finalists in the Smart Kitchen Summit include more robotic inventions, such as Crepe Robot, which automatically dispenses, cooks and flavors France’s favorite snack, and GammaChef, a robotic appliance that, like Serenti, promises to whip up anything in a bowl. Clearly, these inventions will eventually lead to a redesign of the physical home kitchen space, which is already crowded with appliances. Some innovators are even using robotic arms tucked away in cabinets, along with specialized drawers, ovens and refrigeration units that communicate seamlessly to serve up dinner.
The automated kitchen illuminated by Moley and others might be coming sooner than anyone expects; then again, it could be a rotten egg. In almost every sci-fi movie and television show, the kitchen is reduced to a replicator that synthesizes food to the wishes of the user. Three years ago, it was rumored that food powerhouse Nestle was working on a machine that could produce nutritional supplements on demand, code-named Iron Man. While Iron Man has yet to be released to the public, it does illustrate the convergence of 3D printing, robotics and kitchen appliances. The Consumer Electronics Show is still months away, but my appetite has already been whetted for more automated culinary treats. Stay tuned!
Global robot growth causing shortages in critical components
Two reputable research firms are reporting that the robotics industry is growing more rapidly than expected. BCG (Boston Consulting Group) is conservatively projecting that the market will reach $87 billion by 2025; Tractica, incorporating the robotic and AI elements of the emerging self-driving industry, is forecasting that the market will reach $237 billion by 2022.
Both research firms acknowledge that yesterday’s robots — which were blind, big, dangerous and difficult to program and maintain — are being replaced and supplemented with newer, more capable ones. Today’s new – and future – robots will have voice and language recognition, access to super-fast communications, data and libraries of algorithms, learning capability, mobility, portability and dexterity. These new precision robots can sort and fill prescriptions, pick and pack warehouse orders, sort, inspect, process and handle fruits and vegetables, and perform a myriad of other industrial and non-industrial tasks, most faster than humans, all the while working safely alongside them.
Boston Consulting Group (BCG)
Gaining Robotic Advantage, June 2017, 13 pages, free
BCG suggests that business executives be aware of the ways robots are changing the global business landscape and think and act now. They see robotics-fueled changes coming in retail, logistics, transportation, healthcare, food processing, mining and agriculture.
BCG cites the following drivers:
- Private investment in the robotic space has continued to amaze with exponential year-over-year funding curves and sensational billion dollar acquisitions.
- Prices continue to fall on robots, sensors, CPUs and communications while capabilities continue to increase.
- Robot programming is being transformed by easier interfaces, GUIs and ROS.
- The prospect of a self-driving vehicles industry disrupting transportation is propelling a talent grab and strategic acquisitions by competing international players with deep pockets.
- 40% of robotic startups have been in the consumer sector and will soon augment humans in high-touch fields such as health and elder care.
BCG also cites the following as an example of paying close attention to gain advantage:
“Amazon gained a first-mover advantage in 2012 when it bought Kiva Systems, which makes robots for warehouses. Once a Kiva customer, Amazon acquired the robot maker to improve the productivity and margins of its network of warehouses and fulfillment centers. The move helped Amazon maintain its low costs and expand its rapid delivery capabilities. It took five years for a Kiva alternative to hit the market. By then, Amazon had a jump on its rivals and had developed an experienced robotics team, giving the company a sustainable edge.”
Tractica
Robotics Market Forecast – June 2017, 26 pages, $4,200
Drones for Commercial Applications – June 2017, 196 pages, $4,200
AI for Automotive Applications – May 2017, 63 pages, $4,200
Consumer Robotics – May 2017, 130 pages, $4,200
The key story is that industrial robotics, the traditional pillar of the robotics market dominated by Japanese and European manufacturers, has given way to non-industrial robot categories like personal assistant robots, UAVs, and autonomous vehicles. The epicenter is shifting toward Silicon Valley, which is now becoming a hotbed for artificial intelligence (AI), a set of technologies that are, in turn, driving many of the most significant advancements in robotics. Consequently, Tractica forecasts that the global robotics market will grow rapidly between 2016 and 2022, with revenue from unit sales of industrial and non-industrial robots rising from $31 billion in 2016 to $237.3 billion by 2022. The market intelligence firm anticipates that most of this growth will be driven by non-industrial robots.
Tractica is headquartered in Boulder and analyzes global market trends and applications for robotics and related automation technologies within consumer, enterprise, and industrial marketplaces and related industries.
General Research Reports
- Global autonomous mobile robots market, June 2017, 95 pages, TechNavio, $2,500
TechNavio forecasts that the global autonomous mobile robots market will grow at a CAGR of more than 14% through 2021.
- Global underwater exploration robots, June 2017, 70 pages, TechNavio, $3,500
TechNavio forecasts that the global underwater exploration robots market will grow at a CAGR of 13.92% during the period 2017-2021.
- Household vacuum cleaners market, March 2017, 134 pages, Global Market Insights, $4,500
Global Market Insights forecasts that the household vacuum cleaners market will surpass $17.5 billion by 2024, with global shipments estimated to exceed 130 million units by 2024, albeit at a low 3.0% CAGR. Robotic vacuums show a slightly higher CAGR.
- Global unmanned surface vehicle market, June 2017, Value Market Research, $3,950
Value Market Research analyzed drivers (security and mapping) versus restraints such as AUVs and ROVs and made its forecasts for the period 2017-2023.
- Innovations in Robotics, Sensor Platforms, Block Chain, and Artificial Intelligence for Homeland Security, May 2017, Frost & Sullivan, $6,950
This Frost & Sullivan report covers recent developments such as co-bots for surveillance applications, airborne sensor platforms for border security, blockchain tech, AI as a first responder, and tech for detecting nuclear threats.
- Top technologies in advanced manufacturing and automation, April 2017, Frost & Sullivan, $4,950
This Frost & Sullivan report focuses on exoskeletons, metal and nano 3D printing, co-bots and agile robots – all of which are in the top 10 technologies covered.
- Mobile robotics market, December 2016, 110 pages, Zion Market Research, $4,199
Zion Market Research forecasts that the global mobile robotics market will reach $18.8 billion by the end of 2021, growing at a CAGR of slightly above 13.0% between 2017 and 2021.
- Unmanned surface vehicle (USV) market, May 2017, MarketsandMarkets, $5,650
MarketsandMarkets forecasts the unmanned surface vehicle (USV) market to grow from $470.1 million in 2017 to $938.5 million by 2022, at a CAGR of 14.83%.
- Military/Civil UAS markets, May 2017, 608 pages, Teal Group
The Teal Group’s 2016 world military market study estimates that worldwide UAV production will soar from $2.8 billion annually in 2016 to $9.4 billion in 2025, a 15.4% CAGR, and that civil UAS production will soar from $2.6 billion worldwide in 2016 to $10.9 billion in 2025, also a 15.4% CAGR.
Agricultural Research Reports
- Global agricultural robots market, May 2017, 70 pages, TechNavio, $2,500
TechNavio forecasts that the global agricultural robots market will grow steadily at a CAGR of close to 18% through 2021.
- Agriculture robots market, June 2017, TMR Research, $3,716
Robots are poised to replace agricultural hands. They can pluck fruits, sow and reap crops, and milk cows, carrying out these tasks much faster and with a great degree of accuracy. This, coupled with mandates for higher minimum pay being levied in most countries, has spelled good news for the global agriculture robots market.
- Agricultural Robots, December 2016, 225 pages, Tractica, $4,200
Tractica forecasts that shipments of agricultural robots will increase from 32,000 units in 2016 to 594,000 units annually in 2024, and that the market will reach $74.1 billion in annual revenue by 2024. The report, done in conjunction with The Robot Report, profiles over 165 companies involved in developing robotics for the industry.
Bottom Line
The disparity between the projections of these research reports is wide, but the CAGRs are almost all double-digit. It is easy to conclude, as BCG did, that the robotics industry is growing faster than expected.
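To compare these projections on a common footing, a compound annual growth rate can be recomputed from any pair of endpoint figures. Below is a minimal Python sketch of the standard formula, applied to Tractica's 2016 and 2022 revenue figures quoted above; the firms' own baselines and methodologies may differ, so the result will not necessarily match a report's published CAGR.

```python
def cagr(start_value, end_value, years):
    """Compound annual growth rate implied by two endpoint values."""
    return (end_value / start_value) ** (1.0 / years) - 1.0

# Example: Tractica's global robotics revenue projection, $31B (2016) to $237.3B (2022).
print(f"Implied CAGR: {cagr(31.0, 237.3, 6) * 100:.1f}% per year")  # roughly 40% per year
```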
Robohub Podcast #244: Robot Pediatric Coach, with Ayanna Howard
In this episode, Audrow Nash interviews Ayanna Howard, Professor at the Georgia Institute of Technology, about her work to help children with the movement disorder cerebral palsy. Howard discusses how robots and tablets can be used to “gamify” pediatric therapy. The idea is that if therapy is fun and engaging, children are more likely to do it, and thus more likely to see its long-term benefits. Howard discusses how therapy is “gamified,” how a small humanoid robot is used to coach children, and how they work with pediatricians.
Ayanna Howard
Ayanna Howard, Ph.D. is Professor and Linda J. and Mark C. Smith Endowed Chair in Bioengineering in the School of Electrical and Computer Engineering at the Georgia Institute of Technology. She also holds the position of Associate Chair for Faculty Development in ECE. She received her B.S. in Engineering from Brown University, her M.S.E.E. from the University of Southern California, and her Ph.D. in Electrical Engineering from the University of Southern California.
Her area of research is centered around the concept of humanized intelligence, the process of embedding human cognitive capability into the control path of autonomous systems. This work, which addresses issues of autonomous control as well as aspects of interaction with humans and the surrounding environment, has resulted in over 200 peer-reviewed publications across a number of projects – from scientific rover navigation in glacier environments to assistive robots for the home. To date, her accomplishments have been highlighted through a number of awards and articles, including features in USA Today, Upscale, and TIME Magazine, as well as being named an MIT Technology Review top young innovator and recognized as one of the 23 most powerful women engineers in the world by Business Insider.
In 2013, she also founded Zyrobotics, which is currently licensing technology derived from her research and has released their first suite of therapy and educational products for children with differing needs. From 1993-2005, Dr. Howard was at NASA’s Jet Propulsion Laboratory, California Institute of Technology. She has also served a term as the Associate Director of Research for the Georgia Tech Institute for Robotics and Intelligent Machines and a term as Chair of the multidisciplinary Robotics Ph.D. program at Georgia Tech.
Energy, enthusiasm and spirit of cooperation: Award winners of ERL Emergency Robots 2017 announced
The European Robotics League (ERL) announced the winners of the ERL Emergency Robots 2017 major tournament during the awards ceremony held on Saturday, 23 September, at Giardini Pro Patria in Piombino, Italy.
The ERL Emergency Robots 2017 competition consisted of four scenarios, inspired by the Fukushima nuclear accident (Japan, 2011) and designed specifically for multi-domain human-robot teams. The first scenario, the Grand Challenge, combines three domains – sea, air and land – while the other three scenarios each combine two domains.
The awards, given for each scenario to the best-performing teams, were introduced by Alan Winfield of Bristol Robotics Laboratory, the ERL Emergency Coordinator. “The energy, enthusiasm and spirit of cooperation among the teams competing in ERL Emergency was amazing. We witnessed not only great performances from the teams and their robots, but also the drama and excitement of last-minute field repairs and workarounds to the robots,” said Alan Winfield.
The Grand Challenge (Scenario 1: land, sea, and air)
After a nuclear power plant has been struck by a potent earthquake and a tsunami, it is time for the emergency response team to act. Due to high radiation levels, cooperation between land, sea and air robots is essential. The robots have to find three missing workers as soon as possible and deploy an emergency kit next to each of them. They must then check for structural damage to the building and to the pipes connecting the reactor to the sea for cooling purposes. In case of damaged or leaking pipes, the corresponding valves are to be closed, both in the machine room and underwater, to avoid radioactive contamination. Closing the wrong valves may reduce the amount of seawater available for cooling down the reactor.
1st Prize: Telerob, Germany (land) + Universitat de Girona, Spain (sea) + ISEP/INESC TEC Aerial Robotics, Portugal (air)
“Our underwater robot Sparus II AUV was used to create maps of the underwater environment and to autonomously detect some targets. The algorithms developed by master’s and PhD students and the robustness of the platform allowed us to obtain good results even in the challenging conditions of the competition. The multi-domain competition required coordination with the other robots (land and air), which offered us a unique opportunity for testing our communication capabilities,” said Marc Carreras from the University of Girona.
“Autonomy was showing its advantages as well as a good situational awareness. The advanced mission documentation, which was requested in the competition, enables the first responders to get a fast and reliable situational understanding to finally reach the necessary situational ownership. The ERL Emergency competition should further focus on improving the robot-human-teaming”, said Andreas Ciossek from Telerob.
“The INESC TEC participation in ERL Emergency 2017 allowed us to validate our robotics technology in a real-world scenario with a relevant social and economical impact. Furthermore, it helped raising public awareness of the role that advanced robots can play in disaster scenarios aiding human teams in critical operations, and it confirmed once again the leading role of European robotics research”, said Eduardo Silva from ISEP/INESC TEC.
Read more about their experience here.
2nd Prize: IIS Piombino CVP, Italy (air) + Robdos, Spain + IMM, Poland (land and sea)
3rd Prize: Raptors, Poland (air & land) + Oubot, Hungary (sea)
Survey the building and search for missing workers (Scenario 2: land and air)
The ground and aerial robots perform a reconnaissance mission and create a map of the surrounding area in order to increase the awareness of the emergency response team. Additionally, the robots find two missing workers outdoors and deploy first-aid kits near them.
1st Prize: IMM, Poland + IIS Piombino CVP, Italy
2nd Prize: Raptors Team, Poland
3rd Prize: Telerob, Germany + ISEP/INESC TEC Aerial Robotics, Portugal
Pipe inspection and search for missing workers (Scenario 3: sea and air)
After the earthquake and tsunami, the pipes connecting the reactor to the sea might be leaking radioactive substances, therefore the emergency team has to find the damaged ones on land or underwater. Robots must find two missing workers: one outside the building, to whom an emergency kit should be deployed, and another one dragged by the tsunami to the sea, expected to be a casualty.
1st Prize: Universitat de Girona, Spain + ISEP/INESC TEC Aerial Robotics, Portugal
2nd Prize: Tuscany Robotics Team, Italy
3rd Prize: AUV Tomkyle, Germany + HSR Search and Rescue Team, Switzerland
“Aerial robots have shown great improvements with respect to euRathlon 2015, most of them being fully operational from the first day of the competition. They have been able to quickly provide information about inaccessible areas, structural damages or other possible threats. As it happened with robots from the other two domains, aerial teams also struggled with communication issues to properly command and control their platforms. This further confirms the need for autonomous capabilities onboard the aerial robots, to become even more powerful tools for emergency response teams”, said Francisco Javier Pérez Grau from the Advanced Center for Aerospace Technologies.
Stem the leak (Scenario 4: land and sea)
The land robots have to inspect the pipes in the building’s machine room and marine robots the underwater pipes in order to close the correct valves and prevent leakage. Land and marine robots must cooperate to identify the valves and synchronize the process of closing them, by communicating directly or via their operators.
1st Prize: Telerob, Germany + Universitat de Girona, Spain
2nd Prize: Raptors, Poland + Oubot, Hungary
3rd Prize: bebot, Switzerland + AUV Tomkyle Team, Germany
Winning teams were given a diploma, prize money and in-kind prizes, sponsored by Platinum Sponsor IEEE Oceanic Engineering Society and silver sponsors SBG Systems and Texas Instruments.
In addition, Marta Palau Franco, Bristol Robotics Laboratory, ERL Emergency project manager introduced the referees’ special awards. Find out who the winners are here!
“This great event has been possible thanks to the work and effort of an amazing local organising team. Special thanks to Fausto Ferreira, the ERL Emergency 2017 Deputy Director, for his continuous support. I want to thank all the sponsors, especially our platinum sponsor IEEE OES, the project partners, referees, local associations and schools for their support. Huge thanks go to the participating teams, which were the heart of this great event. Their competitiveness pushed the robots to accomplish great results, nevertheless the competition has always been accompanied by fair play. I believe this is the perfect formula for team members to improve their professional and human skills”, said Gabriele Ferri, ERL Emergency 2017 Director.
Watch the ERL Emergency 2017 Awards Ceremony video
More info
The European Robotics League is funded by the European Union’s Horizon 2020 research and innovation programme under grant agreement n° 688441.
The ERL Emergency Robots 2017 competition was organised locally by the NATO STO-Centre for Maritime Research and Experimentation (CMRE) of La Spezia, Italy.
The European Robotics League is part of the SPARC public-private partnership set up by the European Commission and euRobotics to extend Europe’s leadership in civilian robotics. SPARC’s €700 million of funding from the Commission in 2014-2020 is being combined with €1.4 billion of funding from European industry. www.eu-robotics.net/sparc
euRobotics is a European Commission-funded non-profit organisation that promotes robotics research and innovation for the benefit of Europe’s economy and society. It is based in Brussels and has more than 250 member organisations.
The referees’ special awards ERL Emergency Robots 2017
In addition to the Competition Awards announced at the ERL Emergency Robots 2017 awards ceremony, held on Saturday, 23 September at Giardini Pro Patria in Piombino, Italy, Marta Palau Franco of Bristol Robotics Laboratory, the ERL Emergency project manager, introduced the referees’ special awards.
“Behind a multi-domain competition there is always a large technical committee, I feel privileged to have worked with such an amazing team of volunteer referees, technical assistants and safety pilots and divers. We were delighted to give these awards to recognise teams’ effort, fair play and hard work. The experience of participating in this robotics competition will prove beneficial for team members to develop further their professional career”, said Marta Palau Franco.
Mapping Award, handed by Vladimir Djapic from AFAK, for good quality georeferenced undersea mapping.
Winner: AUV Team Tomkyle, Germany (sea)
Navigation Award, handed by Pino Casalino from the University of Genova, for the effort to change and adapt algorithms to navigate without a Doppler Velocity Log (DVL) sensor, which is important for AUV (Autonomous Underwater Vehicle) navigation.
Winner: Oubot Team, Hungary (sea)
Fair Play Award, handed by Marta Palau Franco from the University of the West of England, Bristol, for lending the Tuscany Robotics Team a wheeled platform for their new robot and for lending their aerial robot’s batteries to the ISEP/INESC TEC aerial team.
Winners: ENSTA Team and ENSTA Bretagne, France (land, air, sea)
Creativity Award, handed by Bernd Bruggermann from Fraunhofer FKIE, for building a land robot from scratch in less than two days when their ground platform broke.
Winner: Tuscany Robotics, Italy (land, air, sea)
Multi-domain Cooperation Award, handed by Fausto Ferreira from CMRE, for cooperation between domains. The teams used a graphical interface in which each robot from sea and air domain reported its findings in the competition arena in real-time.
Winner: Universitat de Girona, Spain (sea) + ISEP/INESC TEC, Portugal (air)
Perseverance Award, handed by Francisco Javier Perez Grau from FADA-CATEC, for hard work on the development and integration of the aerial robot. The team competed in the Grand Challenge the day after their aerial platform suffered a severe crash, working overnight to fix it.
Winner: HSR Search and Rescue Team, Switzerland (air)
Piloting Award, handed by Stjepan Bogdan from the University of Zagreb-FER, for outstanding UAV piloting skills. The team was able to recover the aerial robot after an unintentional landing without manual intervention.
Winner: Raptors, Poland (land & air)
Autonomy Award, handed by Frank Schneider from Fraunhofer FKIE, for the best autonomy of land robots. Outstanding autonomous navigation and automatic object detection.
Winner: IMM, Poland (land)
SAUC-E Student Award – handed by Bill Kirkwood (IEEE OES), Kelly Cooper (ONR) and Hitesh Patel (AUVSI) to the best student marine team.
Winner: AUV Team Tomkyle, Germany (sea)
Teams were given a diploma and a set of eZ430-Chronos development tools sponsored by Texas Instruments.
Watch the ERL Emergency 2017 Awards Ceremony video
More info
The European Robotics League is funded by the European Union’s Horizon 2020 research and innovation programme under grant agreement n° 688441.
The ERL Emergency Robots 2017 competition was organised locally by the NATO STO-Centre for Maritime Research and Experimentation (CMRE) of La Spezia, Italy.
A language to design control algorithms for robot swarms
Designing and representing control algorithms is challenging in swarm robotics, where the collective swarm performance depends on interactions between robots and with their environment. The currently available modeling languages, such as UML, cannot fully express these interactions. The Behaviour-Data Relations Modeling Language (BDRML) explicitly represents robot behaviours and data that robots utilise, as well as relationships between them. This allows BDRML to express control algorithms where robots cooperate and share information with each other while interacting with the environment. Here’s the work I presented this week at #IROS2017.
BDRML primitives
Primitives are the basic blocks of BDRML. They include:
- Behaviour: A set of processes that deal with a particular situation a robot finds itself in, for example “Scout”
- Internal data structure: Information that is stored in a robot’s memory
- External data structure: Information that is stored in a non-robot entity, i.e., in the robot’s environment
BDRML relations
The following relations between entities can exist:
- Transition: The robot transitions from one behavioural mode to another
- Read / Write: Internal data is used / stored by the robot engaged in particular behaviour
- Receive / Send: External data is used / stored by the robot. In the case of the Send relation, a robot may also send the data to another robot that stores it in its own internal data structure
- Copy: Information is copied from one data structure to another
- Update: The value of a data structure is updated from that in the previous time step by a subroutine not visualised in the BDRML diagram (for example, a pheromone level may spontaneously decrease over time).
The write and send relations can optionally define the new data structure value or a function that updates the value, indicated by a dashed line extending from the end of the relation arrow in a visual description, and written before a colon preceding the data structure name in a textual description. The update relation must always specify the new value or the value-update function.
BDRML relation conditions
Each relation or operation occurs under a specific set of conditions. A condition is visually represented as an annotated triangle at the beginning of a relation or operation arrow. In a textual representation, a condition set follows a relation signature and is separated from it by a colon. Unless otherwise specified, the “or” logical operator is used when multiple conditions affect a single relation.
Example
A full BDRML representation consists of both visual and textual specification. A set of behaviours, B, internal data structures, Di and external data structures, De, are first defined, followed by a list of relations between them. Each box, circle and arrow in the visual representation must have a corresponding element or line in the textual representation and vice versa.
An example is shown in the picture. The described algorithm allows robots to search for worksites and recruit each other to perform work, and it can be applied to decentralised task allocation. A robot performs the “Scout” behaviour by searching the environment for worksites, which can be found with a probability p(F). A successful Scout that finds a worksite performs the “Work” behaviour, during which it reads from and writes into its internal data structure, “Worksite location”, to keep track of where the worksite is located. Additionally, a working robot sends Worksite location to any Scout that it encounters in order to recruit it.
Note how the condition that allows a robot to transition from the Scout to the Work behaviour can be triggered either by p(F) or by recruitment, i.e., by the existence of the internal data structure in the Scout’s memory. Also note that the condition of recruitment, “scout encountered”, signifies that the two robots have to be at a similar place at a similar time for recruitment to occur. The BDRML diagram fully and unambiguously describes when recruitment is performed, what information is exchanged between robots and how it affects robot behaviour.
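As a purely illustrative aside (this is not BDRML's own visual or textual notation), the minimal Python sketch below encodes the example's behaviours, internal data structure, relations and conditions as plain data. The container class and field names are hypothetical; the behaviour, data structure and condition names are taken from the example above.

```python
from dataclasses import dataclass, field

@dataclass
class Relation:
    kind: str                     # "transition", "read", "write" or "send"
    source: str                   # behaviour or data structure the relation starts from
    target: str                   # behaviour or data structure it points to
    conditions: list = field(default_factory=list)  # OR-ed by default, as in BDRML

# Primitives from the example
behaviours = ["Scout", "Work"]
internal_data = ["Worksite location"]
external_data = []                # the example uses no external data structures

# Relations from the example
relations = [
    # A Scout starts working when it finds a worksite or has been recruited.
    Relation("transition", "Scout", "Work",
             conditions=["p(F)", "Worksite location in memory"]),
    # A working robot keeps track of where the worksite is.
    Relation("read", "Worksite location", "Work"),
    Relation("write", "Work", "Worksite location"),
    # A working robot recruits any Scout it encounters.
    Relation("send", "Work", "Worksite location", conditions=["scout encountered"]),
]

for r in relations:
    print(r)
```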
Publication:
Pitonakova, L., Crowder, R. & Bullock, S. (in press). Behaviour-Data Relations Modelling Language For Multi-Robot Control Algorithms. Proceedings of the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2017), IEEE.
“Superhero” robot wears different outfits for different tasks
From butterflies that sprout wings to hermit crabs that switch their shells, many animals must adapt their exterior features in order to survive. While humans don’t undergo that kind of metamorphosis, we often try to create functional objects that are similarly adaptive — including our robots.
Despite what you might have seen in “Transformers” movies, though, today’s robots are still pretty inflexible. Each of their parts usually has a fixed structure and a single defined purpose, making it difficult for them to perform a wide variety of actions.
Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) are aiming to change that with a new shape-shifting robot that’s something of a superhero: It can transform itself with different “outfits” that allow it to perform different tasks.
Dubbed “Primer,” the cube-shaped robot can be controlled via magnets to make it walk, roll, sail, and glide. It carries out these actions by wearing different exoskeletons, which start out as sheets of plastic that fold into specific shapes when heated. After Primer finishes its task, it can shed its “skin” by immersing itself in water, which dissolves the exoskeleton.
“If we want robots to help us do things, it’s not very efficient to have a different one for each task,” says Daniela Rus, CSAIL director and principal investigator on the project. “With this metamorphosis-inspired approach, we can extend the capabilities of a single robot by giving it different ‘accessories’ to use in different situations.”
Primer’s various forms have a range of advantages. For example, “Wheel-bot” has wheels that allow it to move twice as fast as “Walk-bot.” “Boat-bot” can float on water and carry nearly twice its weight. “Glider-bot” can soar across longer distances, which could be useful for deploying robots or switching environments.
Primer can even wear multiple outfits at once, like a Russian nesting doll. It can add one exoskeleton to become “Walk-bot,” and then interface with another, larger exoskeleton that allows it to carry objects and move two body lengths per second. To deploy the second exoskeleton, “Walk-bot” steps onto the sheet, which then blankets the bot with its four self-folding arms.
“Imagine future applications for space exploration, where you could send a single robot with a stack of exoskeletons to Mars,” says postdoc Shuguang Li, one of the co-authors of the study. “The robot could then perform different tasks by wearing different ‘outfits.’”
The project was led by Rus and Shuhei Miyashita, a former CSAIL postdoc who is now director of the Microrobotics Group at the University of York. Their co-authors include Li and graduate student Steven Guitron. An article about the work appears in the journal Science Robotics on Sept. 27.
Robot metamorphosis
Primer builds on several previous projects from Rus’ team, including magnetic blocks that can assemble themselves into different shapes and centimeter-long microrobots that can be precisely customized from sheets of plastic.
While robots that can change their form or function have been developed at larger sizes, it’s generally been difficult to build such structures at much smaller scales.
“This work represents an advance over the authors’ previous work in that they have now demonstrated a scheme that allows for the creation of five different functionalities,” says Eric Diller, a microrobotics expert and assistant professor of mechanical engineering at the University of Toronto, who was not involved in the paper. “Previous work at most shifted between only two functionalities, such as ‘open’ or ‘closed’ shapes.”
The team outlines many potential applications for robots that can perform multiple actions with just a quick costume change. For example, say some equipment needs to be moved across a stream. A single robot with multiple exoskeletons could potentially sail across the stream and then carry objects on the other side.
“Our approach shows that origami-inspired manufacturing allows us to have robotic components that are versatile, accessible, and reusable,” says Rus, the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science at MIT.
Designed in a matter of hours, the exoskeletons fold into shape after being heated for just a few seconds, suggesting a new approach to rapid fabrication of robots.
“I could envision devices like these being used in ‘microfactories’ where prefabricated parts and tools would enable a single microrobot to do many complex tasks on demand,” Diller says.
As a next step, the team plans to explore giving the robots an even wider range of capabilities, from driving through water and burrowing in sand to camouflaging their color. Guitron pictures a future robotics community that shares open-source designs for parts much the way 3-D-printing enthusiasts trade ideas on sites such as Thingiverse.
“I can imagine one day being able to customize robots with different arms and appendages,” says Rus. “Why update a whole robot when you can just update one part of it?”
This project was supported, in part, by the National Science Foundation.
#ERLEmergency2017 in tweets
The ERL Emergency Robots 2017 (#ERLemergency2017) major tournament in Piombino, Italy, gathered 130 participants from 16 universities and companies from 8 European countries. Participating teams designed robots able to bring the first relief to survivors in disaster-response scenarios. The #ERLemergency2017 scenarios were inspired by the Fukushima 2011 nuclear accident. The robotics competition took place from 15-23 September 2017 at Enel’s Torre del Sale, and saw sea, land and air robots collaborating.
Welcome to #ERLemergency village pic.twitter.com/wQlIHDfhJg
— Alexandre Chapoutot (@YATheChap) 15 September 2017
Robots from all over the world are gathering at #ERLemergency #Robots. In Piombino from today until 23/09 @ERLrobotleague https://t.co/4h1btIlln1 pic.twitter.com/7L0Lo4wTjz
— Enel Group (@EnelGroupIT) 15 September 2017
Teams worked very hard during the practice and competition days:
First tests completed by #ENSTATeam in the @ERLrobotleague pic.twitter.com/rcWQdRHAfw
— Servane Courtaux (@servane_crtx) 16 September 2017
Tuscany Robotics Team pic.twitter.com/BFo2mnb7B7
— Benedetto Allotta (@BenAllotta) 22 September 2017
Some field repairs for team Raptors @ERLrobotleague #ERLemergency while teams Oubot and Raptors continue air and sea operations. pic.twitter.com/LJ6Zphsu6Y
— Alan Winfield (@alan_winfield) 23 September 2017
Great to see many more women roboticists in #ERLemergency2017 teams than #eurathlon2015. See e.g. team #Piombino https://t.co/FsnC3e5ijN
— Alan Winfield (@alan_winfield) 25 September 2017
Land + Air day out here at #ERLemergency! @ERLrobotleague pic.twitter.com/DzZlcmnNWf
— AUVSI Foundation (@AUVSIFoundation) 20 September 2017
Robots could be found also in the exhibition area:
SWARMs at ERL 2017! #erlemergency #Piombino @SantAnnaPisa https://t.co/ly8ciQQgh6 pic.twitter.com/anQ9Ck7knI
— SWARMs Project (@SWARMS_Project) 21 September 2017
Or enjoying the sea:
ETH Robot ANYmal likes the sea in Italy. pic.twitter.com/UsU3hUlKTD
— Roland Siegwart (@rsiegwart) 15 September 2017
Robotics experts held presentations and demos for the general public during the Opening Ceremony in Piazza Bovio.
The Opening ceremony of #Erlemergency 2017 @ERLrobotleague has started in Piombino #robots pic.twitter.com/SGqVLGL2Xn
— euRobotics (@eu_Robotics) 17 September 2017
.@mady_delvaux: European #robotics research is performing well, robots will impact human life, #erlemergency2017 is a positive example pic.twitter.com/gciEjcS0XW
— SPARC robotics (@SPARCrobotics) 17 September 2017
The public got to know the teams,
Let’s support the #erlemergency2017 teams! pic.twitter.com/5XbsegCATB
— SPARC robotics (@SPARCrobotics) 17 September 2017
And to see some emergency robots in action during the demo of the TRADR project:
Robots will complement people, not take their jobs, says TRADR project #erlemergency2017 https://t.co/K3KOP8umVN
— SPARC robotics (@SPARCrobotics) 17 September 2017
The competition site also welcomed some notable visitors:
The mayor of Piombino @massimogiu at #ERLemergency 2017 at Torre del Sale: “A positive event for a city that wants to look to the future” pic.twitter.com/xMOW9zs1Hw
— Comune di Piombino (@ComunePiombino) 22 September 2017
We are delighted to have Bill Kirkwood from our Platinum Sponsor @ieeeoes at #ERLemergency2017. Many thanks for sponsoring us! pic.twitter.com/27zsP66Xsz
— ERL robotics league (@ERLrobotleague) 23 September 2017
Since they are the new generation of roboticists, children were not forgotten either: they enjoyed the free classes given by Scuola di Robotica.
Enjoy the #robotics class for children in #Piombino #ERLemergency2017 @EmanueleMicheli pic.twitter.com/1nEB4LDcEZ
— euRobotics (@eu_Robotics) 17 September 2017
At the Piombino Castle, the public attended more #robotics presentations.
Anne Bajart @RoboticsEU presented EU funded #robotics projects @CLsPlusPlus #erlemergency2017 #Piombino pic.twitter.com/RnTbCeIIl9
— ERL robotics league (@ERLrobotleague) 18 September 2017
After days of hard work, passion and enjoyment, the winners of the Grand Challenge were announced:
The winners of #ERLemergency2017 Grand Challenge are Telerob+Universitat de Girona + ISEP/INESC TEC. Congratulations! pic.twitter.com/Y89BvQhJ5x
— ERL robotics league (@ERLrobotleague) 23 September 2017
Drones can almost see in the dark
We want to share our recent breakthrough: teaching drones to fly using an eye-inspired event camera, which opens the door to fast, agile maneuvers and to flying in low-light environments where current commercial drones fail. Possible applications include supporting rescue teams on search missions at dusk or dawn. We have submitted this work to IEEE Robotics and Automation Letters.
How it works
Event cameras are bio-inspired vision sensors that output pixel-level brightness changes instead of standard intensity frames. They do not suffer from motion blur and have a very high dynamic range, which lets them provide reliable visual information during high-speed motion or in scenes with high dynamic range. However, event cameras output very little information when there is limited motion, for example when the camera is nearly still. Conversely, standard cameras provide instant and rich information about the environment most of the time (at low speed and in good lighting), but they fail severely during fast motion or in difficult lighting, such as high-dynamic-range or low-light scenes.

In this work, we present the first state estimation pipeline that leverages the complementary advantages of these two sensors by fusing events, standard frames, and inertial measurements in a tightly-coupled manner. We show that our hybrid pipeline improves accuracy by 130% over event-only pipelines and by 85% over standard-frames-only visual-inertial systems, while remaining computationally tractable.

Furthermore, we use our pipeline to demonstrate, to the best of our knowledge, the first autonomous quadrotor flight using an event camera for state estimation, unlocking flight scenarios that were not reachable with traditional visual-inertial odometry, such as low-light environments and high-dynamic-range scenes. We show that we can even fly in low light (for example, after completely switching off the lights in a room) or in scenes with a very high dynamic range (one side of the room brightly lit and the other side dark).
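To make the data format concrete, here is a minimal, self-contained Python sketch of what event-camera output looks like and how events can be accumulated into an "event frame" over a short time window. This is only a toy illustration of the sensor's output, not the fusion pipeline described in the paper; the event tuple layout, sensor size and window length are assumptions made for the example.

```python
# Toy illustration (not the authors' pipeline): accumulating event-camera
# output into an "event frame". Each event is (x, y, t, polarity), where
# polarity is +1/-1 for a brightness increase/decrease at pixel (x, y).
import numpy as np

def accumulate_events(events, height, width, t_start, t_end):
    """Sum event polarities per pixel over the time window [t_start, t_end)."""
    frame = np.zeros((height, width), dtype=np.float32)
    for x, y, t, polarity in events:
        if t_start <= t < t_end:
            frame[y, x] += polarity
    return frame

# Example: a few synthetic events on a 4x4 sensor.
events = [(1, 2, 0.001, +1), (1, 2, 0.002, +1), (3, 0, 0.004, -1)]
print(accumulate_events(events, height=4, width=4, t_start=0.0, t_end=0.005))
```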
Paper:
T. Rosinol Vidal, H. Rebecq, T. Horstschaefer, D. Scaramuzza
Hybrid, Frame and Event based Visual Inertial Odometry for Robust, Autonomous Navigation of Quadrotors, submitted to IEEE Robotics and Automation Letters (PDF)
#IROS2017 Live Coverage
The 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2017) is being held in Vancouver, Canada, this week. The theme of IROS 2017 is “Friendly People, Friendly Robots”. Robots and humans are becoming increasingly integrated in various application domains. We work together in factories, hospitals and households, and share the road. This collaborative partnership of humans and robots gives rise to new technological challenges and significant research opportunities in developing friendly robots that can work effectively with, for, and around people.
And it’s also IROS’s 30th birthday this year! The occasion for much celebration.
Many Robohubbers will be at IROS; watch out for Sabine, Audrow, Andra, Hallie, and AJung. We’re also looking for new members to join our community. If you’re interested, email sabine.hauert@robohub.org, and we’ll make sure to meet during the conference!
Udacity Robotics video series: Interview with Chris Anderson from 3D Robotics
Mike Salem from Udacity’s Robotics Nanodegree is hosting a series of interviews with professional roboticists as part of their free online material.
This week we’re featuring Mike’s interview with Chris Anderson, Co-Founder and CEO of 3D Robotics. Chris is a former Wired magazine editor turned robotics company co-founder and CEO. Learn about Chris’s amazing journey into the world of unmanned aerial vehicles.
You can find all the interviews here. We’ll be posting them regularly on Robohub.
Talking Machines: The long view and learning in person, with John Quinn
In episode nine of season three we chat about the difference between models and algorithms, take a listener question about summer schools and learning in person as opposed to learning digitally, and talk with John Quinn of the United Nations Global Pulse lab in Kampala, Uganda, and Makerere University’s Artificial Intelligence Research group.
If you enjoyed this episode, you may also want to listen to:
- Talking Machines: Machine Learning in the Field and Bayesian Baked Goods, with Ernest Mwebaze
- Talking Machines: Data Science Africa, with Dina Machuve
- Talking Machines: The church of Bayes and collecting data, with Katherine Heller
- Talking Machines: Getting a start in ML and applied AI at Facebook, with Joaquin Quiñonero Candela
- Talking Machines: Bias variance dilemma for humans and the arm farm, with Jeff Dean
- Talking Machines: Overfitting and asking ecological questions, with Tom Dietterich
- Talking Machines: Restricted Boltzmann Machines, with Eric Lander
- Talking Machines: Automatic Translation and t-SNE, with Hal Daume
- Talking Machines: Generative art and Hamiltonian Monte Carlo, with Doug Eck
- Talking Machines: Machine learning and the Flint water crisis, with Jake Abernethy
- Talking Machines: Gaussian processes and OpenAI, with Ilya Sutskever
See all the latest robotics news on Robohub, or sign up for our weekly newsletter.
Robotics, the traditional path and new approaches
Robotics, like many other technologies, suffered from inflated expectations, which led to a slowdown in development and results during the 1990s. In recent years, several groups expected that flying robots, commonly known as drones, would overcome these limitations; however, it seems unlikely that the popularity of these flying machines alone will drive the growth of robotics as expected. This article summarizes the traditional techniques used to build and program robots, together with new trends that aim to simplify and accelerate progress in the field.
Building robots
It’s a rather popular belief that building a robot and programming its behavior remain two highly complicated tasks. Recent progress in adopting ROS as a standardized software framework for developing robot applications has helped with the latter; building a robot, however, remains a challenge. The lack of hardware compatibility between systems, the absence of a marketplace of reusable modules, and the expertise required to develop even the most basic behaviors are just a few of the hurdles.
The integration-oriented approach
Robots are typically built by following the step-by-step process described below:
1. Buy parts: We typically decide which components our robot will need: a mobile base, a range finder, a processing device, etc. Once decided, we source the parts that match our requirements and proceed to integration.
2. Integration: Making different components speak to each other and cooperate towards achieving the end goal of the robot. Surprisingly, that’s where most of our time is spent.
3. Build the robot: Assembling all of the components into joints and mechanically linking them. This step might also be executed together with step 2, integration.
4. Programming the robot: Making the robot do what it’s meant to do.
5. Test & adapt: Robots are typically programmed for predictable scenarios, so testing the pre-programmed behavior in real scenarios is always critical. Generally, these tests show where adaptations are needed, which in many cases pushes the process of building a robot back down to step 2, integration.
6. Deploy: Ship it!
It’s well understood that building a robot is a technically challenging task. Engineers often face situations where the effort of integrating the robot, generally composed of diverse sub-components, outweighs most other tasks. Furthermore, every hardware modification or adaptation made while programming or building the robot demands further integration.
This method of building robots produces results that become obsolete within a short period.
Moreover, modules within these robots aren’t reusable in most cases, since the integration effort makes reusability an incredibly expensive (in manpower) and time-consuming task.
The modular approach
The current boom in robotics is producing a significant number of hardware devices. Although there is a growing trend towards using the Robot Operating System (ROS), these devices typically consist of incompatible electronics with differing software interfaces.
Now imagine building robots by connecting interoperable modules: actuators, sensors, communication modules, UI devices. Provided everything interoperates, the whole integration effort could be eliminated; the overall process of building robots would be simplified, and development effort and time would be reduced significantly.
Modular components could be reused among robots, and that’s exactly what we’re working on with H-ROS, the Hardware Robot Operating System.
H-ROS is a vendor-agnostic infrastructure for creating robot modules that interoperate and can be exchanged between robots. H-ROS builds on top of ROS, the Robot Operating System, and defines a set of standardized logical interfaces that each physical robot component must meet to be H-ROS compliant.
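Since the H-ROS interface definitions themselves are not spelled out here, the following is only a hedged, generic sketch of the kind of standardized contract such a module might expose, written as an ordinary ROS 1 node in Python. The topic name, publishing rate and placeholder reading are illustrative assumptions, not an H-ROS specification.

```python
# Hedged sketch: a generic ROS 1 node showing the kind of standardized
# topic/message contract a plug-and-play range-finder module might expose.
import rospy
from sensor_msgs.msg import Range

def run_range_module():
    rospy.init_node("range_module")
    pub = rospy.Publisher("module/range", Range, queue_size=10)
    rate = rospy.Rate(10)  # assumed 10 Hz publishing rate
    while not rospy.is_shutdown():
        msg = Range()
        msg.header.stamp = rospy.Time.now()
        msg.radiation_type = Range.ULTRASOUND
        msg.min_range = 0.02   # metres
        msg.max_range = 4.0    # metres
        msg.range = 1.0        # placeholder reading; a real driver fills this in
        pub.publish(msg)
        rate.sleep()

if __name__ == "__main__":
    run_range_module()
```

The point of the sketch is the contract, not the driver: any other range-finder module publishing the same message type on the same topic could be swapped in without touching the rest of the robot’s software.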
Programming robots
The robotics control pipeline
Traditionally, the process of programming a robot for a given task T follows the pipeline below (a minimal code sketch follows the list):
1. Observation: The robot’s sensors produce measurements. These measurements are called “observations” and are the inputs the robot receives in order to execute task T.
2. State estimation: Given the observations from step 1, we describe the robot’s motion over time by inferring characteristics such as its position, orientation and velocity. Naturally, errors in the observations lead to errors in the state estimate.
3. Modeling & prediction: Determine the dynamics of the robot (the rules for how it moves) using a) the robot model (typically the robot’s URDF in the ROS world) and b) the state estimate. As with the previous step, errors in the state estimate propagate into the results of this step.
4. Planning: This step determines the actions required to execute task T, using both the state estimate and the dynamical model from the previous steps in the pipeline.
5. Low-level control: The final step in the pipeline transforms the plan into low-level control commands that steer the robot’s actuators.
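As a rough illustration of how these five stages chain together, here is a minimal Python skeleton of the pipeline. Every function body is a placeholder, and the sensor, actuator and robot_model objects are hypothetical interfaces assumed for the example, so the sketch only shows the data flow, not a working controller.

```python
# Minimal skeleton of the traditional control pipeline described above.
# All bodies are illustrative placeholders, not a real controller.

def get_observations(sensors):
    """1. Observation: read raw sensor measurements."""
    return {name: sensor.read() for name, sensor in sensors.items()}

def estimate_state(observations, previous_state):
    """2. State estimation: infer position, orientation, velocity."""
    # A Kalman or particle filter would go here.
    return previous_state  # placeholder

def predict(state, robot_model):
    """3. Modeling & prediction: apply the robot's dynamics model."""
    return robot_model.forward_dynamics(state)  # assumed model interface

def plan(task, state, predicted_state):
    """4. Planning: choose the actions needed to execute task T."""
    return ["move_towards_goal"]  # placeholder plan

def low_level_control(plan_steps, actuators):
    """5. Low-level control: turn the plan into actuator commands."""
    for step in plan_steps:
        for actuator in actuators:
            actuator.command(step)

def control_loop(task, sensors, actuators, robot_model, state):
    """Run the five stages in sequence, repeatedly."""
    while True:
        observations = get_observations(sensors)
        state = estimate_state(observations, state)
        predicted = predict(state, robot_model)
        steps = plan(task, state, predicted)
        low_level_control(steps, actuators)
```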
Bio-inspired techniques
Artificial Intelligence methods and, particularly, bio-inspired techniques such as artificial neural networks (ANNs) are becoming more and more relevant in robotics. Starting around 2009, ANNs gained popularity and began delivering good results in fields such as computer vision (2012) and machine translation (2014). Nowadays, these fields are dominated by techniques that simulate the neural/synaptic activity of the brain of a living organism.
In recent years we have seen these techniques translated to robotics for tasks such as robotic grasping (2016). Our team has been putting resources into exploring techniques that make it possible to train a robotic device end-to-end for a given task, in a manner conceptually similar to the way one trains a domesticated animal such as a dog or cat. This integrated, bio-inspired approach conflicts with the traditional robotics pipeline, but it is already showing promising results, with behaviors that generalize; a hedged sketch of the end-to-end idea follows below.
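In contrast to the pipeline skeleton above, the end-to-end idea can be pictured as a single neural network mapping raw observations directly to motor commands. The sketch below is only illustrative: the network sizes are arbitrary assumptions and the weights are random, whereas in practice they would be learned, for example with reinforcement learning, which is not shown here.

```python
# Hedged sketch of the end-to-end idea: a tiny two-layer neural network
# mapping raw observations directly to motor commands. The weights are
# random here, purely to illustrate the observation -> action mapping.
import numpy as np

rng = np.random.default_rng(0)

obs_dim, hidden_dim, action_dim = 12, 32, 4   # illustrative sizes
W1 = rng.normal(scale=0.1, size=(hidden_dim, obs_dim))
W2 = rng.normal(scale=0.1, size=(action_dim, hidden_dim))

def policy(observation):
    """Map an observation vector straight to actuator commands in [-1, 1]."""
    hidden = np.tanh(W1 @ observation)
    return np.tanh(W2 @ hidden)

observation = rng.normal(size=obs_dim)  # stand-in for real sensor data
print(policy(observation))
```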
We expect to see increasingly active use of these bio-inspired techniques. We are confident that they will drive high-impact innovations in robotics, and we hope to contribute by opening up part of our work and results.
The roboticist matrix
All these new approaches for both building and programming robots bring a dilemma to roboticists. What should they focus on? Which approach should they follow for each particular use case? Let’s analyze the different combinations:
Integration-oriented + robotics control pipeline:
This combination represents the “traditional approach” in every sense. It is the process most industrial robot solutions use today: integrated robots that typically come from a single manufacturer and are programmed in a structured way to execute a well-defined task, usually achieving high levels of accuracy and repeatability. However, any uncertainty in the environment will typically cause the robot to fail at its task. The cost of developing such systems is typically in the range of €10,000–€100,000 for the simplest behaviors and an order of magnitude more for complex tasks.
Integration-oriented + bio-inspired:
Behaviors that evolve, but with strong hardware constraints and limitations: traditional robots enhanced with bio-inspired approaches. Robots using this combination will be able to learn by themselves and adapt to changes in the environment; however, any modification, repurposing or extension of the robot’s hardware will require significant integration effort. The cost of developing these robots is similar to that of the “traditional approach”.
Modular + robotics control pipeline:
Flexible hardware with structured behaviors. These robots will be built in a modular way, so building, repairing and/or repurposing them will be far more affordable than traditional robots built with the integration-oriented approach; we estimate an order of magnitude less (€1,000–€10,000). Furthermore, modularity will open up new opportunities for these robots.
Modular + bio-inspired:
This combination is the most innovative and has the potential to disrupt the whole robotics market, changing both the way we build and the way we program/train robots. Yet it is also the most immature.
As with the previous combination, our team foresees that the cost of putting these robots together can be reduced compared to more traditional approaches. We estimate that building and training these robots should cost between €1,000 and €10,000 for simple scenarios and up to €50,000 for more elaborate ones.
Our path towards the future: modular robots and H-ROS
The team behind Erle Robotics is proud to announce that, together with Acutronic Robotics (Switzerland), Sony (Japan) is now also pushing the development of H-ROS, the Hardware Robot Operating System: a technology that aims to change the landscape of robotics by creating an ecosystem where hardware components can be reused across different robots, regardless of the original manufacturer. Our team strongly believes that the future of robotics will be about modular robots that can easily be repaired and reconfigured, and H-ROS aims to shape that future. Sony’s leadership and vision in robotics are widely recognized in the community, and we are confident that, with Sony’s support, our innovations will spread even more rapidly.
Our team is focused on exploring these new opportunities and will present some results this week in Vancouver during ROSCon. Show your interest and join us in Canada!
The importance of research reproducibility in robotics
As highlighted in a previous post, despite the fact that robotics is increasingly regarded as a ‘Science’, as shown by the launch of new journals such as Science Robotics, reproducibility of experiments is still difficult or entirely lacking.
This is quite unfortunate, as the possibility of reproducing experimental results is a cornerstone of the scientific method. The situation pushes serious discussions (What is ‘soft robotics’? Is it needed? What has to be ‘soft’?) and paradigm clashes (Good Old-Fashioned Artificial Intelligence vs. Deep Learning vs. Embodied Cognition) into the realm of literary controversy or, even worse, religious turf fights, with little experimental evidence supporting the claims of the different parties. Not even wrong, as they say (following Peter Woit’s arguments on String Theory)?
The robotics community has been aware of these issues for a long time and more and more researchers in recent years have published datasets, code and other valuable information to allow others to reproduce their results. We are heading in the right direction, but we probably need to do more.
I think we should therefore welcome the fact that, for the first time ever, IEEE Robotics & Automation Magazine will start accepting R-Articles (i.e., papers that report experiments aiming to be fully reproducible) beginning this September. They will also accept short articles reporting on the replication of R-Article results, and author replies will be solicited and published after peer review. The result is a two-stage, high-quality review process: the first stage is the ordinary rigorous review of a top-tier publishing venue; the second is the replication of the experiments by the community, which is the core of the scientific method.
This seems like a historical improvement, doesn’t it?
There is more information on this in the column I wrote for the September issue of IEEE Robotics and Automation Magazine.