Top 3 Robotic Applications in Primary Food Processing
Primary processing involves handling raw food products, which are cleaned, sorted, chopped, packaged, etc. Some foods, like raw vegetables, will only undergo primary processing before they are packaged for the consumer. Other foods will undergo secondary processing before packaging.
Until quite recently, robotic processing at this stage has been limited or non-existent. Raw foods vary in size, weight and shape, which makes them difficult for robots to handle. However, recent developments in sensing and soft gripping have made it possible for robots to handle many raw foods.
1. Robotic Butchery
Butchery is a very difficult task to automate. Every animal carcass is different. A skilled butcher will adapt each cut to the shape and position of bones and meat. Some butchery tasks are simpler to automate than others. For example, high-volume chicken leg deboning is an established part of the meat processing industry.
Beef butchery has traditionally been very difficult to automate. Recently, beef producer JBS has started looking for ways to introduce robots into its factories. Parts of the process are very dangerous for human workers. Rib cutting, for example, involves operating a high-speed circular saw for several hours. JBS has managed to automate this task using robot manipulators and various vision sensors. The application has improved safety and product consistency.
2. Fruit and Vegetable Pick and Place
Fruits and vegetables are challenging to handle with a robot due to their variable sizes and shapes. They also require delicate handling to avoid damage. For these reasons they have traditionally been handled by human workers. However, recent developments in gripping technologies look to change all that. Soft Robotics Inc has introduced a flexible gripper which can handle very delicate foods, even individual lettuce leaves!
Another example is Lacquey’s gripper, which uses paddles to lift soft fruits and vegetables.
3. Robotic Cutting and Slicing
Some cutting and slicing tasks are easy to automate. For example, even kitchen food processors can slice vegetables into uniform shapes. Robots are not needed for this type of simple automation.
For more advanced cutting and slicing, however, the food industry has traditionally relied on human workers, though robotics is starting to make its way in. Fish cutting, for example, involves detecting and removing defects from the fish as well as cutting fillets to uniform shapes and sizes.
Top 3 Robotic Applications in Secondary Food Processing
Secondary processing involves handling products which have already undergone primary processing. Robots have been used for several applications for a long time, particularly pick and place. However, recent developments have opened the door to even more advanced applications.
1. Product Pick and Place
You may be familiar with the high speed delta robots which are used to move food products around a production line. If not, here is a video:
This is an example of secondary processing pick and place. It is distinct from the vegetable pick and place mentioned above because the products are more uniform in shape and size. Uniform foods are much easier to handle robotically, so this application has been available in the food industry for many years.
2. Cake Decorating
One impressive application is robotic cake decoration. This involves using a robotic arm much like a 3D printer to pipe icing onto a cake. The Deco-Bot from Unifiller can pipe hand-drawn decorations onto cakes on a moving conveyor.
Cake cutting can also be done robotically, like the Katana waterjet cutting robot which can cut out intricate shapes in cakes using high pressure water.
3. Pizza Making
Artisan food producers sometimes worry that adding robots to their process will make their products less “hand-made.” However, Silicon Valley pizza producer Zume is showing how robotic production can still have the human touch. Their pizzeria uses two robots: a delta robot to spread the tomato sauce and an ABB manipulator to tend the pre-baking ovens. While their system is far from fully automated, their goal is to make the pizza delivery industry a labor-free business.
Finally… Washing Up!
Contaminated food causes 48 million people in the USA to become sick annually. Robotic food processing has the potential to reduce this, by removing human workers from parts of the process, but this is only possible if the robots themselves do not cause contamination.
One of the more challenging issues for food automation is the fact that every piece of machinery must be thoroughly cleaned to avoid contamination. Robot manufacturers have been working to make their robot casings smoother, with better ingress ratings and no loose wires. This allows them to be thoroughly washed down at the end of each cycle.
In this system from JMP Automation, the two robots wash down the workcell with high-pressure water, and even wash down each other:
More than 50 million Americans suffer from sleep disorders, and diseases including Parkinson’s and Alzheimer’s can also disrupt sleep. Diagnosing and monitoring these conditions usually requires attaching electrodes and a variety of other sensors to patients, which can further disrupt their sleep.
To make it easier to diagnose and study sleep problems, researchers at MIT and Massachusetts General Hospital have devised a new way to monitor sleep stages without sensors attached to the body. Their device uses an advanced artificial intelligence algorithm to analyze the radio signals around the person and translate those measurements into sleep stages: light, deep, or rapid eye movement (REM).
“Imagine if your Wi-Fi router knows when you are dreaming, and can monitor whether you are having enough deep sleep, which is necessary for memory consolidation,” says Dina Katabi, the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science, who led the study. “Our vision is developing health sensors that will disappear into the background and capture physiological signals and important health metrics, without asking the user to change her behavior in any way.”
Katabi worked on the study with Matt Bianchi, chief of the division of sleep medicine at MGH, and Tommi Jaakkola, the Thomas Siebel Professor of Electrical Engineering and Computer Science and a member of the Institute for Data, Systems, and Society at MIT. Mingmin Zhao, an MIT graduate student, is the paper’s first author, and Shichao Yue, another MIT graduate student, is also a co-author.
The researchers will present their new sensor at the International Conference on Machine Learning on Aug. 9.
Remote sensing
Katabi and members of her group in MIT’s Computer Science and Artificial Intelligence Laboratory have previously developed radio-based sensors that enable them to remotely measure vital signs and behaviors that can be indicators of health. These sensors consist of a wireless device, about the size of a laptop computer, that emits low-power radio frequency (RF) signals. As the radio waves reflect off of the body, any slight movement of the body alters the frequency of the reflected waves. Analyzing those waves can reveal vital signs such as pulse and breathing rate.
“It’s a smart Wi-Fi-like box that sits in the home and analyzes these reflections and discovers all of these changes in the body, through a signature that the body leaves on the RF signal,” Katabi says.
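As a rough illustration of the sensing principle described above (my own sketch, not the group's actual pipeline): breathing and heartbeats move the chest by millimetres, which changes the path length of the reflected signal and shows up as a periodic phase shift; a Fourier transform of that phase then separates the two rates. All numbers below are invented for the example.

```python
# Toy illustration of extracting breathing and pulse rates from the phase of a
# reflected RF signal. The displacement amplitudes, carrier wavelength and noise
# level are made-up stand-ins, not values from the MIT system.
import numpy as np

fs = 30.0                                   # phase samples per second
t = np.arange(0, 60, 1 / fs)                # one minute of data
wavelength = 0.05                           # assumed 5 cm carrier wavelength

# Chest displacement: ~5 mm breathing at 0.25 Hz plus ~0.1 mm pulse at 1.2 Hz.
displacement = 5e-3 * np.sin(2 * np.pi * 0.25 * t) + 1e-4 * np.sin(2 * np.pi * 1.2 * t)
phase = 4 * np.pi * displacement / wavelength + 0.01 * np.random.randn(t.size)

spectrum = np.abs(np.fft.rfft(phase - phase.mean()))
freqs = np.fft.rfftfreq(t.size, 1 / fs)

breathing_band = (freqs > 0.1) & (freqs < 0.5)
pulse_band = (freqs > 0.8) & (freqs < 2.0)
print("breathing ~%.2f Hz" % freqs[breathing_band][spectrum[breathing_band].argmax()])
print("pulse     ~%.2f Hz" % freqs[pulse_band][spectrum[pulse_band].argmax()])
```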
Katabi and her students have also used this approach to create a sensor called WiGait that can measure walking speed using wireless signals, which could help doctors predict cognitive decline, falls, certain cardiac or pulmonary diseases, or other health problems.
After developing those sensors, Katabi thought that a similar approach could also be useful for monitoring sleep, which is currently done while patients spend the night in a sleep lab hooked up to monitors such as electroencephalography (EEG) machines.
“The opportunity is very big because we don’t understand sleep well, and a high fraction of the population has sleep problems,” says Zhao. “We have this technology that, if we can make it work, can move us from a world where we do sleep studies once every few months in the sleep lab to continuous sleep studies in the home.”
To achieve that, the researchers had to come up with a way to translate their measurements of pulse, breathing rate, and movement into sleep stages. Recent advances in artificial intelligence have made it possible to train computer algorithms known as deep neural networks to extract and analyze information from complex datasets, such as the radio signals obtained from the researchers’ sensor. However, these signals carry a great deal of information that is irrelevant to sleep and can be confusing to existing algorithms. The MIT researchers therefore devised a new AI algorithm based on deep neural networks that eliminates the irrelevant information.
“The surrounding conditions introduce a lot of unwanted variation in what you measure. The novelty lies in preserving the sleep signal while removing the rest,” says Jaakkola. Their algorithm can be used in different locations and with different people, without any calibration.
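To make that idea concrete, here is a minimal sketch of one standard way to train a network that keeps sleep-relevant features while discarding person- and environment-specific ones: a "source" discriminator attached through a gradient-reversal layer, so the encoder is penalised whenever its features reveal which subject or room the data came from. This is an assumption about the general technique, not the authors' published architecture; all layer sizes, feature dimensions and labels are invented.

```python
# Sketch of adversarial, source-invariant sleep staging (illustrative only).
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; flips the gradient sign in the backward pass."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

class SleepStager(nn.Module):
    def __init__(self, n_features=64, n_stages=4, n_sources=25, hidden=128):
        super().__init__()
        # Encoder over per-epoch features derived from the RF reflections.
        self.encoder = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.stage_head = nn.Linear(hidden, n_stages)    # awake / light / deep / REM
        self.source_head = nn.Linear(hidden, n_sources)  # which subject or room

    def forward(self, x, lam=1.0):
        z = self.encoder(x)
        # The encoder learns to predict the stage while *fooling* the source head.
        return self.stage_head(z), self.source_head(GradReverse.apply(z, lam))

# One toy training step on random stand-in data.
model = SleepStager()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
ce = nn.CrossEntropyLoss()
x = torch.randn(32, 64)                     # 32 thirty-second epochs of features
stage = torch.randint(0, 4, (32,))
source = torch.randint(0, 25, (32,))
stage_logits, source_logits = model(x)
loss = ce(stage_logits, stage) + ce(source_logits, source)
opt.zero_grad(); loss.backward(); opt.step()
```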
Using this approach in tests of 25 healthy volunteers, the researchers found that their technique was about 80 percent accurate, which is comparable to the accuracy of ratings determined by sleep specialists based on EEG measurements.
“Our device allows you not only to remove all of these sensors that you put on the person, and make it a much better experience that can be done at home, it also makes the job of the doctor and the sleep technologist much easier,” Katabi says. “They don’t have to go through the data and manually label it.”
Sleep deficiencies
Other researchers have tried to use radio signals to monitor sleep, but these systems are accurate only 65 percent of the time and mainly determine whether a person is awake or asleep, not what sleep stage they are in. Katabi and her colleagues were able to improve on that by training their algorithm to ignore wireless signals that bounce off of other objects in the room and include only data reflected from the sleeping person.
The researchers now plan to use this technology to study how Parkinson’s disease affects sleep.
“When you think about Parkinson’s, you think about it as a movement disorder, but the disease is also associated with very complex sleep deficiencies, which are not very well understood,” Katabi says.
The sensor could also be used to learn more about sleep changes produced by Alzheimer’s disease, as well as sleep disorders such as insomnia and sleep apnea. It may also be useful for studying epileptic seizures that happen during sleep, which are usually difficult to detect.
Instead of worrying so much about robots taking away jobs, maybe we should worry more about wages being too low for robots to even get a chance. Harvesting agricultural products, particularly fruits and vegetables, depends on seasonal human labor drawn from a diminishing pool of willing workers.
Robots that can supplement or replace human workers in the harvesting process are being developed and tested in startups and academia, but almost all are not yet ready for prime time.
In a New York Times article entitled Rethinking Low Productivity, Neil Irwin notes that productivity growth has been on a downward path since the financial crisis. Irwin, who writes about economic trends, asks whether the downward trend is the cause of low growth or the result, a troubling question for the dynamics of the agriculture industry.
“Inventors and business innovators are always developing better ways to do things, but it takes a labor shortage and high wages to coax firms to deploy the investment it takes to actually put those innovations into widespread use.”
Those two phenomena are happening today in the U.S. agricultural industry. Labor rates are rising while willing laborers are diminishing — yet farmers are not yet investing in robotics. In fact, they are adapting to the labor crisis by planting less and changing the crops they grow to be less labor intensive.
J.W. Mason, the author of a paper by the Roosevelt Institute cited as the basis for the thesis of Irwin’s article, says:
“If the labor market tightens and wages rise, that will be the impetus to get companies to consider more big-ticket innovations that generate productivity growth so that we don’t perpetuate the present conundrum in which both arguments cannot be true:
“On Mondays and Wednesdays, economists argue that wages are low because robots are taking people’s jobs.
On Tuesdays and Thursdays, it’s that we can’t have wages rise because productivity growth is low.”
This situation cannot continue without radical changes; otherwise we will become buyers of foreign fruits and vegetables, paying prices set by foreign providers and depending on them for availability and quality.
A number of factors beyond the declining availability of farm workers are causing a reset in the ag industry: the challenges and complexities of employing and retaining farm labor; the rising cost of farm workers; changing farmlands; climate change; the growth of indoor farming; and the broader automation of the agriculture industry. All are pushing farmers to change how they farm, or to shift toward a less labor-intensive product mix (which is often a path toward failure). Market challenges for the sector include unclear value propositions, limited awareness of robotic systems among growers, insufficient robotic solutions, the difficulty of matching human-like dexterity with machines, fragmented technology development, and weak support.
Tractica, in its report on agricultural robotics, forecasts an optimistic resolution to this temporary conundrum in the very near term, projecting that shipments of agricultural robots will increase significantly in the years ahead, rising from 32,000 units in 2016 to 594,000 units annually by 2024, by which time the market is expected to reach $74 billion in annual revenue. Certainly the time is ripe and, as Mason said, “Both arguments can’t be true.”
If you would like to receive the Weekly Roundup in your inbox, please subscribe at the bottom of the page.
At the Center for the Study of the Drone
In an interview with OpenGov, Center for the Study of the Drone co-director Arthur Holland Michel discusses the main trendlines in the ongoing evolution of drone technology.
News
A U.S. drone strike in Somalia reportedly killed a member of al-Shabab. In a statement, the U.S. Africa Command said that the strike took place in the Lower Shabelle region, an al-Shabab stronghold. (Associated Press)
The Trump administration is reviewing a drone export policy established by the Obama administration. According to an official who spoke to DefenseNews, the review is part of a broader effort to find “smarter new approaches to U.S. defense trade policy.” The Obama administration placed controls on drone technology exports to U.S. allies in 2015.
A small tethered drone will help the U.S. Secret Service provide perimeter security during President Trump’s visit to the Trump National Golf Club in Bedminster, New Jersey this month. In an announcement, the agency said that the test is part of an initiative to explore new technologies for security operations. (Reuters)
Citing cyber vulnerabilities, the U.S. Army has instructed its units to discontinue the use of all drones made by DJI, the popular Chinese manufacturer. The decision appears to have been based on a classified study and a Navy memo on security issues in DJI products. (Reuters)
At an event hosted by the Center for Strategic and International Studies and the U.S. Naval Institute, Adm. Paul Zukunft discussed the Coast Guard’s history with drone acquisition and operations. (USNI News)
The China Aerospace Science and Technology Corporation has made a number of upgrades to its CH-4 Rainbow surveillance and strike drone. (IHS Jane’s International Defence Review)
A team of security researchers has demonstrated that sonic blasts can be used to hack a number of electronic devices, including drones. (Fox News)
Researchers at the Swiss Federal Institute of Technology and Zurich University of the Arts have developed a hexacopter with independently rotating propellers that is capable of flying in far more acrobatic ways than traditional multirotor drones. (Yanko Design)
The University of Michigan has announced that it is developing an outdoor flight testing facility for drones. (Unmanned Systems Technology)
A team at the China Aerospace Science and Technology Corporation is developing an app-based management system for large military drones. (IHS Jane’s International Defence Review)
Following a 10-month definition process, the Organisation for Joint Armament Cooperation has decided that its European Medium-Altitude Long-Endurance drone will have a twin turboprop design. (IHS Jane’s International Defence Review)
Ukraine’s SpetsTechnoExport has conducted a weapons test of its Fantom-2 unmanned ground vehicle. (IHS Jane’s International Defence Review)
The Federal Aviation Administration is investigating an incident in which a drone was spotted near a runway at Newark International Airport. (USA Today)
China’s People’s Liberation Army confirmed that it is now operating the CH-901, a loitering munition drone. (Popular Science) For more on loitering munitions, click here.
The Duluth Fire Department in Minnesota is testing EMILY, an unmanned surface vehicle designed for saving stranded swimmers, with an eye to possibly acquiring the system to use on Lake Superior. (Grand Forks Herald)
Sen. Sheldon Whitehouse (D-RI) and Rep. Jim Langevin (D-RI) introduced a bill that makes it illegal to fly drones near airports without permission. (The Hill)
The Fort Wayne Police Department in Indiana has acquired two drones for a range of operations, including emergency response and environmental surveys. (News Sentinel)
Kentucky governor Matt Bevin has accused a local news channel of invading his privacy after it flew a drone over his private property. (CNET)
The U.S. Army and the Army of the Republic of Macedonia are constructing a 300m-long runway for drones in the Krivolak Training Area. (IHS Jane’s Defense Weekly)
The North Dakota Air National Guard took delivery of its first new MQ-9 Reaper drone. (Associated Press)
Snap is reportedly negotiating to buy Zero Zero Robotics, the China-based manufacturer of the Hover Camera selfie drone, for between $150 million and $200 million. (TechCrunch)
The U.S. Navy awarded Northrop Grumman Systems a $19.9 million contract to identify solutions to “near-term emergent obsolescence issues” for the MQ-4C Triton. (DoD)
The U.S. Navy awarded Northrop Grumman Systems a $2.94 million contract for MQ-8 Fire Scout logistics and training sustainment. (FBO)
A quick, hassle-free way to stay on top of robotics news, our robotics digest is released on the first Monday of every month. Sign up to get it in your inbox.
Robots in action
From wacky talking Einsteins to clumsy security ‘bots, from speedy drones to the underwater operations at Fukushima, it’s been another busy month. So let’s kick off our July review with a look at robots in action!
Up in the air
You’d be forgiven for missing this first one. Zipping by at a record speed of 179.3 mph (288.6 km/h), the Drone Racing League’s new RacerX drone lays claim to the “fastest ground speed by a battery-powered remote-controlled quadcopter.” The team behind the drone buzzed into the Guinness Book of World Records while performing a number of drag races along a 100-meter course in upstate New York. The aircraft, which weighs in at 0.8 kg, recorded an average top speed of 163.5 mph (263.1 km/h).
Flying a lot higher and—technically—a lot faster, this next innovation takes the form of a cute, spheroid camera drone that predictably drew comparisons to everyone’s favorite interstellar droid-ball, BB-8. Released aboard the International Space Station (ISS), the Japanese-made Int-Ball will not only save crewmembers time by snapping pictures of experiments, but could also improve robot-human cooperation in future space expeditions, according to a statement from the Japan Aerospace Exploration Agency (JAXA).
But if you really don’t want flying robots in your airspace (and you don’t have a trained sparrowhawk on-hand), then check out this testing of anti-drone weapons designed to blow prying eyes out of the sky. Could we be on the cusp of an anti-drone arms race?
In the lab
July saw more researchers challenging the traditional anthropomorphic vision of robot motion. Take a look at these new innovations and you’ll see that the concept of a rickety, two-legged android helper is beginning to look rather dated.
For example: a new type of vine-inspired robot created by mechanical engineers at Stanford University could soon be squirming through the rubble of collapsed buildings. Like natural organisms that cover distance by growing—such as fungi and nerve cells—the robot moves by extending itself. The researchers have built a proof of concept of their soft, growing robot and run it through some challenging tests. From one end of the cylinder, a tendril can extend into a mass of stones or dirt, like a fast-climbing vine. And a camera at the tip of the tendril can potentially offer rescuers a view of otherwise unreachable places.
Meanwhile, a team of researchers at the Wyss Institute for Biologically Inspired Engineering and the John A. Paulson School of Engineering and Applied Sciences (SEAS) at Harvard University has created battery-free folding robots that are capable of complex, repeatable movements powered and controlled through a wireless magnetic field. There are many applications for this kind of minimalist robotic technology. For example: rather than having an uncomfortable endoscope put down their throat to assist a doctor with surgery, a patient could just swallow a micro-robot that could move around and perform simple tasks like holding tissue or filming—all powered by a coil outside the patient’s body. Furthermore, using large source coils could enable wireless, battery-free communication between multiple “smart” objects in an entire home.
And check this out: a pair of new computational methods developed by a team of researchers from Massachusetts Institute of Technology (MIT), the University of Toronto and Adobe Research has taken steps towards automating the design of the dynamic mechanisms behind jumping movements in robots. Their methods generate simulations that match the real-world behaviors of flexible devices at rates 70 times faster than previously possible and provide critical improvements in the accuracy of simulated collisions and rebounds. These methods are both fast and accurate enough to be used to automate the design process used to create dynamic mechanisms for controlled jumping.
Health and rehabilitation
One of the most inspiring and positive applications of robotics and AI is within the field of medicine and rehabilitation. Just take the following story for example. With the assistance of its human handlers, Toyota’s Human Support Robot wheeled into a paralyzed military veteran’s home on a mission: to support the quadriplegic patient and, in the process, pave the way for truly useful care robots. Check out the video below.
Leveraging similar technology to that applied in their self-driving golf buggies and autonomous electric cars, MIT and Singaporean researchers have developed and deployed a self-driving wheelchair at a hospital. Spearheaded by Daniela Rus, the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science and director of MIT’s Computer Science and Artificial Intelligence Laboratory, this autonomous wheelchair is an extension of the self-driving scooter that launched at MIT last year.
Meanwhile, July also brought some fantastic new developments in robot-driven harness devices designed to improve mobility in disabled people. A team led by Sunil Agrawal, professor of mechanical engineering and of rehabilitation and regenerative medicine at Columbia Engineering, has published a pilot study in Science Robotics that demonstrates a robotic training method that improves posture and walking in children with crouch gait—caused by cerebral palsy—by enhancing their muscle strength and coordination.
Similarly, a recent paper published in Science Translational Medicine by a team led by the Courtine Lab addresses the issue of regaining movement in spinal cord injury (SCI) and stroke patients. The research focuses on multi-directional gravity-assist harnesses to aid in rehabilitation. Check out the video below.
Mapping and exploration
An underwater drone, nicknamed “Little Sunfish”, has captured previously unseen shots of underwater damage at the crippled Fukushima nuclear plant. The RC marine vehicle, which is about the size of a loaf of bread, has been sent into the primary containment vessel of the Unit 3 reactor in an attempt to locate melted fuel. Tepco spokesman Takahiro Kimoto told the Japan Times that video taken by the robot over three days shows clumps of what is likely to be melted fuel. “This means something of high temperature melted some structural objects and came out. So it is natural to think that melted fuel rods are mixed with them,” he said.
Exemplified by the operation at Fukushima, the need for fast, accurate 3D mapping solutions has quickly become a reality for many industries. As such, Clearpath Robotics decided this month to team up with Mandala Robotics to demonstrate how easily you might implement 3D mapping on a Clearpath robot.
Fun and games
Comedy is subjective. One person’s epic fail is another person’s fit of hilarity. This month, a lot of people were having a good laugh at a suicidal security ‘bot at an office and retail complex in Washington, D.C. that drove itself into a fountain. The egg-shaped robot, known as the K5 Autonomous Data Machine, drew both sympathy and jeers after it stumbled down a set of steps and into the water. The photos were widely shared. Fish that one out.
If expensive punchlines are your thing, then you might also be interested in the $300 USD Professor Einstein robot now available on eBay. Hanson Robotics’ expressive, wacky robotic character can chat about science, tell jokes, check on the weather and, naturally, quote Einstein himself. It connects to a companion app with games, videos, and interactive lessons. And yeah, it’s constantly sticking its tongue out (of course).
Finally, last month we saw a group of students in England develop a robot to pull the perfect pint. This month, it’s cocktails! The Cocktail Bot 4.0 consists of five robots with one high-level goal: mix more than 20 possible drink combinations for you!
Business & politics
Last month saw two robotics-related companies get $50 million each, while 17 others raised $248 million, for a monthly total of $348 million. Acquisitions also continued to be substantial, with SoftBank’s acquisition of Google’s robotic properties Boston Dynamics and Schaft plus two other acquisitions.
Indeed, two reputable research resources in July reported that the robotics industry is growing more rapidly than expected. BCG (Boston Consulting Group) is conservatively projecting that the market will reach $87 billion by 2025; Tractica, incorporating the robotic and AI elements of the emerging self-driving industry, is forecasting the market will reach $237 billion by 2022.
Meanwhile, Singapore Technologies Engineering Ltd (ST Engineering) has acquired Pittsburgh, PA-based robotics firm Aethon Inc through Vision Technologies Land Systems, Inc. (VTLS), and its wholly-owned subsidiary, VT Robotics, Inc, for $36 million. The acquisition will be carried out by way of a merger with VT Robotics, a newly incorporated entity established specifically for the transaction. The merger will see Aethon as the surviving entity, operating as a subsidiary of VTLS within the ST Group’s Land Systems sector. Aethon’s leadership team and employees will remain in place and the company will continue to operate out of its Pittsburgh, PA location.
And if you’re looking for funding, then check this out: The Robotics Hub, in collaboration with Silicon Valley Robotics, is looking to invest up to $500,000 in robotics, AI and sensor startups! Finalists also receive exposure on Robohub and space in the new Silicon Valley Robotics Cowork Space. Plus you get to pitch your startup to an audience of top VCs, investors and experts. Entries close August 31.
Remember Tertill, the weed-whacking robot? The response to Tertill’s crowdfunding campaign has amazed and delighted! Pledges totalling over $250,000 have come from 1000+ backers, and Tertill is shipping to all countries, with over a fifth of Tertill’s supporters coming from outside the United States. July 11 was the last full day of the campaign. The discounted campaign price is no longer available and delivery in time for next year’s—northern hemisphere—growing season cannot be assured.
Self-driving news
In the race to develop self-driving technology, Chinese Internet giant Baidu unveiled its 50+ partners in an open source development program, revised its timeline for introducing autonomous driving capabilities on open city roads, described the Project Apollo consortium and its goals, and declared Apollo to be the ‘Android of the autonomous driving industry’.
And there could be more trouble ahead in the Asian market as India—another powerhouse in the region—could say no to self-driving cars by banning their use entirely. ‘In a country where you have unemployment, you can’t have a technology that ends up taking people’s jobs,’ roads minister Nitin Gadkari says.
Conversely, lawmakers in the USA say self-driving cars are the future and federal law needs updating to ensure they’re developed and deployed in the United States. A panel approved a bill to boost testing of self-driving vehicles. The bill prohibits any state from imposing its own laws related to the design and construction of self-driving cars. Federal officials say 94% of auto accidents are caused by human error, so self-driving technology has the potential to save thousands of lives and improve the mobility of many elderly and disabled Americans. The Safely Ensuring Lives Future Deployment and Research In Vehicle Evolution Act, or SELF DRIVE Act, has now hit the House of Representatives.
And you can read Brad Templeton’s take on all the news and commentary from July’s AUVSI/TRB Automated Vehicle Symposium 2017. It’s an odd mix of business and research, but also the oldest self-driving car conference.
Drone News
Drone manufacturers and designers have managed to shrink drone technology in recent years, even creating flying prototypes at a near insect scale. But the toughest task has been shrinking down the brains of the operation. Now, engineers at MIT have taken a first step in designing a computer chip that uses a fraction of the power of larger drone computers and is tailored for a drone as small as a bottlecap. They presented a new methodology and design, which they call “Navion,” at the Robotics: Science and Systems conference, held at MIT.
Perhaps shrinking UAVs might also go some way to addressing the problem of sonic irritation? A preliminary NASA study has discovered that people find the noise of drones more annoying than that of ground vehicles, even when the sounds are the same volume. “We didn’t go into this test thinking there would be this significant difference,” says Andrew Christian of NASA’s Langley Research Center, Virginia. “It is almost unfortunate the research has turned up this difference in annoyance levels,” he adds, “as its purpose was merely to prove that Langley’s acoustics research facilities could contribute to NASA’s wider efforts to study drones.”
Meanwhile, an unrelated study by the UK government on the danger of drones colliding with aircraft has drawn criticism from manufacturers. The Department for Transport (DfT) report recommended registration and competency testing, saying helicopters were especially vulnerable to drones. The Drone Manufacturers Alliance Europe (DMAE) has questioned the evidence gathered in the report and says some of the testing is flawed.
In other safety news, DJI has responded to reports of its drones randomly switching off mid-flight and dropping out of the sky. According to at least 14 users, DJI Spark drones have switched off and crashed into areas ranging from open fields to lakes and forests, though luckily no crowded areas yet. DJI told Fortune in a statement that it is working to address the crash incidents: “DJI is aware of a small number of reports involving Spark drones that have lost power mid-flight. Flight safety and product reliability are top priorities. Our engineers are thoroughly reviewing each customer case and working to address this matter urgently,” the statement read. “We are looking to implement additional safeguards with a firmware update which will be issued soon. When prompted on the DJI GO 4 App, we recommend all customers to connect to the internet and update their aircraft’s firmware to ensure a safe flight when flying their Spark,” the company added.
Learn
An interdisciplinary workshop on self-organization and swarm intelligence in cyber physical systems was held at Lakeside Labs in July. Experts presented their work and discussed open issues in this exciting field. Click here to watch some videos from the workshop.
July also welcomed The Second Edition of the award-winning Springer Handbook of Robotics, edited by Bruno Siciliano and Oussama Khatib. The contents of the first edition have been restructured to achieve four main objectives: the enlargement of foundational topics for robotics, the enlightenment of design of various types of robotic systems, the extension of the treatment on robots moving in the environment, and the enrichment of advanced robotics applications. Most previous chapters have been revised, fifteen new chapters have been introduced on emerging topics, and a new generation of authors have joined the handbook’s team.
Elsewhere, Mike Salem from Udacity’s Robotics Nanodegree is hosting a series of interviews with professional roboticists as part of their free online material. You can watch the interview with Nick Kohut, Co-Founder and CEO of Dash Robotics, below. Stay tuned to Robohub throughout August to see more featured interviews from Udacity.
And finally, check out Peter Feuilherade’s article on rescue robots, and Christoph Salge’s article on how Asimov’s laws might not be sufficient to rescue humanity from robots. I’ll leave it to you to muddle through the implicit dichotomy. Or I’ll just see you next month for all the latest robot-related news and views!
Enjoy.
Upcoming events for August – September 2017
Farm Progress Show: August 29–31, 2017, Decatur, IL
World of Drones Congress (WoDC): August 31–September 2, 2017, Brisbane, Australia
Interdrone: September 6–8, 2017, Las Vegas, NV
FSR 2017: September 12–15, 2017, Zurich, Switzerland
RobotWorld: September 13–16, 2017, Seoul, South Korea
IEEE Africon: September 18–20, 2017, Cape Town, South Africa
ROS Con: September 21–22, 2017, Vancouver, BC
IROS 2017: September 24–28, 2017, Vancouver, BC
RoboBusiness: September 27–28, 2017, Santa Clara, CA
Auris Surgical Robotics, the Silicon Valley startup headed by Frederic Moll who previously co-founded Hansen Medical and Intuitive Surgical, raised $280 million in a Series D round led by Coatue Management and including earlier investors Mithril Capital Management, Lux Capital, Highland Capital and 24 others.
An Auris spokesman said that the company has raised a total of $530 million and is developing targeted, minimally invasive therapies that treat only the diseased cells in order to prevent the progression of a patient’s illness. Lung cancer is the first disease they are targeting.
There are 1 billion smokers worldwide creating an epidemic of 6 million deaths per year. More patients die every year from lung cancer than from prostate, breast and colon cancer combined.
The reason lung cancer is so deadly is that the diagnostic and treatment processes are ineffective: the majority of lung cancer patients are diagnosed at a late stage, when the cancer has already spread beyond its primary location. “With our technology,” the company says, “physicians will be able to access early stage lung cancer without incisions, allowing accurate diagnosis and targeted treatment.”
In this episode, MeiXing Dong interviews Matthias Vanoni, co-founder and CEO of Biowatch. Vanoni speaks about Biowatch, a wrist-veins biometric reader that functions as a security solution for mobile payments and smart devices. They discuss the technical challenges of building a miniaturized wrist-vein reader and how this device changes the usual user authentication process.
Matthias Vanoni
Matthias Vanoni is the co-founder and CEO of Biowatch. Previously, he studied vein biometrics as a PhD student at EPFL.
Flexible endoscopes can snake through narrow passages to treat difficult-to-reach areas of the body. However, once they arrive at their target, these devices rely on rigid surgical tools to manipulate or remove tissue. These tools offer surgeons reduced dexterity and sensing, limiting the current therapeutic capabilities of the endoscope.
Now, researchers from the Wyss Institute for Biologically Inspired Engineering at Harvard University and the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) have developed a hybrid rigid-soft robotic arm for endoscopes with integrated sensing, flexibility, and multiple degrees of freedom. This arm — built using a manufacturing paradigm based on pop-up fabrication and soft lithography — lies flat on an endoscope until it arrives at the desired spot, then pops up to assist in surgical procedures.
Soft robots are so promising for surgical applications because they can match the stiffness of the body, meaning they won’t accidentally puncture or tear tissue. However, at small scales, soft materials cannot generate enough force to perform surgical tasks.
“At the millimeter scale, a soft device becomes so soft that it can’t damage tissue but it also can’t manipulate the tissue in any meaningful way,” said Tommaso Ranzani, Ph.D., a Postdoctoral Fellow at the Wyss Institute and SEAS and coauthor of the paper. “That limits the application of soft microsystems for performing therapy. The question is, how can we develop soft robots that are still able to generate the necessary forces without compromising safety.”
Inspired by biology, the team developed a hybrid model that used a rigid skeleton surrounded by soft materials. The manufacturing method drew on previous work in origami-inspired, pop-up fabrication developed by Robert Wood, Ph.D., who coauthored the paper and is a Core Faculty Member of the Wyss Institute and the Charles River Professor of Engineering and Applied Sciences at SEAS.
Previous pop-up manufacturing techniques — such as those used with the RoboBees — rely on actuation methods that require high voltages or temperatures to operate, something that wouldn’t be safe in a surgical tool directly manipulating biological tissues and organs.
So, the team integrated soft actuators into the pop-up system.
“We found that by integrating soft fluidic microactuators into the rigid pop-up structures, we could create soft pop-up mechanisms that increased the performance of the actuators in terms of the force output and the predictability and controllability of the motion,” said Sheila Russo, Ph.D., Postdoctoral Fellow at the Wyss Institute and SEAS and lead author of the paper. “The idea behind this technology is basically to obtain the best of both worlds by combining soft robotic technologies with origami-inspired rigid structures. Using this fabrication method, we were able to design a device that can lie flat when the endoscope is navigating to the surgical area, and when the surgeon reaches the area they want to operate on, they can deploy a soft system that can safely and effectively interact with tissue.”
The soft actuators are powered by water. They are connected to the rigid components with an irreversible chemical bond, without the need of any adhesive. The team demonstrated the integration of simple capacitive sensing that can be used to measure forces applied to the tissue and to give the surgeon a sense of where the arm is and how it’s moving. The fabrication method allows for bulk manufacturing, which is important for medical devices, and allows for increased levels of complexity for more sensing or actuation. Furthermore, all materials used are biocompatible.
The arm is also equipped with a suction cup — inspired by octopus tentacles — to safely interact with tissue. The team tested the device ex vivo, simulating a complicated endoscopic procedure on pig tissue. The arm successfully manipulated the tissue safely.
“The ability to seamlessly integrate gentle yet effective actuation into millimeter-scale deployable mechanisms fits naturally with a host of surgical procedures,” said Wood. “We are focused on some of the more challenging endoscopic techniques where tool dexterity and sensor feedback are at a premium and can potentially make the difference between success and failure.”
The researchers demonstrated that the device could be scaled down to 1 millimeter, which would allow it to be used in even tighter endoscopic procedures, such as in lungs or the brain.
Next, the researchers hope to test the device in vivo.
“Our technology paves the way to design and develop smaller, smarter, softer robots for biomedical applications,” said Russo.
The paper was coauthored by Conor Walsh, Ph.D., a Core Faculty Member of the Wyss Institute and the John L. Loeb Associate Professor of Engineering and Applied Sciences at SEAS.
The research was supported by the DARPA “Atoms to Product” program and the Wyss Institute for Biologically Inspired Engineering.
It was a return to the source for RoboCup 2017, which took place last week in Nagoya, Japan, 20 years after its launch in the same city.
Bigger than ever, the competition brought together roboticists from around the world. Originally focussed on robot football matches, RoboCup has expanded to include leagues for rescue robots, industrial robots, and robots in the home. Kids are also part of the fun, competing in their own matches and creative shows. You can watch video introductions of all of the leagues here, or watch a quick summary below.
And here’s a scroll through 5 hours of football glory from this year’s competition.
The data captured by today’s digital cameras is often treated as the raw material of a final image. Before uploading pictures to social networking sites, even casual cellphone photographers might spend a minute or two balancing color and tuning contrast, with one of the many popular image-processing programs now available.
This week at Siggraph, the premier digital graphics conference, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory and Google are presenting a new system that can automatically retouch images in the style of a professional photographer. It’s so energy-efficient, however, that it can run on a cellphone, and it’s so fast that it can display retouched images in real-time, so that the photographer can see the final version of the image while still framing the shot.
The same system can also speed up existing image-processing algorithms. In tests involving a new Google algorithm for producing high-dynamic-range images, which capture subtleties of color lost in standard digital images, the new system produced results that were visually indistinguishable from those of the algorithm in about one-tenth the time — again, fast enough for real-time display.
The system is a machine-learning system, meaning that it learns to perform tasks by analyzing training data; in this case, for each new task it learned, it was trained on thousands of pairs of images, raw and retouched.
The work builds on an earlier project from the MIT researchers, in which a cellphone would send a low-resolution version of an image to a web server. The server would send back a “transform recipe” that could be used to retouch the high-resolution version of the image on the phone, reducing bandwidth consumption.
“Google heard about the work I’d done on the transform recipe,” says Michaël Gharbi, an MIT graduate student in electrical engineering and computer science and first author on both papers. “They themselves did a follow-up on that, so we met and merged the two approaches. The idea was to do everything we were doing before but, instead of having to process everything on the cloud, to learn it. And the first goal of learning it was to speed it up.”
Short cuts
In the new work, the bulk of the image processing is performed on a low-resolution image, which drastically reduces time and energy consumption. But this introduces a new difficulty, because the color values of the individual pixels in the high-res image have to be inferred from the much coarser output of the machine-learning system.
In the past, researchers have attempted to use machine learning to learn how to “upsample” a low-res image, or increase its resolution by guessing the values of the omitted pixels. During training, the input to the system is a low-res image, and the output is a high-res image. But this doesn’t work well in practice; the low-res image just leaves out too much data.
Gharbi and his colleagues — MIT professor of electrical engineering and computer science Frédo Durand and Jiawen Chen, Jon Barron, and Sam Hasinoff of Google — address this problem with two clever tricks. The first is that the output of their machine-learning system is not an image; rather, it’s a set of simple formulae for modifying the colors of image pixels. During training, the performance of the system is judged according to how well the output formulae, when applied to the original image, approximate the retouched version.
Taking bearings
The second trick is a technique for determining how to apply those formulae to individual pixels in the high-res image. The output of the researchers’ system is a three-dimensional grid, 16 by 16 by 8. The 16-by-16 faces of the grid correspond to pixel locations in the source image; the eight layers stacked on top of them correspond to different pixel intensities. Each cell of the grid contains formulae that determine modifications of the color values of the source images.
That means that each cell of one of the grid’s 16-by-16 faces has to stand in for thousands of pixels in the high-res image. But suppose that each set of formulae corresponds to a single location at the center of its cell. Then any given high-res pixel falls within a square defined by four sets of formulae.
Roughly speaking, the modification of that pixel’s color value is a combination of the formulae at the square’s corners, weighted according to distance. A similar weighting occurs in the third dimension of the grid, the one corresponding to pixel intensity.
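The following sketch shows how such a grid can be applied to a full-resolution image, as I read the description above; it is not the released implementation. Each cell holds a small affine color transform, and every high-res pixel blends the transforms of the eight surrounding cells, weighted by spatial position and intensity. The grid contents here are random stand-ins for what the network would actually predict.

```python
# Illustrative slicing of a 16x16x8 grid of per-cell affine color transforms.
import numpy as np

H, W = 480, 640                      # example high-res image size
GY, GX, GZ = 16, 16, 8               # 16x16 spatial cells, 8 intensity bins
grid = np.random.randn(GY, GX, GZ, 3, 4) * 0.01
grid[..., :3, :3] += np.eye(3)       # start near the identity transform

image = np.random.rand(H, W, 3)      # stand-in for the high-res input
luma = image.mean(axis=2)            # intensity used as the third grid axis

# Continuous grid coordinates for every pixel.
gy = (np.arange(H) + 0.5) / H * GY - 0.5
gx = (np.arange(W) + 0.5) / W * GX - 0.5
gz = luma * GZ - 0.5

def cell_and_weight(coord, size):
    """Lower cell index and interpolation weight, clamped to the grid."""
    lo = np.clip(np.floor(coord).astype(int), 0, size - 2)
    return lo, np.clip(coord - lo, 0.0, 1.0)

y0, wy = cell_and_weight(np.broadcast_to(gy[:, None], (H, W)), GY)
x0, wx = cell_and_weight(np.broadcast_to(gx[None, :], (H, W)), GX)
z0, wz = cell_and_weight(gz, GZ)

# Trilinearly blend the affine matrices of the 8 surrounding cells.
A = np.zeros((H, W, 3, 4))
for dy in (0, 1):
    for dx in (0, 1):
        for dz in (0, 1):
            w = (wy if dy else 1 - wy) * (wx if dx else 1 - wx) * (wz if dz else 1 - wz)
            A += w[..., None, None] * grid[y0 + dy, x0 + dx, z0 + dz]

# Apply each pixel's blended transform to its color.
rgb1 = np.concatenate([image, np.ones((H, W, 1))], axis=2)
out = np.einsum('hwij,hwj->hwi', A, rgb1)
```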
The researchers trained their system on a data set created by Durand’s group and Adobe Systems, the creators of Photoshop. The data set includes 5,000 images, each retouched by five different photographers. They also trained their system on thousands of pairs of images produced by the application of particular image-processing algorithms, such as the one for creating high-dynamic-range (HDR) images. The software for performing each modification takes up about as much space in memory as a single digital photo, so in principle, a cellphone could be equipped to process images in a range of styles.
Finally, the researchers compared their system’s performance to that of a machine-learning system that processed images at full resolution rather than low resolution. During processing, the full-res version needed about 12 gigabytes of memory to execute its operations; the researchers’ version needed about 100 megabytes, or one-hundredth as much. The full-resolution version of the HDR system took about 10 times as long to produce an image as the original algorithm, or 100 times as long as the researchers’ system.
“This technology has the potential to be very useful for real-time image enhancement on mobile platforms,” says Barron. “Using machine learning for computational photography is an exciting prospect but is limited by the severe computational and power constraints of mobile phones. This paper may provide us with a way to sidestep these issues and produce new, compelling, real-time photographic experiences without draining your battery or giving you a laggy viewfinder experience.”
Artificial skin with post-human sensing capabilities, and a better understanding of skin tissue, could pave the way for robots that can feel, smart-transplants and even cyborgs.
Few people would immediately recognise the skin as our bodies’ largest organ, but the adult human has on average two square metres of it. It’s also one of the most important organs and is full of nerve endings that provide us with instant reports of temperature, pressure and pain.
So far the best attempts to copy this remarkable organ have resulted in experimental skin with sensor arrays that, at best, can only measure one particular stimulus.
But the SmartCore project, funded by the EU’s European Research Council and based at the Graz University of Technology (TU Graz) in Austria, hopes to create a material that responds to multiple stimuli. To do so requires working at the nanoscale — where one nanometre represents a billionth of a metre — creating embedded arrays of minuscule sensors that could be 2,000 times more sensitive than human skin.
Principal investigator Dr Anna Maria Coclite, an assistant professor at TU Graz’s Institute for Solid State Physics, says the project aims to create a nanoscale sensor which can pick up temperature, humidity and pressure — not separately, but as an all-in-one package.
‘They will be made of a smart polymer core which expands depending on the humidity and temperature, and a piezoelectric shell, which produces an electric current when pressure is applied,’ she said.
These smart cores would be sandwiched between two similarly tiny nanoscale grids of electrodes which sense the electrical charges given off when the sensors ‘feel’ and then transmit this data.
If the team can surmount the primary challenge of distinguishing between the different senses, the first prototype should be ready in 2019, opening the door for a range of test uses.
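To illustrate one generic way such a disentangling problem can be approached (purely an example of the general idea, not the SmartCore design): if each stimulus contributes roughly linearly to several measured channels, and that mixing has been characterised in a calibration step, the individual stimuli can be recovered with a least-squares fit.

```python
# Toy example: recovering temperature, humidity and pressure from channels that
# each respond to a mixture of them. The sensitivity matrix is invented.
import numpy as np

# Rows = measured channels, columns = stimuli (temperature, humidity, pressure).
M = np.array([
    [0.9, 0.3, 0.1],   # channel 1 responds mostly to temperature
    [0.2, 1.1, 0.2],   # channel 2 responds mostly to humidity
    [0.1, 0.2, 1.3],   # channel 3 responds mostly to pressure
    [0.5, 0.5, 0.4],   # channel 4 mixes all three
])

true_stimuli = np.array([0.7, 0.4, 1.2])
readings = M @ true_stimuli + 0.01 * np.random.randn(4)

estimate, *_ = np.linalg.lstsq(M, readings, rcond=None)
print("estimated stimuli:", estimate.round(2))
```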
Robots
Dr Coclite says the first applications of a successful prototype would be in robotics since the artificial skin they’re developing has little in common with our fleshy exterior apart from its ability to sense.
‘The idea is that it could be used in ways, like robotic hands, that are able to sense temperatures,’ said Dr Coclite. ‘Or even things that can be sensed on a much smaller scale than humans can feel, i.e., robotic hands covered in such an artificial skin material that is able to sense bacteria.’
Moreover, she says the polymers used to create smart cores are so flexible that a successful sensor could potentially be modified in the future to sense other things like the acidity of sweat, which could be integrated into smart clothes that monitor your health while you’re working out.
And perhaps, one day, those who have lost a limb or suffered burns could also benefit from such multi-stimuli sensing capabilities in the form of a convincingly human artificial skin.
‘It would be fantastic if we could apply it to humans, but there’s still lots of work that needs to be done by scientists in turning electronic pulses into signals that could be sent to the brain and recognised,’ said Dr Coclite.
She also says that even once a successful prototype is developed, possible cyborg use in humans would be at least a decade away — especially taking into account the need to test for things like toxicity and how human bodies might accept or reject such materials.
Getting a grip
But before any such solutions are possible, we must learn more about biological tissue mechanics, says Professor Michel Destrade, host scientist of the EU-backed SOFT-TISSUES project, funded by the EU’s Marie Skłodowska-Curie actions.
Prof. Destrade, an applied mathematician at the National University of Ireland Galway, is supporting Marie Skłodowska-Curie fellow Dr Valentina Balbi in developing mathematical models that explain how soft tissue like eyes, brains and skin behave.
‘For example, skin has some very marked mechanical properties,’ said Prof. Destrade. ‘In particular its stretch in the body — sometimes you get a very small cut and it opens up like a ripe fruit.’
This is something he has previously researched with acoustic testing, which uses non-destructive sound waves to investigate tissue structure, instead of chopping up organs for experimentation.
And in SOFT-TISSUES’ skin research, the team hopes to use sound waves and modelling as a cheap and immediate means of finding the tension of skin at any given part of the body for any given person.
‘This is really important to surgeons, who need to know in which direction they should cut skin to avoid extensive scarring,’ explained Prof. Destrade. ‘But also for the people creating artificial skin to know how to deal with mismatches in tension when they connect it to real skin.
‘If you are someone looking to create artificial skin and stretch it onto the body, then you need to know which is the best way to cut and stretch it, the direction of the fibres needed to support it and so on.’
Dr Balbi reports that the biomedical industry has a real hunger for knowledge provided by mathematical modelling of soft tissues — and especially for use in bioengineering.
She says such knowledge could be useful in areas like cancer research into brain tumour growth and could even help improve the structure of lab-grown human skin as an alternative to donor grafts.
Sixteen teams from across the globe came to Nagoya, Japan to participate in the third annual Amazon Robotics Challenge. Amazon sponsors the event to strengthen ties between the industrial and academic robotics communities and to promote shared and open solutions to some of the big puzzles in the field. The teams took home $270,000 in prizes.
Watch the video below to see these inventive teams – and their robots – in action.
Congratulations to this year’s winners from the Australian Centre for Robotic Vision. See the full results here.
July 2017 was a big month for robotics-related company funding. Four raised $588 million and 19 others raised $370.6 million for a monthly total of $958.6 million. Acquisitions also continued to be significant with ST Engineering acquiring Aethon for $36 million, iRobot buying its European distributor for $141 million, and SoftBank purchasing 5% of iRobot shares for around $120 million.
Fundings
Plenty, a San Francisco vertical farm startup, raised $200 million in a Series B round led by SoftBank and joined by Bezos Expeditions, Data Collective, DCM Ventures, Finistere Ventures, Innovation Endeavors and Louis Bacon. Plenty plans to use the funds to expand to Japan and add strawberries and cucumbers to the leafy greens it already produces. Plenty makes an internet-connected system which delivers specific types of light, air composition, humidity and nutrition, depending on which crop is being grown, and is designing and adding robotics and automation as it can, particularly with its recent acquisition of Bright Agrotech (see below). Plenty says it can yield up to 350 times more produce in a given area than conventional farms — with 1 percent of the water.
Sanjeev Krishnan of S2G Ventures said: “This investment shows the potential of the sector. Indoor agriculture is a real toolkit for the produce industry. There is no winner takes all potential here. I could even see some traditional, outdoor growers do indoor ag as a way to manage some of the fundamental issues of the produce industry: agronomy, logistics costs, shrinkage, freshness, seasonality and manage inventory cycles better. There are many different models that could work and we are excited about the platforms being built in the market.”
Nauto, a Silicon Valley self-driving device and AI startup, raised $159 million in a Series B funding round led by SoftBank and Greylock Partners and also included previous investors BMW iVentures, General Motors Ventures, Toyota AI Ventures, Allianz Group, Playground Global and Draper Nexus.
SoftBank Group Corp. Chairman and CEO Masayoshi Son said, “While building an increasingly intelligent telematics business, Nauto is also generating a highly valuable dataset for autonomous driving, at massive scale. This data will help accelerate the development and adoption of safe, effective self-driving technology.”
Desktop Metal, the MIT spin-off and Massachusetts-based 3D metal printing technology startup, raised another $115 million in a Series D round which included New Enterprise Associates, GV (Google Ventures), GE Ventures, Future Fund and Techtronic Industries, which owns Hoover U.S. and Dirt Devil.
According to CEO Ric Fulop, “You don’t need tooling. You can make short runs of production with basically no tooling costs. You can change your design and iterate very fast. You can make shapes you couldn’t make any other way, so now you can lightweight a part and work with alloys that are very, very hard, with very extreme properties. One of the benefits for this technology for robotics is that you’re able to do lots of turns. Unless you’re iRobot with the Roomba, you’re making a lot of one-off changes to your product.”
Brain Corp, a San Diego AI company developing self-driving technology, raised $114 million in a Series C funding round led by the SoftBank Vision Fund. Qualcomm Ventures was the only other investor. The funds will be used to develop technology that enables robots to navigate in complex physical spaces. Last October, Brain Corp rolled out its first commercial product — a self-driving commercial floor scrubber for use in grocery stores and big box retailers.
Beijing Geekplus Technology (Geek+), a Chinese startup developing a goods-to-man warehousing system of robots and software very similar to Kiva Systems’ products, raised $60 million in a B round led by Warburg Pincus and joined by existing shareholders and Volcanics Venture. The company claims to have delivered the largest number of logistics robots among its peers in China, with nearly 1,000 robots deployed in warehouses for over 20 customers including Tmall, VIPShop and Suning.
Yong Zheng, Founder and CEO of Geek+, said, “This round of financing will help us upgrade our business in three aspects. Firstly, we will accelerate the upgrading of our logistics robotics products and expand product offerings to cover more applications. Secondly, we will accelerate our geographical expansion and industry coverage to provide our one-stop intelligent logistics system and operation solutions to more customers. Thirdly, we will start exploring overseas markets through multiple channels.”
Vicarious, a Union City, California-based artificial intelligence company using computational neuroscience to build better machine learning models that help robots quickly address a wide variety of tasks, raised $50 million in funding led by Khosla Ventures.
Momenta.ai, a Beijing autonomous driving startup that is developing digital maps, driving decision solutions and machine vision technology to detect traffic signs, pedestrians and track other cars, raised $46 million in a Series B funding round led by NIO Capital. Sequoia Capital China and Hillhouse Capital along with Daimler AG, Shunwei Capital, Sinovation Ventures and Unity Ventures also participated.
Autotalks, an Israeli maker of chips for vehicle-to-vehicle communications, raised $40 million from Toyota, Sumitomo Mitsui Banking and other investors. The funding will allow Autotalks to prepare and expand its operations for the upcoming start of mass production, as well as continue to develop communication solutions for both connected and autonomous cars.
Flashhold (also named Shanghai Express Warehouse Intelligent Technology and Quicktron) raised $29 million in a Series B round led by Alibaba Group's Cainiao Network and SB China Venture Capital (SBCVC). Flashhold is a Shanghai-based logistics robotics company with robotic products, shelving and software very similar to Amazon's Kiva Systems.
Slamtec, a Chinese company developing a solid state LiDAR laser sensor for robots in auto localization and navigation, raised $22 million from Chinese Academy of Sciences Holdings, ChinaEquity Group Inc. and Shenzhen Guozhong Venture Capital Management Co.
6 River Systems, the Boston, MA startup providing alternative fulfillment solutions for e-commerce distribution centers, raised $15 million in a round led by Norwest Venture Partners with participation from Eclipse Ventures and other existing investors.
Prospera, an Israeli ag startup, raised $15 million in a Series B round for its end-to-end internet of things platform for indoor and outdoor farms. The round was led by Qualcomm Ventures and fellow telecom heavyweight Cisco. Prospera uses computer vision, machine learning, and data science to detect and identify diseases, nutrient deficiencies, and other types of crop stress on farms, with the hope of improving crop yields and reducing farmers’ costs.
“Receiving funding from these major tech companies is a clear signal that tech industry heavy-hitters understand that agriculture is ripe for digitalization. It means that such companies, which are already involved in digitizing other traditional industries, see a significant opportunity in agtech,” said Prospera CEO Daniel Koppel.
Embark, a Belmont, California-based self-driving trucking startup, raised $15 million in Series A funding led by Data Collective and was joined by YC Continuity, Maven Ventures and SV Angel. Embark has teamed up with Peterbilt and plans to hire for their engineering team and add more trucks to expand their test fleet across the U.S.
Xometry, a Maryland startup with an Uber-like system for parts manufacture, raised $15 million in funding led by BMW Group’s VC arm and GE.
Intuition Robotics, an Israeli startup developing social companion technologies for seniors, raised $14 million in a Series A round led by Toyota Research Institute plus OurCrowd and iRobot as well as existing seed investors Maniv Mobility, Terra Venture Partners, Bloomberg Beta and private investors.
Dr. Gill Pratt, CEO of Toyota Research Institute said: “We are impressed with Intuition Robotics’ thought leadership of a multi-disciplinary approach towards a compelling product offering for older adults including: Human-Robot-Interaction, cloud robotics, machine learning, and design. Specifically, we believe Intuition Robotics’ technology, in the field of cognitive computing, has strong potential to positively impact the world’s aging population with a proactive, truly autonomous agent that’s deployed in their social robot, ElliQ.”
SkySafe, a San Diego, California-based radio-wave anti-drone device manufacturer, raised $11.5 million in Series A funding, according to TechCrunch. Andreessen Horowitz led the round. SkySafe recently secured DoD contracts to provide counter-drone tech for Navy Seals.
Kuaile Zhihui, a Beijing educational robot startup, has raised around $10 million in a Series A funding round led by Qiming Venture Partners that included GGV Capital and China Capital.
Atlas Dynamics, a Latvian UAS startup, raised $8 million from unnamed institutional and individual investors. Funds will be used to advance the development of its Visual Line of Sight (VLOS) and Beyond Visual Line of Sight (BVLOS) drone-based data solutions, and to build its presence in key markets, including North America.
Reach Robotics, a gaming robots developer, raised $7.5 million in Series A funding led by Korea Investment Partners and IGlobe. Reach has produced and sold an initial run of 500 of its four-legged, crab-like, MekaMon bots. MekaMon fits into an emerging category of smartphone-enabled augmented reality toys like Anki.
UVeye, a New York-based startup that develops automatic vehicle inspection systems, has raised $4.5 million in a seed round led by Ahaka Capital. Israeli angel investors group SeedIL Investment Club also participated. Funds will be used to launch its products and expand to international markets, including China.
Miso Robotics, the Pasadena-based developer of a burger-flipping robot, raised $3.1 million in a funding round led by Acacia Research. Interestingly, Acacia is an agency that licenses patents and also enforces patented technologies.
Metamoto, the Redwood City autonomous driving simulation startup, raised $2 million in seed funding led by Motus Ventures and UL, a strategic investor.
Fastbrick, an Australian brick-laying startup, raised $2 million from Caterpillar with an option to invest a further $8 million subject to shareholder approval. Both companies signed an agreement to collaborate on the development, manufacture, selling and servicing of Fastbrick’s technology mounted on Caterpillar equipment.
Acquisitions
Robopolis SAS, the France-based distributor of iRobot products in Europe, is being acquired by iRobot for $141 million. Last year iRobot, in a similar move to bring its distribution network in-house, acquired Demand Corp, its distributor for Japan.
Bright Agrotech, a Wyoming provider of vertical farming products, technology and systems, was acquired by Plenty, a vertical farm startup in San Francisco. No financial terms were disclosed. Bright has partnered with small farmers to start and grow indoor farms, providing high-tech growing systems and controls, workflow design, education and software.
Singapore Technologies Engineering Ltd (ST Engineering) has acquired robotics firm Aethon Inc through Vision Technologies Land Systems, Inc. (VTLS), and its wholly-owned subsidiary, VT Robotics, Inc for $36 million. This acquisition will be carried out by way of a merger with VT Robotics, a special purpose vehicle newly incorporated for the proposed transaction. The merger will see Aethon as the surviving entity that will operate as a subsidiary of VTLS, and will be part of the Group’s Land Systems sector.
On the Move Systems, a Canadian penny stock trucking systems provider, is merging with California-based RAD (Robotic Assistance Devices), an integrator of mobile robots for security applications. The merger involves RAD receiving 3.5 million shares of OMVS (around $250k).
IPOs and stock transactions
iRobot, the 27-year-old Massachusetts-based maker of the Roomba, has seen its stock soar on news that SoftBank (or the SoftBank Vision Fund) has purchased an undisclosed amount of iRobot stock. The purchase is reported to be over $100 million and less than $120 million (about 5% of the market value).
Almost all robocars use maps to drive. Not the basic maps you find in your phone navigation app, but more detailed maps that help them understand where they are on the road, and where they should go. These maps will include full details of all lane geometries, positions and meaning of all road signs and traffic signals, and also details like the texture of the road or the 3-D shape of objects around it. They may also include potholes, parking spaces and more.
The maps perform two functions. By holding a representation of the road texture or surrounding 3D objects, they let the car figure out exactly where it is on the map without much use of GPS. A car scans the world around it, and looks in the maps to find a location that matches that scan. GPS and other tools help it not have to search the whole world, making this quick and easy.
Google, for example, uses a 2D map of the texture of the road as seen by LIDAR. (The use of LIDAR means the image is the same night and day.) In this map you see the location of things like curbs and lane markers, but also all the defects in those lane markers and the road surface itself. Every crack and repair is visible. Just as you, a human being, know where you are by recognizing things around you, a robocar does the same thing.
Some providers measure things about the 3D world around them. By noting where poles, signs, trees, curbs, buildings and more are, you can also figure out where you are. Road texture is very accurate but fails if the road is covered with fresh snow. (3D objects also change shape in heavy snow.)
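As a rough illustration of the scan-matching idea (my own sketch, not any particular team's implementation), the following Python snippet slides a small, freshly sensed texture patch over a stored map patch and returns the offset with the best normalized correlation. The names and the simple 2D grid representation are assumptions for illustration only.

```python
import numpy as np

def localize(map_patch: np.ndarray, scan: np.ndarray):
    """Slide a small sensor scan over a stored map patch and return the
    (row, col) offset with the highest normalized correlation.

    map_patch: 2D grid of road texture (e.g. LIDAR reflectivity) from the map.
    scan:      smaller 2D grid of the same texture as currently sensed.
    """
    h, w = scan.shape
    scan_norm = (scan - scan.mean()) / (scan.std() + 1e-9)
    best_score, best_offset = -np.inf, (0, 0)
    for r in range(map_patch.shape[0] - h + 1):
        for c in range(map_patch.shape[1] - w + 1):
            window = map_patch[r:r + h, c:c + w]
            win_norm = (window - window.mean()) / (window.std() + 1e-9)
            score = float((scan_norm * win_norm).mean())
            if score > best_score:
                best_score, best_offset = score, (r, c)
    return best_offset, best_score
```

Because GPS and odometry already narrow the search to a small patch of map, even a brute-force search like this stays cheap in practice.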
Once you find out where you are (the problem called “localization”) you want a map to tell you where the lanes are so you can drive them. That’s a more traditional computer map, though much more detailed than the typical navigation app map.
Some teams hope to get a car to drive without a map. That is possible for simpler tasks like following a road edge or a lane. There you look for a generic idea of what lane markings or road edges should look like, find them, work out the lane geometry, and stay in the lane you want to drive in. This is a way to get a car up and running fast. It is what humans do, most of the time.
Driving without a map means making a map
Most teams aim for more than map-free driving, because software good enough to drive without a map is also software good enough to make a map. To drive without a map you must understand the geometry of the road and where you are on it. You must understand even more, like what to do at intersections or off-ramps.
Creating maps is effectively the act of saying, “I will remember what previous cars to drive on this road learned about it, and make use of that the next time a car drives it.”
Put this way it seems crazy not to build and use maps, even with the challenges listed below. Perhaps some day the technology will be so good that it can’t be helped by remembering, but that is not this day.
The big advantages of the map
There are many strong advantages of having the map:
Human beings can review the maps built by software, and correct errors. You don’t need software that understands everything. You can drive a tricky road that software can’t figure out. (You want to keep this to a minimum to control costs and delays, but you don’t want to give it up entirely.)
Even if software does all the map building, you can do it using arbitrary amounts of data and computing power in cloud servers. To drive without a map you must process the data in real time with limited on-board computing resources.
You can take advantage of multiple scans of the road from different lanes and vantage points. You can spot things that moved.
You can make use of data from other sources such as the cities and road authorities themselves.
You can cooperate with other players — even competitors — to make everybody’s understanding of the road better.
One intermediate goal might be to have cars that can drive with only a navigation map, but use more detailed maps in “problem” areas. This is pretty similar, except in database size, to automatic map generation with human input only on the problem areas. If your non-map driving is trustworthy, such that it knows not to try problem areas, you could follow the lower-cost approach of “don’t map it until somebody’s car pulled over because it could not handle an area.”
Levels of maps
There are two or three components of the maps people are building, in order to perform the functions above. At the most basic level is something not too far above the navigation maps found in phones. That’s a vector map, except with lane level detail. Such maps know how many lanes there are, and usually what lanes connect to what lanes. For example, they will indicate that to turn right, you can use either of the right two lanes at some intersections.
Usually on top of that will be physical dimensions for these lanes, with their shape and width. The position information may be absolute (i.e. GPS coordinates), but in most cases cars are more interested in the position of things relative to one another. It doesn’t matter that you drive exactly the path on the Earth that the lane is in; what matters is that you’re in the right lane relative to the edge of the road. That’s particularly true when you have to deal with re-striping.
Maps will have databases of interesting objects. The most interesting will be traffic signals. It is much easier to decode them if you know exactly where they are in advance. Cars also want to know the geometry of sidewalks and crosswalks to spot where pedestrians will be and what it means if they are there.
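As a concrete, purely illustrative sketch of what such a lane-level vector map might hold, here is a minimal Python data structure. The class and field names are hypothetical, not any vendor's actual schema.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class Lane:
    lane_id: str
    centerline: List[Tuple[float, float]]                 # (x, y) points in a local frame
    width_m: float
    successors: List[str] = field(default_factory=list)   # lane_ids reachable from the end

@dataclass
class TrafficSignal:
    signal_id: str
    position: Tuple[float, float, float]                  # (x, y, z) in the same local frame
    controls: List[str] = field(default_factory=list)     # lane_ids this signal governs

@dataclass
class MapTile:
    tile_id: str
    lanes: Dict[str, Lane] = field(default_factory=dict)
    signals: Dict[str, TrafficSignal] = field(default_factory=dict)
```

The connectivity in `successors` is what lets a planner answer questions like “which of the right two lanes can I turn from at this intersection,” while the signal entries tell the perception system exactly where to look for a traffic light.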
Somewhat independent of this are the databases of texture, objects or edges which the car uses to figure out exactly where it is on the map. A car’s main job is “stay in your lane” which means knowing the trajectory of the lane and where you are relative to the lane.
Even those who hope to “drive without a map” still want the basic navigation map, because driving involves not just staying in lanes, but deciding what to do at intersections. You still need to pick a route, just as humans use maps in tools like Waze. The human using Waze still often has the job of figuring out where the lanes are and which one to be in for turns, and how to make the turn, but a map still governs where you will be making turns.
The cost of maps
The main reason people seek to drive without a map is the cost of making maps. Because there is a cost, it means your map only covers the roads you paid to map. If you can only drive at full safety where you have a map, you have limited your driving area. You might say, “Sorry, I can’t go down that road, I don’t have a map.”
This is particularly true if mapping requires human labour. Companies like Google started by sending human driven cars out to drive roads multiple times to gather data for the map. Software builds the first version of the map, and humans review it. This has to be repeated if the roads change.
Maps are also fairly large, so a lot of data must be moved, but storage is cheap, and almost all of it can be transferred while cars are parked next to wifi.
To bring down this cost, many companies hope to have ordinary “civilian” drivers go out and gather the sensor data, and to reduce the amount of human labour needed to verify and test the maps.
When the road changes
The second big challenge with maps is the fact that roads get modified. The map no longer matches the road. Fortunately, if the map is detailed enough, that’s quite obvious to the car’s software. The bigger challenge is what to do.
This means that even cars that drive on maps must have some ability to drive when the map is wrong, and even absent. The question is, how much ability?
A surprise change in the road should actually be rare. Changes happen every day of course as construction crews go out on jobs, but the change is only a surprise to the first car to encounter it. That very first car will immediately log in the databases that there is a change. If it still drives the road, it will also upload sensor data about the new state of the road. We all see construction zones every day, but how often are we the first car ever to see that zone?
Most construction zones are scheduled and should not be a surprise even to the first car. Construction crews are far from perfect, so there will still be surprises. In the future, as crews all carry smartphones and have strict instructions to log construction activity with that phone before starting, surprises should become even more rare. In addition, in the interests of safety, the presence of such zones is likely to be shared, even among competitors.
Once a problem zone is spotted, all other cars will know about it. Unmanned cars will probably take a very simple strategy and avoid that section of road if they can, until the map is updated. Why take any risk you don’t need to? Cars with a capable human driver in them may decide they can continue through such zones with the guidance of the passenger. (This does not necessarily mean the passenger taking the controls, but instead just helping the car if it gets confused about things like two sets of lane markings, or unusual cones, or a construction flag crew.)
Nissan has also built a system where the car can ask a remote operations center for such advice, if there is data service at the construction zone. Unmanned cars will probably avoid routes where there could be surprise construction in a place with no data service.
As noted above, several teams are trying to make cars that drive without maps, even in construction zones. Even the cars with maps can still make use of such ability. Even if the car is not quite as safe as it is with a correct map, this will be so rare that the overall safety level can still be acceptable. (Real driving today consists of driving a mix of safer and more dangerous roads after all.) The worst case, which should be very rare, would be a car pulling over, deciding it can’t figure out the road and can’t get help from anybody. A crew in another car would come out to fetch it quickly.
The many players in mapping
This long introduction is there to help understand all the different types of efforts that are going on in mapping and localization. There is lots of variation.
Google/Waymo
The biggest and first player, Google’s car team was founded by people who had worked on Google Streetview. For them the idea of getting cars to scan every road in a region was not daunting — they had done it several times before. Their approach uses detailed texture maps of all roads and surrounding areas. Google is really the world’s #1 map company, so this was a perfect match for them.
Waymo’s maps are not based on Google Maps information; they are much more detailed. Unlike Google Maps, which Google gives out free for everybody to build on top of, the Waymo maps are proprietary, at least for now.
Navteq/Here
The company with the silly name of “Here” was originally a mapping company named Navteq. It was purchased by Nokia, renamed to “Here” and then sold to a consortium of German automakers. They will thus share their mapping efforts, and also sell the data to other automakers. In addition, the company gets to gather data from a giant fleet of cars from its owners and customers.
Here’s product is called “HD Maps.” It has some similarity to Google’s efforts in scope, but the company took a lower-cost approach to building it: a 3D map of the world captured with LIDAR. The company describes its approach in an article on its website.
TomTom
The Dutch navigation company was already feeling the pain from the move to phone-based navigation and, with my encouragement, entered the space of self-driving car maps. They have decided to take a “smaller data” approach. Their maps measure the width, not just of the road, but of the space around the road: the distance between whatever you see looking left and looking right. Sensors measure the presence of trees, poles, buildings and more to build a profile of that width along the road. That’s enough to figure out where you are on the map, and also where you are on the road.
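To make the width-profile idea concrete, here is a toy sketch (my own illustration, not TomTom's actual method) that matches a short, freshly measured clearance profile against the profile stored along a mapped route to estimate where along the route the car is.

```python
import numpy as np

def match_width_profile(map_profile: np.ndarray, observed: np.ndarray) -> int:
    """Return the index along the mapped route whose stored left+right clearance
    values best match the short window of widths the car has just measured.

    map_profile: 1D array of corridor widths sampled, say, every metre of route.
    observed:    shorter 1D array of widths measured over the last few metres.
    """
    m = len(observed)
    errors = [
        float(np.abs(map_profile[i:i + m] - observed).mean())
        for i in range(len(map_profile) - m + 1)
    ]
    return int(np.argmin(errors))  # best-matching position along the route
```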
I’m not sure this is the right approach. It can work, but I don’t think there is a lot of merit in keeping the map small. That’s like betting that bandwidth and storage and computing will be expensive in the future. That’s always been the wrong bet.
MobilEye
MobilEye (now a unit of Intel) has cameras in a lot of cars. They provide the ADAS functions (like adaptive cruise control, and emergency braking) for a lot of OEMs, and they are trying to take a lead in self-driving cars. That’s what pushed their value up to $16B when Intel bought them.
MobilEye wants to leverage all those cars by having them scan the world as they drive and look for differences from ME’s compact maps. The maps are very small: just 3D locations of man-made objects around the highway. The location of these objects can be determined by a camera using motion parallax, i.e. how things like signs and poles move against the background as the car travels.
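In the simplest pinhole-camera case, the parallax geometry reduces to the familiar stereo relation: an object at depth Z shifts by roughly f·b/Z pixels when the camera moves a lateral baseline b between frames. Here is a minimal sketch of that approximation (a simplification of the general idea, not MobilEye's actual pipeline).

```python
def depth_from_parallax(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Rough depth of a static roadside object seen in two camera frames.

    focal_px:     camera focal length, in pixels
    baseline_m:   lateral distance the camera moved between the two frames
    disparity_px: how far the object's image shifted between the frames
    """
    if disparity_px <= 0:
        raise ValueError("object must show measurable parallax")
    return focal_px * baseline_m / disparity_px

# Example: 1000 px focal length, 2 m of travel, 40 px of shift -> about 50 m away.
print(depth_from_parallax(1000.0, 2.0, 40.0))
```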
ME believes they can get this data compact enough so that every car with their gear can be uploading updates to maps over the cell network. That way any changes to the road will be reflected in their database quickly, and before they get dramatic, and before a self-driving car gets there.
This is a good plan, and the company that does this with the most cars sending data will have an advantage. Like TomTom, though, this makes the questionable bet that low bandwidth will be an important edge. A more interesting question is how much value there is in live updates. A fleet that is 10x bigger will discover a change to the road sooner, but is there a big advantage to discovering it in 1 minute vs. 10 minutes?
Tesla
Tesla is one of the few companies hoping to drive without a map, or with a very limited map. A very limited map is more like a phone navigation map — it is not used to localize or plan, but does provide information on specific locations, such as details about off-ramps, or locations of potholes.
Tesla also has an interesting edge because they have many cars out there in production with their Autopilot system. That gives them huge volumes of new data every day, though it is the limited data of their cameras. Having customers gather data about the roads has given them a jump up.
Civil Maps
Civil Maps is a VC funded mapping startup. Their plan is to use neural network AI techniques to do a better job at turning image and sensor data into maps. That’s a good, but fairly obvious idea. The real challenge will be just how well they can do it. When it comes to the question of turning the map into a map of lane paths that guide where the vehicle drives, errors can’t be tolerated. If the AI software can’t figure out where the lane is, the software in the car isn’t going to do it either. If successful, the key will be to reduce the amount of human QA done on the maps, not to eliminate it.
Civil Maps publishes technical articles on their web site about their approaches — kudos to them.
DeepMap
DeepMap is another VC funded startup trying to generate a whole map ecosystem. They have not said a lot about their approach, other than they want to use data from production cars rather than having survey fleets do the mapping and re-mapping. That’s hardly a big leap — everybody will use that data if they can get it, and the battle there will partly depend on who has access to the data streams from cars that are out driving with good sensors. We’ll see in the future what other special sauce they want to provide.
Others
Almost every team has some mapping effort. Most teams do not roam a large enough set of roads to have encountered the cost and scaling problems of mapping yet. Only those attempting production cars (like Tesla and Audi) that allow driving without constant supervision have truly needed to deal with a very wide road network. In fact, those planning taxi fleets will not have to cover a wide road network for a number of years.
Most players expect to buy from a provider if they can. While all teams seek competitive edges, this is one sector where the edge is smaller and the value of cooperation is high. Indeed, the big question for mapping as an industry is whether it will be cooperative, as in the case of three German automakers co-owning Here, or whether it will become a competitive advantage, with one player making a better product because they have better or cheaper maps.
Infrastructure providers
It seems natural for the folks who build and maintain infrastructure to map it. A few things stand in the way of that. Because teams will be trusting the safety of their vehicles to their maps, they need to be very sure about the QA. That means either doing it themselves, or working with a provider whose QA process they can certify.
Working with the thousands of agencies who maintain and build roads is another story. Making all their data consistent and safety critical is a big challenge. Providers will certainly make use of data that infrastructure providers offer, but they will need to do expensive work on it in some cases.
Infrastructure providers can and should work to make sure that “surprises” are very rare. They will never be totally eliminated, but things can be improved. One simple step would be the creation of standardized databases for data on roads and road work. Authorities can pass laws saying that changes to the road can’t be done until they are logged in a smartphone app. This is not a big burden — everybody has smartphones, and those phones know where they are. In fact, smartphones used by contractors can even get smarts to notice that the contractor might be doing work without logging it. Old cheap phones could be stuck in every piece of road maintenance equipment. Those phones would say, “Hmm, I seem to suddenly be parked on a road but there is no construction logged for this area” and alert the workers or a control center.
All new road signs could also be logged by a smartphone app. A law could be made to say, “A road sign is not actually legally in effect until it is logged.” In addition, contractors can face financial penalties for changing roads without logging them. “Fire up the app when you start and end work or you don’t get paid” — that will make it standard pretty quickly.
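As a sketch of what a standardized road-work log entry might look like (the field names are hypothetical, not an existing standard), it needs little more than who, where, what and when:

```python
import json
from datetime import datetime, timezone

# Hypothetical road-work log entry; the schema is illustrative, not a real standard.
work_log = {
    "contractor_id": "ACME-ROADWORKS-042",
    "logged_at": datetime.now(timezone.utc).isoformat(),
    "location": {"lat": 37.4275, "lon": -122.1697, "radius_m": 150},
    "activity": "lane closure",
    "lanes_affected": ["NB-2"],
    "expected_end": "2017-08-15T18:00:00+00:00",
}

print(json.dumps(work_log, indent=2))
```

A record this small is easy for a phone app to submit before work starts, and easy for map providers to merge into their change databases.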
In episode five of season three we compare and contrast AI and data science, take a listener question about getting started in machine learning, and listen to an interview with Joaquin Quiñonero Candela.
Talking Machines is now working with Midroll to source and organize sponsors for our show. In order to find sponsors who are a good fit for us, and of worth to you, we’re surveying our listeners.
If you’d like to help us get a better idea of who makes up the Talking Machines community take the survey at http://podsurvey.com/MACHINES.
If you enjoyed this episode, you may also want to listen to: