Economist predicts job loss to machines, but sees long-term hope
Robots break new ground in construction industry
#256: Socially Assistive Robots, with Maja Matarić
In this episode, Audrow Nash speaks with Maja Matarić, a professor at the University of Southern California and the Chief Scientific Officer of Embodied, about socially assistive robotics. Socially assistive robotics aims to endow robots with the ability to help people through individual, non-contact assistance in convalescence, rehabilitation, training, and education. For example, a robot could help a child on the autism spectrum connect with more neurotypical children, or help motivate a stroke victim to follow their exercise routine for rehabilitation (see the videos below). In this interview, Matarić discusses the care gap in health care, how her work leverages research in psychology to make robots engaging, and opportunities in socially assistive robotics for entrepreneurship.
A short video about how personalized robots might act as a “social bridge” between a child on the autism spectrum and a more neurotypical child.
A short video about how a robot could assist stroke victims in their recovery.
Maja Matarić
Maja Matarić is a professor and the Chan Soon-Shiong Chair in the Computer Science Department, the Neuroscience Program, and the Department of Pediatrics at the University of Southern California, founding director of the USC Robotics and Autonomous Systems Center (RASC), co-director of the USC Robotics Research Lab, and Vice Dean for Research in the USC Viterbi School of Engineering. She received her PhD in Computer Science and Artificial Intelligence from MIT, her MS in Computer Science from MIT, and her BS in Computer Science from the University of Kansas.
Links
At SXSW, the future is a place where robots make your latte and grocery shopping is like gaming
Carnegie Mellon Research – Invisible, Stretchable Circuits
The Many Uses of Bellows
Worcester Polytechnic Institute Students Creating Security Robot Prototype for U.S. Air Force
Origami-inspired self-locking foldable robotic arm
Sherlock Drones—automated investigators tackle toxic crime scenes
European Robotics Forum 2018: Over 900 roboticists meet in Tampere, Finland
The European Robotics Forum 2018 (ERF2018), the most influential meeting of the robotics community in Europe, takes place in Tampere on 13-15 March 2018. ERF brings together over 900 leading scientists, companies, and policymakers for the largest robotics networking event in Europe.
Under the theme “Robots and Us”, more than 50 workshops cover current societal and technical themes, including human-robot collaboration and how robotics can improve industrial productivity and service sector operations.
During the opening of ERF2018, on 13 March, Juha Heikkilä, Head of Unit at EC DG CNECT, explained that “the European Robotics Forum has been instrumental in breaking down silos and bringing together a strong, integrated robotics community in Europe. This year’s theme, “Robots and Us”, reflects the increasingly broad impact of robotics and allows discussing not just technology but also the all-important non-technological aspects of robotics.”
Bernd Liepert, President of euRobotics and Chief Innovation Officer at KUKA, highlighted that “‘Robots and us’ implies that we need to put significant work into topics such as safe human-robot interaction on the technical side, but also raise awareness about the offerings of modern technology to the wider public.”
Anne Berner, Minister of Transport and Communications of Finland, emphasized in her keynote that “digitalization and robotics require changes in mindset from both the public and the private sector in Finland. We, as the public side, create a framework for change, but the responsibility for implementing lies with companies. Robotics and automation walk on the road paved with data. Data that is not shared is benefitting no one. Data needs to be combined with other data and then refined and enriched with knowledge to create value”.
Tomas Hedenborg, President of Orgalime and Group CEO of Fastems, added that “in the era of digitalization, automation and more specifically robotization, is in the core of the transformation. The huge innovation potential includes major societal challenges that need to be tackled in parallel.”
The Opening concluded with a panel discussion on “How should society prepare for the rapid development of robotics?” with the keynote speakers and representatives of the local organisers.
Photo, from left to right: Tomas Hedenborg, Orgalime/Fastems; Marketta Niemelä, VTT; Minna Lanz, Tampere University of Technology; Jyrki Kasvi, Finnish MP; Bernd Liepert, euRobotics/KUKA; Thomas Pilz, Pilz; Juha Heikkilä, EC
The conference showcases the newest research in the field and the projects funded under the EU’s Horizon 2020 research programme. By bringing together over 50 sponsors and exhibitors, among them Fastems, KUKA and Sandvik (Platinum sponsors) and Schunk (Gold sponsor), the event offers a unique window onto European robotics, while also putting the spotlight on the Nordic markets.
ERF2018 had the honour to welcome Markku Markkula, First Vice-President of the European Committee of the Regions, who visited the exhibition area and gave a speech at the reception hosted by the Tampere City Hall.
Photo from left to right: Anna-Kaisa Heinämäki, Deputy Mayor of City of Tampere; Bernd Liepert, President of euRobotics; Reinhard Lafrenz, euRobotics Secretary General; Jyrki Latokartano from The Robotics Society in Finland; Markku Markkula, First Vice-President of the European Committee of the Regions
The Awards Ceremony on 14 March will announce the winners of the Georges Giralt PhD Award 2017 and 2018, the TechTransfer Award 2018 and the European Robotics League Service and Emergency Robots Season 2017-2018.
Since its start in San Sebastian in 2010, the European Robotics Forum has grown into a major annual event with hundreds of attendees every year. In 2017, the conference was held in Edinburgh.
The European Robotics Forum is organised by euRobotics under SPARC, the Public-Private partnership for Robotics in Europe. ERF2018 is hosted by The Robotics Society in Finland in collaboration with Tampere University of Technology.
The autonomous “selfie drone”
By Rob Matheson
If you’re a rock climber, hiker, runner, dancer, or anyone who likes recording themselves while in motion, a personal drone companion can now do all the filming for you — completely autonomously.
Skydio, a San Francisco-based startup founded by three MIT alumni, is commercializing an autonomous video-capturing drone — dubbed by some as the “selfie drone” — that tracks and films a subject, while freely navigating any environment.
Called R1, the drone is equipped with 13 cameras that capture omnidirectional video. It launches and lands through an app — or by itself. On the app, the R1 can also be preset to certain filming and flying conditions or be controlled manually.
The concept for the R1 started taking shape almost a decade ago at MIT, where the co-founders — Adam Bry SM ’12, Abraham Bacharach PhD ’12, and Matt Donahoe SM ’11 — first met and worked on advanced, prize-winning autonomous drones. Skydio launched in 2014 and is releasing the R1 to consumers this week.
“Our goal with our first product is to deliver on the promise of an autonomous flying camera that understands where you are, understands the scene around it, and can move itself to capture amazing video you wouldn’t otherwise be able to get,” says Bry, co-founder and CEO of Skydio.
Deep understanding
Existing drones, Bry says, generally require a human pilot. Some offer pilot-assist features that aid the human controller. But that’s the equivalent of having a car with adaptive cruise control — which automatically adjusts vehicle speed to maintain a safe distance from the cars ahead, Bry says. Skydio, on the other hand, “is like a driverless car with level-four autonomy,” he says, referring to the second-highest level of vehicle automation.
R1’s system integrates advanced algorithm components spanning perception, planning, and control, which give it unique intelligence “that’s analogous to how a person would navigate an environment,” Bry says.
On the perception side, the system uses computer vision to determine the location of objects. Using a deep neural network, it compiles information on each object and identifies each individual by, say, clothing and size. “For each person it sees, it builds up a unique visual identification to tell people apart and stays focused on the right person,” Bry says.
That data feeds into a motion-planning system, which pinpoints a subject’s location and predicts their next move. It also recognizes maneuvering limits in one area to optimize filming. “All information is constantly traded off and balanced … to capture a smooth video,” Bry says.
Finally, the control system takes all information to execute the drone’s plan in real time. “No other system has this depth of understanding,” Bry says. Others may have one or two components, “but none has a full, end-to-end, autonomous [software] stack designed and integrated together.”
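To make the division of labor concrete, here is a minimal, purely illustrative Python sketch of a perception-planning-control loop for a subject-tracking drone. Everything in it (the 2-D world, the proportional controller, the function names) is a simplifying assumption for illustration, not Skydio’s actual software stack.

```python
"""Toy illustration of the perception -> planning -> control split described
above: perception estimates where the subject is, planning predicts their next
move and picks a camera position, and control steps the drone toward it."""

import math

def perceive(true_subject_pos):
    # Perception: estimate the subject's position from sensor data.
    # Here we simply pass the simulated position through.
    return true_subject_pos

def plan(subject_pos, subject_vel, follow_distance=3.0, lookahead=0.5):
    # Planning: predict the subject's next position and choose a camera
    # position that trails them by a fixed follow distance.
    pred = (subject_pos[0] + subject_vel[0] * lookahead,
            subject_pos[1] + subject_vel[1] * lookahead)
    heading = math.atan2(subject_vel[1], subject_vel[0]) if any(subject_vel) else 0.0
    return (pred[0] - follow_distance * math.cos(heading),
            pred[1] - follow_distance * math.sin(heading))

def control(drone_pos, target_pos, gain=0.4):
    # Control: a simple proportional step toward the planned position.
    return (drone_pos[0] + gain * (target_pos[0] - drone_pos[0]),
            drone_pos[1] + gain * (target_pos[1] - drone_pos[1]))

# Simulate a runner moving in a straight line and the drone following.
drone = (0.0, 0.0)
for t in range(20):
    subject = (1.0 * t, 0.5 * t)   # simulated subject position
    subject_vel = (1.0, 0.5)       # simulated subject velocity
    estimate = perceive(subject)
    target = plan(estimate, subject_vel)
    drone = control(drone, target)

print("final drone position:", tuple(round(c, 2) for c in drone))
```

The point of the sketch is only the hand-off between the three layers; a real system would fuse many cameras, track multiple people, and plan in 3-D while avoiding obstacles.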
For users, the end result, Bry says, is a drone that’s as simple to use as a camera app: “If you’re comfortable taking pictures with your iPhone, you should be comfortable using R1 to capture video.”
A user places the drone on the ground or in their hand, and swipes up on the Skydio app. (A manual control option is also available.) The R1 lifts off, identifies the user, and begins recording and tracking. From there, it operates completely autonomously, staying anywhere from 10 to 30 feet away from a subject in autonomous mode, or up to 300 feet away under manual control, depending on Wi-Fi availability.
When batteries run low, the app alerts the user. Should the user not respond, the drone will find a flat place to land itself. After the flight — which can last about 16 minutes, depending on speed and use — users can store captured video or upload it to social media.
Through the app, users can also switch between several cinematic modes. For instance, with “stadium mode,” for field sports, the drone stays above and moves around the action, following selected subjects. Users can also direct the drone where to fly (in front, to the side, or constantly orbiting). “These are areas we’re now working on to add more capabilities,” Bry says.
The lightweight drone can fit into an average backpack and runs about $2,500.
Skydio takes wing
Bry came to MIT in 2009, “when it was first possible to take a [hobby] airplane and put super powerful computers and sensors on it,” he says.
He joined the Robust Robotics Group, led by Nick Roy, an expert in drone autonomy. There, he met Bacharach, now Skydio’s chief technology officer, who that year was on a team that won the Association for Unmanned Vehicle Systems International contest with an autonomous minihelicopter that navigated the aftermath of a mock nuclear meltdown. Donahoe was a friend and graduate student at the MIT Media Lab at the time.
In 2012, Bry and Bacharach helped develop autonomous-control algorithms that could calculate a plane’s trajectory and determine its “state” — its location, physical orientation, velocity, and acceleration. In a series of test flights, a drone running their algorithms maneuvered around pillars in the parking garage under MIT’s Stata Center and through the Johnson Athletic Center.
These experiences were the seeds of Skydio, Bry says: “The foundation of the [Skydio] technology, and how all the technology works and the recipe for how all of it comes together, all started at MIT.”
After graduation, in 2012, Bry and Bacharach took jobs in industry, landing at Google’s Project Wing delivery-drone initiative — a couple years before Roy was tapped by Google to helm the project. Seeing a need for autonomy in drones, in 2014, Bry, Bacharach, and Donahoe founded Skydio to fulfill a vision that “drones [can have] enormous potential across industries and applications,” Bry says.
For the first year, the three co-founders worked out of Bacharach’s dad’s basement, getting “free rent in exchange for helping out with yard work,” Bry says. Working with off-the-shelf hardware, the team built a “pretty ugly” prototype. “We started with a [quadcopter] frame and put a media center computer on it and a USB camera. Duct tape was holding everything together,” Bry says.
But that prototype landed the startup a seed round of $3 million in 2015. Additional funding rounds over the next few years — more than $70 million in total — helped the startup hire engineers from MIT, Google, Apple, Tesla, and other top tech firms.
Over the years, the startup refined the drone and tested it in countries around the world — experimenting with high and low altitudes, heavy snow, fast winds, and extreme high and low temperatures. “We’ve really tried to bang on the system pretty hard to validate it,” Bry says.
Athletes, artists, inspections
Early buyers of Skydio’s first product are primarily athletes and outdoor enthusiasts who record races, training, or performances. For instance, Skydio has worked with Mikel Thomas, Olympic hurdler from Trinidad and Tobago, who used the R1 to analyze his form.
Artists, however, are also interested, Bry adds: “There’s a creative element to it. We’ve had people make music videos. It was themselves in a driveway or forest. They dance and move around and the camera will respond to them and create cool content that would otherwise be impossible to get.”
In the future, Skydio hopes to find other applications, such as inspecting commercial real estate, power lines, and energy infrastructure for damage. “People have talked about using drones for these things, but they have to be manually flown and it’s not scalable or reliable,” Bry says. “We’re going in the direction of sleek, birdlike devices that are quiet, reliable, and intelligent, and that people are comfortable using on a daily basis.”
Healthcare’s regulatory AI conundrum
It was the last question of the night and it hushed the entire room. An entrepreneur expressed his aggravation about the FDA’s antiquated regulatory environment for AI-enabled devices to Dr. Joel Stein of Columbia University. Stein, a leader in rehabilitative robotic medicine, sympathized with the startup, knowing full well that tomorrow’s exoskeletons will rely heavily on machine intelligence. Nodding her head in agreement, Kate Merton of JLabs shared the sentiment. Her employer, Johnson & Johnson, is partnered with Google to revolutionize the operating room through embedded deep learning systems. In many ways this astute observation encapsulated this past Tuesday’s RobotLab, whose topic, “The Future Of Robotic Medicine,” explored the paradox of software-enabled therapeutics: a better quality of life on offer, and societal, technological, and regulatory challenges ahead.
To better understand the frustration expressed at RobotLab, a review of the policies of the Food & Drug Administration (FDA) relative to medical devices and software is required. Most devices fall within criteria established in the 1970s: a “build and freeze” model whereby a filed product doesn’t change over time, which currently excludes therapies that rely on neural networks and deep learning algorithms that evolve with use. Charged with progressing its regulatory environment, the Obama Administration established a Digital Health Program tasked with implementing new regulatory guidance for software and mobile technology. This initiative eventually led Congress to pass the 21st Century Cures Act (“Cures Act”) in December 2016. An important aspect of the Cures Act is its provisions for digital health products, medical software, and smart devices. The legislators singled out AI for its unparalleled ability to support human decision making, referred to as “Clinical Decision Support” (“CDS”), with examples like Google and IBM Watson. Last year, the administration updated the Cures Act with a new framework that included a Digital Health Innovation Action Plan. These steps have been leading a change in the FDA’s attitude towards mechatronics, updating its traditional approach to devices to include software and hardware that iterates with cognitive learning. The Action Plan states that “an efficient, risk-based approach to regulating digital health technology will foster innovation of digital health products.” In addition, the FDA has been offering tech partners the ability to file a Digital Health Software Pre-Certification (“Pre-Cert”) to fast-track the evaluation and approval process; current Pre-Cert pilot filings include Apple, Fitbit, Samsung and other leading technology companies.
Another way for AI and robotic devices to receive approval from the FDA is through its “De Novo premarket review pathway.” According to the FDA’s website, the De Novo program is designed for “medical devices that are low to moderate risk and have no legally marketed predicate device to base a determination of substantial equivalence.” Many computer vision systems fall into the De Novo category, providing “triage” software that efficiently identifies disease markers based on training data of radiology images. As an example, last month the FDA approved Viz.ai, a new type of “clinical decision support software designed to analyze computed tomography (CT) results that may notify providers of a potential stroke in their patients.”
Dr. Robert Ochs of the FDA’s Center for Devices and Radiological Health explains, “The software device could benefit patients by notifying a specialist earlier thereby decreasing the time to treatment. Faster treatment may lessen the extent or progression of a stroke.” The Viz.ai algorithm has the ability to change the lives of the nearly 800,000 annual stroke victims in the USA. The data platform will enable clinicians to quickly identify patients at risk for stroke by analyzing thousands of CT brain scans for blood vessel blockages and then automatically sending alerts via text message to neurovascular specialists. Viz.ai promises to streamline the diagnosis process by cutting the traditional time it takes for radiologists to review, identify, and escalate high-risk cases to specialists.
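As a rough illustration of how such triage software sits alongside the normal radiology workflow, here is a hedged Python sketch. The scoring function, threshold, and alert text are hypothetical stand-ins, not Viz.ai’s actual model or interface.

```python
"""Hypothetical sketch of a CT triage-and-alert flow: score a CT series for a
suspected large vessel occlusion (LVO) and, above a threshold, queue a
notification to a neurovascular specialist. Triage only; the standard
radiology read still happens regardless of the alert."""

from dataclasses import dataclass

@dataclass
class CTSeries:
    patient_id: str
    series_id: str

def lvo_suspicion_score(series: CTSeries) -> float:
    # Stand-in for a trained image-analysis model; returns a score in [0, 1].
    return 0.91  # pretend the model flagged this series

def notify_specialist(series: CTSeries, score: float) -> str:
    # Stand-in for the text-message alert sent in parallel with normal review.
    return (f"Suspected LVO (score {score:.2f}) on series {series.series_id} "
            f"for patient {series.patient_id}; please review.")

def triage(series: CTSeries, threshold: float = 0.8):
    score = lvo_suspicion_score(series)
    if score >= threshold:
        return notify_specialist(series, score)
    return None  # below threshold: no alert, standard workflow continues

print(triage(CTSeries(patient_id="anon-001", series_id="CT-20180301-17")))
```

The design choice worth noting is that the software only reorders who looks first; it does not replace the radiologist’s read, which is what keeps it in the lower-risk “clinical decision support” category.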
Dr. Chris Mansi, Viz.ai CEO, says “The Viz.ai LVO Stroke Platform is the first example of applied artificial intelligence software that seeks to augment the diagnostic and treatment pathway of critically unwell stroke patients. We are thrilled to bring artificial intelligence to healthcare in a way that works alongside physicians and helps get the right patient, to the right doctor at the right time.” According to the FDA’s statement, Mansi’s company “submitted a study of only 300 CT scans that assessed the independent performance of the image analysis algorithm and notification functionality of the Viz.ai Contact application against the performance of two trained neuro-radiologists for the detection of large vessel blockages in the brain. Real-world evidence was used with a clinical study to demonstrate that the application could notify a neurovascular specialist sooner in cases where a blockage was suspected.”
Viz.ai joins a market for AI diagnosis software that is growing rapidly and projected to eclipse $6 billion by 2021 (Frost & Sullivan), an increase of more than forty percent since 2014. According to the study, AI has the ability to reduce healthcare costs by nearly half, while at the same time improving the outcomes for a third of all US healthcare patients. However, diagnosis software is only part of the AI value proposition; adding learning algorithms throughout the entire healthcare ecosystem could provide new levels of quality of care.
At the same time, the demand for AI treatment is taking its toll on an underfunded FDA, which is having difficulty keeping up with the new filings to review computer-aided therapies, from diagnosis to robotic surgery to invasive therapeutics. In addition, many companies are currently unable to afford the seven-figure investment required to file with the FDA, leading to missed opportunities to find cures for the most intractable diseases. The Atlantic reported last fall about a Canadian company, Cloud DX, that is still waiting for approval for its AI software that analyzes coughing data via audio wavelengths to detect lung-based diseases (i.e., asthma, tuberculosis, and pneumonia). Cloud DX’s founder, Robert Kaul, shared with the magazine, “There’s a reason that tech companies like Google haven’t been going the FDA route [of clinical trials aimed at diagnostic certification]. It can be a bureaucratic nightmare, and they aren’t used to working at this level of scrutiny and slowness.” It took Cloud DX two years and close to a million dollars to achieve the basic ISO 13485 certification required to begin filing with the agency. Kaul questioned, “How many investors are going to give you that amount of money just so you can get to the starting line?”
Last month, Rani Therapeutics raised $53 million to begin clinical trials for its new robotic pill. Rani’s solution could usher in a new paradigm of needle-free therapy, whereby drugs are mechanically delivered to the exact site of infection. Unfortunately, innovations like Rani’s are getting backlogged amid a shortage of knowledgeable examiners able to review the clinical data. Bakul Patel, the FDA’s new Associate Center Director for Digital Health, says that one of his top priorities is hiring: “Yes, it’s hard to recruit people in AI right now. We have some understanding of these technologies. But we need more people. This is going to be a challenge.” Patel is cautiously optimistic: “We are evolving… The legacy model is the one we know works. But the model that works continuously—we don’t yet have something to validate that. So the question is [as much] scientific as regulatory: How do you reconcile real-time learning [with] people having the same level of trust and confidence they had yesterday?”
As I concluded my discussion with Stein, I asked if he thought disabled people will eventually be commuting to work wearing robotic exoskeletons as easily as they do in electric wheelchairs. He answered that it could come within the next decade if society changes its mindset on how we distribute and pay for such therapies. To quote the President, “Nobody knew health care could be so complicated.”
The paradox on robocar accidents
I have written a few times about the unusual nature of robocar accidents. Recently I was discussing this with a former student who is doing some research on the area. As a first step, she began looking at lists of all the reasons that humans cause accidents. (The majority of them, on police reports, are simply that one car was not in its proper right-of-way, which doesn’t reveal a lot.)
This led me, though, to the following declaration, which goes against most early intuitions.
Every human accident teaches us something about the way people have accidents. Every robocar accident teaches us about a way robocars will never have an accident again.
While this statement is not 100% true, it reveals the stark difference between the way people and robots drive. The whole field of actuarial science is devoted to understanding unusual events (with car accidents being the primary subject) and their patterns. When you notice a pattern, you can calculate probabilities that other people will repeat it, and insurers use that to price insurance and even set policy.
When a robocar team discovers their car has made any mistake, and certainly caused any incident, their immediate move is to find and fix the cause of the mistake, and update the software. That particular mistake will generally never happen again. We have learned very little about the pattern of robocar accidents, though the teams have definitely learned something to fix in their software. Since for now, and probably forever, the circumstances of robocar accidents will be a matter of public record, all teams and thus all cars will also learn portions of the same thing. (More on that below.)
The rule won’t be entirely true. There are some patterns. There are patterns of software bugs too — every programmer knows the risk of off-by-one errors and memory allocation mistakes. We actually build our compilers, tools and even processors to help detect and prevent all the known common programming mistakes we can, and there’s an active field of research to develop AI able to do that. But that does not teach us a lot about what type of car accidents this might generate. We know robocars will suffer general software crashes and any good system has to be designed to robustly handle that with redundancies. There is a broad class of errors known as perception false negatives, where a system fails to see or understand something on the road, but usually learning about one such error does not teach us much about the others of its class. It is their consequences which will be similar, not their causes.
Perception errors are the easiest to analogize to human activity. Humans also don’t see things before an accident. This, however, is due to “not paying attention” or “not looking in that direction,” something that robots won’t ever be guilty of. A robot’s perception failure would be like a human’s mini-stroke temporarily knocking out part of our visual cortex (i.e., not a real-world issue) or a flaw in the “design” of the human brain. In the robot, however, design flaws can usually be fixed.
There are some things that can’t be fixed, and thus teach us patterns. Some things are just plain hard, like seeing in fog or under snow, or seeing hidden vehicles/pedestrians. The fact that these things are hard can help you calculate probabilities of error.
This is much less true for the broad class of accidents where the vehicle perceives the world correctly but decides the wrong thing to do. These are the mistakes that, once made, will probably never be made again.
There actually have been very few accidents involving robocars that were the fault of the robocar system. In fact, the record is surprisingly good. I am not including things like Tesla autopilot crashes — the Tesla autopilot is designed to be an incomplete system and they document explicitly what it won’t do.
Indeed, the only pattern I see from the few reported incidents is the obvious one — they happened in unusual driving situations. Merging with a bus when one wide lane is often used by two cars at once. Dealing with a lane-splitting motorcycle after aborting an attempt at a lane change. (Note that police ruled against the motorcyclist in this case, but it is being disputed in court.) Whatever faults occurred here have been fixed by now.
Developers know that unusual situations are an area of risk, so they go out searching for them, and use simulators and test tracks to work with them extensively. You may not learn patterns, but you can come up with probability estimates of what fraction of everyday driving involves extraordinary situations. This can give you insurance-style confidence: even if you aren’t sure you handle every unusual situation, the overall total risk is low.
One place this can be useful is in dealing with equipment failures. Today, a car driving only with vision and radar is not safe enough; a LIDAR is needed for the full safety level. If the LIDAR fails, however, the car is not blind; it is just a bit less safe. While you would not drive for miles with the LIDAR off, you might judge that the risk of driving to a safe spot to pull off on just camera and radar is acceptable, simply because the amount of driving in that mode will be very small. We do the same thing with physical hardware — driving with a blown-out tire is riskier, but we can usually get off the road. We don’t insist every car have 5 wheels to handle that situation.
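A back-of-the-envelope calculation shows why that judgment can be reasonable. The per-mile risk numbers below are invented purely for illustration; the point is only that a higher risk per mile times very few miles can still be a small total.

```python
"""Sketch of the 'limp home on camera and radar' risk argument, with made-up
per-mile risk figures used only to illustrate the arithmetic."""

full_stack_risk_per_mile = 1e-7   # hypothetical incident risk with LIDAR working
degraded_risk_per_mile   = 1e-5   # hypothetical, 100x worse, with LIDAR failed

normal_miles_per_year = 12_000    # ordinary driving on the full sensor suite
degraded_miles        = 2         # just far enough to reach a safe stopping spot

expected_normal   = full_stack_risk_per_mile * normal_miles_per_year
expected_degraded = degraded_risk_per_mile * degraded_miles

print(f"expected incidents, a year of normal driving: {expected_normal:.2e}")
print(f"expected incidents, one 2-mile degraded run:  {expected_degraded:.2e}")
# Even at 100x the per-mile risk, the short degraded run contributes far less
# total risk than a year of normal driving.
```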
Learning from humans
It is also worth noting that the human race is capable of learning from accidents. In a sense, every traffic safety rule, most road signs, and many elements of road engineering are the result of learning from accidents and improving safety. While the fact that one human misjudges a left turn doesn’t stop other humans from doing so, if it causes a “no left turn” sign to go up, we all stop making that turn there. Ironically, the robot does not need the no-left-turn sign — it will never misjudge the speed of oncoming vehicles and make the turn at the wrong time.
Car design is also guided a lot by the lessons of past accidents. That’s everything from features like crumple zones, which don’t affect human behaviour, to blind-spot warning systems, which do.
Pure neural network approaches
There are a few teams who hope to make a car drive with only a neural network. That is to say, the neural network takes in sensor data and outputs steering controls. Such a system is more akin to humans, in that a flaw found in that approach might be found again in other similarly designed vehicles. After an accident, such a car would have its network retrained so that it never made that precise mistake again (nor any of the other potential mistakes in its training library). This retraining might be very narrow, however.
This is one reason that only smaller teams are trying this approach. Larger teams like Waymo are making very extensive use of neural networks, but primarily in the area of improving perception, not in making driving decisions. If a perception error is discovered, the network retraining to fix it will ideally be quite extensive, to avoid related errors. Neural network perception errors also tend to be intermittent — i.e., the network fails to see something in one frame, but sees it in a later frame. The QA effort is to make it see things sooner and more reliably.
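The architectural difference can be sketched in a few lines of Python. The function names and stand-in components below are hypothetical, meant only to contrast the two styles, not to describe any team’s actual system.

```python
"""Contrast between an end-to-end driving network and the more common modular
pipeline, where a network handles perception and a separate planner decides."""

def end_to_end_drive(sensor_frame, policy_net):
    # One network, one output: retraining after a mistake touches the whole policy.
    return policy_net(sensor_frame)            # -> steering/throttle command

def modular_drive(sensor_frame, perception_net, planner, controller):
    # The perception network only labels the world; driving decisions live in
    # inspectable planning code that can be fixed directly after an incident.
    detections = perception_net(sensor_frame)  # objects, lanes, free space
    plan = planner(detections)                 # rule/optimization-based decision
    return controller(plan)                    # low-level command

# Tiny demo with stand-in components.
fake_policy     = lambda frame: {"steer": 0.0, "throttle": 0.1}
fake_perception = lambda frame: {"objects": [], "lanes": 2}
fake_planner    = lambda det: {"action": "keep_lane"}
fake_controller = lambda plan: {"steer": 0.0, "throttle": 0.1}

print(end_to_end_drive("frame-0", fake_policy))
print(modular_drive("frame-0", fake_perception, fake_planner, fake_controller))
```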
Sharing crash reports
This raises the interesting question of sharing crash reports. Developers are now spending huge amounts of money racking up test miles on their cars. They want to encounter lots of strange situations and learn from them. Waymo’s 4 million miles of testing hardly came for free. This makes them and others highly reluctant to release to the public all that hard-won information.
The 2016 NHTSA car regulations, though highly flawed, included a requirement that vendors share all the sensor data on any significant incident. As you might expect, vendors resisted that requirement, and it was gone from the 2017 proposed regulations (along with almost all the others).
Vendors are unlikely to want to share every incident, and they need incentives to do lots of test driving, but it might be reasonable to talk about sharing for any actual accident. It is not out of the question that this could be forced by law, though it is an open question on how far this should go. If an accident is the system’s fault, and there is a lawsuit, the details will certainly come out, but companies don’t want to air their dirty laundry. Most are so afraid of accidents that the additional burden of having their mistake broadcast to all won’t change the incentives.
Generally, companies all want as much test data as they can get their hands on. They might be convinced that while unpleasant, sharing all accident data helps the whole industry. Only a player who was so dominant that they had more data than everybody else combined (which is Waymo at this time) would not gain more from sharing than they lose.