In this episode of Robots in Depth, Per Sjöborg speaks with Linda Thayer about how to apply for, maintain and use patents.
Linda tells us about the benefits of getting in touch with a patent attorney early in the innovation process. She then walks us through the process of applying for a patent, key dates and important steps.
We also get to hear about defending your patent and international patents.
Uber’s recent self-driving car fatality underscores the fact that the technology is still not ready for widespread adoption. The reality is that there aren’t many places where today’s self-driving cars can actually reliably drive. Companies like Google only test their fleets in major cities, where they’ve spent countless hours meticulously labeling the exact 3-D positions of lanes, curbs, and stop signs.
“The cars use these maps to know where they are and what to do in the presence of new obstacles like pedestrians and other cars,” says Daniela Rus, director of MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL). “The need for dense 3-D maps limits the places where self-driving cars can operate.”
Indeed, if you live along the millions of miles of U.S. roads that are unpaved, unlit, or unreliably marked, you’re out of luck. Such streets are often much more complicated to map, and get a lot less traffic, so companies aren’t incentivized to develop 3-D maps for them anytime soon. From California’s Mojave Desert to New Hampshire’s White Mountains, there are huge swaths of America that self-driving cars simply aren’t ready for.
One way around this is to create systems advanced enough to navigate without these maps. In an important first step, Rus and colleagues at CSAIL have developed MapLite, a framework that allows self-driving cars to drive on roads they’ve never been on before without 3-D maps.
MapLite combines simple GPS data that you’d find on Google Maps with a series of sensors that observe the road conditions. In tandem, these two elements allowed the team to autonomously drive on multiple unpaved country roads in Devens, Massachusetts, and reliably detect the road more than 100 feet in advance. (As part of a collaboration with the Toyota Research Institute, researchers used a Toyota Prius that they outfitted with a range of LIDAR and IMU sensors.)
“The reason this kind of ‘map-less’ approach hasn’t really been done before is because it is generally much harder to reach the same accuracy and reliability as with detailed maps,” says CSAIL graduate student Teddy Ort, who was a lead author on a related paper about the system. “A system like this that can navigate just with on-board sensors shows the potential of self-driving cars being able to actually handle roads beyond the small number that tech companies have mapped.”
The paper, which will be presented in May at the International Conference on Robotics and Automation (ICRA) in Brisbane, Australia, was co-written by Ort, Rus, and PhD graduate Liam Paull, who is now an assistant professor at the University of Montreal.
For all the progress that has been made with self-driving cars, their navigation skills still pale in comparison to humans’. Consider how you yourself get around: If you’re trying to get to a specific location, you probably plug an address into your phone and then consult it occasionally along the way, like when you approach intersections or highway exits.
However, if you were to move through the world like most self-driving cars, you’d essentially be staring at your phone the whole time you’re walking. Existing systems still rely heavily on maps, only using sensors and vision algorithms to avoid dynamic objects like pedestrians and other cars.
In contrast, MapLite uses sensors for all aspects of navigation, relying on GPS data only for a rough estimate of the car’s location. The system first sets both a final destination and what the researchers call a “local navigation goal,” which must be within view of the car. Its perception sensors then generate a path to that point, using LIDAR to estimate the location of the road’s edges. MapLite can do this without physical road markings by making the basic assumption that the road will be relatively flatter than the surrounding areas.
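The flatness assumption lends itself to a simple illustration. The sketch below is purely illustrative (the function name, cell size, and threshold are invented, not taken from the MapLite paper): bin LIDAR returns into small grid cells and keep only the cells whose height spread is low enough to be drivable surface.

```python
# Hypothetical sketch of a flatness heuristic for road detection.
# All names and thresholds here are illustrative assumptions.
from collections import defaultdict

def flat_cells(points, cell_size=0.5, max_z_spread=0.05):
    """Group (x, y, z) LIDAR points into grid cells and keep the cells
    whose height spread is small enough to be flat, road-like surface."""
    cells = defaultdict(list)
    for x, y, z in points:
        key = (int(x // cell_size), int(y // cell_size))
        cells[key].append(z)
    road = []
    for key, zs in cells.items():
        if max(zs) - min(zs) <= max_z_spread:  # flat -> likely road
            road.append(key)
    return sorted(road)

# A flat strip at z ~ 0 next to a raised shoulder near x ~ 1.2
pts = [(0.1, 0.1, 0.00), (0.2, 0.3, 0.01), (0.3, 0.2, 0.02),
       (1.1, 0.1, 0.00), (1.2, 0.2, 0.30)]  # last cell mixes heights
print(flat_cells(pts))  # the mixed-height shoulder cell is rejected
```

A real system would of course fuse many scans over time and fit continuous road-edge curves rather than thresholding single cells.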
“Our minimalist approach to mapping enables autonomous driving on country roads using local appearance and semantic features such as the presence of a parking spot or a side road,” says Rus.
The team developed a system of models that are “parameterized,” which means that they describe multiple situations that are somewhat similar. For example, one model might be broad enough to determine what to do at intersections, or what to do on a specific type of road.
MapLite differs from other map-less driving approaches, which rely more heavily on machine learning: training on data from one set of roads and then being tested on others.
“At the end of the day we want to be able to ask the car questions like ‘how many roads are merging at this intersection?’” says Ort. “By using modeling techniques, if the system doesn’t work or is involved in an accident, we can better understand why.”
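One way to picture a “parameterized” model of the kind described above (this is purely illustrative; the class and fields are invented, not MapLite’s actual representation): a single intersection template whose parameters, such as the number of merging roads and their headings, specialize it to a concrete junction.

```python
# Illustrative-only sketch of a parameterized intersection model.
from dataclasses import dataclass

@dataclass
class IntersectionModel:
    n_merging_roads: int    # answers "how many roads merge here?"
    branch_headings: tuple  # heading of each outgoing road, in degrees

    def next_heading(self, goal_heading):
        # Pick the branch whose heading is closest to the direction
        # of the local navigation goal.
        return min(self.branch_headings,
                   key=lambda h: abs(h - goal_heading))

# One template, specialized to a particular three-way fork:
fork = IntersectionModel(n_merging_roads=3, branch_headings=(0, 45, 90))
print(fork.next_heading(50))  # -> 45
```

Because the model is explicit rather than a learned black box, a post-accident review can inspect exactly which parameters were estimated and which branch was chosen, which is the interpretability benefit Ort describes.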
MapLite still has some limitations. For example, it isn’t yet reliable enough for mountain roads, since it doesn’t account for dramatic changes in elevation. As a next step, the team hopes to expand the variety of roads that the vehicle can handle. Ultimately they aspire to have their system reach comparable levels of performance and reliability as mapped systems but with a much wider range.
“I imagine that the self-driving cars of the future will always make some use of 3-D maps in urban areas,” says Ort. “But when called upon to take a trip off the beaten path, these vehicles will need to be as good as humans at driving on unfamiliar roads they have never seen before. We hope our work is a step in that direction.”
This project was supported, in part, by the National Science Foundation and the Toyota Research Initiative.
E-commerce sales for 2017 were $453.5 billion in the U.S. and $1.1 trillion in China, increases of 16.0% and 32.6% respectively over 2016. This upward trend is projected to continue for many years to come. Consequently, flexibility and the ability to handle an ever-increasing number of parcels are of concern to warehousing, fulfillment and distribution center (DC) managers around the world.
Handling, distribution, transport and delivery – and the amortization of facility setup charges, which often represent more cost than raw materials and manufacturing combined – are part of the mounting challenges faced by today’s fulfillment executives. Accordingly, warehousing and material handling are big business for hundreds of different types of companies that provide conveyors, rollers, racks, vision systems, hoists, shelving, electric motors, slides, barcode readers, printers, ladders, gantries, tugs, forklifts, skids, totes, carts, and software systems of all types. Most of these vendors provide products that serve the man-to-goods model, i.e., a person goes somewhere in the warehouse, finds the item, and either puts it into further play in the system or packs it themselves.
Kiva Systems shattered that model with their goods-to-person robots and dynamic shelving systems. Amazon was so enamored with Kiva’s robotic solution that it acquired Kiva and their robots. Since that acquisition, Amazon Robotics (as Kiva Systems was renamed) has produced over 130,000 Kiva robots and put them all to work in Amazon warehouses and DCs, thus proving the efficacy of the method – a method which has been copied and expanded upon by multiple vendors listed below.

Bottom line: In warehouse and supply chain logistics focused on e-commerce fulfillment, whether third-party logistics service providers or e-retailers and their logistics arms, fixed and exorbitant front-end costs for conveyors, elevators and old-style AS/RS systems have become anathema to warehouse executives worldwide, who are clamoring to lower fixed costs while increasing flexibility and handling more goods. Comprehensive software and analytics — particularly predictive analytics — are on executives’ near-term agendas. Hence the need to invest in NextGen Supply Chain methods offered by the companies listed below.
Automating lifts, tows, carts and AGVs
Human-operated tows, lifts, AGVs and other warehouse and factory vehicles have been a staple of material movement for decades. Now, with low-cost cameras, sensors and advanced vision and depth-sensing systems, these vehicles are slowly giving way to more flexible autonomous mobile robots (AMRs) that can autonomously tow, lift and carry, and can work in either autonomous or human-operated modes.
Vendors providing kits and systems for existing forklifts and carts to convert them to Vision Guided Vehicles (VGVs, AMRs) for line-side replenishment, pallet movement, etc. include:
RoboCV is a Russian provider of autopilots for warehouse machines, deployed at Russian facilities for Samsung, VW and 3PLs. RoboCV also provides cloud-based task optimization and traffic control.
Balyo, a French provider of autonomous vehicle kits to forklift OEMs Hyster and Yale.
Seegrid, a Pittsburgh-based provider of autonomous vehicle kits for OEM Raymond, 3PLs and distribution centers of all types, also makes its own VGVs, and provides software and engineering systems to minimize human involvement and maximize VGV productivity.
Vendors providing AMRs, VGVs and AIVs (Autonomous Intelligent Vehicles) for goods-to-person, point-to-point, load transfer, restocking, etc. include:
6 River Systems
Aethon
Beijing Geekplus Technology (Geek+)
Canvas Technology
Clearpath’s OTTO robots
Fetch Robotics
Grenzebach
Kuka
Locus Robotics
Mobile Industrial Robots
Robotnik Automation
Seegrid
STILL
Swisslog
Toyota’s Autopilot
Vecna Robotics
and others
Grasping
Where humans surpass machines is in the quick visual determination of what to pick, how to grasp, and then move the item to wherever it needs to go. Until recently, this has been the missing link in automated fulfillment and one of the biggest challenges in robotics acceptance. A few vendors are perfecting the science that enables high speed random grasping from moving conveyors or bins:
RightHand Robotics
Universal Logic
Kinema Systems
Swisslog
Soft Robotics
Vendors providing grasping capabilities in addition to autonomous mobility include:
InVia Robotics
IAM Robotics
Magazino
Dorabot
GreyOrange (see below for details)
Indoor navigation
Navigation systems have changed along with all the other technological improvements and often don’t require floor grid markings, barcodes or extensive indoor localization and segregation systems such as those used by Kiva Systems (and subsequently Amazon). SLAM and combinations of floor grids, SLAM, path planning and mapping systems, indoor beacons, and collision avoidance systems are adding flexibility to swarms of point-to-point mobile robots and enabling traffic control and dynamic inventory placement.
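As a rough illustration of the planning layer such systems run on top of a SLAM-built map (this is a generic textbook approach, not any vendor’s implementation), here is a breadth-first search over an occupancy grid, the simplest way to route a point-to-point mobile robot around obstacles:

```python
# Minimal occupancy-grid path planner: BFS finds a shortest
# 4-connected path. 1 = obstacle, 0 = free space.
from collections import deque

def plan(grid, start, goal):
    """Return a shortest path from start to goal as a list of
    (row, col) cells, or None if the goal is unreachable."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    queue = deque([start])
    while queue:
        cur = queue.popleft()
        if cur == goal:
            path = []
            while cur is not None:
                path.append(cur)
                cur = prev[cur]
            return path[::-1]
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in prev):
                prev[(nr, nc)] = cur
                queue.append((nr, nc))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],   # a wall with one gap on the right
        [0, 0, 0]]
print(plan(grid, (0, 0), (2, 0)))  # routes around the wall
```

Production systems replace BFS with cost-aware planners (A*, D* Lite) and replan continuously as the SLAM map and traffic-control constraints change, but the structure is the same.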
Kiva look-alikes
In March 2012, in an effort to make their fulfillment centers as efficient as possible, Amazon acquired Kiva Systems for $775 million and almost immediately took them in-house, leaving a disgruntled set of Kiva customers who couldn’t expand and a larger group of prospective clients who were left with a technological gap and no solutions. I wrote about this gap and about the whole community of new providers that had sprung up to fill the void and were beginning to offer and demonstrate their solutions. Many of those new providers are listed above.
Recently, another set of competitors has emerged in this space. Chinese e-commerce giants Alibaba, JD (JingDong), VIPShop, Tencent and others have funded companies who copied the Kiva Systems formula to provide Kiva-like goods-to-person robot systems and dynamic free-form warehousing for their in-country fulfillment and distribution centers.
Now some of those companies are braving the prospect of IP infringement proceedings from Amazon and are expanding outside of China and SE Asia to Europe and America:
Grey Orange Robotics has sites using their systems in Japan and Europe and exhibited at Europe’s Logimat trade show where they launched PickPal, an autonomous picking robot which can pick a wide variety of SKUs using machine vision and a scalable gripper system specifically suitable for high-volume order fulfillment.
Beijing Geekplus (Geek+) Technology also has sites using their systems in Japan and Poland and had booths at MODEX and CeMAT trade shows to introduce Geek+ to the West.
Xinyi Logistics Science & Technology (Alog) – has not yet ventured beyond China and SE Asia.
Hangzhou Hikrobot Technology (HIK) – has not yet ventured beyond China and SE Asia.
Kiva alternatives
Symbotic
Swisslog
Dematic
Locus Robotics
Fetch Robotics
[NOTE: The lists shown above are not fully comprehensive. The universe is much larger. I have some knowledge of the vendors shown and know that they are beyond pilot projects, researching and prototyping which was my criteria for including them.]
OTTO self-driving vehicles depend on three different sensors to identify their current position as well as to navigate where and how they need to travel from point A to point B. As in any dense and dynamic environment, elements in OTTO’s path can change minute to minute.
Tesla was hoping to produce 5,000 new Model 3 electric cars each week in 2018. So far, it has failed to manufacture even half that number. Questioned on the matter, the company's CEO, Elon Musk, claimed that "excessive automation was a mistake" and that "humans are underrated".
Both are crowdfunding right now — Magni has only one week to go and Misty has just launched today. Both robots come from pedigree robotics companies and both are top of the line in terms of capabilities. And I have to confess, I’m a sucker for great robot crowdfunding campaigns, so I have purchased one of each for Silicon Valley Robotics and Circuit Launch.
Magni is a robust mobile platform capable of carrying a payload of over 100 kilos, developed by Ubiquity Robotics. It comes with all the sensors you need for autonomous navigation, indoor or outdoor, has a sophisticated power train, and runs on a Raspberry Pi and ROS. Magni is more than a hobbyist package: you can build commercial applications on top of the base platform, for food or package delivery, mobile manipulation, kiosk robots, security, inventory, etc.
Here’s what you get with Magni:
Payload: 100 kg
Drive System: 2 x 200 W hub motors, 2 m/s top speed
Power: 7 A at 5 V and 7 A at 12 V DC
Computer: Raspberry Pi 3 (quad-core ARM)
Software: Ubuntu 16.04, ROS Kinetic
Camera: Single upward facing
Navigation: Ceiling fiducial based navigation
Battery life: With 10 Ah batteries, 8 hours of normal operation. Up to 32 Ah lead acid batteries can be installed, which will provide 24 hours+ of normal operation
3D sensor (optional): 2x time of flight cameras, 120 degree field of view
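The two battery-life figures in the spec list are consistent with simple linear scaling: 10 Ah over 8 hours implies an average draw of about 1.25 A-equivalent. A quick check (assuming runtime scales linearly with capacity, which ignores lead-acid discharge effects and varying load):

```python
# Sanity-check of the quoted battery-life figures under a
# linear-scaling assumption (real draw varies with duty cycle).
def runtime_hours(capacity_ah, baseline_ah=10, baseline_hours=8):
    avg_draw = baseline_ah / baseline_hours  # ~1.25 A-equivalent
    return capacity_ah / avg_draw

print(runtime_hours(32))  # ~25.6 h, consistent with "24 hours+"
```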
In addition, Ubiquity is offering Loki, a small, more affordable learning platform that you can use to develop applications for Magni. The team at Ubiquity Robotics is well known in Silicon Valley as robotics experts and members of the Homebrew Robotics Club.
Misty Robotics also has a great pedigree, as the team is a spin-off from Sphero (formerly Orbotix). Sphero and its sibling robot BB-8 have been immensely popular, but cofounder Ian Bernstein felt that it was time to build a personal home robot with more capabilities than a toy. Misty Robotics was formed in 2017 but kept its robot under wraps until this year — showcasing the first version, Misty I, as a developer product in January and now releasing Misty II as a crowdfunding campaign.
From IEEE Spectrum: One of the things that should set Misty apart is that it’s been designed specifically to be able to perform advanced behaviors without requiring advanced knowledge of robotics. Out of the box, Misty II can:
Move autonomously as well as dynamically respond to her environment
Recognize faces
Create a 3D map of her surroundings
See, hear, and speak
Receive and respond to commands
Locate her charger to charge herself
Display emotive eyes and other emotional characteristics
All of this stuff can be accessed and leveraged if you know how to code, even a little bit. Or even not at all, since Misty can be programmed through Blockly. Misty is a great introduction to the realm of robotics!
Sadly, it is also clear that Misty Robotics has a far greater customer/support base than Ubiquity Robotics. Both robots are worth purchasing (albeit for slightly different purposes), but the Ubiquity Robotics campaign hasn’t spread far beyond its Silicon Valley supporter base. I hope the campaign crosses the line because I want my robots!
Fifteen transactions with disclosed amounts totaled $808 million, of which the $600 million raised by SenseTime, the Alibaba-funded Chinese deep learning and facial recognition software provider focused on smart self-driving vehicle systems, was by far the largest.
Year to date, fundings total $2.3 billion!
Seven acquisitions also occurred in April. The most notable was the acquisition by Teradyne (which previously acquired Universal Robots and Energid) of MiR (Mobile Industrial Robots) for $148 million with an additional $124 million predicated on very achievable milestones between now and 2020.
Robotics Fundings
SenseTime, a Chinese deep learning and facial recognition software provider focused on smart self-driving vehicle systems, raised $600 million in a Series C funding round led by Alibaba Group with participation by Temasek Holdings and Suning Commerce Group.
Formlabs, a Somerville, Mass.-based manufacturer of industrial quality 3D printing systems, raised $30 million in a Series C funding. Tyche Partners led the round, and was joined by Shenzhen Capital Group, UpNorth Investment Limited, DFJ, Pitango and Foundry Group.
Zimplistic, a Singapore-based kitchen robotics firm which makes the $999 Rotimatic roti maker, raised $30 million in a Series C funding led by Credence Partners and EDBI.
6 River Systems, a Massachusetts-based point-to-point logistics mobile robot maker, raised $25 million in Series B funding in a round led by Menlo Ventures with participation from all existing investors (Norwest Venture Partners, Eclipse Ventures and iRobot). Details here.
Houston Mechatronics, a Texas defense and space systems integrator, raised $20 million in Series B funding from Iain Cooper and Simple-Fill. Funds will be used to develop a novel transformer-like underwater device called Aquanaut.
Vicarious Surgical, a Cambridge, Mass.-based robotic surgery startup, raised $16.75 million in Series A funding. Khosla Ventures and Innovation Endeavors led the round, and were joined by Gates Ventures, AME Cloud Ventures, and Marc Benioff.
Efy-Tech, a Chinese UAS control systems startup, raised $15.8 million in a Series A funding from Aviation Industry Corporation, a Chinese state-owned aerospace and defense company.
DeepScale, a Silicon Valley self-driving vehicle AI perception startup, raised $15 million in a Series A funding round led by Point72 and next47.
Ready Robotics, a Baltimore-based provider of collaborative robots as a service (RaaS), raised $15 million in funding. Drive Capital led the round, and was joined by Eniac Ventures and RRE Ventures.
Symbio Robotics, a Berkeley, CA robotics control software startup, raised $15 million from undisclosed sources.
Marble, a San Francisco-based developer of a fleet of intelligent courier robots, raised $10 million in Series A funding. Investors include Tencent, Lemnos, Crunchfund, and Maven.
Regulus Cyber, an Israeli startup developing and making security devices for drones and autonomous vehicles, raised $6.3 million in a Series A round led by Sierra Ventures and Canaan Partners Israel.
Comma.ai, the San Francisco startup led by superstar hacker George Hotz, raised $5 million in a Series A round, reported in an SEC filing; it’s unclear who invested in the round.
Bear Robotics, a Silicon Valley mobile robot startup for the food industry, raised $3 million in a Seed round (in January) of which $2 million came from Korean food-tech firm Woowa Brothers.
Segway Robotics raised $1.1 million from 952 backers in an IndieGoGo campaign for their Loomo mobile robotic mini personal transporter, which they are selling for $1,499 and will begin shipping in May.
Robotics Fundings: amounts undisclosed
DroneSense, a Texas UAS platform for drone users and OEMs, raised an undisclosed amount from FLIR Systems. “This alliance with DroneSense will help bring to market a truly mission-critical solution needed by first responders to effectively deploy a complete UAS program across their organizations. We believe this platform is scalable geographically, across multiple markets, and across multiple FLIR Business Units,” said James Cannon, President and CEO of FLIR.
BBS Automation, a Germany-based global integrator of automated testing and inspection systems, raised an undisclosed amount from equity fund EQT Mid-Market, which intends to assist BBS Automation’s growth ambitions both organically and through add-on acquisitions in new end markets.
Plug-and-play Panda robot
Franka Emika, a German startup producing the Panda co-bot, raised an undisclosed amount from their new joint venture partner, German conglomerate Voith. The new joint venture has launched Voith Robotics which will develop the Panda co-bot business while Franka will focus on the research and selling to academia and the research community.
Franklin Robotics, the Lowell MA startup that created a garden weeding robot named Tertill, sold 25% of the company to Husqvarna Group for an undisclosed amount. “With almost 1,500,000 environmentally friendly robotic mowers sold all over the world, Husqvarna Group has vast experience and insight that will be invaluable to us as we bring Tertill to market, and continue to develop robotic weeding solutions for the garden and beyond”, says Rory MacKean, CEO Franklin Robotics.
Intuition Robotics, an Israeli startup developing an eldercare social robot, raised an undisclosed amount (in January) from SamsungNEXT Ventures.
Acquisitions
UPDATE to the acquisition of Energid by Teradyne in February for an undisclosed amount. The amount is now known to be $25 million.
Beijing Aresbots Technology (Ares Robot), a Beijing startup developing Kiva-like warehousing robots, was acquired for an undisclosed amount by Face++, a Beijing facial recognition and ID company also known as Megvii.
Genesis Advanced Technology, the Canadian startup developing LiveDrive, a direct-drive actuator with torque-to-weight that can meet or beat motor-gearbox actuators, has been acquired by Koch Industries for an undisclosed amount. Koch will form a new company Genesis Robotics & Motion Technologies (Genesis Robotics) – to commercialize LiveDrive and related technologies. Details here.
Genmark Automation, a Fremont, CA maker of automation tools and wafer handling robots for the semiconductor industry, was acquired by Nidec Sankyo, a Japanese maker of motors, clean-room robots and robot components, for an undisclosed amount.
JR Automation, a Michigan industrial robot integrator, acquired Setpoint Systems, a Littleton, CO integrator of building automation solutions, and Setpoint, an Ogden, Utah designer of amusement and theme park rides. Financial terms weren’t disclosed.
MiR (Mobile Industrial Robots), the Danish startup with 300% sales growth in 2017, was acquired by Teradyne (NYSE:TER) for $148 million with an additional $124 million predicated on very achievable milestones between now and 2020. Details here.
Van Hoecke Automation, a Belgium-based industrial robot integrator, was acquired by Michigan-based Burke Porter Group for an undisclosed amount. BPG is a multi-subsidiary conglomerate providing testing and clean room equipment to the auto industry.
Wind River, a control systems software provider acquired by Intel, has been acquired by private equity firm TPG Capital for an undisclosed amount. “This acquisition will establish Wind River as a leading independent software provider uniquely positioned to advance digital transformation within critical infrastructure segments with our comprehensive edge to cloud portfolio,” said Jim Douglas, Wind River President. “At the same time, TPG will provide Wind River with the flexibility and financial resources to fuel our many growth opportunities as a standalone software company that enables the deployment of safe, secure, and reliable intelligent systems.”
IPOs
None
Failures
Revolve Robotics (developer of the KUBI tabletop remote presence device) has folded and turned over remaining sales, service and support to their Northern California manufacturer Xandex.
In this episode Andrew Vaziri speaks with Nicolas Economou, CEO of the eDiscovery company H5 and co-founder and chair of the Science, Law and Society Initiative at The Future Society, a 501c3 think tank incubated at the Harvard Kennedy School of Government. Economou discusses how AI is applied in the legal system, as well as some of the key points from the recent “Global Governance of AI Roundtable”. The roundtable, hosted by the government of the UAE, brought together a diverse group of leaders from tech companies, governments, and academia to discuss the societal implications of AI.
Nicolas Economou
Nicolas Economou is the chief executive of H5 and was a pioneer in advocating the application of scientific methods to electronic discovery. He contributes actively to advancing dialogue on public policy challenges at the intersection of law, science, and technology. He is co-chair of the Law Committee of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems; is chair of The Future Society’s Science, Law and Society Initiative and Senior Advisor to its Artificial Intelligence Initiative; and chaired the Law Committee of the Global Governance of AI Roundtable hosted in Dubai as part of the 2018 World Government Summit.
A question of torque – Inspection robots for pipes and ducts, rescue robots in disaster areas, and humanoid robots all have one thing in common: they are mobile robots that aid humans.
Amazon has created more than 3,500 full-time jobs in Massachusetts and invested over $400 million in the state since 2011, from customer fulfillment infrastructure to research facilities.
You have a telephone interview for your dream job, and you're feeling nervous. You make yourself a cup of tea as you wait for the phone to ring, and you count to three before picking up.
Tesla can do better than its current public response to the recent fatal crash involving one of its vehicles. I would like to see more introspection, credibility, and nuance.
Introspection
Over the last few weeks, Tesla has blamed the deceased driver and a damaged highway crash attenuator while lauding the performance of Autopilot, its SAE level 2 driver assistance system that appears to have directed a Model X into the attenuator. The company has also disavowed its own responsibility: “The fundamental premise of both moral and legal liability is a broken promise, and there was none here.”
In Tesla’s telling, the driver knew he should pay attention, he did not pay attention, and he died. End of story. The same logic would seem to apply if the driver had hit a pedestrian instead of a crash barrier. Or if an automaker had marketed an outrageously dangerous car accompanied by a warning that the car was, in fact, outrageously dangerous. In the 1980 comedy Airplane!, a television commentator dismisses the passengers on a distressed airliner: “They bought their tickets. They knew what they were getting into. I say let ‘em crash.” As a rule, it’s probably best not to evoke a character in a Leslie Nielsen movie.
It may well turn out that the driver in this crash was inattentive, just as the US National Transportation Safety Board (NTSB) concluded that the Tesla driver in an earlier fatal Florida crash was inattentive. But driver inattention is foreseeable (and foreseen), and “[j]ust because a driver does something stupid doesn’t mean they – or others who are truly blameless – should be condemned to an otherwise preventable death.” Indeed, Ralph Nader’s argument that vehicle crashes are foreseeable and could be survivable led Congress to establish the National Highway Traffic Safety Administration (NHTSA).
Airbags are a particularly relevant example. Airbags are unquestionably a beneficial safety technology. But early airbags were designed for average-size male drivers—a design choice that endangered children and lighter adults. When this risk was discovered, responsible companies did not insist that because an airbag is safer than no airbag, nothing more should be expected of them. Instead, they designed second-generation airbags that are safer for everyone.
Similarly, an introspective company—and, for that matter, an inquisitive jury—would ask whether and how Tesla’s crash could have been reasonably prevented. Tesla has appropriately noted that Autopilot is neither “perfect” nor “reliable,” and the company is correct that the promise of a level 2 system is merely that the system will work unless and until it does not. Furthermore, individual autonomy is an important societal interest, and driver responsibility is a critical element of road traffic safety. But it is because driver responsibility remains so important that Tesla should consider more effective ways of engaging and otherwise managing the imperfect human drivers on which the safe operation of its vehicles still depends.
Such an approach might include other ways of detecting driver engagement. NTSB has previously expressed its concern over using only steering wheel torque as a proxy for driver attention. And GM’s own level 2 system, Super Cruise, tracks driver head position.
Such an approach may also include more aggressive measures to deter distraction. Tesla could alert law enforcement when drivers are behaving dangerously. It could also distinguish safety features from convenience features—and then more stringently condition convenience on the concurrent attention of the driver. For example, active lane keeping (which might ping pong the vehicle between lane boundaries) could enhance safety even if active lane centering is not operative. Similarly, automatic deceleration could enhance safety even if automatic acceleration is inoperative.
NTSB’s ongoing investigation is an opportunity to credibly address these issues. Unfortunately, after publicly stating its own conclusions about the crash, Tesla is no longer formally participating in NTSB’s investigation. Tesla faults NTSB for this outcome: “It’s been clear in our conversations with the NTSB that they’re more concerned with press headlines than actually promoting safety.” That is not my impression of the people at NTSB. Regardless, Tesla’s argument might be more credible if it did not continue what seems to be the company’s pattern of blaming others.
Credibility
Tesla could also improve its credibility by appropriately qualifying and substantiating what it says. Unfortunately, Tesla’s claims about the relative safety of its vehicles still range from “lacking” to “ludicrous on their face.” (Here are some recent views.)
Tesla repeatedly emphasizes that “our first iteration of Autopilot was found by the U.S. government to reduce crash rates by as much as 40%.” NHTSA reached its conclusion after (somehow) analyzing Tesla’s data—data that both Tesla and NHTSA have kept from public view. Accordingly, I don’t know whether the underlying math actually took only five minutes, but I can attempt some crude reverse engineering to complement the thoughtful analyses already done by others.
Let’s start with NHTSA’s summary: The Office of Defects Investigation (ODI) “analyzed mileage and airbag deployment data supplied by Tesla for all MY 2014 through 2016 Model S and 2016 Model X vehicles equipped with the Autopilot Technology Package, either installed in the vehicle when sold or through an OTA update, to calculate crash rates by miles travelled prior to and after Autopilot installation. [An accompanying chart] shows the rates calculated by ODI for airbag deployment crashes in the subject Tesla vehicles before and after Autosteer installation. The data show that the Tesla vehicles crash rate dropped by almost 40 percent after Autosteer installation”—from 1.3 to 0.8 crashes per million miles.
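The “almost 40 percent” figure follows directly from the two rates ODI reported:

```python
# Reproducing ODI's headline number from its before/after rates.
before, after = 1.3, 0.8  # airbag-deployment crashes per million miles
reduction = (before - after) / before
print(f"{reduction:.1%}")  # 38.5%, i.e. "almost 40 percent"
```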
This raises at least two questions. First, how do these rates compare to those for other vehicles? Second, what explains the asserted decline?
Comparing Tesla’s rates is especially difficult because of a qualification that NHTSA’s report mentions only once and that Tesla’s statements do not acknowledge at all. The rates calculated by NHTSA are for “airbag deployment crashes” only—a category that NHTSA does not generally track for nonfatal crashes.
NHTSA does estimate rates at which vehicles are involved in crashes. (For a fair comparison, I look at crashed vehicles rather than crashes.) With respect to crashes resulting in injury, 2015 rates were 0.88 crashes per million miles for light trucks and 1.26 for passenger cars. And with respect to property-damage only crashes, they were 2.35 for light trucks and 3.12 for passenger cars. This means that, depending on the correlation between airbag deployment and crash injury (and accounting for the increasing number and sophistication of airbags), Tesla’s rates could be better than, worse than, or comparable to these national estimates.
Airbag deployment is a complex topic, but the upshot is that, by design, airbags do not always inflate. An analysis by the Pennsylvania Department of Transportation suggests that airbags deploy in less than half of the airbag-equipped vehicles that are involved in reported crashes, which are generally crashes that cause physical injury or significant property damage. (The report’s shift from reportable crashes to reported crashes creates some uncertainty, but let’s assume that any crash that results in the deployment of an airbag is serious enough to be counted.)
Data from the same analysis show about two reported crashed vehicles per million miles traveled. Assuming a deployment rate of 50 percent suggests that a vehicle deploys an airbag in a crash about once every million miles that it travels, which is roughly comparable to Tesla’s post-Autopilot rate.
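The back-of-envelope comparison above can be sketched as follows; the 50 percent deployment rate is an assumption taken from the Pennsylvania analysis, not a measured figure for Tesla vehicles:

```python
# Rough estimate of a national airbag-deployment crash rate, for comparison
# with Tesla's post-Autopilot rate of 0.8 per million miles.
reported_crashed_vehicles_per_mmi = 2.0  # Pennsylvania data, per million miles
assumed_deployment_rate = 0.5            # assumed share of reported crashes with deployment

deployments_per_mmi = reported_crashed_vehicles_per_mmi * assumed_deployment_rate
print(f"~{deployments_per_mmi:.1f} airbag deployments per million miles")  # ~1.0
```

A result of roughly one deployment per million miles is in the same ballpark as Tesla’s post-Autopilot rate of 0.8, which is the point of the comparison: the Tesla figure is not obviously exceptional.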
Indeed, at least two groups with access to empirical data—the Highway Loss Data Institute and AAA – The Auto Club Group—have concluded that Tesla vehicles do not have a low claim rate (in addition to having a high average cost per claim), which suggests that these vehicles do not have a low crash rate either.
Tesla offers fatality rates as another point of comparison: “In the US, there is one automotive fatality every 86 million miles across all vehicles from all manufacturers. For Tesla, there is one fatality, including known pedestrian fatalities, every 320 million miles in vehicles equipped with Autopilot hardware. If you are driving a Tesla equipped with Autopilot hardware, you are 3.7 times less likely to be involved in a fatal accident.”
In 2016, there was one fatality for every 85 million vehicle miles traveled—close to the number cited by Tesla. For that same year, NHTSA’s FARS database shows 14 fatalities across 13 crashes involving Tesla vehicles. (Ten of these vehicles were model year 2015 or later; I don’t know whether Autopilot was equipped at the time of the crash.) By the end of 2016, Tesla vehicles had logged about 3.5 billion miles worldwide. If we accordingly assume that Tesla vehicles traveled 2 billion miles in the United States in 2016 (less than one tenth of one percent of US VMT), we can estimate one fatality for every 150 million miles traveled.
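The estimate in the paragraph above can be reproduced with simple division; the 2 billion US miles is the author’s stated assumption, and I show the figure both per fatality and per fatal crash since the text reports 14 fatalities across 13 crashes:

```python
# Rough fatality-rate estimate for Tesla vehicles in the US, 2016.
us_tesla_miles_2016 = 2_000_000_000  # assumed US share of ~3.5B worldwide miles
fatalities = 14                      # FARS, 2016, crashes involving Tesla vehicles
fatal_crashes = 13

miles_per_fatality = us_tesla_miles_2016 / fatalities        # ~143 million
miles_per_fatal_crash = us_tesla_miles_2016 / fatal_crashes  # ~154 million
print(f"~1 fatality per {miles_per_fatality / 1e6:.0f} million miles")
print(f"~1 fatal crash per {miles_per_fatal_crash / 1e6:.0f} million miles")
```

Either way the result is roughly one fatality per 150 million miles, well short of the 320 million miles Tesla cites.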
It is not surprising if Tesla’s vehicles are less likely to be involved in a fatal crash than the US vehicle fleet in its entirety. That fleet, after all, has an average age of more than a decade. It includes vehicles without electronic stability control, vehicles with bald tires, vehicles without airbags, and motorcycles. Differences between crashes involving a Tesla vehicle and crashes involving no Tesla vehicles could therefore have nothing to do with Autopilot.
More surprising is the statement that Tesla vehicles equipped with Autopilot are much safer than Tesla vehicles without Autopilot. At the outset, we don’t know how often Autopilot was actually engaged (rather than merely equipped), we don’t know the period of comparison (even though crash and injury rates fluctuate over the calendar year), and we don’t even know whether this conclusion is statistically significant. Nonetheless, on the assumption that the unreleased data support this conclusion, let’s consider three potential explanations:
First, perhaps Autopilot is incredibly safe. If we assume (again, because we just don’t know otherwise) that Autopilot is actually engaged for half of the miles traveled by vehicles on which it is installed, then a 40 percent reduction in airbag deployments per million miles really means an 80 percent reduction in airbag deployments while Autopilot is engaged. Pennsylvania data show that about 20 percent of vehicles in reported crashes are struck in the rear, and if we further assume that Autopilot would rarely prevent another vehicle from rear-ending a Tesla, then Autopilot would essentially need to prevent every other kind of crash while engaged in order to achieve such a result.
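The implied reduction while engaged follows from a simple weighted average; the 50 percent engagement share is, as noted, an assumption made for lack of better data:

```python
# If Autopilot is engaged for half of all miles and delivers no benefit when
# disengaged, what per-engaged-mile reduction yields a 40% overall reduction?
# overall_rate = r * (engaged_share * (1 - x) + (1 - engaged_share))
#              = r * (1 - engaged_share * x)
# Setting 1 - engaged_share * x = 1 - overall_reduction and solving for x:
engaged_share = 0.5      # assumed fraction of miles with Autopilot engaged
overall_reduction = 0.4  # the headline 40 percent

reduction_while_engaged = overall_reduction / engaged_share
print(f"Implied reduction while engaged: {reduction_while_engaged:.0%}")  # 80%
```

And since the Pennsylvania data suggest about 20 percent of crashed vehicles are struck in the rear (crashes Autopilot presumably cannot prevent), an 80 percent reduction would mean preventing essentially all of the remaining crash types while engaged.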
Second, perhaps Tesla’s vehicles had a significant performance issue that the company corrected in an over-the-air update at or around the same time that it introduced Autopilot. I doubt this—but the data released are as consistent with this conclusion as with a more favorable one.
Third, perhaps Tesla introduced or upgraded other safety features in one of these OTA updates. Indeed, Tesla added automatic emergency braking and blind spot warning about half a year before releasing Autopilot, and Autopilot itself includes side collision avoidance. Because these features may function even when Autopilot is not engaged and might not induce inattention to the same extent as Autopilot, they should be distinguished from rather than conflated with Autopilot. I can see an argument that more people will be willing to pay for convenience plus safety than for just safety alone, but I have not seen Tesla make this more nuanced argument.
Nuance
In general, Tesla should embrace more nuance. Currently, the company’s explicit and implicit messages regarding this fatal crash have tended toward the absolute. The driver was at fault—and therefore Tesla was not. Autopilot improves safety—and therefore criticism is unwarranted. The company needs to be able to communicate with the public about Autopilot—and therefore it should share specific and, in Tesla’s view, exculpatory information about the crash that NTSB is investigating.
Tesla understands nuance. Indeed, in its statement regarding its relationship with NTSB, the company noted that “we will continue to provide technical assistance to the NTSB.” Tesla should embrace a systems approach to road traffic safety and acknowledge the role that the company can play in addressing distraction. It should emphasize the limitations of Autopilot as vigorously as it highlights the potential of automation. And it should cooperate with NTSB while showing that it “believe[s] in transparency” by releasing data that do not pertain specifically to this crash but that do support the company’s broader safety claims.
For good measure, Tesla should also release a voluntary safety self-assessment. (Waymo and General Motors have.) Autopilot is not an automated driving system, but that is where Tesla hopes to go. And by communicating with introspection, credibility, and nuance, the company can help make sure the public is on board.
In this episode of Robots in Depth, Per Sjöborg speaks with Frank Tobe about his experience covering robotics in The Robot Report and creating the index Robo-Stox.
Frank talks about how and why he shifted to robotics and how looking for an investment opportunity in robotics led him to start Robo-Stox (since renamed ROBO Global), a robotics-focused index company.
Both companies have given Frank a unique perspective on the robotics scene as a whole, over a significant period of time.