All posts by Brad Templeton, Robocars.com


Uber and Waymo settle lawsuit in a giant victory for Uber

In a shocker, it was announced that Uber and Waymo (Google/Alphabet) have settled their famous lawsuit for around $245 million in Uber stock. No cash, and Uber agrees it won’t use any Google hardware or software trade secrets — which it had, of course, always denied ever doing.

I think this is a tremendous victory for Uber. Google had proposed a $1B settlement early on that was rejected. Waymo had not yet provided all the evidence necessary to show damages, but one has to presume they had more to come that made Uber feel it should settle. Of course, the cloud of a lawsuit and years of appeals over their programs and eventual IPO also were worth closing out.

What’s great for Uber is that it’s a stock deal. While the number is not certain, some estimates suggest that this amount of stock might not be much more than the shares of Uber lost by Anthony Levandowski when he was fired for not helping with the lawsuit. In other words, Uber fixes the problems triggered by Anthony’s actions by paying off Waymo with the stock they used to buy Otto from Anthony. They keep the team (which is really what they bought, since at 7 months of age, Otto had done some impressive work but nothing worth $700M) and they get clear of the lawsuit.

The truth is, Uber can’t be in a fight with Google. All Uber rides are booked through the platforms of Google and Apple. Without those platforms there is no Uber. I am not suggesting that Apple or Google would do illegal-monopoly tricks to fight Uber. They don’t have to, though there are some close-to-the-line tricks they could use that don’t violate anti-trust but make Uber’s life miserable. You simply don’t want to be in a war for your existence with the platform you depend on for that existence.

Instead, Alphabet now increases its stake in Uber, and is thus more motivated to be positively inclined toward it. There will still be heavy competition between Waymo and Uber, but Waymo now has an incentive not to hurt Uber too much.

For a long time, it had seemed like there would be a fantastic synergy between the companies. Google had been an early investor in Uber. Waymo has the world’s #1 robocar technology. Uber has the world’s #1 brand in selling rides — the most important use of that technology. Together they would have ruled the world. That never happened, and is unlikely to happen now (though no longer impossible). Alphabet has instead invested in Lyft.

Absent working with Lyft or Uber, Waymo needs to create its own ride service on the scale that Uber has. Few companies could enter that market convincingly today, but Alphabet is one of those few. Yet they have never done this. You need to do more than a robot ride service. Robocars won’t take you from anywhere to anywhere for decades, and so you need a service that combines robocar rides on the popular routes, and does the long tail rides with human drivers. Uber and Lyft are very well poised to deliver that; Waymo is not.

Uber settles this dangerous lawsuit for “free” and turns Alphabet back from an enemy to a frenemy. They get to go ahead full steam, and if they botch their own self-drive efforts they still have the option of buying somebody else’s technology, even Waymo’s. With new management they are hoping to convince the public they aren’t chaotic neutral any more. I think they have come out of this pretty well.

For Waymo, what have they won? Well, they got some Uber stock, which is nice, but it’s just money, and Alphabet has immense piles of money that this barely dents. They stuck it to Anthony and set Uber back for a while. The hard reality is that many companies are now developing long-range LIDAR like the one they alleged Anthony stole for Uber. When Waymo built theirs, and when Otto tried to build one, nobody had such a unit for sale. Time has passed, and that’s just not as much of an advantage as it used to be. In addition, Waymo has put its focus (correctly) on urban driving, not the highway driving where long-range LIDAR is so essential. While Anthony won’t use the knowledge he gained on the Waymo team to help Uber, several other former team members are there, and while they can’t use any trade secrets (and couldn’t before, really), their experience is not so restricted.

For the rest of the field, they can no longer chuckle at their rivals fighting. Not so great news for Lyft and other players.

Gallery of photos from CES 2018, and other news

I have created a gallery in Google Photos with some of the more interesting items I saw at CES, with the bulk of them being related to robocars, robotic delivery and transportation.

Click on the CES 2018 Gallery to view it. Make sure to see the captions, which will either appear at the bottom of the screen, or if you click the “Info” button (“i” in a circle) it will open a side panel with the caption; you can then go through the images with the arrow keys or the arrow buttons.

In the gallery you will see commentary on 3 different flying car offerings, many LIDARs, 6 delivery robots and the silliest product of CES 2018.

In other news

It’s been reported that Pony.ai got a $112M Series A, which shows the valuation frenzy is continuing. Pony.ai was founded by veterans of Baidu (and Google Chauffeur), but what is more surprising is that their plan is not very ambitious, at least for now — cars for restricted environments such as campuses and small towns. They will go after the Chinese market first.

The U.S. Department of Transportation will issue a 3rd round of robocar regulations this summer. The first round was much too detailed; the 2nd round fixed that but said almost nothing. The 3rd round will probably land a bit closer to the middle, and will also deal with trucks, which were left out of earlier rules.

The GM/Cruise robocar interior is refreshingly spartan

GM revealed photos of what they say is the production form of their self-driving car based on the Chevy Bolt and Cruise software. They say it will be released next year, making it almost surely the first release from a major car company if they make it.

As reported in their press release and other sources their goals are ambitious. While Waymo is the clear leader, it has deployed in Phoenix, because that is probably the easiest big city for driving in the USA. Cruise/GM claims they have been working on the harder problem of a dense city like San Francisco.

What’s notable, though, about the Cruise picture is what’s not in it, namely much in the way of dashboard or controls. There is a small screen, and a few controls but little else. Likewise the Waymo 3rd generation “Firefly” car had almost no controls at all.

The car has climate controls and window controls and little else. Of course a touchscreen can control a lot of other things, especially when the “driver” does not have to watch the road.

Combine this with the concept Smart-car self-driving interior from Daimler, and you see a change in the thinking of the car industry towards the thinking of the high-tech industry.

At most car conferences today, a large fraction of what you see is related to putting new things inside the car — fancier infotainment tools and other aspects of the “connected car.” These are the same companies who charge you $2,000 to put a navigation system into your car that you will turn off in 2 years because your phone does a better job.

It is not surprising that Google expects you to get your music, entertainment and connectivity from the device in your pocket — they make Android. It is a bigger step for GM and Daimler to realize this, and bodes well for them.

While the car is not finalized and the software certainly isn’t, GM feels it has settled on the hardware design of its version 1 car, and is going into production on it. I suspect they will make changes and tweaks as new sensors come down the pipeline, but traditionally car companies have always locked down the main elements of the hardware design a couple of years before a vehicle is available for sale.

One thing both of these cars need more of, which I know well from road tripping, is surfaces and places to put stuff. Cupholders and door pockets aren’t enough when a car is a work or relaxation station rather than something to drive.

What is not clear is if they have been bold enough to get rid of many of the other features not needed if you’re not driving, like fancy adjustable seats. The side-view mirrors are gone, with sensors in their place (it is widely anticipated that regulators will allow even human-driven cars to replace the mirrors with cameras, since that’s better and lower drag). Waymo’s Firefly had the mirrors because the law still demands them.

GM is also already working with NHTSA to get this car an exemption from the Federal Motor Vehicle Safety Standards, which require things like the steering wheel that they took out. The feds say they will work quickly on this, so it seems likely. Several states are already preparing the legal regime necessary, and GM suggests it will deploy something in 2019.

Not too long ago, I would have ranked GM very far down on the list of carmakers likely to succeed in the robocar world. After they acquired Cruise they moved up the chart, but frankly I had been skeptical about how much a small startup, no matter how highly valued, could change a giant automaker. It now seems that intuition was wrong.

Tons of LIDARs at CES 2018

When it comes to robocars, new LIDAR products were the story of CES 2018. Far more companies showed off LIDAR products than can succeed, with a surprising variety of approaches. CES is now the 5th largest car show, with almost the entire north hall devoted to cars. In coming articles I will look at other sensors, software teams and non-car aspects of CES, but let’s begin with the LIDARs.

Velodyne

When it comes to robocar LIDAR, the pioneer was certainly Velodyne, who largely owned the market for close to a decade. Their $75,000 64-laser spinning drum has been the core product for many years, while most newer cars feature their 16 and 32 laser “puck” shaped units. The price of the pucks was recently cut in half, and they showed off the new $100K 128-laser unit as well as a new, more rectangular unit called the Velarray, which uses a vibrating mirror to steer the beam for a forward view rather than a 360-degree view.

The Velodyne 64-laser unit has become such an icon that its physical properties have become a point of contention. The price has always been too much for any consumer car (and this is also true of the $100K unit, of course), but teams doing development have wisely realized that they want to do R&D with the most high-end unit available, expecting those capabilities to be moderately priced when it’s time to go into production. The Velodyne is also large, heavy, and, because it spins, quite distinctive. Many car companies and LIDAR companies have decried these attributes in their efforts to be different from Waymo (which now uses its own in-house LIDAR) and Velodyne. Most products out there are either clones of the Velodyne or much smaller units with a 90 to 120 degree field of view.

Quanergy

I helped Quanergy get going, so I have an interest and won’t comment much. Their recent big milestone is going into production on their 8 line solid state LIDAR. Automakers are very big on having few to no moving parts so many companies are trying to produce that. Quanergy’s specifications are well below the Velodyne and many other units, but being in production makes a huge difference to automakers. If they don’t stumble on the production schedule, they will do well. With lower resolution instruments with smaller fields of view, you will need multiple units, so their cost must be kept low.

Luminar and 1.5 micron

The hot rising star of the hour is Luminar, with its high-performance long-range LIDAR. Luminar is part of the subset of LIDAR makers using infrared light in the 1.5 micron (1550nm) range. The eye does not focus this light into a spot on the retina, so you can emit a lot more power from your laser without being dangerous to the eye. (There are some who dispute this and think there may be more danger to the cornea than believed.)

The ability to use more power allows longer range, and longer range is important, particularly getting a clear view of obstacles over 200m away. At that range, more conventional infrared LIDAR in the 900nm band has limitations, particularly on dark objects like a black car or a pedestrian in black clothing. Even if the limitations are reduced, as some 900nm vendors claim, that’s not good enough for most teams if they are trying to make a car that goes at highway speeds.

At 80mph on dry pavement, you can hard brake in 100m. On wet pavement it’s 220m to stop, so you should not be going 80mph but people do. But a system, like a human, needs time to decide if it needs to brake, and it also doesn’t want to brake really hard. It usually takes at least 100ms just to receive a frame from sensors, and more time to figure out what it all means. No system figures it out from just one frame, usually a few frames have to be analyzed.

On top of that, 200m out, you really only get a few spots on a LIDAR from a car, and one spot from a pedestrian. That first spot is a warning but not enough to trigger braking. You need a few frames and want to see more spots and get more data to know what you’re seeing. And a safety margin on top of that. So people are very interested in what 1.5 micron LIDAR can see — as well as radar, which also sees that far.
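The arithmetic above can be sketched as a rough sensing-range budget. This is an illustrative estimate only; the frame time, number of confirmation frames, braking deceleration, and safety margin are all assumed numbers, not figures from any actual system.

```python
# Rough sensing-range budget for highway driving.
# All parameter defaults are illustrative assumptions.

def required_sensing_range(speed_ms, frame_time_s=0.1, frames_to_confirm=5,
                           decel_ms2=6.0, margin_m=30.0):
    """Distance the sensor must cover: travel while confirming the obstacle
    over several frames, plus braking distance, plus a safety margin."""
    perception_m = speed_ms * frame_time_s * frames_to_confirm
    braking_m = speed_ms ** 2 / (2 * decel_ms2)   # v^2 / 2a
    return perception_m + braking_m + margin_m

mph80 = 80 * 0.44704  # 80 mph in m/s (~35.8 m/s)
print(round(required_sensing_range(mph80)))  # ~154 m on this model
```

With a lower deceleration for wet pavement, or gentler braking for passenger comfort, the figure quickly exceeds 200m, consistent with the interest in long-range sensing.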

The problem with building 1.5 micron LIDAR is that light that red is not detected with silicon. You need more exotic stuff, like indium gallium arsenide. This is not really “exotic” but compared to silicon it is. Our world knows a lot about how to do things with silicon, and making things with it is super mature and cheap. Anything else is expensive in comparison. Made in the millions, other materials won’t be.

The new Velodyne has 128 lasers and 128 photodetectors and costs a lot. 128 lasers and detectors for 1.5 micron would be super costly today. That’s why Luminar’s design uses two modules, each with a single laser and detector. The beam is very powerful, and moved super fast. It is steered by moving a mirror to sweep out rasters (scan lines) both back and forth and up and down. The faster you move your beam the more power you can put into it — what matters is how much energy it puts into your eye as it is sweeping over it, and how many times it hits your eye every second.

The Luminar can do a super detailed sweep if you only want one sweep per second, and the point clouds (collections of spots with the distance and brightness measured, projected into 3D) look extremely detailed and nice. To drive, you need at least 10 sweeps per second, and so the resolution drops a lot, but is still good.

Another limit which may surprise you is the speed of light. To see something 250m away, the light takes 1.6 microseconds to go out and back. This limits how many points per second you can get from one laser. Speeding up light is not an option. There are also limits on how much power you can put through your laser before it overheats.
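The round-trip delay sets a hard ceiling on the point rate of a single laser that waits for each return before firing the next pulse (a common design, though some units pulse faster and disambiguate overlapping returns). A minimal sketch:

```python
C = 299_792_458  # speed of light, m/s

def max_points_per_second(max_range_m):
    """Ceiling on points/second for one laser that waits out each round trip."""
    round_trip_s = 2 * max_range_m / C
    return 1.0 / round_trip_s

print(int(max_points_per_second(250)))  # roughly 600,000 points/s
```

At ten sweeps per second, that is only about 60,000 points per frame from one laser, which is part of why resolution drops as the sweep rate rises.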

To avoid overheating, you can concentrate your point budget on the regions of greatest interest, such as those along the road or those that have known targets. (I describe this in this US patent.) While Luminar did not say it, several people reported to me that Luminar’s current units require a fairly hefty box to deliver power to the LIDAR and do the computational post-processing that produces the pretty point clouds.


Luminar’s device is also very expensive (they won’t publish the price) but Toyota built a test vehicle with 4 of their double-laser units, one in each of the 4 directions. Some teams are expressing the desire to see over 200m in all directions, while others think it is really only necessary to see that far in the forward direction.

You do have to see far to the left and right when you are entering an intersection with cross traffic, and you also have to see far behind if you are changing lanes on a German autobahn and somebody might be coming up on your left 100km/h faster than you (it happens!). Many teams feel that radar is sufficient there, because the type of decision you need to make (do I go or don’t I?) is not nearly so complex and needs less information.

As noted before, while most early LIDARS were in the 900nm bands, Google/Waymo built their own custom LIDAR with long range, and the effort of Uber to build the same is the subject of the very famous lawsuit between the two companies. Princeton Lightwave, another company making 1.5 micron LIDAR, was recently acquired by Ford/Argo AI — an organization run by Bryan Salesky, another Google car alumnus.

I saw a few other companies with 1.5 micron LIDARs, though none as far along as Luminar. Several pointed out that they do not need a large box for power and computing, suggesting they need only about 20 to 40 watts for the whole device. One was Innovusion, which did not have a booth but showed me its device in a suite, where it was of course not possible to test range claims.

Tetravue and new Time of Flight

Tetravue showed off their radically different time of flight technology. So far there have been only a few methods to measure how long it takes for a pulse of light to go out and come back, thus learning the distance.


The classic method is basic sub-nanosecond timing. To get 1cm accuracy, you need to measure the time at roughly 50 picosecond accuracy, and circuits are now getting that good. This can be done either with scanning pulses, where you send out a pulse and then look in precisely that direction for the return, or with “flash” LIDAR, where you send out a wide, illuminating pulse and have an array of detector/timers that count how long each pixel’s return took to come back. This method works at almost any distance.

The second method is to use phase. You send out a continuous beam but you modulate it. When the return comes back, it will be out of phase with the outgoing signal. How much out of phase depends on how long it took to come back, so if you can measure the phase, you can measure the time and distance. This method is much cheaper but tends to only be useful out to about 10m.
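The phase method can be sketched with the standard continuous-wave ranging formula; the 10 MHz modulation frequency used here is just an example, not a figure from any particular product.

```python
import math

C = 299_792_458  # speed of light, m/s

def distance_from_phase(phase_shift_rad, mod_freq_hz):
    """Distance implied by the phase shift of the modulated return beam."""
    return C * phase_shift_rad / (4 * math.pi * mod_freq_hz)

def ambiguity_range(mod_freq_hz):
    """Beyond this distance the phase wraps around and range is ambiguous."""
    return C / (2 * mod_freq_hz)

print(round(ambiguity_range(10e6), 1))  # 15.0 m at 10 MHz modulation
```

The wrap-around is the reason phase units are short range: raising the modulation frequency improves precision but shrinks the unambiguous range.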

Tetravue offers a new method. They send out a flash, and put a decaying (or opening) shutter in front of an ordinary return sensor. Depending on when the light arrives, it is attenuated by the shutter. The amount it is attenuated tells you when it arrived.
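As a sketch of the idea (Tetravue has not published its actual shutter profile, so a simple linear ramp is assumed here), the gated/ungated intensity ratio maps to arrival time, and hence to distance:

```python
C = 299_792_458  # speed of light, m/s

def arrival_time(gated, ungated, ramp_start_s, ramp_len_s):
    """Assume shutter transmission ramps linearly from 0 to 1 over the ramp.
    The fraction of light transmitted then encodes when the light arrived."""
    ratio = gated / ungated  # 0..1
    return ramp_start_s + ratio * ramp_len_s

def distance_m(gated, ungated, ramp_start_s, ramp_len_s):
    """Convert the recovered arrival time to one-way distance."""
    return C * arrival_time(gated, ungated, ramp_start_s, ramp_len_s) / 2

# A ~534ns ramp covers round trips out to about 80m, Tetravue's claimed range.
```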

I am interested in this because I played with such designs myself back in 2011 (instead proposing the technique for a new type of flash camera with even illumination), but I did not feel you could get enough range. Indeed, Tetravue only claims a maximum range of 80m, which is limiting — it’s not enough for highway driving or even local expressway driving, but could be useful for lower speed urban vehicles.

The big advantages of this method are cost — it uses mostly commodity hardware — and resolution. Their demo was a 1280×720 camera, and they said they were making a 4K version. That’s actually more resolution than most neural networks can take, but digital crops from within the image could work very well, making for the best object recognition results to be found, at least on closer targets. This might be a great tool for recognizing things like pedestrian body language and more.

At present the Tetravue uses light in the 800nm band. That is received more efficiently on silicon, but there is more ambient light from the sun in this band to interfere.

The different ways to steer

In addition to differing ways to measure the pulse, there are also many ways to steer it. Some of those ways include:

  • Mechanical spinning — this is what the Velodyne and other round LIDARs do. It allows the easy making of 360 degree view LIDARs and, in the case of the Velodyne, it also stops rain from collecting on the instrument. One big issue is that people are afraid of the reliability of moving parts, especially at scale.
  • Moving, spinning or vibrating mirrors. These can be sealed inside a box and the movement can be fairly small.
  • MEMS mirrors, which are microscopic mirrors on a chip. Still moving, but effectively solid state. These are how DLP projectors work. Some new companies like Innovision featured LIDARs steered this way.
  • Phased arrays — you can steer a beam by having several emitters and adjusting the phase so the resulting beam goes where you want it. This is entirely solid state.
  • Spectral deflection — it is speculated that some LIDARS do vertical steering by tuning the frequency of the beam, and then using a prism so this adjusts the angle of the beam.
  • Flash LIDAR, which does not steer at all, and instead has an array of detectors.

There are companies using all these approaches, or combinations of them.

The range of 900nm LIDAR

The most common and cheapest LIDARs are, as noted, in the 900nm wavelength band. This is a near-infrared band, but far enough from visible that not a lot of ambient light interferes. At the same time, at this wavelength it’s harder to get silicon to trigger on the photons, so it’s a trade-off.

Because this light acts like visible light and is focused by the lens of the eye, keeping the beam eye-safe is a problem. At a bit beyond 100m, at the maximum radiation level that is eye-safe, fewer and fewer photons reflect back to the detector from dark objects. Yet many vendors claim ranges of 200m or even 300m in this band, while others claim that is impossible. Only hands-on analysis can tell how reliably these longer ranges can actually be delivered, but most feel it can’t be done at the level needed.

There are some tricks which can help, including increasing sensitivity, but there are physical limits. One technique that is being considered is dynamic adjustment of the pulse power, reducing it when the target is close to the laser. Right now, if you want to send out a beam that you will see back from 200m, it needs to be so powerful that it could hurt the eye of somebody close to the sensor. Most devices try for physical eye-safety; they don’t emit power that would be unsafe to anybody. The beam itself is at a dangerous level, but it moves so fast that the total radiation at any one spot is acceptable. They have interlocks so that the laser shuts down if it ever stops moving.

To see further, you would need to detect the presence of an object (such as the side of a person’s head) that is close to you, and reduce the power before the laser scanned over to their eyes, keeping it low until past the head, then boosting it immediately to see far-away objects behind the head. This can work, but now a failure of the electronic power-control circuits could turn the device into an unsafe one, which people are less willing to risk.

The Price of LIDARs

LIDAR prices are all over the map, from the $100,000 Velodyne 128-line to solid state units forecast to drop close to $100. Who are the customers that will pay these prices?

High End

Developers working on prototypes often choose the very best (and thus most expensive) unit they can get their hands on. The cost is not very important on prototypes, and you don’t plan to release for a few years. These teams make the bet that high-performance units will be much cheaper when it’s time to ship. You want to develop and test with the performance you will be able to buy in the future.

That’s why a large fraction of teams drive around with $75,000 Velodynes or better. That’s too much for a production unit, but they don’t care about that. It has led people to predict, incorrectly, that robocars are far in the future.

Middle End

Units in the $7,000 to $20,000 range are too expensive as an add-on feature for a personal car. There is no component of a modern car that is this expensive, except the battery pack in a Tesla. But for a taxi, it’s a different story. With so many people sharing it, the cost is not out of the question compared to the human driver which is taken out of the equation. In fact, some would argue the $100,000 LIDAR over 6 years is still cheaper than that.
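The taxi economics are easy to see with a back-of-envelope amortization; the mileage and lifetime figures below are illustrative assumptions, not industry data.

```python
def sensor_cost_per_mile(sensor_cost, years, miles_per_year):
    """Naive straight-line amortization of a sensor over a vehicle's life."""
    return sensor_cost / (years * miles_per_year)

taxi = sensor_cost_per_mile(100_000, 6, 60_000)      # heavily utilized taxi
personal = sensor_cost_per_mile(100_000, 6, 12_000)  # typical personal car
print(round(taxi, 2), round(personal, 2))  # 0.28 1.39 dollars/mile
```

Against a human driver costing on the order of a dollar or more per mile, even a $100,000 unit is plausible for a taxi, while it is a non-starter as a personal-car option.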

In this case, cost is not the big issue; performance is. Everybody wants to be out on the market first, and if a better sensor can get you out a year sooner, you’ll pay for it.

Low end

LIDARs that cost (or will cost) below $1,000, especially below $250, are the ones of interest to major automakers who still think the old way: Building cars for customers with a self-drive feature.

They don’t want to add a great deal to the bill of materials for their cars, and they are driving demand for all the low-end, typically solid-state devices.

None

None of these LIDARS are available today in automotive quantities or quality levels. Thus you see companies like Tesla, who want to ship a car today, designing without LIDAR. Those who imagine LIDAR as expensive believe that lower cost methods, like computer vision with cameras, are the right choice. They are right in the very short run (because you can’t get a LIDAR) and in the very long run (when cost will become the main driving factor) but probably wrong in the time scales that matter.

Some LIDARs at CES

Here is a list of some of the other LIDAR companies I came across at CES. There were even more than this.

AEye — MEMS LIDAR and software fused with visible light camera

Cepton — MEMS-like steering, claims 200 to 300m range

Robosense RS-LiDAR-M1 Pro — MEMS, claims 200m range, 20fps, 0.09 deg by 0.2 deg resolution, 63 x 20 field of view

Surestar R-Fans (16 and 32 laser, puck style), up to 20Hz

Leishen MX series — short range for robots, 8 to 48 line units, (puck style)

ETRI LaserEye — Korean research product

Benewake Flash Lidar, shorter range.
Infineon box LIDAR (Innoluce prototype)

Innoviz — MEMS mirror 900nm LIDAR with claimed longer range

Leddartech — older company from Quebec, now making a flash LIDAR

Ouster — $12,000 super light LIDAR.

To come: thermal sensors, radars, computer vision, and better cameras

Top Robocar news of 2017


Here are the biggest Robocar stories of 2017.

Waymo starts pilot with no safety driver behind the wheel

By far, the biggest milestone of 2017 was the announcement by Waymo of their Phoenix Pilot which will feature cars with no safety driver behind the wheel, and the hints at making this pilot open to the public.

The huge deal is that Waymo’s lawyers and top executives signed off on the risk of running cars with no safety driver to take over in emergencies. There is still an employee in the back who can do an emergency shutdown but they can’t grab the traditional controls. A common mistake in coverage of robocars is to not understand that it’s “easy” to make a car that can do a demo, but vastly harder to make one that has this level of reliability. That Waymo is declaring this level puts them very, very far ahead of other teams.

Many new LIDAR and other sensor companies enter the market

The key sensor for the first several years of robocars will almost surely be LIDAR. At some point in the future, vision may get good enough but that date is quite uncertain. Cost is not a big issue for the first few years, safety is. So almost everybody is gearing up to use LIDAR, and many big companies and startups have announced new LIDAR sensors and lower prices.

News includes Quanergy (I am an advisor) going into production on a $250 8-line solid state unit, several other similar units in development from many companies, and several new technologies including 1.5 micron LIDARs from Luminar and Princeton Lightwave, 128 plane LIDARs from Velodyne and radical alternate technologies from Oryx in Israel and others. In addition, several big players have acquired LIDAR companies, indicating they feel it is an important competitive advantage.

At the same time, Waymo (which created its own special long range LIDAR) has been involved in a giant lawsuit against Uber, alleging that the Otto team appropriated Waymo secrets to build their own.

Here is some coverage I had on LIDAR deals.

In more recent news, today Velodyne cut the price of their 16 laser puck to $4,000. 16 planes is on the low side as a solo sensor but this price is quite reasonable for anybody building a taxi.

Regulations get reversed

In 2016 NHTSA published 116 pages of robocar regulations. Under the new administration, they reversed this and published some surprisingly light-handed replacements. States have also been promoting local operations, with Arizona coming out as one of the new winners.

Intel buys MobilEye

There were many big acquisitions with huge numbers, including NuTonomy (by Delphi) but the biggest ever deal was the $16B purchase of MobilEye by Intel.
MobilEye of course has a large business in the ADAS world but Intel wants the self-driving car part and paid a multi billion dollar premium for it.

Uber orders 24,000 Volvos

It’s not a real order quite yet but this intent to buy $1B of cars to put Uber software on shows how serious things are getting, and should remove from people’s minds the idea that Uber doesn’t intend to own a fleet.

Flying cars get a tiny bit more real

They aren’t here yet, but there’s a lot more action on Flying Cars, or in particular, multirotor drone-style vehicles able to carry a person. It looks like these are going to happen, and they are the other big change in the works for personal transportation. It remains uncertain if society will tolerate noisy helicopters filling the skies over our cities, but they certainly will be used for police, ambulance, fire and other such purposes, as well as over water and out in the country.

A little more uncertain is the Hyperloop. While the science seems to work, the real question is one of engineering and cost. Can you actually do evacuated tubes reliably and at a cost that works?

Warner Brothers and Intel experiment with in-robocar entertainment. Is that a good idea?

Intel and Warner made a splash at the LA Auto Show announcing how Warner will develop entertainment for viewing while riding in robotaxis. It’s not just movies to watch, their hope is to produce something more like an amusement park ride to keep you engaged on your journey.

Like most partnership announcements around robocars, this one is mainly there for PR, since they haven’t built anything yet. The idea is part interesting, part hype.

I’ll start with the negative. I think people will carry their entertainment with them in their pockets, and not want it from their cars. Why would I want a different music system with a different interface when my own music and videos are already curated by me and stored in my phone? All I really want is a speaker and screen to display them on.

This is becoming very clear on planes, where I prefer to watch movies I have pre-downloaded on my phone than what is on the bigger screen of the in-flight entertainment system. There are several reasons for that:

  • The UIs on most in-flight systems suck really, really badly. I mean it’s amazing how bad most of them are. (Turns out there is a reason for that.) Cars will probably do it better but the history is not promising.
  • Your personal device is usually newer with more advanced technology because you replace it every 2 years. You have curated the content in it and know the interface.
  • On airplanes in particular, they believe rules force them to pause your experience so that they can announce that duty free sales are now open in 3 languages. And 20 or more other interruptions, only a couple of which are actually important to hear for an experienced flyer.

So Warner is wise in putting a focus on doing something you can’t do with your personal gear, such as a VR experience, or immersive screens around the car. There is a unique opportunity to tune the VR experience to the actual motions of the car. In traffic, you can only tune to the needed motions. On the open road, you might actually be able to program a trip that deliberately slows or speeds up or turns when nobody else is around to make a cool VR experience.

While that might be nice, it would be mostly a gimmick, more like a ride you try once. I don’t think people will want to go everywhere in the batmobile. As such it will be more of a niche, or marketing trick.


More interesting is the ability to reduce carsickness with audio-visual techniques. Some people get pretty queasy if they look down for a long time at a book or laptop. Others are less bothered. A phone held in the hand seems to be easier to use for most than something heavier, perhaps because it moves with the motion of the car. For many years I have proposed that cars communicate their upcoming plans with subtle audio or visual cues so that people know when they are about to turn or slow down. Some experiments are now being reported on this and it will be interesting to see the results.

If you ride a subway, bus or commuter train today, the scene is always the same: a row of people, all staring at their phones.

Advertising

Some commenters have speculated that another goal here may be to present advertising to hapless taxi passengers. After all, ride a New York cab (and many others) and you will see an annoying video loop playing; each time, you have to go through the menus to mute the volume. With street-hailed taxis, you can’t shop around, so they can get away with this — what are you going to do, get out of the cab and wait for the next one?

I hope that with mobile-phone hail, competition prevents this sort of attempt to monetize the time of the customer. I definitely want my peace and quiet, and the revenue from the advertising — typically well under a dollar an hour — can’t possibly offset that for me.

DARPA challenge mystery solved and how to handle Robocar failures

A small mystery from Robocar history was resolved recently, and revealed at the DARPA grand challenge reunion at CMU.

The story is detailed at IEEE Spectrum and I won’t repeat it all, but a brief summary goes like this.

In the 2nd grand challenge, CMU’s Highlander was a favourite and doing very well. Mid-race it started losing engine power and it stalled for long enough that Stanford’s Stanley beat it by 11 minutes.

It was recently discovered that a small computerized fuel injector controller in the Hummer (one of only two) may have been damaged in a roll-over that Highlander had; if you pressed on it, the engine would lose power or fail.

People have wondered how the robocar world might be different if they had not had that flaw. Stanford’s victory was a great boost for their team, and Sebastian Thrun was hired to start Google’s car team — but Chris Urmson, lead on Highlander, was also hired to lead engineering, and Chris would end up staying on the project for much longer than Sebastian, who got seduced by the idea of doing Udacity. Google was much more likely to have closer ties to Stanford people anyway, being where it is.

CMU’s fortunes might have ended up better, but they managed to be the main source of Uber’s first team.

There are many stories of small things making a big difference. Also well known is how Anthony Levandowski, who entered a motorcycle in the race, forgot to turn on a stabilizer. The motorcycle fell over 2 seconds after he released it, dashing all of his team’s work. Anthony of course did OK (as another leader on the Google team, and then to Uber) but of course has recently had some “trouble”.

Another famous incident came when Volvo was doing a demo for press of their collision avoidance system. You could not pick a worse time for a failure, and of course there is video of it.

They had tested the demo extensively the night before. In fact they tested it too much, and left a battery connected during the night, so that it was drained by the morning when they showed off to the press.

These stories remind people of all the ways things go wrong. More to the point, they remind us that we must design expecting things to go wrong, and have systems that are able to handle that. These early demos and prototypes didn’t have that, but cars that go on the road do and will.

Making systems resilient is the only answer when they get as complex as they are. Early car computers were pretty simple, but a self-driving system is so complex that it is never going to be formally verified or perfect. Instead, it must be expected that every part will fail, and the failure of every part — or even every combination of parts — should be tested in both simulation, and where possible in reality. What is tested is how the rest of the system handles the failure, and if it doesn’t handle it, that has to be fixed.

It does not need to handle it perfectly, though. For example, in many cases the answer to failure will be, “We’re at a reduced safety level. Let’s get off the road, and summon another car to help the passengers continue on their way.”

It might even be a severely reduced safety level. Possibly even, as hard as this number may be to accept, 100 times less safe! That’s because the car will never drive very far in that degraded condition. Consider a car that has one incident every million miles. In degraded condition, it might have an incident every 10,000 miles. You clearly won’t drive home in that condition, but the 1/4 mile of driving at degraded level is as risky as 25 miles of ordinary driving at full operational level, which is a risk taken every day. As long as vehicles do not drive more than a short distance at this degraded level, the overall safety record should still be satisfactory.
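
The arithmetic here is worth making explicit. A quick sketch using the numbers above (the rates are the hypothetical figures from the text, not measured data):

```python
# Risk of a short degraded-mode drive vs. ordinary full-capability driving.
# Rates are the hypothetical figures from the text, not measured data.

NORMAL_RATE = 1 / 1_000_000    # incidents per mile at full capability
DEGRADED_RATE = 1 / 10_000     # 100x worse in the degraded state

def expected_incidents(miles, rate_per_mile):
    """Expected number of incidents over a trip of the given length."""
    return miles * rate_per_mile

# A quarter mile of limping to the shoulder in degraded mode...
degraded_risk = expected_incidents(0.25, DEGRADED_RATE)

# ...carries the same expected risk as this many miles of normal driving:
equivalent_miles = degraded_risk / NORMAL_RATE
print(equivalent_miles)  # 25.0
```

Whether 25 normal-miles’ worth of risk is acceptable for a pull-over maneuver is a policy question, but the point stands: short exposure keeps the total added risk small.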

Of course, if the safety level degrades to a level that could be called “dangerous” rather than “less safe” that’s another story. That must never be allowed.

An example of this would be failure of the main sensors, such as a LIDAR. Without a LIDAR, a car would rely on cameras and radar. Companies like Tesla think they can make a car fully safe with just those two, and perhaps they will some day. But even though those are not yet safe enough, they are safe enough for a problem like getting off the road, or even getting to the next exit on a highway.

This is important because we will never get perfection. We will only get lower and lower levels of risk, and the risk will not be constant — it will be changing with road conditions, and due to system or mechanical failures. But we can still get the safety level we want — and get the technology on the road.

Uber buys 24,000 Volvos, trolley problems get scarier, and liability

Uber and Volvo announced an agreement where Uber will buy, in time, up to 24,000 specially built Volvo XC90s which will run Uber’s self-driving software and, presumably, offer rides to Uber customers. While the rides are some time away, people have made note of this for several reasons.

  • This is a pretty big order for Volvo — it’s $1B of cars at the retail price, and 1/3 of the total sales of XC90s in 2017.
  • This is a big fleet — there are only 12,000 yellow cabs in New York City, for example, though thanks to Uber there are now far more hailable vehicles.
  • In spite of Volvo’s fairly major software efforts, they will be entirely on the hardware side for this deal, and it is not exclusive for either party.

I’m not clear who originally said it — I first heard it from Marc Andreessen — but “the truest form of a partnership is called a purchase order.” In spite of the scores of partnerships and joint ventures announced to get PR in the robocar space, this is a big deal, but it’s a sign of the sort of deal car makers have been afraid of. Volvo will be primarily a contract manufacturer here, and Uber will own the special sauce that makes the vehicle work, and it will own the customer. You want to be Uber in this deal. But what company can refuse a $1B order?

It also represents a big shift for Uber. Uber is often the poster child for the company that replaced assets with software. It owns no cars and yet provides the most rides. Now, Uber is going to move to the capital intensive model of owning the cars, and not having to pay drivers. There will be much debate over whether it should make such a shift. As noted, it goes against everything Uber represented in the past, but there is not really much choice.

First of all, to do things the “Uber” way would require that a large number of independent parties bought and operated robocars and then contracted out to Uber to bring them riders when not being used by their owners. Like UberX without having to drive the car. The problem is, that world is still a long way away. Car companies have put their focus on cars that can’t drive unmanned — much or at all — because that’s OK for the private car buyer. They are also far behind companies like Waymo and Uber in producing taxi capable vehicles.

If Uber waited for the pool of available private cars to get large enough, it would miss the boat. Other companies would have moved into its territory and undercut it with cheaper and cooler robotaxi service.

Secondly, you really want to be very sure about the vehicles you deploy in your first round. You want to have tested them, and you need to certify their safety because you are going to be liable in accidents no matter what you do. You can get the private owners to sign a contract taking liability but you will get sued anyway as the deep pocket if you do. This means you want to control the whole experience.

The truth is, capital is pretty cheap for companies like Uber. Even cheaper for companies like Apple and Google that have the world’s largest pools of spare capital sitting around. The main risk is that these custom robocars may not have any resale value if you bet wrong on how to build them. Fortunately, taxis wear out in about 5 years of heavy use.

Uber continues to have no fear of telling the millions of drivers who work “for” them that they will be rid of them some day. Uber driver is an unusual job, and nobody thinks of it as a career, so they can get away with this.

Trolley problem gets scarier

Academic ethicists, when defending discussions of the Trolley Problem, claim that while they understand the problems are not real, they are still valuable teaching tools for examining real questions.

The problem is the public doesn’t understand this, and is morbidly fascinated beyond all rationality with the idea of machines deciding who lives or dies. This has led Barack Obama to ask about it in his first statement on robocars, and many other declarations that we must figure out this nonsense question before we deploy robocars on the road. The now-revoked proposed NHTSA guidelines of 2016 included a theoretically voluntary requirement that vendors outline their solutions to this “problem.”

This almost got more real last week when a proposed UK bill would have demanded trolley solutions. The bill was amended at the last minute, and a bullet dodged that would have delayed the deployment of life saving technology while trying to resolve truly academic questions.

It is time for ethical ethicists to renounce the trolley problem. Even if, inside, they still think it’s got value, that value is far outweighed by the irrational fears and actions it triggers in public debate. Real people are dying every day on the roads, and we should not delay saving them to figure out how to do the “right” thing in hypothetical situations that are actually extremely rare to nonexistent. Figuring out the right thing is the wrong thing. Save solving trolley problems for version 4, and get to work on version 0.2.

There is real ethical work to be done, covering situations that happen every day. Real world safety tradeoffs and their morality. Driving on roads where breaking the vehicle code is the norm. Contrasting cost with safety. These are the places where ethical expertise can be valuable.

Simulators take off

For a long time I have promoted the idea of an open source simulator. Now two projects are underway.

The first is the Project Apollo simulator from Baidu, and a new entrant called Carla is also in the game.

This is good to see, but I hope the two simulators also work together. One real strength of an open platform simulator is that people all around the world can contribute scenarios to it, and then every car developer can test their system in those scenarios. We want every car tested in every scenario that anybody can think of.

Waymo has developed its own simulator, and fed it with every strange thing their cars have encountered in 5M kilometers of real world driving. It’s one of the things that gives them an edge. They’ve also loaded the simulator with everything their team members can think of. This way, their driving system has the experience of seeing and trying out every odd situation that will be encountered in many lifetimes of human driving, and eventually on every type of road.

That’s great, but no one company can really build it all. This is one of the great things to crowdsource. Let all the small developers, all the academics, and even all the hobbyists build simulations of dangerous scenarios. Let people record and build scenarios for driving in every city of the world, in every situation. No one company can do that but the crowd can. This can give us the confidence that any car has at least at some level encountered far more than any human driver ever could, and handled it well.

Unusual liability rule

Some auto vendors have proposed a liability rule for privately owned robocars that will protect them from some liability. The rule would declare that if you bought a robocar from them, and you didn’t maintain it according to the required maintenance schedule then the car vendor would not be liable for any accident it had.

It’s easy to see why automakers would want this rule. They are scared of liability and anything that can reduce it is a plus for them.

At the same time, this will often not make sense. Just because somebody didn’t change the oil or rotate the tires should not remove liability for a mistake by the driving system that had no relation to those factors.

What’s particularly odd here is that robocars should always be very well maintained. That’s because they will be full of sensors to measure everything that’s going on, and they will also be able to constantly test every system that can be tested.

Consider the brakes, for example. Every time a robocar brakes, it can measure that the braking is happening correctly. It can measure the temperature of the brake discs. It can listen to the sound or detect vibrations. It can even, when unmanned, find itself on an empty street and hit the brakes hard to see what happens.

In other words, unexpected brake failure should be close to impossible (particularly since robocars are being designed with 2 or 3 redundant braking systems.)

More to the point, a robocar will take itself in for service. When your car is not being used, it will drive itself over for an oil change or any other maintenance it needs. You would have to deliberately stop it to prevent it from being maintained on schedule. Certainly no car in a taxi fleet will remain unmaintained except through deliberate negligence.

Robocar/LIDAR news and video of the Apple car

Robocar news is fast and furious these days. I certainly don’t cover it all, but will point to stories that have some significance. Plus, to tease you, here’s a clip from my 4K video of the new Apple car that you’ll find at the end of this post.

Lidar acquisitions

There are many startups in the Lidar space. Recently, Ford’s Argo division purchased Princeton Lightwave, a small LIDAR firm which was developing 1.5 micron lidars. 1.5 micron lidars include Waymo’s own proprietary unit (subject of the lawsuit with Uber) as well as those from Luminar and a few others. Most other lidar units work in the 900nm band of near infrared.

Near infrared lasers and optics can be based on silicon, and silicon can be cheap because there is so much experience and capacity devoted to making it. 1.5 micron light is not picked up by silicon, but it’s also not focused by the lens of the eye. That means that you can send out a lot more power and still not hurt the eye, but your detectors are harder to make. That extra power lets you see to 300m, while 900nm band lidars have trouble with black objects beyond 100m.

100m is enough for urban driving, but is not a comfortable range for higher speeds. Radar senses far but has low resolution. Thus the desire for 1.5 micron units.

GM/Cruise also bought Strobe, a small lidar firm with a very different technology. Their technology is in the 900nm band, but they are working on ways to steer the beam without moving mirrors the way Velodyne and others do. (Quanergy, in which I have stock, also is developing solid state units, as are several others.) They have not published but there is speculation on how Strobe’s unit works.

What’s interesting is that these players have decided, like Waymo, Uber and others, that they should own their own lidar technology, rather than just buy it from suppliers. This means one of two things:

  • They don’t think anybody out there can supply them with the LIDAR they need — which is what motivated Waymo to build their own, or
  • They think their in-house unit will offer them a competitive advantage

On the surface, neither of these should be true. Suppliers are all working on making lidars because most people think they will be needed. And folks are working on both 900nm and 1.5 micron units, eager to sell. It’s less clear if any of these units will be significantly better than the ones the independent suppliers are building. That’s what is needed to get a competitive edge. The unit needs to be longer range, better resolution, better field of view or more reliable than supplier units. It’s not clear why that will be, but nobody has released solid specs.

What shouldn’t matter is that they can make it cheaper in-house, especially for those working on taxi service. First of all, it’s very rare you can get something cheaper by buying the entire company. Secondly, it’s not important to make it much cheaper for the first few years of production. Nobody is going to win or lose based on whether their taxi unit costs a few thousand more dollars to make.

So there must be something else that is not revealed driving these acquisitions.

Velodyne, which pioneered the lidar industry for self-driving cars, just announced their new 128 line lidar with twice the planes and 4x the resolution of the giant “KFC Bucket” unit found on most early self-driving car prototypes.

The $75,000 64-laser Velodyne kick-started the industry, but it’s big and expensive. This new one will surely also be expensive but is smaller. In a world where many are working with the 16 and 32 laser units, the main purpose of this unit, I think, will be for those who want to develop with the sensor of the future.

Doing your R&D with high-end gear is often a wise choice. In a few years, the high resolution gear will be cheaper and ready for production, and you want to be ready for that. At the same time, it’s not yet clear how much 128 lines gain over 64. It’s not easy to identify objects in lidar, but you don’t absolutely have to, so most people have not worried too much about it.

Pioneer, the Japanese electronics maker, has also developed a new lidar. Instead of trying to steer a laser entirely with solid state techniques, theirs uses MEMS mirrors, similar to those in DLP projectors. This is effectively solid state even though the mirrors actually move. I’ve seen many lidar prototypes that use such mirrors but for some reason they have not gone into production. It is a reasonably mature technology, and can be quite low cost.

More acquisitions and investment

Delphi recently bought Nutonomy, the Singapore/MIT based self-driving car startup. I remember visiting them a few times in Singapore and finding them to be not very far along compared to others. Far enough along, evidently, to fetch $400M. Delphi is generally one of the better-thinking tier one automotive suppliers and now it can go full-stack with this purchase.

Of course, since most automakers have their own full stack efforts underway, just how many customers will the full-stack tier one suppliers sell to? They may also be betting that some automakers will fail in their projects, and need to come to Delphi, Bosch or others for rescue.

Another big investment is Baidu’s “Project Apollo.” This “moonshot” is going to invest around $1.5B in self-driving ventures, and support it with open source tools. They have lots of partners, so it’s something to watch.

Other players push forward

Navya was the first company to sell a self-driving car for consumer use. Now their new vehicle is out. In addition, yesterday in Las Vegas, they started a pilot and within 2 hours had a collision. Talk about bad luck — Navya has been running vehicles for years without such problems. It was a truck that backed into the Navya vehicle, and the truck driver’s fault, but some are faulting the shuttle because all it did was stop dead when it saw the truck coming. It did not back out of the way, though it could have. Nobody was hurt.

Aurora, the startup created by Chris Urmson after he left Waymo, has shown off its test vehicles. No surprise, they look very much like the designs of Waymo’s early vehicles, a roof rack with a Velodyne 64 laser unit on top. The team at Aurora is top notch, so expect more.

Apple’s cars are finally out and about. Back in September I passed one and took a video of it.

You can see it’s loaded with sensors. No fewer than 12 of the Velodyne 16 laser pucks and many more to boot. Apple is surely following that philosophy of designing for future hardware.

Waymo deploys with no human safety driver oversight

Credit: Waymo

In a major milestone for robocars, Waymo has announced they will deploy in Phoenix with no human safety drivers behind the wheel. Until now, almost all robocars out there have only gone out on public streets with a trained human driver behind the wheel, ready to take over at any sign of trouble. Waymo and a few others have done short demonstrations with no safety driver, but now an actual pilot, providing service to beta-testing members of the public, will operate without human supervision.

https://youtube.com/watch?v=aaOB-ErYq6Y

This is a big deal, and indicates Waymo’s internal testing is showing a very strong safety record. The last time they published numbers, they had gone 83,000 miles between “required interventions.” Safety drivers are trained to intervene at any sign of a problem; each intervention is then replayed in simulation to find out what would have happened without it. If the car would have done the right thing, it’s not a required intervention.

Waymo must have built their number up a great deal from there. People have an accident that is reported to insurance about every 250,000 miles, and to police every 500,000 miles. Injury accidents happen every 1.2M miles, and fatalities every 80M miles. In Waymo’s testing, where they got hit a lot by other drivers, they discovered that there are “dings” about every 100,000 miles that don’t get reported to police or insurance.

People have argued about how good you have to be to put a robocar on the road. You need to be better than all those numbers. I will guess that Waymo has gotten the “ding” number up above 500,000 miles — which is close to a full human lifetime of driving. Since they have only driven 3.5M miles they can’t make real-world estimates of the frequency of injuries and certainly not of fatalities, but they can make predictions. And their numbers have convinced them, and the Alphabet management, that it’s time to deploy.
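
To see why Waymo can measure ding-level rates but only predict injury and fatality rates, scale the human benchmarks above to their fleet mileage (a toy calculation using only the rates quoted in this post):

```python
# Human-driver benchmarks quoted above, in miles per event.
HUMAN_MILES_PER_EVENT = {
    "unreported ding": 100_000,
    "insurance-reported accident": 250_000,
    "police-reported accident": 500_000,
    "injury accident": 1_200_000,
    "fatality": 80_000_000,
}

FLEET_MILES = 3_500_000  # Waymo's real-world mileage at the time

# Expected event counts if the fleet drove exactly like average human drivers.
# Where the expectation is far below 1, fleet data alone cannot measure the
# rate, so fatality-level safety has to come from prediction, not observation.
for event, miles_per_event in HUMAN_MILES_PER_EVENT.items():
    expected = FLEET_MILES / miles_per_event
    print(f"{event}: {expected:.2f} expected")
```

With 35 expected dings but only 0.04 expected fatalities over those miles, the real-world record can validate the former and says essentially nothing about the latter.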

Congratulations to all the team.

They did this not just with real world testing, but building a sophisticated simulator to test zillions of different situations, and a real world test track where they could test 20,000 different scenarios. And for this pilot they are putting it out on the calm and easy streets of Phoenix, probably one of the easiest places to drive in the world. Together, that gives the confidence to put “civilians” in the cars with no human to catch an error. Nothing will be perfect, but this vehicle should outperform a human driver. The open question will be how the courts treat that when the first problem actually does happen. Their test record suggests that may be a while; let us hope it is.

Where do we go from here?

This pilot should give pause to those who have said that robocars are a decade or more away, but it also doesn’t mean they are fully here today. Phoenix was chosen because it’s a much easier target than some places. Nice, wide streets in a regular grid. Flat terrain. Long blocks. Easy weather with no snow and little rain. Lower numbers of pedestrians and cyclists. Driving there does not let you drive the next day in Boston.

But neither does it mean it takes you decades to go from Phoenix to Boston, or even to Delhi. As Waymo proves things out in this pilot, first they will prove the safety and other technical issues. Then they will start proving out business models. Once they do that, prepare for a land rush as they leap to other cities to stake the first claim and the first-mover advantage (if there is one, of course.) And expect others to do the same, but later than Waymo, because as this demonstrates, Waymo is seriously far ahead of the other players. It took Waymo 8 years to get to this, with lots of money and probably the best team out there. But it’s always faster to do something the 2nd time. Soon another pilot from another company will arise, and when it proves itself, the land rush will really begin.

Robocars will make traffic worse before it gets better

Many websites paint a very positive picture of the robocar future. And it is positive, but far from perfect. One problem I worry about in the short term is the way robocars are going to make traffic worse before they get a chance to make it better.

The goal of all robocars is to make car travel more pleasant and convenient, and eventually cheaper. You can’t make something better and cheaper without increasing demand for it, and that means more traffic.

This is particularly true for the early-generation pre-robocar vehicles in the plans of many major automakers. One of the first products these companies have released is sometimes called the “traffic jam assist.” This is a self-driving system that only works at low speed in a traffic jam.

Turns out that’s easy to do, effectively a solved problem. Low speed is inherently easier, and the highway is a simple driving environment without pedestrians, cyclists, intersections or cars going the other way. When you are boxed in with other cars in a jam, all you have to do is go with the flow. The other cars tell you where you need to go. Sometimes it can be complex when you get to whatever is blocking the road to cause the jam, but handoff to a human at low speeds is also fairly doable.
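
The “go with the flow” logic can be caricatured in a few lines. This is a toy proportional follower with made-up gains, not any vendor’s actual control system:

```python
# Toy stop-and-go follower: track the lead car's speed while holding a gap.
# Real traffic jam assists fuse radar/camera tracks and smooth the command;
# this only shows the control idea. All gains and limits are invented.

def follow_speed(lead_speed, gap_m, target_gap_m=8.0,
                 gap_gain=0.3, max_speed=8.0):
    """Commanded speed in m/s for the next control step."""
    # Match the lead car, nudged by how far we are from the target gap.
    cmd = lead_speed + gap_gain * (gap_m - target_gap_m)
    return max(0.0, min(max_speed, cmd))

# Lead car stopped, gap already at target: we stop too.
print(follow_speed(lead_speed=0.0, gap_m=8.0))             # 0.0

# Lead car creeping at 1 m/s with the gap opened to 12 m: close it gently.
print(round(follow_speed(lead_speed=1.0, gap_m=12.0), 2))  # 2.2
```

The hard parts of the real product are perception and the handoff, not this control loop, which is why low-speed jam assist arrived first.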

These products will be widely available soon, and they will make traffic jams much more pleasant. Which means there might be more of them.

I don’t have a 9 to 5 job, so I avoid travel in rush hour when I can. If somebody suggests we meet somewhere at 9am, I try to push it to 9:30 or 10. If I had a traffic jam assist car, I would be more willing to take the meeting at 9. When on the way, if I encountered a traffic jam, I would just think, “Ah, I can get some email done.”

After the traffic jam assist systems come the highway systems, which allow you to take your eyes off the road for an extended time. They arrive pretty soon, too. These will encourage slightly longer commutes. That means more traffic, and also changes to real estate values. The corporate-run commuter buses from Google, Yahoo and many other tech companies in the SF Bay Area have already done that, making people decide they want to live in San Francisco and work an hour’s bus ride away in Silicon Valley. The buses don’t make traffic worse, but those doing this in private cars will.

Is it all doom?

Fortunately, some factors will counter a general trend to worse traffic, particularly as full real robocars arrive, the ones that can come unmanned to pick you up and drop you off.

  • As robocars reduce accident levels, that will reduce one of the major causes of traffic congestion.
  • Robocars don’t need to slow down and stare at accidents or other unusual things on the road, which also causes congestion.
  • Robocars won’t overcompensate on “sags” (dips) in the road. This overcompensation on sags is the cause of almost half the traffic congestion on Japanese highways.
  • Robocars look like they’ll be mainly electric. That doesn’t do much about traffic, but it does help with emissions.
  • Short-haul “last mile” robocars can actually make the use of trains, buses and carpools vastly more convenient.
  • Having only a few cars which drive more regularly, even something as simple as a good quality adaptive cruise control, actually does a lot to reduce congestion.
  • The rise of single person half-width vehicles promises a capacity increase, since when two find one another on the road, they can share the lane.
  • While it won’t happen in the early days, eventually robocars will follow the car in front of them with a shorter gap if they have a faster reaction time. This increases highway capacity.
  • Early robocars won’t generate a lot of carpooling, but it will pick up fairly soon (see below.)
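
The capacity gain from shorter following gaps (the second-to-last bullet) is easy to quantify with a constant-time-headway model. The speed, headways and car length here are illustrative assumptions:

```python
# Lane capacity under a constant-time-headway model: each vehicle occupies
# (speed * headway_time + car_length) of road. All numbers are illustrative.

def lane_capacity(speed_mps, headway_s, car_length_m=4.5):
    """Vehicles per hour through one lane."""
    spacing_m = speed_mps * headway_s + car_length_m
    return 3600 * speed_mps / spacing_m

speed = 30.0  # about 108 km/h

human = lane_capacity(speed, headway_s=1.5)  # typical human following time
robot = lane_capacity(speed, headway_s=0.5)  # faster computer reaction

print(round(human), round(robot))  # roughly 2182 vs 5538 vehicles/hour
```

Even this crude model shows why reaction time, not road width, bounds highway throughput once cars follow precisely.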

What not to worry about

There are a few nightmare situations people have talked about that probably won’t happen. Today, a lot of urban driving involves hunting for parking. If we do things right, robocars won’t ever hunt for parking. They (and you) will be able to make an online query for available space at the best price and go directly to it. But they’ll do that after they drop you off, and they don’t need to park super close to your destination the way you need to. To incorporate city spaces into this market, a technology upgrade will be needed, and that may take some time, but private spaces can get in the game quickly.

What also won’t happen is people telling their car to drive around rather than park, to save money. Operating a car today costs about $20/hour, which is vastly more than any hourly priced parking, so nobody is going to do that to save money unless there is literally no parking for many miles. (Yes, there are parking lots that cost more than $20, but that’s because they sell you many hours or a whole day and don’t want a lot of in and out traffic. Robocars will be the most polite parking customers around, hiding valet-style at the back of the lot and leaving when you tell them.)

Another common worry is that people will send their cars on long errands unmanned. For example, mom might take the car downtown, then send it all the way back for dad to do a later commute, then back to pick up the kids at school. While that’s not impossible, it’s actually not going to be the cheap or efficient thing to do. Thanks to robotaxis, we’re going to start thinking of cars as devices that wear out by the mile, not by the year, and all their costs will be by the mile except parking and $2 of financing per day. All this unmanned operation will almost double the cost of the car, and the use of robotic taxi services (Robocar Uber) will be a much better deal.

There will be empty car moves, of course. But it should not amount to more than 15% of total miles. In New York, taxis are vacant of a passenger for 38% of miles, but that’s because they cruise around all day looking for fares. When you only move when summoned, the rate is much better.

And then it gets better

After this “winter” of increased traffic congestion, the outlook gets better. Aside from the factors listed above, in the long term we get the potential for several big things to increase road capacity.

The earliest is dynamic carpooling, as you see with services like UberPool and LyftLines. After all, if you look at a rush-hour highway, you see that most of the seats going by are empty. Tools which can fill these seats can increase the capacity of the roads close to three times just with the cars that are moving today.

The next is advanced robocar transit. The ability to make an ad-hoc, on-demand transit system that combines vans and buses with last mile single person vehicles in theory allows almost arbitrary capacity on the roads. At peak hours, heavy use of vans and buses to carry people on the common segments of their routes could result in a 10-fold (or even more) increase in capacity, which is more than enough to handle our needs for decades to come.

Next after that is dynamic adaptation of roads. In a system where cities can change the direction of roads on demand, you can get more than a doubling of capacity when you combine it with repurposing of street parking. On key routes, street parking can be reserved only for robocars prior to rush hour, and then those cars can be told they must leave when rush hour begins. (Chances are they want to leave to serve passengers anyway.) Now your road has seriously increased capacity, and if it’s also converted to one-way in the peak direction, you could almost quadruple it.
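
The arithmetic behind that claim can be sketched quickly; the lane counts below assume an illustrative two-way street with one travel lane and one parking lane in each direction.

```python
# Peak-direction lanes on a two-way street with curbside parking,
# before and after dynamic reallocation (illustrative lane counts).
base_peak_lanes = 1                     # one travel lane each way normally
with_parking = base_peak_lanes + 1      # parking lane cleared of robocars
one_way_peak = 2 * with_parking         # both directions given to the peak
print(one_way_peak / base_peak_lanes)   # → 4.0, the "almost quadruple"
```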

The final step does not directly involve robocars, since it requires every car (not just the robocars) to participate, which today means each car carrying a smartphone or similar device. This is the use of smart, internet-based road metering. With complete metering, you never get more cars trying to use a road segment than it has capacity to handle, so you very rarely get traffic congestion. You also don’t get induced demand greater than the capacity, solving the bane of transportation planners.
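
A minimal sketch of how per-segment metering might work, using slot-based admission; the class, its capacity numbers, and the whole API are invented for illustration.

```python
# Toy internet road meter: each road segment grants entry slots per minute,
# never admitting more cars than its capacity (illustrative only).
from collections import deque

class SegmentMeter:
    def __init__(self, capacity_per_min):
        self.capacity = capacity_per_min
        self.granted = 0
        self.queue = deque()   # cars waiting for a slot

    def request_entry(self, car_id):
        if self.granted < self.capacity:
            self.granted += 1
            return True        # enter now
        self.queue.append(car_id)
        return False           # wait, or reroute around the segment

    def tick(self):
        # New minute: reset grants, then admit queued cars up to capacity.
        self.granted = 0
        while self.queue and self.granted < self.capacity:
            self.queue.popleft()
            self.granted += 1

meter = SegmentMeter(capacity_per_min=3)
admitted = [meter.request_entry(i) for i in range(5)]
print(admitted)   # first 3 admitted, last 2 queued for the next minute
```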

Robocar-only highways are not quite so nice an idea as expected

Recently Madrona Ventures, in partnership with Craig Mundie (former Microsoft CTO), released a white paper proposing an autonomous vehicle corridor between Seattle and Vancouver on I-5 and BC Highway 99. While there are some useful ideas in it, the basic concept contains misconceptions about traffic management, infrastructure planning, and robocars.

Carpool lanes are hard

The proposal starts with a call for allowing robocars in the carpool lanes, then moves to a robocar-only lane, then to more lanes being robocar-only, and finally the whole highway. I have mostly avoided talk of the all-robocar road because there are so many barriers to it that it remains very far in the future. This proposal wants to make it happen sooner, which is not necessarily bad, but it sure is difficult.

Carpool lanes are poorly understood, even by some transportation planners. For optimum traffic flow, you want to keep every lane at near capacity, but not over it. If you have a carpool lane at half-capacity, you have a serious waste of resources, because the vast majority (around 90%) of the carpools are “natural carpools” that would exist regardless of the lane perk. They are things like couples or parents with children. A half-empty carpool lane makes traffic worse for everybody but the carpoolers, for whom the trip does improve.

That’s why carpool lanes will often let in electric cars, and why “high occupancy toll” lanes let in solo drivers willing to pay a price. In particular with the HOT lane, you can set the price so you get just enough cars in the carpool lane to make it efficient, but no more.
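
The HOT-lane pricing loop can be sketched as a simple feedback controller; the step size, bounds, and flow numbers below are all invented for illustration.

```python
# Toy HOT-lane price controller: nudge the toll up when the lane exceeds
# its target flow, down when it falls short (all numbers illustrative).
def update_toll(toll, observed_flow, target_flow,
                step=0.25, floor=0.50, ceiling=15.00):
    if observed_flow > target_flow:
        toll += step        # lane too full: price some solo drivers out
    elif observed_flow < target_flow:
        toll -= step        # lane underused: invite more paying drivers
    return min(max(toll, floor), ceiling)

toll = 2.00
for flow in (1900, 1850, 1700, 1650, 1750):   # cars/hour vs. 1800 target
    toll = update_toll(toll, flow, target_flow=1800)
print(toll)   # → 1.75
```

Real HOT lanes use more sophisticated pricing, but the principle is the same: the price floats so the lane stays near, and never over, capacity.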

(It is not, of course, this simple, as sometimes carpool lanes jam up because people are scared of driving next to slow moving regular lanes, and merging is problematic. Putting a barrier in helps sometimes but can also hurt. An all-robocar lane would avoid these problems, and that is a big plus.)

Letting robocars into the carpool lane can be a good idea, if you have room. If you have to push electric cars out, that may not be the best public goal, but it is a decision a highway authority could make. (If the robocars are electric, which many will be, it’s OK.)

The transition, however, from “robocars allowed” to “robocars only” for the lane is very difficult. Because you do indeed have a decent number of carpools (even if only 10% are induced), you have to kick them out at some point to grow robocar capacity. You can’t have a switch day without causing more traffic congestion for some time after it. If you are willing to build a whole new lane (as is normal for carpool creation) you can do it, but only by wasting a lot of the new lane at first.

Robocar packing

Many are attracted to the idea that robocars can follow more closely behind another vehicle if they have faster reaction times. They also have the dream that the cars will be talking to one another, so they can form platoons that follow even more closely. The inter-car communication (V2V) creates too much computer-security risk to be likely, though some still dream of a magic solution that will make it safe for 1,500 kg robots to exchange complex messages with every car they randomly encounter on the road. Slightly closer following is still possible without it.

Platooning has a number of issues. It was at first popular as an idea because the lead car could be human driven. You didn’t have to solve the whole driving problem to make a platoon. Later experiments showed a number of problems, however.

  • If not in a fully dedicated lane, other drivers keep trying to fit themselves into the gaps in a platoon, unless the cars are super-close.
  • When cars are close, they throw up stones from the road, constantly cracking windshields, destroying a car’s finish, and in some experiments, destroying the radiator!
  • Any failure can be catastrophic, since multiple cars will be unable to avoid being in the accident.
  • Fuel savings at workable following distances are around 10%. Nice, but not exciting.

To have platoons, you need cars designed with stone-shields or some other technique to stop stones from being thrown. You need a more secure (perhaps optical rather than radio) protocol for communication of only the simplest information, such as when brakes are being hit. And you must reach a safety level where the prospect of chain accidents is no longer frightening.

In any event, the benefits of packing are not binary. A lane that is 90% robocars and 10% human gets roughly 90% of the benefit of a 100% robocar lane. There is no magic extra benefit at 100% as far as packing is concerned. This is even true, to some degree, of the problems caused by erratic human drivers. Humans brake for no good reason, and this causes traffic jams, but research shows that even a small fraction of robocars reacting properly to such braking is enough to keep it from causing major jams; there is actually a diminishing return from adding more robocars. Traffic flow does need some gaps in it to absorb braking events, and while you could get away with fewer gaps on an all-robocar road, I am not sure that is wise. As long as there is a modest buffer, robocars trailing a human who brakes for no reason can absorb it and restore the flow as soon as the human speeds up again.
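
A toy headway model shows that rough linearity; every number in it (speed, per-driver headways, car length) is an invented assumption, not data from the post.

```python
# Toy headway model: lane capacity (vehicles/hour) as a function of the
# robocar share. All speeds, headways, and lengths are assumptions.
def lane_capacity(robocar_frac, speed_mps=30.0,
                  human_gap_s=2.0, robocar_gap_s=1.0, car_len_m=5.0):
    # Average time headway blends the human and robocar following gaps.
    avg_gap_s = robocar_frac * robocar_gap_s + (1 - robocar_frac) * human_gap_s
    spacing_m = speed_mps * avg_gap_s + car_len_m   # nose-to-nose spacing
    return 3600 * speed_mps / spacing_m             # vehicles per hour

for frac in (0.0, 0.5, 0.9, 1.0):
    print(frac, round(lane_capacity(frac)))
```

With these assumptions, a 90% robocar lane captures over 80% of the capacity gain of a 100% robocar lane, roughly in line with the claim above: there is no cliff at 100%.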

Going faster

There is a big benefit to all-robocar lanes if you are willing to allow the cars in that lane to drive much faster. That’s something that can’t happen in a mixed lane. The white paper makes only one brief mention of that benefit.

Other than this, the cars don’t get any great benefit from grouping. Of course, anybody would prefer to drive among robocars, which should drive more safely and more predictably. They won’t block the lane the way human drivers do. They will tailgate you (perhaps uncomfortably so), but only when it’s safe. They could cluster together to enjoy this benefit on their own, without any need for regulations.

The danger of robocar-only lanes

One of the biggest reasons to be wary of robocar-only lanes is that, while this proposal does not say it, most such proposals have been put forward in the belief that robocars are not safe enough to mix with regular traffic. That is true today for the prototypes, but all teams plan to make vehicles which do meet that safety goal before they ship.

Many dedicated-lane proposals have essentially called for robocar operation only in the dedicated lanes, with manual driving required everywhere else. If you declare that the vehicles are not safe without a special lane, you turn them into vehicles with a very limited domain of operation. Since the creation of new dedicated lanes will be a very long (decades-long) process, that’s an incredible damper on the deployment of the technology. “Keep those things in their own special lanes” means delaying those things by decades.

The white paper does not advocate this. But there is a danger that the concept will be co-opted by those who do. As long as the benefits are minor, why take that risk?

Do we need it?

In general, any plan that calls for infrastructure change or political change is risky because of the time scales involved. It is quite common for governmental authorities to draft plans that take years or decades to solve things software teams will solve in months or even, at the basic level, in hours. We should always make sure there is no software solution before starting down the long and high-momentum path of infrastructure change, even change as simple as repainting.

Most of the benefits of all-robocar highway lanes arrive without mandating them; the ability to drive at greater speed is the main one that doesn’t. The rest happens everywhere, without planning or political difficulty. Banning human drivers from lanes is going to be politically difficult; banning them from a main artery would be even harder.

For high speeds, I actually think that airplanes and potentially the hyperloop provide interesting answers, at least for trips of more than 150 miles. The white paper makes a very common poor assumption: that other technologies will stand still as we move toward 2040. I know this is not true. I have big hopes for better aviation, including electric planes, robotic planes and, most of all, better airports that create a seamless transfer from robocar to aircraft, entirely unlike the nightmare we have built today.

On the ground, while I am not a fan of existing rail technology, new technologies like hyperloop are just starting to show some promise. If it can be built, hyperloop will be faster and more energy efficient, and through the use of smaller pods rather than long trains, offer travel without a schedule.

On the plus side, a plan for robocar-only lanes is not a grand one. If you can sell it politically, you don’t need to build much infrastructure. It’s just some signs and new paint.

Some other uses for all-robocar lanes

Once density is high enough, I think all-robocar lanes could be useful as barriers on a highway with dynamic lane assignment. To do this, you would just have a big wide stretch of pavement, and depending on traffic demand, allocate lanes to a direction. The problem is the interface lane. We may not want human drivers to drive at 75mph with other cars going the other way just 4 feet away. Robocars, however, could drive exclusively in the two border lanes, and do it safely. They would also drive a little off-center to create a larger buffer to avoid the wind-shake of passing close. No trucks in these lanes!

In an ideal situation, you would get a lot more capacity by paving over the shoulders and median to do this. With no median, though, you still have a risk of runaway cars (even robocars) crossing into oncoming traffic. A simpler approach would be to do this on existing highways: on a 6-lane highway, you could allocate 4 lanes one way and 2 the other, but insist that the two border lanes be robocar-only, if we trust them. A breakdown by a robocar going in the counter-direction at high speed could still be an issue. Of course, this is how undivided highways already work, but they have lower speeds and lighter traffic.

GM accepts all liability in robocars, and other news

General Motors announced this week that they would “take full responsibility” if a crash takes place during an autonomous driving trip. This follows a pledge to do the same made some time ago by Daimler, Google and Volvo and possibly others.

What’s interesting is that they don’t add the caveat “if the system is at fault.” Of course, if the system is not at fault, they can get payment from the other driver, and so it’s still OK to tell the passenger or owner that GM takes responsibility.

GM is moving on a rapid timetable with the technology they bought with Cruise not long ago. In fact, rumours of a sooner-than-expected release actually shot their stock up a bit this week.

Even to this day I still see articles that ask, “who is liable in an accident?” and then treat the answer as unknown or hard to figure out. It never was. There was never any doubt that the creators of these vehicles would take responsibility for any accidents they cause; even if they tried not to, the courts would assign the liability to them. People have been slow to say it because lawyers always advise clients never to say in advance that you will take liability for something. Generally good advice, but pointless here, and the message of responsibility makes customers feel better. Would you get into a taxi if you knew you would be liable if the driver crashed?

Senate bill

In other news this week, a Senate panel passed its own version of the House bill deregulating robocars. Notable was the exclusion of trucks, at the request of the Teamsters. I have predicted since this all began that the Teamsters would eventually bring their influence to bear on automated trucking. They will slow things down, but it’s a battle they won’t win. Truck accidents kill 4,000 people every year, and truck driving is a grueling, boring profession whose annual turnover sometimes exceeds 100%. At that rate, even if all-automated truck fleets were introduced today, it would be a very long time before somebody who actually wanted a trucking job lost it to automation. Indeed, even in the mostly automated world there will still be routes and tasks best served by humans, and they will be served by those humans who want the work.

Actually, this new-world trucking will be a much nicer job. It will be safer, and nobody will drive the long-haul cross-country routes that grind you with boredom, take you away from your home and family for a week or more while you eat bad food and sleep in cheap motels or the back of your rig.

Uber

Speaking of trucking: while I have not been commenting much on the Waymo/Uber lawsuit, both because of my inside knowledge and because the personalities don’t bear much on the future of the technology, it certainly has been getting fast and furious.

You can read the due diligence report Uber had prepared before buying Otto, and a Wired article which starts with a silly headline but has some real information as well.

Other items

Luminar, the young 1.5 micron LIDAR startup, has announced that Toyota will use their LIDARs.

Lyft has added Ford, along with Google, to its partner list. Since Lyft did a $500M investment deal with GM, it’s clear they don’t want to stick with just one player, even for that sum. Google may be in for larger sums; it does seem clear that the once-happy partnership of Uber and Google is over.

Baidu announced a 10 billion Yuan investment fund for self-driving startups.

Rumours suggest Waymo may expand their Phoenix pilot to a real self-driving taxi service for the public sooner than expected.

New NHTSA Robocar regulations are a major, but positive, reversal

NHTSA released their latest draft robocar regulations just a week after the U.S. House passed a new regulatory regime and the Senate started working on its own. The proposed regulations preempt state regulation of vehicle design, and allow companies to apply for high-volume exemptions from the standards that exist for human-driven cars.

It’s clear that the new approach will be quite different from the Obama-era one, much more hands-off. There are not a lot of things to like about the Trump administration but this could be one of them. The prior regulations reached 116 pages with much detail, though they were mostly listed as “voluntary.” I wrote a long critique of the regulations in a 4 part series which can be found in my NHTSA tag. They seem to have paid attention to that commentary and the similar commentary of others.

At 26 pages, the new report is much more modest, and actually says very little. Indeed, I could sum it up as follows:

  • Do the stuff you’re already doing
  • Pay attention to where and when your car can drive and document that
  • Document your processes internally and for the public
  • Go to the existing standards bodies (SAE, ISO etc.) for guidance
  • Create a standard data format for your incident logs
  • Don’t forget all the work on crash avoidance, survival and post-crash safety in modern cars that we worked very hard on
  • Plan for how states and the feds will work together on regulating this

Goals vs. Approaches

The document does a better job of understanding the difference between goals — public goods that it is the government’s role to promote — and approaches to those goals, which should be entirely the province of industry.

The new document is much more explicit that the 12 “safety design elements” are voluntary. I continue to believe that there is a risk they may not be truly voluntary, as there will be great pressure to conform with them, and possible increased liability for those who don’t, but the new document tries to avoid that, and its requests are much milder.

The document reflects the important realization that developers in this space will be creating new paths to safety and establishing new and different concepts of best practices. Existing standards have value, but they can at best encode conventional wisdom, and robocars will not be created using conventional wisdom. The new document instead mostly recommends that the existing standards be considered, which is a reasonable plan.

A lightweight regulatory philosophy

My own analysis is guided by a lightweight regulatory approach which has been the norm until now. The government’s role is to determine important public goals and interests, and to use regulations and enforcement when, and only when, it becomes clear that industry can’t be trusted to meet these goals on its own.

In particular, the government should very rarely regulate how something should be done, and focus instead on what needs to happen as the end result, and why. In the past, all automotive safety technologies were developed by vendors and deployed, sometimes for decades, before they were regulated. When they were regulated, it was more along the lines of “All cars should now have anti-lock brakes.” Only with the more mature technologies have the regulations had to go into detail on how to build them.

Worthwhile public goals include safety, of course, and the promotion of innovation. We want to encourage both competition and cooperation in the right places. We want to protect consumer rights and privacy. (The prior regulations proposed a mandatory sharing of incident data which is watered down greatly in these new regulations.)

I call this lightweight because others have called for a great deal more regulation. I don’t, however, view it as highly laissez-faire. Driving is already highly regulated, and the idea that regulators would need to write rules to prevent companies from doing things they have shown no evidence of doing seems odd to me, particularly in a fast-changing field where regulators (and even developers) admit they have limited knowledge of what the technology’s final form will actually be.

Stating the obvious

While I laud the reduction of detail in these regulations, it’s worth pointing out that many of the remaining sections are stripped to the point of mostly outlining “motherhood” requirements — requirements which are obvious and that every developer has known for some time. You don’t have to say that the vehicle should follow the vehicle code and not hit other cars; anybody who needs to be told that is not a robocar developer. The set of obvious goals belongs in a non-governmental advice document (which this in part declares itself to be, though it is of course governmental) rather than in something considered regulatory.

Overstating the obvious and discouraging the “black box.”

Sometimes a statement can be both obvious but also possibly wrong in the light of new technology. The document has many requirements that vendors document their thinking and processes which may be very difficult to do with systems built with machine learning. Machine learning sometimes produces a “black box” that works, but there is minimal knowledge as to how it works. It may be that such systems will outperform other systems, leaving us with the dilemma of choosing between a superior system we can’t document and understand, and an inferior one we can.

There is a new research area known as “explainable AI” which hopes to bridge this gap and make it possible to document and understand why machine learning systems operate as they do. This is promising research, but it may never fully succeed. In spite of this, EU regulations are already attempting to forbid unexplainable AI. This may cut off very productive avenues of development — we don’t yet know enough to be sure.

Some minor notes

The name

The new report pushes a new term — Automated Driving Systems. It seems every iteration comes up with a new name. The field is really starting to need a name people agree on, since nobody seems to much like driverless cars, self-driving cars, autonomous vehicles, automated vehicles, robocars or any of the others. This one is just as unwieldy, and its acronym is an English word and thus hard to search for.

The levels

The SAE levels continue to be used. I have been critical of the levels before, recently in this satire. It is wrong to try to understand robocars primarily through the role of humans in their operation, and wrong to suggest there is a progression of levels based on that.

The 12 safety elements

As noted, most of the sections simply advise obvious policies which everybody is already doing, and advise that teams document what they are doing.

1. System Safety

This section is modest, and describes fairly common existing practices for high reliability software systems. (Almost to the point that there is no real need for the government to point them out.)

2. Operational Design Domain

The idea of defining the situations where the car can do certain things is a much better approach than imagining levels of human involvement. I would even suggest it replace the levels, with the human seen simply as one of the tools used to operate outside certain domains. Still, I see minimal need for NHTSA to say this — everybody already knows that roads and their conditions are different and complex and need different classes of technology.

3. Object and Event Detection and Response, 4. Fallback, 5. Validation, 6. HMI

Again, this is fairly redundant. Vendors don’t need to be told that vehicles must obey the vehicle code and stay in their lane and not hit things. That’s already the law. They know that only with a fallback strategy can they approach the reliability needed.

7. Computer Security

While everything here is already on the minds of developers, I don’t fault the reminder here because traditional automakers have a history of having done security badly. The call for a central clearing house on attacks is good, though it should not necessarily be Auto-ISAC.

8. Occupant Protection

A great deal of the current FMVSS (Federal Motor Vehicle Safety Standards) are about this, and because many vehicles may use exemptions from FMVSS to get going, a reminder about this is in order.

10. Data Recording

The most interesting proposal in the prior document was a requirement for public sharing of incident and crash data so that all teams could learn from every problem any team encounters. This would speed up development and improve safety, but vendors don’t like the fact it removes a key competitive edge — their corpus of driving experience.

The new document calls for a standard data format, and makes general motherhood calls for storing data in a crash, something everybody already does.

The call for a standard is actually difficult. Every vehicle has a different sensor suite and its own tools to examine the sensor data. Trying to standardize that at a truly useful level is a serious task. I had expected this task to fall to outside testing companies, who would learn (possibly by reverse-engineering) the data formats of each car and try to put them in a standard format that was actually useful. I fear a standard agreed upon by major players (who don’t want to share their data) will be minimal and less useful.

State Roles

A large section of the document is about the bureaucratic distribution of roles between states and federal bodies. I will provide analysis of this later.

Conclusion

This document reflects a major change, almost a reversal, and largely a positive one. Going forward from here, I would encourage that the debate on regulation focus on

  • What public goods does the government have an interest in protecting?
  • Which ones are vendors showing they can’t be trusted to support voluntarily, both by present actions and past history?
  • How can innovation be encouraged and facilitated, and good communication made to the public about what’s going on?

One of the key public goods missing from this document is privacy protection. This is one of the areas where vendors don’t have a great past history.
Another one is civil rights protection — for example what powers police will want over cars — where the government has a bad history.

NTSB Tesla crash report and new NHTSA regulations to come

Tesla Motors autopilot (photo: Tesla)

The NTSB (National Transportation Safety Board) has released a preliminary report on the fatal Tesla crash with the full report expected later this week. The report is much less favourable to autopilots than their earlier evaluation.

(This is a giant news day for Robocars. Today NHTSA also released their new draft robocar regulations which appear to be much simpler than the earlier 116 page document that I was very critical of last year. It’s a busy day, so I will be posting a more detailed evaluation of the new regulations — and the proposed new robocar laws from the House — later in the week.)

The earlier NTSB report indicated that though the autopilot had its flaws, overall the system was working: the combined fleet, including both drivers who misused the autopilot and those who did not, was still safer than drivers with no autopilot at all. The new report makes it clear that this does not excuse the autopilot being so easy to abuse. (By abuse, I mean ignoring the warnings and treating it like a robocar, letting it drive you without actively monitoring the road, ready to take control.)


While the report mostly faults the truck driver for turning at the wrong time, it blames Tesla for not doing a good enough job to assure that the driver is not abusing the autopilot. Tesla makes you touch the wheel every so often, but NTSB notes that it is possible to touch the wheel without actually looking at the road. NTSB also is concerned that the autopilot can operate in this fashion even on roads it was not designed for. They note that Tesla has improved some of these things since the accident.

This means that “touch the wheel” systems will probably not be considered acceptable in the future, and there will have to be some means of assuring the driver is really paying attention. Some vendors have decided to put in cameras that watch the driver, in particular the driver’s eyes, to check for attention. After the Tesla accident, I proposed a system that tests driver attention from time to time and penalizes drivers who fail the test, which could do the job without adding new hardware.

It also seems that autopilot cars will need to have maps of what roads they work on and which they don’t, and limit features based on the type of road you’re on.
