Don't watch TV while safety driving


The Tempe police released a detailed report on their investigation of the Uber fatality. I am on the road and have not had time to read it all, but the big point, reported in many press stories, was that the safety driver was, according to logs from her phone accounts, streaming the show "The Voice" via Hulu on her phone just shortly before the incident. This is at odds with earlier statements in the NTSB report that she had been looking at the status console of the Uber self-drive system and had not been using her phones. The NTSB report also said that Uber asked its safety drivers to observe the console and make notes on things seen there. It appears the safety driver lied, and may have tried to implicate Uber in doing so.

Obviously, attempting to watch a TV show while you are monitoring a car is unacceptable and presumably negligent behaviour. More interesting is what this means for Uber and other companies.

The first question: did Uber still instruct safety drivers to look at the monitors and make note of problems? That is a normal instruction for a software operator when there are two crew in the car, as most companies have. At first, we presumed that perhaps Uber had forgotten to alter this instruction when it went from 2 crew to 1. Perhaps the safety driver just used that as an excuse for looking down, since she felt she could not admit to what she was actually doing -- watching TV. (She probably didn't realize police would get logs from Hulu.)

If Uber still gave that instruction, it's an error on their part, but it now seems to have played no role in this incident. That's positive legal news for Uber.

It is true that with two people in the car, it's highly unlikely the safety driver behind the wheel would be watching a TV show. It's also true that attention monitoring of the safety driver would have made it much harder to pull a stunt like that. Not all teams have attention monitoring, though after this incident I believe most, including Uber, are putting it in. It might be argued that if Uber did require drivers to check the monitors, this somehow encouraged the safety driver's negligent decision to watch TV, but that's a stretch. I think any reasonable person is going to know this is not a job where you do that.

There may be some question as to whether a person with such bad judgement should have been cleared to be a safety driver. Uber may face some scrutiny for that bad choice. They may also face scrutiny if their training and job evaluation process for safety drivers was clearly negligent. On the other hand, human employees are human, and if there's not a pattern, it is less likely to create legal trouble for Uber.

From the standpoint of the robocar industry, this makes the incident no less tragic, but less informative about robocar accidents. Accidents are caused every day because people allow themselves ridiculously unsafe distractions on their phones. This one is still special, but less so than we thought. The issue of whether today's limited systems (like the Tesla) generate too much driver complacency is still there, but this was somebody being paid not to be complacent. The lessons we already knew -- have 2 drivers, have driver attention monitoring -- are still the same.

"Disabled the emergency braking."

A number of press stories on the event have said that Uber "disabled" the emergency braking, and that this also played a role in the fatality. That's partly true, but the vocabulary is very misleading. The reality appears to be that Uber doesn't have a working emergency braking capability in its system, and as such it is not enabled. That's different from having one and disabling it, which sounds much more like a wrongful act.

Uber's system, like all such systems, sometimes decides suddenly that there is an obstacle in front of the car for which it should brake, when that obstacle is not really there. This is called a "false positive" or "ghost." When this happens well in advance, it's OK to have the car apply the brakes in a modest way, and then release them when it becomes clear it's a ghost. However, if the ghost is so close that it would require full-hard braking, this creates a problem. If a car frequently does full-hard braking for ghosts, it is not only jarring, it can be dangerous -- both for occupants of the car and for cars following a little too closely behind, which sadly is the reality of driving.

As such, an emergency braking decision algorithm which hard-brakes for ghosts is not a working system. You can't turn it on safely, and so you don't. Which is different from disabling it. While the Uber software did decide 2 seconds out that there was an obstacle requiring a hard brake, it makes that decision out of the blue too often to be trusted with it. The decision is left to the safety driver -- who should not be watching TV.
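To make the trade-off concrete, here is a minimal sketch of such a decision policy in Python. The thresholds and names are invented for illustration -- this is not Uber's actual logic:

```python
# Sketch of a braking decision policy. All thresholds and names are
# invented for illustration; this is not Uber's actual code.

HIGH_CONFIDENCE = 0.9    # hypothetical: ghosts are rare above this score
MODERATE_WINDOW_S = 4.0  # hypothetical: enough time to brake gently and reassess

def braking_command(confidence: float, time_to_collision_s: float) -> str:
    """Pick a response to a perceived obstacle.

    confidence: 0.0-1.0 score from the perception system.
    time_to_collision_s: predicted seconds until impact at current speed.
    """
    if time_to_collision_s > MODERATE_WINDOW_S:
        # Plenty of time: brake modestly and keep re-classifying.
        # If the obstacle turns out to be a ghost, release the brakes.
        return "moderate_brake"
    if confidence >= HIGH_CONFIDENCE:
        # Close and near-certain: full-hard braking is justified.
        return "emergency_brake"
    # Close but uncertain: hard-braking for every ghost is itself
    # dangerous, so escalate to the human (ideally with moderate
    # braking and an audible alert, as argued below).
    return "alert_safety_driver"
```

The hard part is not writing a policy like this; it's making the perception system's confidence scores trustworthy enough that the emergency branch can be enabled at all.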

That does not mean Uber could not have done this much better. The car should still have done moderate braking, which would reduce the severity of any real accident and also wake up an inattentive safety driver. An audible alert should also have sounded. Earlier, I speculated that if the driver had been looking at the console, this sort of false positive incident would very likely have shown up there, so it was odd she did not see it -- but it turns out she was not looking there.

The Volvo also has its own emergency braking system. That system was indeed disabled -- it is normal for any ADAS functions built into the car to be disabled when it is used as a prototype robocar. You are building something better, and you can't have the two competing. The Volvo system does not brake too often for ghosts, but that's because it also fails to brake for real obstacles far too often to serve a robocar. Any ADAS system will be tuned that way, because the driver is still responsible for driving. Teslas have notoriously plowed into road barriers and trucks due to this ADAS style of tuning. It's why a real robocar is much harder than the Tesla Autopilot.

Other news

I've been on the road, so I have not reported on it, but the general news has been quite impressive. In particular, Waymo announced an order of 62,000 Chrysler Pacifica minivans of the type they use in their Phoenix area tests. They are going beyond a pilot project to real deployment, and soon. Nobody else is close. This will add to around 20,000 Jaguar electric vehicles, presumably aimed at a more luxury ride -- though I actually think the minivan, with its big doors, large interior space and high ride, may well be more pleasant for most trips. The electric Jaguar will be more efficient.

Comments

A rough calculation using a taxi/resident ratio of 1/100 gives Phoenix a total fleet of 16,000 taxis. This suggests that either they are planning to partly replace private ownership of vehicles in Phoenix, or they are ready to move to other cities.
The coverage of the vehicle purchase didn't give any timelines that I saw, so it's hard to know when they are expecting delivery of all these vehicles.
Do you have any more info that you are at liberty to divulge?

It is broadly expected they will go to other cities, but it is unknown how many. Oddly, California has a flaw -- it does not currently allow ride sharing (group riding like UberPool) or charging for rides, so they can't test business models or other important things there.

I presume the plan is to be cheap, though -- half Uber's price -- which means you want a fleet larger than the Uber/Lyft/taxi fleet to serve a city. Perhaps much larger. New York has perhaps 100K taxi/black car/Uber/Lyft vehicles, I would guess, maybe more -- but just 13K medallion taxis, due to regulation.
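To make the arithmetic explicit (with round, assumed population figures):

```python
# Back-of-envelope fleet estimates. Populations are round numbers
# assumed for illustration.
PHOENIX_POP = 1_600_000  # city proper, roughly
NYC_POP = 8_400_000      # roughly

ratio = 1 / 100  # the commenter's assumed taxis per resident
print(f"Phoenix fleet at 1/100: {PHOENIX_POP * ratio:,.0f}")  # 16,000

# My guess for New York's for-hire fleet implies a similar ratio:
NYC_FOR_HIRE = 100_000   # taxi/black car/Uber/Lyft, a guess
print(f"NYC for-hire ratio: 1/{NYC_POP / NYC_FOR_HIRE:.0f}")  # 1/84
```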

It seems like Uber could randomly audit the internal camera we know they have on the driver, checking some of that footage occasionally (perhaps with a rough ML classifier to help) to see if the drivers are paying attention. If footage shows a driver watching TV, there should be a zero tolerance policy -- fired. It does seem that either Uber was extremely negligent for having their driver watch the telemetry console instead of doing the safety driving, OR the driver was extremely negligent for watching TV on the job. Yes, Uber could have done better, but that's what they're presumably researching; dwelling on that is beside the point. What we need to find out is whether Uber had bad safety driver policies or a bad safety driver. If the latter, throw her under the bus ASAP.
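Something like this sketch could do the sampling -- the classifier is just a stub standing in for a trained gaze/head-pose model, and all the names here are invented:

```python
# Sketch of the random-audit idea: pull random frames from interior
# camera footage and flag suspicious ones for human review.
import random
import cv2  # pip install opencv-python

def looks_inattentive(frame) -> bool:
    # Placeholder: flag everything sampled, deferring to the human
    # reviewer. Replace with a trained gaze/head-pose classifier.
    return True

def audit_footage(path: str, samples: int = 20) -> list:
    cap = cv2.VideoCapture(path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    flagged = []
    for idx in sorted(random.sample(range(total), min(samples, total))):
        cap.set(cv2.CAP_PROP_POS_FRAMES, idx)
        ok, frame = cap.read()
        if ok and looks_inattentive(frame):
            flagged.append(idx)  # queue these frames for a human to check
    cap.release()
    return flagged
```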

If Uber does continue with testing, they should definitely talk to the people at Lytx.com.

Uber clearly was not doing that. Others do. I bet Uber will do it when and if they resume, and I bet anybody who isn't doing driver monitoring is working on getting it in soon. At the same time, don't forget that those other than Uber, particularly Waymo, have an exemplary safety record with their current procedures.

Perhaps the safety driver should be prohibited from having a phone. It's a firing offense for a driver on the MBTA (Boston subway etc.) to have a phone in their possession.

The safety driver had her own phone and an Uber-supplied phone. They certainly need the latter; it was used to call 911 and Uber HQ after the incident. You want that portable, in case the accident is serious enough that the driver must leave the car.

But yes, they don't need their personal phone until break time. It's less of a problem with two safety drivers.

I looked back at the NTSB report, and it only said that the driver reported looking at the status console and reported not using her phone. I would have expected them to mention that an investigation of the validity of those claims was still ongoing, but I guess not. (Is it possible the police didn't share that with them? I don't know.)

Yes, perhaps they are not working closely together. And Uber itself may be working less closely with them too. There is also the leak reported by "The Information" saying that the car classified her as a false positive. When I asked the reporter why the leak differed so much from the NTSB report, he indicated continued confidence in the leak, so there may be more contradictions.

There may not be any meaningful difference between "classified her as a false positive" and "classified her as a positive, all of which are treated as false positives, because emergency braking is disabled/unavailable".

Yes, I thought of that, but it's a stretch on the vocabulary. If everything is a false positive, then nothing is. A more common use of the term would be when you identify things like birds, blowing trash, or exhaust smoke as something you don't need to stop for. A true false positive, of course, is something that you actually (incorrectly) treat as an obstacle and try to avoid.

Stretch on the vocabulary, or the result of the telephone game if the source didn't have first-hand knowledge. Hopefully we'll have more accurate information when the final report comes out.

Another thing I'm interested to see is whether it's true that "Uber doesn't have a working emergency braking capability in their system." Seems unlikely, as it's fairly trivial to slam on the brakes (which, by definition, would be an emergency braking maneuver).

Although, one thing I've wondered is whether or not ABS is available when the Uber AI tells the car to "slam on the brakes." If it isn't, maybe *that* is why they disabled emergency braking maneuvers.

Generally it would be. Most of the designs I have seen have the computer pretend to be the brake pedal potentiometer, and the car's own systems do the rest after that. While people like George Hotz famously like to reverse engineer the CANBUS messages to the brake system, this is risky, as car companies feel free to change those protocols and refuse to document or support them.
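For the CANBUS approach, the code itself is simple -- the hard, risky part is knowing the proprietary message format. A sketch using python-can, with a completely made-up message ID and payload layout:

```python
# Sketch of sending a reverse-engineered brake request on the CAN bus.
# The arbitration ID and payload layout are made up; real formats are
# proprietary, undocumented, and can change between model years --
# which is exactly the risk described above.
import can  # pip install python-can

BRAKE_MSG_ID = 0x1A0  # hypothetical; varies by make/model/year

def send_brake_request(bus: can.Bus, pressure_pct: int) -> None:
    """Request braking as a percentage of full pressure (hypothetical format)."""
    pressure_pct = max(0, min(100, pressure_pct))
    msg = can.Message(arbitration_id=BRAKE_MSG_ID,
                      data=[pressure_pct, 0, 0, 0],
                      is_extended_id=False)
    bus.send(msg)

# bus = can.Bus(interface="socketcan", channel="can0")
# send_brake_request(bus, 60)  # ask for ~60% braking
```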

As far as "bad judgement" being a pattern goes, the driver was a convicted felon (convicted of a crime of moral turpitude -- armed robbery). I'd say that alone makes the hiring for a safety position negligent, but it's not as egregious as it would have been had the driver been using the console as instructed.

As for the driver, she might soon be facing a new charge of a felony involving moral turpitude -- manslaughter.

The safety driver performed her job incorrectly, like many other employees do.

The manufacturer and operator of the device engineered it so that the human was a safety-critical component of the system, without using any of the techniques that other industries like rail, aviation, or factory automation use to overcome the well-known failures of humans in safety-critical roles.

We must not get distracted by this one safety driver's behavior. She is the "one bad apple" that indicates that the system is corrupt: that at least one team of self-driving car engineers took unacceptable risks with public safety, and that our government did not protect us from this corruption.

This is still extraordinarily bad news for the self-driving car industry. I still hope it is the death warrant for Uber's project, with possible criminal charges if senior engineers created an environment of criminal negligence.

-jeff

Reports are that they hope to restart testing later this summer, so there may be no death penalty for their project.

I do agree Uber should take grief for how they managed and selected safety drivers. They settled the civil cases, so it is unclear what the state will or can do; we'll see.

They did not do what aviation does, but I don't feel that's the right standard. For better or worse, we have a pretty low standard for what sort of person can operate a vehicle on the roads, even carrying passengers. It's a very old and established system, and it does result in deaths, so we may talk about changing it, but we just don't put much of a bar on getting a licence to drive.

Sorry, but this article, and most of the comments on it, are way off base. Humans are terrible at paying attention! Why is everyone ignoring the fact that over a million people die every year in car accidents?? No human can ever pay full attention for long stretches of time where nothing is happening. About 6000 pedestrians were killed last year - by human drivers! Autonomous cars are the only hope for reducing the carnage we are currently experiencing.

90% of those fatalities are in low and middle income countries that aren't going to get autonomous cars any time soon and would greatly benefit from many other more affordable improvements.

And lots of humans pay full attention for long stretches of time where "nothing" is happening. Truck drivers in the US have a little over 1 fatality per *100 million* miles. And they're driving vehicles that are much more dangerous than that Uber car.

Uber tried to cut corners in many ways. That's why that fatality occurred.

Ultimately, autonomous vehicles likely will greatly reduce fatalities. However, this will happen whether Uber is involved or not. Moreover, even if you are building something that will save lives, that isn't an excuse for being reckless and putting lives at risk while building it.
