Uber's quest for a "smoother ride" brought them down
A detailed new report on Uber by Business Insider contains a variety of leaked quotes from insiders confirming much of what we had heard or feared about Uber's technical failure, with a few important new details.
I will reiterate that the Uber insiders who sought to blame their terrible safety driver, who was watching a TV show instead of the road, are partly right. Since all prototypes will regularly make mistakes that could lead to dangerous accidents, the primary fault lies in not having a good safety driver protocol and in not having a good safety driver. That said, the worse your car's software is, the more chances there are that a safety driver failure will lead to doom, and so the secondary causes, the technical ones, are of interest. But even with much blame on the safety driver herself, the decisions that put a negligent person in that chair, on her own, are the fault of Uber's protocols and management. The attempt described in the article to put blame on the victim is not valid; it is simply the result of panic and an inability to accept blame oneself.
The most interesting new detail is that the team had been directed to produce a car with a smoother ride. That means fewer jarring mistakes such as sudden swerves, hard brake jabs, and even mild brake taps. That is a worthwhile long term goal -- customers will not ride in a car that is not comfortable to ride in -- but it is a goal to pursue only once you have made sure it won't compromise safety in a significant way. Every driver (human and software) makes this compromise. If you hit the brakes hard any time you were uncertain about anything on the road, you would not be a practical driver. You have to find a way to manage your uncertainty, and know the difference between ordinary uncertainty (is that person on the curb about to jump into the road?) and more meaningful uncertainty (is that set of LIDAR points roughly 5 feet high a swirling ball of leaves or a pedestrian?) In the first case, since you see pedestrians at the curb all the time, you need to think it's quite probable they will step out before you will brake for them. For the cloud of leaves, you will brake more often until you have a way to be sure you know the difference between (rare) clouds of leaves and (more common) pedestrians.
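One way to picture this kind of uncertainty management is as a class-dependent threshold: how sure you must be that something will enter your path before you slow down depends on what you think it is. The sketch below is purely illustrative -- the class names and threshold numbers are my assumptions, not anything from Uber's actual system.

```python
# Illustrative sketch of class-dependent braking thresholds.
# All class names and numbers are assumptions for illustration only.

# A pedestrian standing at the curb is extremely common, so we demand a
# high probability of a step-out before slowing. An ambiguous LIDAR blob
# (leaves vs. pedestrian) is poorly understood, so we slow at much lower
# certainty until we can tell the difference.
BRAKE_THRESHOLD = {
    "pedestrian_at_curb": 0.7,   # common sight; brake only if step-out looks likely
    "ambiguous_blob": 0.1,       # rare and uncertain; brake early
}

def should_slow(obstacle_class: str, p_enters_path: float) -> bool:
    """Slow down when the estimated probability that the obstacle
    enters our path exceeds the threshold for its (guessed) class."""
    return p_enters_path >= BRAKE_THRESHOLD.get(obstacle_class, 0.3)
```

Tuning a car for a "smoother ride" amounts to raising these thresholds across the board -- fewer false brakes, but also less margin when the classifier is wrong.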
So you can't brake or slow for everything, and you must decide what to brake for. You can make your ride smoother by just turning down the dial and braking for less.
The report says that with a big demo coming up with Uber's new CEO, Dara Khosrowshahi, the order came down to have a smoother ride. To not have more than one incident per demo of the car doing something stupid, like braking for a ghost, or stopping because it doesn't know what to do. Early cars do "stupid" things fairly often, and when people see a demo, those things convince the viewer that the car is not very good yet. But most people will forgive a single one (though they probably should not.) They won't forgive 3.
The demo for DK was crucial. The whole existence of the project was in the balance. If he took a ride and came away feeling the project was in bad shape and foundering, he might well kill it, or at best sell it. If he came away impressed, he would support and boost it. Big stakes. Leading to a big mistake.
With good safety drivers, you could get away with tuning the car this way. The safety drivers would be ready for the real situations only humans understand. They would know that it's really a pedestrian in the street. The CEO getting the demo (who would have the team leaders as safety drivers) would never notice how they saved the day, unless they had to intervene too many times. But safety drivers are told to intervene at any sign of risk, so it's normal.
So they dialed it down. In particular, they decided to not allow the vehicle to do hard "emergency" braking. That would be left to the safety driver. (In reality, the safety driver would not do hard emergency braking very often, because humans understand the road better than computers, and understand what's going on further out and sooner. So in the case at hand, a pedestrian in the middle of the road, a human driver would notice her immediately, and notice quickly that the car is not even slowing. The safety driver would then apply the brakes manually sooner than the system might, and thus brake more gently or even drive around the situation.) Strictly, a robocar should not need to do what you would consider "emergency" braking -- if it detects an obstacle it might hit, it should simply brake. Sharp braking should only occur in the event of a perception failure, or something appearing out of nowhere, like a sudden cut-in or a pedestrian stepping off a curb.
A car that does hard emergency braking with any frequency is not just uncomfortable to ride in, it's actually a bit dangerous, since other drivers follow too closely all the time. (You could, and possibly should, program a car not to do hard emergency braking unless it is very sure it's needed when there is somebody on your tail -- which is to say, your threshold of how sure you have to be might depend on whether somebody is close behind you. Brake for ghosts when nobody is behind; brake less hard, and only for high-confidence obstacles, when braking is very likely to cause a rear-ending. This is something robots should actually be able to do much better than people can.)
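The tailgater-dependent policy described above can be sketched in a few lines. Everything here -- the function names, the gap distances, the confidence numbers -- is a hypothetical illustration of the idea, not any real system's logic.

```python
# Hypothetical sketch: the confidence required before hard braking rises
# when someone is tailgating. All names and thresholds are illustrative.
from typing import Optional

def required_confidence(follower_gap_m: Optional[float]) -> float:
    """Minimum obstacle confidence needed to justify hard braking."""
    if follower_gap_m is None:      # nobody behind: safe to brake even for "ghosts"
        return 0.2
    if follower_gap_m < 10.0:       # close tailgater: demand near-certainty
        return 0.9
    return 0.5                      # someone behind, but at a safer distance

def plan_braking(obstacle_confidence: float,
                 follower_gap_m: Optional[float]) -> str:
    """Pick a braking action given obstacle confidence and tail traffic."""
    if obstacle_confidence >= required_confidence(follower_gap_m):
        return "hard_brake"
    if obstacle_confidence >= 0.2:
        return "gentle_brake"       # slow down while gathering more sensor data
    return "continue"
```

With nobody behind, a low-confidence "ghost" triggers a hard brake; with a tailgater at 5 meters, the same detection only produces gentle slowing until confidence rises.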
Uber's system was emergency braking too much. It didn't really work safely. So it could not be turned on, and definitely could not be turned on for a CEO demo.
Another snippet of interest in the article is a statement that the system was having problems tracking obstacles. When the sensors and perception system of a car detect something important on the road -- commonly called an obstacle, in that you must not hit it -- one of their most important jobs is to track the motion of that obstacle. What direction is it moving, and how fast? Is it likely to change direction, and how?
At the most basic level, you just identify how it is moving and presume it will continue on that course with minor changes. At a higher level you try to identify what it is to improve the "cone" of possible things it might do. For example, cars don't suddenly go sideways, but pedestrians can. Cars can go 60mph, pedestrians can't.
For everything that's out there, you want to know how it's going to move. Most importantly, you want to know how likely it is that it will move to intersect your path! The worst case is a direct collision course: the obstacle is going to the same place you are, at the same time.
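The basic-level prediction described above -- assume the obstacle continues on its current course and check whether that course intersects yours -- can be sketched with simple constant-velocity extrapolation. This is a minimal illustration of the geometry, with an assumed danger radius; real trackers use far richer motion models.

```python
# Minimal sketch of constant-velocity collision-course prediction.
# Positions/velocities are 2D (x, y) tuples; danger_radius is an
# assumed illustrative value, not from any real system.

def time_of_closest_approach(p_rel, v_rel):
    """Time (s) at which two constant-velocity objects are closest.
    p_rel, v_rel: obstacle position/velocity relative to the ego car."""
    vv = v_rel[0] ** 2 + v_rel[1] ** 2
    if vv == 0.0:                   # same velocity: the gap never changes
        return 0.0
    t = -(p_rel[0] * v_rel[0] + p_rel[1] * v_rel[1]) / vv
    return max(t, 0.0)              # closest approach in the past -> use now

def on_collision_course(p_ego, v_ego, p_obs, v_obs, danger_radius=2.0):
    """True if straight-line extrapolation brings the obstacle within
    danger_radius meters of the ego car."""
    p_rel = (p_obs[0] - p_ego[0], p_obs[1] - p_ego[1])
    v_rel = (v_obs[0] - v_ego[0], v_obs[1] - v_ego[1])
    t = time_of_closest_approach(p_rel, v_rel)
    cx = p_rel[0] + v_rel[0] * t
    cy = p_rel[1] + v_rel[1] * t
    return (cx * cx + cy * cy) ** 0.5 < danger_radius
```

For example, a car heading east at 10 m/s and a pedestrian 50 m ahead crossing toward the lane at 1 m/s meet at the same point five seconds out -- a collision course even though neither knows anything about what the other "is."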
As a human, you do this all the time, almost unconsciously. When you see that impending collision course, you brake or swerve. But when you see even a "maybe" you sometimes slow until you know more (particularly with pedestrians and bicycles.) On the other hand, you often do nothing, following the implicit contract of the road: that the car ahead of you will not suddenly cut you off, and that people on cross streets won't run red lights and stop signs, even though physically nothing prevents that.
Uber's victim was just walking across the road. (There is some speculation, not yet confirmed, that she possibly paused in the Uber's lane, frozen "deer in the headlights" style, rather than continuing on.) Uber's software did not figure out quite what she was at any time. That's bad, but actually not that uncommon when obstacles are at a long distance. What it should have figured out in any situation, even knowing nothing about her, was that she was on a path across the road, right into their lane, at the worst time.
It failed. And apparently it is still sub-par in this area, according to sources.
One safety driver
It is confirmed in this article that the drop from two safety drivers to one was motivated by the big demo -- the desire to get in as many miles of testing as possible before it.
Generally, this has not made sense. Safety drivers, even though they should have some decent training, are not super-skilled rare individuals. You can hire them anywhere. Almost always, the limiting factor on how much testing a team can do is how many test vehicles they have, not how many safety drivers they have. Building test vehicles requires rare people; driving does not. So it should not be the case that you can get more testing done by dropping to one driver. It only means you get to do it cheaper. Cheaper is not on the goal list for teams spending hundreds of millions of dollars to get their products safe, faster.
The one thing it can do is get you more testing immediately, like in the next 2 weeks. If you are under-using your vehicles (though it's not clear why a rich team would do that) you can scale up testing by cutting to one driver until you have the time to hire more, since hiring and training is not instant.
This big demo may have been the reason for that extreme hurry.
Note that a very advanced team, like Waymo, will also make the cut to one driver, but that's because they are on the way to zero safety drivers, and one is reasonable if you're almost at zero. This does not apply to anybody but Waymo.
The scary part of the report is the insiders' claim that, in spite of Uber's very nice report about how good they are now, many of the problems are still present and not on the way to being fixed. That, we'll have to see.