Uber's quest for a "smoother ride" brought them down

[Photo: One of Uber's Volvos, if they ever get back on the road]

A detailed new report on Uber by Business Insider contains a variety of leaked quotes from insiders confirming much of what we had heard or feared about Uber's technical failure, with a few important new details.

I will reiterate that the Uber insiders who sought to blame the incident on their terrible safety driver, who was watching a TV show instead of the road, are partly right. Since all prototypes will regularly make mistakes that could lead to dangerous accidents, the primary fault lies in not having a good safety driver protocol and not having a good safety driver. That said, the worse your car's software is, the more chances there are for a safety driver failure to lead to doom, and so the secondary causes, the technical ones, are of interest. But even with much of the blame on the safety driver herself, the decisions which put a negligent person in that chair, on her own, are the fault of Uber's protocols and management. The attempt described in the article to put blame on the victim is not valid; it is simply the result of panic and an inability to accept blame oneself.

Smooth Ride

The most interesting new detail is that the team had been directed to produce a car with a smoother ride. That means fewer jarring mistakes such as hard braking, swerves, and even mild brake jabs. That is a worthwhile long-term goal -- customers will not ride in a car that is uncomfortable -- but it is a goal to pursue only once you have made sure it won't compromise safety in a significant way. Every driver (human and software) makes this compromise. If you hit the brakes hard any time you were uncertain about anything on the road, you would not be a practical driver. You have to find a way to manage your uncertainty, and know the difference between ordinary uncertainty (is that person on the curb about to jump into the road?) and more meaningful uncertainty (is that set of LIDAR points roughly 5 feet high a swirling ball of leaves or a pedestrian?) In the first case, since you see pedestrians at the curb all the time, you need to think it quite probable they will step out before you will brake for them. For the cloud of leaves, you will brake more often until you have a way to reliably tell the difference between (rare) clouds of leaves and (more common) pedestrians.

So you can't brake or slow for everything, and you must decide what to brake for. You can make your ride smoother by just turning down the dial and braking for less.
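To make the tradeoff concrete, here is a minimal sketch of that "dial" in Python. Everything in it -- the names, the threshold value, the confidence scores -- is invented for illustration; it is not anyone's actual code, just the shape of the compromise.

```python
# A toy model of the braking "dial": one tunable threshold trading
# ride smoothness against caution. All names and numbers are invented.

BRAKE_CONFIDENCE_THRESHOLD = 0.7   # the "dial": raise it for a smoother ride

def should_brake(obstacle_confidence: float, on_collision_path: bool) -> bool:
    """Brake only for obstacles we are confident about and that appear
    to cross our path. Turning the dial up means fewer brake jabs for
    ghosts -- and more missed real obstacles."""
    return on_collision_path and obstacle_confidence >= BRAKE_CONFIDENCE_THRESHOLD

# A swirling cloud of leaves might score 0.4; a cleanly tracked
# pedestrian 0.95. At 0.7 the car ignores the leaves; at 0.3 it
# brakes for both, and the ride gets jerky.
print(should_brake(0.4, True))    # False -- smoother ride, riskier if wrong
print(should_brake(0.95, True))   # True
```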

The report says that with a big demo coming up for Uber's new CEO, Dara Khosrowshahi, the order came down to produce a smoother ride -- to have no more than one incident per demo of the car doing something stupid, like braking for a ghost or stopping because it doesn't know what to do. Early cars do "stupid" things fairly often, and when people see a demo, those things convince the viewer that the car is not very good yet. But most people will forgive a single one (though they probably should not.) They won't forgive three.

The demo for DK was crucial. The whole existence of the project was in the balance. If he took a ride and came away feeling the project was in bad shape and foundering, he might well kill it, or at best sell it. If he came away impressed, he would support and boost it. Big stakes. Leading to a big mistake.

With good safety drivers, you could get away with tuning the car this way. The safety drivers would be ready for the real situations only humans understand. They would know that it's really a pedestrian in the street. The CEO getting the demo (who would have the team leaders as safety drivers) would never notice how they saved the day, unless they had to intervene too many times. But safety drivers are told to intervene at any sign of risk, so it's normal.

So they dialed it down. In particular, they decided to not allow the vehicle to do hard "emergency" braking. That would be left to the safety driver. (In reality, the safety driver would not do hard emergency braking very often, because humans understand the road better than computers do, and understand what's going on further out and sooner. So in the case at hand, a pedestrian in the middle of the road, a human driver would notice her immediately, and notice quickly that the car was not even slowing. The safety driver would then apply the brakes manually sooner than the system might, and thus brake more gently or even drive around the situation.) Strictly, a robocar should rarely need to do what you would consider "emergency" braking -- if it detects an obstacle it might hit early enough, ordinary braking will do. Sharp braking should only occur in the event of a perception failure, or something appearing out of nowhere, like a sudden cut-in or a pedestrian stepping off a curb.

A car that does hard emergency braking with any frequency is not just an uncomfortable car to ride in, it's actually a bit dangerous, since other drivers follow too closely all the time. (You could, and possibly should, program a car not to do hard emergency braking unless it is very sure it's needed when there is somebody on your tail -- which is to say, your threshold of how sure you have to be might depend on whether somebody is close behind you. Brake for ghosts when nobody is behind; brake less hard, and only for high-confidence obstacles, when braking is very likely to cause a rear-ending. This is something robots actually should be able to do much better than people can.)
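Continuing the toy model above, a sketch of that context-dependent threshold might look like this (again, all names and numbers are invented for illustration):

```python
from typing import Optional

# Sketch: how sure we must be before hard braking, as a function of
# whether someone is tailgating us. Numbers are invented.

def hard_brake_threshold(follower_gap_m: Optional[float]) -> float:
    """Return the obstacle confidence required for hard braking.

    follower_gap_m: distance to the vehicle behind, or None if clear.
    """
    if follower_gap_m is None or follower_gap_m > 30.0:
        return 0.3   # nobody close behind: brake even for likely ghosts
    if follower_gap_m > 10.0:
        return 0.6   # someone behind: be somewhat more sure
    return 0.9       # tailgater: hard braking risks a rear-ending,
                     # so reserve it for high-confidence obstacles

def should_hard_brake(confidence: float, follower_gap_m: Optional[float]) -> bool:
    return confidence >= hard_brake_threshold(follower_gap_m)

print(should_hard_brake(0.4, None))   # True: open road, brake for the ghost
print(should_hard_brake(0.4, 8.0))    # False: tailgater, hold off
```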

Uber's system was emergency braking too much. It didn't really work safely, so it could not be turned on, and it definitely could not be turned on for a CEO demo.

Tracking targets

Another snippet of interest in the article is the statement that the system was having problems tracking obstacles. When the sensors and perception system of a car detect something important on the road -- commonly called an obstacle, in that you must not hit it -- one of their most important jobs is to track the motion of that obstacle. What direction is it moving, and how fast? Is it likely to change direction, and how?

At the most basic level, you just identify how it is moving and presume it will continue on that course with minor changes. At a higher level, you try to identify what it is, to narrow the "cone" of possible things it might do. For example, cars don't suddenly go sideways, but pedestrians can. Cars can go 60 mph; pedestrians can't.

For everything that's out there, you want to know how it's going to move. Most importantly, you want to know how likely it is to move to intersect your path! The worst case is a direct collision course: the obstacle is headed to the same place you are, at the same time.
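At the basic constant-velocity level described above, the collision-course check reduces to projecting both paths forward and measuring the closest approach. Here is a toy sketch of that idea (invented names and numbers; real trackers use filtered state estimates and class-dependent motion models):

```python
# Toy constant-velocity collision check: project our position and the
# obstacle's forward in time and find the closest approach.

def min_future_gap(ego_pos, ego_vel, obs_pos, obs_vel,
                   horizon_s=5.0, step_s=0.1):
    """Smallest predicted distance between us and the obstacle over
    the horizon, assuming both hold course and speed."""
    min_gap = float("inf")
    steps = int(horizon_s / step_s) + 1
    for i in range(steps):
        t = i * step_s
        ex = ego_pos[0] + ego_vel[0] * t
        ey = ego_pos[1] + ego_vel[1] * t
        ox = obs_pos[0] + obs_vel[0] * t
        oy = obs_pos[1] + obs_vel[1] * t
        min_gap = min(min_gap, ((ex - ox) ** 2 + (ey - oy) ** 2) ** 0.5)
    return min_gap

# A pedestrian 40 m ahead and 3 m to the side, crossing at 1.4 m/s,
# while we drive straight at 17 m/s (~38 mph): the paths intersect.
gap = min_future_gap((0, 0), (17, 0), (40, -3), (0, 1.4))
print(f"closest approach: {gap:.1f} m")   # well under a car width -> danger
```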

As a human, you do this all the time, almost unconsciously. When you see that impending collision course, you brake or swerve. But when you see even a "maybe," you sometimes slow until you know more (particularly with pedestrians and bicycles.) On the other hand, you often do nothing, following the implicit contract of the road: that the car ahead of you will not suddenly cut you off, and that people on cross streets won't run red lights and stop signs, even though physically nothing prevents that.

Uber's victim was just walking across the road. (There is some speculation, not yet confirmed, that she may have paused in the Uber's lane, frozen "deer in the headlights" style, rather than continuing on.) Uber's software never quite figured out what she was. That's bad, but actually not that uncommon when obstacles are at a long distance. What it should have figured out in any case, even knowing nothing about her, was that she was on a path across the road, right into their lane at the worst time.

It failed. And apparently it is still sub-par in this area, according to sources.

One safety driver

It is confirmed in this article that the drop from two safety drivers to one was motivated by the big demo, and the desire to get in as many miles of testing as possible before it.

Generally, this has not made sense. Safety drivers, even though they should have some decent training, are not super-skilled, rare individuals. You can hire them anywhere. Almost always, the limiting factor on testing for a team is how many test vehicles it has, not how many safety drivers it has. Building test vehicles requires rare people; driving does not. So it should not be the case that you can get more testing done by dropping to one driver. It only means you get to do it cheaper. And cheaper is not on the goal list for teams spending hundreds of millions of dollars to get their products safe, faster.

The one thing it can do is get you more testing immediately, as in the next two weeks. If you are under-using your vehicles (though it's not clear why a well-funded team would do that), you can scale up testing by cutting to one driver until you have time to hire more, since hiring and training are not instant.

This big demo may have been the reason for that extreme hurry.

Note that a very advanced team, like Waymo, will also make the cut to one driver, but that's because it is on the way to zero safety drivers, and one is reasonable if you're almost at zero. This does not apply to anybody but Waymo.

Legacy

The scary part of the report is that the insiders claim that, in spite of Uber's very nice report about how good they are now, many of the problems are still present, and not on the way to being fixed. That, we'll have to see.

Comments

Confirmed, finally, that they disabled emergency braking, and that this was a but-for cause of a woman's death?

I remain shocked that there haven't been criminal charges against Uber for this action.

While I have written that I did not think the word "disabled" was correct, these leaks move my position a little. The word is still not accurate; it is not as if there is a switch to "turn emergency braking on or off." It may or may not still be true that they simply did not have usable emergency braking, in which case nobody would "turn it on" because it doesn't work and is unsafe. However, it becomes more plausible that we could learn they did have a usable level of emergency braking, which was dialed down because it made the demos worse -- which would be closer to the idea of disabling it. But I still doubt it is that binary.

Everything points to there being an override, where if one part of the car's software tells the car to apply the brakes harder than a certain amount, another part of the software overrides it so that the car doesn't do it.
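In code terms, the override being described would be little more than a clamp on the commanded deceleration. A guess at the structure, for illustration only -- the cap value and names are invented, and this is not Uber's actual code:

```python
# Sketch of the suspected override: a cap on commanded deceleration.
# The planner may request hard braking; a downstream limiter refuses
# to pass anything beyond the cap to the actuators. Invented numbers.

MAX_ALLOWED_DECEL = 3.0   # m/s^2; true "emergency" braking is roughly 6-10

def limit_brake_command(requested_decel_mps2: float) -> float:
    """Clamp the planner's requested deceleration to the configured cap."""
    return min(requested_decel_mps2, MAX_ALLOWED_DECEL)

print(limit_brake_command(8.0))   # planner asks for 8 m/s^2, actuators get 3
```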

This was fairly clear from the NTSB report, which said that one system in the car detected the need to engage in an emergency braking maneuver, but the car was programmed not to perform them. What this article adds, which had already been rumored by insiders, is that these maneuvers had been enabled previously, but were disabled (there's really no other word for removing functionality that was previously present) in order to impress executives in an upcoming demonstration.

To give the more charitable version, imagine you built a system with low-resolution radar. The radar would report returns from cars stopped on the road ahead of you, and that would trigger hard braking. This is, in fact, how early emergency braking systems worked. However, the radar also gets returns from bridges and overhead signs. As such, this system, when driving, would slam on the brakes every time you went under a bridge (in addition to more rarely slamming on the brakes for a car stopped in the lane in front of you.) Such a system "works" but is not deployable. So if you don't use it, is that "disabling the AEB?"

In fact, this is exactly what happened with early AEB. Those systems used Doppler radar to measure the speed of cars. A very slowly moving car in the lane in front of you signals the need to brake, particularly a car that slowed from full speed all the way down. A fully stopped object gets ignored -- not because you don't see it, but because low-resolution radar can't tell whether a stationary return is in your lane or off to the side, so you don't know what to do with it. Resolution improved to the point where systems could tell a car on the shoulder from one in your lane (most of the time), but they still have an issue with stopped cars. Which is why, to this day, we can see Teslas smashing into the side of a truck crossing the road, or into a road divider.
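A sketch of that compromise, under the assumptions just described (early, low-resolution Doppler radar; all names and thresholds are invented):

```python
# Toy version of the classic Doppler-radar AEB filter: returns whose
# ground speed matches the stationary background (bridges, signs,
# fully stopped cars) are discarded, because the radar cannot tell
# which of them actually sits in your lane. Invented numbers.

STATIONARY_SPEED_MPS = 0.5   # below this, a return looks like background

def object_ground_speed(range_rate_mps: float, ego_speed_mps: float) -> float:
    """Doppler gives the closing rate; adding our own speed estimates
    the object's speed over the ground (for an object straight ahead)."""
    return ego_speed_mps + range_rate_mps

def is_actionable_return(range_rate_mps: float, ego_speed_mps: float) -> bool:
    """Keep a return only if the object itself is moving."""
    return abs(object_ground_speed(range_rate_mps, ego_speed_mps)) > STATIONARY_SPEED_MPS

# At 30 m/s: a car ahead still creeping at 1 m/s closes at 29 m/s -> kept.
print(is_actionable_return(-29.0, 30.0))   # True
# A truck stopped dead across the lane closes at 30 m/s -> filtered out
# along with the bridges. This is the failure mode described above.
print(is_actionable_return(-30.0, 30.0))   # False
```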

Has Tesla "disabled" AEB? No, they just don't have one that works on objects that are not moving towards you or away from you. So they can't use it in those situations.

However, in the Uber case, we're waiting for confirmation of what the leaks suggest: that they did in fact have a workable AEB system and deliberately dialed it down.

However, this is not on its own a crazy thing to do, if you have competent safety driving. Well, almost. Prototype driving involves understanding that there are things the system can't yet handle -- that's why it's a prototype -- and depending on the safety drivers to deal with them. But even if they were driving without workable pedestrian detection or AEB, it's hard to think of a reason why the vehicle would not, when it detected a probable but not certain obstacle, give an audible alert or other signal to the safety driver, to make sure they are on their toes.

Consider that there is no pedestrian detection (or even car detection) in standard cruise control. You can still go on the road with that, because you know the car can't handle those things and it's the driver's job. And that seems to be working. When you have standard cruise control, you haven't "disabled" pedestrian detection. Even early adaptive cruise control, which has car detection (but not stopped-car detection), does not have pedestrian detection. It is meant for highways, where pedestrians are forbidden.

If you have an AEB system that slams on the brakes every time that you go under a bridge, and you "don't use it," then yes, you've disabled AEB. Whether or not that was a good idea is another question, but yeah, you've disabled it.

I don't believe Tesla has disabled AEB, because Tesla has AEB working in some scenarios.

I think this is fairly obvious. You have something. You turn it off. You've disabled it. What is less obvious, but which I also believe, is that it's a mistake. These systems are not cruise control. They are not driven solely on access-controlled highways. If they can't handle the situation in question, they shouldn't be on the road. In my opinion.
