Did Uber really botch the switch to one driver, and a timeline of the failure

Yesterday I examined some of the details released by the NTSB about the Uber fatality. Now I want to dig deeper and speculate as to the why. Of course, speculation is risky, though I can claim a pretty good track record. When I outlined various possible causes just after the incident, I put four at the top. I figured that only one might be true, but it turned out that two were (misclassification as a bicycle, and the car wanting to stop but being unable to actuate the brakes), though I did not suspect Uber had deliberately blocked the car from doing hard stops. So I'll try my luck at speculating again.

A probable timeline of error

  • Coming in: Uber is in right lane, planning to enter right turn lane when it opens up
  • 6 seconds out: Victim perceived with LIDAR and radar. Victim is walking across the two left turn lanes. (Unknown: Were LIDAR and radar targets properly recognized as the same thing, ie. fused?)
  • Victim is classified as an unknown obstacle with no forward velocity, in the left turn lanes. No action needed, things are often stopped in the left turn lanes.
  • Unknown time: Classification of victim changes from unknown to vehicle (i.e. car). Cars are not expected to move sideways. "Car" is either in left turn lane or may by this point be in left driving lane. Mistake made in measuring "car's" vector of movement. System decides not to slow because "car" it sees is stopped in the left lane, approaching a light, and the Uber is in the right lane and heading for the right turn lane. It does not, it seems, identify the left-to-right trajectory.
  • Unknown time: Victim is reclassified as a cyclist. If classified as cyclist in left driving lane, vehicle should slow, though typical cyclists in the left lane are heading for the left turn lane, which could explain a decision not to slow. Uber car must still not have had the correct velocity vector, since that vector would have shown high risk of incursion into the Uber's planned path (a simple check of this kind is sketched after this timeline).
  • 1.3 seconds out: Vehicle identifies victim as cyclist in the Uber's lane (right lane), with likely impact. Vehicle finally makes a correct decision that emergency braking is needed. Due to problems with false positives in the past, system is programmed not to engage the brakes in this situation.
  • 1.3 seconds out: Poor system design means alert about need for emergency brake is not conveyed to driver. If there are warnings on the screen (which the driver is looking at), she does not immediately comprehend them.
  • 0 seconds: Impact
  • Shortly after: Driver applies hard brakes, too late.
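To make the point about the velocity vector concrete, here is a minimal sketch (with made-up numbers, not figures from the NTSB report) of the kind of check a planner can make: extrapolate the tracked obstacle's lateral motion and ask whether it will be inside the Uber's planned lane by the time the car arrives.

```python
# Hypothetical path-incursion check. All values (lane width, speeds, offsets)
# are illustrative assumptions, not figures from the NTSB report.

def will_incur_into_path(obj_lateral_offset_m, obj_lateral_speed_mps,
                         obj_distance_ahead_m, ego_speed_mps,
                         lane_half_width_m=1.8):
    """Will the object be inside the ego lane when the ego vehicle reaches it?"""
    time_to_arrival_s = obj_distance_ahead_m / ego_speed_mps
    predicted_offset_m = obj_lateral_offset_m + obj_lateral_speed_mps * time_to_arrival_s
    return abs(predicted_offset_m) < lane_half_width_m

# A pedestrian 3.5 m left of the lane center, walking right at 1.6 m/s, 25 m
# ahead of a car doing about 17 m/s (~38 mph): she will be in the lane on arrival.
print(will_incur_into_path(-3.5, 1.6, 25.0, 17.0))  # True
```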

It is not yet clear from the report exactly when the vehicle switched from thinking the victim was a car to thinking she was a cyclist. At 1.3 seconds out she would have been in the right lane, but only partly. A typical fit person crossing a street will travel about 7 feet in 1.3 seconds according to research, while lanes are typically 10-12 feet wide. She was hit about 8-9 feet into the lane, from the look of it, on the right side of the Volvo.
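As a back-of-envelope check of those figures (approximate numbers from the paragraph above, not NTSB-measured values), that walking pace puts her only a foot or two into the right lane 1.3 seconds before impact:

```python
# Back-of-envelope check using the rough figures in the paragraph above.
walking_distance_ft = 7.0        # distance a typical fit pedestrian covers in 1.3 s
time_before_impact_s = 1.3
impact_depth_ft = 8.5            # hit roughly 8-9 feet into the right lane

walking_speed_fps = walking_distance_ft / time_before_impact_s        # ~5.4 ft/s
depth_at_1_3_s_ft = impact_depth_ft - walking_distance_ft             # ~1.5 ft into the lane

print(f"walking speed ~{walking_speed_fps:.1f} ft/s, "
      f"~{depth_at_1_3_s_ft:.1f} ft into the right lane 1.3 s before impact")
```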

There are explanations as to why the vehicle might consider a car in the left lane (which is primarily how it identified her) as no reason to slow from its plan to enter the right turn lane. I say explanations because there were also anomalies which it could have tried to understand, so these are explanations and not excuses. Why does the "car" appear to be moving slowly sideways to the right -- which cars don't do? Radar would have told it the obstacle was not moving forward. Cars are often seen stopped at lights, even well ahead of lights, but we humans know that a car stops well ahead of the light only when other cars are in front of it. (It is also possible the system did not realize the radar target and LIDAR target were the same obstacle. All systems try to do that, but they are not perfect at it.)

If the classification switched to bicycle shortly before 1.3 seconds, then that failure did not play a role, since at that time it decided on an emergency stop. If it switched to bicycle well before the 1.3 seconds, again there is an explanation, but not an acceptable one, for not slowing. One does see bicycles in the left lane at this point in this type of intersection. Usually they are heading for the left turn lane. However, the past movement of the "bicycle/car/unknown" should have told the system this is not what was happening. A stopped bicycle in the left lane should trigger caution and slowing down, even from the right lane. You pass a bicycle on the right only with caution.

It is also possible that the system did not connect the unknown, car and bicycle as the same object. Robocars see the world as a series of frames, like a movie. They must depend on their software to say, "This obstacle I see at location X is the same thing I saw 1/10th of a second ago at location X+delta, and the same one I saw 3/10ths of a second ago at X+3*delta. I didn't identify anything 2/10ths of a second ago but that was probably just a transient mistake." For humans this is trivial, but not for software, and such systems do make mistakes. Normally with only a little bit of time, they build a good model of what they see and how it's moving. Perhaps the Uber system did not.
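As a rough illustration of that bookkeeping, here is a minimal nearest-neighbor association sketch. Real trackers use Kalman filters, motion-predicted gating and more robust assignment; the names and thresholds here are assumptions for illustration only.

```python
import math

def associate(tracks, detections, max_jump_m=1.0):
    """Match each new detection to the closest existing track within a distance gate."""
    matches = {}
    for det_id, (dx, dy) in detections.items():
        best_track, best_dist = None, max_jump_m
        for track_id, (tx, ty) in tracks.items():
            dist = math.hypot(dx - tx, dy - ty)
            if dist < best_dist:
                best_track, best_dist = track_id, dist
        if best_track is not None:
            matches[det_id] = best_track   # treated as the same object seen last frame
        # Unmatched detections would start new tracks; unmatched tracks are usually
        # kept alive for a few frames to ride out transient misses.
    return matches

# Track 7 was at (10.0, 3.5) a tenth of a second ago; the new detection at
# (10.1, 3.2) is close enough to be called the same obstacle.
print(associate({7: (10.0, 3.5)}, {"det_a": (10.1, 3.2)}))  # {'det_a': 7}
```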

It is also strange that the NTSB report does not reference in any way the report leaked from Uber much earlier by the news site "The Information." That report said the system classified the pedestrian as a "false positive," i.e. "we see something, but we are confident it's one of the things we should ignore, like sensor ghosts, blowing debris, birds or nearby radar targets." This would be a very important fact, and should not be left out of even a preliminary report if true. It would, however, offer an explanation for why Uber's system ignored until too late what should have been a fairly obvious trajectory of crossing the road, putting whatever the obstacle was on a collision course with the Uber's planned path.

Still to learn

Some important things still to learn:

  • How well did the system "fuse" the radar and LIDAR, i.e. did it identify that the radar pings and LIDAR points were from the same thing? Radar accurately tells you how fast the object is moving relative to you, LIDAR knows where it is. Both know how far away it is. (A minimal sketch of such an association step appears after this list.)
  • What did the visual component of the perception system do, and when, and did it get properly fused with the other systems? Results from the LIDAR can tell the vision systems where to look for objects of interest if all is working well.
  • Given that it had radar, and knew the victim was not moving up or down the road, why did that not help classify the victim? Were there other non-moving radar pings in the same general direction?
  • What was on the display screen? In videos of Uber cars, they show a typical visualization of the perception system, where it shows on the screen where it sees cars, bicycles and pedestrians. Did they not have this? And if they did, why didn't the safety driver see it?
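On the fusion question in the first bullet, the association step often comes down to something like the sketch below: decide whether a radar return and a LIDAR cluster line up closely enough in range and bearing to be the same object, and if so attach the radar's velocity to the LIDAR's position. The gates and data layout are assumptions for illustration, not Uber's architecture.

```python
def fuse(lidar_objects, radar_returns, range_gate_m=2.0, bearing_gate_deg=5.0):
    """Attach a radar velocity to each lidar object when the measurements line up."""
    fused = []
    for lidar in lidar_objects:
        match = None
        for radar in radar_returns:
            if (abs(lidar["range_m"] - radar["range_m"]) < range_gate_m and
                    abs(lidar["bearing_deg"] - radar["bearing_deg"]) < bearing_gate_deg):
                match = radar
                break
        fused.append({
            "range_m": lidar["range_m"],                 # lidar knows where it is
            "bearing_deg": lidar["bearing_deg"],
            "radial_velocity_mps": match["radial_velocity_mps"] if match else None,  # radar knows how fast
        })
    return fused

# A lidar cluster 25 m out and a stationary radar return at nearly the same spot
# should be fused into one obstacle with zero forward velocity.
print(fuse([{"range_m": 25.0, "bearing_deg": 1.0}],
           [{"range_m": 24.6, "bearing_deg": 0.6, "radial_velocity_mps": 0.0}]))
```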

An unplanned and badly executed switch from 2 safety drivers to 1

Almost everybody agrees that Uber should not have had only one safety operator in the vehicle. It is speculated that they made that switch because they were in a rush for a demo, and this was the quickest way to rack up more miles. For many teams, you could not do this -- your limiting factor is how many cars you have, not how many drivers you have. It is not something a well-funded team would do to economize. But I am guessing Uber was not using all vehicles in full shifts and made this change so they could go to full testing of every car 24/7. (Given they were testing late Sunday night, one presumes they were running 24/7.)

When you use two people, one has the wheel with primary duty of watching the road. The other has primary duty of monitoring the software, looking at the road from time to time. On the screen, you can see if the system is classifying obstacles correctly. Your human perception is superior to the robot's, so even looking at the screen you can see when it's making mistakes. You can tell the other safety driver to watch out because the system keeps flipping its mind about whether it sees a pedestrian or a bicycle or a car. You are also looking at diagnostics coming from the system and interpreting what they mean.

When Uber collapsed the two roles, I wonder if they did it without thinking. In particular, somebody, without proper prudence, just had the solo safety driver perform both roles. This is a dumb idea, but it might happen in a rushed change.

Likewise, the lack of audible alerts for serious events could be a result of this change. With two operators, the software operator, looking at the screen, acts as a slower but smarter audible alert. If that operator shouts out or says something is wrong, that will put the driver at the wheel on alert. They switched to one driver without replacing that human-based alert system.

It's pretty clear, of course, that you should not switch to one driver when your car is not yet very good. But if, for some reason, you decided you wanted to make that unwise choice, you would do it better with deliberation -- changing emergency brake policy, changing audio alert systems and making sure the solo drivers know to stop looking at the diagnostic console -- even training them not to do that.

I will admit that when I have been manually driving a self-driving car, I have made glances at the software diagnostic console. I know enough not to stare at it -- I am manually driving after all -- but it is seductive. You can't help but be curious as to how the system is perceiving the world. (You can usually drive these vehicles in manual but still leave the perception system running and displaying its view of the world.)

You would need to actively discourage it, or simply turn it off. But Uber was rushed and didn't think to do that.

How could it not emergency brake?

Perhaps the most common question I have seen is how they could possibly have programmed the system to deliberately not brake once it decided emergency braking was necessary.

The answer, implied in statements and leaks, is that Uber's system often decides to emergency brake for no valid reason. If this happens often enough, there is no choice but to not let it actually apply the brakes. Or at least that is the simple solution. You can't put a car on the road that is regularly hard braking for ghosts. It's not safe or pleasant.

Uber decided that since it could not do that, it would rely on safety drivers for emergency braking. After all, there are hundreds of millions of cars out there all relying on human drivers for their emergency braking. It's a system that generally works. The problem with that, of course, was the fact that the safety driver was looking at the console.

"Use the human" is generally acceptable, but of course Uber could have done better. There are two things they should have done: * The system should send an audible alert, and even a short brake jab alert. (Short brake jab is very attention getting.) * The system should have levels of confidence in its emergency braking decisions, and there should be some level above which there are almost no false emergency braking events, and at that level it should still be able to actuate the brakes.

The second one can be hard in a prototype car. In this case, we have a car that has suddenly realized it is confident it has a bicycle in front of it and is going to hit it. Yet it had no cognizance of that before, because it did not associate this bicycle ahead of it with the car it thought was in the left lane. That has many of the hallmarks of a false positive, and you don't have much time to get more sensor readings and be sure. So it might not qualify.

Comments

Uber engineer: "You can't put a car on the road that is regularly hard braking for ghosts."

Uber executive: "Let's disable the brakes, then."

I do think the common phrase I have seen about "disabling" the brakes is misleading. (The Volvo AEB system is more truly disabled.) They judged that the system is not capable of making emergency brake decisions, and so decided not to use it for that. The brakes were not disabled. They made a decision, as they have for every other thing the car can't do, to leave that to safety driver oversight. Every team does that in principle, though I don't know how many have decided not to have any emergency braking. It's a sign of an immature car, to be sure. But immature cars need to turn into mature cars, and the only way we know to do that is to test with safety drivers taking up that slack.

Yeah, I was mostly making a joke.

If the only way you know to turn immature cars into mature cars is to test with safety drivers, you're not very creative, though.

You don't need to run in autonomous mode to test whether or not your system sees ghosts. You don't need to run in autonomous mode to test whether or not your system categorizes pedestrians as vehicles. Etc.

Yes, I do know you think that. If you think you can do it, you should build a startup which tests without safety drivers and when all the other teams get taken off the road for reckless use of safety drivers, you will make $1 Trillion!

Do you think it was necessary for Uber to run in autonomous mode in order to test whether or not the system sees ghosts?

Do you think that Uber couldn't improve its system at all without using autonomous mode?

I can agree that there's some marginal benefit to using autonomous mode with safety drivers. There's also a marginal benefit to using autonomous mode without safety drivers. But there's also a risk.

While I don't know the plan of all teams, I believe teams follow all these approaches. They drive on test tracks. They drive in sim. They drive with safety drivers. They drive just collecting data. The latter, however, becomes more and more rare as the incremental value of that decreases.

Once a team has concluded that their car, with the right safety driving, is safe on the road, there is little point in shadow driving of this type. You learn less from it, the risk is -- and this is important -- no different, and since cars are expensive, every hour of shadow driving is an hour of full test driving not done.

Shadow driving is slightly cheaper as you can do it with just one team member, while safety driving (done properly) needs two. But the car is the limiting resource, not the team members.

Again, this is based on the assumption -- false for Uber -- that safety driver testing is no more dangerous than ordinary human driving. If it is no more dangerous, and the track record of most companies other than Uber suggests that it is, then there is no motivation for shadow driving. You learn much less, you get less proof of performance under your belt, you save very little money, and you don't reduce risk to the public.

Of course, if operating with safety drivers is more dangerous than having a person just drive around recording data, then one could argue it is not the best and obvious choice. And for Uber, there is a good argument that it was more dangerous, for reasons I have outlined.

If you have data that shows that, for the major teams, other than for Uber, operation with safety drivers is unsafe, then let's look at it. Otherwise, let's give this a rest. Of course, since Waymo has driven more miles than everybody put together, they have most of the data. We know they have had only one minor incident of fault in close to 6 million miles of operation. (And the human driver made the same mistake in that incident.) That is actually an astonishingly good record, suggesting that in their case, autonomous testing with safety driver is considerably safer than ordinary driving.

Now you're changing the goalposts. I was specifically talking about Uber in my comments. Uber was the specific example of an "immature car" that I was talking about. This post is about Uber.

Your comment was, "You can't put a car on the road that is regularly hard braking for ghosts." You then used this for justifying not hard braking. My tongue-in-cheek comment was meant to suggest that I think a better solution is not to put such a car on the road (in autonomous mode).

A car that is regularly hard braking for ghosts, with a safety driver and with hard braking disabled, is more dangerous than a car which is driven by an ordinary driver (with equivalent training and experience to the safety driver). At least for some value of "regularly." This is because safety driving for an immature system is *much* harder than ordinary driving, especially if you want to do it in a way which is significantly more beneficial than ordinary driving (i.e. while resisting the urge to take over whenever the car drives slightly, but not too much, more dangerously than you would).

Uber was, it seems clearer and clearer, not ready to move to one safety driver, and had bad policies and training for safety drivers.

Other teams have better safety driver systems, and have tested cars with flaws similar to this. Of course they have not published the list of such flaws, but all tested cars, in their early days, don't handle a variety of situations.

It is not the decision to safety-driver test a car which is not good enough to be trusted to emergency brake that is the problem. It is the decision to have a seriously poor safety driver regimen that is the problem.

Of course, the better the car performs, the better. But they all start from a low level and move to a higher level. I think Uber should face consequences for their actions -- and they have, among other things being banned from almost everywhere for now, including Arizona. But the mistake would be to blame them for testing a prototype car that has bugs. That's what everybody starts with. And everybody else has pretty much managed, as far as we can tell so far.

And, by the way, if in court somebody shows that Uber knew their system was significantly more dangerous than industry practices, they will pay quite a lot for that in the trial.

I don't blame them for testing a prototype car that has bugs. I blame them for testing, in autonomous mode on public roads, a prototype car that was not ready for such testing. A car which is not good enough to be trusted to emergency brake is likely to be such a car. But Uber's cars were not good enough in many other ways as well.

As far as moving to one safety driver, you really can only have one safety driver. Uber moved to less than one safety driver, by requiring the safety driver to also perform tasks other than safety driving.

Nobody is likely to have standing to take Uber to court, except for the state and federal government, at least one of whom hopefully will do so.

OK, so I haven't heard if you think autopilots should be prohibited, but the rule you suggest would say that it is OK for Tesla, Audi and various other companies to sell autopilot products, which can't see traffic lights or stopped fire trucks or the broadside of a semi-trailer or a host of other obstacles, and let untrained customers drive them for hundreds of millions of miles, but you don't want to allow robocar developers to drive far more capable vehicles with trained professional drivers?

"Far more capable" is contextual. These robocars are not "far more capable" when it comes to ordinary driving. Tesla autopilot

It is OK for car companies to sell autopilot products which can easily be used safely. It's not OK for a ridesharing company to use robocars in situations in which they can't easily be used safely.

It's possible that Tesla, Audi, and/or various other companies are selling autopilot products which don't perform adequately within the situations for which they are marketed to work. If so, then I don't think it's OK for these products to be sold (until they are fixed). Tesla's autopilot, specifically the lane assistance part of it, might be an example of that (or might have been, before certain fixes were made). But maybe not. I haven't really looked into enough of the details to say. What I'd want to know is what situations it's failing in, and whether these are situations where a driver should know not to use it. (I also haven't looked deeply into the details of Tesla's problems with fire trucks, nor have I examined Tesla's marketing materials to see which situations it markets autopilot as being useful in.)

I think it's generally the case that the robocar offerings from serious vendors start out more capable than Tesla Autopilot did. By pretty much every definition, regardless of context.

What I'm saying is, "If it would be legal/safe to sell the prototype robocar as a driver assist autopilot, and let customers drive around with it the way they drive around with Teslas, then how could it be unsafe/illegal for trained professional safety drivers to drive around with it?"

More capable of what?

Drive around with it how? It's fine to drive around with it only using autonomous mode in those situations where autonomous mode is safe. If it's safe in stop and go traffic, go ahead. If it's safe on a limited-access highway, go ahead. The problem is using autonomous mode in situations where you know that it's severely flawed.

Granted, there's a bit of dishonesty going on where Tesla tells drivers not to do certain things knowing full well that they're going to do them. It's hard to litigate against that, though.

Also, are we comparing the capability of Tesla Autopilot to the capability of Uber autonomous mode *with emergency braking "disabled"*?

Will Tesla Autopilot ever intentionally crash into something it detects 25 meters ahead?

Absolutely Teslas will crash into things they detect. Surely you have seen the several recent accidents, including fatalities, where the Tesla crashed directly into obstacles at high speed that were clearly detected on its radar in particular. Tesla has a situation similar to the problem described for Uber here. Tesla's systems produce constant false positives, and would be making the Tesla brake all day, so because of that it does not brake for things like the broadside of a semi-trailer, or a highway divider, even though they are clear as a bell on the radar. The problem is that with radar, you don't quite know where things are, and so you will pass road signs, bridges and other bits of stuff on the road and get a radar return from them, similar to the one you get from an obstacle right in front of you. Similar enough that Tesla has no choice but to not brake for those sorts of radar targets. And thus it has plowed into things, killing the driver.

Tesla's cameras also see these objects of course, in the sense of detecting them. Their software is not good enough to identify all of them. Some it identifies fairly well (the tail end of a car), some poorly, and some, presumably, unreliably.

Every system (except Waymo's) has things that it is not reliable at detecting. That's why it either needs a driver/supervisor, or a safety driver. The Tesla has many more things in the "I don't detect this at all" category -- like traffic lights. But it has its share of "I can detect this sometimes" stuff.

Also, it is wrong to describe what the Uber did as "intentionally crashing." What Uber has is a system deemed not reliable enough at determining that emergency braking is needed. What this means is that often it will decide that it is needed, when it is not. It is more correct to say it is intentionally not emergency braking for things it can't reliably detect. If the car "knew" there was an obstacle, it would not drive into it. The problem is that while the software has concluded there is an obstacle, the programmers know that knowledge is too uncertain, and so they code that in.

I think they could have done a much better job of course, but the principle itself is not flawed.

I think you're just plain wrong about this. The car saw an object. It classified it as a bicycle. It determined it was going to crash into it unless it slammed on the brakes. A separate system then cancelled the order to slam on the brakes.

But I'd like to hear more about these false positives with LIDAR, because my understanding (and what you've said before), is that it isn't prone to false positives of this magnitude (falsely detecting a bicycle-sized object at close range for 1.3 seconds).

Radar, of course, "doesn't know where things are." So it's not at all comparable.

From the report I think it's clear that someone at Uber had turned off all "emergency braking" (there's a definition in the report) regardless of which systems were involved or how reliable the particular detection was. It was an incredibly stupid (and evil) thing to do, but I could see it being done if you make the decision to implement a half-assed solution to a problem of emergency braking for ghosts, instead of pushing back a deadline that you're under pressure to make. Hopefully the full report will have more details about who was involved in making this decision and why it was made.

While none of us have info on the architecture of the Uber system, I am pretty sure I would not use that language. I think it's probably something similar to Tesla and radar. Radar has poor resolution. When there is something right in front of you, stopped on the road, you get a clear radar return from it. The problem is you also get radar returns from things near the road, most notably bridges and overhead signs (because most auto radars have very poor resolution in the vertical) but also some things on the side of the road. Things that are moving are super easy to distinguish in radar. But not things that are stopped because the whole world is stopped and bouncing radar waves back at you.

So, your radar sees a truck in front of you. Nice strong returns. Do you hit the brakes? No, because 50 times a day you also go under a bridge, and you get the same signal from your radar. There are techniques that can help, like having a map of where all the bridges are. But what if there's a truck under a bridge? Or, of course, you get better radar or use other sensors or research new answers. Which is what people do. But so long as your system is going to slam on the brakes under even 1% of the bridges it drives under, you can't let it slam on the brakes -- even when it's a truck and not a bridge. That's why Teslas have run into trucks. Not because they don't detect them with their sensors.
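The arithmetic behind that "1%" point, with illustrative numbers rather than anyone's measured rates: even a small per-bridge false-alarm rate adds up to phantom hard braking every day or two, which nobody would ship.

```python
# Illustrative figures only; not measured rates for any vendor.
bridges_and_signs_per_day = 50     # overhead radar returns a commuter car might pass
false_alarm_rate = 0.01            # hard-brake on 1% of them

phantom_hard_brakes_per_day = bridges_and_signs_per_day * false_alarm_rate
print(f"~{phantom_hard_brakes_per_day:.1f} phantom hard brakes per day "
      f"(about one every {1 / phantom_hard_brakes_per_day:.0f} days)")
```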

The NTSB report says that Uber's system is unreliable as well. Not in the same way as Tesla, because it's a different system with more sensors and, hopefully, lots more sophistication. But that doesn't stop it from being too unreliable to be given control of hard braking. Which means it will not have control of hard braking when it thinks there is a bicycle in front of the car.

So I would not characterize it as "a separate system cancelled the order." I think it's more likely that the main planning system just has code to say it can't trust the perception system at short range.

Remember, this is for emergency braking, not braking. If you need to do emergency braking, there are really only two reasons.

  1. You've made a big mistake, and only detected something much later than you should have.
  2. Somebody else has made a mistake, and appeared out of nowhere in your right of way.

The first one the safety drivers are supposed to be monitoring for. They are supposed to say, "That's odd, there is a person on the road and the car's not slowing." And correct the mistake before it's an emergency.

The rules above are true for humans. Except humans get one extra source of emergency braking that robots should not have -- we follow too closely. Pretty much every time I recall needing to slam the brakes it was either because I was following too closely and/or I was glancing away when traffic slowed quickly in front of me. Never hit anybody, glad to say. Emergency braking is a "should not happen" and it means the system is not operating correctly. This explains, but possibly does not excuse, why they would want to leave it to the safety driver, even above the false positive problem.

Lidar is nothing like radar. It can tell the difference between a bridge and a truck. What would be a possible failure which would make the lidar detect a bicycle sized object for a full 1.3 seconds (over a dozen revolutions) when nothing of significance was there? It seems clear that the problems Uber was having were *not* problems specifically with the lidar.

I think your second explanation is what happened. Someone said that emergency braking should never be happening in the first place, and just categorically turned it off. This was an incredibly bad decision, and it cost a woman her life.

As I said, Tesla's problems are different from Uber's. But the rough category is the same. "Our system is not reliable enough at deciding when emergency braking is necessary, and so it is not given that ability." You had asked whether Teslas are detecting things and slamming into them anyway, and the answer is yes. Uber is also doing that, but with different tools and sensors.

The question at hand was: since all the existing autopilots, traffic jam assists, adaptive cruise control and lanekeep systems have tons of things they don't do because they are early stage products, and we allow untrained consumers to drive around supervising them, what would be the grounds for saying it is wrong for a robocar development team to take their car, which is superior to all those systems but still not doing everything, and drive it around with professional safety drivers?

The answer is, there are no grounds for saying it's wrong, other than a criticism over how they manage the safety drivers, which Uber definitely deserves and is getting, here and everywhere else.

You suggested that Tesla's cars are not given the ability to engage in emergency braking. I don't think that is true.

According to the report, Uber had emergency braking turned off.

They are different, as I have said. I am saying that Tesla has many instances of things that are detected by its sensors, sometimes detected very clearly by its sensors, but, to use the strange vocabulary here, emergency braking is "turned off" for those.

(As I say, that vocabulary is strange, and I have seen a bunch of it lately because the Volvo Uber uses has its own Citysafe-brand emergency braking system, and that one is "disabled," like all the features of Citysafe. The Uber does not have a reliable emergency braking system; it's not a question of whether they could turn it on or off. Teslas don't reliably react to big trucks crossing the road in front of them, so they don't brake for them.)

The vocabulary is "strange" (I'd say wrong) as used to describe not emergency braking when you don't know where (and/or what) a detected object is. It's not strange to say that emergency braking is "disabled" or "turned off" when it won't be done under any circumstances whatsoever.

According to the preliminary report, Uber completely disabled emergency braking, and they didn't even replace it with anything. This was an incredibly bad decision, which cannot be justified, and someone is dead as a direct result of it.

Disabling the Citysafe emergency braking system is probably also a bad decision, but I'd like to hear more about why they needed to do that before I could say for sure.

My concern with the vocabulary is that you can't turn off or disable what you don't actually have. The Uber does not have a practical emergency braking system, not yet. The Tesla doesn't have a reliable detection system for trucks and the other things it has run into. You can say they "disabled" them but that's really a poor way to describe it. They just don't have it yet. They did, in a sense, "disable" the Volvo Citysafe capabilities, because it's really not practical to have them running at the same time. Citysafe also does not detect everything; because it is a driver assist system, it is not expected to. As ADAS, Volvo can tune Citysafe to only brake when it is very sure, at the cost of not braking for some things it should brake for. It can do that because it's ADAS.

Uber could have done the same, tuned their braking calculations to require high certainty, and not brake on near-high-certainty obstacles. In fact, perhaps that is what they have done; it's not always clear what makes it through these reports. They may wish they had, or had found a way to leave the Citysafe on. (It is probably difficult to leave it on because you don't want two systems trying to torque the wheel or apply the pedals at the same time.)

It is more correct to say, "Uber does not have emergency braking good enough to deploy safely, so has not deployed that" than to say, "Uber disabled their emergency braking."

Why do you think Uber doesn't have a practical emergency braking system? What specifically is impractical about it? Surely they have the ability to hit the brakes hard. And even if they can't (for some strange reason) hit the brakes to decelerate faster than 6.5 meters per second squared, they could have hit them to decelerate at 6.4 meters per second squared, couldn't they?

As an aside, why is impractical to have Citysafe running "at the same time"? I'm curious about this one, because it seems like it'd be a useful backup system in case everything else fails. But maybe there's something I'm not thinking of, which is why I ask.

As far as Tesla, right, they didn't "disable" their detection system. Whether or not those systems are reliable now, I don't know. I'd guess they have fixed the bugs which led to the crashes that occurred when autopilot was being used in a situation it's supposed to be used in.

Well, they said it. An emergency brake system is not usable if it brakes with any frequency for false positives. Uber says that their system makes too many errors when wanting to trigger an emergency brake, so they are unable to use it for that. Of course, Uber doesn't have an "emergency brake system" per se. It has a self-drive system, which detects obstacles and applies braking when needed to avoid them. However, it seems that in their tests they found that for the portion of braking avoidance decisions which require more than 0.6 g, it has a dangerous false positive rate, so they don't use it there.

The typical "AEB" system in the Adas world is a standalone system which is only used for emergency braking. Typical designs actually wait to hit the brakes, giving the driver a chance to hit them. For example, usually this was built starting with FCW systems, which gave an alert if the driver should brake, and then finally if the driver doesn't brake, it triggers the braking. Giving an alert would have been an obvious good idea for Uber. In some cars the ACC, FCW and AEB are really different systems, even though they all do similar things, from what I have heard. The ACC's job is to brake gently when there is an obstacle ahead, and to replace the driver on that. The FCW is to alert the driver if the driver is misisng something. Or perhaps if the ACC is. This is also because early ACC did not handle stopped cars and was really only for use on highways. There were a lot of these built so I don't know the details of all the different ones.

The word disabled is again misleading. From the photo, it seems that Uber doesn't even drive the left turn at this intersection, since those left turn lanes don't show on its map. (I have wondered if this might have played a role in the failure here, since if the lanes are not on the map, the victim in those lanes would be treated as on-sidewalk, and not on-road, at least until getting to the right lane.)

But anyway, the car has no ability to drive those lanes. They did not "disable left turn." It just doesn't work yet. Emergency braking on the Uber doesn't work yet either, i.e. doesn't work safely enough.

Sounds like the emergency braking system works fine, but it was disabled because it was being triggered too often.

That was a stupid thing to do, and it killed someone.

I guess we will never agree. You seem to think there is an emergency braking system. There isn't one. (Except in the Volvo Citysafe.) So let's give it a rest.

Specifically, I believe that the Uber had the ability to detect that an emergency braking maneuver was necessary to mitigate a collision, and had the ability to engage such an emergency braking maneuver, except for the fact that emergency braking maneuvers were not enabled while the vehicle is under computer control.

But the NTSB report says Uber does not believe that. I mean obviously it can press the brakes hard. And obviously it didn't do so. That's because Uber says they don't have a sufficiently reliable ability to detect when it's necessary. You can say they are lying, I suppose.

I didn't claim they are lying, just that they made a huge mistake.

Made, past tense, by the way. I assume that if Uber ever puts its cars back on the road in autonomous mode, that they will enable emergency braking maneuvers. It's just too obvious that the risks of not enabling them far outweigh the benefits, if you value human life above corporate profits.

Yes, they will improve it. But the biggest thing they have to do is improve their safety driver operations, because improve the tech as they may, the tech will fail again. Everybody's does. I would venture that most cars, certainly earlier in their road testing, did things which would have, absent the safety driver, caused a fatal accident. Uber's error is that they had bad safety driver protocols, a bad safety driver, and an accident that a car of any maturity should not have had. In that order.

I mostly agree with that. The most egregious mistake was asking the safety driver to monitor the car's diagnostics. I'd put as second most egregious not doing *anything* to mitigate a crash when an imminent one is detected with high probability. (Alternatively, if the car can't detect with high probability, using LIDAR and a really simple algorithm to measure velocity, a "large object" 25 meters away in your lane not moving forward, then that is the egregious mistake. But the way I read the report, the woman was detected, and it must have been with high probability if they bothered to calculate such a thing.)
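For what it's worth, that "really simple algorithm" could look something like the sketch below, assuming successive LIDAR object centroids in the vehicle frame and a known ego speed; the coordinate convention and thresholds are illustrative assumptions.

```python
SCAN_PERIOD_S = 0.1   # a typical 10 Hz lidar revolution

def forward_velocity_mps(prev_centroid, curr_centroid, ego_speed_mps):
    """Object's along-road velocity in the world frame (ego motion removed).
    Centroids are (x, y) in the vehicle frame: x ahead along the road, y lateral."""
    relative_v = (curr_centroid[0] - prev_centroid[0]) / SCAN_PERIOD_S
    return relative_v + ego_speed_mps

def stationary_large_object_in_lane(prev_centroid, curr_centroid, ego_speed_mps,
                                    lane_half_width_m=1.8):
    in_lane = abs(curr_centroid[1]) < lane_half_width_m
    not_moving_forward = abs(forward_velocity_mps(prev_centroid, curr_centroid,
                                                  ego_speed_mps)) < 0.5
    return in_lane and not_moving_forward

# An object ~25 m ahead that closes at roughly ego speed between scans is not
# moving forward; if it is also within the lane, that alone justifies braking.
print(stationary_large_object_in_lane((26.7, 0.4), (25.0, 0.3), ego_speed_mps=17.0))  # True
```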

There were lots of other errors, but they were harder to see ahead of time than those two.

A combination of errors caused the fatality. Any one of them alone probably wouldn't have done it. But that's probably true of most vehicle fatalities. If it weren't, we'd have a lot more, because humans make many mistakes on a regular basis. These cars have to be built and operated under the assumption that mistakes are going to be made.
