NTSB Report implies serious fault for Uber in fatality

The NTSB has released its preliminary report on the fatality involving the Uber prototype self-driving car. The NTSB does not attempt to assign blame, but there are some damning facts in the report.

Perception and planning failure

The high-level problem is that the Uber perception system failed. She was, as I and many others predicted, detected over 100m away by radar and LIDAR. The report does not directly say when the camera systems detected her. She was first classified as an unknown obstacle (as is common for the first, distant detections of something), then as a vehicle, and then as a bicycle. (She was walking a bicycle.) In my analysis of possible causes written immediately after the accident, I suggested that misclassification as a bicycle was perhaps my most likely guess, and another of my top 4 guesses was a failure of communication between the system and the brakes.

It has not been revealed whether it classified her as a bicycle going down the road (as I speculated). If they had classified her as a bicycle crossing the road, she should have been treated as a major collision risk.

The Uber, we learn, as some readers suspected, was planning a right turn. As such, it would not brake for a bicycle in the lane to the left that was continuing on. Even so, everybody knows that passing a bicycle on the right when preparing to make a right turn is a risky move that should be done with high caution.

The investigators say the victim is visible in the camera videos, but they do not say much about when she became visible or what the visual parts of the perception system did.

Emergency stop failure

The Uber system, after making this error, realized 1.3 seconds out that it should emergency brake. However, it did not do so. It relied on the safety driver, who was not looking. It gave no audible alert, nor, apparently, even a diagnostic on the screen, to indicate the need for emergency braking.

This is another tragedy. It turns out that at 38mph, in 1.3 seconds you travel 22m, and 22m is almost exactly the stopping distance for a hard brake at 38mph. In other words, if the car had activated emergency braking at that moment, it would have just barely touched her, or stopped with inches to spare. Swerving could also have helped, though you generally don't swerve and brake at the same time, and robocar developers don't like the risk of swerving.

Uber does not emergency brake because, it appears, their system has too many false positives and the vehicle would be impossible to ride in if it did hard braking very frequently. Sudden hard braking (where you would need a bumper sticker that says, "I brake for ghosts") has safety consequences as well, particularly if people don't have seatbelts on, or they are holding hot drinks or laptops, or there is somebody on your tail. However, while it may be acceptable to leave braking decisions to a safety driver, it is very odd that they would not signal some sort of alert -- a sound, a message on the screen, a flash of lights or even a light brake jab of the sort that gets anybody's attention.

Of course, if it had alerted the safety driver 1.3 seconds out, that driver would probably have needed 0.5 seconds or more to react, and so would still have hit the pedestrian, but at a much slower speed, possibly not causing death.
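
For those who want to check the arithmetic, here is a minimal back-of-the-envelope sketch. The roughly 0.7g deceleration and 0.5 second reaction delay are my own assumptions for illustration, not figures from the NTSB report.

    # Rough check of the distances above. Assumptions (mine, not NTSB's):
    # ~0.7 g of hard braking and a 0.5 s driver reaction delay.
    import math

    MPH_TO_MS = 0.44704
    v = 38 * MPH_TO_MS            # ~17.0 m/s
    a = 0.7 * 9.81                # assumed hard-braking deceleration, ~6.9 m/s^2

    travel_1_3s = v * 1.3         # distance covered in 1.3 s at constant speed
    stop_dist = v**2 / (2 * a)    # distance needed to stop under constant hard braking

    # If the alert comes 1.3 s out but the human needs 0.5 s to react,
    # braking only starts with 0.8 s of travel left.
    remaining = v * 0.8
    v_impact = math.sqrt(max(v**2 - 2 * a * remaining, 0.0))

    print(f"distance covered in 1.3 s: {travel_1_3s:.1f} m")            # ~22 m
    print(f"hard-brake stopping distance: {stop_dist:.1f} m")           # ~21 m
    print(f"impact speed with 0.5 s reaction delay: {v_impact / MPH_TO_MS:.0f} mph")  # ~23 mph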

In effect, my other guess as to the source of the failure was true: the car wanted to brake but did not actuate the brakes -- but I certainly didn't guess that this would be because Uber had deliberately blocked that.

Safety driver failure

These perception and planning errors are bad, but they are to be expected in prototype cars, where they should be caught and corrected by the safety drivers. The more damning information concerns why that did not happen.

The safety driver was not looking at her phone. Instead she was looking at a screen in the center console with diagnostic information from the self-drive system. The report says:

"The operator is responsible for monitoring diagnostic messages that appear on an interface in the center stack of the vehicle dash and tagging events of interest for subsequent review."

This is not an appropriate task in a vehicle with only one person on board. Other reports have suggested Uber switched to just one safety driver to speed up testing before an upcoming big demo. They may have made a serious mistake in not properly adjusting their procedures.

The Volvo is equipped with a driver alertness monitor, which is disabled in self-drive mode. (As are the Volvo's emergency braking system and other such systems.) Possibly the driver monitor had to be disabled because safety drivers were asked to look at their consoles, or simply because all the Volvo systems are turned off.

More to come

In other "not so surprising" news, Uber announced just before release of this report that they would shut down testing in Arizona for good, and focus on their Pittsburgh advanced tech headquarters.

I'll update this article during the day with more details. As predicted, multiple things went wrong to cause this fatality. Some are things you would expect to go wrong and have plans for. (Not very related is the fact that the pedestrian was high and did not look towards the Uber until the very end. That is tragic for her, but we must expect there will be such pedestrians on the road.)

Some are things you would not expect to go wrong, however. While Uber and everybody else will learn from these mistakes, there needs to be a deeper investigation into why Uber told safety drivers driving alone to look at their consoles, why there was no audible or visual alert about the pending problem, and why the decision was made to never use emergency braking or other avoidance. (Even if false positives made it unsuitable at certain thresholds, once you are very close there comes a point when you will not get many false positives and should use it.) In addition, the lack of driver monitoring is an error, though it's one that many teams make, and presumably will not make in the future.

Comments

I think if authorities had known in general how the system was designed, as described in this report, they would not have permitted such testing at full speed and with only one human driver. This argues for requiring release of design descriptions answering questions such as:
1) is the car's existing emergency braking system disabled? If so, why?
2) does the car do emergency braking? emergency swerving? any braking at all in emergencies?
3) does the car reduce acceleration when detecting an unspecified obstacle?
4) does the car notify the human driver of an impending emergency?
5) if only one human driver, do they have any distracting duties?
6) at what distance can the car definitively identify pedestrians?

I doubt authorities would have had the knowledge to make this appraisal. Remember that one does eventually have to get good enough to work with just one safety driver (as Waymo did, before going to one not at the wheel, and finally no safety driver). The key is to figure out "when is it OK to drop to one driver?" And frankly, this is still being worked out, I think. Now, I think most will agree that it's not OK to drop to one driver if that driver is supposed to monitor the software and not the road in a car with a poor reliability rate like the Uber's, but would somebody have known to write that regulation?

There are two sources of emergency braking. The built-in Volvo-sold one is disabled, as are all the built-in systems, during self-drive. This is normal for all cars as far as I know. The Uber system has the ability to do emergency braking. However, this is "tuned" to require, it appears, much too high confidence. Uber depends on the safety driver to brake. This could be OK if you had an audible alert for the safety driver, and you were monitoring the attention of the safety driver, and you did not assign the safety driver the task of looking at the screen.

Emergency swerving is risky and is not common, from what I understand. Again, that is up to the safety driver. Few teams are confident enough that in an event they would want their car to do emergency swerving. Not today; perhaps in the future. For now, the better plan is to not put the public at risk with emergency swerving when you have a human on board who can do that.

It is pretty normal to not be able to identify something at the outer range of sensors. Cars that slowed for that would slow all the time. However, if it remains unidentified for too long, slowing could be prudent, but I don't know what different teams do in that situation, or under what thresholds.

4 -- it did not, and that is particularly poor planning.

5 -- will be one of the key questions here. I suspect that Uber's demo rush might have caused this error. When you have two crew on board, the one in the passenger seat is supposed to monitor the software quite a lot, and the one behind the wheel keeps eyes on the road. I think they went to one driver without figuring out what to do about those two tasks.

6 -- This varies based on sensors and situations. It can be OK to not detect that the obstacle is a pedestrian if you still stop for it because you don't know what it is. Not stopping because you identified it as not a problem is of course a big problem, and was the case here. Broadly, a car should be able to reliably identify something it should stop for at something greater than its current stopping distance (combining decision time and braking time given road conditions). Ideally a fair bit better than the stopping distance, but that's obviously a minimum, except in situations (a person jumps out from behind a parked van) when this is impossible.
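
To make that minimum concrete, here is a small sketch of the idea; the decision time, deceleration, and safety margin are illustrative assumptions, not figures from any particular system.

    def required_identification_range(speed_ms, decision_time_s=0.5,
                                      decel_ms2=6.0, margin_m=5.0):
        """Minimum range at which an obstacle must be reliably identified:
        distance covered while deciding, plus braking distance, plus a margin.
        All default values are illustrative assumptions."""
        return speed_ms * decision_time_s + speed_ms**2 / (2 * decel_ms2) + margin_m

    # At about 17 m/s (roughly 38 mph) this works out to ~8.5 + ~24 + 5, or about 38 m.
    print(required_identification_range(17.0))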

I don't understand how it could not have detected the victim's velocity (since the car did see her). Pedestrian or bicycle wouldn't matter; should it not have been clear it was something moving across the street, crossing the planned path of the car, and therefore something you had to brake for?

If they correctly tracked her velocity, which they should have from lidar points, they should have identified her as a pedestrian, or at worst a cyclist, slowly crossing the road outside a crosswalk.

Since they tagged her as a cyclist on a bicycle, I can only speculate that, since they were at a spot where you might not expect a cyclist to be riding laterally across the road, they somehow favoured classifying her as a cyclist riding in the lane. If they had her proper vector, their modelling system should have plotted a path that intersected theirs, causing immediate slowdown. If they got the vector entirely wrong, i.e. thought she was riding along the road, then they did a quasi-reasonable thing, which was to plan to pass her on the right when going into the right turn lane. In reality, that is not the right thing; one always takes extra caution when moving to pass a bicycle on the right before a right turn, which was not done here. Neither explanation is good for Uber.

People are focusing on the driver’s assigned task of monitoring the diagnostics on the screen in the center console, which is fair.

For me, the bigger surprise was that the system was not designed to brake for obstacles in emergencies, and instead relied on the safety driver. This would make the system L2 if we believe Uber, because it was not designed to complete the full OEDR. So essentially the same as Tesla, and not really an SDC. Crazy!

I wonder when this “no emergency braking” feature was added. If it was there from the beginning, it seems to put more blame on the safety driver. If it was added recently, in addition to tuning to prevent false positives, because Uber was preparing for their CEO’s visit/demo, and they knew that they had issues with false positives, did they adequately explain this to all of their safety drivers? The latter seems more likely, though it’s obviously just a guess at this point.

I address that a bit above, but my theory is this.

Soft braking is not very disruptive, and can be done at any time you feel a need for caution. Humans do this. Full hard braking is highly disruptive -- you may get rear-ended (though you should know whether somebody is on your tail if you are a robot) and you may cause disruption or, in rare cases, injury inside the vehicle.

As such, you may need a higher threshold of certainty before executing an emergency hard brake. If you do a hard brake in error, there are consequences, but there are not major consequences for a mild brake in error.

This may have led Uber to decide: let's leave hard braking to the humans. If we let the machine do it, and we are braking for ghosts even a few times a day, the ride is a scary and dangerous one. So leave it to the humans. What makes less sense is not having an alert system so the system can aid the human.
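
In outline, a tiered policy of that sort might look something like the sketch below; the confidence thresholds and actions are purely hypothetical, not Uber's actual logic.

    # Illustrative sketch of a tiered braking policy. The thresholds and
    # actions are hypothetical, not taken from Uber's system.
    def braking_response(collision_confidence, time_to_collision_s):
        if collision_confidence > 0.95 and time_to_collision_s < 2.0:
            return "hard brake"          # at this certainty, a false positive is the lesser risk
        if collision_confidence > 0.60:
            return "soft brake + alert"  # a mild brake in error costs little
        if collision_confidence > 0.30:
            return "alert driver"        # sound, screen message, or light brake jab
        return "continue"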

I have a second theory for that. This system (like most) was designed for two parties, a safety driver and software operator. The software operator is, when not watching the road, watching the software console. There, they should see alerts and then yell at the safety driver to watch out. This adds a second of delay to such alerts, but it may be reasonable.

My theory is that they did the switch to one driver prematurely and improperly, because they were rushed. So they did not properly design the system to have only one driver. Frankly, the switch to one driver, while it will happen, should happen only in a mature project that has a very good intervention rate, and with aids to help the driver and monitor the driver's attention, not until the intervention rate is approaching deployment quality.

Doing absolutely nothing when you have "determined that an emergency braking maneuver was needed to mitigate a collision" doesn't make any sense at all. Yes, full hard braking is disruptive, and maybe it shouldn't be done unless you're fairly certain that it's necessary. But 1) It seems that the vehicle was fairly certain that it was necessary. It detected a human and a bicycle (on or beside doesn't really matter) in its lane 25 meters ahead. And 2) Even if you don't want to do a full hard brake, because it's disruptive, why not at least do a mild brake?

The vehicle detected a human and a bicycle in its lane 25 meters ahead and just drove straight into her without even slowing down. The vehicle detected that a collision was imminent, and made no attempt to even reduce the speed of the impact. There's no sense to it whatsoever. Not if you care about human life, anyway.

Uber made so many huge mistakes. The view at 1.3 seconds prior to impact was one of them, but they should have slowed far before that point. Arizona law requires that a motorist leave at least three feet between a motor vehicle and a bicycle when overtaking. So, "rewind the tapes" back to the point where there was about three feet between the purple shaded area and the bicycle. Why didn't the vehicle start slowing down at that point?

From 6 seconds before impact until impact the vehicle slowed only 4 mph.

Obviously the software was absolutely terrible. It was only a matter of time before something like this happened.

We are just guessing here, but it seems that while we might feel the vehicle was certain at 1.3 seconds out, Uber has decided the vehicle is not reliable enough to be given hard braking authority. While I don't like too many parallels between Tesla and systems like Uber's, there is one here. Teslas don't brake for highway crumple zones and the broadsides of trucks, even though they detect them, because they also detect a lot of things they should not brake for, and want to brake for them. Their product would be useless if it hit the brakes for every ghost it saw, and so they design it not to brake for what seem like obvious things, and tell the driver to stay alert.

For an entirely different purpose, Uber has done the same. They can't trust the car with control of the brakes, and so they have told the driver to stay alert and brake. And this time it's a paid driver. But they screwed this up, obviously. The big difference is that Tesla is not trying to make a car that sees everything, and Uber is. And Uber should really have a "scale" of how certain it is that it needs to brake, and leave the low end of the scale to the safety driver, and hit the brake hard for "Oh no, there's somebody in front of the car and we're pretty sure of it."

They did not do this.

"Uber has decided the vehicle is not reliable enough to be given hard braking authority."

1) Then it isn't reliable enough to be used in autonomous mode.
2) That doesn't explain why it didn't brake less-than-hard.

"They can't trust the car with control of the brakes, and so they have told the driver to stay alert and brake."

And they also told the driver to "monitor diagnostic messages that appear on an interface in the center stack of the vehicle dash and tag events of interest for subsequent review."

"And this time it's a paid driver."

Yep. So they're at fault for *both* the car's mistake *and* the driver's mistake. Although, given the new information that the driver was *supposed* to be monitoring diagnostic messages and tagging events, the driver's main mistake was doing what she was told to do by Uber.

I really don't see the comparison to Tesla's system at all. Tesla doesn't, so far as I know, detect a highway crumple zone in front of the car. Tesla at best detects something that might be in front of the car. Uber detected an unknown object, then a vehicle, then a bicycle in front of the car. I'm not sure if the second detection was right or not (a bicycle *is* a vehicle, but maybe NTSB said "vehicle" when the car actually thought it saw a motor vehicle), but the first and third were correct. What was flawed was the logic, which said "slam into it" instead of "hit the brakes."

Just to understand, every car will go through early stages where it is not reliable enough to do a lot of things, and depends on its safety drivers for them. If you want to say such cars are not reliable enough for testing in autonomous mode, then I don't think any of the teams out there could have gotten where they are. Everybody leaves the test track with a car that still can't do a lot of things. Uber's mistake, we now know, was not in having a prototype car with failures, but in not having a good safety driver system to deal with them.

No, she was not a bicycle (by which I think they mean a person on a bicycle). She was a pedestrian. Their perception system got it pretty wrong. But again, you can't expect perception systems not to get things wrong. They do, until your car is close to deployment quality.

As to why it did not slow down -- that is indeed a flaw. It should have slowed because it wasn't understanding. It is one thing to not perceive well, it is worse to not realize you are perceiving badly. And more to the point, it should have understood that even if it had super high confidence it was a bicycle, it should have been a situation where you slow. However, it did not brake hard because (I am presuming) it was treating her like a bicycle heading down the road in the left lane (probably heading for the left turn lane, as bicycles might sometimes do.) You don't brake for those. But it's a pretty bad perception mistake, and should have triggered lots of warnings. To safety drivers who were looking at the road.

I'm not talking about a lot of things. If the car is not reliable enough to be given the authority to hit the brakes instead of slamming into someone that it detects in front of it, it shouldn't be tested in autonomous mode.

Everybody leaves the test track with a car that still can't do a lot of things, but if everybody leaves the test track *and goes into autonomous mode* with a car that can't avoid crashing *nearly at full speed* into something that the sensors detect right in front of it 25 meters away while traveling around 39-43 mph, then everybody is as reckless as Uber. But I doubt that's true.

She was not a bicycle, but the system correctly identified a bicycle (which I don't think means an autonomous bicycle with no human attached to it - the system knew there was a human and a bicycle in the roadway right in front of it). Whether the human was on the bicycle or next to it is not particularly relevant. You don't slam into someone whether she's on the bicycle or next to it.

"As to why it did not slow down -- that is indeed a flaw."

Sounds like we're in agreement, then.

"And more to the point, it should have understood that even if it had super high confidence it was a bicycle, it should have been a situation where you slow."

If there's a bicycle 25 meters in front of the left side of your vehicle which is traveling perpendicular to you from left to right, it should have been a situation where you slam on the brakes, in my opinion. But at least slow down, yeah.

"However, it did not brake hard because (I am presuming) it was treating her like a bicycle heading down the road in the left lane (probably heading for the left turn lane, as bicycles might sometimes do.)"

Your assumption contradicts Figure 2 of the Preliminary Report. It also assumes very dubiously that the vehicle treated her like something which completely contradicted what the LIDAR was reporting. (She was not heading down the road, she was traveling perpendicular to the road.)

"You don't brake for those."

At night? I would brake early, and I'd brake fairly hard once I saw the bicycle moving from left to right, even if I thought it was also moving forward. I think you would too. So that's another flaw, if it is indeed what happened (I doubt it is, for the reasons above).

"But it's a pretty bad perception mistake"

I don't see how the perception mistake was at all relevant. Person riding bike or person walking bike - you don't slam into them. If there was a perception error as to the location or velocity, okay. But there's nothing in the preliminary report suggesting that.

The error was the logic of what to do given what is perceived, not perception.

I am presuming that they did not identify her as a person walking a bicycle, or even a person riding a bicycle crossing the road. If they identified her as either of those, then not slowing is obviously wrong. If it identified her as a cyclist in the left turn lane, which is where she was when detected (there are 2 left turn lanes at this location), there is a logic where you don't slow for that. You don't even slow for somebody stopped in the left turn lane when you are in the right lane and heading for the right turn lane. Of course, very soon they should have tracked her as being in the left lane. And while that should have told their model "she is crossing the road towards you," I am presuming some flaw made that not happen. But even then, while not ideal, I can see a system saying, "I am in the right lane and I am heading for the right turn lane. I see an unknown in the left turn lane. Now I see a vehicle in the left lane." These are not things you slow for. Then it changes to "I see a bicycle in the left lane." Here most people would slow, but I can see a planner not doing so. Finally we get "I see a bicycle encroaching on the right lane!" That triggers an emergency stop. Which Uber does not have the vehicle do.

I'm not sure what you mean by "these are not things you slow for," as many of these things are things that I slow for. Even if it's a motor vehicle, I'm not going to pass a stopped car in the next lane late at night at 39 miles per hour in a 45 mph zone. In fact, I'll probably stop and see what's the matter. At the least I'll slow down in case someone is out of the car or there's something in the roadway that the car is stopping for. Maybe cars (other than Waymo?) are too dumb to "think" that way, but that says to me that cars (other than Waymo?) are a long way from belonging on the road, because this is the way self-driving cars need to behave.

It should have been clear way ahead of time that the unknown object / vehicle / bicycle wasn't moving forward down the road. And it wasn't stopped at a light or anything like that. There wasn't traffic around that it was stopped for (if there was, you should slow down even if your lane is open). This should have made it clear that this was *not* a vehicle getting ready to make a left turn.

The system is complete crap if this is what happened.

You missed the point of my initial comment. It is the same as Tesla because both are L2. The driver is responsible for detecting objects and responding to them. Uber’s L2 feature did not fail.

Now it definitely surprises me to say that, because I argued against it extensively on Reddit. And I think there is still a chance that Uber lied to NTSB about the emergency braking thing so they could claim their tech didn’t fail, and throw the safety driver under the bus. But that’s what it looks like at this point, as much as it pains me to say...

The reason it is very different from the Tesla is because Tesla is making a product designed to be human supervised at all times. Uber is making a product designed to drive without a human, but because it is a prototype, it is not yet complete and needs human supervision.

This is no idle difference, though I can understand how people might view them as similar. In fact, Uber tried this one. When they operated in California, they attempted to do their test operations as a fancy autopilot, claiming that because they had a human supervisor they were just like a Tesla and needed no permit. California DMV said "no way you play that game" and kicked them out of California. It's part of why they ended up in Arizona. Later they came back to California.

I used both of those arguments on Reddit. And I have been arguing against L2 for weeks.

But with this new evidence from the NTSB report, it appears that it was actually L2.

I know you don’t like SAE levels, but here’s my explanation. The difference between L2 and L3-5 is that a L3-5 feature performs all of the DDT including OEDR. A L2 feature does not perform all of the DDT, often because it doesn’t perform some aspect(s) of OEDR.

If Uber assigned responsibility for emergency braking to the safety driver and not the ADS, then the ADS was not in charge of full OEDR and the feature is therefore L2.

We can criticize the design, but any talk of flaws or faults or failures goes out the window with L2, just like for Tesla accidents. Do you disagree?

The levels confuse people into thinking that autopilots and robocars are two levels of the same technology, when I contend this is far from certain, and in fact not true.

There is the robocar, which can drive without a human, and it is what Uber wants to build and Waymo has built. And then there is how you prototype and test it. It is not that I don't see the parallels between the way you test a robocar and how a consumer drives with an autopilot, but they are really different things going on, and their similar roots don't make them the same thing.

The similar root is that we trust humans to make all driving decisions. So you can put a car on the road as long as there is a human attentive to the driving task. But beyond that the goal is different and the practices are different.

I agree with the California DMV when they ruled that the Uber car is not a level 2. I participated in the drafting of those regulations, and so did Anthony (in fact more than me). He was playing a game when he tried this loophole, because if you declared a car with a safety driver to be level 2, there was no point to the regulations at all. At the time there were no Waymo cars ready to run unmanned (they were 6 years away), so the regulations would have regulated nothing under your interpretation (and the one Anthony tried at Uber).

First, this is interesting because as I said, I have spent a lot of time over the past few weeks arguing against L2, so it’s strange to be on the other side of the debate. I used the CA example extensively.

I am not declaring that any AV prototype with a safety driver is L2. I am declaring that the Uber at the time of the accident was L2 because there is evidence that the safety driver was assigned responsibility for emergency braking. Therefore the ADS was not responsible for the full OEDR. Do you disagree?

If the ADS was not responsible for the full OEDR, then it was not responsible for the full DDT. If it was not responsible for the full DDT, then it is L2.

If your robocar vs non-robocar dichotomy is similarly broken down according to “Does it perform the full DDT?” then we can disregard the levels and call it a non-robocar.

The safety driver is always assigned "Handling any situation the system is unable to handle." For the first several years of any robocar in testing, including Waymo's, there are things it doesn't handle. Some are known, some are surprises. It is a very rare car that is actually known to be able to handle the entire driving task. Waymo's is probably the only one. Safety drivers will routinely intervene for strange situations on the road of all sorts, including certain types of construction, humans directing traffic, stalled vehicles or accidents, "that intersection where we want to do it but we regularly get problems," difficult traffic lights, children jaywalking and so on.

The usual policy is to not put the public at risk. If the safety driver does not have full confidence in the vehicle, they do not say "let's have the system drive through this to see if it hits anybody." They may say, "Let's have the system drive through this but be very alert in case it is not detecting or responding correctly to anything." Or they may say, "I'm taking the wheel."

You could call this a more limited operational domain, if you want to get picky. I don't. It is a very dynamic thing, based on human judgment of what the system can handle or what it is suspected it won't handle. Sometimes, like this Uber, it is a specific coding decision -- don't emergency brake.

In that case, however, they really mean emergency brake, as in emergency brake because the system has made a mistake and suddenly realized its mistake. I presume the system is in fact designed to handle situations like a jaywalker and to handle them with normal braking. What it is not designed to do is to suddenly realize it has made a mistake and emergency brake. And also not to emergency brake in those much rarer situations where a problem is suddenly visible that was not visible before, like an accident right in front of you or a person jumping out from behind a van. Those situations the human can't handle either, they are not in the "ODD" of human drivers.

I agree with much of what you said, but with all due respect, it’s not relevant here. I am making a very specific claim, and you are discussing other things that are tangentially related.

I am declaring that the Uber at the time of the accident was L2 because there is evidence that the safety driver was assigned responsibility for emergency braking. Therefore the ADS was not responsible for the full OEDR. Do you disagree?

I disagree for several reasons, which I will reiterate:

  • We disagree about what the levels are. A car such as the Uber is not L2 unless it is, or is planned to be, offered to end users to drive in a supervised, autopilot mode.
  • By your definition of L2, all cars today that I know of are L2, except Waymo's. This makes the definition of limited value, even more limited than I thought it was ... which was pretty limited.
  • It is something else, not really part of the levels. It is a prototype L4 or even L3 if you prefer. (For example, for a very long time at Google, cars were always human-driven in and out of the garage, to and from the place where they would be tested.)
  • I don't really get the point of calling it L2, when it is not any level, but rather a prototype car under testing. The regulations (in many states) explicitly cover prototype cars under testing with different rules. What is gained by declaring it (and all other cars except Waymo) to be L2?

Ah okay. So you agree that the car does not perform full OEDR, so it would be L2 under SAE J3016, but you disagree that SAE J3016 applies to the Uber test vehicle. Is that correct?

SAE J3016 says:

“The levels apply to the driving automation feature(s) that are engaged in any given instance of on-road operation of an equipped vehicle.”

It does not say that such a feature has to be released/sold/deployed. It does not say that prototypes in test are exempt. The only caveat is that it has to be operated on a public road. Since the Uber was being tested on a public road, I would argue that SAE J3016 applies. Do you disagree?

(It would be good to include clarification about prototypes one way or the other in the standard, and I would encourage you to push for that in future revisions)

My understanding of the implication of L2 is that the system manufacturer is not liable; the human driver is. Perhaps this is incorrect?

While J3016 is the current authoritative source on levels, it is not inerrant. I could argue with you about whether testing is distinct from operation, but that's a semantic argument of no great value. Going back to higher-level principles, if we interpret J3016 as saying that any car that can't do the entire driving task is level 2, then J3016 is meaningless, because there is no such car. Even Waymo's car runs into things it does not understand, and it pauses and waits for instruction from operators at HQ. I believe all cars will do that for years to come. And non-Waymo cars are not even that good.

So if J3016 is useless, what is the point in arguing based on it?

I don't believe the levels define any liability at present. Perhaps a court will decide so. The laws in several states govern whether your car counts as an autonomous vehicle and is subject to autonomous vehicle regulations. This could affect liability much more than the levels. Those laws class the Uber as one, and they class the Tesla as not one, for the reason we've discussed.

Even in Arizona, where it's less clear, it's pretty likely that liability will fall almost entirely on Uber in this incident. The safety driver may take some, because even if she was following Uber's orders, she had a duty to follow the vehicle code as well.

I do not believe that J3016 says “any car that can’t do the DDT is L2.” The SAE levels are not about real world performance, nor quality; they are only to communicate design intention.

I believe a prototype can be designed with the intention of owning full OEDR/DDT. But the company knows that it will often fail at first, so they put in a safety driver as an additional failsafe. This would be L3-5, and not L2 according to J3016, even though it failed in its performance of OEDR.

(Quick quiz on this... imagine a company designed a feature to be L5. The vehicle then encounters a road on which it cannot drive. What is the SAE level of the feature on that vehicle? L5 or L4?)

However, if the Uber vehicle was not designed to complete the full OEDR/DDT, because it would not do emergency braking, then it would be L2.

The vehicle is not designed to be unable to do emergency braking! At least as far as I know, it, and all other cars, are definitely designed to do that. However, that functionality is not working at sufficient quality, and so is temporarily not in use. It is definitely there, as the NTSB report says. Uber is, like Waymo, designing a vehicle to give, well, Uber rides. That means it is indeed designed to do the full dynamic driving task; but as a prototype it is not ready.

I’m on mobile so was/am trying to be succinct.

When I say “design intention” I mean that of the ADS, not the physical car. The SAE level simply communicates the design intention of the ADS; the breakdown of responsibility between the system and the human driver.

Everything (including emergency braking) to system? L3-5

Everything except emergency braking to system, and emergency braking to human driver? L2

Wow. It's even worse than I expected, and I expected it to be really bad. Not doing emergency braking is inexcusable, as is expecting one safety driver to do the job of two people.

Just speculating here, but perhaps during the first 5 seconds, when the perception system wasn't really sure what it was seeing, or whether the pedestrian might change direction, it's plausible that it may have decided to defer braking until it was more sure, thinking it still had enough time to stop if necessary. When it got close to 1.3 seconds, it may have thought there was still time for emergency braking, not realizing that the emergency braking function was disabled. Maybe that could even explain why there weren't any warnings to the driver, if the emergency braking function was disabled without other parts of the software knowing that.

While the idea of deferring braking is not impossible, that would be a very poor design in a vehicle which does not have emergency braking capability.

Usually the perception system is building a model into the future for everybody on the road: where they are going, with probabilities for where they might be at various times in the future. A car, for example, is mostly going to keep going in the direction it is going. It can't suddenly jump to the side or backwards; it can only turn. Pedestrians can change direction at any time, and do. Unknown objects at low speed have high uncertainty. Vehicles (which is what they classed her as for a while) do not usually go sideways from the way they are going.
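
A toy version of that kind of forward model might look like the sketch below; the object classes and uncertainty growth rates are my own illustrative numbers, not anything from a real system.

    # Toy forward-prediction sketch: constant-velocity extrapolation plus a
    # positional uncertainty radius that grows with the prediction horizon
    # at a rate depending on the object class. All values are illustrative.
    UNCERTAINTY_M_PER_S = {
        "vehicle": 0.5,      # vehicles mostly keep heading; low lateral uncertainty
        "bicycle": 1.5,
        "pedestrian": 2.5,   # pedestrians can change direction at any moment
        "unknown": 4.0,      # slow unknown objects get the widest envelope
    }

    def predict(position, velocity, obj_class, horizon_s):
        """Return a predicted (x, y) and an uncertainty radius in metres
        after horizon_s seconds, assuming constant velocity."""
        x, y = position
        vx, vy = velocity
        radius = UNCERTAINTY_M_PER_S.get(obj_class, 4.0) * horizon_s
        return (x + vx * horizon_s, y + vy * horizon_s), radius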

I was suggesting that the design may have been based on the availability of emergency braking, but that this function may have been disabled without that fact propagating back through the design. It's common in software development for management to suggest disabling a function that is deemed to be somehow problematic, without realizing that other parts of the software are designed to depend on that function being available. Tracking such implicit dependencies between software modules is difficult.

I get that the point is to fix all the things that went wrong with the car, the protocol, and all. I get that self-driving cars must be held to a higher standard than people-driven cars, and that any car on the road must avoid any kind of death for any kind of reason. At the same time, I want to mention that while I see you treated the issue of the pedestrian's actions very sensitively, I don't believe most other people would be so kind. Basically, in real life, if someone is high as a kite and walks into the middle of a huge road in the dark without looking both ways, nobody is surprised if they get hit by a car. The pedestrian's behavior simply wasn't normal. A person driving a regular car might have logically expected the pedestrian to stop and wait for the car to pass before continuing their slow crossing rather than walking right into the car's path (was the pedestrian playing chicken?). Or a driver of a regular car might have been changing the radio, or looking at a billboard, or glancing at their passenger during a conversation at just the wrong moment. While a dozen things went wrong with the Uber system that have to be fixed, would you agree that if this crash had been between your friend driving a regular car and the same pedestrian, you would not want your friend to be blamed for the death? Or would you still blame your friend for the death of the woman?

I don't think the discussion has anything to do with blame. Uber has already settled with the family. It's about what standards should apply to these newfangled robocars. Do we want robocars on the road that don't brake for pedestrians?

Pedestrians will do what pedestrians will do. We can't do much to regulate them. But robocars can be subjected to a very wide range of regulations. Before allowing them on the roads we can at least ask how they respond to emergency braking situations, although that question was apparently never asked by Arizona authorities.

Oh, it was generally settled, even before learning she had done some meth, that the car had legal right of way at that location, and so from a traffic code standard, the victim was in violation.

However, there is another element of the traffic code that puts a duty of care on all drivers to not hit others, if they can reasonably avoid it, even if the others are clearly not in their right of way. Just because you see a jaywalker, or somebody going the wrong way or anything else like that, it is not open season on hitting them.

In addition, since people do get high and do jaywalk, we wish for our robocars to be able to not hit such people if they can reasonably avoid doing so. It's just a good idea. A lone person on an empty road is pretty much the archetype for somebody you should be able to reasonably avoid hitting.

I was biking this morning when an ambulance went by. Human drivers seem to have no idea what to do when they hear a siren. I wonder what Uber would do. I wonder what others would do today. In particular, the biggest problem seems to be the total lack of predictability of the human drivers on the road. It would be interesting to see videos of test cases for each.
