NTSB Report implies serious fault for Uber in fatality
The NTSB has released its preliminary report on the fatality involving the Uber prototype self-driving car. The NTSB does not attempt to assign blame, but there are some damning facts in the report.
Perception and planning failure
The high-level problem is that the Uber perception system failed. The victim was, as I and many others predicted, detected over 100m away by radar and LIDAR. The report does not directly say when the camera systems detected her. She was first classified as an unknown obstacle (as is common for the first, distant detections of something), then as a vehicle, but then as a bicycle. (She was walking a bicycle.) In my analysis of possible causes, written immediately after the accident, I suggested that mis-classification as a bicycle was perhaps my most likely guess, and another of my top 4 was no communication between the system and the brakes.
It has not been revealed whether the system classified her as a bicycle going down the road (as I speculated). If it had classified her as a bicycle crossing the road, she should have been treated as a major collision risk.
The Uber, we learn, was planning a right turn, as some readers suspected. As such, it would not brake for a bicycle in the lane to its left that was continuing straight. Everybody knows, though, that passing a bicycle on the right when about to make a right turn is a risky move that should be done with great caution.
The investigators say the victim is visible in the camera videos, but say little about when she first became visible or what the visual parts of the perception system did.
Emergency stop failure
The Uber system, after making this error, realized 1.3 seconds out that it should emergency brake. However, it did not do so! It relied on the safety driver, who was not looking. It gave no audible alert, nor apparently even a diagnostic on the screen, to indicate the need for emergency braking.
This is another tragedy. It turns out that at 38mph you go 22m in 1.3 seconds, and 22m is almost exactly the stopping distance for a hard brake from 38mph. In other words, if the car had activated emergency braking then, it would have just barely touched her, or stopped with inches to spare. Swerving could also have helped, though you generally don't swerve and brake at the same time, and robocar developers don't like the risk of swerving.
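The arithmetic above can be checked with a quick sketch. The hard-braking deceleration of 6.5 m/s² (about 0.66 g) is my assumption for a typical hard stop on dry pavement, not a figure from the NTSB report.

```python
# Sanity check: distance covered in 1.3 s at 38 mph vs. stopping distance
# from 38 mph, assuming a hard-braking deceleration of 6.5 m/s^2 (~0.66 g).
MPH_TO_MS = 0.44704

v0 = 38 * MPH_TO_MS        # initial speed in m/s (~17.0 m/s)
t_warning = 1.3            # seconds before impact when the system saw the need to brake
decel = 6.5                # assumed hard-braking deceleration, m/s^2

distance_to_pedestrian = v0 * t_warning   # ground covered in 1.3 s at constant speed
stopping_distance = v0**2 / (2 * decel)   # d = v^2 / (2a)

print(f"distance covered in 1.3 s: {distance_to_pedestrian:.1f} m")  # ~22.1 m
print(f"stopping distance from 38 mph: {stopping_distance:.1f} m")   # ~22.2 m
```

The two distances come out essentially equal, which is why immediate braking at the 1.3-second mark would have been a near miss rather than a fatal impact.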
Uber does not use emergency braking because, it appears, their system has too many false positives, and the vehicle would be impossible to ride in if it braked hard very frequently. Sudden hard braking (where you would need a bumper sticker that says, "I brake for ghosts") has safety consequences as well, particularly if people don't have seatbelts on, are holding hot drinks or laptops, or there is somebody on your tail. However, while it may be acceptable to leave braking decisions to a safety driver, it is very odd that the system would not signal some sort of alert -- a sound, a message on the screen, a flash of lights, or even a light brake jab of the sort that gets anybody's attention.
Of course, if it had alerted the safety driver 1.3 seconds out, that driver would probably have needed 0.5 seconds or more to react, and so would still have hit the pedestrian, but at a much slower speed, possibly not causing death.
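The same kinematics give a rough estimate of that slower impact speed. Again, the 0.5-second reaction time and the 6.5 m/s² deceleration are my illustrative assumptions, not figures from the report.

```python
# Rough sketch: impact speed if a human driver, alerted 1.3 s out, begins
# hard braking after a 0.5 s reaction time. Deceleration of 6.5 m/s^2
# and the reaction time are assumptions, not NTSB figures.
import math

MPH_TO_MS = 0.44704

v0 = 38 * MPH_TO_MS    # initial speed, m/s
t_total = 1.3          # seconds from alert to reaching the pedestrian at constant speed
t_react = 0.5          # assumed driver reaction time, s
decel = 6.5            # assumed hard-braking deceleration, m/s^2

gap = v0 * t_total                       # distance to the pedestrian when the alert fires
braking_distance = gap - v0 * t_react    # distance remaining once the brakes engage

# v^2 = v0^2 - 2*a*d, clamped at zero in case the car stops short
v_impact_sq = max(v0**2 - 2 * decel * braking_distance, 0.0)
v_impact_mph = math.sqrt(v_impact_sq) / MPH_TO_MS

print(f"impact speed: {v_impact_mph:.0f} mph")  # ~24 mph instead of 38
```

Under these assumptions the collision happens at roughly two-thirds of the original speed, which is consistent with the claim that it might not have been fatal.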
In effect, my other guess as to the source of the failure was also true: the car wanted to brake but did not actuate the brakes -- but I certainly didn't guess that this would be because Uber had deliberately disabled that capability.
Safety driver failure
These perception and planning errors are bad, but errors like them are to be expected in prototype cars, to be caught and corrected by the safety drivers. The more damning information concerns why that did not happen.
The safety driver was not looking at her phone. Instead she was looking at a screen in the center console with diagnostic information from the self-drive system. The report says:
"The operator is responsible for monitoring diagnostic messages that appear on an interface in the center stack of the vehicle dash and tagging events of interest for subsequent review."
This is not an appropriate task in a vehicle with only one person on board. Other reports have suggested Uber switched to just one safety driver to speed up testing before an upcoming big demo. They may have made a serious mistake in not properly adjusting their procedures.
The Volvo is equipped with a driver-alertness monitor, which is disabled in self-drive mode. (So are the Volvo's emergency braking system and other such systems.) Possibly the driver monitor had to be disabled because safety drivers were asked to look at their consoles, or simply because all the Volvo systems are turned off.
More to come
In other "not so surprising" news, Uber announced just before release of this report that they would shut down testing in Arizona for good, and focus on their Pittsburgh advanced tech headquarters.
I'll update this article during the day with more details. As predicted, multiple things went wrong to cause this fatality. Some are things you would expect to go wrong and have plans for. (Not directly related is the fact that the pedestrian was high and did not look toward the Uber until the very end. That is tragic for her, but we must expect there will be such pedestrians on the road.)
Some are things that you would not expect to go wrong, however. While Uber and everybody else will learn from these mistakes, there needs to be a deeper investigation into why Uber told safety drivers working alone to look at their consoles, why there was no audible or visual alert about the pending problem, and why the decision was made never to use emergency braking or other avoidance. (Even if false positives made it unsuitable at certain thresholds, once you are very close there comes a point where you will not get many false positives and should use it.) In addition, the lack of driver monitoring is an error, though it's one that many teams make, but presumably will not make in the future.