Almost every thing that went wrong in the Uber fatality is both terrible and expected

Today I'm going to examine how you attain safety in a robocar, and outline a contradiction in the things that went wrong for Uber and their victim. Each thing that went wrong is important and worthy of discussion, but at the same time unimportant. Almost every thing that went wrong is something we want to prevent from going wrong, but it is also something we must expect will go wrong sometimes, and plan for.

In particular, I want to consider how these systems must operate in spite of the fact that people will jaywalk, illegal or not, that car systems will suffer failures, and that safety drivers will sometimes not be looking.

What's new

First, an update on developments.

Uber has said it is cooperating fully, but we certainly haven't heard anything more from them, or from the police. But we should. That's because:

  • Police have indicated that the accident has been referred for criminal investigation, and the NTSB is also present.
  • The family (only a stepdaughter is identified) have retained counsel, and are demanding charges and considering legal action.

A new story in the New York Times is more damning for Uber. There we learn:

  • Uber's performance has been substandard in Arizona. They were needing an intervention every 13 miles of driving, on average. Other top companies like Waymo go many thousands of miles between interventions.
  • Uber just recently switched to having one safety driver instead of two, though it was still using two in the more difficult test situations. Almost all companies use two safety drivers, though Waymo has operations with just one, and quite famously, zero.
  • Uber has safety drivers use a tablet to log interventions and other data, and there are reports of safety drivers doing this while in self-drive mode. Wow.
  • Uber's demos of this car have shown the typical "what the car sees" view to both passengers and the software operator. It shows a good 360 degree view, as you would expect from this sort of sensor suite, including good renderings of pedestrians on the sidewalk. Look at this YouTube example from two years ago.
  • Given that when operating normally, Uber's system has the expected pedestrian detection, it is probably not the case that Uber's car just doesn't handle this fairly basic situation. We need to learn what specifically went wrong.
  • If Uber did have a general pedestrian detection failure at night, there should have been near misses long before this impact. Near misses are viewed as catastrophic by most teams, and get immediate attention, to avoid situations like this fatality.
  • If equipment such as LIDARs or cameras or radars were to fail, this would generally be obvious to the software system, which should normally cause an immediate alert asking the safety driver to take over. In addition, most designs have some redundancies, so they can still function at some lower level with a failed sensor while they wait for the safety driver -- or even attempt to get off the road.
  • The NYT report indicates that new CEO Dara Khosrowshahi had been considering cancelling the whole project, and a big demo for him was pending. There was pressure on to make the demo look good. There is speculation that the drop from 2 safety drivers to 1 was to speed up the rate of testing before the demo.

In time, either in this investigation or via lawsuits, we should see:

  • Logs of raw sensor data from the LIDAR, radar and camera arrays, along with tools to help laypeople interpret this data.
  • If those logs clearly show the victim well in advance of impact, why the vehicle did not react. What sort of failure was it?
  • Why was the safety driver staring at something down and to the right for so long? Why was there only one safety driver on this road at night at 40mph?

It is possible we won't see all of this if there is only a civil suit and it is settled out of court.

That the victim was jaywalking is both important and unimportant

The law seems to be clear that the Uber had the right of way. The victim was tragically unwise to cross there without looking. The vehicle code may find no fault with Uber. In addition, as I will detail later, crosswalk rules exist for a reason, and both human drivers and robocars will treat crosswalks differently from non-crosswalks.

Even so, people will jaywalk, and robocars need to be able to handle that. Nobody can handle somebody leaping quickly off the sidewalk into your lane, but a person crossing 3.5 lanes of open road is something even the most basic cars should be able to handle, and all cars should be able to perceive and stop for a pedestrian standing in their lane on a straight non-freeway road. (More on this in a future article.)

The law says this as well. While the car has right of way, the law still puts a duty on the driver to do what they reasonably can to avoid hitting a jaywalker in the middle of the road.

Even for those who have expressed no sympathy for the victim over her unwise move, it is worth considering that it's possible (though not certain) that the car also would not have stopped for a stalled car in the road, and could have killed its own safety driver, its passengers, or the occupants in or around the stalled car. This depends on the source of the failure. A stopped car is much more visible than a pedestrian in almost every way, and usually even has lights on, but it is so baffling that the car did not react to the pedestrian that I am not going to declare this out of the range of possibility here. There are other potential sources of the error that would not fail on a stopped car -- or even a non-walking person. We need to learn what it was.

This potential shows why it will be so difficult for Uber to get more passengers to ride in their cars for some time to come.

That Uber's system failed to detect the pedestrian is both important and unimportant

We are of course very concerned as to why the system failed. In particular, this sort of detect-and-stop is a very basic level of operation, expected of even the most simple early prototypes, and certainly of a vehicle from a well funded team that's logged a million miles.

At the same time, cars must be expected to have failures, even failures as bad as this. In the early days of robocars, even at the best teams, major system failures happened. I've been in cars that suddenly tried to drive off the road. It happens, and you have to plan for it. The main fallback is the safety driver, though now that the industry is slightly more mature, it is also possible to use simpler automated systems (like ADAS "forward collision warning" and "lanekeeping" tools) to also guard against major failures.

We're going to be very hard on Uber, and with justification, for having such a basic failure. "Spot a pedestrian in front of you and stop" has been moving into the "solved problem" category, particularly if you have a high-end LIDAR. But we should not forget there are lots of other things that can, and do, go wrong that are far from solved, and we must expect them to happen. These are prototypes. They are on the public roads because we know no other way to make them better, to find and solve these problems. We let student drivers on the road (with a driving instructor) and we let newly licensed teens on the road in spite of their recklessness, because it is the only way we know to turn them into the better drivers they become in time.

That the safety driver wasn't looking is both important and unimportant

She clearly was not doing her job. The accident would have been avoided if she had been vigilant. But we must understand that safety drivers will sometimes look away, and miss things, and make mistakes.

That's true for all of us when we drive, with our own life and others at stake. Many of us do crazy things like send texts, but even the most diligent are sometimes not paying enough attention for short periods. We adjust controls, we look at passengers, we look behind us and (as we should) check blindspots. Yet the single largest cause of accidents is "not paying attention." What that really means is that two things went wrong at once -- something bad happened while we were looking somewhere else. For us the probability of an accident is highly related to the product of those two probabilities.

The same is true for robocars with safety drivers. The cars will make mistakes. Sometimes the driver will not catch it. When both happen, an accident is possible. If the total probability of that is within the acceptable range (which is to say, the range for good human drivers) then testing is not putting the public at any extraordinary risk.
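The "two things must go wrong at once" arithmetic can be sketched in a few lines. All of the numbers below are illustrative assumptions of my own, not Uber's or anyone's real figures:

```python
# A minimal sketch of the product-of-probabilities model: an accident
# requires the car to fail AND the safety driver to miss it, assuming the
# two events are independent.

def accident_rate(car_failure_per_mile, driver_miss_prob):
    """Expected accidents per mile when failures and misses are independent."""
    return car_failure_per_mile * driver_miss_prob

# Hypothetical prototype: an intervention needed every 13 miles, with a
# safety driver who misses 1 in 1000 events.
prototype = accident_rate(1 / 13, 1e-3)

# Hypothetical human baseline: order of one crash per 500,000 miles.
human_baseline = 1 / 500_000

print(f"prototype: {prototype:.2e} accidents/mile")
print(f"human:     {human_baseline:.2e} accidents/mile")
print("within acceptable range" if prototype <= human_baseline
      else "needs better safety driving or a better car")
```

Under these made-up numbers, a car that fails every 13 miles needs extraordinarily reliable safety driving to stay within the human range, which is the argument for two sets of eyes on immature systems.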

This means a team should properly have a sense of the capabilities of its car. If it's needing interventions very frequently, as Uber reportedly was, it needs highly reliable safety driving. In most cases, the answer is to have two safety drivers, 2 sets of eyes potentially able to spot problems. Or even 1.3 sets of eyes, because the 2nd operator is, on most teams, including Uber, mostly looking at a screen and only sometimes at the road. Still better than just one pair.

At the same time, since the goal is to get to zero safety drivers, it is not inherently wrong to just have one. There has to be a point where a project graduates to needing only one. Uber's fault is, possibly, graduating far, far too soon.

To top all this off, safety drivers, if the company is not careful, are probably more likely to fatigue and look away from the road than ordinary drivers in their own cars. After all, it actually is safer to do so than in your own car. Tesla autopilot owners are also notoriously bad at this. Perversely, the lower the intervention rate, the more likely it is people will get tempted. Companies have to combat this.

If you're a developer trying out some brand new and untrusted software, you safety drive with great care. You keep your hands near the wheel. Your feet near the pedals. Your eyes on the lookout. You don't do it for very long, and you are "rewarded" by having to do an intervention often enough that you never tire. To consider the opposite extreme, think about driving with adaptive cruise control. You still have to steer, so there's no way you take your eyes off the road even though your feet can probably relax.

Once your system gets to a high level (like Tesla's autopilot in simple situations or Waymo's car) you need to find other ways to maintain that vigilance. Some options include gaze-tracking systems that make sure eyes are on the road. I have also suggested that systems routinely simulate a failure, drifting out of their lane when it is safe to do so, but correcting before it gets dangerous if for some reason the safety driver does not intervene. A safety driver who is grabbing the wheel 3 times an hour and scored on it is much less likely to miss the one time a week they actually have to grab it for real.
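The simulated-failure drill might be sketched like this. This is purely hypothetical; the function names, the drill rate, and the timeout are my own inventions, not any team's real protocol:

```python
# Sketch of a vigilance drill: when conditions are safe, occasionally let the
# car drift slightly, score whether the safety driver corrects it in time,
# and have the system fix the drift itself if the driver misses it.

import random

def run_drift_drill(conditions_safe, driver_corrected_within, log_score,
                    self_correct, max_wait_s=2.0):
    """Run one drill; returns True/False for caught/missed, None if skipped."""
    if not conditions_safe():
        return None                      # never drill in risky conditions
    caught = driver_corrected_within(max_wait_s)
    if not caught:
        self_correct()                   # the system corrects its own drift
    log_score(caught)                    # scoring keeps drivers accountable
    return caught

def should_drill(per_hour=3, ticks_per_hour=3600, rng=random):
    """Randomized scheduling: roughly `per_hour` drills per hour of driving."""
    return rng.random() < per_hour / ticks_per_hour
```

The design point is that a missed drill is harmless by construction (the system corrects itself), while the score gives the team a continuous measurement of driver vigilance rather than waiting for a real failure to reveal it.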

That it didn't brake at all -- that's just not acceptable

While we don't have final confirmation, reports suggest the vehicle did not slow at all. Even if study of the accident reveals a valid reason for not detecting the victim 1.4 seconds out (as needed to fully stop), there are just too many different technologies that are all, independently, able to detect her at a shorter distance, which should have at least triggered some braking and reduced the severity.
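Back-of-envelope braking arithmetic shows why even late detection matters. The speed, deceleration, and timing figures below are assumptions for illustration, not values from the investigation:

```python
# Time and distance to stop at constant deceleration, and how much speed
# even a fraction of a second of hard braking sheds before impact.

def stop_time_and_distance(speed_mps, decel_mps2):
    """Time and distance to brake from speed_mps to zero at constant decel."""
    t = speed_mps / decel_mps2
    d = speed_mps ** 2 / (2 * decel_mps2)
    return t, d

MPH_TO_MPS = 0.44704
v = 40 * MPH_TO_MPS                        # ~17.9 m/s
t, d = stop_time_and_distance(v, 7.0)      # assume hard braking, ~0.7 g
print(f"full stop from 40 mph: {t:.1f} s over {d:.1f} m")

# Even braking for only the last half second reduces impact speed:
v_impact = v - 7.0 * 0.5
print(f"impact speed with 0.5 s of braking: {v_impact / MPH_TO_MPS:.0f} mph")
```

Collision severity scales roughly with the square of impact speed, so any braking at all, triggered by any of the independent sensors, would have mattered.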

The key word is independently. As explained above, failures happen. A proper system is designed to still do the best it can in the event of failures of independent components. Failure of the entire system should be extremely unlikely, because the entire system should not be a monolith. Even if the main perception system of the car fails for some reason (as may have happened here) that should result in alarm bells going off to alert the safety driver, and it should also result in independent safety systems kicking in to fire those alarms or even hit the brakes. The Volvo comes with such a system, but that system was presumably disabled. Where possible, a system like that should be enabled, but used only to beep warnings at the safety driver. There should be a "reptile brain" at the low level of the car which, in the event of complete failure of all high level systems, knows enough to look at raw radar, LIDAR or camera data and sound alarms or trigger braking if the main system can't.
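The "reptile brain" idea might be sketched like this. This is a hypothetical design of my own, not Uber's or anyone's actual architecture; the class name, timeout, and interfaces are invented for illustration:

```python
# A low-level monitor, independent of the main perception stack, that watches
# a heartbeat from the high-level system and raw sensor obstacle flags. If
# the high-level system goes silent, it alarms; if there is also a raw
# obstacle return dead ahead, it brakes.

import time

class LowLevelMonitor:
    HEARTBEAT_TIMEOUT = 0.5  # seconds of silence before we act

    def __init__(self, sound_alarm, apply_brakes):
        self.sound_alarm = sound_alarm      # alert the safety driver
        self.apply_brakes = apply_brakes    # last-resort actuation
        self.last_heartbeat = time.monotonic()

    def heartbeat(self):
        """Called by the main driving system every cycle while healthy."""
        self.last_heartbeat = time.monotonic()

    def tick(self, raw_obstacle_ahead: bool):
        """Called at high frequency, fed only by raw radar/LIDAR returns."""
        silent = time.monotonic() - self.last_heartbeat > self.HEARTBEAT_TIMEOUT
        if silent:
            self.sound_alarm("main system unresponsive -- take over")
        if silent and raw_obstacle_ahead:
            self.apply_brakes()
```

The essential property is that this monitor shares no code path with the main perception system, so a failure there cannot silently take the fallback down with it.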

All the classes of individual failure that happened to Uber could happen to a more sophisticated team in some fashion. In extreme bad luck they could even happen all at once. The system should be designed so that it is very unlikely they will all happen at once, and so that the probability of that is less than the probability of a human having a crash.

More to come

So much to write here, so in the future look for thoughts on:

  • How humans and robocars will act differently when approaching a crosswalk and not approaching one, and why
  • More about how many safety drivers you have
  • What are reasonable "good practices" that any robocar should have, and what are exceptions to them
  • How do we deal with the fact that we probably have to overdrive our sensor range on high speed highways, as humans usually do?
  • More about interventions and how often they happen and why they happen
  • Does this mean the government should step in to stop bad actors like Uber? Or will the existing law (vehicle code, criminal law and tort law) punish them so severely -- possibly with a "death penalty" for their project -- that we can feel it's working?

Comments

I used to think you were an incorrigible robo-car apologist, and I expected your take on this to be 100% victim blaming. Instead, I've really been enjoying learning about the engineering issues involved in this technology, thanks. Keep it up, and please continue to remind your readers that the victim here was just that: a human victim, who will be grieved for and missed for years to come. She was not a target, a bug, or a jay-walker (I refuse to accept that jay walking is a crime, it is simply abdicating your right of way by choosing a route that does not have a crosswalk). She was just trying to live in the world as it exists, the best she could manage, like we all do.

Jeff

I appreciate your concern. It is the normal habit among those who work on these systems to use fairly bland terms to describe things. We usually just say "obstacle" to mean anything on the road you should not hit, but yes, those "obstacles" very often are of course people or vehicles with people. The vocabulary exists because you're not going to say "a pedestrian or cyclist or car or truck or debris or bus" and even "road user" is not very correct either. In LIDAR and radar the word "target" is often used just for historical reasons.

However, she was a jaywalker. I know the debate about the history of that term, and the questions about whether roads should be for cars or for people and the negative association of the term. Nonetheless, there is a difference in how both human drivers, and robocars, will plan for and react to pedestrians on the road and near the road that depends on whether they are at a crosswalk or not. The pedestrians act differently, the cars act differently, and the system is created explicitly to allow these different actions to make sense. You may disagree with the rationale for that, but it exists. I have some upcoming writing in the queue about this. If you have another term, suggest it, but "Person crossing not at a crosswalk" is a bit awkward even if most accurate.

I'd reserve the term "jaywalker" for a person crossing at a location where they are not legally allowed to cross. Maybe there was a specific prohibition against crossing at the location where this woman crossed, but the general rule in Arizona (and many places) is that people are allowed to cross the street outside of crosswalks.

With that said, I don't mean to imply that the woman had the right of way. When crossing outside of crosswalks in Arizona, a pedestrian is supposed to yield to vehicles.

She crossed right at a sign that said "No pedestrians (visual icon of pedestrian with red line through it) -> use crosswalk." It's about as jaywalky as you can be, other than in the way that one reader pointed out, namely that you only get a sign like that in a place that a lot of people want to cross, and in this case there are even (effectively useless) pedestrian paths leading up to the point. The road design failed her too.

The sign was facing the other way. Was there another one on the other side of the road?

(The road design definitely failed her. In fact, if it weren't for the "no pedestrians" sign I'd argue that this was an unmarked crosswalk.)

The signs face out from the median on both sides, so in order to get to the median you in theory would see one, unless you walked off the paths. Who knows, perhaps the family of the deceased could have a case against the city. That road is apparently known for having people who cross it -- a sign like this tends to mean "people keep crossing here and we want them to stop."

She very probably walked or rode past this sign on the other side, crossed the path, and then went past that sign (facing backwards to her) to enter the fatal stretch of road.

Fair enough. I'm not really arguing that this particular woman in this particular case wasn't crossing the road at an ill-advised location. I kind of have a pet peeve against using the term "jaywalker" to refer to anyone who crosses a road at a place other than a crosswalk, though. People hear that and assume that it's illegal, which, in many places, it isn't. I've even had to explain this to a uniformed police officer before, who didn't realize that it's legal where I live to cross the road outside a crosswalk (in most situations).

Maybe autonomous vehicles need to know this too, but I'm not sure it makes much of a difference. Unmarked crosswalks, on the other hand, probably do need to be taken into account by autonomous vehicles, and they're probably rather difficult for an AI to recognize (though many can probably be programmed in ahead of time).

Don't know about Uber, but I believe many teams do have the unmarked crosswalks on their maps. Though I would speculate -- and this is just speculation because I have not directly inquired -- that the systems, just like humans, may treat the 4 different classes differently. Namely marked crosswalk, unmarked crosswalk, legal-to-cross non-crosswalk and illegal to cross non-crosswalk. And finally, physically walled-off road where pedestrians would need to jump a barrier.

Not that I am saying that a car would remotely consider it "fair game" to hit pedestrians in any of these, even the last one. Just that the psychology of pedestrians and human drivers is different in all these situations, and thus how the car reacts should also be different.

One example of the latter case is the freeway. Both humans and robots on the freeway drive at the limits of their sensors. Go stand in the middle of the freeway, especially at night, and you have a very high chance of being killed by human drivers today. In fact, even trying to run across, you may die, because you have bad instincts on timing at 70mph. That's why so many people who pull off to the side on freeways and then try to cross die. Nobody is expecting a ped there.

She crossed right at a sign that said "No pedestrians (visual icon of pedestrian with red line through it) -> use crosswalk."

It's worth pointing out that these types of signs are placed in locations where a high number of pedestrians are crossing. So places where you see these signs are actually places you will have a higher probability of seeing pedestrians, rather than a lower probability.

There are well trodden paths on both sides of the road here, so it is quite clear this is a well used crossing point.

Lastly, there are VERY good reasons why pedestrians cross at places like these, despite various technical legal issues, signs, etc. One of course is that you save many minutes of walking. Many times the detour to go to/from the crosswalk is just as long as the remainder of your entire pedestrian trip. Like imagine you had a 20 mile automobile trip which would be 40 miles if you took your local DOT's recommended route. You'd probably take the 20 mile route whenever you could because the detour is completely unreasonable in proportion to your trip as a whole.

But the PRIMARY reason pedestrians cross at these points is not often understood by drivers or the public officials who put up signs & such: It is FAR FAR SAFER AND EASIER to cross these big divided highways at mid-block locations than at intersections.

At mid-block, you have two lanes to cross--both coming from the same direction, then a refuge island, then two more lanes. This is easy to do if you just wait for breaks in traffic.

At an intersection like this (see the Google aerial photo of the intersection in question), you have 24 lanes (!!!) coming together all at once. Typically when the pedestrian has the "Walk" light, 3-4 of those lanes from 3 different directions can LEGALLY cross the crosswalk. Yes, they are supposed to yield to you, but in practice you must keep a continual eye on them. One of the 3 lanes of traffic that can legally cross your path is coming from BEHIND YOU.

Additionally, you have to keep an eye on an additional ~20 lanes of traffic coming from a minimum of four different directions (again, one directly behind you) which are supposed to either stop due to the traffic signal, or are not supposed to be able to legally swerve across your path. The latter are the vehicles that are crossing the intersection parallel to you during your walk signal, either from the rear or the front. We all know they are not supposed to swerve across a couple of lanes and turn across the crosswalk, but sometimes they do. You must keep your eye out for them.

At the same time, the vehicles on the street you are crossing have a red light and must stop (except for the ones making a right turn--they usually slow a bit and then proceed without worrying much about potential pedestrians). We know people occasionally miss the red stop lights however, and so we have to keep an eye on them as well.

And I forgot to mention--crossing at the intersection is roughly 3-4 times the length of uninterrupted pavement as crossing one side of the divided highway.

In short, crossing at a major intersection like this is extremely complex and also dangerous.

Crossing at mid block is much less complex, far less distance from safety point to safety point, and generally regarded by those who actually do it regularly as both easier and safer.

FWIW I occasionally cross a four-lane divided highway in my city on foot and I always make a point of crossing mid block.

It is, in fact, far easier and safer than crossing at intersections. Even more so in my case because they "forgot" to include any crosswalks or ped crossing signals at the intersections in question.

I am saying this; I had hoped it was clearer. Cars must (and I think most do) presume pedestrians will cross anywhere. However, they will also presume, as humans do, that they will do it with a different style in areas like this. Looking right (and even left, by habit) among other things. Note that as a cyclist, this woman has less excuse about going a bit out of her way, but I understand why she would want to not do that; as you say lots of people cross in these sorts of places.

What she did that was "wrong" was cross at a non-crosswalk without even looking. As though she were at a crosswalk with a walk sign. Hell, even there, I don't cross without looking. I think people are blaming her because of that. We can all imagine that if people crossed without looking, we would hit them, and we don't want to do that. If she crossed the other way without looking, right into the lane of the Uber, then it would take beyond superhuman reactions not to hit her. And we don't want a world where we would have to face that.

I'd like to note that jaywalking is a US term for US rules about cars and pedestrians, and Uber is presumably developing a self-driving system it wants to use in countries around the world where pedestrians can cross without the presumption that they're at fault, as long as they cross responsibly. Whatever the situation in this accident, the Uber system will have to cope with this behaviour in many other places around the world and the development and discussion needs to reflect that.

Indeed, as I point out the rules -- and thus the behaviour of pedestrians and drivers -- are different in different places. It is the job of robocar developers to understand that and integrate it into how their vehicles act.

That said, other than a fenced off freeway, I know of no place where a team would feel their car doesn't need the ability to stop for a pedestrian standing on the street, as long as there was enough time to do so, as was true in this case. And actually, most teams are working on ways to do that even on a fenced off freeway.

> What she did that was "wrong" was cross at a non-crosswalk without even looking.

Maybe. Maybe she looked, and misjudged the distance of the vehicle. Maybe she looked, and mistook the vehicle for something else. Maybe she looked, and figured the vehicle would yield to her even though it had the right of way (after all, most people don't just plow into pedestrians even if they do have the right of way).

I don't say this to defend her actions. She messed up, and paid the ultimate price for it.

"Hell, even [at a crosswalk], I don't cross without looking."

I've learned the hard way (fortunately I was not badly injured because they stopped when I started yelling at them) to never cross in a crosswalk without first making eye contact with the driver and waving.

It’s also safer to cross midblock if the pedestrian signal timing at an intersection assumes you are walking faster than you are...don’t know the signal timing there, but in most places in the US there’s an assumption peds are walking 3.5 or 4 feet per second. A person walking a bike carrying bags may not be able to walk fast enough to complete the crossing at the intersection before the signal changes.

I live in Switzerland, where crossing without a crosswalk is a normal occurrence.

In practice, pedestrians understand that they must act like they do not have the right of way, simply because in a 2000 kg vs 70 kg contest, the vehicle wins. But in fact, vehicles are required to cede passage even without a crosswalk: "3 Sur une chaussée dépourvue de passage pour piétons, le conducteur circulant dans une colonne s'arrêtera au besoin lorsque des piétons ou des utilisateurs d'engins assimilés à des véhicules attendent de pouvoir traverser." - on a road without crosswalks, a driver will stop as needed when pedestrians are waiting to cross. In front of my house, this rule is almost never respected, alas. Perhaps robo-cars programmed for Switzerland will start giving me the rights the law has given me but my fellow citizens rarely do.

Another fun thing I found while looking up laws on Swiss non-jay-walking:

4 Les aveugles non accompagnés bénéficieront toujours de la priorité, lorsqu'en levant leur canne blanche ils indiquent leur intention de traverser la chaussée - Unaccompanied blind people always have the right of way when they raise their white cane, indicating their intention to cross the road. (The position of this article in the law makes it clear this is regardless of the presence or absence of a crosswalk.)

That one will be ridiculously hard to implement, but it is just, and these fellow users of the public space must not lose their rights just because robots can't respect them.

To me the key is the right of way. In Tempe, the victim did not have it. So then the responsibility to avoid her comes not from right of way concerns, but from the basic due care needed in every interaction in order to avoid manslaughter charges. I'm talking about what it means to live in society: you try your best not to kill other people. In this case it seems probable that Uber's "best" is not good enough.

Jeff

Crossing without a crosswalk is normal almost everywhere, not just in La Suisse. And yes, in many places it is not illegal. But generally there are different rules of right of way, and that includes in Arizona. And there is a reason for this, and all drivers (human and robot) and almost all pedestrians act differently because of the different rules.

We have to face it. Whatever sympathies we may have for the victim, she was walking across a wide 45mph road in the dark, where most of us can't imagine doing it without constantly checking to the right for an approaching car. At night you can see those oncoming headlights a long way away, though people can sometimes be very bad at the physics of when a car will come. Even so, as I have said, every car should be able to spot and stop for this class of non-crosswalk pedestrian (if you really can't stand the term jaywalker). But we all know crossing like that is an unusual and irrationally unwise thing to do.

How is it that regulatory oversight has not come up in this conversation at all? How is it that these things are allowed to run around without any sort of testing protocols? Shouldn't the systems be checked at a very basic level through closed-course, controlled-conditions checks before letting them loose on public roads?

And isn't it in the self-driving promoters' best interests to develop some sort of standard testing, fast? SAE has defined the autonomy levels; are they partnering with any regulatory bodies to define the benchmarks and systems to certify and qualify the systems?

If a "self-driving" car at L4 autonomy can't pass basic DMV drivers exams or ace a moose test, it has no business being on public roads.

They do have testing protocols. These protocols are designed by the companies. Mostly they were designed by Waymo, which was on the road long before everybody else and showed how to do it. Companies like Waymo and Zoox have even hired the former top officials at the government safety agencies to help them design their safety protocols. (Yes, those officials were also hired to advise on how to deal with the agencies. They are not legally allowed to lobby those agencies.)

The truth is that the companies, Waymo especially, know a great deal more about how to design good protocols than people at any government agency would. The regulators at NHTSA and DMVs readily admit that. It would be crazy to have the regulators try to design the protocols.

Instead, you want the regulators to watch the companies, and see if they show signs that they can't be trusted to do this right. Up to now, there has been little sign of that. This event may be the first big sign of it from any major company. (Some small startups have needed some reminding, but rarely.)

Of course, if it is decided that the companies (or at least Uber) can't be trusted, we will have the hard question about how to rein that in in a fast-developing world without causing serious delays to deployment of what most agree will be a life saving, and not life taking, technology when it is more mature.

The past history of auto regulation has been almost entirely this way. All the major safety features of cars, such as seatbelts and airbags, along with all the ADAS technologies (like ALB, Stability Control, Forward Collision Warning, lanekeeping, etc.) were all developed by industry, then deployed on the road, sometimes for decades before regulators started looking at them, and when they did, it was mostly to say, "Wow, this new thing is so good let's force all cars to have it."

The 28 core behavioral competency set was developed at UC Berkeley. AFAIK, there is no standardized test suite and certification for meeting these competencies, in an integrated environment.

Waymo claims to have expanded on this, but the testing methodology isn't in any way standardized or public, and isn't open to any scrutiny. This needs to be implemented, fast.

It certainly looks like the car did not perform as you and I would have expected this technology to perform; the safety driver was not safety driving, whether by design or negligence; and the victim, wearing black, did a pretty risky thing, whatever moral or legal justification one may feel she had to attempt it.

But I am still wondering: why does the lane line shift to the left at the end of the video, just before impact? Was the car setting up for a right turn already, oblivious to the victim? Was the victim surprised by the car's unnatural (though legally correct) speed and final small turn into her? I think the victim may have thought she had crossed far enough that a small left swerve by a slow-moving car (which, she assumed, must surely have seen her) was certain to happen. That is, of course, a terrible assumption, but having the car turn into her may have been quite a shock.

My feeling is that had the car been oblivious to the victim and not setting up for a right turn, it might have just hit her bike. Deeper video analysis no doubt will be done, but I'd like to see the right swerve at the end explained.

I don't know what the car's plan was but I don't see any shift in the video. The background lights do not move as they would if the vehicle were turning in a significant way. (The road is turning very slightly.) The right turn lane has not begun so it would not turn at this point, though it might slow a bit.

Is there any other confirmation of a right swerve?

I made a frame by frame illustration of what I'm talking about. This car tracks perfectly straight for 70 frames and then in 10 frames it's 10-20% of the lane width to the right - into the victim. Check it out.

http://xed.ch/blog/2018/i/0326-57f9-drift.jpg
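As a rough back-of-envelope check on that claim (all the numbers here are my assumptions: a ~30 fps dashcam clip, a ~3.5 m lane, and the middle of the 10-20% range), the implied lateral motion works out to roughly walking speed toward the victim:

```python
# Rough arithmetic on the drift claim: assumed 30 fps video, 3.5 m lane
# width, and a 15% lane-width shift over 10 frames (middle of 10-20%).
fps = 30.0
lane_width_m = 3.5
frames = 10
fraction = 0.15

lateral_shift_m = fraction * lane_width_m       # about half a meter
duration_s = frames / fps                       # about a third of a second
lateral_speed_mps = lateral_shift_m / duration_s
print(f"{lateral_shift_m:.2f} m in {duration_s:.2f} s = "
      f"{lateral_speed_mps:.2f} m/s sideways")
```

Roughly 1.6 m/s of sideways motion is a lot for a car tracking a lane; if the real numbers are anywhere near these assumptions, the shift should be unmistakable in the logs.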

The road is curving to the right (not just for the right turn) at that point, after curving to the left. However, Uber's logs will reveal what route it was planning to follow. It should not affect the situation, because the impact was just before the opening of the right turn lane. If you posit that it was effectively saying, "I plan to turn right, so I don't need to worry about things in my lane after that point," then I won't say that is impossible, but it would be a serious error.

Indeed the road does start a right curve just beyond the crash site (which was just before the dotted bike danger line) after quite a run of perfectly straight road (from the overpass). If a car is tracking the center of the lane perfectly, an approaching right turn will cut across video frame pixels to the right. This means that the lane lines of a right turn will be right of the kind of reference lines I show (where straight lane lines could be expected during correct operation). But we see the opposite.

This car definitely veered to the right in the final few frames before the collision.

Was it just bsplining a nice arc to a lane change to make the upcoming right turn? That seems too early to me for that and far from the perfection I would have expected in correct path planning. Was it reacting to the victim? Possibly using stale collision data that wrongly indicated she was still to the left and best avoided by going right? In that case, was she not being tracked? Also in that case we would (hopefully!) have seen both lane lines flare to the outside of my reference lines as the car stopped hard and the nose dived. The car just did a gentle shift to the right away from the perfect lane tracking it had been following.

All I can say is that if the car saw and reacted to the victim, it did so terribly. If it was oblivious to the victim and that's just how its path planning normally works, well, if I were the safety driver I'd be pretty nervous ever to look away!

You can clearly see the blinker (turn indicator) reflect off the signs on the side of the street. The car was preparing for a turn, almost certainly a right turn.

Good catch. Once these things are pointed out, they are clear, both the swerve and the turn signal. This car is setting up for a right turn and turned into the victim. Whether that was enough to be relevant is another question, but it sure didn't help. It looks like the obstacle detection system just utterly failed to detect her. It's a shame, because this is where robocars should outperform humans.

I didn't see the swerve either, but keep in mind that the "safety driver" probably took over in the moments before impact.

Analysis of not just the video, but when exactly the vehicle left autonomous mode and what happened then, would be useful. One thing I wondered after seeing the video of the "safety driver" is what exactly happens when the human driver takes over. Could it possibly explain part of the reason why the vehicle didn't brake, if the "safety driver" moved the wheel first, and didn't immediately brake?

We need to see all the data.

I don't know Uber's exact design, but normally if the driver touches any of the controls they are in complete control from that instant. If they touch the wheel, the throttle is immediately released, but they need to apply the brake if they want actual braking. Most cars also have the famous "big red button" which kills the system in the event that it won't yield the controls in this fashion. I have not heard of it ever being needed to be pressed.
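That arbitration logic can be sketched in a few lines. This is a hypothetical illustration of the convention described above, not Uber's actual design; the function and field names are mine:

```python
# Hypothetical sketch of the takeover convention described above (not any
# vendor's real code). Touching any control disengages autonomy from that
# instant; touching the wheel cuts throttle but does NOT apply the brake.

from dataclasses import dataclass

@dataclass
class ControlState:
    wheel_touched: bool = False
    brake_pedal: float = 0.0      # 0.0 .. 1.0 driver brake input
    big_red_button: bool = False  # hard kill switch

def resolve_controls(autonomous, planned_throttle, planned_brake, driver):
    """Return (autonomous, throttle, brake) after driver arbitration."""
    if driver.big_red_button:
        # Hard kill: the system yields everything immediately.
        return False, 0.0, driver.brake_pedal
    if driver.wheel_touched or driver.brake_pedal > 0.0:
        # Any driver input disengages autonomy. Throttle is released,
        # but braking only happens if the driver presses the pedal.
        return False, 0.0, driver.brake_pedal
    return autonomous, planned_throttle, planned_brake

# A driver who grabs the wheel but doesn't brake gets no braking:
print(resolve_controls(True, 0.3, 0.0, ControlState(wheel_touched=True)))
# -> (False, 0.0, 0.0)
```

Note the consequence in the last line: a takeover that grabs the wheel without hitting the brake leaves the car coasting, which is one way a late human intervention could fail to slow the vehicle.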

Yes, a really big question is why it did not drive around the pedestrian, as she was an obstruction at the very least. Maybe the system is too mindlessly following that road, that lane, to realize it has the freedom to use both lanes and the shoulders to drive around impediments? Braking is a temporary solution, after all.

Which is why any amount of victim-blaming is inappropriate.
And why Uber will get nuked if this goes to court.

I would not say "could as easily" as the probability of a young child crossing this road at that time of night is quite low, though not zero. So "could have been" but not "could as easily have been." This is one of the reasons we don't allow young children to be out on their own in any environment like this, and also why we are all taught from a young age to look both ways before crossing the street.

1. There is no need to posit something as visible as a stalled car with flashing lights. There could have been an animal in the road, or fallen material from a truck, or debris from a recent accident, and the robocar would almost certainly have plowed into it at full speed, resulting in a crash or a dangerous loss of control and possible injury or death to its passengers, that would have been avoidable by a human driver. I think we know enough at this point to be quite confident that the Uber robocar's driving was completely unacceptable.

2. Anyone who is testing robocars should be measuring the alertness of their safety drivers, both to collect overall metrics that establish the adequacy of the safety driving protocol in use, and in real time to alert the safety driver or stop the car when the safety driver is not paying attention. To not have these safeguards in place is unconscionable.

3. No robocar should ever overdrive the effective range of its combined sensor and prediction system, period. If this means the car can't drive fast enough to be safe on the freeway, then it needs better sensors before it can go on the freeway.
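Point 3 can be made concrete with a small stopping-distance sketch. All the numbers below are assumptions for illustration (firm braking of ~6 m/s² on dry pavement, ~0.5 s of detection and actuation latency), not measured values for any real vehicle:

```python
# Illustrative check of point 3: does the stopping distance fit inside
# the effective perception range? All parameter values are assumptions.

def stopping_distance(speed_mps, decel=6.0, latency=0.5):
    """Reaction-time travel plus braking travel, in meters.
    decel ~6 m/s^2 is firm braking on dry pavement; latency covers
    detection, classification and actuation delay."""
    return speed_mps * latency + speed_mps**2 / (2 * decel)

def max_safe_speed(perception_range_m, decel=6.0, latency=0.5):
    """Largest speed (0.1 m/s resolution) whose stopping distance
    still fits inside the perception range; simple search for the sketch."""
    v = 0.0
    while stopping_distance(v + 0.1, decel, latency) <= perception_range_m:
        v += 0.1
    return v

# 17.9 m/s is about 40 mph, the Uber's reported speed:
print(round(stopping_distance(17.9), 1), "m needed to stop")
# -> 35.7 m needed to stop
```

Under these assumptions a car at 40 mph needs roughly 36 m of assured perception just to stop for a stationary obstacle; anything less and it is, by point 3's definition, overdriving its sensors.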

We will learn why it made the error. We do know it normally has functioning pedestrian detection. The potential problems run a gamut, from those where even a stopped car would not be seen, to those where only this particular situation would not be seen. For example -- and this is pure speculation based on what I know about these things -- typically the perception system is locating all "obstacles" in the road, tracking their speed, and making predictions about where they are going. Because she was moving, the tracking system should have modeled her as proceeding into the car's lane with a high probability of impact. This prediction might be where there was a failure. Again, we don't know; there are so many things that could have gone wrong. However, on top of all of them, what failed were the systems that should have been there to correct the failure of these more advanced systems, including of course the safety driver.
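To make the track-and-predict idea concrete, here is a toy version of that step. This is my own illustration of the general technique (a constant-velocity extrapolation), not any company's code, and all coordinates and names are invented:

```python
# Toy version of the track-and-predict step: a constant-velocity model
# extrapolates a tracked pedestrian and checks for lane incursion.
# Purely illustrative; real systems use far richer motion models.

def predict_positions(pos, vel, steps=50, dt=0.1):
    """Extrapolate an (x, y) position forward assuming constant velocity."""
    return [(pos[0] + vel[0] * dt * i, pos[1] + vel[1] * dt * i)
            for i in range(steps + 1)]

def crosses_lane(track, lane_left, lane_right):
    """True if any predicted x coordinate falls inside the lane bounds."""
    return any(lane_left <= x <= lane_right for x, _ in track)

# A pedestrian 5 m left of the lane, walking right at 1.4 m/s,
# predicted over 5 seconds, is flagged as entering the lane:
track = predict_positions(pos=(-5.0, 30.0), vel=(1.4, 0.0))
print(crosses_lane(track, lane_left=0.0, lane_right=3.5))  # -> True
```

Even this crude model flags a walking pedestrian several seconds before she reaches the lane, which is why a failure in the real prediction step (rather than raw detection) would be so notable.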

I think most people agree that point 2 is somewhere that improvement is needed, certainly with Uber. I do know of teams that are doing tracking of alertness of safety drivers, but not all do.

As for point 3, the car was not overdriving the range of its sensors when things were operating properly. In fact, since it did not brake at all, this was probably not a problem caused by sensor range.

Point 3 is responding to one of the bullet points at the end about future article topics. I was surprised to see that overdriving your sensor range is something you see as within the realm of consideration; or have I misunderstood you? Would you agree with point 3?

People overdrive their headlights all the time on the freeway. Cars can go on the freeway without sensors that can see 200m by taking the strategy of safely following other cars (which we all do, even though we can't see through them to what's ahead) and, if there are no other cars on the road, reducing speed so that they are not overdriving their sensors. However, when they are alone on the freeway, they also have the option of safely swerving around anything which might appear in any one lane. That leaves the issue of something like a multi-car pileup blocking all the lanes. As we know, the reason these multi-car pileups exist is that all the human drivers are also going too fast.

The question of "Can robots take the same risks that almost all humans take?" is not a settled one; it has arguments on both sides. I have made the argument that they should be allowed to speed (but only when they can handle that speed safely in a way that humans can't be trusted to).

Almost all humans drive on freeways too fast to avoid a pedestrian who walks across the freeway. Walking across a freeway is probably the most assured way for a pedestrian to die, which is why they are physically barred from it. (Sometimes people jump the fences, or they get out of stalled cars and try to cross.) Society seems to have decided, for better or worse, to make freeways a super dangerous place for pedestrians in the interests of greater freeway speed.

As an independent scholar in artificial intelligence (AI), I have watched the rise of self-driven autonomous vehicles (AV) with a sense of dread and reluctance. About a year ago, in spring of 2017, a man named Brown was killed in Texas (I believe) when his Tesla AV accidentally and fatally drove under a semi-trailer truck that was crossing the highway. The light-colored semi was apparently indistinguishable from the sky, to the Tesla AV software. Mr. Brown had been an enthusiastic early-adopter of robo-car technology. But Elon Musk, founder of Tesla, did not bother to tell shareholders and the public about the first robo-car fatality for about six weeks, until a U.S. government announcement forced Mr. Musk's hand and he had to acknowledge the fatality. Since that cover-up, I have very low respect for Musk/Tesla and Musk/OpenAI. Now with the Uber fatality, self-driving cars face a difficult future because moneyed interests will try mightily to force AI-driven cars down our collective throats. IMHO (in my humble opinion) we should wait for genuine, concept-based True AI before we let artificial Minds drive cars on public highways -- and I have created three such incipient AI Minds in Forth for robots; in Perl for webservers; and in JavaScript for tutorial AI. I risk professional ostracism for coming out against current robo-car technology, but I believe in speaking my mind. -Arthur

It was in Florida, and a Tesla is not a self driving car, and that mistake in knowing what it was is what led to that fatality.

As for "true AI," whatever that is, it's some time coming. Most believe you can make a vehicle that drives more safely than people without it. If they are right -- that's what they are trying to prove right now -- then a lot of people will die if you delay it waiting for your true AI.

There is an article here:
http://www.azfamily.com/story/37791825/investigators-recreate-fatal-crash-involving-self-driving-uber-car
claiming that "The Tempe Police Department, the NTSB and the National Highway Safety Administration were recreating the crash", using the exact same vehicle, and even the exact same bicycle. But instead of varying the conditions to determine what factors caused the automatics to fail to detect or avoid the pedestrian, they are testing human reaction time and braking distance to see "who's at fault." I don't tend to believe everything I read in news reports, but this seems particularly odd. Normally the NTSB doesn't try to determine who's at fault--instead they try to identify contributing factors so they can issue safety recommendations. Since the brakes weren't applied, it really doesn't matter what the stopping distance is, unless you're trying to pin the blame on the pedestrian for a failure of the automatics by saying it wasn't possible to stop in time.

On the roads, the vehicle code remains king, so they will want to know everything. The world has not decided what standards to hold robocars to, but it will be useful in the discussion to include findings such as "A reasonable human would have been able to easily stop" or that they would not have been. Or that "a reasonable human would never have entered the street with an SUV coming towards them" and so on. The video and sensor recordings will also reveal what other cars were present -- in the video we see one ahead of the Uber, but don't know about behind it.

I don't agree that the NTSB will care at all about the vehicle code. They care about making safety recommendations. Their safety recommendations almost certainly aren't going to be that pedestrians should be more careful. Instead they are likely going to be related to improving the automatics so that pedestrians are reliably detected and avoided. The NTSB is certainly familiar with aircraft automatics that are more advanced than human pilots. That doesn't have any bearing on whether the automatics can be improved. For example, if an aircraft crashes attempting an automated landing in thick fog, nobody asks whether a human could have done a better job. Instead, they ask how to improve the automatics.

When the NTSB studies aviation incidents, they certainly consider the rules of the air, and the rules of airport operations.

NTSB does not consider fault. So they don't consider rules related to who is at fault. Their findings can't be admitted in legal proceedings, because they aren't concerned with fault. They are only interested in making safety recommendations. They have no jurisdiction over pedestrians, so won't be making any recommendations to pedestrians to be more careful around self-driving cars. If they find ways to improve the safety of the automatics, they will make such recommendations, regardless of what the vehicle code says about who has the right of way, and regardless of irrelevant hypotheticals, such as whether a human driver could have done a better job. The NTSB's inquiry does not end if automatics are found to be as safe as a human. In many cases, such as LIDAR at night, automatics are expected to detect obstacles better than human vision.

This talk of whether the car or pedestrian is at fault ignores whether the driver or the city is at fault. Leaving the driver out of it for a minute, I feel certain that a lawsuit will find the city at least partially at fault. If you know people are crossing in a dangerous, even lethal location, and all you did was put up a sign when you should have prevented the situation, you allowed the fatality.

So, you notice that people are crossing unsafely. First, remove the crosswalk-type brick; that's a no-brainer. Second, put up a barrier: maybe a fence, a hedge, a net or a wall. Third, make a good place to cross that solves the problem: a pedestrian bridge, a tunnel, or rebuild the road into a bridge. People want to cross there, perhaps because of an inviting location on one side (like a beach or park) and a big source of pedestrians on the other (like a parking lot or college); I don't know the details. So if this is the city's fault, because it knew this was a problem spot, why should Uber pay the price?

Orthogonal to that, I have a question. If nobody cared enough about this woman to give a darn about her, and then a non-blood relation suddenly comes out of the woodwork as soon as there's a multimillion-dollar payoff at play (I'm not saying this is what's happening, but asking about the scenario I'm describing, as opposed to what actually happened), does that person deserve the payoff? What if society wants to punish the wrongdoer with a huge settlement so the cause of the fatality is corrected, but the person who brought the suit really isn't deserving of the money? What is the ethical solution?

Cities don't have infinite money. And is the crossing that dangerous for people who bother to look towards oncoming traffic?

As to the damages, that will be up to the jury. Yes, I would guess that the defendant would argue that the plaintiff was simply not wronged as much as they claim. The lawyers will try to argue both sides.
