Waymo has a crash in Chandler, but is not at fault.
A crash today involving a Waymo van is getting attention because it came in the same area just a short time after the Uber fatality, but Waymo will not be assigned fault -- the driver of the car that hit the Waymo van veered out of his lane into oncoming traffic because of somebody else who was encroaching on the intersection. There were only minor injuries, but the impact was higher energy than Waymo's prior crashes.
Update: Waymo says the vehicle was not in autonomous driving mode.
Waymo has released the video view from their car to make it very clear the accident was not their fault, even if the car had been driving itself.
This does, however, cause people to ask, "Could the Waymo car have done more to avoid being hit?" This question was also asked recently when a stationary Navya shuttle was hit by a truck that was backing up. In that case the Navya could have backed away to prevent being hit, which a human probably would have done.
It is my hope that at some point in the future, robocars will start to gain superhuman abilities, not just to drive safely but to avoid being hit by reckless drivers. That day, however, is not any time soon. The truth is that people forget how hard the problem of building a robocar is, and this particular task is not very high on priority lists. Once the higher items have been well resolved, this will start to happen.
One reason teams will be reluctant to solve this is the fear of making things worse. From a liability standpoint, just sitting there is the low-risk choice. It's hard to blame a car for just sitting there if it had the right to do so. If a car starts backing up, or swerving, or zooming away, you move into less tested and less charted territory. In some cases, like the Navya event, the answer probably was relatively simple, and we might see some efforts made. We don't have much information on the geometry of the Chandler event, but it's easy to imagine situations like this where trying to move could make things worse. It might change the angle of the crash and alter the damage or injuries. Other cars might also suddenly move, causing other problems. The apparently safe path out might involve leaving your right-of-way, or doing things you've very rarely tested because they just don't happen very often.
This is just the sort of thing to test in simulators, as I was discussing earlier today.
Even when it all looks good, it can still go bad. Imagine you're stopped at a light in the #2 position and you see somebody barreling up behind you, not stopping in time. You could elect to veer out of the lane if there's room, but that means the car in the #1 position is hit hard, without you to buffer the crash. This pushes them into the intersection where something far worse could happen -- it might change a fender bender into a fatality, and it's very hard to predict.
To really get out of accidents, you have to understand the situation in all directions, have reasonable models of what other drivers will do to avoid the accident, and also be confident in your skills in very rare driving situations. This is one of the very few situations where V2V communication could actually do something, but it's such a rare situation that it's not worth doing V2V just for this.
Humans get a bit of a pass in what they do when somebody is heading for them. We know humans have limited ability to do instant physics and strategy in fractions of a second. Machines are not so limited, so we can be more bothered when they don't do it.
One situation where the calculations might be easier is a multi-car pileup. If you see a car braking hard in front of you while somebody else is tailgating, and you brake hard, you will be rear-ended. You could time your braking to be the absolute least you can get away with, just kissing the bumper of the car in front, to minimize the impact from the rear. That's fairly sure to make things better, but I am sure one can think of times when it could go wrong.
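To make that concrete, here is a minimal sketch of the "least braking that still avoids the car ahead" calculation, under idealized assumptions I am adding for illustration: point masses, constant decelerations, no reaction time, and both cars starting at the same speed. The function name and numbers are hypothetical.

```python
def least_safe_braking(v, gap, a_front):
    """
    Minimal constant deceleration (m/s^2) that just avoids hitting the
    car ahead, given both cars start at speed v (m/s), separated by
    'gap' meters, and the lead car brakes at a_front (m/s^2).

    Idealized point-mass model: our stopping distance v^2/(2a) must
    not exceed gap + the lead car's stopping distance v^2/(2*a_front).
    """
    lead_stop = v * v / (2.0 * a_front)        # lead car's stopping distance
    return v * v / (2.0 * (gap + lead_stop))   # solve v^2/(2a) = gap + lead_stop

# Example: at 25 m/s with a 30 m gap and the lead car braking at 6 m/s^2,
# about 3.8 m/s^2 suffices -- noticeably gentler than matching the lead
# car's braking, which is what softens the hit from the tailgater behind.
```

A real system would add a safety margin and recompute continuously as it observes what the lead car actually does.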
We could also imagine, in this case, the car being truly superhuman by predicting not the impact of the silver car, but the running of the red light by the black one. You do want to notice this (and not go through the intersection until it's clear) but you could imagine a very smart car noting that the silver car may be forced to swerve and might enter your lane. You could take pre-evasive action, including speeding up (and then hard braking for the car running the red) or other steps. This is very speculative, and the uncertainty cones for all vehicles are large, but one could imagine thinking about it. Waymo will probably put this accident into their simulator, and then could try some things out -- it would impress the public a lot. If the road were more crowded it would be harder to do such tricks safely.
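One crude way to reason about those uncertainty cones is a conservative reachable set: assume the other car can accelerate or steer up to some physical limit in any direction, and ask whether any position it could reach within t seconds overlaps your lane. The geometry and limits below are my own assumptions for illustration, not anything Waymo has described:

```python
def may_enter_lane(lat_offset, lat_speed, t, a_max, lane_half_width):
    """
    Conservative check: could a vehicle currently lat_offset meters from
    our lane's centerline, drifting toward it at lat_speed m/s, reach
    our lane within t seconds if it can apply up to a_max (m/s^2) of
    acceleration in any direction?

    Its reachable positions at time t form a disc around the constant-
    velocity prediction, with radius 0.5 * a_max * t**2.
    """
    predicted = lat_offset - lat_speed * t   # constant-velocity prediction
    radius = 0.5 * a_max * t * t             # worst-case deviation from it
    return abs(predicted) <= radius + lane_half_width

# Example: a car one lane over (3.5 m), not yet drifting, is
# conservatively able to reach our lane within about 0.7 s if it can
# pull 8 m/s^2 laterally.
```

The disc grows with the square of the horizon, so beyond a second or two nearly everything is "reachable," which is exactly why pre-evasive action is so speculative.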
Because robocars, even when they can't avoid an accident, will know it is coming, this offers a few other special options. They could sound alerts to passengers and tighten their seatbelts. They could even release some airbags in advance of impact instead of just after it, if this would help. They could even release airbags outside the vehicle to reduce the crash impact. In an extreme case, we might imagine a car that can quickly rotate its passenger compartment, so that the passenger is facing away from the impact and pulled back in her seat, allowing far more force without injury and avoiding the need for an airbag. That would be a pretty fancy passenger compartment, and would only work if impact is coming from only one direction.
Comments
cxed
Sat, 2018-05-05 17:08
Moot in the long term
How a robocar responds to another car doing something extremely stupid will be less of a worry than, for example, your tsunami escape logic, since once humans are out of the driver's seat, the cars shouldn't be doing stupid things in the first place. That's the whole premise of the technology. But there will be wildlife interactions and meteors and, yes, tsunamis.
brad
Sat, 2018-05-05 17:26
Some time
It will be some time before humans become a minority on the road, at least in some areas, and even the robots will make mistakes from time to time.
Anonymous
Sun, 2018-05-06 08:35
A human runs the red light
A human runs the red light while driving at a high rate of speed, swerves to avoid a collision with another vehicle, loses control of the vehicle, and slams into a Waymo car.
People respond: What could the Waymo have done to prevent this accident?
Unbelievable.
brad
Sun, 2018-05-06 14:24
Not that unbelievable
I do agree it would be a mistake to, in the early days of the technology, suggest that robocar developers would have a duty to do superhuman things (or even human things) to avoid accidents caused by others. However, it is an opportunity for road safety that will eventually be worthy of exploration. Everybody building robocars hopes to improve road safety, and they will do it in any way they can. The key realization is that asking them to do everything at once slows down the adoption of key technologies for road safety. This is similar to the trolley-problem-problem. Some day, developers will work on solutions to that extremely rare situation, but because of morbid fascination, some people think that should happen sooner, rather than later.
Anonymous
Sun, 2018-05-06 16:57
The point is people are so
The point is people are so used to people driving recklessly and causing accidents that when somebody runs the red light while driving at a high rate of speed, swerves to avoid a collision with another vehicle, loses control of the vehicle, and slams into a Waymo car, people don't think, "This accident would never have happened if this person hadn't been manually driving and had been in an AV." Instead, people say, "This accident would never have happened if the Waymo car had gotten out of the way of that reckless human driver."
People are getting this ass-backwards.
Dan
Mon, 2018-05-07 14:40
The point is people are so
Any robocar that assumes all other drivers will always drive perfectly and never do anything careless, reckless, or illegal, requiring evasive action on the part of the robocar to avoid a collision, is a dangerous robocar, even if it would never be found legally at fault for an accident. Defensive driving, looking ahead and expecting the unexpected, is an important safety technique that all drivers should use, including robocars. It's only a matter of degree as to what sorts of lane incursions a robocar should anticipate and what sorts of evasive actions it should take.
brad
Mon, 2018-05-07 15:17
Defensive driving
Generally, robocars are indeed programmed for defensive driving. That almost always means slowing down and exercising caution in risky situations, not veering out of your right-of-way. Because robots don't panic, they could do the latter, and I think they will. But not on day one.
Monocarp
Mon, 2018-05-07 18:50
"Any robocar that assumes all
"Any robocar that assumes all other drivers will always drive perfectly and never do anything careless, reckless, or illegal, requiring evasive action on the part of the robocar to avoid a collision, is a dangerous robocar, even if it would never be found legally at fault for an accident. "
- This is a strawman statement.
"Defensive driving, looking ahead and expecting the unexpected, is an important safety technique that all drivers should use, including robocars."
- And when drivers do not do any of the above? You know what happens. The point of AVs is to do all of the above, all the time. That alone makes them better drivers than humans.
"It's only a matter of degree as to what sorts of lane incursions a robocar should anticipate and what sorts of evasive actions it should take."
- Evasive actions sometimes cause accidents -- this accident, for example.
Dan
Mon, 2018-05-07 19:09
Evasive actions sometimes cause accidents
Yes, evasive actions sometimes cause accidents, but other times they prevent accidents. The trick is to figure out which are which. Robocars should generally be pretty good at that, since they don't panic, can make split-second decisions, and maintain good situational awareness of their relationship to other vehicles. They know instantly whether changing lanes is likely to cause an accident or not.
brad
Mon, 2018-05-07 20:48
But the key point
The key point is how the law treats you. If you were just sitting there, and somebody hits you, you're legally in the clear. On the other hand, if you make a deliberate decision to evade, and that decision is wrong and causes another accident, this is not so clear. There are states where, I have read, humans in this situation are in the clear -- a human who panicked because of a situation like this and did the wrong thing would not be blamed; the initial cause of the chain reaction would be blamed. Would a robot get the same treatment? They don't panic. If the law said, "If you are about to be hit and you use your best available information to avoid that, and it causes another accident, it's on the first car, not you," then I think developers would be much more willing to experiment with that. The law had better say that, because what if you swerve to avoid an accident because 99% of the time it will work, but this is the 1% where you hit a schoolbus?
Dan
Mon, 2018-05-07 23:43
But the key point
I don't agree that "The key point is how the law treats you." I wouldn't want to ride in an autocar that is programmed not to attempt to swerve in emergencies because it is afraid of legal consequences in the 1% case where it might hit a schoolbus. I don't necessarily expect the car to do stunts like driving on sidewalks, but I would expect it to do things it normally knows how to do, like changing lanes into an unoccupied lane. We have seen too many cases of robocars not swerving for pedestrians, and not swerving for stationary obstacles, and not swerving for oncoming vehicles jumping the median. It would be nice to start seeing a few swerving cases.
Monocarp
Tue, 2018-05-08 04:19
"We have seen too many cases
"We have seen too many cases of robocars not swerving for pedestrians, and not swerving for stationary obstacles, and not swerving for oncoming vehicles jumping the median. It would be nice to start seeing a few swerving cases." In other words, you want AVs to drive just as dangerous and risky as human beings, which defeat the point. Swerving is dangerous and just as liable to cause accidents than preventing them.
Dan
Tue, 2018-05-08 11:25
"We have seen too many cases
I agree that swerving is dangerous, and you wouldn't want to be swerving in non-emergencies. But I don't agree that swerving is just as dangerous as plowing ahead into a gore point or a pedestrian. When a robocar detects at the last second that it is highly likely to cause a fatal accident by plowing into a gore point, stopped vehicle, or pedestrian, and it knows that the lane to the right or left is unoccupied, which it assesses to have low likelihood of causing an accident, I definitely want it to rely on its judgement of the relative accident probabilities and swerve into an unoccupied lane.
brad
Tue, 2018-05-08 12:05
Gore point
Nobody would want to go into a crash barrier or gore point rather than swerve. That's because if you're driving into a crash barrier, you've already done something very wrong. The hard questions come when you're about to hit somebody or something in your right of way, and you've done nothing wrong; they have. The law, at least for now, may not favour you if you then choose to swerve and cause a different accident. By the law I mean the courts, because the person saved is not in front of the jury, only the person you took deliberate action to harm.
Anthony
Tue, 2018-05-08 05:37
Safety drivers
" It would be nice to start seeing a few swerving cases."
One thing to keep in mind is that there's almost always a human "safety driver" at the time of these incidents. Suddenly swerving when that human driver was of the opinion that no action should be taken is especially dangerous. And what happens when the human takes over mid-swerve? A half swerve might very well be worse than no swerve.
At level 5, swerving starts to make a lot more sense. At level 5, swerving can easily be tested on a closed track with no humans in the vehicle.
(This whole "safety driver" stage is incredibly dangerous, and I wonder how useful it is in the first place.)
brad
Tue, 2018-05-08 12:14
Entering unoccupied lanes
Yes, I think they will enter unoccupied lanes. Some of the early highway robocar designs, which needed partial human supervision, could not decide to change lanes on their own. To change lanes on your own, you need to know that nobody is coming up behind you at high speed in the lane you want to enter. Generally, this requires a rear radar aimed at that lane. The only reason you need that rear radar is for changing lanes (and swerving). So some designs considered not having it, though long term they all will. If there are people in the vehicle who are in a hurry and want to change lanes, they can tell the car they want to change lanes and that it is clear to do so. If nobody is in a hurry, you don't have to change lanes in ordinary driving.
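A sketch of what that rear-radar check might compute, with made-up timing and margins (a real system would also model the overtaking car's likely braking):

```python
def lane_change_is_clear(rear_gap, closing_speed, t_change=3.0, margin=5.0):
    """
    Rough check for a vehicle approaching from behind in the target lane.

    rear_gap:      meters between us and that vehicle
    closing_speed: m/s by which it is gaining on us (<= 0: not gaining)
    t_change:      seconds the lane change takes
    margin:        meters of gap that must remain when the maneuver ends
    """
    if closing_speed <= 0:
        return True
    return rear_gap - closing_speed * t_change >= margin

# Example: a car 40 m back, closing at 10 m/s (36 km/h faster), leaves
# 40 - 10*3 = 10 m after a 3-second lane change -- clear, but barely.
```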
Anthony
Tue, 2018-05-08 04:53
Depends on the state
It depends on the state, but it's hard to imagine how someone could be found liable for doing the safest thing with the given information, absent a defect which caused the lack of information. Even in a strict products liability case, there has to be a defect before there can be liability.
That said, the law may deem some activities to be so inherently dangerous that they aren't allowed even in an emergency. But that's just a subset of not doing the safest thing, as a matter of law.
In any case, it's still unclear how the case of a truly driverless car will even work. You phrase the question as "will the robot get the same treatment," but unless there's an enormous breakthrough where robots become sentient, I find it unlikely that this will be the question. Rather, the question will be whether the operator committed a tort, and/or whether the product manufacturer committed a tort. In a case where the computer is making the decision, this is probably better handled as a products liability case. Assuming the car itself has been well tested and passed whatever inspections wind up being required in order to operate a robocar, the operator (presumably the person who pressed "go" in the case of a level 5 vehicle) shouldn't have any liability at all for design defects in the vehicle. The operator generally isn't at fault if the brakes fail on a properly maintained vehicle, so why should they be responsible if the collision avoidance system of a robocar fails?
Anthony
Tue, 2018-05-08 05:01
Note
Note that this assumes the law allows robocars in the first place. Right now, in most states, I would assume the person who presses "go" and then lets a robocar do its thing is guilty of a tort merely for letting a robocar loose on the roads. Which makes sense right now, because there are no adequately-tested level 5 vehicles.
brad
Tue, 2018-05-08 12:09
The operator
Yes, the operator of a vehicle should not normally be at fault under the law, though courts can find anything. What I mean there is that somebody operating a fleet of robocars might agree with the vendor to take liability as part of the contract and offer an indemnification. But mostly I think liability will fall on the builders of the car system. (But all parties will get sued just to see what happens.)
Anon
Mon, 2018-05-07 04:25
Subjective
I've said it before and I'll say it again - it's going to be really hard to make (objective) 'robocars' in a subjective world.
Monocarp
Mon, 2018-05-07 09:54
The more (objective)
The more (objective) autonomous machines there are, the more objective the world becomes.
James P Heartney
Mon, 2018-05-07 14:16
Evasive action making things worse
One of the more famous examples of evasive action making an accident worse was the sinking of the Titanic. Had the ship not attempted to avoid the iceberg, and instead slammed into it directly, the ship's bow would have sustained heavy damage, but probably the whole liner would not have sunk. Instead only the front watertight compartment would have filled. However, because the ship turned in an attempt to avoid a collision, the iceberg tore a long hole in the ship's side, flooding multiple compartments and dooming the entire ship.
Cars are a different creature from ocean liners, but it's still true that panicked quick actions may make the overall situation worse.
brad
Mon, 2018-05-07 15:15
Robots don't panic
That's one reason we view this differently -- robots don't panic. But there are things the robot is highly tested and certified to do, and things that are rare and less tested. Things like swerving into other lanes, driving onto sidewalks, running red lights in clear intersections -- these are all possible, and for humans not so hard, because we are general thinkers. (Though we do panic.) Robots are not general thinkers, so they can do well what they were built to do well. Even if you might avoid being hit by going on the sidewalk, you are not experienced at it, and there is the risk of hitting a pedestrian. Not that you would not have sensors and use them, but that maybe, just maybe, this is something that never got tested.