Waymo unmanned vehicle has debatable encounter with erratic videographer
Submitted by brad on Mon, 2019-10-28 16:07
Watching a third-party video of a Waymo minivan operating entirely vacant, I was a bit surprised (at 1:05 in the video) when the van did not pause after the video-shooting driver of the other car pulled up next to it by going into the oncoming lane, and then it cut left in front of that car. All very slow and not dangerous, but not what I expected. In the article linked below I include the video and muse on the issues of handling situations like this.
Read about it at Forbes.com in Waymo operates vacant but makes potential misstep
You can also watch the YouTube video.
Comments
Andrew Clough
Tue, 2019-10-29 08:40
Probably no remote involvement
At my last job I took a shift monitoring our remote medical delivery bots one night every month. From that experience, I'd say I can't see how a remote operator could possibly have instructed the car to proceed that quickly. Maybe if they'd been alerted to the tail, had been watching the entire time, and had a button that was essentially "It's fine, just execute the normal plan"?
brad
Tue, 2019-10-29 11:04
Probably not
It is possible in Waymo's case that the car can decide that it wants remote assist, and then while waiting for that assist, the situation changes and it decides it no longer needs it. It will then proceed. However, Waymo has not said whether that happened.
Alex Anderson
Tue, 2019-10-29 10:11
I think that the fact that
I think that the fact that the Waymo waited for the driver to come to a complete stop essentially put the risk at an acceptable level. Even if the other driver intended to go ahead, his stopped state gave him ample time to react to the Waymo. It likely would have required intent on the part of the human driver to create a collision here, given that there was a good 5 seconds before the Waymo actually steered in front of him. It also appeared to me that the Waymo did not treat this turn the same as its previous ones; it seemed to proceed more slowly. Waymo has had to contend with drivers and pedestrians behaving differently around it, and there is a risk of getting into situations like we see with Tesla's Smart Summon feature, where the car can't act because of the actions of others.
As a side note, while autonomous vehicles can't make eye contact, I do think it's possible they may eventually (if not already) consider which way humans are directing their attention.
brad
Tue, 2019-10-29 11:02
What caution
The Waymo did indeed see the car, and did not move while the video car was moving; it only moved once the video car came to a stop (sort of at the stop sign, but again, that car was going the wrong way in the oncoming lane, so stop signs are not really the question).
I think the issue is this. When somebody does stop at a stop sign, a highly probable prediction for their next action is to go if the intersection is clear.
With a normal driver, one can start assuming the typical rules of stop signs -- the Waymo van got there first, so it gets to go first. But this is not like a 4-way stop, and this is not a normal driver obeying the rules of the road.
There is some probability, and it's not that small, that the other driver is in a hurry and trying to get around you, and thus they have passed you on the left. The defensive driver will let them go.
Alex Anderson
Tue, 2019-10-29 16:10
What I'm saying is that if
What I'm saying is that if the Waymo did a statistical calculation of the risk posed by the human driver's erratic behavior, it could determine that it was safe to proceed in the manner it did. Even if the human driver did start to go immediately, the fact that he was starting from a stopped state would have left him with time to react to the Waymo. The Waymo did not immediately turn left. It pulled forward, and as it pulled forward, the gap between the vehicles increased. By the time it started turning in front of the other car, it was very safe to do so. The human driver would have needed to do something more than make a predictable human mistake to hit the Waymo. I count 6 seconds from the time the Waymo began to slowly move forward to the time that its wheels turned left.
Might a human handle this differently? Sure! Humans don't drive mathematically.
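For concreteness, here is a rough sketch of the kind of time-gap reasoning I'm describing, in Python. Every number in it (the other car's acceleration, the reaction delay, the gap, the clearance time) is an assumption invented for illustration, not anything Waymo has published.

```python
# Rough sketch of the "a stopped car needs time to reach me" argument.
# All numbers are illustrative assumptions, not Waymo data.

def time_to_cover(distance_m: float, accel_mps2: float, reaction_s: float) -> float:
    """Time for a stopped car to cover distance_m, assuming a driver
    reaction delay followed by constant acceleration."""
    return reaction_s + (2.0 * distance_m / accel_mps2) ** 0.5

def safe_to_turn(gap_m: float, clearance_s: float,
                 accel_mps2: float = 2.5,   # brisk start from a stop
                 reaction_s: float = 1.5,   # typical perception-reaction time
                 margin_s: float = 1.5) -> bool:
    """True if the other car, starting from rest, cannot reach the
    conflict point until comfortably after we have cleared it."""
    return time_to_cover(gap_m, accel_mps2, reaction_s) > clearance_s + margin_s

# Example: the gap has grown to roughly 15 m by the time the turn begins,
# and the turning van needs about 3 s to clear the conflict point.
print(safe_to_turn(gap_m=15.0, clearance_s=3.0))   # True with these numbers
```

The point is simply that a car starting from a dead stop needs several seconds to reach the conflict point, which is what makes the maneuver defensible.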
brad
Tue, 2019-10-29 17:17
Six seconds?
I don't count anything close to 6 seconds before it starts turning; barely a couple. The camera is rotating with the Waymo van. From what I understand, after the car stopped, the Waymo van drove as if the car would stay stopped. A possible assumption, but I don't see it as assured enough. Again, the normal prediction for a car that has stopped at an intersection is that it will proceed soon. Now yes, one hopes a rational human driver would, if somebody was turning left in front of him, hold off. But the car's movements should have tagged this car as a "non-rational driver" and caused more conservative assumptions.
FKA
Tue, 2019-10-29 18:27
Rules
A good example of how worrying too much about "the rules of the road" makes little sense on these low-traffic suburban neighborhood roads? Maybe. What the Waymo car did was right by the statutory rules and right by the de facto rules.
The other car stopped, and the Waymo van asserted its right of way by slowly putting its vehicle in front of the other one. It did exactly the right thing. When a car stops at an intersection that you're already stopped at, the assumption is that the driver is yielding to you. When you have the right of way (Waymo did), and the other vehicle can easily avoid hitting you (it could), putting your vehicle in front of the other is exactly right.
Granted, from this one short video it's not clear if the Waymo vehicle did exactly the right thing for the right reasons.
brad
Wed, 2019-10-30 11:05
Not the rules
Right, this is not a rules-of-the-road situation. This is a "what do you do when another car is violating the rules?" situation. I believe that when another car goes flagrantly outside the rules, you want to start being more cautious -- not when other cars go outside the rules in what we might call "normal" ways, like rolling stops and the odd deviation from their lane. This car did a highly atypical thing. The Waymo system should know the difference between normal soft rulebreaking and harder rulebreaking, and become more conservative in the latter case.
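To make that distinction concrete, here is a minimal sketch of what I mean. The categories and multipliers are entirely my own invention for illustration, not anything from Waymo's actual planner.

```python
# Sketch of "soft vs. hard rulebreaking" feeding a caution level.
# The categories and multipliers are invented for illustration only.
from enum import Enum

class Violation(Enum):
    ROLLING_STOP = "rolling stop"          # common "soft" rulebreaking
    MINOR_LANE_DRIFT = "minor lane drift"  # also soft
    WRONG_WAY_PASS = "wrong-way pass"      # flagrant, "hard" rulebreaking
    RAN_RED_LIGHT = "ran red light"        # hard

SOFT = {Violation.ROLLING_STOP, Violation.MINOR_LANE_DRIFT}

def caution_multiplier(observed: set) -> float:
    """Scale up wait times and safety margins around a vehicle whose
    observed behavior goes beyond normal soft rulebreaking."""
    if not observed:
        return 1.0
    if observed <= SOFT:
        return 1.2   # slightly more careful, but business as usual
    return 2.5       # hard rulebreaking: treat the driver as unpredictable

# The videographer's car passed on the left in the oncoming lane:
print(caution_multiplier({Violation.WRONG_WAY_PASS}))   # -> 2.5
```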
FKA
Wed, 2019-10-30 11:27
It did
It seems like the car did become more conservative. What more would you have wanted it to do? The only safer thing I can think of would have been to alter its course completely and turn right, and I think that would have been too apprehensive.
Another option would be to insist that a human take over. And maybe in production you'd want to do that. But what the Waymo vehicle did was not any more dangerous than staying still. Sure, the other driver could have consciously chosen to ram the Waymo vehicle. But he could have done that even if the Waymo vehicle remained stopped.
I wonder how atypical the other car's behavior was, for Waymo, for that location. I'm not sure what road it was on, but there were no other cars in motion and very few even parked. I guess how atypical it is depends on how many variables you want to match. Driving on the wrong side of the road in these types of developments is common. Just a few weeks ago I had someone slowly driving at me on the wrong side of the road on the road I live on. I decided to slowly pass her by going onto the wrong side of the road myself. We pretended we were in England that day. And I wondered what a Tesla in full self driving mode would have done. Probably not the right thing.
brad
Wed, 2019-10-30 12:42
What should it do?
I think that an erratic driver of this sort should cause the car to stop, as it did, and throw the question to the ops center. It went immediately after the other car stopped, so that did not happen in this case. Failing that, it should see the other car stop, then wait a few seconds to find out what it's going to do, and then go slowly to clearly signal its intention. And yes, going straight across (as one reader thought it did) and making a sharp turn on the other side of the intersection would also be a good plan, but that would only happen if you imagined the vehicle had planned for this very situation, which is not so likely. Normally, of course, you do a left turn with as large a turning radius as you can get at such a T intersection, so you can be smooth and quick. That's because anybody coming from the other 2 directions is less likely to be able to hit you.
In this case, because there is somebody immediately to your left, who you are going to cut in front of, you should delay cutting in front of them as much as possible.
FKA
Wed, 2019-10-30 16:36
It should do what it did, which is what I would have done too
Pausing longer after the other car stopped would have been more dangerous. It would have signalled to the other driver that the Waymo vehicle was not going to take the right of way.
I still don't see anything wrong with what the Waymo vehicle did. The other vehicle stopped. That's key. If you're going to cut someone off, you don't stop.
By the way, I don't think it would have had to have planned for this exact situation to decide to make a larger turning radius. Running some possible scenarios would be enough to "know" that it's safest to make the turn in such a way that the only way the other car is going to hit you is if they intentionally hit you, and keeping the distance as far as possible is the way to achieve that. And it was achieved, I think.
As I said in the thread where you suggested that a robocar might sometimes want to intentionally crash into an erratic driver, position is the key to driving assertively. When you have the right of way, you put your car in a position where the other drivers can either yield to you or crash into you. But you generally should leave enough room so that they don't accidentally crash into you.
brad
Wed, 2019-10-30 17:40
Yes, and no
The Waymo did the right thing for a human-driven car whose driver knows that the other driver is trying to film and thus is not going to go. But of course the car can't know that (at least today). All it knows is that the other car has tried to pass it by going into the oncoming lane, or worse, that it is an erratic driver ignoring normal practice and the law. In either of those situations, the car should exercise extra caution because it can't model humans as well as we can. It can't make eye contact. If I were in this situation, I would be making eye contact with the other driver, then going while watching them carefully. The car can't do what I would do, and what I suspect you would do too.
FKA
Wed, 2019-10-30 19:10
The driver did not
The driver did not try to pass the Waymo vehicle. On the contrary, he came to a stop next to it.
The Waymo vehicle did exercise extra caution. It waited until the other vehicle came to a stop before entering the intersection, and it entered the intersection slowly and carefully.
The Waymo did the right thing. Period.
The Waymo also did what I would have done if I were controlling the car remotely. Yes, if I were in the car, I would have tried to communicate with the other driver. I might have given him a "WTF?" look. Although, given the situation, it probably would have been more of a "yes, I'm driving autonomously, so what" look. I'm not sure that would have made much difference, though. And that obviously wasn't an option. So the Waymo did the best it could with what it had. It communicated with the other driver by asserting the right of way. Had the other driver responded by driving into the intersection, I'm sure the Waymo would have stopped. If it was programmed really well maybe it would have even gone into reverse to avoid a crash. Again, I don't see how there would have been a crash unless the other driver intentionally crashed into the Waymo. Maybe if the other driver was drunk, I guess, but the way he was driving did not give any cause for suspicion of that. The driving just seemed like someone who was curious about what they were looking at. Yes, the Waymo probably didn't recognize the camera, but it wouldn't be completely surprising if the Waymo had prior experience with gawkers, and the presence of the camera didn't really change the situation any.
You have to put what you call "erratic" driving into context. The Waymo was driving around with no human in it. With that in mind, the human driver's behavior wasn't that odd.
Kudos to Waymo if they've actually learned all of this, and it wasn't just lucky that the Waymo did the right thing.
brad
Thu, 2019-10-31 09:39
What Waymo knows
I don't think the Waymo has that level of understanding. It did identify that it was odd for a car to be on its left moving in the same direction -- one in the opposite direction would be normal -- but as soon as that vehicle stopped, it immediately considered the matter resolved. I would advise more caution than that.
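One simple way to express "don't clear the flag the moment the anomaly stops" is to hold the erratic classification for a while after the last odd behavior. A hypothetical sketch, with an arbitrary 10-second hold time:

```python
# Sketch: keep treating an actor as erratic for a hold period after the
# anomalous behavior ends, rather than clearing the flag the moment it stops.
# The 10-second hold time is an arbitrary illustrative choice.

class ErraticFlag:
    def __init__(self, hold_s: float = 10.0):
        self.hold_s = hold_s
        self.last_anomaly_s = None   # timestamp of the last odd behavior

    def report_anomaly(self, now_s: float) -> None:
        self.last_anomaly_s = now_s

    def is_erratic(self, now_s: float) -> bool:
        if self.last_anomaly_s is None:
            return False
        return (now_s - self.last_anomaly_s) < self.hold_s

# The other car stops its wrong-way pass at t=0; no new anomaly is
# reported, but for the next 10 seconds the planner still treats it
# as unpredictable before reverting to normal predictions.
flag = ErraticFlag()
flag.report_anomaly(now_s=0.0)
print(flag.is_erratic(now_s=2.0))    # True  -> keep extra caution
print(flag.is_erratic(now_s=15.0))   # False -> back to normal
```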
vroom
Thu, 2019-10-31 00:23
One at-fault crash in 10m
One at-fault crash in 10 million miles? Does that count the reckless crash that Anthony Levandowski precipitated in the Prius?
https://www.businessinsider.com/anthony-levandowski-google-self-driving-car-crash-2018-10
brad
Thu, 2019-10-31 09:37
No
Google doesn't count that as an at-fault accident for the software, for several reasons, though it is certainly an interesting incident to study. Leaving out the fact that it was not a version of the software meant to be deployed -- Google, like most companies, has rules about what can be deployed, with code review and sign-off by others required -- what happened is that the car stopped on a ramp. That is not ideal behaviour, but other cars are supposed to stop as well; according to reports, another driver tried to get around, and Anthony took over.
I never saw Anthony's video so I don't know more about the accident. But if another car had had an accident because of this, you could put it in the more nebulous cloud of accidents you might term, "accidents triggered because the car, while doing nothing illegal, did something too conservative or non-human."
Waymo has had a few of those: accidents where they waited too long to go at a light, and a human driver, presuming that the car would go aggressively as most humans do, rear-ended it. The law is clear on fault in these cases, but they are still worth studying.
However, Waymo's record on this class of accidents is also very good.
vroom
Fri, 2019-11-01 05:24
According to the news
According to the news articles, the Google Prius did *not* stop on the ramp, but rather actively inhibited the other car from making a safe merge. If the Prius simply stopped completely, it might not have instigated the accident.
What’s more concerning is the aftermath: Google / Waymo took no official action to curb Levandowski. If it had been a contract safety driver and not him, that person would almost certainly have been fired. Sadder still, evidently none of the engineers on the project called out the matter for its ethical absurdity. At that time, the engineers were still getting outsized chauffeur bonuses, right? Waymo engineers do get paid a lot less today.
I’m very skeptical of any of the safety claims that Waymo makes. Based on the takeovers I’ve personally witnessed in urban areas, there’s no way they’re doing 10,000 miles per takeover in urban driving (maybe in Chandler, though). Taking ownership of the Levandowski crash is an important first step towards providing the accountability that the public deserves. Based on how easily Tesla fooled the NHTSA, regulators are certainly not going to protect us.
Hopefully, Waymo opens their safety record for public scrutiny / data mining. Without consumer-verifiable data *pre-ride*, there’s no reason to trust Waymo over e.g. Tesla Autopilot.
brad
Fri, 2019-11-01 09:37
Prius-Camry
I don't know any more about that incident than was in the papers. Another report says that the Prius blocked a merging Camry. Technically, in California law, the freeway traffic has the right of way over a merging car, and so the accident blame in such a situation would go on the car that tried to force its way into the merge rather than stop at the end of the merge lane. In the polite world, you are supposed to make an effort to help the merging cars get onto the highway, but it's not the law.
Obviously, Waymo's relationship with Anthony was a strange one that I never quite understood, and it took them time to reverse course on him. I am not sure you can say no managers called out the matter -- Anthony had his fans and detractors. And the story says it all happened because Isaac was being critical of Anthony trying to break protocol on where to test, so it began with Anthony being called out.
I agree they should be open about all events, though. But I am not holding my breath because the code Anthony was testing and Anthony are long gone.
vroom
Fri, 2019-11-01 13:50
Here’s the more complete
Here’s the more complete article, which describes how the Google / Waymo Prius instigated the accident. Notably, nobody went back to check on the crash victim even though the whole thing was recorded on camera and it was very clear there had been an accident. “Long hair don’t care.”
https://www.newyorker.com/magazine/2018/10/22/did-uber-steal-googles-intellectual-property
While I hear you on what the law says, and while your argument would (for better or worse) likely be deemed “correct” within the inner engineering circles of Waymo or any other self-driving team, the position of “law clears the robot of liability” continues a narrative that is frankly a bit arrogant (all studies today show users are scared of robocars) and plainly adversarial to the interest of public safety.
For context, Uber was also able to clear itself of liability of Elaine Herzberg’s death (and pass the liability completely onto the person behind the wheel) despite substantial evidence that Uber *intentionally* made their car unsafe in preparing for a CEO demo. (Uber crippled the ability of their perception system to communicate the existence of obstacles to planning because perception was producing too many false positives to make the demo work. Uber also disabled Volvo’s failsafe). Furthermore, Tesla continues to have numerous fatal crashes and yet stands behind the official (yet rigorously debunked) NHTSA study that argues Autopilot is safer than a human operator.
While most of the core technical challenges of robocars are now well-studied, the question of user-product fit is still largely an unknown (particularly in any way open to rigorous public review). Private studies (and issues like people throwing rocks at Waymos) tend to show people don’t want these things. People are already scared of not just accidents but traffic law and its inefficacies. Robocars only add more uncertainty to the situation.
It’s immensely frustrating for self-driving companies to hide behind legal talking points, because their use of the roads is a privilege. The public owns the roads. The public won’t own the profits generated from robotaxis, but they should own (rigorously) the raw data and arguments used to accredit robots as safe drivers in public spaces. The results so far show definitively that NHTSA and regulators are not going to responsibly step up to that challenge themselves.
brad
Fri, 2019-11-01 18:44
This accident was a problem
I agree that this accident (and other accidents where the vehicle is not at fault under the law) deserves more discussion and scrutiny. I just mention why Google can correctly claim it has only the one at-fault accident.
A key factor to consider in this case is whether it was Anthony going rogue or a systemic problem in how Google conducts itself, and it definitely seems to be the former. You can fault them for keeping him around after this, but he's long gone now. Anthony has always been one of the most aggressive people in the field, and this is not the only time he's crossed the line, in my view.
I don't agree about the studies. I think almost all studies I've seen are near meaningless. I actually think that far from people being afraid of robocars, they are far too trusting of them! You can't ask people what they think of a technology which barely exists and almost none of the public have seen or understand. You can ask, but the data are not useful for conclusions. People didn't trust elevators without operators, either. The number of people throwing rocks can probably be counted on one hand, as far as I have heard. Everywhere I go people are too eager, too trusting, not too afraid.
Uber did behave badly, but not in the way you say. In particular, that they did not use the Volvo AEB is not at all unusual; everybody removes that function from test cars because they are trying to build a superior one, and it would be problematic to have two at once. I have seen no report of them crippling perception, but the full report is not out. I saw a report of them not allowing the car to brake when it called for more than 0.6g. They did that rather stupidly. Not enabling hard braking actually is not that unreasonable, but the car definitely should have done mild braking and issued warnings if it wasn't going to brake hard. But all of that was 1% of the cause of the accident -- 99% was in who they got to safety drive, and what she did, and how they managed her.
As for Tesla, well, I've written a lot about that. The debunking of the NHTSA claim is very interesting, but also quite messy, because it makes a lot of assumptions about what the Tesla numbers mean. Tesla is very weaselly about it, though; they could just give the real numbers, and they don't. Yes, that does make us suspect the real numbers are not good, but we don't yet properly know them.