Waymo's left turns frustrate other drivers


This week's hot story was again from Amir at The Information, and there is even more detail in the author's Twitter thread.

The short summary: Amir was able to find a fair number of Waymo's neighbours in Chandler, Arizona who are getting frustrated by the over-cautious driving patterns of the Waymo vans. Several used the words, "I hate them."

A lot of the problems involve over-hesitation at an unprotected left turn near the Waymo HQ. The car is just not certain when it can turn. There is also additional confirmation of what I reported earlier, that operation with no safety driver is still very rare, and on limited streets.

Unprotected turns, especially left ones, have always been one of the more challenging elements of day-to-day driving. You must contend with oncoming traffic and pedestrians who may be crossing as well. You may have to do this against traffic that is moving at high speed.

While I am generally not surprised that these intersections can be a problem, I am a little surprised they are one for Waymo. Waymo is the only operating car with a steerable, high-resolution, long-range LIDAR on board. This means it can see out 200m or more (in a narrower field of view). This lets it get a good look at traffic coming at it in such situations. (Radar also sees such vehicles, but with much less resolution.)

For Waymo, this is not a problem of sensors but one of being too timid. One reason they are operating in Phoenix is that it's a pretty easy area in which to be timid. The instinct of all teams is to avoid early risks that lead to early accidents. That became even stronger in Arizona after the Uber fatality, which used up a lot of the public's tolerance for such errors. As such, the "better safe than sorry" philosophy which was already present in Waymo and most teams has been strengthened.

The problem is, it needs to eventually be weakened. Timid drivers won't make it in the real world. They won't make it far at all in non-tame places like Boston. Thus you face a nasty trade-off:

  • The more timid you are, the more problems you have and the more you annoy the people driving behind you.
  • The less timid you are, the greater the risk of a mistake and an accident.

That leaves only a few options. The other drivers must adapt better to a timid driver, or the public must get more tolerant of a slightly higher (but still better than human) accident risk.

Hype around self-driving cars has pushed some of the public to expect computerized perfection. That's not coming.

Changing how people react?

Believe it or not, it is not as impossible to change the behaviour of other drivers as you might think. In Manhattan not that many years ago, two things were very common: endless honking, and gridlock. Both were the result of well-ingrained aggressive habits of New York drivers. The city decided to crack down on both, and got aggressive with fines. It worked, and both are now vastly reduced.

It's less possible to adjust the patterns of pedestrians. Some recent articles have gotten a lot of attention from people suggesting this must happen. I think some of it will happen, but much less than hoped, and that will be the subject of another article.

A robocar should eventually get very good at driving decisions which involve physics, because computers are very good at physics. Unlike humans, who have difficulty precisely judging when an oncoming car will reach them, robots could in theory do it with frightening precision, making turns through gaps that seem tiny to humans. That would frighten other drivers and cause problems, so it's not something we'll see today.
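As a rough illustration of the physics involved (this is my own sketch, not Waymo's logic, and the numbers are made up), the unprotected-left decision reduces to comparing the oncoming car's arrival time with the time needed to clear the intersection, plus whatever safety margin the designers choose:

```python
def accept_gap(oncoming_distance_m: float,
               oncoming_speed_mps: float,
               turn_clear_time_s: float,
               safety_margin_s: float = 2.0) -> bool:
    """Return True if a left turn can be completed before an
    oncoming vehicle arrives. All parameters are illustrative."""
    if oncoming_speed_mps <= 0:
        return True  # oncoming car is stopped or receding
    time_to_arrival = oncoming_distance_m / oncoming_speed_mps
    return time_to_arrival > turn_clear_time_s + safety_margin_s

# A car 200 m out (the claimed LIDAR range) at 20 m/s (~72 km/h)
# arrives in 10 s -- ample time for a turn that takes ~4 s.
print(accept_gap(200, 20, 4.0))   # True
# At 60 m out, the same turn is rejected with a 2 s margin.
print(accept_gap(60, 20, 4.0))    # False
```

The trade-off in the article lives in `safety_margin_s`: a timid car sets it large and annoys traffic behind it; a bold one sets it small and accepts more risk.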

This isn't the last story we'll see of robocars frustrating other drivers. That's particularly true if we unwisely keep them at the speed limit. My long term belief is that most of the traffic code should be eliminated for robocars and be replaced by, "it's legal if you can do it safely and without unfairly impeding traffic." The whole idea of a traffic code only makes sense for humans. With robots, since there will never be more than a few dozen different software stacks on the road, you can just get all the designers together in a room and work out what should be done and what is safe. Traffic codes and other laws are there to deal with humans who can't be trusted to know the rules or even obey the rules they know. While companies can't be trusted to do anything but look after their own interests, you can easily make it in their interests to follow the rules.


"it's legal if you can do it safely and without unfairly impeding traffic" is what the law should be for humans, too, for the most part

An example would be a place where, for humans, we have made the rule "no left turn." We make that rule because we can't trust humans to safely and politely turn left there. 99% might, but even if 1% don't, it's a big risk, so we ban it outright.

Robots with perfect ability to predict the positions of other cars, on the other hand, could easily be trusted to make the turn when the road is clear. If they can demonstrate they are near-perfect at it, they should turn, to help the flow of traffic. The same is true for 4-way stops. We want humans to come to a full stop, but robots can be quite good at knowing there is nobody else in or approaching the other paths, and could be allowed to roll through with no risk. There are hundreds of examples of this -- in fact, most of the vehicle code and most special rules.

Now it is true that experiments have shown you can take away all the rules for humans and it still works -- i.e. remove all the signs, all the lines. It even improves throughput in some areas. But robots can do even better.

Humans will drive selfishly, and robots will do the same (or rather in the interests of their owners and passengers.) But there is a difference. You can punish humans for destructive behaviours and it stops some of them. Punish a robot for them, and it stops all the robots from that vendor. Completely and absolutely.

It is not a technological problem, but a philosophical one. Can we do useful things without harming others?
We will obtain the same benefit, and possibly more, by manufacturing safe robocars under a market system where price need not be the deciding factor, and where the benefit reaches us all.
In some ways, Uber did the opposite.
In the FIFA World Cup, only one team takes the trophy, but all participants benefit.

I don't think the difference is whether or not it should be a law. If 99% of people can safely make a left turn somewhere, left turns should be allowed. If there is adequate visibility at an intersection, rolling through a four-way stop should be allowed. The thing is, it basically *is* allowed. Virtually no one comes to a full stop at a stop sign when there isn't traffic you have to stop for. There's a no-left-turn sign near where I live that is adhered to during peak traffic times and routinely ignored during low traffic times.

The only real difference is that computers aren't (yet) good at making judgments about when the law is an ass and should be ignored. It'll be interesting seeing how this is solved. Exempting computers from traffic laws would be one way, but in my opinion it'd be the wrong way.

That said, I do think there will be problems with charging robots with the violation of traffic laws. Traffic laws are strict liability, but even strict liability crimes generally require some volitional conduct on the part of the accused. Depending who is charged with the violation of a traffic law, there may very well be a good defense.

Interestingly, Florida's struggle with right-on-red traffic cameras illustrates a lot of this. Florida installed traffic cameras at a lot of intersections, and was ticketing people for making improper rights on red. The law says that you have to come to a complete stop *before* the stop line when making a right turn on red. Virtually no one actually follows this law though, and an incredible number of people were being issued with tickets. A lot of them were being thrown out, due to loopholes in the law, but ultimately the problem was solved not by changing the law, but instead passing a new law that says that a ticket can't be issued for a right-on-red violation if the person stops at any point (not necessarily before the stop line) and the violation isn't witnessed by a police officer present at the scene. So it remains perfectly illegal, but virtually unenforceable, which is exactly the way the legislators want it.

Just to add one point to what I proposed: only robocars in one area.
Brad just wrote: My long term belief is that most of the traffic code should be eliminated for robocars and be replaced by, "it's legal if you can do it safely and without unfairly impeding traffic." The whole idea of a traffic code only makes sense for humans.
I agree. So if we have only robocars in one area, Brad's rule could be easily applied there.

In the Australian state where I reside, there has been a marked change in driving behaviour over the last two decades. While many or most still go a little over the speed limit, it is rare to see vehicles going well over it. The combination of new technology for measuring speed (speed cameras) and governments quite liking the revenue it raises has greatly slowed traffic to not much more than Grandma speed. Everyone has had to learn patience. I doubt a vehicle that never goes over the speed limit would create the same level of anger and frustration as in previous decades. It's probably now a more suitable environment for an autonomous vehicle. Perhaps it shows meeting halfway is doable?

I agree that the only thing we can count on from Waymo is naked self-interest. I wonder if they will need some help to choose to accept some more risk, such as risking a ticket for impeding traffic. I also look forward to the moment when legislatures decide to adjust tort law so that when humans and robots run into each other, the robots are held to a much higher standard, such that they are overwhelmingly found at fault. That way, the externalities that the robot makers are dumping onto the rest of us will at least be partly paid back.

How much risk are we going to require them to take?

I bet part of the discrepancy is based on the fact that humans are just going to subconsciously ignore certain risks, in many cases irrationally or illegally. Google used to tell the story about how its cars would follow the law at a four-way stop and wind up sitting there for ages because people don't follow the law at a four-way stop (probably referring to the fact that people stop past the stop line, to the extent they stop at all). But with a four-way stop, there's more leeway to take risks, as people are moving slowly. When making a left turn, especially when there are pedestrians around, a mistake can be much more serious.

I wonder if part of the problem is judging the intention of those pedestrians. Are they going to cross against a don't-walk sign? Are they going to cross at a walk signal, or are they waiting to cross in another direction (or just standing on the corner for some reason)? This is a really difficult problem for a computer, which most humans handle fairly well (although there are lots of mistakes made and crashes that occur).

As a pedestrian, I'm always very careful when crossing at a walk signal when there are cars making a left turn. I'd estimate a good 10-25% of drivers won't yield to pedestrians in the crosswalk in that situation. So I think the risk tolerance of many, if not most, drivers is set too high.

I don't think you can dispense with the vehicle code, even in cases where only robocars are used. There can be conflicts between algorithms, possibly including patent wars. For instance, one vendor might develop and patent a driving algorithm that puts other robocars at a disadvantage without improving average efficiency. This would not be good for consumers. It is better to specify some, most, or maybe all of how the cars will resolve conflicts.

I have an article coming up. The trick is you can get all the developers in a room together. They can agree on principles using something like Kant's categorical imperative. Only do it if it would be OK if everybody did it.

It is impossible to combine driverless cars and pedestrians and regular driven cars without having rules to set the expectations. Imagine the panic when you expect the driverless car to do something and it suddenly decides to cut you off or do something most drivers would not do. People will panic and will create accidents, which will end up rightfully being blamed on the driverless car.

There is a very detailed set of rules of the road. If they need refinement, I am sure that will happen.
