Google's crash is a very positive sign

Newly released reports reveal that one of Google's Gen-2 vehicles (the Lexus) had a fender-bender with a bus, with some responsibility assigned to the system. This is the first crash of this type -- all other impacts have been reported as fairly clearly the fault of the other driver.

This crash ties into an upcoming article I will be writing about driving in places where everybody violates the rules. I just returned from a trip to India, one of the strongest examples of this sort of road system and far more chaotic than California, and it got me thinking a bit more about these problems.

Google is thinking about them too. Google reports it recently started experimenting with new behaviours, in this case when making a right turn on a red light off a major street where the right lane is extra wide. In that situation it has become common behaviour for cars to effectively create two lanes out of one, with a straight-through group on the left and right-turners hugging the curb. The vehicle code would have there be only one lane, so the first person not turning would block everybody turning right, which they would find quite annoying. (In India, lane markers are barely suggestions, and drivers -- in vehicles of every width you can imagine -- dynamically form their own patterns as needed.)

As such, Google wanted their car to be a good citizen and hug the right curb when making a right turn. So it did, but it found the way blocked by sandbags on a storm drain, so it had to "merge" back with the traffic on the left side of the lane. It did this while a bus was coming up on the left, and it made the assumption, as many drivers would, that the bus would yield and slow a bit to let it in. The bus did not, and the Google car hit it, but at very low speed. The Google car could probably have solved this with faster reflexes and a better read of the bus's intent, and probably will in time, but the more interesting question is what you expect of other drivers. The law doesn't imagine this split lane or this "merge," and of course the law doesn't require people to slow down to let you in.

But driving in so many cities requires constantly expecting the other guy to slow down and let you in. (In places like Indonesia, the rules actually give the right-of-way to the guy who cuts you off, because you can see him and he can't easily see you, so it's your job to slow. Of course, robocars see in 360 degrees, so no car has a better view of the situation.)

While some people like to imagine that important ethical questions for robocars revolve around choosing who to kill in an accident, that's actually an extremely rare event. The real ethical issues revolve around how to drive when driving involves routinely breaking the law -- not once in 100 lifetimes, but once every minute. Or once every second, as is the case in India. To solve this problem, we must come up with a resolution, and we must eventually get the law to accept it the same way it accepts it for all the humans out there, who are almost never ticketed for these infractions.

So why is this a good thing? Because Google is starting to work on problems like these, and you need to solve these problems to drive even in orderly places like California. And yes, you are going to have some mistakes, and some dings, on the way there, and that's a good thing, not a bad thing. Mistakes in negotiating who yields to whom are very unlikely to involve injury, as long as you don't involve things smaller than cars (such as pedestrians). Robocars will need to not always yield in a game of chicken or they can't survive on the roads.

In this case, Google says it learned that big vehicles are much less likely to yield. In addition, it sounds like the vehicle's confusion over the sandbags probably made the bus driver decide the vehicle was stuck. It's still unclear to me why the car wasn't able to abort its merge when it saw the bus was not going to yield, since the description has the car sideswiping the bus, not the other way around.

Nobody wants accidents -- and some will play this accident as more than it is -- but neither do we want so much caution that we never learn these lessons.

It's also a good reminder that even Google, though it is the clear leader in the space, still has lots of work to do. A lot of people I talk to imagine that the tech problems have all been solved and all that's left is getting legal and public acceptance. There is great progress being made, but nobody should expect these cars to be perfect today. That's why they run with safety drivers, and did even before the law demanded it. This time the safety driver also decided the bus would yield and so let the car try its merge. But expect more of this as time goes forward. Their current record is not as good as a human's, though I would be curious what the accident rate is for student drivers overseen by a driving instructor, which is roughly parallel to the safety-driver approach. This is Google's first caused accident in around 1.5 million miles.

It's worth noting that humans sometimes solve this problem by making eye contact, to know whether the other driver has seen you. It turns out that robots can do that as well, because the human eye flashes brightly in the red and infrared when looking directly at you -- the "red eye" effect of small flash cameras. And there are ways cars could signal to other drivers, "I see you too," but in reality any robocar should always be seeing all other parties on the road, and this would just be a comfort signal. A little harder to read would be gestures that show intent, like nodding or waving. These can be seen, though not as easily with LIDAR. It's better not to need them.
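As a rough illustration of how that retroreflection could be used, here is a minimal sketch of my own (not anything Google has described): it assumes an 8-bit grayscale frame from an infrared illuminator and camera, and simply flags a small, near-saturated highlight as a candidate eye reflection.

    # Minimal sketch, not any real car's implementation: flag possible "eye
    # contact" when an active-IR frame contains a tiny, near-saturated
    # highlight, the same retroreflection that causes red-eye in flash photos.
    import numpy as np

    def looks_at_us(ir_frame, brightness_thresh=240, min_pixels=4, max_pixels=200):
        """Return True if the frame has a small saturated spot, a crude proxy
        for a retina reflecting our IR illuminator straight back at us."""
        bright = ir_frame >= brightness_thresh      # near-saturated pixels
        count = int(bright.sum())
        # A retina reflection is a tiny cluster; a headlight or the sun would
        # blow out a much larger region, so bound the size from both ends.
        return min_pixels <= count <= max_pixels

    # Usage with a synthetic 640x480 frame containing one small bright spot:
    frame = np.zeros((480, 640), dtype=np.uint8)
    frame[200:203, 320:323] = 255
    print(looks_at_us(frame))   # True

A real system would of course track the geometry of the spot pair and the gaze direction over time; this only shows the core signal.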

Comments

Some good points, but I believe you are incorrect about the California vehicle code. It is perfectly legal for two vehicles to share the same lane at the same time, side by side. This is how a car is able to pass a bicycle, for example, and why motorcycle lane splitting is not forbidden. But in these cases, it is the responsibility of the vehicle approaching from behind not to screw up. From the description of this accident, it's not clear to me at all that the robot shares any fault here.

Two vehicles can indeed share a lane, but they both have to fit, and the problem was due to the sandbags, which temporarily narrowed the lane.

The vehicle code says nothing whatsoever about "fitting". In fact, it is quite ambiguous about fitting, and that is an issue for cars attempting to adhere to the new "3-foot" bicycle safety passing law. The law does specifically place the responsibility on the overtaking driver to pass safely. In this case, the bus was overtaking the robot. The assertion that the fault lies with the sandbags is both curious and false. Here is the actual law:

http://www.leginfo.ca.gov/cgi-bin/displaycode?section=veh&group=21001-22000&file=21750-21760

I'm also frustrated by the two-cars-turning-at-once scenario. I always turn into the closest lane (as you are taught in driving school). I find very often that the driver turning left will make a wide, lazy turn and end up in my lane.

I hit a car that turned right into my path after making eye contact with the driver - so I'm not convinced that this will help robot cars...

I think Brad highlighted the interesting part. It doesn't matter what's legal or ethical or polite, or what sandbags were lying around. What matters is why this computer-controlled car drove into a bus. Even if the bus does reckless, crazy stuff (like a normal human driver), the computer should not hit it; indeed, it should take evasive action to avoid being hit. I'd like to see exactly who hit whom and how. And of course, had the bus also been computer-controlled, the chance of this kind of accident occurring at all would shrink to negligible.

Agreed; we hear about how robocars will be able to drive at 70 MPH with only inches between them because their reaction time is so quick.
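For a sense of scale (my own back-of-the-envelope figures, not from the article): the gap needed just to cover reaction delay is roughly speed times reaction time, before any differences in braking ability are considered. Assuming roughly 1.5 seconds for an attentive human and a hypothetical 0.1 seconds for a robocar:

    # Illustrative arithmetic only: distance covered during the reaction delay.
    MPH_TO_MPS = 0.44704    # miles per hour -> metres per second

    def reaction_gap_m(speed_mph, reaction_s):
        """Distance travelled, in metres, before braking even begins."""
        return speed_mph * MPH_TO_MPS * reaction_s

    for label, t in [("attentive human (~1.5 s)", 1.5), ("hypothetical robocar (0.1 s)", 0.1)]:
        print(f"{label}: {reaction_gap_m(70, t):.1f} m at 70 MPH")
    # Roughly 47 m versus 3 m, which is why "inches apart" would also require
    # matched braking and car-to-car coordination, not just fast reflexes.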

Basically, driverless technology is trying to apply objective reasoning in a subjective world. This is one of the reasons why truly driverless cars have a long way to go.

One thing this accident highlights is just how much publicity the first fatality caused by a driverless vehicle is going to get. There have already been plenty of clickbait headlines about this low-impact collision: "Can Google’s Driverless Car Project Survive a Fatal Accident?", "Driverless Car Crash Brings Public Concerns to the Forefront", and "A Google Car's crash shows the real limitation of driverless vehicles".

Despite the fact that around 1.2 million people die in road accidents worldwide every year, and over 30,000 in the USA alone, that first inevitable fatality will generate a huge amount of publicity and debate. There seems little doubt that driverless cars will be held to a different and much higher standard than humans who cause fatalities. There is nothing wrong with insisting on higher standards; after all, it is one of the benefits used to promote driverless cars. But hopefully the standards won't be set so high as to suffocate the project and cause long delays. Another issue will be the spillover publicity from fatalities caused by the type of driverless experience that some of the major car companies, including Tesla, are trying to introduce; that is, higher-speed driverless mode with a human "ready" to take over. Google tried and basically rejected this halfway human/computer-in-control step as too dangerous, even though it sounds safer. I am guessing Google is right, and that the first fatality may well be at speed on a highway, in a private vehicle in driverless mode, with an inattentive human unable to respond in time. Unfortunately, bad publicity will spread over all the driverless systems being developed. Perhaps a different and distinctive name for the two separate approaches would be beneficial in the long term.
