Anatomy of the first robocar accidents

I have prepared a large new Robocar article. This one covers just what will happen when the first robocars are involved in accidents out on public streets, possibly injuring people. While everybody is working to minimize this, perfection is neither possible nor the right goal, so it will eventually happen. In public discussions of robocars and in press articles, people are always very curious about accidents, liability and insurance, and about the extent to which these issues are blockers on the technology.

So please consider:

Anatomy of a Robocar Accident

This article comes in part from having attended the "We Robot" conference in April at Stanford University. While it was generally about robots and the law, robocars were probably the most popular topic. Several of the papers in the proceedings are well worth reading for those interested in the law of robotics. (The real law, not the silly Asimov laws.)

In a curious coincidence, last week saw an unusual robocar accident in Brazil that caused minor injuries -- on live TV, no less. On a Brazilian TV show, Professor Alberto Ferreira do Souza of the Federal University of Espirito Santo had just shown TV presenter Ana Maria Braga his team's robocar, which features the smaller 32-laser Velodyne LIDAR on the roof and various other sensors. After the successful demo, he reached into the car to turn off the system and return it to manual control. Unfortunately, the car had stopped on an incline, and switching it off from outside the car released the hold the drive-by-wire system had on the brakes. The car started rolling down the hill, and the open door whacked Braga hard, though fortunately she suffered only minor injuries. Here is a video, and here is the story in Portuguese with a different video clip. I have no idea why a puppet parrot is commenting on the accident.

As you can surmise, the self-driving software was not directly at fault here; rather, it was bad human judgment in turning it off. Curiously, this is not the first time we've seen serious problems with humans incorrectly turning these systems on and off. I routinely show a video of my friend Anthony Levandowski, who built a motorcycle for the DARPA Grand Challenge and forgot to turn on an important system just before the race, causing his bike to tip over right out of the starting gate. Volvo also had the "press demo from hell" when their crash-prevention system failed to operate. It was reported that a battery had discharged by mistake, and that in recharging it, the team had disabled the collision system.
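
The Brazil incident suggests the kind of guard condition a disengage procedure could enforce. Here is a minimal sketch, in Python, of a handoff routine that refuses to release an automated brake hold unless the car is mechanically secured or a human is actually in the seat to take over. Every sensor and function name here is hypothetical, invented for illustration, not taken from any real system:

    # Hypothetical autonomy-disengage interlock; all names are illustrative.
    from dataclasses import dataclass

    @dataclass
    class VehicleState:
        driver_present: bool     # seat occupancy sensor
        parking_brake_set: bool  # mechanical brake, independent of drive-by-wire
        grade_percent: float     # road grade under the wheels
        speed_mps: float         # current speed

    def can_release_brake_hold(state: VehicleState) -> bool:
        """Only give up the automated brake hold under safe conditions."""
        if state.speed_mps > 0.1:
            return False              # never disengage while moving
        if state.parking_brake_set:
            return True               # mechanically secured; the hold is redundant
        if abs(state.grade_percent) > 1.0:
            return False              # on a slope, insist on the mechanical brake
        return state.driver_present   # on the flat, a seated human may take over

In the Brazil case, a guard like this would have refused the switch-off from outside the car until the parking brake was set.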

There have been several small robocar accidents. Just about every team in the DARPA Grand Challenges had a car go awry during early development and testing, and a few even had accidents during the challenge, with one small car-to-car fender-bender and a fairly hard hit on concrete barriers. Google has also reported that their car was rear-ended while stopped at a light during testing -- a situation where blame is always placed on the rear car.

Comments

Brad,

The "Anatomy of a Robocar Accident" article is excellent!

I wonder if the manufacturers will disable *my* car when someone else gets into an accident and they haven't had time to fix the problem?

I think New Zealand has a good idea there with the no-fault insurance.

Thanks,
Randy

I find the whole excitement about robocars to be getting way ahead of itself. As the article notes, robocars will be designed and programmed to be more obedient to motor vehicle laws than human drivers. What will happen in the common scenario of two lanes merging under crowded stop-and-go traffic conditions? Rather than obeying the (not always followed) social compact of alternating one car from each lane into the merged lane, would the human drivers take advantage of the robocars' predilection for collision-avoidance and just cut the robocars off? I can easily foresee a robocar stuck at the merge point while a parade of human-operated cars from the other lane proceeds merrily along into the merged lane. Both the robocar and all the cars traveling behind it will be stuck. Alternatively, will the robocar insist it is now its turn to proceed in the merge and crash into a car merging from the other lane?

This has already been explored by Google at 4-way stops. They concluded the car needed some aggressive patterns to match the human style at intersections, programmed them in, and it worked. I would be very surprised at any call by the law to punish this. Pure timidity might be the strict letter of the law, but it would hardly be rational to enforce behavior that leaves vehicles delayed. The people behind the "stuck" vehicle, especially, would complain vociferously if the law forced this option.

However, in the meantime, the cars will have people in them, who can take the wheel and get through any situation the car can't handle. In the more distant future, if the cars are empty, they themselves are not in a hurry, but the people behind them might be. By that time I expect working data service at things like highway lane merges where a remote control center could resolve the issue.

The reality is that a driver might say, "Hah-hah, I can cut off the robot and it will lose the game of chicken with me and back off, because it's a stupid robot." But they will really be doing it to all the humans waiting behind that robot, and while some people are jerks like this, not everybody is, and the robot will find a way.

I doubt an unmanned robot would be designed to truly force its way in, though it might be programmed to take its share of the contested right-of-way and back off if the other car insists harder. There are lots of drivers on the roads today who follow that pattern already.

Note: For clarity I am largely ignoring a robocar that has a human being within who could direct or control the vehicle in a crisis, but rather focusing on self-directed robocars (empty or carrying a passenger such as a child).

Sorry, but I don't find your answer that compelling. On the one hand you suggest the robocars can be programmed to be more like humans, but this means tiptoeing up to (if not crossing) the line of violating proper and safe operation. As you noted in your main post, a cluster of robocars rigorously obeying the speed limit is a prescription for a clot that would jam up a highway.* How would a manufacturer/software developer defend in court having built the ability to violate the laws into the vehicle's operating system? On the other hand, your suggestion that some humans would eventually let the robocar into the merge is certainly reasonable, but a shift in the dynamics of the merge between the two lanes from roughly a 1:1 split to something like 6:1 in favor of the humans would have a huge impact on traffic flow patterns.

"By that time I expect working data service at things like highway lane merges where a remote control center could resolve the issue."

Seems a bit deus ex machina (not intended as a pun). Are you suggesting either a scenario where there are no human-operated cars or a scenario where the traditional human drivers are put under some form of enforced control? In any event, the idea of a robocar generally operating by a certain set of established protocols but that would switch over to some other set when things get sticky would introduce a large amount of uncertainty as to robocar behavior for the human drivers in other vehicles.

Personally, I would love for this all to work out. The availability of robocars would do so much for people like the elderly who can no longer drive but would see a huge boost in quality of life via less isolation if they had a vehicle at their beck and call, particularly given car-centric development patterns in communities. I just have a lot of doubts about a mixed mode of both human-operated vehicles and robocars sharing the same roads. Also, the default of a robocar simply stopping when it is confused and needs more instruction/support or its software has crashed is not a viable option in many motoring contexts and will generate very dangerous situations.

* The Massachusetts State Police tested the concept of a "rolling roadblock" of police cars side-by-side on the Massachusetts Turnpike driving at the speed limit back when Mike Dukakis was governor. The resulting miles of traffic backed up behind the police cars forced the rapid abandonment of the scheme.

First of all, for an unmanned vehicle (no passenger) nothing is a crisis, really. If all else fails, it can pull over and wait for somebody to resolve the situation; some other car will be sent to do whatever job it was doing. Unmanned vehicles won't be on the highway that much, at least not on busy ones. Unmanned operation is used to deliver vehicles (usually a short trip), to refuel or service vehicles (also short), for delivery of items to nearby customers (short, rarely on the highway) and for repositioning (which may use the highway, but always in counter-commute directions or at night).

So let's look at some rare cases. We have a vehicle with a passenger who is unable to help the vehicle, like a blind person. (Though even they can provide a fair bit of help.) I don't see small children being sent outside the mobile data service area, frankly, since parents will want to always have contact. Indeed, unmanned cars would also never go outside the mobile data area. Fortunately there is some level of mobile service on all urban highways and a large fraction of rural ones. These are the areas where car service is going to be offered at first; remote rural roads come later. So I think the odds of finding a surprise merge that is not on the map and does not have cell service are very, very low.

But not zero. Now, as it turns out, while merges should be done in a "take your turn" mode, they usually are not. Usually the right lane is empty, and people even resent those who zoom up it and cut into the line ahead. (I wrote a number of blog posts on this over the years and revised my thinking after learning that full use of both lanes with take-your-turn merging is the highest-throughput merge.) There are lots of timid drivers on the road, and I really don't believe it is correct to say a timid driver will not get through reasonably soon -- not as fast as an aggressive driver, but soon enough. So I don't see any switch from 1:1 to 6:1 or anything similar.

Robocars going the speed limit would of course keep right, and not form a block. They would probably find each other, actually. There are usually people going the speed limit in the right lane already, so we know this can be tolerated, though I think the right course is for the law to allow robocars to travel above the limit when all the other cars are above it. That is the most rational approach.

Cars will stop if they can't find any place to go forward. Then, I suspect, they will get remote assistance.

You don't actually mention the problem I think is more real, which is that in some cities of the world (though not commonly in the US) aggressive driving is essential. For example, I have driven in cities where, if you want to enter traffic from a driveway, you must cut into gaps that are technically too small, presuming the oncoming cars will brake a bit. And indeed, this is what we do, and it works. Things like 3-point turns and many other maneuvers also demand that you expect other drivers not to be blind idiots. And it works.

I will also note that unmanned electric robocars will be able to pull pretty serious accelerations in these situations, and fit into gaps humans would have trouble with.

But even so, the question of driving in these cities is a complex one, and not one that people will solve first. It might be that unmanned cars just never go places they can't get out of without taking risks. I hope not, but it's possible.

Very interesting article but I think that the question of interaction of robocars with pedestrians, bicyclists and human motorists deserves deep scrutiny. A robocar is cautiously going to attempt to avoid collisions -- but humans break laws, play bluff and take risks. That means, for example, that a robocar will stop for a pedestrian illegally stepping off the curb, or a car inching out from a stop sign, or a bicyclist riding against traffic -- while a human driver might only blow the horn and expect the pedestrian to retreat, the motorist to stop, or the bicyclist to dart into a parking space. I see the potential for robocars to bring mixed traffic to a stop, because humans will outbluff them. Where does this lead? To robocars' being allowed only on limited-access highways, where traffic conditions are uncomplicated? To traffic in urban areas being reduced to the speed of bicyclists, because robocars are more cautious about overtaking than humans are? To banning bicyclists, pedestrians and human motorists from roads where robocars are permitted to operate robotically?

I agree that jaywalkers and rule-ignoring cyclists are an interesting challenge. California gives the right of way to cars outside crosswalks (implied and marked) but requires due care to protect the jaywalker anyway. That doesn't mean the vehicle has to stop, but it might well slow down for a jaywalker -- as many humans do.

One example we've seen of robocar programming being adjusted to "the real world" is Google's discovery that they could not navigate busy 4-way stops if they always yielded to other drivers, even those going out of turn. They learned the car had to assert itself when it was its turn, even if somebody else was also trying to go out of turn. What normally happens is the other driver backs off. Sometimes the other driver does not back off, and then the robocar's strategy is not to try to play chicken.
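
As a rough illustration of that assert-then-yield strategy (not Google's actual code, which has never been published; the inputs and names here are hypothetical), the core decision could be sketched like this:

    # Illustrative assert-then-yield policy for a 4-way stop.
    # Not Google's algorithm; all inputs are hypothetical.

    def stop_sign_policy(my_turn: bool,
                         other_creeping: bool,
                         other_committed: bool) -> str:
        """Decide what to do at a busy 4-way stop.

        my_turn:         arrival order gives us the right of way
        other_creeping:  another car is inching forward out of turn
        other_committed: another car has clearly entered the intersection
        """
        if not my_turn:
            return "wait"            # yield when it is not our turn
        if other_committed:
            return "wait"            # never play chicken with a committed car
        if other_creeping:
            return "creep_forward"   # assert our turn; most drivers back off
        return "proceed"

The key property is the asymmetry: the car signals intent by inching forward when it holds the right of way, but concedes the moment the other driver truly commits.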

I have wondered what happens if human drivers start to exploit the fact that the robocars will always yield in a true game of chicken, even if the human driver does not have the legal right of way. They might well do that, but don't forget, the robocars will have passengers who are put out by this. Those passengers might well take the wheel and continue the game of chicken, or note that the car is recording video of the whole situation and, with the other car entirely in the wrong vehicle-code-wise, e-mail it to the police.

If the robocar is vacant it will not try to win a game of chicken, so that will slow it down, but it doesn't care, it's a robot. Eventually it will get an in.

In your post:
"In addition, the human driver will almost surely have the authority to direct the car to perform certain actions the machine would not do on its own, such as exceeding the speed limit. Like a cruise control, people will expect the car to obey its owner, and we all know the reality is that driving the speed limit is actually a traffic-obstructing hazard on many highways. "

Are you serious? Are we abandoning any pretense that the speed limit is actually a limit, above which the risk of serious accidents increases dramatically? These robocars, if they must exist at all, MUST be programmed to never exceed the speed limit. Abandoning speed governors in the early days of motor cars was a bullying move by motordom against advocates of safety, common sense, and users of public spaces. If we are going to get self-driving cars, requiring them to actually obey the motor vehicle laws would be one of the most important ways in which they could improve safety.

Believing in things that "we all know" is equivalent to turning off your brain. The (already too-high) speed limits are actually there for a reason. Joe Schmoe shouldn't have the expertise or the authority to override public safety. The way people drive now, they do it all the time, but they shouldn't be able to lazily make their computer car operate dangerously. Think of it as a safety interlock. A drill-press operator isn't able to use a properly-designed press without the drill bit properly set, because it would be unsafe. Similarly, a car user should not be able to operate a well-designed car in an unsafe way.

There have indeed been attempts to make cars so they can't exceed the speed limit. There was indeed strong pushback against this or any laws requiring it. While machines may not have autonomy, humans do, and we insist on it, and we want our machines to obey their owners, not the government. The responsibility for obeying the law belongs to the humans giving commands to the machines.

Because European regs asked for speed limiters, they exist on a number of cars today, but I do not believe they are widely used. I believe people sometimes use them when giving cars to teens and so on.

The reality is that there is the posted speed limit on US highways, and the real speed limit, which is higher and not at any fixed point. Rather, you will almost never get a ticket under +5mph, very rarely under +10mph, and with gradually higher probability above that. This depends on the road and what it is actually capable of, on traffic, on the moods of police, and sometimes even on the revenue desires of police.

In France, they have a different system. The limit on the Autoroute is 130kph. Almost all drivers go a bit below that limit, some a fair bit below. Police reportedly ticket anybody going over the limit if they see them. This is how it should work; it is called the rule of law rather than the rule of men.

You refer to a pretense that the speed limit is actually a limit. You are correct, that is a pretense. Pretenses are bad.

It is also worthy of note that the safe speed limit actually varies from driver to driver, car to car, road to road, weather condition to weather condition, traffic condition to traffic condition. It is not a single number. Human law insists it be a single number, but that's not the ideal choice when considering machines. In addition, the speed limit is not a cliff, where at 65mph you are safe and 66mph you are unsafe. There is a risk of accidents at all speeds. The speed at the time of impact does affect the damage of the accident of course, but again it is a gradual scale.

With robocars, you have the option to test the system in all sorts of conditions, roads and traffic, and learn the right speed-choice equation for that particular vehicle in its particular situation. That would be the best way to determine what speed the vehicle should go. We are not able to do that for human drivers, so the law lays down a single number, and humans exercise their own moral judgment on what speed to go, assuming the risks if they exceed that number. Because of social convention, there is generally very low risk of punishment if one is going at the "middle of the pack" speed on any given road.
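
To make that concrete, a per-situation speed choice might look something like the sketch below. This is only an illustration of the idea; every factor and coefficient is invented, not drawn from any tested system:

    # Hypothetical per-situation speed selection; all numbers are illustrative.

    def choose_target_speed(design_speed_mph: float,
                            visibility_m: float,
                            friction_coeff: float,
                            traffic_density: float) -> float:
        """Scale a road's design speed down by current conditions.

        design_speed_mph: what this vehicle can safely do on this road when clear
        visibility_m:     current sight distance (rain, fog, night)
        friction_coeff:   estimated tire-road friction (1.0 = dry pavement)
        traffic_density:  0.0 (empty road) to 1.0 (congested)
        """
        speed = design_speed_mph
        speed *= min(1.0, visibility_m / 200.0)  # slow when we can't see far
        speed *= friction_coeff ** 0.5           # equal stopping distance needs v ~ sqrt(mu)
        speed *= 1.0 - 0.3 * traffic_density     # leave margin in dense traffic
        return speed

Unlike a posted limit, the output varies continuously with the vehicle and its situation, which is the point: a fixed number is a crude stand-in for an equation like this.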

There are multiple ways you could change that system if you don't like it. As noted, the US often takes the individualist view and leaves the decision up to the person in control. From a utilitarian viewpoint, the challenge is complex. If you make the machines enforce the law, and go much slower than other traffic, you get three consequences:

  • The machines are moderately safer, possibly reducing their accident rate and severity. If they are programmed well, it reduces it from very, very low to very, very, very low.
  • The machines are seen as impeding traffic, generating some resentment towards them. In some circumstances, if they travel well below the prevailing speed of traffic, they may actually increase the level of accidents caused by others.
  • Because they cause longer travel times, they are less popular than they could be. There are fewer of them and more human driving, which increases the frequency of accidents. Further, people "in a hurry" in such vehicles would disable the self-drive system and take the wheel (and throttle) under human control, increasing the frequency of accidents.

While it's hard to predict the exact result, it's entirely possible that making the vehicles go at the speed limit could cause more accidents, quite a few more than would be saved due to the lower speed. Perhaps in a future day with most cars being robocars, it might be practical to limit the speed and reduce severity, but practical still doesn't mean it's a good idea. However, in that future, the speed would be limited to a new robocar speed limit, not one calculated for human driving. The robots might well go 90-120mph on easy highways, the humans being the obstacles in the road.
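
To see why the net effect is ambiguous, consider a toy expected-harm model. Every number below is invented purely to show the shape of the trade-off; none is a real statistic:

    # Toy model of the utilitarian trade-off; all figures are made up.

    def expected_accidents(robocar_share: float,
                           robocar_rate: float,
                           human_rate: float,
                           induced_rate: float = 0.0) -> float:
        """Accidents per million miles across a mixed fleet.

        robocar_share: fraction of miles driven by robocars
        robocar_rate:  robocar accidents per million miles
        human_rate:    human accidents per million miles
        induced_rate:  extra accidents robocars cause in others, e.g. by
                       travelling well below the prevailing speed of traffic
        """
        return (robocar_share * (robocar_rate + induced_rate)
                + (1.0 - robocar_share) * human_rate)

    # Speed-limited robocars: slightly safer themselves, but less popular
    # (lower share of miles) and an obstacle to other traffic.
    limited = expected_accidents(robocar_share=0.2, robocar_rate=0.05,
                                 human_rate=2.0, induced_rate=0.3)

    # Flow-matching robocars: marginally higher own rate, much higher uptake.
    matching = expected_accidents(robocar_share=0.5, robocar_rate=0.08,
                                  human_rate=2.0)

    print(limited, matching)  # 1.67 vs 1.04 in this made-up scenario

The conclusion flips depending entirely on the assumed uptake and induced-accident figures, which is exactly why the prediction above is hedged.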

Famously, Germany has no speed limit on about half of the Autobahn. It is smaller than Texas, has roughly three times the population, and its roads are not as wide. Yet Texas has a far higher traffic fatality rate. Main reasons: better driver's ed in Germany, and better cars.

However, weren't US speed limits, at least the 55 mph one, despite "55 saves lives", introduced to save fuel, back in the oil crisis of the early 1970s? (Of course, driving 55 mph while using three times the fuel that I use in Germany driving 110 mph, because the Texans are sitting in a Texas-size car, is just silly.)

Yes, the 55 was aimed at fuel, and was tolerated during the gas crisis of the 70s, but the public eventually gave up tolerating it.

110mph uses 4 times as much energy to overcome drag as 55mph, so you have to work at it to use one third the fuel, even comparing an SUV to a small car. But they do work at it!
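
The factor of 4 is straight physics: aerodynamic drag force grows with the square of speed, so the drag energy spent per unit distance does too. A quick check, using the standard drag formula with an illustrative car:

    # Drag energy per metre: E/d = 0.5 * rho * Cd * A * v^2
    # Doubling speed quadruples drag energy per unit distance.

    RHO = 1.2    # air density, kg/m^3
    CD_A = 0.7   # drag coefficient times frontal area, m^2 (illustrative)

    def drag_energy_per_m(v_mps: float) -> float:
        return 0.5 * RHO * CD_A * v_mps ** 2  # joules per metre

    MPH_TO_MPS = 0.44704
    ratio = drag_energy_per_m(110 * MPH_TO_MPS) / drag_energy_per_m(55 * MPH_TO_MPS)
    print(ratio)  # exactly 4.0, independent of rho and Cd*A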

Thank you for the post, man; really great writing. The self-driving software was not directly at fault here, but rather bad human judgment.
