DARPA challenge mystery solved and how to handle Robocar failures


A small mystery from robocar history was recently resolved, with the answer revealed at the DARPA Grand Challenge reunion at CMU.

The story is detailed at IEEE Spectrum, and I won't repeat it all, but a brief summary goes like this.

In the 2nd Grand Challenge, CMU's Highlander was a favourite and was doing very well. Mid-race it started losing engine power, and it stalled for long enough that Stanford's Stanley beat it by 11 minutes.

It was discovered recently that a small computerized fuel injector controller in the Hummer (one of only two) may have been damaged in a roll-over that Highlander had, and if you pressed on it, the engine would reduce power or fail.

People have wondered how the robocar world might be different if Highlander had not had that flaw. Stanford's victory was a great boost for their team, and Sebastian Thrun was hired to start Google's car team. But Chris Urmson, lead on Highlander, was also hired to lead engineering, and Chris would end up staying on the project much longer than Sebastian, who got seduced by the idea of doing Udacity. Google was much more likely to have closer ties to Stanford people anyway, being where it is.

CMU's fortunes might have ended up better, but they managed to be the main source of Uber's first team.

There are many stories of small things making a big difference. Also well known is how Anthony Levandowski, who entered a motorcycle in the race, forgot to turn on a stabilizer. The motorcycle fell over 2 seconds after he released it, dashing all of his team's work. Anthony did OK in the end (as another leader on the Google team, and then at Uber) but has of course recently had some "troubles."

Another famous incident came when Volvo was doing a press demo of their collision avoidance system. You could not pick a worse time for it to fail, and of course there is video of it.

They had of course tested the demo extensively the night before. In fact they tested it too much, and left a battery connected during the night, so it was drained by the morning when they showed it off to the press.

These stories remind people of all the ways things go wrong. More to the point, they remind us that we must design expecting things to go wrong, and have systems that are able to handle that. These early demos and prototypes didn't have that, but cars that go on the road do and will.

Making systems resilient is the only answer once they get this complex. Early car computers were pretty simple, but a self-driving system is so complex that it is never going to be formally verified or perfect. Instead, it must be expected that every part will fail, and the failure of every part -- or even every combination of parts -- should be tested, both in sim and, where possible, in reality. What is tested is how the rest of the system handles the failure, and if it doesn't handle it, that has to be fixed.
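
To make that concrete, here is a minimal sketch in Python of what that kind of fault-injection testing in sim could look like. The component names and the handled_safely() hook are hypothetical placeholders of mine, not any team's actual API; the point is simply to enumerate every single fault and every pair of faults, run each through a simulated drive, and flag any case the rest of the system fails to handle.

    import itertools

    # Hypothetical component list -- the real list would come from the system design.
    COMPONENTS = ["lidar", "radar", "camera", "gps", "main_computer", "brake_actuator"]

    def handled_safely(faults):
        """Placeholder for a simulator run: inject the given faults into a
        simulated drive and return True if the vehicle still reaches a safe
        outcome (stops safely or completes the trip). A real implementation
        would call into the team's own simulation environment."""
        raise NotImplementedError

    def enumerate_fault_cases(components, max_simultaneous=2):
        """Yield every single fault, then every pair (or larger combination)."""
        for k in range(1, max_simultaneous + 1):
            yield from itertools.combinations(components, k)

    def find_unhandled_failures(components=COMPONENTS):
        """Return the fault combinations the rest of the system fails to handle.
        Each one of these is a bug that has to be fixed before deployment."""
        return [faults for faults in enumerate_fault_cases(components)
                if not handled_safely(faults)]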

It does not need to handle it perfectly, though. For example, in many cases the answer to failure will be, "We're at a reduced safety level. Let's get off the road, and summon another car to continue the passengers on their way."

It might even be a severely reduced safety level. Possibly even, as hard as this number may be to accept, 100 times less safe! That's because the car will never drive very far in that degraded condition. Consider a car that has one incident every million miles. In degraded condition, it might have an incident every 10,000 miles. You clearly won't drive home in that condition, but the 1/4 mile of driving at the degraded level is as risky as 25 miles of ordinary driving at the full operational level, which is a risk people take every day. As long as vehicles do not drive more than a short distance at this degraded level, the overall safety record should still be satisfactory.
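
As a quick back-of-the-envelope check of that arithmetic, here is a small Python calculation using only the illustrative rates quoted above (they are examples, not measured data):

    normal_rate   = 1 / 1_000_000   # incidents per mile at the full operational level
    degraded_rate = 1 / 10_000      # incidents per mile when degraded (100x less safe)

    pull_over_distance = 0.25       # miles driven in the degraded condition

    expected_incidents = degraded_rate * pull_over_distance        # 2.5e-05
    equivalent_normal_miles = expected_incidents / normal_rate     # 25.0 miles

    print(expected_incidents, equivalent_normal_miles)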

Of course, if the safety level degrades to a level that could be called "dangerous" rather than "less safe" that's another story. That must never be allowed.

An example of this would be failure of a main sensor, such as the LIDAR. Without LIDAR, a car would rely on cameras and radar. Companies like Tesla think they can make a car fully safe with just those two, and perhaps they will some day. Even if those two are not yet safe enough for full driving, they are safe enough for a task like getting off the road, or even getting to the next exit on a highway.

This is important because we will never get perfection. We will only get lower and lower levels of risk, and the risk will not be constant -- it will change with road conditions and with system or mechanical failures. But we can still get the safety level we want -- and get the technology on the road.

Comments

Aside from the movie-plot scenario where the good guy wants to get away in an autonomous vehicle and wants to override it out of "degraded" condition, I think a very real situation is a DoS against a vehicle that spuriously puts it into degraded mode. Waiting on the side of the road for another car might be okay on a Thursday afternoon commute, but it's probably a big deal if it's your ride out of New Orleans. I think a key part of this is that we have to move significantly beyond the "Engine Light" we have today. All vehicles (not just autonomous ones) need to say more clearly what's wrong with them. The autonomous ones will get themselves fixed, and, as you said this week, could go test their brakes if they have doubts. But shouldn't all vehicles be able to report this kind of thing?

I am not sure what DoS you mean here. A vehicle would consider itself degraded if it has detected some sort of fault, such as a bad sensor, another hardware problem, or a strange software fault. Are you suggesting somebody could trick a car into believing it has had a sensor failure by sending strange data to its sensors? That's not impossible, but you would have to be right there in front of the car to do it, which does not scale well.

He's talking about Evil Magic Hackers who can use their super computery wizard powers to Hack Your Car and make it do crazy things.

Remember that we live in a time of magical thinking and superstition. We've moved on a little bit from the 90s, when people thought that they could make your computer explode as soon as you finish reading this sentence (BOOOOM), but they still think that this stuff works like on TV, where some bro with a goatee holds up his iPad and pushes a big button that says "HACK" and now he can remote-control drive your car off a cliff.
