Government Testing Labs Can't Certify Robocar Safety

At Nvidia GTC, the giant European testing lab AVL presented their plan for certifying robocar safety the European way. It is both too much bureaucracy and too little testing and won't work.

Read Government And Independent Testing Labs Aren't The Way To Certify Robocar Safety

Comments

The author doesn't seem to understand that you can't prove ASIL C or D (roughly SIL 2 or 3, respectively) for a system by testing alone.

Even if you run it for 11,000 years without a single disengagement, you've only shown that a dangerous failure rate (PFH) of 10^-8 per hour is plausible for the exact situations the car has seen during those 11,000 years, not for situations that differ even slightly from the testing scenarios.
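
To put rough numbers on that argument (a back-of-the-envelope sketch of my own; the "rule of three" bound and the figures below are illustrative, not taken from any standard): 11,000 years is roughly 10^8 operating hours, and zero failures in that time only bounds the rate at about 3x10^-8 per hour, and only for the distribution of situations actually driven.

```python
# Illustrative back-of-the-envelope calculation (my own sketch, not from any
# standard): how many operating hours are 11,000 years, and what dangerous
# failure rate does zero-failure testing over that period actually bound?

HOURS_PER_YEAR = 24 * 365                      # ~8,760 hours
years_tested = 11_000
test_hours = years_tested * HOURS_PER_YEAR     # ~9.6e7 hours, close to 1e8

# A target dangerous failure rate (PFH) for the highest integrity levels is on
# the order of 1e-8 per hour, i.e. one dangerous failure per 1e8 hours.
target_pfh = 1e-8

# With zero observed failures in T hours, a one-sided Poisson bound (the
# "rule of three") puts the rate below roughly 3/T at ~95% confidence.
upper_bound_95 = 3 / test_hours

print(f"Test hours:              {test_hours:.2e}")
print(f"Target PFH:              {target_pfh:.0e} per hour")
print(f"95% upper bound on rate: {upper_bound_95:.1e} per hour")
# And even this ~3e-8/h bound only applies to the kinds of situations the car
# actually encountered during those test hours.
```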

That's why additional analytical approaches are state of the art in IEC 61508 and in all the standards derived from it, such as ISO 26262. This won't be different for SOTIF.

And as pointed out in the article, it's impossible to identify more than a fraction of all the risky situations that could occur in an open system like public road traffic.

It will never be possible to validate the safety of self-driving cars on public roads unless we take steps backwards from what engineers, and product liability law, consider state of the art.

As a functional safety software developer, I have to disagree with the statement that the developers are the ones who should validate the system. Many functional safety standards specifically prohibit this.

The reason is that, as the designer, you are completely biased: when you model the architecture in your head, you have already designed it in a way you believe is safe. It takes a second person to actually see the flaws in your way of thinking. It has happened to me before.

Furthermore, it is a good idea for this person to be, to some degree, independent of the organisation the designers belong to.

Your employer will not understand what functional safety really means. To them, you are first of all a necessity that costs money. So there will always be pressure from the management side to make things cheaper and finish them quicker.

Boeing actually seems like a pretty good example. If it is true that they down-rated this function from DAL A to something like C or D because Airbus was nine months ahead in developing their competing product, it clearly shows that the decision makers were not the ones who understood what their decision really meant.

The way proposed by the author leaves the decision to someone who, in fact, lacks the qualification to make it. I'm not saying a member of the executive board will intentionally make a decision that costs lives; they just might not know better. But in the end, the designers will not be the ones who ultimately decide at what point there is no more money left to carry on with the development process.

Also, the idea that the designers are the only people who really understand what's going on inside the software runs against everything that's considered to make software safe. Safety-critical software needs to be as transparent and well documented as possible so that it can be validated. The four-eyes principle, so to speak, is one of the absolute foundations of anything safety-related, and software is by no means an exception to that principle.

I think it will be possible. What I am saying, though, is that the European type certification process, where a lab runs a series of tests, is of very limited value. The folks from AVL reject the US approach of self-certification of ISO 26262 compliance.

Developers have their biases, of course, but they are also the only ones who understand the system at the level necessary to test it. As noted, just running it for a million miles is not sufficient; you need more than that. Robocar developers are coming up with new ways to be safe, and that requires new ways to confirm that they are safe.

What you want is for the developers to lead the task, and for independent parties to work with them to correct their biases and catch any deliberate attempts to bypass the system -- or at least for the right incentive structures to be in place to stop that from happening.

If the incentives are set properly, then the management who approve a safety plan will consult the right people to make sure things are being done right, because the cost of not doing so will be too high. They will know more about what they don't know and what they need to bring in from outside.

Pretty much all government safety testing/licensing/certification can be criticized as "too much bureaucracy and too little testing and won't work." I don't see anything special about robocars there.

I don't just mean in the area of products, either. A lawyer can pass the bar exam and still be a completely incompetent lawyer. A physician can pass the medical licensing exam and still be incompetent to practice medicine. These exams aren't sufficient, but they are useful to weed out some incompetent would-be practitioners.

Testing that is done by an independent third party is useful, though. Vendors should not get access to the simulation scenarios in advance. If they fail, they can get access to the scenarios on which they failed. That's not sufficient, but it might be useful to weed out some incompetent robocars.

That said, I'd probably be opposed to making such testing mandatory. I think it'd be a good idea for the robocar manufacturers to get together and form an independent third-party organization to do such testing. But I'm a fan of the self-certification approach.

I don't think you can stop them from getting access to the scenarios in advance. I guess you could make a rule that they must turn off logging in their cars, but there will be strong motives to cheat on that. They won't think they are cheating on the test, just giving themselves the best chance of passing.

But if you let them get copies of the ones they fail, it's much the same. They submit to the test, they fail some scenarios, they fix those failures, they submit again -- and they pass.

Now, AVL proposes to maintain a huge library of tests and make a random selection from it, which would interfere with that plan a bit, but they also don't imagine they could keep that library secret.
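
As a minimal sketch of what such random selection could look like (the library size, scenario names and sample size below are made up for illustration; this is not AVL's actual scheme), each submission draws a fresh subset of the library, so memorising the scenarios you failed last time only covers a small part of the next run:

```python
import random

# Hypothetical scenario library; a real one would hold thousands of
# parameterised simulation scenarios, not simple string labels.
SCENARIO_LIBRARY = [f"scenario_{i:04d}" for i in range(5000)]

def draw_test_suite(seed: int, sample_size: int = 200) -> list[str]:
    """Draw a fresh random subset of the library for one certification run."""
    rng = random.Random(seed)
    return rng.sample(SCENARIO_LIBRARY, sample_size)

# Two submissions draw largely different scenario sets, so fixing only the
# failures from the first run covers little of the second.
first_run = set(draw_test_suite(seed=1))
second_run = set(draw_test_suite(seed=2))
overlap = len(first_run & second_run)
print(f"Scenarios shared between the two runs: {overlap} of {len(first_run)}")
```

With 200 scenarios drawn from a library of 5,000, two runs would typically share only a handful of scenarios, which blunts the fix-and-resubmit strategy without requiring the library itself to stay secret.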

Truth is, if you have a good library of tests, it's better for everybody if it's open and available to all. If a government lab has a new, useful test that people fail on, and they keep it to themselves, and as a result a car runs into that situation in the real world and has a crash -- that's not a good thing.

If we're talking about a government-mandated program, yes, there will be motives to cheat. There also will likely be very large penalties for getting caught cheating. Volkswagen was criminally charged and ordered to pay $2.8 billion for cheating on emissions testing.

If we're talking about a government-mandated program and they fail, you let them know why they failed. You give them a copy of the portion where they failed. You let them submit again, but not immediately. In fact, if it's a severe mistake (one that could have killed someone), you might not let them test again for quite a while. It depends on what their explanation is for why they failed. These tests are not a way to develop the product. You don't submit for testing until you think you're ready.

If we're talking about a voluntary program, then it's pretty much the same thing, except that it's up to the companies themselves to make sure that the QA department that submits for testing isn't giving the developers so much information that they're using the tests as a way to develop the product.

By the time these cars are ready to be used without any driver in a wide array of situations, I think there's going to be little chance of using the tests as a way to develop the product, though. That would be like trying to use a database of prior chess matches to develop a grandmaster-beating chess program, wouldn't it?
