Issues in regulating robocars, and the case for a light hand

All over the world, people (and governments) are debating regulations for robocars: first for testing, and then for operation. It mostly began when Google encouraged the state of Nevada to write regulations, but now it's in full force. The topic is so hot that there is a danger regulations will be drafted long before the first commercial deployments of the technology take shape.

As such, I have prepared a new special article on the issues around regulating robocars. The article concludes that, in spite of frequent claims that we should regulate and standardize even before the technology has been on the market for a while, this is in fact both a highly unusual approach and possibly even a dangerous one.

Read:

Regulating Robocar Safety: An examination of the issues around regulating robocar safety and the case for a very light touch

Comments

"There is a sad history of regulation where large players actually encourage regulation and participate in the regulatory process. This often results in regulations which are acceptable to the large players -- who have large resources to deal with procedures and bureaucracy -- but create a barrier to entry by small innovators."

The Internet is an almost perfect example of this, and hopefully it is the model that robocar deployment will emulate.

Prior to the Internet, the officially sanctioned networking proposal was the ITU/OSI standards, developed by committee and slowly being implemented across many large businesses, chiefly telecoms. The standards were typical of committee-based efforts and immensely difficult to implement in any sane and compatible manner.

In the meantime, to get actual work done, the DARPA TCP/IP protocols and Ethernet were deployed. Because these became IETF standards, which were based on the principle of adopting things demonstrated to actually work, it was found you could actually get work done with them.

But it took the better part of a decade for the official approach to peter out and for everyone to acknowledge that the Internet (as it came to be called) was simply the better, cheaper, more pragmatic approach.

Brad can of course provide a far better history of that process than I can, as he was intimately involved in it :-)

Indeed, computers offer us more lessons than the history of cars, but robocars are not quite the same as either. This is a safety technology, but also a risky one. To provide that safety, it has to grow like a computer technology, not a highly constrained one.

I got to this page from having read your article at:
http://robohub.org/what-does-the-vw-scandal-mean-for-robocars/

What really hit me was this:
"Vendors (large, reputable ones, at least) have strong motives not to lie on self-certification, both because they are liable for the safety failures that are their fault, and because they will be extra liable with possible punitive damages if they deliberately lied."

But VW is not an isolated case and the threat of liability has not been enough to keep the car industry honest or to keep us safe.

Consider something much, much simpler than a self-driving car, like the car ignition switch. Documents show that GM knew it had a liability as early as 2005. By 2010, Brooke Melton was dead because of that defect. GM fought the resulting court case every step of the way. It wasn't until 2015, a decade after first becoming aware of the problem, that GM attempted a "compensation fund," and even then it only offered "compensation" for 124 deaths its product caused. In the resulting congressional hearing, Senator Boxer said that "[Mary Barra didn't] know anything about anything" and "If this is the new GM leadership, it's pretty lacking." GM never really showed fear of liability. Sure, GM CEO Mary Barra did finally get around to admitting that "terrible things happened," as if killing people in what should have been preventable deaths were simply a "terrible thing." But killing people still isn't enough to make GM's Bob Lutz think twice about his claims that Apple can't improve on what GM already does.

That was just a simple ignition switch: a problem waiting to kill people, left in several car models for nearly a decade.

What if we get more complex than that? The Jeep Cherokee isn't at the level of complexity of a self-driving car, yet hackers could remotely stop its engine while it was going 70 mph on the highway.

VW hid its "cheat device" code in the complexity of everything that makes up its engine software, such that regulators never knew about it for years.
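
To make that concrete, here is a purely hypothetical sketch of how a test-detection branch can hide among ordinary-looking engine calibration logic. This is not VW's actual code; the function name, inputs, and thresholds are invented for illustration, though the real defeat device reportedly used signals such as steering wheel angle to recognize a dyno test:

    # Hypothetical illustration only -- not VW's actual code.
    def select_emissions_map(steering_angle_deg: float, elapsed_s: float) -> str:
        # Official dyno test cycles keep the steering wheel centered and run
        # for a bounded duration; real-world driving almost never does. To a
        # casual reviewer, this reads like any other calibration branch.
        on_test_stand = abs(steering_angle_deg) < 1.0 and elapsed_s < 1400
        return "low_nox_test_map" if on_test_stand else "full_power_road_map"

Buried among thousands of legitimate calibration branches, a check like this is nearly invisible to an auditor who doesn't already know to look for it.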

Oddly enough, my Android phone seems to get better code auditing than the cars being released this year. The reason is that the majority of the code that makes up Android is released under open-source licenses for anyone to audit. But it is the poorly audited proprietary code of a car that I have to entrust my life to.

Before we add the complexity of self-driving cars, we need to ask the following questions:

(1) How complex is too complex for a car company to self-audit? How often do "terrible things" have to happen before a car company's self-audit can no longer be trusted?

(2) How complex is too complex for government regulators to audit?

(3) How complex is too complex for a small group of third parties under NDA to meaningfully audit?

(4) How complex is complex enough that the code should be required to be available under terms similar to Android's?

Let's meaningfully address the problems with the current level of complexity in cars before raising it to that of self-driving cars.

It is much simpler than that. If a car vendor self-insures its self-driving cars, then when they cause an accident (which will be completely clear in the 3D recordings of the accident) the vendor will pay, just like the insurance company of a driver who causes an accident. So if vendors lie about how safe the car is, they are lying to themselves from a financial standpoint (though also lying to customers and the public from an injury standpoint, since of course you still don't want to be injured, even if they will pay for it).

So there is no incentive to lie. It's not a question of whether you will get caught. If there is an accident, it doesn't matter why the software failed, whether due to an honest bug or a deliberate one. You just pay. You pay more if it was deliberate, but even if not caught you still pay. So what is the value in lying?
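
As a back-of-the-envelope illustration of that argument (all figures below are hypothetical; the point is the structure, not the numbers), a self-insured vendor's expected payout depends only on the true crash rate, not the claimed one:

    # Hypothetical numbers for illustration only.
    fleet_km = 1_000_000_000     # kilometres driven by the fleet per year
    true_crash_rate = 2e-7       # actual at-fault crashes per km
    claimed_crash_rate = 1e-7    # what a lying vendor advertises
    avg_payout = 250_000         # average liability payout per crash, in dollars

    # A self-insured vendor pays for every at-fault crash regardless of
    # what it claimed, so its expected cost depends only on the true rate:
    expected_cost = fleet_km * true_crash_rate * avg_payout
    print(f"Expected payouts: ${expected_cost:,.0f}")  # $50,000,000

    # Understating the rate (claimed_crash_rate) changes nothing about this
    # number; the lying vendor is only deceiving itself. And if the lie is
    # proven, punitive damages push the cost higher still.

Honest and dishonest vendors face the same expected payout; lying only adds the risk of punitive damages on top.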
