Driverless Car Summit 2013 Part 1 - Fear of Google


This week I attended AUVSI's "Driverless Car Summit" in Detroit. This year's event, the third, featured a bigger crowd and a decent program, and will generate more than one post.

I would hardly call it a theme, but two speakers expressed fairly negative comments about Google's efforts, raising some interesting subjects. (As an important disclaimer, the Google car team is a consulting client of mine, but I am not their spokesman and the views here do not represent Google's views.)

The keynote address came from Bryan Reimer of MIT, and generated the most press coverage and debate, though the recent NHTSA guidelines also created a stir.

Reimer's main concern: Google is testing on public streets instead of a test track. As such it is taking the risk of a fatal accident, from which the blowback could be so large it stifles the field for many years. Car companies historically have done extensive test track work before going out on real streets. I viewed Reimer's call as one for near perfection before there is public deployment.

There is a U-shaped curve of risk here. Indeed, a vendor who takes too many risks may cause an accident that generates enough backlash to slow down the field, and thus delay not just their own efforts, but an important life-saving technology. On the other hand, a quest for perfection attempts what seems today to be impossible, and as such also delays deployment for many years, while carnage continues on the roads.

As such there is a "Goldilocks" point in the middle, with the right amount of risk to maximize the widescale deployment of robocars that drive more safely than people. And there can be legitimate argument about where that is.

Reimer also expressed concern that as automation increases, human skill decreases, so you actually start needing more explicit training, not less. He is thus concerned with efforts to build what NHTSA calls "level 2" systems (hands off, but eyes on the road) as well as "level 3" systems (eyes off the road, but you may be called upon to drive in certain situations). He fears it could be dangerous to hand driving off to people who now don't do it very often, and that stories from aviation bear this out. This is a valid point, and in a later post I will discuss the risks of the level-2 "super cruise" systems.

Maarten Sierhuis, who is running Nissan's new research lab (where I will be giving a talk on the future of robocars this Thursday, by the way) issued immediate disagreement on the question of test tracks. His background at NASA has taught him that you "fly where you train and train where you fly" -- there is no substitute for real world testing if you want to build a safe product. One must suspect Google agrees -- it's not as if they couldn't afford a test track. The various automakers are also all doing public road testing, though not as much as Google. Jan Becker of Bosch reported their vehicle had only done "thousands" of public miles. (Google reported a 500,000 mile count earlier this year.)

Heinz Mattern, research and development manager for Valeo (which is a leading maker of self-parking systems) went even further, starting off his talk by declaring that "Google is the enemy." When asked about this, he did not want to go much further, but asked, "Why aren't they here (at the conference)?" There was one Google team employee at the conference, but not speaking, and I'm not an employee or rep. It was pointed out that Chris Urmson, chief engineer of the Google team, had spoken at the prior conferences.

In private, others expressed to me frustration at how little information comes out from Google, which has remained mum about any business plans, saying only that it has talked to all major car vendors, and believes cars will be on the road by 2017. Car companies in general believe they are much less secretive. It's normal for car vendors to generate streams of concept cars showing off features that might be found in future cars, and talk openly to generate buzz. They keep certain aspects of new car releases secret, but due to the long development cycles in cars, don't seem as afraid to reveal details 1-2 years before a car comes out. Because Google keeps things close to the vest, they said, it generates uncertainty and distrust, because they can't tell if the effect of Google's efforts will be positive or negative for them.

Another thing I learned from car company insiders was something long-suspected: That projects for self-driving systems inside car companies were greenlit or had budgets increased within a week of Google's car being announced to the world. Whatever Google ends up doing, it clearly lit the fire under the car companies to get the field in motion.

In a related issue, there was discussion of NHTSA's recommendations to states and description of how they are researching robocars. Most people, including myself, read this report as saying that states should (like Nevada and others) allow the testing of prototypes on public roads, but should hold off on permitting operation by ordinary people.

I sat down with Nat Beuse, who helped author the report at NHTSA, and he was surprised that people had that impression, though I still fail to see how else it could be read. At present, analysis of state vehicle codes suggests that both testing and operating robocars are legal, because that which is not forbidden is by default permitted, and these systems are akin to very smart cruise controls. (Running unmanned vehicles is another story, though.)

With this in mind, state efforts which declared that only testing was allowed would in effect ban use by customers. And once a ban is in place, it is very hard to get it reversed, and doing so can take a long time and a lot of study.

During the discussion session, I put forward a different thesis. Today, there are millions of teens with learning permits. With no skill, they are allowed out on the road, often with just a parent's supervision, or sometimes under the supervision of a driving instructor, who usually has both their own backup brake pedal, and the ability to grab the wheel. Google, Continental, Audi and all the other companies who are testing on the road also work this way. The software drives, but a safety driver is sitting in the driver's seat, carefully watching and ready to use the brakes or wheel if there is a problem. I think it's not unreasonable to claim that the latest robocar prototypes are as safe as a teen taking the first try at the wheel, and this "driving instructor" approach might be a better way to look at vehicle testing. NHTSA and the states can then take on their traditional role, which is to wait, and only regulate if safety problems arise which will not be fixed without regulation.

Another issue that was brought up (by myself and others) in the panel of state regulators was how these regulations will affect "garage tinkerers." Dennis Schornack, senior advisor to the governor of Michigan, was keen to point out that the history of the car industry of Michigan was full of innovation that came from solo inventors in their garages, and that a large portion of Michigan's economy came from that. If the regulations are such that only large companies can comply, this vital channel could be cut off.

Today it's hard to see a small group developing a commercial robocar without the resources of a GM or Google. But all the DARPA challenge teams which started this off were small, and in the future, it will be possible for smaller and smaller players to make a difference.

The state regulators said they weren't actually all that keen to regulate, and that this impetus had come down from their legislatures. Let's hope they regulate well.

More coverage continues in part two.


I hope their fears of Google come true.

You hope that Google goes too far and causes an accident that sets back the field, or you hope that Google hurts the car companies?

I assume their fear is that Google is successful and greatly disrupts their business model.

Yes, but as noted there are multiple fears. But my sense in talking to people was not that they were bothered by Google being a competitor, but by the fact that they can't be certain what sort of competitor Google might be. Google has said that they do not plan to manufacture cars, but that's about all that's been said.

Allow me to expand my thoughts. I don't think any current auto manufacturer is in a rush to release a fully driverless car (NHTSA level 4) to the public. They like the small incremental steps that let them keep selling cars to individuals. Google has displayed a desire for fully driverless technology that allows the blind, the elderly, and the young to use cars, i.e. non-licensed people. Once cars are fully driverless, the business model of selling cars to individuals breaks down, as it is more beneficial to users and cheaper to provide as a transportation service; see the Columbia Earth Institute study. Outsiders (Google etc.) are more likely to initiate this change than an established dominant player. This is the rub: they are comfortable with their auto peers because they pretty much all think alike, because their interests are aligned. Google might team up with a "lagging edge" manufacturer, order 25,000 driverless cars, and start a driverless transportation service in CA (San Fran to San Jose, LA, San Diego & Sacramento).

Also on a side note regarding the NHTSA level 3 & 4. Am I crazy in thinking level 4 might be easier in certain regards to implement than level 3? Level 3 has lots of issues, while level 4 seems virtually achieved by Google with their current car.

Well, I've written extensively on this change. However, I think there will be a mix: people using transportation services, people who still want to own their own vehicle (that desire won't vanish very quickly) but own a self-driving one, people who own a car and hire it out when not using it, and people who buy traditional cars.

Just what mix there will be is up to the market, but it won't be 100% in either camp.

But the car companies know this I think, though they may still resist it.

No, level 3 is easier than level 4; in most places quite a bit easier, though it needs a good UI for the transition. I think such a UI is reasonably doable.

Brad, you reported on what Nat said the Feds weren't trying to say -- hold off on permitting operation by ordinary people, etc. Did you get a better understanding of what they were trying to say? Given that Google has been reported as choosing to ignore the report's recommendations, did anyone, Nat included, have any thoughts on that?

I regret not digging into the positive as much, because I was so surprised to hear that they didn't want to be so discouraging. In his view, the states were calling NHTSA on a regular basis asking for guidance on what to do. I get the impression that since they were not quite ready to tell the states, "Go ahead, write laws to make it legal to sell these" they put a focus on telling the states to legalize testing. The charitable view would be that they extended that to mean "and that means don't legalize use" without considering how that would be seen as a very negative message.

As I've examined this issue over time, my initial bias against regulation has grown even stronger. Testing should be viewed like having a driving instructor take a 16 year old out on the roads. Sale and use should be like getting a fancy cruise control. The courts should be more than enough to push vendors to make it safe. On the other hand, NHTSA should try to measure if it is saving lives, and if it is, they should take steps to encourage its deployment (including pushing federal regulations to reduce the actions of the courts if they are going too far.) I must admit I don't usually think of NHTSA as banning things, but rather as encouraging (and then demanding) technologies that improve safety in the aggregate. Unlike the courts, which look at individual cases, their job is to look at the aggregate.

I believe that NHTSA can and should have an important role in this process. They, as far as I can tell, have the best connections to the overall vehicle infrastructure. I am hoping that their seeming lack of interest in legalization is because they don't yet know the issues and appropriate criteria for taking such a step (society makes a lot of assumptions about follow-up when it basically hands the car keys to a 16-year-old and says "go have fun" -- things like continued parental supervision and the like), and not something more sinister like protecting the market of the traditional vehicle makers.

A competent, respected, and influential agency, perhaps NHTSA, is desirable and probably necessary to help the numerous jurisdictions craft consistent, timely, sufficient, and not overreaching regulations on how to manage this new technology. Leaving it to the courts alone seems likely to result in considerable variation and imbalances for quite some time, especially between jurisdictions, until a consensus is established, and we may not like where it ends up. See medical malpractice, for example. Leaving it solely to the providers of the machines and technology is probably even worse. Getting it wrong one way may mean stifling the technology. Wrong the other way means people will die, and the technology may be stifled too.

Something that occurred to me about the differences between the typical 16-year-old and the (current) autonomous car technology is that the 16-year-old is likely to be at the peak of their perception and first-level recognition of objects in the surrounding environment, but weak on experience interacting with those objects. Almost the exact opposite is true for current autonomous vehicles: weaker on perception and recognition of objects in the environment, especially in bad weather or ambiguous situations, but they come with a vast library of experience (Google's technology, anyway). There are other differences too. How should these differences affect the "driving tests" of each party?

The courts aren't a great answer, but pre-regulation, before you know what the final form of the technology will be, is also a bad idea.

My feeling is the best approach is to let the companies fear the courts (and thus do a good job on safety) and you probably don't need to force them to do much at all. If the courts are going the wrong way and killing the tech, you can use regulation to slow that down with liability caps.

Brad wrote: "...pre-regulation before you know what the final form of the technology will be, is also a bad idea."

Eh? Good (best?) regulation should NOT specify or consider the technologies to be used for implementation; rather, it should specify in detail society's needs and expectations of an implementation (you may be right that there is a shortage of good regulation out there). Specifying specific technologies is a crutch, a shortcut (which happens a lot) that can bring clarity and brevity to the intent of the regulation and give specific examples of an acceptable level of performance. It is not really a good substitute for good specification, but it can be easier. It also often means the regulation is obsolete before it hits the streets, which is exactly the problem I think many wish to avoid.

Waiting until after a technology has solidified may mean that the requirements of society are considered subservient to the chops of a particular development team, may inappropriately favor one technology over others, and may deprive implementers, courts, and other jurisdictions of a good understanding of society's needs. I think it's pretty clear what the typical person on the street expects here, and that it's not all that difficult, but somewhat tedious, to express in detail.

I don't think we should care at the regulatory level what technologies are used to produce an autonomous vehicle, whether it's the LIDAR/RADAR/SONAR, one-computer or two-computer approach, or 10 psychic squirrels and a bunch of rubber bands, as long as it exceeds the safety, reliability, performance, and flexibility of our current system, i.e. wetware (human) drivers.

I don't mean the component technologies, I mean the overall system itself. I write many predictions for how this will develop in these pages, but I expect some of my own predictions to be wrong. No regulator is likely to do better than that either.

There are still many variables on how this will work and how it will be used, and it would be foolish to regulate based on today's assumptions. In general, you only regulate after it has been shown that vendors won't make a safe product without regulation.

It's not only for drivers with no skill but also for aged drivers. And not just as victims, but as offenders: data on fatal accidents in Japan shows the worst rate is 8.52 persons per 100,000 licensed people for ages 16-24, with the next worst being 6.31 for ages 65 and over. I believe the situation is much the same in the U.S. and other developed countries, and the percentage of aged people keeps going up.

Brad, what is the possibility of more serious testing (such as multiple test cars) being restricted to a specific geographical area? China made great use of special zones to kick-start new industries. I guess the advantage would be the ability to concentrate resources and inform and educate the public within the test area. The interaction of the general public with this technology may also be studied in more detail.
