Report from TRB's workshop on automated road vehicles -- down with the NHTSA levels

This week I attended the Transportation Research Board Workshop on Automated Road Vehicles which has an academic focus but still has lots of industry-related topics. TRB's main goal is to figure out what various academics should be researching or getting grants for, but this has become the "other" conference on robocars. Here are my notes from it.

Bryant Walker Smith told of an interesting court case in Ontario, where a truck driver sued over the speed limiter put in his truck and the court ruled that the enforced speed limiter was a violation of fundamental rights of choice. One wonders if a similar ruling would occur in the USA. I have an article pending on what the speed limit should be for robocars with some interesting math.

Cliff Nass expressed skepticism over the ability to have easy handover from self-driving to human driving. This transfer is a "valence transfer" and if the person is watching a movie in a tense scene that makes her sad or angry, she will begin driving with that emotional state. More than one legal scholar felt that quickly passing control to a human in an urgent situation would not absolve the system of any liability under the law, and it could be a dangerous thing. Nass is still optimistic -- he notes that in spite of often expressed fears, no whole field has been destroyed because it caused a single fatality.

There were reports on efforts in Europe and Japan. In both cases, government involvement is quite high, with large budgets. On the other hand, this seems to have led in most cases to more impractical research that suggests vehicles are 1-2 decades away.

Volkswagen described a couple of interesting projects. One was the eT! -- a small van that would follow a postman around as he did his rounds. The van had the mail, and the postman did not drive it but rather had it follow him so he could go and get new stacks of mail to deliver. I want one of those in the airport to have my luggage follow me around.

VW has plans for a "traffic jam pilot" which is more than the traffic jam assist products we've seen. This product would truly self-drive at low speeds in highway traffic jams, allowing the user to not pay attention to the road, and thus get work done. In this case, the car would give 10 seconds warning that the driver must take control again. VW eventually wants to have a full vehicle which gives you a 10 minute warning but that's some distance away.

Several speakers expressed concern that really full testing is not economically possible. Humans have a fatality every 300 million km on the highway and you just can't test that long. There were calls for a suitable virtual testing environment. (A whole breakout group on testing also put a large focus on this.)
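To put a rough number on "you just can't test that long," here is a back-of-envelope sketch using the standard zero-failure confidence bound (the "rule of three"), with the 300 million km figure cited above. This is my own illustration, not a calculation presented at the workshop:

```python
# How far must a robocar fleet drive, fatality-free, before we can say
# with 95% confidence that its fatality rate is no worse than a human's?
# With zero observed events, the upper confidence bound on the rate is
# -ln(1 - confidence) / distance, giving the "rule of three" (~3/n).
import math

HUMAN_KM_PER_FATALITY = 300e6  # highway figure cited at the workshop
CONFIDENCE = 0.95

required_km = -math.log(1 - CONFIDENCE) * HUMAN_KM_PER_FATALITY
print(f"~{required_km / 1e6:.0f} million km with zero fatalities")
# → ~899 million km with zero fatalities
```

Close to a billion kilometres of incident-free driving per software release is clearly impractical, which is exactly why the breakout groups kept coming back to simulation.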

BMW's highly automated driving car has two 4-plane LIDARs looking fore and aft, along with radar, cameras and position sensors. They report "thousands of km" and a drive from Munich to Nuremberg with no incidents, including 32 lane changes -- something particularly challenging on the Autobahn, where people in the other lane can be moving a lot faster than you.

Down with the NHTSA levels

The highlight for many was the passionate talk by Adriano Alessandrini of University of Rome La Sapienza who described the CityMobil2 project running in several European cities. There they have full robocars operating on mixed streets with pedestrians and cyclists. These are cars like the Induct I wrote about earlier which have no controls at all, no wheel and never a driver. Instead they just go very slowly, around 10mph among the pedestrians, so they can be safe and stop if anything comes in front. In fact, he reported that children sometimes deliberately throw themselves in front of the vehicle or even between the wheels, and they need sensors to detect that and stop.

Alessandrini expressed opposition to the prevailing view of the workshop that there will be a slow progression through the 4 levels defined by NHTSA (or the almost identical 5 levels of the SAE). He says that "level 4" -- full autonomy -- is here today at slow speeds, and it is wrong to imagine it comes last. He's totally right. As I have often written in describing concepts I have called Whistlecars and deliverbots, it is possible to have vehicles that can operate unmanned at lower speeds and on a limited subset of streets sooner than you can solve the problem of driving a human around at higher speeds. Humans are impatient, but unmanned cars are not. As such I have now been saying that the right early answer is "level 3.5," which mixes unmanned (level 4) operation for delivery, parking and refueling in limited areas with level 3 (self-driving but with occasional human supervision) and even a little bit of level 2 (self-driving with constant human supervision) as needed. The "levels" are not levels at all, as the technologies will arrive at different times depending on the road you are driving on.

Perversely, I have even wondered if driving in India, one of the most chaotic road systems in the world, isn't actually more tractable a problem due to the low speeds compared to driving unmanned at 45mph on an arterial but non-limited-access road in the USA.

NHTSA recommended that states not make unmanned operation legal (thus delaying it a lot) but it assumes a step by step progression is the path to it.

To add to that, Ron Medford, now in charge of safety at Google after 35 years in government and recent work at NHTSA, reported that Google's primary goal is a fully autonomous "level 4" car, though he did not rule out doing some other steps along the way.

There was also a great talk by a staffer at the White House's Office of Science and Technology Policy. The talk was off the record, since only the most senior staff (and the boss) get to speak on policy on the record, but it showed an excellent amount of foresight and understanding of the consequences of robocars for many levels of society. Part of why I liked it, I was told, is that they read this blog over there. Hi, folks.

I participated in the breakout on Liability and Insurance, though I was tempted by many of the breakouts. These sessions were under the Chatham House Rule, so without attribution I will note the following:

  • It's recommended that cars log not just problems, but all the times they did well, and prevented an accident. Such a log will be useful in future trials.
  • There are those who think the standard of care will be not just, "would a human driver have done better in this situation?" but also "could a robot have been programmed to do better here?" That's a very tough standard.

The privacy session felt that just having the user click to agree on a privacy policy every time they drive is not much of an answer, though we see it already on in-car navigation systems. They also identified a problem if under-18s are going to ride in these cars, unable to click to agree, and under stricter privacy protections.

The testing group, while keen on real world testing, was also very keen on sim, expressing similar sentiments to those in my article on simulators. While real world testing is an absolute must, sims can test strange situations and save a lot of money. They suggested that a lot of data recorded from the SHRP 2 naturalistic driving studies (which recorded real drivers for a year) could be put into sim. I suggested that perhaps every accident recorded from a Russian dash camera could also be put into sim.

The final presentation was from the DoT on the roadmap they are building for research on these cars. It was a reasonable roadmap, except for the too-frequent overemphasis on "connected car" -- the report was even called "connected automation" because it comes from the ITS program office. While DoT did not say it directly, the presentation included a quote that robocars can't happen at all without communications, and while they don't just mean DSRC here, they definitely are thinking of DSRC. Fortunately I wasn't the only one to call out opposition to that. DSRC and V2V could be useful concepts, but the idea that they have to come first is dangerous.

The DoT report contained some new numbers. One was an estimate of $120B in cost for congestion (in time and fuel) and the other was a $500B cost of accidents, which is double the NHTSA estimate from 2003, so I wonder if it's correct.

Bosch, Google and the sensor store AutonomousStuff all offered demo rides, which were very popular. The AS ride was done on a rental car -- an impressive feat -- but it was just showing off sensors, not driving the car.

Comments

Great summary Brad - your contributions at the conference in the plenaries were much appreciated. There is far too great a connected vehicle lobby and they seem to be protecting their turf, in my opinion, when in fact there is room for both - as long as neither technology nor its supporters seek to dictate what the other does. I do like your prior observation that they are 'orthogonal' - that sums it up nicely for me.

So my thoughts on the $500 billion are that the increase may be due to it now being calculated as a 'societal cost' - which is a combination of the direct cost (i.e. the components that you can actually stick a definitive dollar amount against) and the societal component, which is basically trying to put a cost against the intangible components of the societal harm caused. Some people refer to this societal component as the 'willingness to pay to have avoided the crash'.

If you were not aware, we safety professionals prefer to use the words 'crashes' or 'collisions', as of course 'accidents' suggests that it was unavoidable - yet we know that some 90% of the time the crash was related to driver error.

Or, the $500 billion may just be a revised calc that reflects the recent increase in total fatalities, the inflation aspect, the increased cost of healthcare and emergency services, and the devaluation of the dollar relative to its previous historical value - less likely to be this one, in my opinion.

For comparison, a 2007 study commissioned by Transport Canada estimated that the societal cost of crashes was $62 billion - which was 4.9% of GDP at that time. Since then crashes have reduced and GDP has gone up.

"I suggested that perhaps every accident recorded from a Russian dash camera could also be put into sim"

Ha! Brilliant thought actually. Some of the accidents you see through those things are incredible.

The industry is lucky to have your input on this Brad.

Brad,

I don't believe that real world testing needs to be as onerous, or its goal as unreachable, as what is intimated. Accident severity is very difficult to model, and requires the autonomous vehicles to actually get into accidents. Even the definition of "severity" is unknown: do you use the insured loss, the medical complications, etc.? These definitions will be influenced by factors such as the state's legal framework (Michigan will have much more severe losses due to its unlimited PIP laws). Restricting the analysis to frequency (are autonomous cars less likely to get into an accident than a human driver?) drastically decreases the number of required miles. Having all the companies report their miles through a single entity and creating nationwide standards will also allow the standard to be met more quickly and with more credibility. Lastly, a more structured testing plan, through more accurate stratified sampling, will allow the vehicles to become "licensed" in various driving situations earlier than a comprehensive solution approach would.

This is an area of hot debate. Everybody agrees that testing at the level of the millions of hours between human fatalities is not doable. Certainly not for every release of the code. Over time, the cars will have those millions of hours of experience through many versions. With each new version, programmers always do "regression" testing, which is to say they play through every known problem situation seen in the past, along with a suite of other validation tests, and ensure the new version performs properly. This assures you probably haven't made the system worse with your changes and thus can include the results of prior testing in your evaluation.
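The regression workflow described above can be sketched as a tiny harness that replays every previously logged problem scenario against the new software build. The scenario format and the `plan` function here are invented for illustration, not anyone's actual system:

```python
# Illustrative regression harness: replay each logged problem scenario
# against the new build and confirm it still produces the required
# response. A real suite would replay full sensor logs, not one number.

def plan(scenario):
    # Stand-in for the real planner: brake if an obstacle is close.
    return "brake" if scenario["obstacle_distance_m"] < 30 else "cruise"

# Library of situations the system once got wrong (or nearly wrong),
# each paired with the response the fixed version must produce.
regression_suite = [
    {"obstacle_distance_m": 12, "expected": "brake"},
    {"obstacle_distance_m": 80, "expected": "cruise"},
]

failures = [s for s in regression_suite if plan(s) != s["expected"]]
assert not failures, f"new build regressed on {len(failures)} scenarios"
print(f"all {len(regression_suite)} regression scenarios passed")
```

The point is that the suite only grows: every incident ever seen becomes a permanent gate that each new version must pass before it ships.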

It is always possible that a new change will cause a problem in some unknown or even mundane situation. But you can't drive the car on millions of miles of boring highway just in case that's happened; it's not practical.

You can do things like count "incidents" of various types. Humans have incidents all the time. They are always not seeing something and hitting the brakes a little late (but not too late.) They are always drifting out of lanes, not checking blind spots, pushing their way into traffic, looking down at their controls, etc. Every so often one of these incidents combines with something else to cause an accident. When a robocar has an incident, the cause of that incident gets fixed. An ideal testing result will be zero incidents, unlike humans. A robocar that can go 100,000 miles with zero incidents is vastly superior to a human being, who will have many incidents in that distance even if, on average, no accidents. Testing will reveal some relationship between certain types of incidents and accidents, though.
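As a toy illustration of why zero incidents over 100,000 miles is such strong evidence: the human incident rate below is an invented placeholder, not a measured figure, but the shape of the math holds for any plausible rate.

```python
# If a human driver has a minor "incident" (late braking, lane drift,
# missed blind-spot check) at some steady rate, how likely is it that
# a driver at that rate logs ZERO incidents in a long test? (Poisson)
import math

human_incidents_per_mile = 1 / 1000  # assumed placeholder rate
test_miles = 100_000

expected_incidents = human_incidents_per_mile * test_miles  # 100
p_zero = math.exp(-expected_incidents)
print(f"expected human incidents: {expected_incidents:.0f}")
print(f"chance of zero at the human rate: {p_zero:.1e}")
```

Even with far more forgiving assumptions, a clean 100,000-mile run is something no human driver could plausibly produce, which is why incident counts can carry statistical weight long before fatality counts can.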

I really like the idea of using incidents to determine the safety of the technology. However, are there any readily available and understandable statistics on the incidents of current drivers to prove the safety of autonomous vehicles against? My thought on testing is that it should reduce the societal cost of the technology's introduction by increasing safety and decreasing the monetary cost for producers and buyers. Having more of the certain payments go through automobile insurers rather than Products Liability reduces the costs in two ways. First, auto insurers will have the proof to lower premiums in conjunction with the technology, rather than waiting years until the technology has "proven" itself in the marketplace. Second, approximately 40% of Products Liability costs have gone to lawyers, while only 5% of auto insurance losses go to lawyers.

Full disclosure, I'm an actuary at a P&C company. But without myself or my company having anything to gain, my work has just been on the personal side. I'm sure there are lots of pieces I'm not thinking of, but I wrote down my ideas in a little more detail: https://nebula.wsimg.com/22192ba0b28f4ce45d0b59806c600bc2?AccessKeyId=485B96A7A18302084F62&disposition=0 Thanks!

Hi Brad,

Thanks for your notes. From what you write about "putting every accident recorded from a Russian dash camera into simulation", I thought you may like this video: http://youtu.be/fDwu04T3qGI

The video shows how lidar data can be directly translated into simulated scenarios. You can then use these scenarios for further processing and virtual experimenting (e.g. changing traffic and road conditions, putting various virtual sensors on the vehicles, adapting your algorithms, etc.)

If you drive long enough with this setup and/or mount this setup on many vehicles, you will definitely also capture accidents.

The video is a preview of a joint product TASS International and IBEO are preparing. It is based on IBEO laser scanners and the simulation platform PreScan, which is the industry-standard simulation tool for developers of ADAS and active safety systems. Of course, PreScan can also be used by developers of self driving cars.

You may also want to read the joint press release of TASS and IBEO at: http://www.tassinternational.com/news/ibeo-automotive-and-tass-international-link-their-technologies-next-generation-driver

Curious about your thoughts on this... any feedback or suggestions from your side are more than welcome! ;-)

Best,
Martijn

Hi Brad! I caught a lot of the workshop via video (hopefully the rest will become available at some point) and fortunately caught many of your highlights, including your big moment regarding “connected automation” (sorry, they didn't focus the camera on those with questions/comments :). I quite agree that there seems to be an odd blind spot of sorts regarding “connected automation”'s role at the DoT. I suspect that some of it is historic: a lot of money has been spent on things like the Michigan connection study (underway) and even things like the California demo a while back. Some of the perception technologies used now were not (cheaply) available then, etc. However, technology has moved on and V2V is clearly not "necessary": until cell phones :-) the only communication drivers have had with each other has been almost exclusively visual (do Google or other prototypes detect horn honks?). I also agree that resolution of the many V2V issues may be much more difficult than autonomy without it. Hopefully, they will get past that soon.

Cliff Nass's presentation was also a favorite of mine, especially as he hit on some of the human factors issues that have been missing in many of the discussions on successful autonomous cars. The event in general seemed to be more sensitive to HF issues than in the past.

I would be interested in more on the Office of Science and Technology Policy presentation (not available on video) and how their thinking compares and may impact the DoT thinking, and is any of this related to the recent WSJ article on Google getting pressure to "slow down"?

And while the NHTSA levels may not have been liked much, a whole lot of people were using them. What sort of discussion was there on these at the NHTSA report feedback session? - also not on video.

Most people liked the levels. The presentation suggesting they were not the right approach was contrarian. I happen to agree with it.

The OSTP view had many similarities to my own as well, and as such had differences from the DoT views. There is not one DoT view, though.

I have not seen human factors discussion as missing. That's one thing NHTSA's been touting from the very start. Google has a large human factors team as do many other groups.

I had to miss the DoT feedback session.

Thanks a lot for the slides and links. That was really interesting and it confirmed my point of view about inter-vehicle networking.
