This week I attended the Transportation Research Board Workshop on Automated Road Vehicles, which has an academic focus but still covers many industry-related topics. TRB’s main goal is to figure out what various academics should be researching or getting grants for, but this has become the “other” conference on robocars. Here are my notes from it.
Bryant Walker Smith told of an interesting court case in Ontario, where a truck driver sued over the speed limiter put in his truck and the court ruled that the enforced speed limiter was a violation of fundamental rights of choice. One wonders if a similar ruling would occur in the USA. I have an article pending on what the speed limit should be for robocars with some interesting math.
Cliff Nass expressed skepticism over the ability to have easy handover from self-driving to human driving. This transfer is a “valence transfer” and if the person is watching a movie in a tense scene that makes her sad or angry, she will begin driving with that emotional state. More than one legal scholar felt that quickly passing control to a human in an urgent situation would not absolve the system of any liability under the law, and it could be a dangerous thing. Nass is still optimistic — he notes that in spite of often expressed fears, no whole field has been destroyed because it caused a single fatality.
There were reports on efforts in Europe and Japan. In both cases, government involvement is quite high, with large budgets. On the other hand, this seems to have led in most cases to more impractical research that suggests vehicles are 1-2 decades away.
Volkswagen described a couple of interesting projects. One was the eT! — a small van that would follow a postman around as he did his rounds. The van had the mail, and the postman did not drive it but rather had it follow him so he could go and get new stacks of mail to deliver. I want one of those in the airport to have my luggage follow me around.
VW has plans for a “traffic jam pilot” which is more than the traffic jam assist products we’ve seen. This product would truly self-drive at low speeds in highway traffic jams, allowing the user to not pay attention to the road, and thus get work done. In this case, the car would give 10 seconds warning that the driver must take control again. VW eventually wants to have a full vehicle which gives you a 10 minute warning but that’s some distance away.
Several speakers expressed concern that truly comprehensive testing is not economically possible. Human drivers have roughly one fatality every 300 million km on the highway, and you simply can’t test that long. There were calls for a suitable virtual testing environment. (A whole breakout group on testing also put a large focus on this.)
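The arithmetic behind that concern can be sketched with the statistical “rule of three”: if you drive N km with zero fatalities, the 95% upper confidence bound on your fatality rate is roughly 3/N. A minimal illustration (the 300 million km benchmark is from the talk; the zero-event Poisson confidence model is my assumption):

```python
import math

# Benchmark reported at the workshop: ~1 human fatality per 300 million highway km.
HUMAN_KM_PER_FATALITY = 300e6

def km_needed_for_confidence(confidence: float, km_per_fatality: float) -> float:
    """Distance to drive with ZERO fatalities to show, at the given confidence,
    that the fatality rate is no worse than 1 per km_per_fatality.
    Zero-event Poisson bound: (1 - confidence) = exp(-N / km_per_fatality)."""
    return -math.log(1.0 - confidence) * km_per_fatality

# At 95% confidence this is the "rule of three": about 3x the benchmark distance,
# i.e. roughly 900 million km of incident-free driving.
print(km_needed_for_confidence(0.95, HUMAN_KM_PER_FATALITY) / 1e6, "million km")
```

Which is why a simulation environment, rather than physical road testing alone, keeps coming up.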
BMW’s highly automated driving car has two 4-plane LIDARs looking fore and aft, along with radar, cameras and position sensors. They report “thousands of km” and a drive from Munich to Nuremberg with no incidents, including 32 lane changes — something particularly challenging on the Autobahn, where people in the other lane can be moving a lot faster than you.
Down with the NHTSA levels
The highlight for many was the passionate talk by Adriano Alessandrini of University of Rome La Sapienza who described the CityMobil2 project running in several European cities. There they have full robocars operating on mixed streets with pedestrians and cyclists. These are cars like the Induct I wrote about earlier which have no controls at all, no wheel and never a driver. Instead they just go very slowly, around 10mph among the pedestrians, so they can be safe and stop if anything comes in front. In fact, he reported that children sometimes deliberately throw themselves in front of the vehicle or even between the wheels, and they need sensors to detect that and stop.
Alessandrini expressed opposition to the prevailing view of the workshop that there will be a slow progression through the 4 levels defined by NHTSA (or the almost identical 5 levels of the SAE.) He says that “level 4” — full autonomy — is here today at slow speeds, and it is wrong to imagine it comes last. He’s totally right. As I have often written describing concepts I have called Whistlecars and deliverbots, it is possible to have vehicles that can operate unmanned at lower speeds and on a limited subset of streets sooner than you can solve the problem of driving a human around at higher speeds. Humans are impatient, but unmanned cars are not. As such I have now been saying that the right early answer is “Level 3.5,” which mixes unmanned (level 4) operation for delivery, parking and refueling in limited areas with level 3 (self-driving but with occasional human supervision) and even a little bit of level 2 (self-driving with constant human supervision) as needed. The “levels” are not levels at all, as the technologies will arrive at different times depending on the road you are driving on.
Perversely, I have even wondered if driving in India, one of the most chaotic road systems in the world, isn’t actually more tractable a problem due to the low speeds compared to driving unmanned at 45mph on an arterial but non-limited-access road in the USA.
NHTSA recommended that states not make unmanned operation legal (thus delaying it a lot), but this assumes a step-by-step progression is the path to it.
To add to that, Ron Medford, now in charge of safety at Google after 35 years in government and recent work at NHTSA, reported that Google’s primary goal is a fully autonomous “level 4” car, though he did not rule out doing some other steps along the way.
There was also a great talk by a staffer at the White House’s Office of Science and Technology Policy. The talk was off the record, since only the most senior staff (and the boss) get to speak on policy on the record, but it showed an excellent amount of foresight and understanding of the consequences of robocars for many levels of society. Part of why I liked it, I was told, is that they read this blog over there. Hi, folks.
I participated in the breakout on Liability and Insurance, though I was tempted by many of the breakouts. These sessions were under the Chatham House Rule, so without attribution I will note the following:
- It’s recommended that cars log not just problems, but all the times they did well, and prevented an accident. Such a log will be useful in future trials.
- There are those who think the standard of care will be not just, “would a human driver have done better in this situation?” but also “could a robot have been programmed to do better here?” That’s a very tough standard.
The testing group, while keen on real-world testing, was also very keen on sim, expressing sentiments similar to those in my article on simulators. While real-world testing is an absolute must, sims can test strange situations and save a lot of money. They suggested that a lot of data recorded from the SHRP 2 naturalistic driving studies (which recorded real drivers for a year) could be put into sim. I suggested that perhaps every accident recorded by a Russian dash camera could also be put into sim.
The final presentation was from the DoT on the roadmap they are building for research on these cars. It was a reasonable roadmap, except for the too-frequent overemphasis on “connected car” — the report was even called “connected automation” because it comes from the ITS program office. While DoT did not say it, the presentation included a quote that robocars can’t happen at all without communications, and while they don’t just mean DSRC here, they definitely are thinking of DSRC. Fortunately I wasn’t the only one to voice opposition to that. DSRC and V2V could be useful concepts, but the idea that they have to come first is dangerous.
The DoT report contained some new numbers. One was an estimate of $120B in cost for congestion (in time and fuel) and the other was a $500B cost of accidents, which is double the NHTSA estimate from 2003, so I wonder if it’s correct.
Bosch, Google and the sensor store AutonomousStuff all offered demo rides, which were very popular. The AS ride was done on a rental car — an impressive feat — but it was just showing off sensors, not driving the car.