NHTSA Regulations part 4: Crashes, Training, Certification, State Law, Operation, Validation and Autopilots
After my initial reactions and Overall Analysis, here is a point-by-point consideration of the second set of elements from NHTSA's 15-point certification list for robocars. See my series for other articles, or the first half of the list.
In this section, they remind vendors that they still need to meet the same standards regular cars do. We are not ready to start removing heavy passive safety systems just because the vehicles get into fewer crashes. In the future we might want to change that, as those systems can make up 1/3 of the weight of a vehicle.
They also note that different seating configurations (like rear-facing seats) need to protect passengers just as well. It's already the case that rear-facing seats will likely be better in forward collisions. Face-to-face seating may present some challenges in this environment, as it is less clear how to deploy the airbags. Taxis in London often feature face-to-face seating, though it is less common in the USA. Will this be possible under these regulations?
The rules also call for unmanned vehicles to absorb energy the way existing vehicles do. I don't know whether this is a requirement on unusual vehicle designs for regular cars or not. (If it were, it would have prohibited SUVs, whose high bodies can cause a bad impact with a low-bodied sports car.)
Consumer Education and Training
This seems like another mild goal, but we don't want a world where you can't ride in a taxi unless you are certified as having taken a training course -- especially one for which you have very little to do. These rules are written more for people buying a car (for whom training can make sense) than for those just planning to be passengers.
Registration and Certification
This section imagines labels for drivers. It's pretty silly and not very practical. Is a car going to have a sticker saying "This car can drive itself on Elm St. south of Pine, or on highway 101 except in Gilroy"? There should be another way, not labels, to communicate this, especially because it will change all the time.
This set is fairly reasonable -- it requires a process describing what you do to a vehicle after a crash before it goes back into service.
Federal, State and Local Laws
This section calls for a detailed plan to assure compliance with all the laws. Interestingly, it also asks for a plan on how the vehicle will violate laws that human drivers sometimes violate. This is one of the areas where regulatory effort is necessary, because strictly speaking, cars are not allowed to violate the law -- doing things like crossing the double-yellow line to pass a car blocking your path. Here the proposals talk about doing this but wimp out. There should be an explicit declaration that "it's legal to violate the vehicle code if the vehicle can assure it is safe and necessary for the free flow of traffic." If there isn't such a declaration, there should be a plan to get one, perhaps involving a system where vendors can submit examples of times they need to violate the law, and a requirement that states allow them if safe.
It is true that nobody has a list of all the situations where this is needed, so we can't write that list today, but teams with a lot of road miles will already know most of the answer.
This may be much bigger than the regulators expect. I have noticed many areas of the existing vehicle code which probably should not apply to robocars, as long as they act safely. This includes things like rolling stops at intersections known to be clear of other vehicles, left and right turns where they are not normally permitted (when safe), and much more.
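To make the submission-and-approval idea above concrete, here is a minimal hypothetical sketch of such a registry. All the names here (`CodeException`, `ExceptionRegistry`, the example rule text) are my own illustrations, not anything from the NHTSA document:

```python
from dataclasses import dataclass, field

@dataclass
class CodeException:
    """One vendor-submitted case where the vehicle code may be safely violated."""
    rule: str          # the vehicle-code rule being relaxed
    situation: str     # when a robocar may deviate from it
    condition: str     # the safety condition the vehicle must assure

@dataclass
class ExceptionRegistry:
    """A registry a state might keep of approved exceptions."""
    entries: list = field(default_factory=list)

    def submit(self, exc: CodeException) -> None:
        # In reality, submission would trigger review before approval.
        self.entries.append(exc)

    def permitted(self, rule: str) -> bool:
        # Is any approved exception on file for this rule?
        return any(e.rule == rule for e in self.entries)

registry = ExceptionRegistry()
registry.submit(CodeException(
    rule="no crossing double-yellow line",
    situation="stopped vehicle blocking the lane",
    condition="oncoming lane verified clear",
))
print(registry.permitted("no crossing double-yellow line"))  # True
```

The point of the sketch is only that the catalog can grow over time as vendors encounter new cases, rather than requiring anyone to enumerate the full list up front.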
Operational Design Domain
These regulations, while making the mistake of codifying the SAE levels into law, do make a serious effort to recognize that the levels are only a small part of the story in describing what a robocar can do. The levels focus on the amount of human supervision involved in operation, though in reality, for almost all interesting long-term applications, that amount is very low to zero.
To mitigate this, the regulations define what they call the Operational Design Domain (ODD), which roughly means where the vehicle can drive -- not just in terms of geography, but also speed, weather, road types and more.
Cars will indeed vary in what they can do based on these factors, but I don't believe this will be something static or easy to specify succinctly. I suspect many cars will evaluate some roads on a case-by-case basis: specific issues with each road will be evaluated, and a decision made, either by algorithm or by people, about how and when the vehicle can drive that road. And as expected, that answer might vary with the weather.
I suspect teams will be tracking the list of roads they do and don't handle, and constantly improving that list. In addition, when they find problems with their cars, those problems may apply only to certain roads, so they won't rate the car as generally unsafe, just unsafe on roads with certain characteristics.
There will be some broad rules which will allow roads and situations to be described in words or code, but some may be human decisions as well. A team may simply be uncomfortable with a situation without having a hard and fast rule about why they excluded or added a road.
So while a car should understand its "ODD," it may not be easy to express what that is. If the law requires the ODD to be described in "plain language," as these regulations do, that may be difficult.
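To illustrate why a real ODD resists a plain-language summary, here is a minimal sketch of one way a team might represent a fragment of one. Everything here (the class names, fields, and the Gilroy exclusion, which echoes the sticker example earlier) is hypothetical, not drawn from the regulations:

```python
from dataclasses import dataclass

@dataclass
class RoadSegment:
    name: str
    road_type: str       # e.g. "highway", "arterial", "residential"
    speed_limit_mph: int

@dataclass
class Conditions:
    weather: str         # e.g. "clear", "rain", "snow"
    speed_mph: int

class OperationalDesignDomain:
    """Allowed road types, weather, and a speed cap, plus per-road
    exclusions decided case by case (by algorithm or by people)."""

    def __init__(self, road_types, weather, max_speed_mph, excluded_roads=()):
        self.road_types = set(road_types)
        self.weather = set(weather)
        self.max_speed_mph = max_speed_mph
        self.excluded_roads = set(excluded_roads)

    def permits(self, segment: RoadSegment, cond: Conditions) -> bool:
        # Every dimension of the ODD must be satisfied at once.
        return (segment.road_type in self.road_types
                and segment.name not in self.excluded_roads
                and cond.weather in self.weather
                and cond.speed_mph <= self.max_speed_mph)

odd = OperationalDesignDomain(
    road_types={"highway"},
    weather={"clear", "rain"},
    max_speed_mph=65,
    excluded_roads={"US-101 through Gilroy"},  # a case-by-case exclusion
)

hwy = RoadSegment("I-280", "highway", 65)
print(odd.permits(hwy, Conditions("clear", 60)))   # True
print(odd.permits(hwy, Conditions("snow", 40)))    # False: weather outside ODD
```

Even this toy version combines set membership, numeric thresholds, and a hand-curated exclusion list that changes over time -- exactly the mix that is awkward to pin down in "plain language" on a label.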
Object and Event Detection and Response
The regulations group a huge fraction of a self-driving system's components into what they call the OEDR. These functions cover what roboticists would roughly call mapping (with localization), the perception system, and the path planning system -- and more. This is the area of the most research and uncertainty.
This is an area which frankly should stay outside the realm of regulation until it is much, much, much further along. NHTSA should not presume it has created a proper list of situations to handle. Instead, it would make sense for NHTSA to create a clearinghouse for the collection of unusual road situations, which teams could use as a reference in pursuing the main goal: to handle well all the things you see with any frequency on the roads, and to handle safely those things that are far less common or never encountered before.
The regulations should just set the goal. The list of tasks does not belong in them.
Once again, the requirement for detailed documentation of all the reasoning precludes the use of machine learning systems.
Crash Avoidance -- Hazards
Here the report does reference an external document, but it is not clear whether it demands handling of all of these, or points to the list only as a useful reference. The best approach would not be for the government to try to list all the unusual road situations that might be dangerous, but to encourage industry to work together on a good list.
Fall Back (Minimal Risk Condition)
Another reasonable requirement that is still too detailed. Everybody already knows they need to deal with system failures and get to a safe state. Indeed, everybody knows that even if their own systems were somehow perfect, cars are going to be hit and malfunction for purely physical reasons in some cases.
These ones are OK -- a requirement that the developer have a way of testing that the system works and can handle strange situations.
This has been covered in a different section and will get more analysis later as well.
Rules for autopilots and other lower-level vehicles
These rules don't cover autopilots and super cruise controls quite as clearly, but they do add a new level of regulation to these systems. They seem already influenced by the Tesla accident, and there are suggestions that countermeasures against too much relaxation by the supervising driver may become mandatory.
These systems are somewhat more mature than real robocars, and several are already on the market, so the arguments against regulating a technology before it's out are not as strong here. At the same time, this is quite early in the game. While not regulating the driving system itself, these new regulations add a lot of bureaucracy to the development of autopilots. This may actually make the "Comma One" neural network autopilot, described last week, illegal.
In the weeks to come I will look at NHTSA's model for state regulations and their plans for the future.