Safety Drivers for Robocars -- the issues and rationale
Tesla's incident has raised a lot more questions about the practice of testing prototype robocars on public roads under the supervision of "safety drivers." Is this putting the public at risk for corporate benefit? Are you a guinea pig in somebody's experiment, against your will? Is it safe enough? Is there another way?
The simple first answer is that yes, it is putting the public at risk. Nobody expects the cars to be perfect, and nobody expects the safety driver system to be perfect.
The higher-level questions are how much risk, and whether it is the sort of risk we can or should tolerate.
Teens
For contrast, consider teenage novice drivers, who are also allowed out on the road: first with an adult supervisor (often a driving instructor, though not required to be), and then, after a ridiculously simple test, on their own. More recently, they have been restricted in what they can do on their own until they become adults.
We usually start the teen out in a parking lot or on a private road to learn the basics, but that quickly stops being useful, and they must go out on real roads.
The driving instructor is very much like the safety driver. Many student driver cars have a second brake pedal for the instructor to use. I remember the first time a car passed me with what seemed like just inches to spare; I swerved away, and the driving instructor used the brake on me.
We allow an unskilled, reckless teen on the road for no other reason than to help that teen build the skills to become a better driver. Statistics show teens remain reckless for several years to come. We allow them on the road so they can become better, more mature drivers. Each individual's training helps that individual, not society; the benefit to society is only that we have no other system for turning people into mature and safer drivers. (Sort of. In some countries, a lot more training is demanded of teens before they hit the road.)
Another analogy is flying -- airliners are flown on automatic most of the time, including on landing, with the pilots overseeing and ready to take over at any time. It works well, though other than at landing, the task is simpler and the pilots have plenty of time to fix things when they take over.
Robots
The same approach has been taken with robots. They also start out in the "parking lot" or on test tracks, but the limitations of this quickly become clear.
Unlike teen training, developing a robocar doesn't just develop the particular car being tested. Everything learned, every improvement, goes into all the cars in the fleet, forever. It's as if sending one teen to driving school taught their whole cohort. That's a nice win.
On the other hand, we have a lot more money and time to develop the cars. The budget is in the billions, not thousands. So if there are safer alternatives, we can afford them. In addition, humans learn fast, and start out much smarter about certain things, like perception and decision making, than robots do. Robots start out being much more diligent and predictable, and have superhuman sensing ability in some cases, and sub-human in others.
Safety driving
Safety driving is a bit harder than driving instruction, because it goes on for so long. The better the robocars get, the harder it is to pay constant attention and the easier it is to be lulled into complacency. You can, and should, work to improve the diligence of safety drivers, but it's also important to measure it. You can ask: what are the odds that a safety driver will miss a safety incident and not take over in time? It might be one incident in 100, or 1,000, or 100,000. You can test people, on the road or in driving simulators, to learn the general capabilities of humans, or of particular classes of humans.
It has also been popular, since being pioneered by Google (drawing on the techniques of the DARPA Grand Challenge teams), to have two humans in a prototype robocar. One sits behind the wheel and constantly watches the road. The other tends to spend most of her time monitoring the software to make sure all is going well, but also looks at the road fairly often. The second person, sometimes called the software operator, can also "spell" the main safety driver in relatively safe situations. If the main driver wants to look away for a couple of seconds or adjust themselves in their seat, it's not that dangerous to do so if the software operator is watching the road and can shout about anything urgent. That may seem unsafe, but it's actually wise to give people short breaks from any monotonous task, even breaks of seconds. Solo drivers do it all the time.
Having two people also makes the work more social, and less boring. It's unlikely the main driver will completely zone out next to their colleague. So you can calculate the performance of the team -- how often will they miss an incident -- and it should be better than that of a solo person.
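To make that concrete, here is a minimal back-of-the-envelope sketch in Python of how a two-person team's miss rate might compare to a solo safety driver's. All of the numbers are invented for illustration, and treating the two people's lapses as independent is optimistic; a real project would measure these rates on the road or in simulators.

```python
# Back-of-the-envelope sketch with invented numbers: how a two-person
# team's chance of missing an incident might compare to a solo safety
# driver's, assuming (optimistically) that their lapses are independent.

solo_miss = 1 / 1_000       # assumed: solo driver misses 1 incident in 1,000
operator_watching = 0.3     # assumed: software operator has eyes on the road 30% of the time
operator_miss = 1 / 100     # assumed: operator misses 1 in 100 incidents while watching

# The team misses only if the safety driver misses AND the operator
# either isn't watching or also misses.
team_miss = solo_miss * ((1 - operator_watching) + operator_watching * operator_miss)

print(f"solo miss rate: 1 in {1 / solo_miss:,.0f}")
print(f"team miss rate: 1 in {1 / team_miss:,.0f}")
```

Even a partially attentive second pair of eyes improves the combined number, which is the point of team driving.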
You are also constantly measuring the performance of the car. How often does it need true help from the safety drivers? It might be once every 13 miles, as was reportedly the case at Uber, or once every 80,000 miles, as Google/Waymo once reported.
Fortunately, the errors of the car and the errors of the safety driver should be reasonably independent events. In some cases, they would be somewhat negatively correlated. For example, the car may be more likely to have a problem in complex situations, but the safety drivers might be more diligent in complex situations and thus be less likely to miss something. However, there is also the bad factor -- as the car gets better, the safety drivers get a bit more complacent and thus their performance drops.
All of this means you can make predictions about the combined system of robot and safety drivers. And if you can get the whole system down to numbers like those of a regular human driver, you've made deploying your test car no more dangerous than sending an ordinary human out driving an ordinary car. In other words, the project is putting the public at risk at the same level as pizza delivery does. Pizza delivery does put people at risk, and we're willing to accept it with the only benefit to society being tasty pie at home.
That human level, in rough miles driven per incident, seems to be around this:
- 1 in 100,000: Small ding
- 1 in 250,000: Insurance ding
- 1 in 500,000: Accident reported to police
- 1 in 2,000,000: Injury accident
- 1 in 80,000,000: Fatality
- 1 in 180,000,000: Fatality on highway
- 1 in 600,000,000: Pedestrian fatality
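To sketch how those pieces combine, read the figures above as rough miles per incident: if the car genuinely needs an intervention every N miles and the safety team misses a fraction p of those, the combined system has roughly one missed incident per N/p miles. A missed intervention is not the same thing as a crash, so the comparison is only approximate, and every input below is an illustrative assumption:

```python
# Illustrative combined-risk arithmetic; every input is an assumption.
miles_per_needed_intervention = 13    # e.g. roughly the figure reported for Uber
team_miss_probability = 1 / 1_000     # assumed: team misses 1 in 1,000 needed interventions

miles_per_missed_incident = miles_per_needed_intervention / team_miss_probability
print(f"~1 missed incident per {miles_per_missed_incident:,.0f} miles")

# Rough human baselines from the list above (miles per incident).
human_baselines = {
    "small ding": 100_000,
    "police-reported accident": 500_000,
    "injury accident": 2_000_000,
    "fatality": 80_000_000,
}
for name, miles in human_baselines.items():
    print(f"  human baseline, {name}: ~1 per {miles:,} miles")
```

With these made-up numbers the combined system falls well short of the human baselines; plug in an intervention spacing like the 80,000 miles once reported by Waymo and, with the same miss rate, the missed-incident spacing lands around 80 million miles. That is the kind of comparison the rest of this section argues projects should be making.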
I will not claim that it is simple to measure the safety performance of the combined system, but once you do, if you can get it decently better than these levels, I don't think we should feel these projects are inherently putting the public at unacceptable risk. It should also be remembered that these projects are not just out to deliver pizza pies. They are trying to change the world of transportation and, once they succeed, save huge numbers of lives. Of course, if they get the numbers seriously wrong, then there is a good case that they are indeed creating unacceptable risk.
Consider another product that we judge as not putting the public at risk, namely cruise control, in particular adaptive cruise control. If you use cruise control, as far as the pedals are concerned, you are just a supervisor. However, we all know that you need to regularly adjust the wheel and sometimes have to hit the brakes. With regular cruise control you are adjusting it every few miles, depending on how busy traffic is. Because it is so frequent, you stay alert, and it's rare to hear of somebody missing a cruise control event and not hitting the brakes when traffic slows up ahead. Tesla autopilot adds lanekeeping, and this has caused some people to ignore the road, but Tesla claims the number is small and the overall performance of autopilot+human is still better than the numbers above.
Improving safety drivers
None of this suggests that projects should not do everything reasonable to improve the performance of their safety drivers. Having two instead of one is just one such technique. Early projects did not monitor the gaze of safety drivers, but as that technology has become more readily available, I believe it will become common. We will also see more research on the performance of safety drivers and what affects it.
Assisting safety drivers
It is also possible for automated systems to assist safety drivers. Usually the driving software is constantly monitoring itself and the car, and can issue audible alerts if something unusual is detected so that the safety driver either takes over immediately or is more diligent.
It is also possible to install completely independent collision detection systems and have them make audio cues when they see something. Of course, inherently these add-on systems will be inferior to the robocar system -- otherwise it's a pretty poor robocar system and why are they using it? -- so there is the risk of having too many false alarms. There is also, oddly, additional risk of complacency -- I can look away because the system will beep if there's somebody in front of me.
With speculation that Uber's fatality was the result of their system classifying the pedestrian as a false positive (which is to say, a sensor ghost that you don't want to brake for), it could be reasonable to have systems make an audible signal whenever they decide to ignore what they think is a sensor ghost.
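As a sketch of what such a cue might look like, here is a hypothetical hook that chimes whenever a nearby, plausibly real detection is about to be ignored as a ghost. Every name and threshold here is invented for illustration; this is not Uber's or anyone else's actual software.

```python
# Hypothetical sketch: when the perception stack decides to ignore a
# detection as a probable sensor ghost, give the safety driver an
# audible cue so they can double-check. Names and thresholds invented.

from dataclasses import dataclass

@dataclass
class Track:
    track_id: int
    classification: str   # e.g. "pedestrian", "vehicle", "unknown"
    confidence: float     # 0.0 - 1.0
    distance_m: float

def chime(message: str) -> None:
    # Stand-in for whatever audio/HMI alert the vehicle actually uses.
    print(f"[ALERT] {message}")

def on_track_discarded(track: Track) -> None:
    """Called whenever the planner decides not to react to a track."""
    # The closer and the more plausibly real the discarded object is,
    # the more the safety driver deserves a heads-up.
    if track.distance_m < 50 and track.confidence > 0.2:
        chime(f"Ignoring possible {track.classification} "
              f"{track.distance_m:.0f} m ahead (conf {track.confidence:.2f})")

# Example: a low-confidence pedestrian-like return the system chose to ignore.
on_track_discarded(Track(track_id=42, classification="pedestrian",
                         confidence=0.35, distance_m=30.0))
```

The tuning problem is exactly the false-alarm risk described above: chime too often and the cue becomes noise, chime too rarely and it adds little.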
More off-road testing before getting on the road with safety drivers
Some will argue that developers should do more off-road testing and really get their numbers up before getting on the road. That's probably not true, for the same reason we allow cruise control: when the system needs lots of intervention, we handle working with it just fine. It may be that there is a "valley of danger" where the system gets good enough to cause complacency, even among professionals, before it gets good enough to not need supervision at all.
There is no question that safety driver operation will and must happen. Even if you could train a car to what you believed was deployment-ready performance in simulators and on test tracks, nobody is going to trust it for deployment without a good record demonstrated on the road, and that has to be done with safety drivers.
The example of cruise control tells us that no system is too poor to deploy with safety drivers. Ironically, it can be argued that some systems might be too good. Perhaps there is a zone of "good" which sits somewhere between "cruise control" and "excellent." Cruise control is acceptable. Excellent is acceptable. Yet somehow, merely "good" might not be.
If that's true, and if Uber was in that zone, the first approach will be to improve safety drivers with monitoring and more team driving. Another approach, just becoming available to us today, is better simulator technology.
Shadow driving
It is possible to have a human drive around in a car with the full sensor suite, then take the logs and feed them to a self-driving system. You can then see whether the system would make any decisions that differ greatly from what the human driver did, and look into the causes. This can help, but it is very limited. You really only get to look at the instantaneous decision making at any point, based on the information up to that point. It's not possible to learn any dynamic properties of the system, because once it decides to steer or accelerate differently than the human did, we can no longer know what a real system would have done on the road.
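A rough sketch of that comparison loop follows. The log format and the plan() function are assumptions for illustration, not any team's real tooling:

```python
# Minimal sketch of the shadow-driving comparison described above:
# replay logged sensor frames through the driving stack and flag
# moments where its instantaneous decision diverges sharply from what
# the human driver actually did.

def shadow_compare(log_frames, plan, steer_tol_deg=10.0, accel_tol_mps2=2.0):
    """Yield frames where the software's choice differs greatly from the human's.

    Each frame is assumed to carry the sensor snapshot plus the human's
    recorded steering angle (degrees) and acceleration (m/s^2).
    """
    for frame in log_frames:
        decision = plan(frame["sensors"])   # what the stack would do right now
        steer_diff = abs(decision["steer_deg"] - frame["human_steer_deg"])
        accel_diff = abs(decision["accel_mps2"] - frame["human_accel_mps2"])
        if steer_diff > steer_tol_deg or accel_diff > accel_tol_mps2:
            yield frame["timestamp"], steer_diff, accel_diff

# Note: this only checks the instantaneous decision. Once the simulated
# choice diverges from the human's path, the logged sensor data no longer
# reflects the world the software would have seen.
```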
You can take this data and turn it into simulator scenarios, and then have the system try to drive in those. That is in fact what is done to build simulator scenarios in many cases. This is difficult, and the result is still not very satisfying, especially if what you want to understand is sensor performance. Simulating sensors is much more difficult and slow than simulating situations, and the reality is that simulating radar is almost impossible, and simulating LIDAR perfectly is also close to impossible. Simulating camera views looks like a good quality video game. Usable, but not the real world, and often too far off the mark.
Still, as I have discussed, there could be great merit in the world building a vast library of simulated scenarios based on the experience of all teams. This would allow a great deal more testing of unusual situations before the cars first go onto the road with safety drivers. But they must go onto the road with safety drivers.
Comments
Karol
Wed, 2018-05-16 11:52
Driver Monitoring System (DMS)
Privacy issues aside, I wonder if/how the improvements to Driver Monitoring Systems (identifying hazardous behavior, analyzing the hazardous behavior and what preceded it, and sending an early warning signal even before the hazardous behavior occurs) could replicate on human-driven cars the benefits you described for robocars: "Everything learned, every improvement, goes into all the cars in the fleet, forever. It's as if sending one teen to driving school taught their whole cohort."
Of course, that's assuming such DMS is widely deployed and easier to build than a safer-than-human AV.
I'd appreciate thoughts on that topic!
Dan
Wed, 2018-05-16 12:04
off-road tests for hazard classification
It seems that recent failures have mostly involved failures to recognize hazards, whether they be pedestrians, fire trucks, gore points, etc. Hazard recognition seems perfectly suitable for off-road testing. For example, you park a fire truck in the road in a private test facility, and have a sensor-equipped robocar with a human driver approach it while recording the sensor outputs. You then feed those sensor outputs into the object classification software and see how well it recognizes the hazard. You then vary the parameters such as driving speed, darkness, type of object, curves in the road, etc., until you have high confidence that fire trucks, gore points, and pedestrians are appropriately recognized under a wide variety of conditions. That way you don't have to use disclaimers such as "our cars don't consistently brake for fire trucks."
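A parameter sweep like that could be sketched roughly as follows; the parameter names and the classifier interface are placeholders for illustration, not any real test harness:

```python
# Rough sketch of the off-road hazard-classification test matrix
# described above. The recorded-log replay and classifier interface
# are placeholders, not any real API.
import itertools

speeds_mph = [15, 25, 35, 45]
lighting = ["daylight", "dusk", "night"]
objects = ["fire_truck", "pedestrian", "gore_point"]
road_geometry = ["straight", "gentle_curve", "sharp_curve"]

def classify_recorded_run(obj, speed, light, geometry):
    """Placeholder: replay the recorded sensor log for this combination
    through the object classifier; return True if the hazard was
    recognized in time."""
    raise NotImplementedError

results = {}
for combo in itertools.product(objects, speeds_mph, lighting, road_geometry):
    results[combo] = None  # fill in with classify_recorded_run(*combo)

print(f"{len(results)} combinations to record and evaluate")
```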
Michael DeKort
Wed, 2018-05-16 17:47
There is much you do not know
Your understanding of public shadow driving and the use of proper simulation is fatally flawed.
Impediments to Creating an Autonomous Vehicle
https://www.linkedin.com/pulse/impediments-creating-autonomous-vehicle-michael-dekort/
The creation of autonomous technology will result in benefits to humankind. Those benefits may be different than what we think now. But in the end creating this technology will have benefits. Chief among them is lowering the accident rates and the resulting injuries and loss of life. It is imperative that we create this technology not only as soon as possible but also as safely as possible. Unfortunately, there are several processes and practices currently being used by the industry that are so problematic they will make it impossible to get to a full level 4 autonomous vehicle. These issues will involve so much labor, cost and reputational damage most companies will not be able to bear them. The safety issues are so significant they will soon severely impact the entire industry. Fortunately, all of these issues are technically solvable. Some of this is clearly evidenced by Waymo’s recent paradigm shift to much more simulation and skipping L3.
Public Shadow Driving for AI and Testing is Untenable and Needlessly Dangerous - This practice will make it impossible to create a fully autonomous vehicle. It is not possible, in either time or money, to drive and redrive, stumble and restumble on all of the scenarios necessary to complete the effort. The other problem is that the process will cause thousands of accidents, injuries and casualties when efforts to train and test the AI move from benign scenarios to complex and dangerous scenarios. Thousands of accident scenarios will have to be run thousands of times each. When the public, governments, the press and attorneys figure this out they will lose their trust in the industry, question its competence and may impose far more regulation and delay than if the industry self-policed. (The Tesla and Uber tragedies demonstrate this.) The solution here is to use proper simulation for at least 99.99% of the effort.
Simulation is Inadequate – There are several issues with the capabilities, configuration and use of simulation and simulators in the industry. The first being that AV sensor system simulation is normally not being integrated with Driver-in-the-Loop (DIL) simulators. When they are integrated it is often not in proper real-time. When testing needs to be conducted using both parts it is usually done in a real vehicle on the test track. (It is for this reason I believe cloud based sensor simulation is problematic). To make matters worse the DIL simulator often does not have a full motion system. Also it appears that tire and road models are not precise enough. These issues will lead to false positives and significant levels of false confidence. The lack of motion cue, tire and road fidelity will largely be hidden. Most of the problems will not be discovered until real world tragedies occur way down the line. The reason for this is the human driver training and testing the car will perform differently when they do not have or expect motion cues. The vehicle will appear to drive properly in simulation. But in the real world there will be timing, speed and angular differences that will manifest themselves as differences in how the driver, vehicle tires and road interact together. These differences will cause enough change to make accidents worse or even cause them. An example of this is when there is a loss of traction. The solution here is to follow aerospace, DoD and the FAAs lead and integrate the AV simulation with a full motion DIL simulator in proper real-time. And to ensure that all of the models are accurate. (Most people think of air travel when I mention aerospace/DoD. Saying it is not near as complex. That is true. What is as complex is urban wargames in simulation where hundreds of entities interact in urban areas in actual real-time. Something not a single simulation product the AV industry has can do as far as I know. And we did it 20 years ago. How is that possible if computers were feeble in comparison to what we have today? Shared memory and an executive that controlled when and how often tasks run).
Accident Scenarios are being classified as Edge or Corner Cases - Here is the Wiki definition of a Corner Case – "In engineering, a corner case (or pathological case) involves a problem or situation that occurs only outside of normal operating parameters—specifically one that manifests itself when multiple environmental variables or conditions are simultaneously at extreme levels, even though each parameter is within the specified range for that parameter." What folks are calling edge or corner cases are the core complex or dangerous scenarios that must be learned in the primary path. Call them exception handling cases or negative testing but they are NOT edge or corner cases. Edge or corner cases would be cases outside the bounds of those normal operating cases. I say normal because these are scenarios that have to be learned because they will or can happen. Whether the scenarios are benign, complex or dangerous they all have to be learned and tested. The concern here is that the proper depth and breadth of engineering and testing is not accomplished because these scenarios are seen as outside the bounds of proper due diligence. This is where corners will be cut to save time and money.
No Minimal Acceptance Criteria - Recently the GAO admonished the DoT for not creating test cases to ensure the minimal set of criteria is known and verified to prove autonomous vehicles perform as good or better than a human. The GAO stated – “The Transportation Department, for its part, said it concurs that a comprehensive plan will eventually be needed. But in a prepared statement published alongside the GAO report, a department official said such a plan is “premature,” because of “the nature of these technologies and the stage of development of the regulatory structure.” It is a myth that most of the scenarios cannot be created because of associated technology. There is almost no correlation between the technology involved and creating test scenarios to ensure that tech is doing what it should. The minimal acceptable scenarios should have already been created and been utilized for the vehicles already in the public domain. The second myth is that the majority of the test scenarios will come from public shadow driving. As I have already stated it is impossible to drive the miles required to do so. The solution is to include a top down effort to create a proper scenario matrix, create the minimal testable criteria needed to ensure these systems are safe and to use that system in the same geofenced progression the systems are being fielded for engineering, test or public use. Regarding the scenario matrix, there is the issue of using miles and disengagements to measure AI and testing progress. Miles and disengagements mean very little and can be misleading without the scenario and root cause data. The primary metrics that should be used are critical scenarios that need to be learned and those that have been learned.
Handover (L2+/L3) Is Not Safe – While there are limited situations where system control, specifically steering, must be handed back over to the human driver, the practice, in general, cannot be made reliably and consistently safe, no matter what monitoring and control system is used. The reason for this is it takes 5 to 45 seconds to regather situational awareness once it is lost. In many scenarios, especially when they are complex, dangerous and involve high speed or quick decisions and actions, the time to acquire the proper situational awareness to do the right thing the right way cannot be provided. The solution here is to skip handover or L2+/L3 activities where they are not imperative.
Remote Control of Autonomous Vehicles – I have seen at least one company introduce a system to remote control an AV. They are doing so using cellular communication systems and without a full motion DIL. While there may be scenarios where this is the best option to assist the driver or passengers and will perform satisfactorily using the current approaches, the system latency and lack of motion cues could cause significant problems. Especially when the scenarios are complex and involve speed, quick movements and loss of traction. The solution here is to leverage what aerospace, DoD and the FAA have done and ensure these issues are remedied or another approach is taken.
V2X Update Rate is Too Slow - The current update rate being discussed most is 10hz. (Or updates 10 times per second). In many critical scenarios that is not often enough. For example - two vehicles coming at each other in opposing lanes, with no median, at 75mph each would require 60hz to deal with last moment issues. If the first message reliability is not 99% and a second is needed the rate moves to 120hz. There are other scenarios which would raise it more. The industry needs to look at the most complex of threads in the worst of conditions and set the base update rate to accommodate that.
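As a rough check of the closing-speed arithmetic (assuming the head-on case above, 150 mph of closing speed; how often an update is actually needed depends on the scenario):

```python
# Distance closed between V2X updates at a 150 mph head-on closing speed.
closing_mph = 75 + 75
closing_mps = closing_mph * 1609.344 / 3600   # ~67 m/s

for rate_hz in (10, 60, 120):
    print(f"{rate_hz:>3} Hz: {closing_mps / rate_hz:.2f} m closed between updates")
```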
Vehicles as Targets for Hacking and Weaponization – I am sure we have all seen cases where vehicles have been hacked. It is not a leap to suggest these systems are prime targets for weaponization. Particularly those systems that remote control these vehicles or where source code is actually being provided to users. While many in the industry are aware that cybersecurity needs to be addressed, what is being missed is the fact that most companies and organizations literally avoid several key cybersecurity best practices. A clear example of that is Privileged Account Management. This has led to almost every hack that has ever occurred. Unless addressed, we will never significantly reduce them.
Hardware Reliability – Building a self-driving car that meets the reliability requirements equal to our current system is one of the most challenging technological developments ever attempted. It requires building a very sophisticated, reliable electromechanical control system with artificial intelligence software that needs to achieve an unprecedented level of reliability at a cost the consumer can afford. Boeing claims a 99.7% reliability figure for its 737. Which is equivalent to about 3,000 failures per million opportunities. A modern German Bosch engine control module achieves a reliability of about 10 failures per million units which is about 6 times worse for a single component than our current system of flawed drivers. This level of quality may be extremely hard to produce in volume and to be cost competitive.
Common Mapping Versions - Map versions have to be common for every user in any given area. We cannot have different services providing different maps for which there are crucial differences in data. For example, changes due to construction will cause system confusion and errors. A solution would be to create a central configuration management process or entity that ensures commonality and the latest versions are being used.
Exaggerated Capabilities – Far too many of those involved in this industry, from those who are creating the technology, the press, oversight organizations and those who are in downstream industries, are exaggerating the current capabilities of these systems. As there are no minimal testable scenarios, even for progressive geofenced engineering or public use, this is all too easy to do. While those exaggerations may lead to funding and improve morale, they create a false level of confidence. Given all of the other issues we discussed, and that sensor systems still cannot handle bad weather, this can only contribute to backlash when tragedies are caused by the issues I have already mentioned. It is neither an exaggeration nor hyperbole to state that if these issues are not remedied, avoidable tragedies will occur. The Joshua Brown accident was bad enough. When a child or family is harmed and the public realizes it was avoidable, that backlash will be significant if not debilitating.
My name is Michael DeKort. I am a former systems engineer, engineering and program manager for Lockheed Martin. I worked in aircraft simulation, the software engineering manager for all of NORAD, the Aegis Weapon System, and on C4ISR for DHS. I also worked in Commercial IT and Cybersecurity.
I received the IEEE Barus Ethics Award for whistleblowing regarding the DHS Deepwater program post 9/11 - http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=4468728
I am also a member of the SAE On-Road Autonomous Driving Validation & Verification Task Force - (SAE asked me to join the group because of my POV on this area and my background)
brad
Wed, 2018-05-16 20:49
Why that headline
Thanks for the detailed post -- it seems to be responding to a lot of different things not in this article, and many things I have not said. In fact, I agree with and have said several of the things above.
I have said many times that both simulation (sensor or post-perception) and shadow driving are useful tools but far from adequate for full development and testing. Currently real world operation (with safety drivers) is the only technique we have available.
What do you refer to on the corner case point?
Minimal acceptance criteria: I don't dispute that people can start working on performance tests, and there is no reason you can't start some basic ones right now. However, there is not much evidence that such a test set would be useful to the leading companies, who already have developed much more extensive test sets and routines -- on test tracks, and in sims, and to a limited extent on roads. A basic test set might be of value to a team just starting out. However, no test set of this sort can as yet prove they "perform as good or better than a human." Proving this is an unsolved problem at present, the subject of entire conferences of research papers. Rand tried to argue that you can't ever do it, though I disagree.
As such, I don't think anybody would be even remotely as good as Waymo at doing this. The government could ask Waymo to help with this but there would be obvious issues of bias there. You can do basic tests now but you want to be careful when you are certifying a moving target.
Miles and disengagements are a very low value metric. I know that Waymo does not care too much about the numbers California makes it report. What it really cares about is issues and interventions that could lead to a safety incident, in particular a "contact." Of course it has its own way of determining that, and it would be very difficult for an outside tester to do that unless the outside tester had a very, very impressive sim and test track. Waymo does find it valuable to track the frequency of required interventions and faults, but the California rules do not understand that concept, nor can they easily.
Much information is yet to be learned about Uber's problem, but the recent leak is consistent with something that might not be found with any external testing. We know Uber's systems do track and identify pedestrians and they have encountered and correctly handled presumably many thousands if not vastly more pedestrian encounters in their testing operations. The leak claims in this case that it tagged the woman as a false positive, but it obviously does not routinely classify real pedestrians as false positives, though it does so possibly much more frequently than it should. Whatever it was that triggered the false positive classification could easily be something that would not be in a test suite, though one hopes it would. (More curious is that, even if it did classify her as a false positive, why it did not correct that impression as she got closer and more obvious in the sensor data.)
Handover is not safe: Again, I have said I concur with Google that indeed, this is risky. So why are you writing this to me?
Remote control: Yes, I met with this company when they were forming, advised them networks were not up to it, and proposed they test the networks before they got too deeply into it. They have modified their plans and I have not seen the more recent ones.
V2X Update: I am a known critic of the utility of V2X, but I don't think increasing the rate is the solution to its problems.
Vehicles as targets: Again, this is something I say all the time, read my recent essay on the "disconnected car."
Hardware reliability: Again, this is something I have said, however, I believe there are solutions which work not by building reliable hardware and software, but rather systems that are expected to fail, and which handle the failures gracefully. As far up the chain as you can go -- ie. handling failures of the failure handling.
Common mapping: Don't agree on that. Maps will regularly get out of date. Anybody who tries to make a car that can't handle a map that has gotten out of date is foolish. Attempts to have central maps have a terrible track record. Google gave up on importing data for google maps and switched to the far more expensive process of building its own maps for a reason. The team who built those maps, which moved to the Chauffeur/Waymo team, decided to ignore all their prior work and build yet another generation of mapping tools.
So once again, since it looks like you are 80% agreeing with me, I don't understand the title of your comment.
Chris Edwards
Fri, 2018-05-18 17:28
FYI Link
Found this which might partially explain that comment:
https://medium.com/@imispgh/my-name-is-michael-dekort-ac19e666231f
Or not.
Anthony
Wed, 2018-05-16 18:12
equivocating
I think you're equivocating on what you mean by "cruise control." Regular cruise control, where you set a speed and the car drives at that speed, is relatively uncontroversial. Especially the type that, for safety reasons, doesn't work below a certain speed. It also works nearly perfectly. Adaptive cruise control, on the other hand, is not uncontroversial. When used on the highway it tends to work nearly perfectly. But some car manufacturers have created adaptive cruise control which even works at low speeds, and this is even more controversial, because sometimes it just doesn't work.
brad
Wed, 2018-05-16 20:22
There is a difference
I agree -- but I think ACC is now a very widely accepted product, in deployment for well over a decade, no stories in the paper about how somebody was killed by ACC.
Anthony
Wed, 2018-05-16 21:05
Tesla?
Weren't a couple of the recent Tesla fatalities related to the adaptive cruise control part of autopilot?
brad
Wed, 2018-05-16 21:26
Tesla fatalities
Of course, but that's autopilot -- ACC plus lanekeep plus FCA plus a bit more -- and we all know that's controversial. But it has also been deployed for several years without regulation, and as yet the government has no plans to regulate it. My point is that the idea of regulating these sorts of technologies before deployment is the highly radical one, which needs justification. One must examine if the pre-regulation will be effective, if it will save lives, what it will cost, and how much it will delay deployment of what is, once it reaches the right performance, a life-saving technology. In the past the decision has been not to pre-regulate.
Anthony
Thu, 2018-05-17 16:13
Putting a car on the road
Putting a car on the road without a human driver is the radical idea. It's nothing like cruise control, and it's nothing like teen drivers. And it's not pre-regulation. The pre-existing regulations already require a human driver. If we want to change that, and we should, the burden is (or should be) on the car manufacturers to prove that it will not cost lives, what it will cost monetarily, etc.
brad
Thu, 2018-05-17 16:32
Burden to prove it will not cost lives?
Generally, regulation in the USA doesn't work that way. Companies do not have to prove to the government their products are safe before releasing them. The government needs to decide they are unsafe, and the company can sue if that decision is arbitrary. There are other countries which work the way you describe. The default in the USA is you can do it unless it's forbidden. States have been moving to remove regulations which would have forbidden it just for language reasons, ie. language that required a driver, and they could write regulations, but the default is they should not until harm has been shown. Actually, even more restrictive places like Europe still only move a little bit on the spectrum. By and large new products are allowed unless dangerous, not forbidden by default. However, it varies.
Anthony
Thu, 2018-05-17 16:55
USA
By regulation I assumed we're really talking about the creation of statutes, not regulations. Allowing self-driving cars for the most part requires statutes. The governors have been able to enact some interim regulations in the meantime, but really it's a job for the legislatures. How that political process works varies greatly, and is generally controlled a lot by special interest groups. But how it *should* work is that self-driving cars shouldn't be allowed on the roads until it can be proven that they won't cost lives.
> Companies do not have to prove to the government their products are safe before releasing them.
That depends a lot on the product. Many states have inspection requirements for vehicles, though.
> The default in the USA is you can do it unless it's forbidden.
The default in the USA is that you can't operate a motor vehicle on the public highways without a license.
brad
Thu, 2018-05-17 17:28
Operate a motor vehicle
No, that is not the default. That is an explicit rule in the vehicle code. We're talking about what's the state of things when the regulations don't speak to them. And that is, if the regulations don't forbid it, it's allowed.
As such, car companies were allowed to develop anti-lock brakes, ESC, cruise control, ACC, FCW, FCA and autopilot without going for permission, even though these all take over the controls from the driver and have safety consequences. ALB was the first of these, and it actually interrupts pressure on the brakes! Harder to sound more scary than that. But of course, as we know, their intention is to make braking better, and they do. (Though some argue they increase driver overconfidence.)
Anthony
Thu, 2018-05-17 17:34
Potato potato
Self-driving vehicles are completely different from all those other things, and in most states they're not allowed.
We should fix that, when and if self-driving vehicles become safe.
brad
Thu, 2018-05-17 18:46
Not allowed?
On what do you base your claim that they are not allowed? When Google began its project, its legal team looked at it in detail and decided that safety driver operation was probably legal in almost all states, and certainly was in California; that even standby supervised operation (the so-called level 3) was technically legal; and that unmanned operation was more in question, but probably legal in many states. However, it was realized that this was not the intention of the vehicle codes -- they didn't ban robots because they simply didn't think to -- and that if operations were begun without talking to the state, it would result in bans, which nobody wanted.
My friend Bryant wrote a law review article suggesting the same. http://cyberlaw.stanford.edu/publications/automated-vehicles-are-probably-legal-united-states.
So what research have you done or are you aware of that supports your claim? Or do you now realize the claim may be in error?
Anthony
Thu, 2018-05-17 20:08
research
Self-driving means there's no human driver at all. Last time I looked at it, which admittedly was years ago (but my point is what the state of the law was before deregulation), most states had problems with it. A law might require the driver to wear a seatbelt. Another law might prohibit the owner of a car from allowing his vehicle to be operated on the public roads except by a licensed driver. Another law might prohibit leaving a vehicle unattended while running. This one wasn't popping up as much several years ago, but another law might prohibit texting, or even using a cell phone while driving.
I looked at Bryant's law review article, and he seems to notice all these problems, yet he concludes that "Current law probably does not prohibit automated vehicles." I'm not sure why.
brad
Fri, 2018-05-18 01:19
Various issues
The general feeling was that if the law allowed you to turn on ACC or lanekeeping and watch the car, there was still a person "driving" the car; they were just using systems to do the driving tasks. So the person who pushed the "go" button was the driver, and had to do what the vehicle code says the driver must do, or get a system to do it for him/her. Allowing full unmanned operation was not as clear in these vehicle codes. Some might allow it, some would not. Nobody imagined it would be wise to start unmanned operation without talking to the states to clarify it, though. But nobody has been ready to try unmanned until Waymo this year.
But as far as carrying a person in the driver's seat who is licensed, that seems to be OK, and was OK in the past, as is operating with a safety driver, in a fair number of states.
Anthony
Fri, 2018-05-18 05:30
Potato potato
As I said, once explicitly, and several times implicitly, self-driving means there's no human driver at all. As you call it, "full unmanned operation." And it seems we're essentially in agreement about this. There need to be regulations that were or will be passed in order for it to start happening.
Assisted driving, where there's a licensed driver in the driver's seat telling the car what to do, is not self-driving. That driver has to obey all traffic laws, which generally includes not using electronic devices (or even having certain things visible from the driver's seat), having an unobstructed view of the road, etc. There's also invariably a requirement that the vehicle is safe to operate. And once you get someone other than the owner driving for a fee, there are likely car rental regulations that have to be followed.
It'll be nice being able to drive your own car simply by putting in an address, making some minor adjustments (suggest lane changes, tell it the cruising speed, etc.), and maybe taking over if something completely unexpected happens. Tesla is definitely working toward this. But until the cars can drive without a licensed human driver in them at all, this won't really revolutionize travel the way self-driving cars will. It'll just make it more convenient.
So bottom line, nitpicking semantics aside, is that we need new laws in order to allow this revolution to happen.
brad
Fri, 2018-05-18 12:31
"Self driving"
I recommend you change your vocabulary, because people will misunderstand it. Most people use "self driving car" as a technology-spanning term, to mean anything that isn't driver assist. That's why I like to use terms like "unmanned" to make it clear when we're talking about a vehicle that can drive with nobody in it. (Or a sleeping person for that matter.) Alain Kornhauser is trying to get people to use Driverless for unmanned and "self-driving" for standby-supervised. His terms are even less clear. But the subject of this article is safety driver operation. That is tricky because you have people who are trying to build some type of self-driving car (all the way up to unmanned capable) but because they are testing it, they have safety drivers. That is legal, and always has been. Even a car that is unmanned capable, operating in that mode, with the human never touching the controls all day. If the human is there it's generally viewed as legal. The question of whether it's legal if the human is not there is moot for everybody but Waymo right now.
Anthony
Fri, 2018-05-18 13:02
Thanks for the suggestion.
sdcsighted
Wed, 2018-05-16 18:53
Teen drivers
I have used your “teen” example as an argument for the need for regulations. I can’t speak for everyone, but I believe testing with a safety driver is necessary, and companies should be allowed to do it... with the caveat that it is being done safely. Framing the argument as “testing with a safety driver is dangerous and should not be allowed” is a straw man IMO.
In my state, people have to take a class, get a permit, go through hours of drivers training with an instructor, practice for many more hours with a parent in the car, then pass a written test and a driving test before they are allowed to have a license and drive alone on public roads. Even after they get their license, there are certain restrictions, like they can’t have any other minors in the car with them unless they are family members etc.
People also have to wear a seat belt in my state. A generation ago, kids could ride around in the back of a station wagon (the law only went into effect in 1986).
Point being, laws were put into place in the interest of public safety, because cars are dangerous. Why should SDC prototypes that are being tested on public roads be treated differently and exempted from any additional regulations?
Michael D. Setty
Thu, 2018-05-17 11:39
Who Cleans the Robocars?
Methinks this Slate author has a good point.
https://slate.com/technology/2018/05/who-will-clean-self-driving-cars.html
Anthony
Thu, 2018-05-17 15:58
Contractors
It would be interesting to have an Uber-style model, where the car drives to the nearest available contractor's location whenever it needs cleaning.
brad
Thu, 2018-05-17 16:33
I describe this in my story from about 10 years ago
Called "a week of robocars." Of course, I understand that most new readers have not read the old stuff.
Ross
Thu, 2018-05-17 15:15
Pointing and calling for safety drivers
I wonder if some variation of pointing and calling used on Japanese trains (https://www.atlasobscura.com/articles/pointing-and-calling-japan-trains) would help maintain safety driver attention.
Do safety drivers have existing specific check tasks, e.g. "traffic light green", "crosswalk clear" or is it more a wait-for-anomaly type of monitoring?
brad
Thu, 2018-05-17 16:34
I don't know about every company's training
But generally safety drivers are charged with thinking like a defensive driver (ie. anticipating potential problems) and checking to see if the car appears to be handling them, and intervening if it does not. I don't know if anybody has given them specific checklists.
Ross Stapleton-Gray
Fri, 2018-06-22 11:37
Hulu as contributing factor...
Safety driver was apparently watching Hulu at the time: https://www.engadget.com/2018/06/22/uber-self-driving-crash-driver-watching-hulu/
Anthony
Fri, 2018-06-22 17:03
might have saved Uber, I think
"this isn't good news for Vasquez or Uber"
Better for Uber than if the driver's statement that "she had been monitoring the self-driving system interface" had been true.
What a dumb thing for the driver to lie about. Dumb to have answered any questions, but especially dumb to lie about something that the police so obviously could, and would, check on.