Uber right turn, what government should do and minimum standards for robocars


Yesterday we saw the state of Arizona kick Uber's robocar program out of the state. Arizona worked hard to provide very light regulation and attracted many teams to the state, but now it has an understandable fear of political blowback. Here I discuss what the government might do about this and what standards the courts, the public or the government might demand.

Waymo / Jaguar

Waymo's big announcement today was a partnership with Jaguar to base their next vehicle on Jaguar's expensive electric car. They are going to buy a lot of cars. I think it's a surprising choice. While the luxury of such vehicles is nice, and electric makes sense, I somehow suspect that for a taxi people prefer vehicles like the minivan they now use, with high seats, easy entry and automatic doors. Less green, though.

Making a right turn

Some folks who have been investigating the video (I hate to watch it myself) have suggested that the car shows signs of starting a turn, and that the right turn indicators might be on. This provides some context which might offer an explanation, though not an excuse, for the system failure. In other words, very sloppy code, planning to exit the lane it was in, erroneously decided it need not treat a pedestrian in its soon-to-be-former lane as something to be avoided. We're still at the point of speculation, and still waiting for Uber to release the real logs of what transpired in their spirit of full cooperation.

What should the government do?

Some have reacted to this tragedy by looking for more regulation. It does appear that Uber has lived up to its reputation as a "cowboy" and put the public at unnecessary risk. That is my standard for when regulation can make sense -- when it is shown that the companies can't be trusted to act reasonably without it.

At the same time, it's not at all clear that any regulatory body would be better at writing safety rules than the companies are, or that the field would not change so quickly that the rules became obsolete before long.

As such, I hold to my existing position of relying on the existing regulation which exists in tort and traffic law. It's already illegal to hit people, and always will be. There is evidence to suggest Uber will fail the usual tests in the courts on what good and best practices are, and on what reasonable duties of care are. If so, they will be punished, and quite severely. In fact, I think that Uber's self-driving program may receive the "death penalty" because of this incident, in that the public will not trust it and management will shut it down or entirely revamp it.

If Uber receives a severe punishment in traffic or civil courts, and gets blocked from operation in other states, this will provide a strong message to all other players. A stronger one than NHTSA regulations might offer. If this does indeed become so strong a penalty that Uber's self-drive project gets an effective death penalty, I can't imagine the need for anything stronger than that. The public regulators might have a better moral sense than Uber, but they won't be better at writing rules on how to keep safe from a technical standpoint. Certainly not rules that will still make sense in 2021.

That's why companies like Waymo and Zoox actually hired former bosses at NHTSA, the National Highway Traffic Safety Administration. While obviously these men help their employers navigate the regulatory environment, part of their role is to help the companies design safety protocols. (They are actually not allowed to do any lobbying or other professional interaction with their former colleagues for 3 years.)

Many may not appreciate that this is the norm in regulating automotive safety technologies. Pretty much all the technologies out there -- seatbelts, airbags, anti-lock brakes, stability control, blindspot warning, adaptive cruise control, forward collision warning, lane keeping and more -- were developed and deployed entirely without regulation, and then sold for years, even decades, before regulations were applied. When the regulations came, they typically said things like, "This technology is so good, we're going to require every car to have it at some basic level." It would be highly unusual for regulation to describe how to build the technology before it is out with customers.

I know that some people will feel that some regulation should have been there, but at least on the surface (we don't know enough yet) there are few reasonable regulations I can think of that would have stopped Uber. While Uber's car did not perform to reasonable minimum standards, I don't think Uber deliberately put a sub-minimum car on the road.

There is one area I think regulation might have helped, and that's on the number of safety drivers. It is essential that teams be able to reduce to one, and then zero safety drivers in time, but we might consider regulations that had something to say about the switch from 2 to 1. (The switch to zero is already considered in many regulations.)

The problem is, Uber had racked up a lot of miles, more than most teams out there. So a rule saying you need a certain number of miles before dropping to 1 is hard to apply here. Their reported miles per intervention are low, so we could look into requiring 2 drivers until that rate reaches a certain level. Unfortunately, as discussed yesterday, there are lots of different types of interventions and they are treated differently by the different teams. It will be very hard to define a universal standard for what one is. Worse, as noted, pressure to reduce intervention counts could actually cause accidents by making safety drivers reluctant to intervene.

One likely, but expensive, method requires detailed simulators from every team which can tell if an intervention was truly needed or was just cautionary. We would also need to consider the question of interventions due to software fault alerts (which cause alarms and don't need a second safety driver) compared to interventions due to problems on the road (where two safety drivers can play a useful role).
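
As a minimal sketch (in Python, with entirely hypothetical names), here is how logged interventions might be sorted into those categories, assuming the team's simulator can replay the counterfactual of what the car would have done without the takeover:

    # Hypothetical sketch: classify a logged safety-driver intervention.
    # Assumes a simulator hook that can replay what the car would have done
    # had the driver not taken over; all names here are illustrative.

    def classify_intervention(entry, simulate_counterfactual):
        """Return 'fault_alert', 'safety_necessary', or 'cautionary'."""
        if entry.get("triggered_by") == "software_fault_alert":
            # Alarm-driven takeover; doesn't bear on road-watching duties.
            return "fault_alert"
        outcome = simulate_counterfactual(entry["log_snapshot"])
        if outcome is None:
            # Inconclusive replays get counted as safety-necessary,
            # as proposed in the rules below.
            return "safety_necessary"
        if outcome.get("code_violation") or outcome.get("right_of_way_violation"):
            return "safety_necessary"
        return "cautionary"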

In general when considering the need for regulation, one should examine what things the companies might be motivated to lie about or be unsafe about. If there are high liabilities for accidents -- as I believe there are and will be -- there is low motive to lie. You're only lying to yourself, since you will certainly pay dearly for any accident which is your fault, and fault will be readily apparent.

This leaves the issue of companies being reckless in order to lower costs of testing and development. While this is a risky gamble for them, it is nonetheless a gamble that perhaps Uber was willing to take. Courts tend to punish such attitudes harshly, but intent can be hard to prove.

I would consider the following set of safety driving regulations. They would be based on measuring "miles per safety-necessary intervention." Teams can use simulation to distinguish safety-necessary interventions (i.e. interventions without which a violation of the vehicle code or of others' right of way would have occurred) from other interventions. If they don't have such simulation tools, they can do a human analysis, but "inconclusive" will be counted as safety-necessary. (A rough sketch of how such a rule might be computed appears after the list.)

  1. Low maturity software, whose major revision has not reached some threshold of miles per safety-necessary intervention, must have two safety drivers on duty while testing.
  2. Higher maturity software, above that threshold, may have only one safety driver while testing in the types of road situations for which it has reached the threshold.
  3. When only one safety operator is present, that operator's attention to the road must be measured, and the operator taken off duty if it drops too low. In addition, solo safety drivers must have regular breaks to avoid fatigue.
  4. No task of the safety driver shall require them to look away from the road for more than a short glance.
  5. Fully unmanned operation (zero safety drivers) shall be covered by other rules.
  6. Inherently unmanned vehicles (i.e. cargo robots) may be monitored from a chase vehicle with appropriate local-radio takeover mechanisms. Robots below a certain amount of kinetic energy (i.e. low mass and speed) will have lower requirements.
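
Here is the rough sketch mentioned above of how rules 1 and 2 might be applied; the 10,000 mile threshold and the record format are assumptions for illustration, not a proposal of actual numbers:

    # Hypothetical sketch of rules 1 and 2: how many safety drivers a team
    # needs for a given road situation, based on miles per safety-necessary
    # intervention. The threshold and record format are assumptions.
    from dataclasses import dataclass

    MATURITY_THRESHOLD_MILES = 10_000  # assumed, for illustration only

    @dataclass
    class InterventionRecord:
        safety_necessary: bool   # per simulation; "inconclusive" counts as True
        road_situation: str      # e.g. "urban_day", "urban_night", "highway"

    def required_safety_drivers(miles_in_situation, records, road_situation):
        """Return 2 for low-maturity software, 1 once the threshold is reached."""
        necessary = [r for r in records
                     if r.safety_necessary and r.road_situation == road_situation]
        if not necessary:
            return 1 if miles_in_situation >= MATURITY_THRESHOLD_MILES else 2
        miles_per_necessary = miles_in_situation / len(necessary)
        return 1 if miles_per_necessary >= MATURITY_THRESHOLD_MILES else 2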

The minimum levels

In not braking for a pedestrian crossing 3 lanes and entering the car's lane, I have advanced that Uber's vehicle performed (well) below the minimum standards for a vehicle. We don't know if it generally performs below that level, or whether some very unusual event took place that made it fail only in this situation. For now though, it seems pretty bad. There are so many ways that the car's sensors and systems should have been able to detect and react to this pedestrian.

At a basic level, I believe we should expect a vehicle can do the following:

  1. Detect and brake for an obstacle in its lane when that obstacle is clearly visible or clearly moving into its lane on urban streets.
  2. Broadly, this requires perceiving an approaching obstacle with more time to spare than the 0.7 second reaction time of high-performing humans, and with room to stop given the stopping distance in the current road conditions.
  3. Ideally, perception should take place with a margin above this, allowing less than full braking to be applied.
  4. Where practical, on clear city streets, swerving should also be available, but only if there is high confidence it will not worsen the situation.

At 25mph less than 60 feet is needed -- well within the range of all LIDARs, radars, stereo cameras and more.

At 40mph, for human drivers, this means a need to perceive 140 feet (42 meters) out. This is well within the range of typical LIDARs, and even within the capability of widely separated stereo cameras (depending on illumination.) It's also well within the range of radars. Typical range of low-beam headlights for human eyes is around 160 feet.

At full highway speeds of 75mph, the distance is 108m -- this is seriously pushing the limits of near-infrared LIDAR (which is to say most LIDARs), especially on a person in black clothing. It's also beyond stereo cameras, but not radar or 1.5 micron long-range LIDAR.
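
As a sanity check on these numbers, here is a minimal sketch of the arithmetic, assuming a 0.7 second reaction time and roughly 0.7 g of braking on dry pavement. Both parameters are illustrative assumptions; a somewhat longer human reaction time, for example, accounts for the 140 foot figure quoted for 40mph above.

    # Hypothetical sketch: perception distance needed to stop for an obstacle,
    # given an assumed reaction time and braking deceleration.
    G = 9.81  # m/s^2

    def perception_distance_m(speed_mph, reaction_s=0.7, decel_g=0.7):
        """Distance covered while reacting plus distance to brake to a stop."""
        v = speed_mph * 0.44704                # mph to m/s
        reaction = v * reaction_s              # travel before braking begins
        braking = v ** 2 / (2 * decel_g * G)   # travel under constant braking
        return reaction + braking

    for mph in (25, 40, 75):
        d = perception_distance_m(mph)
        print(f"{mph} mph: about {d:.0f} m ({d * 3.281:.0f} ft)")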

In reality, while you may gain some distance due to faster reaction times, these cases require full hard braking, which presents a problem if somebody is riding your tail, whether or not that's illegal.

(Understand that I suspect Uber falsely believed that their vehicle did perform to those levels and was unaware that it would not.)

However, there are several exceptions to this:

  1. If the road is curved, cars are not expected to slow to a speed low enough to be able to brake for any stopped object that suddenly appears around a corner. While they could do so, human drivers almost never slow to this speed, for better or worse.
  2. If an obstacle enters the lane suddenly, such as a pedestrian jumping into the street, it is not expected that anybody, human or robot, can surpass the laws of physics.
  3. On limited access freeways or other locations where pedestrians are forbidden and/or fenced off, a vehicle may exceed the maximum speed necessary for such a full stop, particularly if following another vehicle, and in particular so as not to be an impediment to traffic.

The highway rule matches human activity. Humans routinely overdrive their headlights on the highway. Because pedestrians and cyclists are not just forbidden from highways but physically fenced off, we all drive like they can't be there. (Pedestrians who do try to cross highways are very frequently killed because of this, but we have come to accept that rule.)

In addition, on the highway it is quite normal to drive at very fast speeds behind another car which blocks our view of what's in front. We rely on that other car to detect things, and we follow it with a headway just long enough to be sure we won't hit it, given our reaction time to its brake lights.

I would call these minimum levels "good practices." I think that "best practices" would be a level higher than this. In time, we will ask production cars that are deployed on the road to reach a higher level. During testing, with safety drivers, it is only necessary to ask for good practices. And in fact, in the earliest phases of development, with very attentive safety drivers, I believe it is OK to deploy a vehicle that does not even meet these minimums, because the attentive human does. But you must take good care to ensure that the human truly is that attentive.

Comments

"It's already illegal to hit people, and always will be." We'll see. Has the "safety driver" in the Uber case been charged with reckless driving yet? Does Arizona law even treat them as a driver?

What is "the existing regulation which exists in tort and traffic law" anyway? I tend to read that as "the pre-existing regulations," but those regulations would have prohibited what Uber was doing and would have subjected them to strict liability for doing it. Hopefully they're still subject to strict liability. And just recently they have been prohibited from doing their testing. But how they got permission in the first place, I don't actually know.

For better or worse, while the vehicle code makes it unlawful to violate somebody else's right-of-way, and unlawful to not brake for a pedestrian if you can, these are not necessarily things you get charged with. They may just be tickets. A ticket for a robocar is a serious thing, though. Hitting anything when it is your fault is also a tort. If you kill somebody with no family however, there is an issue because there may be no plaintiff. Here, it could make sense that the state would step in and pursue the tort, or turn it into a fine.

Whether or not tickets are criminal or civil varies from state to state. But I didn't say "criminal" anyway. Has the driver of the vehicle gotten a ticket for reckless driving yet? The behavior of the pedestrian is completely irrelevant to that, as the offense of reckless driving was committed before the pedestrian even came into play - *if* there isn't some rule that Arizona passed saying that the "left seat occupant" of an autonomous vehicle in autonomous mode isn't a driver for the purpose of traffic laws. But if I were a prosecutor I'd probably charge her with manslaughter. The lack of attention to the road was a but-for cause of the death of the pedestrian, and the scenario of a pedestrian crossing at an illegal place and the vehicle failing was very much foreseeable. Moreover, the status of the person as "driver" of the vehicle is irrelevant, as negligent homicide and manslaughter are offenses which aren't part of the Arizona traffic code.

As far as "a ticket for a robocar," whatever. A human being should go to jail for this. It was absolutely appalling. Absolutely reckless on the part of the "safety driver," and in my opinion that reckless behavior was a but-for cause of the pedestrian's death. Whether or not there was also recklessness on the part of any Uber employee(s), it's too early to say, but it doesn't look good. The failure was egregious, the cars were know to have been performing poorly, the company went from two employees in the car to one, they hired a felon to act as the sole "safety driver," etc. Of course, I kind of doubt that any Uber employees will be charged individually in this case, as corporate executives get away with stuff like this all the time.

Maybe one day tickets for robocars might make sense, but we're not there yet, at least not for Uber (maybe Waymo is ready, at least on the routes where they are capable of going without the "safety driver"). I was thinking about this today, and I think we can probably do without too much new regulation for the situations where there's a human in the driver's seat whose job is to take over whenever necessary. That human should be responsible for what happens when they had a reasonable ability to prevent it (and maybe in some other situations when there is a strict-liability offense). But I think there need to be significant regulations before we allow these cars to go completely driverless. By all means the industry should be involved in determining what these regulations are, but until we can devise some tests for the companies to prove that their vehicles are safe to run without any humans able to take over on a moment's notice, we shouldn't allow it.

I'm sorry if that makes testing harder in the meantime, but the public roads are not the place for this unless it can be proven to be safe first.

But around the world, various laws on this have declared either the safety driver, or the person who activated the vehicle into autonomous mode back at HQ, to be the driver. Due to vicarious liability rules, though, it's going to be the company that put the car out there that bears liability for all tickets and, in rare cases, criminal acts, unless there is some major (and rare) thing which can separate the safety driver from their employer.

The whole idea of "tickets," I think, is archaic and not the right approach when it comes to robocars. The financial penalties meant for humans probably should be multiplied for companies if extra incentive is needed, but the reality is that the actual result of any "ticket" should be:

  1. It is confirmed from logs that the vehicle did violate the code. I will bet that most of the time it will not have, but sometimes it will.
  2. The team should implement and test a fix.
  3. The team should demonstrate that their fix prevents the problem that resulted in the ticket, or in rare cases, argue for change to the code.

Tickets are something you do to punish known unsafe behaviours that people do, in most cases even though they know they should not do them. With robocars I don't see it working that way very often. In fact, I think that should be an entirely different class of infraction, with much more serious penalties.

Indeed, the reason this may not go to "criminal" is that criminal charges require ill intent. It has to be negligence that was deliberate, rather than a mistake which arose out of otherwise good faith efforts. You usually need a smoking gun, a memo of the form, "Damn it, it's too hard or expensive to hire more safety drivers to meet our schedule. Even though we know we greatly increase our accident risk by going down to one, let's do it anyway."

Maybe there is such a memo.

More likely there will be training manuals for the safety driver saying, "Under no circumstances take your eyes off the road for more than a very short time." There will be safety drivers testifying they were told this in their training sessions. But we await more data.

There's little doubt in my mind that the Uber executives have acted with ill intent. Whether or not they were dumb enough to have documented that ill intent, I don't know.

I don't think the smoking gun will need to be as obvious as you make it out to be, though. A warning email documenting a known problem where the cars fail to detect pedestrians in certain situations would be a smoking gun, especially if the warning got passed all the way up to a high-level executive and was ignored.

There is a difference between ill intent and zeal to move forward very fast. For true ill intent you need to show they wanted to hurt people. You'll never show that, so instead you show a high degree of negligence, i.e. that they didn't care if they hurt people. But obviously they do care about that; the question is, do they care enough.

The point is that lots of cars, pretty much all cars out there at one point or another in their history had things they could not see, flaws that were known to the team. They say, "Well, it misses some small fraction of pedestrians so we have safety drivers to prevent that."

Every team does this.

What you can get Uber on is that the failure was large, and they knew it to be large, and you might get them on the failure not being justifiable in such a mature product. And you might get them on moving down to one safety driver when that was unjustified. On bad training for safety drivers, perhaps. On not having tracking of safety drivers, perhaps.

It needs to be pretty wanton. Definitely not, "We know this car will make mistakes, so we will put it on the road with safety drivers." The safety driver system works at Waymo and other companies. It's valid best practices for now, though Uber may be below the proper levels of that. But you think you can prove their state of mind, that they really didn't care who they killed? Because of a memo from somebody they fired long ago?

"For true ill intent you need to show they wanted to hurt people."

No, you don't. Gross negligence or recklessness is a type of ill intent which is sufficient to show. And I suspect multiple executives at Uber are guilty of both. I suspect it, but until more details are made public, I wouldn't say it has been proven yet.

"It needs to be pretty wanton."

And based on what we know now, I expect it will be. We'll need more information before we can be sure, but the cars seem to have been performing very poorly. The three main facts which suggest this are the number of miles per intervention, the egregiousness of the failure, and the fact that Khosrowshahi had, a few months ago, considered shutting down the self-driving operations. I think it was much worse than "it misses some small fraction of pedestrians." It sounds like things were working so poorly that the cars shouldn't have even been on the road *with* safety drivers, especially not at night. But on top of that, they hire a convicted felon as the safety driver. No doubt a poorly paid, overworked, underskilled convicted felon. And the felony the safety driver was convicted of was one which showed disregard for the lives of others - armed robbery.

"We know this car will make mistakes, so we will put it on the road with safety drivers."

"We know this car will make many mistakes which will put people's lives at risk, so let's pay some low-skilled convicted felon a crap wage for doing this difficult, tedious job for which they have to perform correctly multiple times a week in order to not kill anyone."

As I said, you will never get that they wanted to hurt people, so the best chance of a criminal negligence charge would require a pretty extreme disregard for public safety. That is different from making mistakes, like hiring the wrong person to be a safety driver, or giving poor training to safety drivers. The hard reality is that it's rare for these sorts of things to be criminal. Uber can show all sorts of things they did to protect safety. Not as good as Waymo and others, but they did them. You are going to need things along the lines of, "Damn it, we have to move faster!" "Well, if we clearly disregarded some obvious safety steps, we could go faster." "Who cares who we hit, do it!"

There is a famous line alleged to have been said by Anthony Levandowski, and denied, along the lines of "we should have the first fatality." Anthony was fired, so this is not likely to bite Uber, but even if we imagine he was still in charge, I don't think it would go to criminal intent. He could argue that what that means is "Everybody knows there are going to be fatalities, and while nobody wants them, the winning team, the most aggressive team, is the one that will probably have the first fatality." Don't believe whatever spin he puts on it? Doesn't matter. In criminal matters, you must prove it beyond a reasonable doubt.

You will need lots of evidence to prove beyond a reasonable doubt that Uber's team was so reckless that they deliberately deployed systems less safe than they could make them, expecting it would probably kill some people and they didn't care. The reason you can't prove that is that everybody has always felt (correctly) that fatalities will be very damaging for the teams that cause them, possible project-ending events, and even if you don't care at all for human life, you care about your project.

"As I said, you will never get that they wanted to hurt people"

This isn't first-degree murder. You don't have to show that they wanted to hurt people.

"so they best chance of a criminal negligence would require a pretty extreme disregard for public safety"

And I think they have shown that.

"deliberately deployed systems less safe than they could make them, expecting it would probably kill some people and they didn't care"

"would probably kill some people" would be depraved-heart murder (second-degree murder in Arizona, "Under circumstances manifesting extreme indifference to human life"). I'm not saying that anyone is guilty of that (the "safety driver" comes closest but even the "safety driver" I wouldn't charge with that). Negligent homicide or manslaughter would be more likely. Here's the Arizona jury instructions for each:

"The crime of negligent homicide requires proof that the defendant: 1. caused the death of another person; and 2. failed to recognize a substantial and unjustifiable risk of causing the death of another person. The risk must be such that the failure to perceive it is a gross deviation from what a reasonable person would observe in the situation. [The distinction between manslaughter and negligent homicide is this: for manslaughter the defendant must have been aware of a substantial risk and consciously disregarded the risk that [his] [her] conduct would cause death. Negligent homicide only requires that the
defendant failed to recognize the risk.]"
---
"The crime of manslaughter requires proof that the defendant: 1. caused the death of another person; and 2. was aware of and showed a conscious disregard of a substantial and unjustifiable risk of death. The risk must be such that disregarding it was a gross deviation from the standard of conduct that a reasonable person would observe in the situation."
--

The "safety driver" seems to be guilty of negligent homicide at the least, and possibly manslaughter once you consider all the warnings she must have received from Uber about the importance of keeping her eyes on the road. As far as the Uber executives, we don't know yet. "Everybody knows there are going to be fatalities, and while nobody wants them, the winning team, the most aggressive team, is the one that will probably have the first fatality" combined with an attempt to be the most aggressive team would qualify as manslaughter, I'd say.

Truth is, manslaughter charges in traffic deaths are very rare, particularly when the vehicle had right of way. Though it is also quite rare to have video of the (safety) driver to conclusively show 5 seconds of not looking at the road, which could indeed rise to recklessness. And in that case vicarious criminal liability could apply. Proving recklessness on the part of the company beyond a reasonable doubt will be a lot harder, I suspect.

Interestingly, Arizona doesn't have a separate charge for vehicular versions of negligent homicide and manslaughter. In this case that's probably to the advantage of the prosecutor. I agree that manslaughter would be tough to get a conviction on, though. If they get it, I think it'll be because Uber did a very good job of explaining to its drivers how dangerous it is for them to not pay attention to the road.

Of course, more likely would be that there's a plea deal anyway, and a plea deal wouldn't be a plea to manslaughter.

(Talking here about the "safety driver.")

How about regulations that require monitoring and reporting of the attentiveness of safety drivers? It sounded like you recently agreed there was some consensus on improving monitoring here, and it seems that reporting requirements could have helped. I think the accuracy of the reporting wouldn't be too hard to validate through random sampling of video footage of the driver.

I would agree, this is a type of regulation that could be applied. And as a plus, technology to monitor gaze is reasonably available. Chances are, in fact, that any driver who could not keep up a proper level of attention would be taken off duty and told to return to base. As such, reports would be meaningless, as they would always show attention above the threshold. The reporting requirement would, however, cause the level to be kept high.

Exactly what I was thinking. The reporting requirement itself has the desired side effect, assuming there are good ways to keep the reports from being manipulated.
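
As a minimal sketch of what such monitoring might look like, assuming a gaze tracker that reports once a second whether the driver's eyes are on the road (the window size and threshold here are invented for illustration):

    # Hypothetical sketch: rolling gaze-attention monitor for a solo safety driver.
    from collections import deque

    class AttentionMonitor:
        def __init__(self, window_samples=600, min_attention=0.95):
            # e.g. a 10 minute window at one sample per second
            self.samples = deque(maxlen=window_samples)
            self.min_attention = min_attention

        def record(self, eyes_on_road: bool) -> bool:
            """Log one gaze sample; return False if the driver should be relieved."""
            self.samples.append(1.0 if eyes_on_road else 0.0)
            if len(self.samples) < self.samples.maxlen:
                return True  # not enough history to judge yet
            fraction = sum(self.samples) / len(self.samples)
            return fraction >= self.min_attention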

Surely they would test any new software revision before allowing it to be used on the streets? If so I don't understand how it could have failed in such a predictable situation as this.

Maybe there should be mandatory independent testing before a new revision of software and/or hardware is allowed on public roads.

The problem is that even the teams, who know vastly more about how to test these cars than any independent lab or regulator does, are still working out how to certify them as safe. So how could an independent tester do so? They are not safe; they need safety drivers, with the exception of Waymo's.

Too sad: Uber settles with family of woman killed by self-driving car.

I understand that the family probably took the right decision for themselves, but it would have been healthier for the industry and for society as a whole for these folks to not have been silenced like this. I would have preferred to see legal discovery against Uber, in order to achieve the kind of transparency that Uber will never allow except under a subpoena.

Perhaps there are other ways that the vendors of autonomous pedestrian killers can be required to improve their transparency... but this tort would have been a very good start.

Brad, I found these questions on RISKS-DIGEST and thought you might know the answers; I sure don't:

>... how does an autonomous vehicle respond to a police
> officer directing traffic at a broken signal? What if the signal is working
> normally, but an officer is directing traffic to disregard the signal? What
> if it's not an officer but a person wearing a Halloween costume and a
> Crackerjack badge? What if it's a civilian who has taken it upon themselves
> to direct traffic, as has happened during widespread blackouts...?

Right now, all vehicles except Waymo's have safety drivers in them who would respond to police. For eventual unmanned vehicles, teams are working both on ways to understand the common directions of officers using machine learning, and on simply referring such highly unusual situations back to the control center. The first vehicle to come to such an intersection would pause, and at the control center they would watch the video and give the car instructions. In addition, the intersection would be marked, so vehicles either route around it, or are monitored from the control center as they approach it. If there is no data service at the intersection, the vehicle would pull over and stop if it can't understand what's going on, and the fact that it never made it out of the data dead zone would also cause the intersection to be marked as to be avoided.

Of course, this is only if the vehicle is unmanned. If there are passengers who are mentally competent, which would be the case for almost all passengers, their assistance could be called upon, and the tablet in the vehicle would let the passengers tell the car when to go and which path out of the intersection to take.

So in fact it's a problem only in a very rare situation, namely:

  1. The first car to go through this intersection happens to be unmanned
  2. There is no data service in the intersection (by surprise)
  3. The situation is too confusing for the software to appraise

As noted, the lack of data will need to be a surprise. Totally unmanned vehicles will not generally be routed through intersections with no data service unless they are known to be in good operation. (For example, other vehicles in the network recently went through the intersection and saw no problems.) Usually it takes a few minutes for somebody to get up and start directing traffic. If the fleet is large, information will spread quickly.
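
A minimal sketch of that fallback flow, with entirely hypothetical method names (no vendor's actual API), might look like this:

    # Hypothetical sketch of the fallback flow described above for an unmanned
    # vehicle facing a situation it can't interpret (e.g. a person directing traffic).

    def handle_unusual_intersection(vehicle, intersection, control_center, fleet_map):
        if vehicle.has_competent_passenger():
            # Let the rider use the in-car tablet to say when to go and which way.
            return vehicle.request_passenger_guidance()
        if vehicle.has_data_service():
            vehicle.pause()
            instructions = control_center.review_and_instruct(vehicle.camera_feed())
            fleet_map.mark_trouble(intersection)  # others route around or get monitored
            return vehicle.follow(instructions)
        # No data service and no understanding: pull over and stop. Never leaving
        # the data dead zone also flags the intersection as one to avoid.
        vehicle.pull_over_and_stop()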

In addition, companies would want to share information on trouble on the roads. In fact, already tools like Waze (owned by Google) tend to pick up and report on things like problems at intersections within very short times, thanks to the efforts of human drivers.
