Uber reported to have made an error tuning perception system


The newsletter "The Information" has reported a leak from Uber about their fatal accident. You can read the article but it is behind a very expensive paywall. The relevant quote:

The car’s sensors detected the pedestrian, who was crossing the street with a bicycle, but Uber’s software decided it didn’t need to react right away. That’s a result of how the software was tuned. Like other autonomous vehicle systems, Uber’s software has the ability to ignore “false positives,” or objects in its path that wouldn’t actually be a problem for the vehicle, such as a plastic bag floating over a road. In this case, Uber executives believe the company’s system was tuned so that it reacted less to such objects. But the tuning went too far, and the car didn’t react fast enough, one of these people said.

This is true as far as it goes. As I explain in my article on robocar sensors, you have the problem of false positives (ghosts) and false negatives (blindness). Generally, the rule is you must never get a serious false negative, because if you do, you might hit somebody. So you have to build your system so that it will not miss another vehicle or pedestrian. You can sometimes miss them, i.e., not spot them in every frame, or not identify them when you first start perceiving them, but you must not fail to spot them for an extended period of time, particularly as they get close to you.

If you do that, you are going to get some false positives -- ghosts that aren't really there. Or things that are there, like blowing trash or birds, that you should not brake for. If you brake every time there is blowing trash or a bird, you have a car people find unusable, which is also no good.

If you can't do both -- never miss important obstacles on the road, and very rarely brake when you don't need to -- you are probably not ready to go out on public roads. You can, but only with diligent safety drivers.
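The trade-off above can be sketched as a single confidence threshold for braking — a toy model with made-up numbers, not how any real perception stack is tuned:

```python
def count_errors(detections, threshold):
    """Count false negatives (missed real obstacles) and false positives
    (phantom braking) at a given brake-confidence threshold."""
    fn = sum(1 for conf, real in detections if real and conf < threshold)
    fp = sum(1 for conf, real in detections if not real and conf >= threshold)
    return fn, fp

# Hypothetical detections: (perception confidence, is it a real obstacle?)
detections = [
    (0.95, True),   # pedestrian, clearly seen
    (0.40, True),   # pedestrian, partially occluded
    (0.35, False),  # plastic bag blowing across the road
    (0.10, False),  # sensor noise
]

# A permissive threshold brakes for the bag; a strict one misses the
# occluded pedestrian. You cannot tune one error away for free.
print(count_errors(detections, 0.3))  # → (0, 1): no misses, one phantom brake
print(count_errors(detections, 0.5))  # → (1, 0): no phantom brakes, one miss
```

Turning the threshold up to reduce false positives — as the leak suggests Uber did — directly trades away the false-negative protection.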

It's not out of the question that a prototype car might have some occasional blindness problems. That's why it's a prototype, and why you have the safety drivers. Before you go out on the road, you need to make this extremely rare, though. So if the Uber fatality had involved some rather unusual situation, one might accept this sort of failure as within the realm of normal operations.

Problem is, this was not an unusual situation. This was a person simply walking across the road. The only thing slightly unusual is she was walking a bike. But that's not that unusual. She was walking the bike behind her so her full 3D image was seen by the LIDAR.

A typical situation where you might get a blindness would be if one sensor detects the obstacle and the others don't. Say the LIDAR sees something, but there are only a few points returned, while the camera and radar see nothing. That's the kind of situation where your tuning might decide there is nothing there. If you're good, you might mark the area as one for special investigation in future frames, in case it's something right on the edge of your sensor range.
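The "mark for special investigation" idea can be sketched as follows — the region IDs and set representation are purely illustrative, not any team's actual data structures:

```python
def flag_for_scrutiny(lidar, radar, camera):
    """Return region IDs reported by some sensors but not all of them.
    Rather than silently dropping these partial detections, a careful
    system gives them extra attention in future frames."""
    any_seen = lidar | radar | camera       # seen by at least one sensor
    all_agree = lidar & radar & camera      # confirmed by every sensor
    return any_seen - all_agree             # partial agreement: watch closely

# LIDAR returns a few points in region "r2" that radar and camera miss.
suspect = flag_for_scrutiny({"r1", "r2"}, {"r1"}, {"r1"})
print(suspect)  # → {'r2'}
```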

A woman standing in the road should be seen by all the sensors, and seen more and more clearly the closer she gets. This is what is perplexing -- the report by police that the vehicle never slowed at all. Even if you can imagine the sensors had a problem at 50m or more (they should not have) they have little excuse for not seeing her quite well at 40m, 30m and less. Yes, if you don't brake until 20m out, you will still hit her, but not nearly as hard.

It is more probable that instead what we have is the system seeing her and classifying her as not an obstacle -- which, as noted, you do for things blowing in the wind, or for birds. But she's no bird or trash bag. She's a lot larger, and a lot steadier in her path. She doesn't look like a plain pedestrian because of her bicycle, but again, that's not that unusual a thing. And more to the point, key to the algorithm above is "If you're not sure you can ignore it, stop." That means you fail safe -- if you definitely identify it as a bird, keep driving, but if you can't figure out what it is, don't hit it. She did have some trash bags on her bicycle -- though they are mostly blocked by her body. An unlikely but possible error would be seeing those trash bags, positively identifying them as trash bags floating in the air, but missing that they are on a bicycle. I doubt it.
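The fail-safe rule described above can be sketched in a few lines — the class names and confidence cutoff are illustrative assumptions, not Uber's actual taxonomy:

```python
# Classes the planner may drive through, but only with near-certain
# identification. Everything else is treated as an obstacle.
IGNORABLE_CLASSES = {"plastic_bag", "bird", "blowing_leaves"}

def should_react(label, confidence, required_confidence=0.99):
    """Fail safe: ignore a detected object only when it is positively
    identified as a class that is safe to drive through. Anything
    unknown or uncertain gets braked for."""
    definitely_ignorable = (label in IGNORABLE_CLASSES
                            and confidence >= required_confidence)
    return not definitely_ignorable

print(should_react("pedestrian", 0.99))    # → True: always react
print(should_react("plastic_bag", 0.60))   # → True: not sure enough to ignore
print(should_react("plastic_bag", 0.995))  # → False: positively identified
```

The key property is the default: an object the classifier cannot place still gets a reaction, which is the opposite of what appears to have happened.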

Their sensors should have seen her in too many ways to miss -- dimly in the LIDAR at 100m but clearly from 60m onward. Decently in the radar (though easily mistaken for the returns from stationary objects, as she was moving perpendicular to the car). Clearly in motion detection on the camera images and in parallax detection. Reasonably clearly with stereo vision with a long baseline inside about 40m (too late to avoid her, but in time to slow a lot). And while neural network computer vision is still a research area, she should still have been tagged by that in plenty of time, even in the dark.

Error rates and safety drivers

If your car misses something once every 100 miles, but your safety drivers catch 999 out of 1,000 misses, then you will only have a problem every 100,000 miles. That's just on the limit of possibly acceptable, because human drivers have a small collision every 100,000 miles or so, and one that gets to police every 500,000 miles or so. If you can attain better, like once in a million miles for the two combined, you are not putting the public at zero risk, but at a risk that's less than a typical car sent out on the road.
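The arithmetic above is simple enough to write down — a sketch, assuming car misses and driver lapses are independent events:

```python
def combined_failure_interval(car_miles_per_miss, driver_catch_rate):
    """Miles between incidents when the car misses an obstacle once per
    `car_miles_per_miss` miles and the safety driver catches a fraction
    `driver_catch_rate` of those misses (assumed independent)."""
    return car_miles_per_miss / (1.0 - driver_catch_rate)

# Car misses once per 100 miles; driver catches 999 of 1,000 misses.
print(round(combined_failure_interval(100, 0.999)))  # → 100000
```

The same formula shows why safety-driver quality dominates: with a driver who catches only 99 of 100 misses, the same car fails every 10,000 miles, ten times worse.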

How good the safety drivers actually are depends on how good the car is. If you drive with adaptive cruise control, you probably need to adjust steering fairly often, even if it's good with the brakes. As such, you remain fully attentive, and cruise control is considered safe for consumers. Once the car gets down to one error every 100 miles, safety driving gets a bit more boring, and it's harder to stay diligent. That's one reason most teams use two drivers.

Uber, it has been rumoured, was driving with an intervention needed every 13 miles. But they did it with one safety driver, and she obviously was not doing her job acceptably, as the famous video shows her looking away from the road for 5 seconds before the accident.

While the safety driver could have been better -- obviously -- this is not the sole source of the problem. If Uber's systems were so poor that they would misclassify something as basic as a pedestrian walking a bicycle, they were not ready to go out with less than two alert safety drivers.

This type of alleged misclassification is still very strange for a team as experienced as Uber's. "Pedestrian walking bicycle" is not an obscure thing. In their testing they should have encountered it very often, as well as plain pedestrians. We will need to see what explanation there is for deciding that's a non-obstacle to figure out what this means.

Warning the safety driver

I have no knowledge of Uber's practice in this area, but it is also possible for the system, when it has uncertainty about the situation, to issue an audible warning to the safety driver. When you have two safety drivers, one usually is working as "software operator" and is watching the diagnostics coming from the system, but this watch is not constant, and with only one safety driver, you could only deliver audible alerts.

For example, when the perception system is deciding it has a false positive, it could judge probability and make an audible alarm to tell the safety driver, "pay particular attention here." Of course, the safety driver is supposed to be paying attention at all times, but no human is perfect.

Such a system would have to be "tuned" as well. If it beeped all the time, it would quickly get ignored. There is also the risk that if it is pretty good, it might further encourage the safety driver to feel they can safely look away from the road for 5 seconds, as Uber's driver did. It should be combined with a driver gaze monitoring system.

I do believe you could make the system beep when it is in the state suggested in the leak: "We have identified an obstacle but concluded it is not one we should slow for." That state should be rare enough that a beep for that would not be overdoing it. It seems that Uber did not have any alerts for the safety driver, however, certainly not in this case.
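One way to sketch such an alert policy — beep on the rare "detected but deliberately ignored" state, while suppressing an alarm that fires so often it would get tuned out. All thresholds here are illustrative assumptions:

```python
from collections import deque

class SafetyDriverAlerter:
    """Beep when perception has detected an object but decided not to
    slow for it. That state should be rare; if beeps start firing
    constantly, suppress them, since an alarm that sounds all the time
    gets ignored (and the suppression itself signals a tuning problem)."""

    def __init__(self, max_alerts=3, window_frames=1000):
        self.max_alerts = max_alerts
        self.recent = deque(maxlen=window_frames)  # 1 = beeped, 0 = quiet

    def update(self, detected_object, will_slow):
        beep = detected_object and not will_slow
        if beep and sum(self.recent) >= self.max_alerts:
            beep = False  # too chatty within the window; hold the alarm
        self.recent.append(1 if beep else 0)
        return beep

alerter = SafetyDriverAlerter(max_alerts=2, window_frames=10)
print(alerter.update(True, False))   # → True: saw something, won't slow
print(alerter.update(False, True))   # → False: nothing to flag
```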

Naturally, if a car decides to slow because it is uncertain, the safety driver feels that and would normally look forward if otherwise distracted from doing so.


This "leak" about the "sensors" not being properly "tuned" is not very helpful. It seems intended to make one believe that they can solve the problem by simply turning up the sensitivity. But obviously you can't do that without increasing the number of false positives. They don't provide any details on what sensors were confused. It's perhaps plausible that a camera might be somewhat confused in darkness, but as your article notes, the combination of lidar, radar, and camera shouldn't have been confused. So I think we need more information, and the cause will likely turn out to be something completely different from simply sensor tuning.

I too am puzzled by this failure for all the reasons you mention. As a human driver, I am pretty responsive to even plastic bags blowing across the road. If I can slow down for something like that, I do. I've hit things that didn't cause an accident but did cause a mess and other things that have flattened my tires. So I would think that the class of completely ignorable detected objects would be pretty small. Surely a bunch of swirling leaves, or whatever benign ignorable presence is detected, would trigger intense subsequent scrutiny for all the pipeline cycles until it is passed. Even leaves or blowing sand or an insect swarm suggests likely future confusion and therefore heightened caution and slower speeds are sensible. That's how humans drive. And, how was this classified as trivial anyway? What a crazy failure there! I'd love to know what their classifier believed it to be.

But you are correct. The right strategy -- we don't know if Uber follows it -- is that when you observe something you classify as a non-obstacle, but you don't have 100% confidence in that classification, you should slow a bit and keep evaluating. This is, as you say, a common human strategy. In fact, it's what humans do when we see jaywalkers, people close to entering crosswalks, and children playing by the road.
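That "slow a bit and keep evaluating" strategy can be sketched as a speed command that scales with classification confidence, rather than an all-or-nothing brake/ignore choice. The cutoffs below are illustrative, not anyone's real control policy:

```python
def target_speed(current_speed, ignore_confidence):
    """Scale commanded speed by how confident we are that a detected
    object is safely ignorable, re-evaluating on every frame."""
    if ignore_confidence >= 1.0:
        return current_speed   # certain it's ignorable: keep going
    if ignore_confidence < 0.5:
        return 0.0             # probably an obstacle: stop
    # Uncertain middle ground: slow down while continuing to evaluate.
    return current_speed * ignore_confidence

print(target_speed(20.0, 1.0))  # → 20.0  (positively identified trash bag)
print(target_speed(20.0, 0.8))  # → 16.0  (unsure: ease off, keep looking)
print(target_speed(20.0, 0.3))  # → 0.0   (likely a pedestrian: stop)
```

Each subsequent perception frame refines the confidence, so an object that looked ambiguous at 60m either gets cleared or gets braked for well before 20m.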

So either Uber does not follow this approach -- which would be their error -- or they classified the victim as a non-obstacle with 100% confidence, in which case their systems have some deep flaws. Because while I know that bugs happen and classifiers can be wrong, they should not be that wrong.

I hate to keep sounding like a broken record, but what is your point? What is your proposed solution?

FWIW I agree with the things that you are saying about “the right strategy” or “their error” etc. but right now AFAIK there is nothing that requires them to have zero false negatives, nor two safety drivers, etc.

Should Uber and other companies get to make their own decisions about safety and test however they want? Or do you think there should be regulations that all companies should have to follow, in the interest of public safety?

I would have said that the companies should be responsible, because they will pay dearly for any mistake. Uber has been kicked out of Arizona, let its testing permit lapse in California (and may have trouble getting it back -- they've been kicked out of California before) and stopped testing everywhere else. They have paid some undisclosed settlements. More problems should arise after the investigations conclude. There is a serious chance their entire program will be cancelled by the CEO.

So that's a lot of penalty, and it should have been enough to stop this. Nonetheless, I can see people now making the argument, "Some companies can't be trusted to even do the most basic things on their own, so we need a regulator to tell them what to do." It's less clear how to do that. The default will be that the regulator knows less about what to do than the company. I have an upcoming article on this.

This case has one unexpected wrinkle -- because the victim was homeless, the damages may not have been as high as with a non-homeless victim. However, no company would have planned for that.

Could a regulator come up with a better safety plan than Waymo would? The top regulators now work for Waymo and Zoox, and now Uber too. Can a regulator come up with a plan that is inferior to what Waymo would design, but make it mandatory over Waymo's objections? Quite possibly. Would Uber have obeyed regulators, when all the above penalties did not stop them from making some of their mistakes?

As I said in comments on r/SDC, you raise good questions, but I think they are part of a Step 2 discussion about what sort of regulations make sense, who gets to decide, how are they enforced, etc. Step 1 is “Should there be regulations or not?”

I admit that regulating a new technology can be difficult, but I don’t think, “It’s hard” is a good enough reason to argue against regulations of any sort, because there are lives at stake here. As I mentioned, the victim was not an early adopter of new tech who knew there would be risks; she was a random member of the public.

There’s a great Masters of Scale podcast interview with Zuck about his “move fast and break things” mantra and how entrepreneurs should just release their imperfect products and let real world users be the feedback mechanism so bugs can be found and products improved. That may be okay if your software is a new feature on a social media platform, but not if your software is controlling a 2-ton SUV cruising around on public streets (in testing mode, years away from a possible release, at that)...

When people say no regulation, they mean no additional regulation. There is already lots of regulation, but it's in the tort system and the rules of the road. Unsafe driving in all forms is already illegal. Hitting somebody is already a very well understood tort which we mostly ignore because the $200B insurance industry deals with it for us, but in fact anybody who causes an accident is legally required to pay enough to make it right. On top of that there are the modest numbers of robocar regulations that have been put in place, mostly around testing.

This level of regulation handles all human driving, and also all human driving on behalf of corporations, which take liability for all actions by their employees while driving. Some would say it's already enough, even too much.

I have proposed in the past that people who don't find that sort of regulation sufficient could start by proposing not pre-regulation of an as-yet-undeployed technology, but simply tweaking the torts and traffic rules a bit. Then, later, add more regulation to fix what needs fixing. Yes, there will be accidents which might have been prevented by good regulations, if we had the wisdom to write them. But there will also arguably be more accidents during regulatory delays. And if the regulations cause no delays, then they probably weren't needed! Regulations almost surely will create barriers to entry for small players as well.

In most states, without new regulations, robocars can't be legally operated at level 3, 4, or 5.

In my opinion, removing the requirements to pay attention to the road when operating a robocar should come along with regulations requiring testing before a robocar is certified for such driverless operation.

Indeed, some type of regulation is required, but as you note it's actually deregulation -- allowing something, not forbidding it.

Do you think there is a company that would release a vehicle without lots and lots and lots of testing? To unmanned operation? If that would not happen, you don't need a regulation requiring it. You only need regulations when companies can't be trusted to act in the public interest because their interests are at odds with public ones.

I think Uber put their vehicles on the road in autonomous mode without first adequately testing them. They seem to agree, and the state of Arizona also seems to agree. Moreover, I think the executives at Uber knew what they were doing was dangerous, and they did it anyway, in a state that had inadequate regulations. Did any executives even get fired for that? The "safety driver," who would probably be in jail right now if it weren't for the lack of clarity over the law in this situation, is the only one I know of that even got fired over the crash.

If all companies are going to do adequate testing anyway, then there's no problem requiring adequate testing *before* putting the vehicles on the road.

Strictly, most vehicles get on the road when they are pretty sucky, needing intervention very, very frequently. That's how it starts. Uber did make a big mistake with their system, but they made a bigger mistake by not having a good safety driver, having only one, and not having systems to help and ensure the safety driver was doing a good job.

We have yet to learn if Uber's bug was something really obscure that triggered in the wrong place at the wrong time, or something grossly off that is a sign of substandard QA.

I'm sure Uber isn't the only company (who used to be) endangering the public by using the public roadways to test their "pretty sucky" cars. That doesn't justify it.

Remember that every team, at least in its early period, has a car that will, if not monitored, cause an accident. Uber had a car that seems to have missed something very basic, but the real problem was the safety driver discipline. I think it's possible regulations might be written for safety driver discipline. I mean there already are regulations requiring safety drivers in the states that have regulations. Maybe regulations will require driver monitoring, but I suspect that's going to become standard now anyway.

Again, the goal of regulation is to stop companies who can't be trusted to protect the public interest, usually because it is not aligned with their interests. For a regulation to require something, it needs to be a decently understood best practice. But once something becomes that, do you think companies will decide not to do it? If so, then you might need some regulation. I think it's pretty likely that after the Uber fatality, superior safety driver practices will be established. You could write them into law too but with only minimal gain, I expect, as I am not sure why people would not do them (especially since there is free driver tracking software.)

Obviously, this was not a well established best practice before the incident. It teaches a tragic lesson.

> Remember that every team, at least in its early period, has a car that will, if not monitored, cause an accident.

Only because every team chooses to put a car on the public roads that will, if not monitored, cause an accident.

> Again, the goal of regulation is to stop companies who can't be trusted to protect the public interest, usually because it is not aligned with their interests.

Agreed, except for the anthropomorphism of "companies." The goal of regulation is to stop company executives who can't be trusted to protect the public interest, usually because it is not aligned with their interests. And I think that's pretty much all of them. If the head of CompanyX's autonomous vehicle division has a choice between shutting down the division and losing her job or continuing despite some risk to the public (and the company's shareholders), criminal laws are the most reliable way to get her to choose the former.

> For a regulation to require something, it needs to be a decently understood best practice.

No, I don't think so. The law can simply require cars to be operated at all times by humans who are paying attention and in control of the vehicle until those decently understood best practices are designed.

Granted, Uber as a company wouldn't be guilty of violating that law. But the "safety driver" was, and as far as I know that "safety driver" hasn't yet been charged with a crime, despite the fact that her failure to follow that rule caused someone's death. I'm not completely sure why that is. Is it because Arizona allows self-driving cars without a "safety driver"?

> But once something becomes that, do you think companies will decide not to do it? If so, then you might need some regulation.

I think that once we have decently understood best practices, honest companies will beg for those best practices to be written down in a regulation. They don't want to be undercut by a race-to-the-bottom, and they want to reduce liability and litigation costs by being able to point to a comprehensive written document which says that the practices they follow are the best practices as a matter of law.

> I think it's pretty likely that after the Uber fatality, superior safety driver practices will be established.

I think the whole concept of the "safety driver" is flawed. But that's a whole 'nother can of worms.

In that case "car must be operated by a human" would be the best practice for car operation. And I think that would be wrong; it is too inflexible to change. The problem with standards is that they can only encode conventional wisdom -- like that a car must be driven by a person -- and so they are a big barrier to innovation. People are inventing new and better ways to be safe, and we want that.

I agree that big companies love regulation because it slows down small companies. That is also not generally good for society.

I am much more a fan of working to align the goals of the companies/executives with the public goals. The broad public goals, like safety. Not specifications of how to be safe.

The best way to do that is to make it clear, "operate safely or face big consequences" and the consequences have to be ones the executives will be scared of. (The biggest problem is that executives sometimes incorrectly presume they will not get caught breaking rules when they evaluate what to do.)

If a company knows "you have no business if you don't do it safely" they are incentivized to do it safely. They can't really lie except to themselves, though executives can lie to other executives.

In the Uber case though, the SDC team had their backs against the wall trying to prepare for the CEO’s visit:

“And there also was pressure to live up to a goal to offer a driverless car service by the end of the year and to impress top executives. Dara Khosrowshahi, Uber’s chief executive, was expected to visit Arizona in April, and leaders of the company’s development group in the Phoenix area wanted to give him a glitch-free ride in an autonomous car. Mr. Khosrowshahi’s trip was called “Milestone 1: Confidence” in the company documents.”

It’s no surprise then that they tuned to reduce false positives. They really had nothing to lose, because if they couldn’t hit that first Milestone 1, they risked getting shut down right then in April. So they were actually incentivized to take more risks with no real consequences, which I argue was a factor in causing the accident.

Keep in mind that AL had (allegedly) told them a year and a half earlier that he was “pissed that they didn’t have the first death.” As you mentioned previously, Uber is so damn lucky that the victim was homeless, and that they convinced her family to take a quick (and likely small, in the scheme of things) settlement. $245M to settle a lawsuit over stolen trade secrets, and they likely only had to spend ~1% of that for a loss of life, crazy.

I'm still not sure how they convinced the family to take a quick settlement. Punitive damages could have been huge, and the victim's homelessness would have been irrelevant there. Maybe there's some provision in Arizona law which would have made punitive damages hard to get. But this would be a great case for them, if indeed Uber tuned to reduce false positives because they "had nothing to lose."

Hopefully one of the state or federal investigations will be able to get to the bottom of this, and punish Uber if that's what they did.

These are much harder to get than people imagine, and would require more than we've seen so far. We may learn more. However, at the time, the police were saying "no fault for Uber," and actual damages would end up being low for a homeless person, so the family may have felt motivated to take the settlement. The settlement would not have been at a punitive-damages level, but it probably was way above what you would get in an actual-damages suit. So you choose: take a sure X, when otherwise the odds are 99% you will get X/2 and 1% you will get 50X. Risk-averse people will take X.

The police said a bunch of things that turned out to not be true, and were known to not be true before the settlement was announced (and that's just what was known publicly).

As far as how hard it is to get punitive damages in Arizona, I don't know. It's not a state whose laws I have studied. Uber's behavior is exactly the kind that punitive damages were designed to try to prevent, though. If the odds of getting them were only 1%, Arizona has some pretty messed up laws.

So rare you read about them in the newspaper when they happen. Thousands of other lawsuits go by where you don't read about them in the newspaper. For punitive damages you need malicious negligence. Proof of intent. "Who cares how many fucking people we kill, let's just get this thing working." That sort of statement. It's hard to find.

I'm not sure where you're getting "malicious negligence" and "proof of intent" from, but here's what Arizona case law (unfortunately there is no statute) says: "Punitive damages may not be awarded absent a showing of "something more" than mere tortious conduct. Linthicum v. Nationwide Life Ins. Co., 150 Ariz. 326, 330, 723 P.2d 675, 679 (1986). Punitive damages are appropriate only where the defendant's wrongful conduct was guided by evil motives or wilful or wanton disregard of the interests of others. An "evil mind" may be shown by evidence that the defendant intended to injure the plaintiff or "consciously pursued a course of conduct knowing that it created a substantial risk of significant harm to others." Rawlings v. Apodaca, 151 Ariz. 149, 162, 726 P.2d 565, 578 (1986). Punitive damages are "undeserved as punishment" unless defendant acted with a knowing or culpable state of mind, or defendant's conduct was so egregious that an evil mind can be inferred. Gurule v. Illinois Mutual Life and Casualty Co., 152 Ariz. 600, 601, 734 P.2d 85, 86 (1987). A jury may infer an evil mind if defendant deliberately continued his actions despite the inevitable or highly probable harm that would follow. Id. at 602, 734 P.2d at 87." https://www.courtlistener.com/opinion/1388832/piper-v-bear-medical-systems-inc/

That said, Arizona's standard (which I am finally looking at today instead of assuming it's similar to the jurisdiction where I live) appears to be significantly higher than most of the country. This perhaps explains why there was a settlement.

To get punitive damages, they actually had to have this evil mind. They wanted bad things to happen. In fact, Uber and everybody else does a great deal to stop bad things from happening. Uber did not do enough, though. But that's the difference. Wanting accidents (or in this case wanting other things so much that you don't care what accidents you cause) gets punitive damages. Just sucking at preventing accidents even though you did not want them -- that's just regular negligence. Of course, every case is different. But in the end the plaintiff is going to figure the odds of getting punitive damages -- which are always low unless you have a major smoking gun -- and make a decision.

Wanting other things so much that you don't care what accidents you cause. Exactly.

Don't care is quite different from "don't care enough." People are allowed to make trade-offs. They can have meetings where they look at things and say, "This would save some lives but would raise the cost of our car too much, so let's not do it." They don't get punitive damages necessarily for that.

They get punitive damages when they say, "OK, so if we make the coffee super hot, we'll sell more because it is still hot when you get to the office. We will end up giving serious burns to about 1 person in a million, and we anticipate we'll have to settle for $80,000 per case, but we'll make more than $80,000 in extra sales, so let's do it."

The truth is, you would have a very, very hard time convincing me that there are teams that don't care about safety. There are teams that don't care enough, that could do better, possibly some going faster than is prudent. But all of them put a lot of effort into safety, have regular meetings about how to improve it, look for new ways to do better. Even Uber, as far as I know.

I showed you the case law, but you continue to use your own terminology, without any reference to the law. At this point I think it's safe to agree to disagree.

(Incidentally, I don't think the coffee case would justify punitive damages. In fact, add an adequate warning and it might not even justify compensatory damages.)

I am not saying that you could not get punitive damages in certain cases, and possibly with Uber -- we really don't have enough data, just rumours and leaks for now. I am saying that it tends to be quite hard to get them, and it could be eminently sensible to take a nice settlement.

The cases you read about in the newspapers are the ones with punitive damages. You don't read about the many more that didn't. It biases us.

While that line has been disavowed, I know Anthony well and while he says brash things, I would not interpret this as meaning he wanted the first death. He wanted to have the most aggressive team, and since there will eventually be deaths, the most aggressive team is likely to have the first death. It's actually a mix of aggression in design and aggression in scale. The more miles you are driving, the more likely an incident, even if you are being cautious compared to others.

I did say “allegedly” but come on, he said it. Of course it meant that he wanted the most aggressive team, and the fact that he made the comments to said team means that he wanted them to work harder so they could catch up (okay) and that he didn’t care about safety (not okay).

I’m puzzled by the fact that you dismiss the above, but said this:

“For punitive damages you need malicious negligence. Proof of intent. "Who cares how many fucking people we kill, let's just get this thing working." That sort of statement. It's hard to find.”

Is it just because AL denied saying it? IANAL, but even so, it seems pretty damning. Call those witnesses who claimed he said it, call AL who will deny or plead the 5th, call witnesses to talk about AL’s character, etc.

So I disagree that Uber was about to shut down the SDC program if that "milestone 1" was not achieved. Or even now, I think, they will restart the program eventually. What are the alternatives?
Just going on with ride sharing means entering into competition with Waymo, which they cannot win if they still want to pay the driver. All they have is an app that connects driving people with riding people. Nothing that could not be replicated by Waymo eventually, only at half the cost. A no-brainer. So they would have to move to other markets, being continuously chased by Waymo and Cruise, and maybe they survive as a shadow of themselves in some niche.

At this point their best plan of action might be to try to sell the company, maybe even to Waymo. But they can't expect to get much, in part due to their awful reputation which was made even worse by trying to get into the self-driving-car-building business.

I didn't say the cars must be operated by humans. I said the cars must be operated by humans *until* the car companies prove that they can do better than humans. I don't think that's a barrier to innovation at all, as this sort of testing is something the car companies have to do anyway.

"I agree that big companies love regulation because it slows down small companies." You don't agree, then, because that misstates what I said. Companies which create safe products like regulations which prevent companies which create unsafe products from forcing them into a race to the bottom. This is especially true when the safety issue is one that affects innocent third-parties who can't simply choose not to buy the product, and robocars are a great example of that.

"'operate safely or face big consequences' and the consequences have to be ones the executives will be scared of" sounds like a description of exactly what regulations are. For executives to care, you have to threaten them with more than just a lawsuit that, in the worst case, causes them to claim bankruptcy, and more likely costs them nothing as the company will just indemnify them. For executives to care, you have to threaten them with criminal liability for not operating safely. And for those threats to not be unconstitutionally vague except in the most egregious of circumstances, the procedures which have to be followed in order to be considered "operating safely" need to be spelled out in statutes or regulations.

"If a company knows "you have no business if you don't do it safely" they are incentivized to do it safely." Again you're anthropomorphising companies. You're also assuming the company is able to "do it safely." There are plenty of scenarios where a company has nothing to lose by rolling the dice and hoping to get lucky. Better to try and to fail than to have never tried at all, eh? Except you killed someone in the process. And that's if you accept the anthropomorphism. In practice it's even safer than that for a CEO to take risks, as it's not even the CEO's money on the line.

And apparently some companies can even get away with killing someone due to their gross negligence and still have people willing to use their services. :( Sure, Uber lost its self-driving division, possibly permanently. But it seems like that division was doomed to fail anyway.

I'm a big opponent of over-regulation. But putting an unsafe robocar on the public roadways endangers the lives and property of others.

Under your scheme, there would be no robocars, I am pretty confident. Everybody I have ever spoken to who is building these vehicles concurs that on-road testing is the only way known to make the technology work. Test tracks can get you 1% of the way. Sim can do a lot (but far from all), and it needs the results of lots of on-road operations to be built. The closest one might get to your proposal would be phantom driving, where a human drives and the system also decides, and you note when they differ. But that's not going to work properly, because we only get to see how the system performs instantaneously at any given moment. The results of extended periods of driving are not learned.

It is judged, and I agree, that operation with good safety drivers is no more dangerous -- and possibly less dangerous -- than having an ordinary human driven car on the road. As such, it is the fastest path towards creation of a technology that, the faster you make it work, the safer the world is. If you do it properly. Uber, it appears, did not.

I definitely think a lot more should be done through phantom driving. Waymo has probably graduated to the point where it's getting diminishing returns from that. But Uber hadn't. I don't say this just because of their one crash, as that could have been an extremely rare occurrence. I say it because they were having human interventions every 13 miles. Surely many of those interventions were superfluous, but surely many were not. And every time there was an intervention, the data you have to prevent the next one is probably roughly equal to the data you would have through phantom driving anyway.

I'm not sure what your distinction is between instantaneous performance and the results of extended periods of driving. These cars don't get tired, do they? Through phantom driving you don't just get to note when the decisions are different. You can play back "the tapes" and note all the mistakes which were made by the software whether they would have caused a difference in driving or not. Did the car notice the mother with the baby stroller standing at the corner and put her in the right category? When were the different perception systems in disagreement about these things, and what confidence level were they returning when they were wrong? (What confidence levels are they returning in general? What confidence levels are being returned for safety-critical results?) At some point you'll get to where the car is noticing nearly everything, and categorizing nearly everything correctly, both with a very high level of confidence, at least among the things that are crucial to safe driving or which you would expect it to get correct. Then you can go into autonomous mode, with someone behind the wheel ready to take over if necessary. Waymo might be there, or almost there. Uber, I think it's safe to say, wasn't.
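A minimal sketch of the kind of log replay described above: run the recorded perception output against human-labelled ground truth and flag frames where a safety-critical object was misclassified or detected only with low confidence. The data layout, class names, and threshold here are illustrative assumptions, not any company's actual log format.

```python
# Hypothetical "phantom driving" log review: flag perception frames that
# misclassify an object, or that classify a safety-critical object
# correctly but with low confidence. All structures are assumptions.

SAFETY_CRITICAL = {"pedestrian", "cyclist", "vehicle"}

def review(frames, min_conf=0.9):
    """frames: list of dicts with keys t (timestamp), truth (human
    label), predicted (system label), conf (system confidence)."""
    issues = []
    for f in frames:
        if f["predicted"] != f["truth"]:
            issues.append((f["t"], "misclassified"))
        elif f["truth"] in SAFETY_CRITICAL and f["conf"] < min_conf:
            issues.append((f["t"], "low confidence"))
    return issues

log = [
    {"t": 0.0, "truth": "pedestrian", "predicted": "pedestrian", "conf": 0.97},
    {"t": 0.1, "truth": "pedestrian", "predicted": "unknown",    "conf": 0.40},
    {"t": 0.2, "truth": "pedestrian", "predicted": "pedestrian", "conf": 0.62},
]

print(review(log))  # [(0.1, 'misclassified'), (0.2, 'low confidence')]
```

Note that none of this requires the car to be in autonomous mode; it only requires the sensors and software to be running while a human drives.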

And now comes the point where I might seem to move the goalposts. I was intentionally vague when I said "the cars must be operated by humans." Arguably, a car with a "safety driver" (who is actually paying attention to the road) *is* operated by a human. I don't really like the "safety driver" concept, but I'm not willing to say it should be illegal (again, as long as the "safety driver" is actually paying attention to the road). Maybe the only tweak to the law needed regarding "safety drivers" is to make it explicit that they are subject to distracted driving laws just the same as any other driver (or maybe even moreso).

Incidentally, I feel pretty much the same way about the "safety driver" concept as I do about the "Tesla autopilot" concept. It's dangerous, especially if you don't have extremely skilled people behind the wheel, because you have to guess what the computer is "thinking" based on what it is doing, but I'm not willing to say that it's so dangerous that it should be illegal (as long as the person behind the wheel is actually paying attention to the road).


> It is judged, and I agree, that operation with good safety drivers is no more dangerous -- and possibly less dangerous -- than having an ordinary human driven car on the road.

Depends on the car, doesn't it? I'm sure it's true with Waymo. Then again, I wouldn't be surprised if Waymo started out in phantom driving mode.

> If you do it properly. Uber, it appears, did not.

Certainly the "safety driver" failed. And her failure was so egregious that I don't blame it on a lack of training (unless whatever she was looking at was something that was sanctioned by Uber, which I don't think we've found out). But moreover, I don't think the system was at a point where they needed to do testing in autonomous mode. (In hindsight Uber even seems to agree with this, as they've taken the cars out of autonomous mode indefinitely.)

You can't learn a lot by just asking the car, each millisecond, what it would do. Say the human starts braking for some situation ahead, any kind of situation. We'll never know if the car would have decided to brake 300ms later (still in plenty of time) because we never got there. Humans are very clever, we can sense what things are at greater distances than robots can. For now, we're better -- in the forward direction at least. Where robots have an edge is that they are always looking in all directions. They don't ever look away. They don't zone out (well, not without that being detected and fixed.) They don't forget to check something. So they can do better than humans, but not the same way we do it.

So yes, you can collect endless driving logs, and ask the software "what would you do at this situation?" But it's very hard, and in some cases impossible, to ask, "OK, what happens if the car had started braking 300ms later, what would the sensors have looked like then, how would the computer react to that?" You can create the situation in SIM, and test some elements, but you can't test the real situation.

> You can't learn a lot by just asking the car, each millisecond, what it would do.

Of course you can't learn a lot by just asking the car, each millisecond, what it would do. That's why I pointed out lots of *other* things that you would ask about. It's also why you don't learn a lot from simply having the car drive on the streets with a safety driver.

I actually think you probably learn *more* from comparing how the human would drive to how the computer would drive *at all times* rather than just when the "safety driver" decides that things are so urgent that s/he has to take over. Obviously it's not a *mistake* every time the two scenarios differ, just like it isn't necessarily a mistake every time the "safety driver" takes over. The key is that, when there are major discrepancies, you analyze *why* there is a discrepancy (interview the driver if it isn't obvious), and then you determine whether the computer *missed* the reason or just properly decided to handle it differently.

> We'll never know if the car would have decided to brake 300ms later (still in plenty of time) because we never got there.

Why can't we run a simulation in which the car is 300ms closer? Why can't we look at whether or not the car noticed the situation at the same time as the human, and just correctly decided that it had more time to react to it? If the car didn't notice the situation at or before the human noticed the situation, then this is a bug which needs to be fixed, *regardless* of whether or not the car would have braked 300 milliseconds later.

Obviously the car has to be programmed to record whenever it notices a situation which might at some point become a safety issue. Presumably it's already doing this, though.

> Humans are very clever, we can sense what things are at greater distances than robots can. For now, we're better -- in the forward direction at least.

And for now, because of that, we shouldn't have robots driving for us.

Maybe you're okay with robots which take a little bit longer to sense things, and maybe that's acceptable. But even then, you can always lower the bar for how egregious a missed detection has to be before you'll consider it a bug.

> So yes, you can collect endless driving logs

Yes, I assume this is already done.

> But it's very hard, and in some cases impossible, to ask, "OK, what happens if the car had started braking 300ms later, what would the sensors have looked like then, how would the computer react to that?"

I can't imagine a scenario where that'd be impossible. 300ms later, without braking, will just be 500ms later (or whatever) with braking. And that's with really hard panic braking, which is very rare and which usually indicates a fairly severe situation.

Most of the time the response to a situation which we can't really expect a robocar to pick up on is merely to come off the accelerator and cover the brake, in which case there's still plenty of time to collect data to run it through the simulator (and to ask at what point, if ever, the car notices the situation).
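To make the "300 ms later" argument concrete, here is a back-of-envelope kinematics check: constant speed during the reaction delay, then constant deceleration to a stop. All the numbers (speed, deceleration) are illustrative assumptions, not crash data.

```python
# Back-of-envelope check of what a 300 ms reaction delay costs in
# stopping distance. Speed and deceleration values are assumptions.

def stopping_distance(speed_mps, reaction_s, decel_mps2):
    """Distance travelled from the moment the hazard appears:
    speed * delay during the reaction time, plus v^2 / (2a) to brake
    from speed to zero under constant deceleration."""
    return speed_mps * reaction_s + speed_mps**2 / (2 * decel_mps2)

speed = 17.0  # ~38 mph, in m/s
decel = 7.0   # hard braking on dry pavement, m/s^2

human = stopping_distance(speed, reaction_s=1.0, decel_mps2=decel)
robot = stopping_distance(speed, reaction_s=1.3, decel_mps2=decel)  # 300 ms later

print(f"human: {human:.1f} m, delayed: {robot:.1f} m")
# The extra 300 ms costs exactly speed * 0.3 (about 5 m at this speed);
# whether that's "still in plenty of time" depends entirely on how far
# away the hazard was when it first appeared.
```

The arithmetic is the easy part; the hard question, as discussed above, is what the *sensors* would have seen in the counterfactual timeline, which is where simulation comes in.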

Finally, if the car is at the point where the only situations it's failing to notice are ones where we don't expect it to notice, *then* the car is probably ready to start being used in autonomous mode with a "safety driver." At that point, you're not going to be having an intervention every 13 miles. Interventions are going to be extremely rare once you get to that point.

So far, everybody agrees the only way we can have confidence in the ability of these cars to handle the real world is for them to do it -- under supervision. That's what safety driver testing is. Nobody thinks that testing in sim will make people feel that confidence. Testing in sim is useful, and people do it, but they don't imagine that they can go from one day testing only in sim and test tracks to being ready to drive on the road. Everybody started off the road, and after a certain amount of that, they started testing on the road with safety drivers. At most, you can propose a lot more testing in sim first, and maybe that can be debated, but at the end of it, you reach the point where you say, "It's time to test on the real road." I can't see how we could get from drawing board to deployed service without having a phase of supervised real driving on the road. The only question is when you switch over.

Back when people started doing this, sim technology was nowhere near ready to do at the level you describe. Perhaps soon it can be. But it wasn't an option on the table. Nobody is letting these things on the road unmanned until they've done lots and lots and lots of driving with safety drivers. If you know another path, tell us.

Using a safety driver is fine for testing. At some point we should develop a driving test for robocars, I suppose. But that's not what it's being used for. It's being used for development. Cars are being put on the road which we are certain will fail the test. That's not testing. That's development.

The bulk of development should be done without any risks to the public, who have not signed up to be guinea pigs for this. If there are parts which *can't* be done otherwise, maybe it's something to consider. But it's not clear to me which parts of development can't be done without letting the cars drive themselves on public roads.

While I am less sure there is such a big difference -- I think these products will be in development for decades -- I am interested in what alternative you think is available now (or more to the point, was available 9 years ago when things began) so that no "beta" vehicle is ever put on the road, only a "release candidate." How do you get to release candidate with no real road driving?

I'm curious about your response to this question: Do you think it was a well-established best practice, even before the Uber fatality, that you should have more than one employee inside the vehicle when your car is still so flawed that you have an intervention on average every 13 miles?

Use of only one safety driver is pretty uncommon in the industry. Uber may well, if there is a lawsuit, be found to not be following reasonable best practices which is what a lawyer will want to show to show negligence. (You need far more to get punitive damages than basic negligence.)

You asked before if I think companies will decide not to follow clearly established best practices. You seem to agree that Uber did in fact intentionally fail to follow clearly established best practices. So the answer, clearly, is yes.

Regulations help make such intentional failure to follow clearly established best practices a matter of criminal law. This will prevent such things much better than the invisible hand. It also, frankly, *protects* company executives. When their boss questions them on why they're wasting so much money doing X, they can point to the law, and say that the law requires them to do it.

The issue is not that you can't get good from a regulation. The problem is that regulations also create bad. They slow things down. They do that because people wanted to slow things down, but regulations tend to slow things down even more than was wanted. So you slow down safety, which is not what you wanted. Especially if you are trying to regulate a technology which is not yet even deployed. That's unprecedented, most car technologies of this sort only get regulations years to decades after deployment.

> Especially if you are trying to regulate a technology which is not yet even deployed. That's unprecedented, most car technologies of this sort only get regulations years to decades after deployment.

"Not yet even deployed" is misleading here, because the technology in question has killed a member of the public while it was being tested. It's unprecedented... because it's hard to think of another technology that can kill a random person, totally unrelated to the user/tester, before it's been deployed.

Even after deployment, it's rare. I used this example before, but an early adopter of a brand new cell phone might have it catch on fire... and that's a worst case scenario.

I also don't really buy the argument that regulations will slow down safety, because I am not advocating for crazy restrictive regulations, like "AVs have to be 100x safer than humans before they can be deployed." I just want to protect against the reckless companies that are way behind and are cutting corners to get ahead, especially in the testing stages. For example, if there was a law in place that said that each test vehicle had to have at least 2 humans inside instead of 1, it wouldn't impact Waymo or Cruise very much, so it doesn't "slow down safety" in terms of society getting safe robotaxis any later. But it might have prevented the Uber accident, and it might prevent other accidents due to distracted safety drivers.

There are testing regulations in many places, which cover testing because it's clear it's a prototype.

And I am talking about car technologies -- airbags, seatbelts, adaptive cruise control, lanekeeping, anti lock brakes, stability control, Tesla/Audi/Mercedes autopilots -- all deployed long before regulation, and in fact in many cases still not regulated, or the regulations are of the form, "All cars must do this, it is so good."

"airbags, seatbelts, adaptive cruise control, lanekeeping, anti lock brakes, and stability control" are not at all analogous to robocars. tesla/Audi/Mercedes autopilots are closer, and maybe they should be regulated to some extent.

None of these is a robocar technology (including the autopilots), but they do take over some of the controls for you. I'm just saying that if you say we need some regulations, you are proposing a major upheaval of how it worked before. Which is not impossible, but the burden of proof is on the big change. How do you create a regime that is competent enough to write the regulations? NHTSA and the DMVs have both agreed they don't have that capability. You want Congress to do it?

There have been no car technologies of this sort. The last car technology of this magnitude was the car. All the rules of the road since the rules of the road for cars were created assume, implicitly if not explicitly, that there is a human being in control of the vehicle at all times. This type of car technology is unprecedented.

First, if a state's laws only implicitly, and not explicitly, assume there's a human being in control of the vehicle at all times, that potential loophole should be explicitly closed right away. Then we need regulations in order to *speed things up*, not to slow things down. We need laws to allow robocars to exist. But we need sensible ones.

Could it have been that the LiDAR detected the person + bicycle, but because the camera and RADAR brought back negatives on the detections, the system assumed the LiDAR was a false alarm or maybe even faulty? Then when the camera detected the pedestrian + bicycle, the previous state (faulty LiDAR) was still in effect while the RADAR returned nothing...leading to a catastrophic error?

Seems like a classic error that 'Object Level Fusion' can make.
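To illustrate the failure mode being hypothesized, here is a toy version of a naive object-level fusion that requires two of three sensors to agree before treating a detection as real. Everything here is an illustrative assumption, not Uber's actual architecture.

```python
# Hypothetical object-level fusion by majority vote: a detection is
# accepted only if at least 2 of 3 sensors report it. This is a toy
# model of the failure mode described above, not a real system.

def fuse(detections):
    """detections: dict of sensor name -> True if that sensor reports
    an object on the current frame. Returns True if the fused system
    treats the object as real."""
    votes = sum(1 for hit in detections.values() if hit)
    return votes >= 2

# Frame 1: only the lidar sees the pedestrian + bicycle.
frame1 = {"lidar": True, "camera": False, "radar": False}
print(fuse(frame1))  # False -- the lone lidar hit is dismissed as a ghost

# Frame 2: the camera now sees her, but suppose the tracker has already
# written off the lidar track as a false alarm, so lidar contributes
# nothing -- still only one vote, still below the threshold.
frame2 = {"lidar": False, "camera": True, "radar": False}
print(fuse(frame2))  # False
```

A voting scheme like this trades false positives for false negatives exactly as described earlier in the article: tune the threshold up to stop braking for plastic bags, and a real pedestrian seen clearly by only one sensor can be suppressed for several frames in a row.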

Depraved indifference sounds more accurate than accident. I think that both the driver and chief engineer overseeing the AV program in Arizona should face criminal charges for the death they caused.

A Tesla Model S in Utah, which the 28-year-old driver says was on autopilot, reportedly crashed at about 60 mph into a stationary fire maintenance truck at a red light, with no attempt to brake or swerve.

Sounds like something the Tesla autopilot would do. It does not see traffic lights. Anybody who approached a light not ready to brake had a serious misunderstanding of how their autopilot works. It should see the truck but it's not very good on stationary things that don't look like the rear ends of cars. I am pretty sure the old autopilot doesn't see those, don't know about the new one.

But that's Tesla. This article is about self-driving prototypes from Uber. Tesla does not have a self-driving offering.

It makes Uber look pretty good. Uber has trouble braking for pedestrians at night wearing dark clothing. Tesla has trouble braking for big red fire trucks in daylight. Uber immediately discontinues robocar testing. Tesla criticizes "super messed up" media coverage.

Tesla is said to be "not very good on stationary objects," but nobody seems clear on the details. Is this fixable with current sensors, or will we continue to see similar news stories indefinitely? Does the somewhat unusual shape of the fire truck play a role, or simply the fact that it was stationary?

Perhaps this is an area for regulation. It would be nice to have confidence that autopilots are certified for ability to stop for stationary objects, including but not limited to fire trucks.

The driver can be blamed for "a serious misunderstanding of how their autopilot works," but it certainly isn't intuitive that the autopilot is somewhat blind to stationary fire trucks.

Autopilots are driver assist tech, like cruise control. Nobody got upset when their cruise control would not stop for a red light, or a stationary truck. The problem is people are treating Tesla's driver assist as more than it is. Which Tesla both encourages and warns against at the same time, causing the confusion.

The specific things autopilot doesn't see are not enumerated by Tesla. Some are things it sees some of the time and not others. Tesla just says, "there are many things it doesn't see, so keep your eyes on the road." People use it for a while, find it seeing most things, and forget that warning. Is the answer to not allow driver assist? Possibly, but that's not the current thinking.

I'm not sure what the answer is. The better the driver assist gets, the more indistinguishable it is from self-driving tech. The less frequently the driver has to intervene, the more complacent they will tend to be.

Ideally, with any technology, you want to advertise that prior to deployment you have done thorough factory testing, which has eliminated the most glaring bugs, leaving only some obscure corner cases to be tripped over by paying customers. When a customer does find a bug, you want to be able to say that the bug only shows up in rare cases requiring a specific combination of multiple factors.

With Tesla and fire trucks, we don't know what combination of factors is causing the failure to brake, but it doesn't seem to be some difficult corner case, such as perhaps a small object in poor visibility weather. Since this wasn't the first collision with a fire truck, one wonders whether the problem is fixable with the current sensors. I doubt that they can afford very many more collisions with large stationary objects before attracting unwanted regulatory attention.

I don't know about the autopilots of other companies, but the problem with Tesla's autopilot is that it is self-driving car tech; it's just really bad self-driving car tech that has to be constantly monitored for mistakes.

Good driver assist technology should work essentially perfectly within its domain. Outside of its domain, it shouldn't even try. Nobody got upset when their cruise control wouldn't stop for a red light because it's obvious that the technology doesn't work in that domain. As far as stopping for a stationary truck, I bet people *have* gotten upset that adaptive cruise control won't do that. Especially if it won't stop for a stationary truck when a stationary truck pops up out of nowhere on a controlled highway.

That said, I don't know if this is something we should be regulating or not. As far as I can tell, Tesla's autopilot *is* safe if it's used properly. I don't really see the point of it personally, as constantly monitoring for mistakes is harder than just driving. But if some people find it useful, if it's safe when used properly, and if it's reasonable to expect people to be capable of using it properly, I guess we shouldn't regulate it away. (The latter criterion is one I don't know the answer to, and it's why I don't know if we should regulate it or not. Is the average driver (or even the significantly below average driver) *capable* of using Tesla's autopilot safely? So far it looks like it, as all the crashes I'm aware of involved people intentionally doing obviously dangerous things, not just trying and failing to use the technology.)

Tesla's autopilot is a combination of 3 driver assist technologies -- adaptive cruise control, lanekeeping and forward collision avoidance. It might look and smell a bit like self-driving tech, but it is not.

The reason it is not is the goal that developers have. Robocar developers are always thinking, "I need to make a vehicle that sees everything, never makes a mistake." ADAS developers are thinking, "It's fine if I know I can't see various things, or make mistakes here and there because this is only used with somebody overseeing it."

This is a difference of kind, not degree. It's why I have written so much against the levels for suggesting they are just 2 levels of the same thing.

So you are saying that Tesla has two completely separate development teams for autopilot and for full self-driving?

It seems to me that what Tesla is doing, at least since hardware version 2, is using its customers as unpaid "safety drivers" for developing its robocar technologies. Sure, they know autopilot is going to make mistakes. But their intention is to fix those mistakes with software updates.

Yes, on paper, autopilot is just a combination of 3 driver assist technologies. That's what they disguise it as in order to keep the regulators at bay.

Tesla keeps that to itself. Because they believe (or Elon tells them to believe) that a camera/radar solution is going to work, I am sure they would share work between the teams. Driver assist and robocars are different products but they do make use of the same basic components, so improvements to some of those components would be imported. I think they would be mistaken if they had the same team do both, but that's their mistake to make and prove me wrong on.

The mistake is this. I don't think you will just improve your ADAS until it turns into self driving. I think you will improve your ADAS and some of the things you learn can get used in your self-driving. And if you are designing full self-driving, some of the stuff you build could get taken and imported into your ADAS project. It's better than you need, but that's OK.

Tesla has said that full self driving requires a suite of extra sensors (cameras and radars) not present in the autopilot sensor suite. So they have two different projects for sure. I don't think they don't share, however.

If you are a car company with a large ADAS team, it can be your curse if you decide to start a robocar project and have just one team do both. You will slow down the robocar project, and lose to the people who are only doing the one.

Tesla's website says:
"Autopilot advanced safety and convenience features are designed to assist you with the most burdensome parts of driving. Model S comes standard with advanced hardware capable of providing Enhanced Autopilot features today, and full self-driving capabilities in the future."

so at least for the Model S, they make it sound like all the hardware needed for future full self-driving capabilities comes standard, presumably including sensors. I would infer from this that there aren't two separate product development teams, just one with plans for a series of software upgrades resulting in self-driving capabilities.

If you go to buy a Model S, it will sell you "full self drive" for an extra pile of cash, and says it adds extra sensors.

As I said, Tesla may have just one team. That's a mistake if you want to get fastest to a robocar. It may not be a mistake if your business is today selling lots of electric cars with autopilot and the robocar is a speculative future business.


I was quoting from the page at: https://ww.tesla.com/models
scrolling down to the section on Autopilot, which also says:
"Full Self-Driving Hardware
Every new Model S comes standard with advanced hardware capable of providing Enhanced Autopilot features today, and full self-driving capabilities in the future—through software updates designed to improve functionality over time."

Yeah, this what I was referring to as well.

The strict separation between ADAS and self-driving doesn't seem to be the path Tesla is following. You say it's their job to prove you wrong about that, but, no, it isn't.

I guess Tesla contradicts itself. I will believe the model S page, and more to the point, I will add to that recent press reports that they are now starting to worry that they are going to have to bump the sensor suite even more. Which just about everybody expected they would.

I don't see where it contradicts itself. The self-driving package says it adds extra "active" sensors. It's software, not hardware.

I mean the ordering page I linked to, which charges you extra money for full self-drive readiness and which, it says, adds extra sensors to enhanced autopilot.

"This doubles the number of active cameras from four to eight, enabling full self-driving in almost all circumstances, at what we believe will be a probability of safety at least twice as good as the average human driver."

That? I think the key word there is "active" and that the cameras are there whether you buy it or not, they're just not "active."

That would resolve the seeming contradiction, anyway.

Everything I'm reading confirms this. The car has 8 cameras in it. Paying for enhanced autopilot activates 4 of them. Paying for full self-driving activates 8 of them. (At least, it would if that feature was actually rolled out. Right now the most common speculation is that people who have paid for full self-driving aren't getting anything beyond what people who paid for enhanced autopilot are getting.)
