The utilitarian math overwhelmingly says we should be aggressive in robocar development. How do we do that?
A frequent theme of mine has been that "proving you have done it" is the greatest challenge in producing a safe robocar.
Others have gone further, such as the Rand study which incorrectly claims you need to drive billions of miles to prove it.
Today I want to discuss a theoretical evaluation that most would not advocate, but which helps illustrate some of the issues, and discusses the social and philosophical angles of this new thing that the robocar is -- a major life-saving technology which involves risk in its deployment and testing, but which improves faster the more risk you take.
People often begin with purely "utilitarian" terms -- what provides the greatest good for the greatest number. The utilitarian value of robocars to safety can be simply measured in how they affect the total count of accidents, in particular fatalities. The insurance industry gives us a very utilitarian metric by turning the cost of accidents into a concrete dollar figure -- about 6 cents/mile. NHTSA calculated economic costs of $277B, with social costs bumping the number to $871B -- a more shocking 29 cents/mile, which is more than the cost of depreciation on the vehicle itself in most cases.
The TL;DR of the thesis is this: Given a few reasonable assumptions, from a strict standpoint of counting deaths and injuries, almost any delay to the deployment of high-safety robocars costs lots of lives. And not a minor number. Delay it by a year, and anywhere from 10,000-20,000 extra people will die in the USA, and 300,000 to a million around the world. Delay it by a day and you condemn 30-80 unknown future Americans and 1,000 others to death, and many thousands more to horrible injury. These people are probably strangers to you. They will not be killed directly by you, they will be killed by reckless human drivers in the future whose switch to a robocar was delayed. The fault for the accidents is on those drivers, but the fault for the fact those reckless people were driving in the first place will be on those who delayed the testing and subsequent deployment. This doesn't mean we should do absolutely anything to get these vehicles here sooner, but it does mean that proposals which risk delay should be examined with care. Normally we only ask people to justify risk. Here we must also ask them to justify caution. And it opens up all sorts of complex moral problems.
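To make the arithmetic concrete, here is a minimal back-of-envelope sketch. The annual vehicle-mile and road-death baselines are approximate US figures which the post does not itself state, and the robocar share of miles and safety multiple at saturation are illustrative assumptions, but numbers in the ranges quoted above fall out directly:

```python
# Back-of-envelope check of the figures above. The VMT and death-toll baselines
# are approximate US numbers; the saturation share and safety multiple are
# illustrative assumptions, not figures from the post.

US_VMT_PER_YEAR = 3.2e12           # roughly 3.2 trillion vehicle-miles per year
US_ROAD_DEATHS_PER_YEAR = 37_000   # roughly recent US annual road fatalities

# NHTSA's cost figures converted to cents per mile.
economic_cost = 277e9
social_cost = 871e9
print(f"economic: {100 * economic_cost / US_VMT_PER_YEAR:.0f} cents/mile")  # ~9
print(f"social:   {100 * social_cost / US_VMT_PER_YEAR:.0f} cents/mile")    # ~27
# (The post's 29 cents/mile implies a slightly smaller mileage base.)

# Lives lost per year of delay, if at saturation robocars drove most miles at a
# much lower fatality rate than humans.
fraction_of_miles = 0.8   # assumed share of miles driven by robocars at saturation
safety_multiple = 5       # assumed: robocars 5x safer than the average human driver
saved_per_year = US_ROAD_DEATHS_PER_YEAR * fraction_of_miles * (1 - 1 / safety_multiple)
print(f"deaths avoided per saturated year: {saved_per_year:,.0f}")  # ~24,000
print(f"per day of delay: {saved_per_year / 365:.0f}")              # ~65
```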
This suggests a social strategy of encouraging what I will call "aggressive development/deployment with reasonable prudence."
Reducing fatalities
Let's just look at the total fatalities. The goal of every robocar team is to produce a vehicle which is safer than the average human driver -- one that produces better numbers than those above. Eventually much safer, perhaps 8x as safe, or even more. Broadly it is hoped fatalities will drop to zero one day.
This is so much the goal that most teams won't release their vehicle for production use until they hit that target. Rand and others argue that it's really hard to prove you have hit that target, since it takes a lot of data to prove a negative like "we will cause less than one fatality for every 100 million miles."
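To see why the "hard to prove" claim has teeth, here is a sketch of the standard statistical argument, assuming fatalities arrive as a Poisson process. The target rates are the ones discussed above; the modelling choice and confidence level are assumptions for illustration:

```python
import math

# How many fatality-free miles are needed to claim, with a given confidence,
# that the true fatality rate is below a target? A sketch of the Rand-style
# argument, assuming fatalities arrive as a Poisson process.

def miles_needed(target_rate_per_mile: float, confidence: float = 0.95) -> float:
    # With zero observed fatalities, P(0 events) = exp(-rate * miles).
    # Require exp(-target_rate * miles) <= 1 - confidence.
    return math.log(1 / (1 - confidence)) / target_rate_per_mile

target = 1 / 100e6   # "fewer than one fatality per 100 million miles"

print(f"{miles_needed(target):,.0f} fatality-free miles")       # ~300 million
print(f"{miles_needed(target / 10):,.0f} miles for 10x safer")   # ~3 billion
```

Roughly 300 million fatality-free miles are needed just to show, at 95% confidence, a rate better than one fatality per 100 million miles, and any fatality during testing pushes the requirement far higher.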
On the other hand, we let teen-agers out on the road even though they are clearly worse than that. They are some of the most dangerous and reckless drivers on the road, yet they are included in those totals. We let them out because it's the only way to turn them into safer adult drivers, and because they need mobility in our world.
On the other end, especially after Uber's fatality, some are arguing that any testing of robocars with safety drivers is reckless, not just the sloppy way Uber did it.
The Rand study cited above claims that you can't have proper statistical certainty about a high safety level until you test for an untenable number of miles, and it has made people speculate that deployment should be delayed until a way to prove safety is figured out. Corporate boards, making decisions on deployment, must ponder how much liability they are setting the company up for if they deploy with uncertain safety levels.
A learning safety technology
Robocars are a safety technology. They promise to make road travel safer than human driving, and seriously cut the huge toll of deaths and injuries. Unlike many other life-saving technologies, though, they "learn" through experience. I don't mean that in the sense of machine learning, I mean it in the sense that every mistake made by a robocar gets corrected by its software team, and makes the system better and safer. Every mile in a place where surprise problems can happen -- which primarily means miles on public roads -- offers some risk but also makes the system better. In fact, in many cases, taking the risk and making mistakes is the only way to find and fix some of the problems.
We might call this class of technologies "risk-improved" technologies. Other technologies also improve, but not like this. Drugs don't get better but we learn how to administer them. Medical procedures get better as we learn how to improve them and avoid pitfalls. Crashes in cars taught us how to make them safer.
Software is different though. Software improves fast. Literally, one mistake made by one car in one place will be known immediately to its team. They will understand it quickly and have a fix, in some cases within hours. Deploying the fix immediately is risky, so it may take days or weeks, but it's not like other fields where changes take years. Since serious mistakes are made in public, the lesson can often be learned by all developers, not just the one who had the event. (Software also notoriously breaks by surprise when you try to fix it, because it's so complex, but the ability to improve is still far greater than anything else.)
Likely assertions about the future path
I believe the following assumptions about the development and deployment of robocars are reasonable, though every one of them can be argued with.
- At least in the early years, improvements in their safety level will come with time, but mostly they will come with miles on real world roads. The more miles, the more learned about what happens, the more problems fixed, the more data about what works is gathered.
- Some of this learning can and should be done in simulators and on test tracks. However, this is not as effective, and we are a long way from making it approach the effectiveness of on-road testing, if we ever can.
- Deployment will largely begin the day the teams are ready, and will proceed along an exponential growth curve, slowing down as markets saturate. For example, if it starts at 0.25% of saturation and doubles every year, it will reach saturation in about 9-10 years from when the effort starts. If it starts 1 year later, it (roughly) reaches saturation 1 year later. (A simple sketch of this curve follows this list.)
- This deployment curve will also apply in other countries. Chances are if the most eager places (USA, China) are a year later, then the other places are also pushed back, though possibly not a full year.
- Safety levels will also increase with time, though that slows down as increasing safety gets harder and harder due to diminishing returns. It is assumed the vehicles can get to at least 5x better than the average human, though not much more than 10x better.
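Here is the simple deployment-curve sketch promised in the list above. The 0.25% starting share and yearly doubling come from the assumption stated there; the hard cap at saturation is an illustrative simplification:

```python
# Minimal sketch of the deployment-curve assumption above: start at 0.25% of
# saturation, double every year, cap at saturation. A launch delayed by one
# year just shifts the same curve one year to the right.

def adoption(years_since_launch: int, start_share: float = 0.0025) -> float:
    if years_since_launch < 0:
        return 0.0
    return min(1.0, start_share * 2 ** years_since_launch)

for year in range(12):
    print(f"year {year:2d}: {adoption(year):7.2%}")
# Reaches ~100% of saturation around year 9 under this model.
```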
In particular, the following steps take place:
- Off-road testing, where no civilian is put at risk
- On-road testing with safety drivers, which generally appears very safe if done properly, but has failed when not done properly
- Early deployment with no safety drivers, at the chosen safety level (probably comparable to human drivers), with the safety level increasing to be far superior to human drivers
- Growth -- probably exponential like popular digital technologies, with improvement in safety tapering off as limits are reached
- Saturation -- when growth tapers off, at least until some new technology attracts additional markets.
Later, we'll examine ways in which these assumptions are flawed. But they are closer to correct than the counter assumptions, such as the idea that safety improvement will go just as fast without road deployment, or that saturation will occur on the same date even if deployment begins a year later.
Taking them as given for the moment, this leads to a strong conclusion. Almost any delay will lead to vastly more deaths. If "reckless" testing speeds up development and/or deployment, it leads to vastly fewer deaths. Even ridiculously reckless deployment, if it speeds up development and deployment, saves immense numbers of total lives.
The reason is simple -- the deaths resulting from risky deployment or development happen in the first years, when the number of miles driven is very low. It doesn't matter if your car is 80 times worse than humans and has one fatality every million miles; you're "only" going to kill 100 people in your first 100M miles of testing. But your eventual safer car, the one that goes 500M miles without a fatality, will be the one driving a trillion miles as it approaches saturation. And in replacing a trillion miles of human driving, 12,500 people who would have died in human-caused crashes are replaced by 2,000 killed in robocar crashes -- for every year that deployment got delayed.
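Worked through with the rates used in that paragraph (one fatality per million miles for the bad prototype, one per 80 million for average humans, one per 500 million for the mature system, and a trillion robocar miles near saturation), the comparison looks like this:

```python
# The trade-off in the paragraph above, worked through with the post's own rates.

TESTING_MILES = 100e6
bad_prototype_rate = 1 / 1e6      # 80x worse than humans: one fatality per million miles
human_rate = 1 / 80e6             # average human driver: one fatality per 80M miles
mature_robocar_rate = 1 / 500e6   # eventual system: one fatality per 500M miles

deaths_during_risky_testing = TESTING_MILES * bad_prototype_rate       # 100

SATURATION_MILES = 1e12           # a trillion robocar miles replacing human driving
human_deaths_replaced = SATURATION_MILES * human_rate                  # 12,500
robocar_deaths = SATURATION_MILES * mature_robocar_rate                # 2,000
net_saved = human_deaths_replaced - robocar_deaths                     # 10,500

print(deaths_during_risky_testing, human_deaths_replaced, robocar_deaths, net_saved)
```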
The magnitude of the difference is actually even larger. When I suggested 100 lives might be lost to 100M miles of testing, I gave very little credit to the developers. Waymo has driven 10M miles and caused only one minor accident. They are doing some runs with no safety drivers, indicating they think they've already reached the human level of safety. They did this on fairly well regulated roads, so there is much more to do, but it could well be that the great future benefits come at no cost in injuries or deaths, at least as they have done it.
Uber has not done so well. They had a fatality, and after not very many miles. They were reckless. Yet even their reckless path would still result in a massive net savings of lives and injuries. Massive. Immense. The reality seems to be that nobody is likely to get to 10s or 100s of millions of miles with a poor quality vehicle unless they are supremely incompetent, far less competent than Uber.
The dangerous car has to learn
As I wrote above, computer technology is adaptive technology. Each mistake made once is, in theory, a mistake never made again. When a human kills somebody on the road, it almost never teaches other humans not to make that mistake. If it does, it does so very slowly, and mostly through new laws or changes in road engineering. It might even be argued that the rate of improvement in a robocar is fairly strongly linked to the amount of testing. The more it drives, the more it is improved. The early miles uncover all the obvious mistakes, and soon, they start coming less often. 100,000 miles might get it to a fairly competent level, but a million more might be needed for the next notch, and 10 million for the one after that. But that also means that it drives these larger numbers of miles with less risk. As mileage grows, the system is improved and risk per mile falls. Total risk per year does not grow as quickly as miles -- not nearly as quickly.
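A toy model of that dynamic, with a purely illustrative improvement schedule (starting rate, fleet growth, and "halve the rate per 10x of experience" are all assumptions, not data), shows how total expected incidents can grow far more slowly than miles driven:

```python
import math

# Illustration of the learning effect described above: annual miles grow fast,
# but the per-mile incident rate improves with cumulative experience, so total
# expected incidents grow far more slowly than miles.

cumulative_miles = 0.0
miles_this_year = 100_000          # first year of road testing (assumed)
for year in range(1, 7):
    cumulative_miles += miles_this_year
    notches = max(math.log10(cumulative_miles / 100_000), 0)  # one "notch" per 10x miles
    rate_per_mile = 1e-5 / (2 ** notches)                     # rate halves per notch (assumed)
    print(f"year {year}: {miles_this_year:>12,.0f} miles, "
          f"{miles_this_year * rate_per_mile:7.1f} expected incidents")
    miles_this_year *= 3                                      # fleet grows ~3x per year (assumed)
# Miles grow ~243x over six years while expected incidents grow only ~40x.
```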
We are not pure utilitarians
If you simply look at the body count, there is a strong argument to follow the now maligned credo of "move fast and break things." But we don't look purely at the numbers, for both rational and emotional reasons.
The strongest factor is our moral codes, which treat harm caused by us very differently from harm caused by others. We feel much worse about harm caused by ourselves than about harm caused by others, or by nature, even though the pain is the same. We feel even more strongly about harm caused by ourselves that we could have prevented.
We see this in medicine, where the numbers and logic are surprisingly similar. If we released new drugs without testing, many would die from those that are harmful. But far, far, far more would be saved by the ones that actually work. Numerical studies have gone into detail and shown that while the FDA approval process saves thousands, it leaves millions to die. We allow, and even insist on, this horrible tragedy because those who are not saved are not killed directly by us; they are killed by the diseases which ail them.
All the transportation technologies killed many in their early days, but modern culture has grown far less tolerant of risk, especially mortal risk. It makes rare exceptions: Vaccines may seriously harm 1 in 10,000 (it varies by vaccine) but are recommended even so because they save so many. The makers are even given some immunity from liability for this.
In addition, whatever the math says, we are unlikely to tolerate true "recklessness," particularly if it is pointless. If there is a less risky path that is just as fast, but more expensive, we want people to take it. Sending cars out without safety drivers is very risky, and the cost of safety drivers is affordable to the well-funded companies. (Indeed, the safety drivers, by gathering data on mistakes, probably speed up development.)
It might seem this logic would encourage any risk that promises to save lives. That we should sacrifice babies to demons if it would get us deployment and saturation a day sooner. But we don't think this way, which is why we're not pure utilitarians, and are at least partly deontological (caring about the intrinsic right or wrong of our actions). Different moral theories strike different balances between the two evils under debate -- namely putting the public at extra risk due to your actions, and leaving the public at higher risk due to your inaction.
Whatever non-utilitarian philosophy might be advocated, however, when the numbers are this large, it still has to answer the utilitarian challenge: is the course advocated worth the huge numbers? We might well believe it is better to prevent 100 immediate deaths even at the cost of 40,000 future deaths -- but we should understand numerically that this is what we are doing.
There is of course tremendous irony in the fact that some of these questions about harm through action and inaction are at the core of the "trolley problem," a bogus version of which has become a pernicious distraction in the robocar world. Indeed, since those who discuss this idiotic problem usually end up declaring that development should slow until it is answered, they find themselves choosing between avoiding the distraction and causing delay in development -- and the loss of many lives. The philosophy-class trolley problem is present in these high-level moral debates, not in the operation of a vehicle.
Public reaction has a big effect on the timeline
One of the largest factors altering this equation is that early mistakes don't just cause crashes. Those crashes, especially fatalities like Uber's, will cause public backlash which has a high chance of slowing down both development and deployment. Uber had to shut down all operations after their incident, and some wondered if they would ever come back. Other companies besides Uber felt the problem as well. There will be backlash from the public, and possibly from regulators.
The public can be harsh with its backlash. When Toyota had reports of accelerator pedals sticking, it caused them much pain. While the matter is still argued, and the Toyota software was pretty poor, the final report generally concluded that there was not actually a problem. Toyota still took a hit.
On the other hand, even with multiple people killed while driving with Tesla Autopilot (without paying proper attention) and several investigations of those crashes, there seems to be no slowdown in the sales of Teslas or Autopilot, or in the price of Tesla stock. This is in spite of the fact that many people mistakenly think the Tesla is a kind of robocar. This factor is very difficult to predict.
This effect can be wildly variable, depending on the nature of the incidents. Broadly, the worst case would be fatalities for a vulnerable road user (like a pedestrian) particularly a child. Many other factors will play in, including the apparent preventability and how identifiable the human story behind the tragedy is. Uber was "lucky," and Elaine Herzberg very unlucky, that their fatality involved a homeless woman crossing at a do-not-cross-here sign.
The public is disturbingly inured to car crashes and car fatalities, at least when caused by humans. They barely make the news. Even the 5,000 annual fatalities for vulnerable road users don't get a lot of attention.
Robocars are risk-improved, but have a counter flaw: They put non-participating members of the public at risk. Aircraft rarely kill people on the ground. Drugs don't kill people who don't take them. Cars (robotic and human driven) can kill bystanders, both in other cars, and vulnerable road users. This does not alter that overwhelming utilitarian math, but it does alter public perception. We are more tolerant of casualties who knowingly participated, and much, much less of 3rd party casualties.
Legal reaction
In addition to public reaction, the justice system has its own rules. It does not reward risk-taking. In the event of an accident, the people saved in the future are not before the jury. High risk taking can even result in punitive damages in rare cases, and certainly in higher product liability negligence claims.
Thinking beyond individual casualties to analysis of risk
Each car accident is an unintended event, each death a tragedy. The courts and insurance claims groups handle them in accord with law and policy. But policy isn't about individual events, but rather risk. If you, or your robot, hurt somebody, the courts act to make them whole as much as possible, at your expense. This will always be the case. If you show a pattern of high risk (such as a DUI) you may be banned from the road.
Public policy and our philosophy, however, mostly deal with risk rather than individual tragic events. Every driver who goes on the road puts him/herself at risk, and puts others at risk. For most people, this is the greatest risk they place others in. When a person or company deploys a robocar on the road they are also putting others (and the passenger) at risk. It's a similar situation. A mature robocar is putting those people at far less risk than the driver is, though it's never zero. A prototype robocar with good safety drivers appears to be also putting the public at less risk than an ordinary driver. A prototype robocar with a poor safety driver is putting the public at more risk.
Our instinct is to view the individual accident and death as the immoral and harmful act. We may be better served by judging the morality of the degree to which we place others at risk. If you look at the original question, "Is it moral to kill a few additional people today to save hundreds of thousands in the future?" you may say no. If the question is "Is it moral to very slightly increase the risk on the roads for a short time to massively decrease it in the future?" the answer may be different.
Aggressive Development with Reasonable Prudence
All this leaves us with a dilemma. If we were purely utilitarian, we would follow a rather cold philosophy: "Any strategy which hastens development and deployment is a huge win, as long as bad PR from incidents does not slow progress more." And yes, it's coldly about the PR of the deaths, not the deaths themselves, because the pure utilitarian only cares about the final number. The averted deaths from early deployment are just as tragic for those involved, after all, but not as public.
We must also consider that there is uncertainty in our assumptions. For example, if the problem is truly intractable, and we never make a robocar safer than human drivers, then any incidents in its development were all negative; the positive payback never came. The risk of this should be factored into even the utilitarian analysis.
All of this points to a strategy of "aggressive development with reasonable prudence." To unpack this, it means:
- Risk should be understood, but taken where it is likely to lead to faster development. Being overly conservative is highly likely to have huge negative results later.
- If money can allow reduced risk while maintaining speed of development, that is prudent. However, restricting development only to large super-rich companies likely slows development overall.
- Risks to external parties, especially vulnerable road users, should get special attention. However, it must be considered that many of those saved later will be such parties.
- Combining those factors, what can be done on test tracks and in simulation should be done there -- but what can't be done as quickly in those environments should be done on the public roads
- Needless risk is to be discouraged, but worthwhile risk to be encouraged
The next level of reasonable prudence applies to deployment. While (outside of autopilots) nobody is considering commercial deployment of a vehicle less safe than human drivers, this math says that it would not be as outrageous a strategy as it sounds. Just as delay in development pushes back the date of deployment, delaying deployment pushes back saturation. For it is at saturation that the real saving of lives comes, when robocars are doing enough miles to put a serious dent in the death toll of human driving, by replacing a large fraction of it.
It should be noted that the early teams, such as Waymo, probably were not aggressive enough, while Uber was not prudent enough. Waymo appears to have reached 10 million miles and a level of safety close to human levels, at least on simple roads, and is close to deployment. They did it with only a single minor at-fault accident. They did this by having a lot of money and being able to craft and follow a highly prudent safety driver strategy.
Uber, on the other hand, has not reached those levels. They tried to save money on safety drivers and it came at great cost. While no team is immune from wanting to save money, safety driving was not the place to cut corners.
The assumptions
Many will challenge this reasoning by challenging the numbers which suggest it. I do believe there are many open questions about the assumptions which can be examined, but the numbers are so immense that this is not a priority unless the challenge suggests an assumption is wrong, not by just a little, but by a few orders of magnitude.
That this is doable at all
There is a core assumption that making a robocar that can drive much more safely than the average human is possible. If it's not possible, or very distant in the future, then these risks are indeed unjustified. However, there is a broad consensus of belief that it is possible.
We need road testing
Some question the need for the risk of on-road testing. With more work, we could find ways to test more things on test tracks and in simulators. This is probably true, but few believe it's completely true. In any event, nobody would doubt that doing this work takes time, and thus delays development. So even if you can find a risk-free way to test and develop a decade from now, the math has caught up with you.
Delayed deployment is delayed saturation
It's possible, but seems unlikely, that if deployment is delayed, either due to slowing down development or to waiting for a higher level of safety, growth would happen even faster after the delay, and penetration would "catch up" to where it would have been.
Of course, we've never gotten to run that experiment in history. We can't test, "If smartphones had been launched 4 years later, would it have taken 4 more years for them to be everywhere?" We do know that the speed of penetration of new technologies keeps going up. The TV took many decades to get into every home, while smartphones did it much faster, and software apps can do it in months. Car deployment is a very capital- and labor-intensive thing, so we can't treat it like a software app, but the pace will increase. Just not by an order of magnitude.
Penetration speed depends on many things -- market acceptance and regulations for starters. But they follow their own curve. The law tends to lag behind technology, not the other way around, so there is a strong argument that the tech has to get out there for the necessary social and regulatory steps to happen. They won't happen many times faster if the deployment is delayed.
Human drivers will get digital aid and perform better
Perhaps the largest modifier to the numbers is that human drivers will also be getting better as time goes on, because their cars will become equipped with robocar-related technologies from traditional ADAS and beyond. So far ADAS is helping, but less than we might expect, since fatalities have been on the rise the last 3 years. We can expect it to get better.
It is even possible that ADAS could produce autopilot systems which allow human driving but are almost crash-proof. This could seriously reduce the death toll for human driving. Most people are doubtful of this, because they fear that any really good autopilot engenders human complacency and devolves to producing a poor robocar rather than a great autopilot. But it's possible we could learn better how to do this, and learn it fast, so that the robocars are producing a much more modest safety improvement over the humans when they get to saturation.
Conclusion
It's unclear if society has faced a choice like this before. The closest analogs probably come from medicine, but in medicine, almost all risk is for the patient who wants to try the medicine. The numbers are absolutely staggering, because car accidents are one of the world's biggest killers.
This means that while normally it is risk which must be justified, here it is also caution that must be justified.
Teams, and governments, should follow the policy of aggressive development and reasonable prudence, examining risks, and embracing the risks that make sense rather than shying away from them. Government policy should also help teams do this, by clarifying the liability of various risk choices to make sure the teams are not too risk-averse.
That's quite a radical suggestion -- for governments to actually encourage risk-taking. Outside of wartime, they don't tend to think that way. Every failure because of that risk will still be a very wrong thing, and we are loath not to have done everything we could to prevent it, and especially to do anything that might encourage it. We certainly still want an idea of which risks are reckless and which are justified. There is no zero-risk option available. But we might decide to move the line.
In spite of this logic, it would be hard to advise a team to be aggressive with risk today. It might be good for society, and even good for the company in that it could become a leader in a lucrative market, but the public reaction and legal dangers can weigh against this.
We might also try to balance the risks as guided by our morals. Liability might be adjusted to be higher for harm to bystanders (especially vulnerable road users) and lower for willing participants.
We should not be afraid of low, vague risks. A vehicle that is "probably safe" but has not been proven so in a rigorous way should still be deployable, with liability taken by those who deploy it. There should not be a conservative formal definition of what safe is until we understand the problem much more. If some risk is to be forbidden, by law or by liability rules, there should be a solid analysis of why it should be forbidden, what forbidding it will save, and what it will cost. We must shy away from worrying about what "might happen" when we are very unsure, because we are pretty sure about what will happen on the roads with human drivers if deployment is delayed, and that's lots and lots of death.
Comments
Anonymous
Thu, 2018-12-20 16:39
Is there any evidence that
Is there any evidence that releasing robocars earlier is actually going to cause them to be used by reckless drivers sooner?
brad
Thu, 2018-12-20 17:13
Evidence
No, it's a logical argument. Or rather, it's about the very slightly different question of releasing them later. Everything has an adoption curve from its time of release. Of course, some things are released too early and don't get their adoption until later (or after an improvement.) However, if a product is ready for release, then yes, releasing it later seems likely to push back adoption. The iPhone was released in 2007, and in 2012 it had a certain adoption rate X. If it had been released in 2009 instead, would it have reached rate X by 2012? It seems very unlikely. Would it have gotten the software ecosystem and everything else that happened around it by 2012?
Even with a product released too early, the early release teaches lessons which hasten the real release when it happens.
However, I do believe that many regions are now ready for robocars, at least for safe ones. It is not too early. Many people are chomping at the bit for when they can get one. People are paying $5,000 to get very basic functionality added to their Tesla.
Anonymous
Sat, 2018-12-22 12:10
not logic
It's a question of human choice, not logic. Robocars are a huge win if they replace the average driver. But maybe not if they only replace the average Uber driver. It's not at all clear that the initial robocars will be doing the former.
IMO they should be better than the average driver *who follows all the traffic laws* before they should be allowed on the road. We make humans sit in the passenger seat and observe for ~15-16 years before we let them get behind the wheel, even supervised.
brad
Sat, 2018-12-22 13:45
Not be allowed on the road
So let's unpack that. At some point they will match the average human driver. Some other time, let's say 2 years later, they will match the best human drivers.
If deployment is delayed by those 2 years, it costs perhaps 3 or 4 lives, because in the early 2 years they won't drive more than a billion miles. If saturation is delayed by 2 years it costs 10,000 lives.
So are you saying they should not be allowed on the road until they can match those good drivers? At the cost of 10,000 minus 4 lives?
Anonymous
Sat, 2018-12-22 19:12
No
No, I'm not, because I think you are presenting a false dichotomy in addition to exaggerating the numbers and making guesses about the results.
And I think that's one of the problems with utilitarianism, which causes me to regard utilitarianism as evil.
You shouldn't be allowed to kill 4 *innocent* people just because you think that'll stop the deaths of 10,000 other people. Killing innocent people "for the greater good" is wrong. I *am* saying that.
brad
Sat, 2018-12-22 23:43
And most people agree with that
However, the builders of the robocar are not attempting to kill any number of people, innocent or not. Their intent (which is a large part of what matters) is to make driving safer. Nobody goes out intending to kill, but we all go out knowing there is a chance of doing so. We know there is more chance when we speed. We know there is more chance when we drive in bad weather. We know there is more chance on some roads than others. But we do it. Are we all evil?
The robocar developers' primary goal is to reduce that chance. Reduce it a lot. If it's moral to drive in bad weather just because you want to get to a party, even though that's a greater risk than these cars present, is it immoral to take risks to develop the cars to save millions of lives? Or is the party that good?
Anonymous
Sun, 2018-12-23 07:58
More of a chance of killing?
More of a chance of killing? I don't think so. The kinds of drivers I want to take out of the average, when calculating how safe a robocar should be before it should be allowed on the streets, are those that commit serious crimes. Drunk drivers. Reckless drivers. They shouldn't be allowed on the roads, and the crashes they cause shouldn't be counted in the average.
Speeding a little bit, driving carefully in bad weather, these things don't significantly, if at all, contribute to the chances of *killing* someone.
And speeding is already illegal. If you drove a million miles while speeding you'd probably get pulled over enough times that you'd lose your license. I just want to hold robocar manufacturers to the same standard as everyone else. If the cars can drive as well as people who aren't breaking the law, then it should be allowed. If the cars can only drive as well as people who break the law, then the operator ought to be deemed to be breaking the law.
brad
Sun, 2018-12-23 21:57
Contribute to the chance
I would be interested in statistics, if you can find them, of what fraction of different types of accidents, including fatalities, are caused by the reckless/drunk contingent, and what are caused by ordinary folk. I am sure that many accidents are committed by them -- alcohol has a role in 40% of fatal accidents, but "role" can include that a passenger was drinking, so I am not sure what the stat is for truly drunk drivers. In any event, I suspect half of the fatalities are from ordinary folk, but it's just a guess.
And yes, the ordinary folk do indeed take risks all the time that increase the chance of death for themselves and others. I know I do.
I speed. So does everybody else around here. I speed almost all the time. I have probably driven 300,000 miles speeding. I have never gotten a speeding ticket in the USA (I got a couple 30 years ago in Canada and 1 from speed cameras in Switzerland.) So no, you definitely don't get pulled over enough times to lose your licence.
Anonymous
Mon, 2018-12-24 07:55
Let's go with half. Or let's
Let's go with half. Or let's just take drunk driving. 30% of fatal crashes are caused by drunk driving, and probably around 1.5% of drivers at night on the weekends are drunk. So that alone seriously skews the numbers. And drunk drivers aren't likely to take a robotaxi. If they were, they'd take a regular taxi.
As far as speeding, if the car operators are willing to deal with the results of the speeding tickets, including loss of license to operate after a certain number of tickets (across all vehicles they operate), then I guess they can speed. If their speeding causes a death, they can face vehicular manslaughter charges. In any case, the kind of speeding that you did for 300,000 miles probably isn't the kind that significantly, *if at all*, increases the risk of a serious car crash. It wasn't really what I was talking about.
brad
Mon, 2018-12-24 11:20
If it's half
So the cars have to get twice as good as the average driver. Or 3 times. I think you would find there are many who aim for that target.
No, when I speed, I am increasing my risk and the risk to others. Not a lot for just me, but some. 85% of the other people on the road are speeding with me. Together we are causing a more significant increase in risk for all.
Anonymous
Mon, 2018-12-24 21:49
Yes
Yeah, 1/3 to 1/2 of the number of major crashes should be low enough.
Percentage-wise the increased risk of driving at the same speed as most of the other drivers is little to nothing. It may even be safer in many cases.
brad
Thu, 2018-12-27 09:51
Speed
It is probably true that if most people are going 75, joining them is better than trying to drive 65 among them. It is not true that this is safer than everybody driving 65, though. In any event, there are always the top 20 percent of drivers who are leading the pack, not going with the flow. I have been among them at times. We are taking more risk for ourselves and others than we need to. The point is, driving is full of risky activities, taken for trivial reasons, in comparison to risk taken with the goal of making driving much, much safer in the long term.
Anonymous
Thu, 2018-12-27 12:27
If you're speeding more than
If you're speeding more than 80% of other drivers, and you kill someone, you should go to jail.
Anonymous
Sat, 2018-12-22 20:53
Let's unpack it
"At some point they will match the average human driver."
So, for instance, they drive normally most of the time, but at night on the weekends they drive like drunk drivers 1.5% of the time?
In what way do they "match the average human driver"? It's obviously unlikely that they'll match the average human driver across all statistics at the same time. So what exactly does that even mean?
"Some other time, let's say 2 years later, they will match the best human drivers."
That 2 years figure seems awfully arbitrary. This goes back to my first question. What exactly is the failure scenario that's causing them to be such bad drivers a significant portion of the time? Why can't we geofence around it? In what situations are they killing people? Why can't we slow them down in these situations?
In most environments the car is going to have to be extremely poorly designed to kill innocent people. I don't think it'll take anywhere near 2 years to get from the average human driver to the average human driver who follows the law. Though I'm still not sure what exactly "the average human driver" means. How do you calculate the average of 15 drunk drivers and 985 sober drivers? You can use statistics, but...see my first comment.
Is there somewhere in particular that you're getting that 2 years figure from?
"If deployment is delayed by those 2 years, it costs perhaps 3 or 4 lives"
Is this "net" lives? As in, for instance, it kills 100 innocent people, but prevents the death of 62 innocent people and 42 drunk drivers? Or is it just 3 or 4 innocent lives lost altogether? Can you remind me how to calculate this?
"If saturation is delayed by 2 years it costs 10,000 lives."
*Lots* of assumptions here. The two biggest flawed assumptions I can think of are that adoption follows the same curve regardless of when deployment starts and that adoption is evenly distributed among drivers. The latter is especially unlikely. Adoption is likely to initially be robotaxis, and will mostly replace taxi drivers (including uber/lyft drivers). And I haven't found any statistics, but I suspect that taxi drivers are less likely than the average driver to be involved in a serious car crash. At the least they're probably less likely to be drunk than the average driver on the road (and drunk driving alone accounts for around a third of all fatalities).
Moreover, and I guess this goes to the first point, I don't think the deployment of robotaxis will greatly affect the deployment of personally owned robocars (or personally owned pseudo-robocars a la Tesla "full self driving"). And those I think are going to be regulated differently. And while I don't agree with utilitarianism, it probably makes sense from a utilitarian standpoint to regulate personally owned robocars less than taxi-company-owned robotaxis. (The taxi drivers who are replaced by robotaxis are less likely to cause fatal car crashes than the non-taxi-drivers who are replaced by personally owned robocars.)
brad
Sat, 2018-12-22 21:19
Match the human driver
What is typically meant is that the vehicles have rates of accidents which suggest their rate is similar to that of average human drivers. The statistics on humans are extremely well known -- an entire industry has rooms full of trained mathematicians who do nothing but study them all day. The stats on robots are less well known but not unknowable. Admittedly one statistic is hard to figure, namely fatalities. Except for Uber, which ignored every reasonable standard the other teams all worked out, there have not been any. They have not driven far enough to have one, even if they were at the level of the average American (80M miles).
The 2 years is indeed a made up number. Name your own. It doesn't matter a lot what it is, except the longer it is, the more lives are lost.
No, I am saying if the deployed vehicle is worse than a human driver, then 2 more years of testing will, during those two years, kill a number you can count on one hand, if that. Unless it's a lot worse than a human driver.
I don't claim adoption follows the same curve regardless of when deployment starts, but I do claim that it is very hard to assert that adoption is vastly faster if it begins later. As in, if you start deployment in 2020 and would get to saturation (tapering off of growth) in 2030, I think it's a very hard argument to suggest that starting in 2023 would also get to saturation in 2030. But even if that's true, the earlier deployment is still a big win. But make the case that it's true.
Anonymous
Sat, 2018-12-22 21:50
0
Yes, rates of accidents. The thing is, not all accidents are equivalent. Even accidents of the same type (minor, major, fatality) aren't equivalent. And it's unlikely that a robocar will have accidents in the same proportions (by type) as human drivers. So then it becomes a question of how many major accidents that are non-fatalities are equivalent to one fatality? How many fatalities of jaywalkers are equivalent to how many fatalities of speeders are equivalent to how many fatalities of children playing in the street?
"The 2 years is indeed a made up number. Name your own."
0 years. I just don't see much, if any, time that the cars will be in-between. Either things are done right, and the cars should be better than humans (within the operational design domain), or things are done wrong (see Uber) and the cars are much worse than humans. But maybe I'm grossly misinformed on that.
"But make the case that it's true."
I think you have to separate robotaxis from personally-owned robocars (which I'll just call robocars from now on). In terms of robotaxis, once they're safer than human taxi drivers, I think they'll be adopted by taxi companies as quickly as they can be built. It's not going to be much of a curve.
In terms of personally-owned robocars, there will be much more of a curve. On the other hand, there's a lot of flexibility here for the manufacturers to work around the regulations on driverless vehicles by playing in the grey area between driverless vehicles and autopilot. I don't think a regulation against driverless vehicles will significantly affect the adoption of Tesla's "full self-driving" or something similar, if at all.
brad
Sat, 2018-12-22 23:48
It's not zero time
As you get safer and safer, it gets harder and harder to improve. The gap between human level (every 80M miles) and some nice super-human level like 400M miles seems small but in other ways it is large. The way you get gains at that level is only, as far as we currently know, through experience on the road. Encountering more and more troublesome situations until you know there are fewer and fewer of them left. That doesn't take zero time.
It's like the 99% rule. The first 99% takes 99% of the time, and the last 1% takes the other 99% of the time. Except it's worse.
Anonymous
Sun, 2018-12-23 08:08
I don't know
You're making assertions here that go against my intuition and are not backed up by any evidence. So I really don't have much of a response.
You say that gains at that level can only be derived through experience on the road. Well, depending what you mean by that, I either agree or don't understand why. If what you mean is just that literally the car has to be on the road, yes, I can see this. How humans will act in certain situations is not something that can be easily simulated. However, there's no reason that the car has to be in control of the driving in order to observe and record these things.
In any case, I'm not talking about the gap between human level and super-human level. I'm talking about the gap between average human level and average human level after subtracting the small percentage of humans who intentionally commit serious traffic infractions and cause more than half of the serious car crashes.
brad
Sun, 2018-12-23 22:00
Well, you may think that
But essentially every single robocar expert in the world, including all those on all the teams of any significance, feel that on-road testing (with safety drivers) is essential to developing their car. Perhaps in the future an alternative will be presented and some will try it out, but for now, your intuition goes against the conclusion of literally everybody who has any skill or experience. Sorry to be so harsh, but that's the way it is for now.
Anonymous
Mon, 2018-12-24 07:46
Frankly, I think you're
Frankly, I think you're making that up. Maybe these experts think that testing with safety drivers is the *best* way (the cheapest way, the fastest way, etc.). If you want to argue that, you can (but know that it's a normative question, not a technical one). It's clearly not the *only* way, though.
Clearly not, since *that isn't how humans learn how to drive*.
brad
Mon, 2018-12-24 11:17
Clearly not
The cars do not learn to drive the way humans do. It is not yet within our capabilities to make a machine that can learn in the way humans do.
When everybody says it is the only way, they mean it is the only way currently within our capabilities. We probably could work hard and find other ways. It would take a few years. During those few years, millions will die.
Anonymous
Mon, 2018-12-24 21:55
So, in other words, it's *not
So, in other words, it's *not* the only way. It's the fastest way. At least, you think/hope it's the fastest way. And you think that's justifiable. You think it's justifiable homicide. Literally.
brad
Thu, 2018-12-27 09:43
Not literally
It is the only way at present, which makes it the fastest way. It is not yet known if there is a slower, safer way. Finding out whether there is costs lives.
Neither is justifiable homicide. For one thing, in a homicide a human has to kill another human. I don't think releasing a machine which has a risk of harming a human unintentionally would fit the definition of homicide. Nobody wants anybody to get hurt here, everybody wants to reduce that as much as possible. What is being discussed is arguably justifiable risk taking, not justifiable homicide.
Anonymous
Thu, 2018-12-27 12:25
Literally
Releasing a dangerous machine out onto the public streets and thereby accidentally killing someone is homicide. It may or may not be a criminal act, but when one human accidentally kills another human, it is a homicide.
You think it's justifiable, based on some measure of the number of lives lost in the alternative (although I'm not sure of the exact criteria).
I don't. In particular, I don't think it's justifiable to put one person's life at risk, without their permission, in order to prevent the death of someone else, or 10 or 20 or 100 someone elses.
And before you say that we do that all the time, maybe we do, but beyond a certain point of risk, it's illegal to do so. We can argue about the level of risk that should be illegal. Most roads that are 65 should probably be 75. But unless you think drunk driving should be legal, you shouldn't count drunk drivers in your statistics about what is a justifiable level of risk. (And the difference between 65 and 75, if it's measurable at all, which I still doubt, is tiny in comparison).
brad
Fri, 2018-12-28 11:04
Doing it all the time
As you note, we do it all the time. To pretty high levels of risk. For example, truck drivers kill 4,000/year in the course of their jobs, so we can get more convenient shipping. (Rail is safer and cheaper but takes far longer.) This is far, far, far, far, far, far more than prototype robocars could ever kill. While no one individual driver may add that much, many companies, with all their driving, create vast amounts of risk, again dwarfing the risk that Waymo or even Uber ATG are putting the public to. In fact, if you want to put the risk of Uber's human drivers on Uber, then Uber's cab business is putting the public at vastly more risk than the Uber ATG self-driving project ever has or will.
So you can claim that nobody has the right to put the public at risk in this way, but it's simply not true. They do it all the time, and they "have the right" -- in the sense that they are not stopped a priori. Rather, if they cause harm, they are required to pay for it in court. That's the way our system works. It does not forbid risk, but it puts the cost of the risk on those who cause it as well as it can. Which is not perfect, but it's how it works, and of course it trades one cost (injury or death) with a financial cost, though it is hard to compare those. Yet compare them we do. Put dollar values on human life we do. Every day.
We do it because we don't want a world where we forbid risk. That would be far worse, for so many reasons. We only forbid the really clear and obvious and extreme risks and sometimes not even those. And of course we are not uniform about it. We sometimes forbid minor risks, we don't really have a consistent set of rules.
Anonymous
Fri, 2018-12-28 15:31
Risk
You're right that we do allow some risk. You're wrong that we allow "pretty high levels of risk." The risk that we allow is the risk inherent in a human driving *and following all the traffic laws*. That risk is very tiny, especially in terms of crashes involving deaths.
Uber's self-driving project exceeded that risk. The portion of Uber's self-driving project where they were driving with emergency braking disabled and with only one (overworked and underpaid) safety driver grossly exceeded that risk.
Anonymous
Mon, 2018-12-24 08:11
Also, what about Tesla? Isn't
Also, what about Tesla? Isn't the vast majority of their testing done through shadow driving and simulation? You can say that the driver acts like a "safety driver" for Tesla, but there's a key difference: The driver of a Tesla is not under any pressure to *not* take over.
brad
Mon, 2018-12-24 11:22
Tesla
Tesla is not working on a robocar, whatever they say, and what they learn from driving is insignificant compared to what Waymo learns per mile. And yet we have seen several fatalities with Autopilot on.
Anonymous
Mon, 2018-12-24 21:38
Tesla is working on a robocar
Tesla is working on a robocar.
Several fatalities over more than a billion miles is pretty damn good.
brad
Thu, 2018-12-27 09:47
Tesla's record
I am curious as to the source of your number, but whatever the source, that number describes a human monitored semi-automated driving system, which while very nice for what it is, is not a robocar. Note that Tesla has another (good) factor altering their number, in that their cars have the highest crash safety ratings of any cars sold. If you crash in a Tesla you are less likely to die than in other cars. In the USA, the number is around 12 fatalities in a billion miles for average human drivers in average cars, with a large fraction of those fatalities being the driver or other occupants of the at-fault car. (the majority of fatalities are single car accidents.)
Anonymous
Thu, 2018-12-27 12:04
Link
https://electrek.co/2018/11/28/tesla-autopilot-1-billion-miles/
A "human monitored semi-automated driving system" is what all the companies have right now, apart from the few who sometimes go completely driverless. Although, during most if not all of the fatalities (with Tesla, and the Uber one), the human wasn't actually doing that job.
brad
Fri, 2018-12-28 11:10
Not quite
There is, in my view, a very large difference between a system that is designed to work only with human monitoring (which is what Tesla has) and a system which is designed to work without human monitoring, but is still in prototype phase, so is tested using human monitoring.
They are two very different things which look the same in a superficial analysis.
Anonymous
Fri, 2018-12-28 15:55
Kudos to Tesla
I think there *should* be a large difference between those two. But I also think that 1) Tesla is clearly building a system to work in certain environments without human monitoring - they're just not done, and regulations wouldn't allow them to release it right now even if they were done; and 2) Uber, by disabling (or not bothering to enable) emergency braking, clearly released a system that was *designed* to work only with human monitoring.
Of those two, I think Uber made a huge mistake, and hopefully has learned from it. Tesla, on the other hand, has found a way to get thousands of safety drivers to drive billions of miles for it for free (and without dealing with those pesky regulations). Kudos to Tesla.
Anonymous
Fri, 2018-12-28 16:55
both
To put it another way, Tesla is clearly building both. They're building a car meant to work with human monitoring, and they're also building a car meant to work without human monitoring. They're building the former in large part in order to gather data to help them build the latter.
I think that's ideal. I think that's the way you get from zero to robocar in a way that's safest. I think that's much better than a naive form of "safety driving" where you throw a driver into a buggy robocar and have them figure out how to balance between the contradictory goals of being safe and intervening as little as possible.
brad
Sun, 2018-12-30 11:56
Almost everybody outside the auto industry
Believes that approach is not a good one. You certainly can learn something by watching human drivers supervise an ADAS tool, but Tesla is not really watching us, not the way you want to. You need complete sensor logs of every anomaly.
And yes, I see the point that when Uber decided their system had too many false positives to emergency brake, you can argue it was "designed" to need human monitoring, but I don't agree with it. It's designed to not need it, but it is not yet fully functional. I ask what is the team trying to build, what will they ship. That is what they are designing for.
Tesla is designing for, and shipping, an ADAS autopilot. They say they also plan some day to ship full self-drive, but the product they sell is not that, nor designed to be that. But I still maintain there is a large difference, of kind, not of degree, between a prototype robocar and a shipping or prototype autopilot trying to be a robocar some day. As I said, they look similar, which may be confusing, so the counter to my point is not to say they look similar.
Anonymous
Sun, 2018-12-30 17:26
I'd like to see a source for
I'd like to see a source for your assertion about what almost everybody outside the auto industry believes. Not that it matters, because appeal to popularity is fallacious reasoning.
brad
Mon, 2018-12-31 09:55
Popularity?
When I say, "everybody outside the auto industry" I mean the people in the field outside the auto industry -- those at Waymo, Zoox, Uber, Lyft, and many other teams and startups in industry and academia -- those with expertise who are not at a big automaker. I include Cruise in this camp too even though it is at a big automaker.
The big automakers (mostly) came to this problem with an ADAS mindset -- we're good at ADAS so let's improve that until we get a robocar. The others did not and are not constrained by that thinking.
The source is me. I track the plans and activities of the teams, and the principle is nearly universal. With very few exceptions, belief in the incremental ADAS approach is strongly correlated with coming from the ADAS world.
Tesla is this interesting case in the middle. They are a car company, but with non-traditional origins. However, because they began their efforts using MobilEye (an ADAS product), their path was guided to be like the other car companies', but more aggressive and a bit smarter about it.
Anonymous
Mon, 2018-12-31 09:20
Also
I'd also like to see what your definition is of "robocar" and "ADAS," and if there's something in-between what it is. This would probably make a good post all to itself.
What Tesla is designing for is more, I think, than just an ADAS autopilot. I believe they want to take the *need* for human monitoring out of the equation as much as possible. They certainly have said as much. "Ultimately, you'll be able to summon your car from anywhere ... the car can physically get to you." At the same time, I don't think they're designing for exactly the same type of "robocar" as the taxi companies. I think they're actually designing for something better - a robocar/ADAS hybrid, I guess, if you think that robocar and ADAS cover the full range of possibilities.
brad
Mon, 2018-12-31 10:04
It's in the name
ADAS is driver assist. The human is responsible for the vehicle, working with technical controls. In robocar operations, the system is responsible for driving, the human making only strategic decisions, if that.
The big debate is whether there really is something in between them. There is the concept that NHTSA called level 3, which I call a standby robocar, but that's really a robocar that needs to exit its operating area at speed.
It is of course possible to have cars that work in different modes at the request of the driver or in different regions. That's obvious, but I don't consider it something "between" the two.
Anonymous
Mon, 2018-12-31 11:47
Permalink
Autopilot
I don't see where the hard line is between the two. Whether or not a human is responsible for the vehicle or is only making strategic decisions is subjective when you have a really good advanced autopilot.
brad
Mon, 2018-12-31 14:29
Permalink
Not really
A robocar has to be 1,000 to 10,000 times better than a "really good advanced autopilot." The robocar has to go a million miles without making a mistake that could lead to any crash, and perhaps 200 million miles before making a mistake that would cause a fatal crash. An advanced autopilot would be seen as very, very good if it could go 1,000 miles without a mistake. Even 100 miles is pretty good. In fact, Anthony Levandowski just made headlines by getting his autopilot to go 3,000 miles across the USA without needing an intervention, the first to do so, and he did it on a pre-selected route.
A difference of 10,000 times is not just a matter of degree. It's a difference of kind. And it causes a very different design philosophy.
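A quick back-of-the-envelope version of that gap, using only the round numbers above (these are illustrative targets from this thread, not measured data):

```python
# Illustrative round numbers from this thread, not measurements.
autopilot_miles_per_mistake_good = 1_000      # a "very, very good" advanced autopilot
autopilot_miles_per_mistake_ok = 100          # still "pretty good" for an autopilot
robocar_miles_per_crash_mistake = 1_000_000   # target: no mistake that could lead to any crash

print(robocar_miles_per_crash_mistake / autopilot_miles_per_mistake_good)  # 1,000x
print(robocar_miles_per_crash_mistake / autopilot_miles_per_mistake_ok)    # 10,000x
```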
Anonymous
Mon, 2018-12-31 17:48
Permalink
Why?
Why does a robocar have to go a million miles without making a mistake that could lead to any crash, and perhaps 200 million miles before making a mistake that would cause a fatal crash? Is this by your definition of "robocar"? Doesn't this contradict what you said above in this very article?
Even assuming that this type of perfection is necessary to be, by definition, considered a robocar (I asked you above for your definition to avoid this moving target, but whatever), why exactly is it that an autopilot can't achieve this? The difference between going 1,000 miles without a mistake and going 1,000,000 miles without a mistake is a difference of degree, not of kind, unless you think there is something fundamental that can't achieve the latter standard. And I'm not even sure what that would mean. If you can achieve that level of perfection in a robocar, surely you can achieve it in an autopilot, as there's nothing in the definition of "an autopilot" that requires less perfection (is there?).
I go back to my original question, because apparently there's something in your definitions that is still unstated. What is the defining difference between ADAS and a robocar? I'll add what is the definition of autopilot, to see if there's something fundamental about the definition of "autopilot" that puts it in the ADAS category and not the robocar category.
If for some reason you don't want to get into the specifics (maybe you feel it's too proprietary), please just ignore my questions and just say that. I can accept if you believe that there's a difference of kind and that you just don't want to say what it is.
brad
Mon, 2018-12-31 19:09
Permalink
Why does it have to go that far?
I thought that was what you were demanding. Of course, since you keep using "Anonymous" rather than putting in a pseudonym, I don't know who is who.
Somebody was suggesting the cars be as good as good human drivers. Average human drivers have a police-reported accident every 500K miles, so I took the proposed challenge to be a suggestion we might like the robocar to do 1 million miles. Likewise average human drivers go 80M miles before a fatality. Of course, the vast majority of human drivers never have a fatality!
So was it you who proposed this or somebody else? Please change the name field. Your browser will remember if you don't clear cookies.
I explained ADAS vs robocar in another post, perhaps with somebody else. Autopilot (Tesla) is very definitely an ADAS technology right now. I know, I have it. Autopilot is cool and all, but the reason people are impressed with it is they haven't seen what a real robocar is supposed to do.
FKA Anonymous
Thu, 2019-01-03 17:49
Permalink
Demands vs. definitions
I have some demands that I think that robocars *should* meet before they should be allowed on the roads without a safety driver. These demands are not the same as my definition of what a robocar is.
I'm not sure I even have a definition of robocar. That's one of the reasons I was asking you. My thought would be that Tesla Autopilot is both an ADAS technology and a level 2 robocar technology. I think with improvement (and regulatory approval, maybe) it could become a level 3 robocar technology (a human will be able to take "eyes off" the road until the system beeps because it needs assistance). The path from level 2 to level 3, at least on a simple operating domain like a controlled-access highway, is mostly a matter of fixing the bugs. I really don't see why Tesla can't improve autopilot to be able to handle controlled-access highways safer than a human. What is the scenario that they can't improve autopilot in order to handle? The path from level 3 to level 4 is less clear, in that there are a number of different ways to achieve it. Personally the one I'm most interested in would have a good map of the controlled-access highway system plus the ability to safely pull over to the side of the road if the human driver did not respond to a request to take over control when the exit is near. Being able to work or fall asleep on a long road trip is the number one feature I'd like in a car. In any case I expect the operating domain to be narrow at first, and gradually to be expanded.
This may or may not be the *best* way to build a robocar. But I can't see any reason why it's not possible.
You say "right now." Yes, right now Tesla's car needs a human monitoring it, even before it starts beeping at them. But right now Uber's car needs a human monitoring it too. And apparently at the time of their fatal crash they hadn't even implemented the part where it starts beeping at the human when a situation comes up that the car isn't programmed to handle.
brad
Thu, 2019-01-03 19:19
Permalink
There are no levels
I have several other essays on why the idea of levels is a dangerous misdirection. One particular confusion is that what they call level 3 is really an easier-to-implement variant of level 4; it has nothing to do with level 2. But research suggests it is a dangerous variant, and thus not that good a thing to work on, though indeed, in some constrained conditions like traffic jams and parking lots (i.e. low speed) it can be worth building.
A number of teams (including Google) have considered doing highway first. It's certainly the easiest driving problem, though also the most dangerous, so it's a tough call. However, it's also the least interesting. A car that can drive on the highway gives rich people some time back in their lives. Sweet, but not the world-changing effect of an urban taxi.
Again, my point is what is the design goal? Uber wants to build a robocar. Tesla wants a fancy ADAS. Tesla needs a human because it is designed to need a human. Uber needs a human because it is designed to not need a human, but is still in development.
A proper robocar is capable of unmanned operation.
FKA Anonymous
Thu, 2019-01-03 20:24
Permalink
ok
Tesla wants to build a robocar.
As far as levels, I understand that you reject them. But you haven't offered any alternative definitions.
brad
Thu, 2019-01-03 20:48
Permalink
I have, but it's been some time
Short summary. There is ADAS which is not that related to robocars, though it is interesting.
There is the full robocar, which is a vehicle safe to operate unmanned on some roads and conditions.
There is the robocar with a standby driver, which needs to move at speed from where it can operate to where it can't, and thus needs a human on standby to take over before it gets to the boundary zone. This is similar to what they call level 3.
So really, there's a robocar, and if people make it (they perhaps should not because it's dangerous) there's the standby mode. Everything else is simply where and when it can operate.
Anonymous
Thu, 2018-12-27 13:11
Permalink
Robocar?
A level 4 autonomous vehicle is a robocar, right? Tesla doesn't have one yet (most companies don't; maybe Waymo does), but they're building one. In fact, Musk claims they're trying to make a level 5 car.
brad
Fri, 2018-12-28 11:08
Permalink
Tesla projects
Tesla has indeed announced various claims. They have shown nothing to the public about their work on that, though, so it's hard to judge. All we see from Tesla is talk of autopilot and improving it. I would like to see them automate a supercharger station (so the cars automatically leave the supercharger and park in a regular spot after the guy who wants their spot unplugs the cable) first -- that would actually be useful. Though it is not clear if the car has proper close-in sensors to operate in a parking lot with pedestrians everywhere.
Anonymous
Fri, 2018-12-28 15:43
Permalink
Tesla
I'm not sure what you'd want them to show the public. In any case, I'm not asking you to judge them (I think it's pretty obvious you don't like them for some reason). I'm just asking you to recognize that some companies are working on the problem in a way that is significantly different from Uber and Waymo.
brad
Sun, 2018-12-30 12:06
Permalink
Don't like them
I don't have any particular "reason" to like or not like teams other than some are doing better and using better approaches. I am pleased with any approach that shows promise to work. I am less pleased with approaches that follow routes I think will be a waste, but I am also in favour of trying all the paths if people have the money and they can do so with reasonable safety.
This is also the approach of VCs. For example, George Hotz's comma.ai approach of all cameras in an open source box is one that I think less likely to succeed. It's not impossible, it's just a longshot bet. VCs try to invest in all the longshots since they win big if only one of them comes in. And so should the world, though it is tough for those who pour their life into a longshot that fails.
My main concern with Tesla has been that there is not enough clarity on the line between autopilot and robocar, and that a few people have lost their lives because of that confusion -- though not because Tesla doesn't explain that difference; it's more of a public attitude problem than theirs.
brad
Sun, 2018-12-30 12:08
Permalink
Oh, yeah, Anonymous
While I allow anonymous posting in the comments here, I would appreciate it if people filled in whatever consistent fake name they like so that when there is back and forth, readers can tell if it's one person or more who is posting anonymously.
Anonymous
Sat, 2018-12-22 21:57
Permalink
statistics
"Admittedly one statistic is hard to figure, namely fatalities. Except for Uber ignoring every reasonable standard other teams all worked out, there have not been any."
I wonder how many there would have been if a safety driver hadn't taken over, though. For Uber, probably a bunch. For the other teams, I have no idea.
brad
Sat, 2018-12-22 23:44
Permalink
Lots
Well, not lots of fatalities, but lots of accidents. That's how safety driving works. The car is not yet safe enough and will have accidents on its own. Given supervision, it has a very low rate of accidents -- for Waymo, quite a bit lower than ordinary human driving. For Uber, not so much, but they hired a safety driver who watched TV while she was supposed to monitor a crappy driving system.
Anonymous
Sun, 2018-12-23 08:15
Permalink
rules
I mention this because it's two different standards. To be allowed on the road with a safety driver, a robocar plus safety driver combination should be at least as good as a human driver who doesn't break the law. To be allowed on the road without a safety driver, the robocar itself should be at least as good as a human driver who doesn't break the law.
In both cases it's a tough thing to measure, as you have to guess about performance beforehand. So, to be clear, I'm not saying this as a codified standard, but just as a basic principle that should guide us toward coming up with the actual rules.
Anonymous
Sun, 2018-12-23 08:32
Permalink
disconnect
And maybe that's where there's a disconnect: I don't think there's any significant gap in development between "driving like the average driver" and "driving like the average non-reckless driver." Because the cars aren't going to drive recklessly. (Any more? Uber's did by disabling emergency braking, IMO, but I think that lesson has been learned.) So the question is, what kinds of mistakes is the car making? If the car is making mistakes (or mistakes are otherwise foreseeable) that can cause a fatal or otherwise serious crash in a certain operating domain, this needs to be fixed or that operating domain needs to be eliminated.
brad
Sun, 2018-12-23 22:02
Permalink
That's not how it works
Every car, in its early days out on the road (with safety drivers), was as bad as Uber's car in some sense. Not the specific flaws, but ones of similar scope. With safety drivers doing their job, Waymo and others have certainly attained the record you want of being as safe as a sober, competent, non-reckless driver, but they certainly don't do that well without the safety driver, at least for the first several million miles, it seems.
Anonymous
Mon, 2018-12-24 07:40
Permalink
I'm not sure what you're
I'm not sure what you're disagreeing with. Maybe it's that you think that what Uber did is acceptable? But "other people did it too," even if it's true, doesn't provide evidence that it's okay.
brad
Mon, 2018-12-24 11:15
Permalink
Yes, it does
Nobody else has done what Uber did. However, lots of teams have followed a variety of better procedures, with a good safety record.
Doggydogworld
Fri, 2018-12-21 12:47
Permalink
Tesla gets a free pass so far
Tesla gets a free pass so far because they've only killed their own drivers. As you say, "We are more tolerant of casualties who knowingly participated". Furthermore, Tesla customers blame these drivers for not paying attention (a line of thinking Tesla vigorously encourages). Customers feel no loss of control, remaining certain they are masters of their own destiny.
People attack Waymos in Phoenix and protest against them in the Bay Area, but nobody attacks or protests against Tesla. Why not? I think it gets back to the control issue, which Waymo confronts much more directly (and visibly).
The biggest legal problem is deep pockets. When a human driver kills or maims, recovery is generally limited to the liability insurance limit ($50K-500K). Or zero if the driver is one of the millions of uninsured. Even if the driver was reckless, or drunk, or had lost his license due to prior incidents. But the ambulance chasers around here run "Hit by a company-owned car or truck?" TV ads because they know even a mild injury can trigger a huge payday when deep pockets are involved. No self-driving car company can afford to kill 100 people in the first 100 million miles. They'd be bankrupt after the first 10. That's one reason why Waymo is uber-cautious (or anti-uber cautious, haha).
This must be solved legislatively.
Russell de silva
Fri, 2018-12-21 13:48
Permalink
Fixing bugs in neural networks
I think you are right, it's pretty clear given the assumptions that being risk averse is going to cost lives.
I think the biggest assumption, and you did mention this, is whether it is fundamentally possible.
A related assumption is that the bugs are typical software bugs, where a failure in a given scenario has a clear logical mapping to a code fault. If the fault is in a neural network then retraining is necessary. At what point does the neural network behave like whack-a-mole? Even if the NN doesn't behave like whack-a-mole, but fixes become increasingly time consuming as the NN becomes more optimised, how does that affect the analysis?
I think it would mean that road miles are less important than expected as there would simply come a point where you get saturated with problems.
On the other hand I guess throwing all new data into the training set at the same time may be the best way to fix NN bugs.
brad
Fri, 2018-12-21 16:08
Permalink
It's not whack-a-mole
Everybody does extensive regression testing. Each new retraining of the neural networks will be run over all previously tagged sensor data and through every simulator situation to ensure that the changes have not broken something else. You can't be 100% sure of course, but you know you didn't cause some problem from the past to reappear. In general, in time your test suite should become very large and you get more confidence.
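As a rough sketch of what that gate looks like (the scenario format, model interface, and pass criteria here are hypothetical and only illustrate the idea; real pipelines are far larger and run on clusters):

```python
# Hypothetical sketch of a regression gate for a retrained driving model.
# Everything here (names, formats, criteria) is assumed for illustration.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Scenario:
    name: str
    sensor_log: object        # previously tagged sensor data or a simulator setup
    expected_outcome: str     # e.g. "brake", "yield", "no_intervention"

def run_regression(model: Callable[[object], str], suite: List[Scenario]) -> List[str]:
    """Replay every archived scenario against the new model and collect failures."""
    return [s.name for s in suite if model(s.sensor_log) != s.expected_outcome]

def gate_release(model: Callable[[object], str], suite: List[Scenario]) -> bool:
    """A retrained model only ships if it passes the entire (ever-growing) suite."""
    return len(run_regression(model, suite)) == 0
```

Every new incident or simulator case gets added to the suite, so the bar each retraining has to clear only rises over time.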
Anonymous
Sat, 2018-12-22 12:24
Permalink
Teenagers
"We let [teenagers drive] because it's the only way to turn them into safer adult drivers, and because they need mobility in our world."
Mostly the latter. In fact, the former isn't true. We let teenagers drive because we respect their rights (and want to reduce the burdens on their parents), not because we're making a utilitarian decision based on driver safety. Teenagers will turn into safer adult drivers without driving at all -- simply by growing up and learning not to make so many risky, stupid decisions. Not letting people drive until they're at least 24 would greatly reduce the number of car crashes and crash fatalities. Interestingly, once robocars are ubiquitous such a rule (substituting 21 for 24) might actually be politically feasible.
brad
Sat, 2018-12-22 13:38
Permalink
Partly true
Teens become less reckless as they age, agreed. However, there is also such a thing as driving experience, and you do get better at it the more driving you do. Until about age 70 when you start getting worse.
Anonymous
Sat, 2018-12-22 19:38
Permalink
Why only partly?
You said that driving experience is "the only way to turn [teens] into safer adult drivers." I replied that it's not the *only* way.
Moreover, I think if you look at the types of serious crashes that teens get involved in, you'll find that carelessness or recklessness plays a much larger role in them than lack of driving experience.
Changing the driving age to 21 or 24 would save many lives. But it would do so by punishing non-reckless teens for the mistakes of their peers.
Mr Subjective
Sun, 2018-12-23 04:17
Permalink
Subjectivity
This point has been alluded to in the comments above, but basically, if we're looking at averages, would you say that robocars killing 99 'innocent' people (let's say, for example, law-abiding people who would not have died if they had been driving themselves, or maybe pedestrians, or maybe children), versus 100 'human-driven fatalities', would show them to be safer than human drivers, if the humans who had been driving were aggressive, serial speeders who were drunk and only killed themselves?
As I've said before, robocars are an objective technology trying to enter a subjective world, and it's going to be extremely difficult to mix the two without geofencing robocars or banning humans... and either of these, especially the latter, is going to seriously impact people's convenience and freedom.
Phillip Helbig
Thu, 2018-12-27 01:49
Permalink
priorities
"The TL;DR of the thesis is this: Given a few reasonable assumptions, from a strict standpoint of counting deaths and injuries, almost any delay to the deployment of high-safety robocars costs lots of lives."
You've mentioned the meta trolley problem: however it is resolved, thinking about it causes more deaths than even a bad resolution. Probably similar logic applies here.
Somewhat related, and important in terms of fighting pollution and so on: most auto companies are offering huge SUVs which are fully electric, or are self-driving, or whatever. Only a small fraction can afford these. Much more benefit would be gained by implementing new features in low-cost models. Yes, it might be true that whoever pays 100,000 for a car can pay 120,000, so it is logical for new technologies to be adopted top-down. But perhaps this is a case where a subsidy would make sense.
Russell de silva
Fri, 2018-12-28 09:54
Permalink
Would a subsidy speed development or uptake?
It would be interesting to explore how a subsidy could advance the adoption or development of this technology.
Google/Waymo are looking to own a robo taxi fleet with ride share. If purchasing transport as a service can replace private ownership, there will be major benefits in reducing the number of vehicles built and the size of those vehicles.
In this regard it seems quite different to electric vehicles.
Maybe city officials can speed the adoption by allowing robocars access to busways or even exemptions from congestion charges.
Michael Sullivan
Sat, 2018-12-29 07:28
Permalink
half of all drivers on the road today are below average
Clearly exactly half of all drivers on the road today are below average. That's a lot of people. If a goal is to get "bad" drivers off the road then Transportation as a Service (TaaS) will be a boon to our culture and to our economy.
Anonymous
Sat, 2018-12-29 11:08
Permalink
Median vs. average
Half of all drivers are below the median. Probably around 95% are above average, as probably about 5% of the population causes 50% of the serious crashes.
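A toy simulation shows how that kind of skew plays out (the 5%/50% split is a guess from this thread, not real crash data):

```python
# Toy illustration: if a small group causes most crashes, the "average driver"
# is worse than most drivers. The 5% / 50% split is a guess, not real data.
import random
random.seed(0)

n = 100_000
rates = []
for _ in range(n):
    if random.random() < 0.05:                 # small high-risk group...
        rates.append(random.uniform(5, 15))    # ...many crashes per million miles
    else:
        rates.append(random.uniform(0, 1))     # everyone else: few crashes

mean_rate = sum(rates) / n
safer_than_mean = sum(r < mean_rate for r in rates) / n
print(f"mean rate: {mean_rate:.2f}; drivers safer than the mean: {safer_than_mean:.0%}")
# With numbers like these, well over 90% of drivers come out safer than the average.
```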
brad
Sun, 2018-12-30 11:54
Permalink
Source
I would be interested in your source as I have not seen histograms of what fraction of drivers cause what share of accidents. I have seen accident rates by age groups and other demographics, but not that.
Anonymous
Sun, 2018-12-30 17:51
Permalink
Source would be great
I would too. But you seem to be missing the point, which is not whether the number is 95% or 90% or 80% but that the number is not 50%.
brad
Sun, 2018-12-30 19:04
Permalink
My point is
We don't know what the number is; it's just a guess. The number is both stronger than you suggest and also misleading.
My own examinations suggest that 1/3rd of drivers never have an accident reportable to insurance in their lives. I am such a person. But it would be an error to judge that they are perfect, that their driving creates no risk. I am very aware of my own imperfection as a driver in spite of my 40 years of perfection. (I have had a parking ding.)
If one were to demand that robocars drive as well as this third of drivers, how could you do it? Perfection isn't possible, nor can it be measured. Waymo has about 15 human lifetimes worth of driving, and one at-fault fender bender. Where do you put it?
You can measure the accident rate of the top 50% of drivers, and get a number, but if you measure the rate of the top 30% you get zero so you no longer can do a comparison. The real goal is to say, "If we put this car on the road, are we creating more risk than putting a human driver on the road?" Yes, you can debate what class of human driver, but only to a point.
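One statistical aside: even with zero observed accidents you can still put an approximate upper bound on a rate, via the "rule of three" -- after N event-free trials, an approximate 95% upper confidence bound on the rate is 3/N. A sketch, with the mileage as a placeholder rather than anyone's real record:

```python
# "Rule of three": with zero events observed over N independent exposures,
# an approximate 95% upper confidence bound on the true rate is 3 / N.
# The mileage below is a placeholder, not any company's actual record.
miles_driven = 10_000_000   # hypothetical fleet miles with zero at-fault crashes
upper_bound_per_mile = 3 / miles_driven
print(f"95% upper bound: at worst one crash per {1 / upper_bound_per_mile:,.0f} miles")
# That bound can be compared with the ~500,000 miles per police-reported
# accident figure for average drivers, though it remains only a bound.
```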
Anonymous
Mon, 2018-12-31 06:35
Permalink
Uber
We don't know the exact numbers, but it's quite clear looking at the causes of fatal crashes that a small percentage of people are causing a disproportionate number of fatal crashes.
Assuming you mean Waymo as a robocar, and not as ADAS, I don't know where to "put it" because I don't know how many and what type of crashes would have occurred had there not been safety drivers.
Moreover, it's not something that is one dimensional. These cars are going to be good in some situations and less good in others. I don't know specifically what sort of situations Waymo has been tested in over those "15 human lifetimes worth of driving."
I think we should ask, if we put the robocar on the road, are we creating more risk than a human driver that we would be willing to hire to do the job. How exactly to measure that, depends on where the risks are. It depends on the specifics of the software. It depends on the specifics of the operating domain.
With Uber, it was pretty obvious where the risk was. In fact, an Uber employee says he warned Uber about exactly the scenario: a jaywalker at a time when the safety driver wasn't paying attention. I think that's obviously a huge risk, and it didn't take many miles of driving with only one safety driver and no emergency braking for that risk to turn into a fatality. (And as was pointed out, it could have just as easily been a child jaywalker instead of an adult. And I'm not sure it even had to have been a jaywalker. I would bet that there are legal crossing situations that are not detected by Uber's cars. Probably not along the limited one-mile loop that they're currently running, but that goes to show how little use such a limited testing domain is.)
So I think the threshold of when it is good enough is something that each operator has to ask for itself. In terms of regulation, I favor regulations similar to the regulations of human drivers. That is, not much before-the-fact, but mostly after-the-fact punishment, including criminal punishment when your actions were grossly negligent and you caused a death.
What Uber did was criminal, in my opinion. You've suggested before that all the companies are doing it, but I think that's just because we disagree with what Uber did. I don't have any evidence that any other company is doing something as egregious as what Uber did.
brad
Mon, 2018-12-31 10:14
Permalink
Crashes without safety drivers
I am not clear on the focus on the "type of crashes that would have happened without safety drivers." The answer is, "a ton in the early days, getting less and less until it gets good enough to be considered for release." The team of course looks hard at any "simulated contact" incidents because their goal is to get rid of them. It is probably tough to get the teams to publish that, though.
Waymo's testing has mostly been in easy driving situations, with some amount of more complex situations including complex urban and snow. Cruise touts that they have done much more complex urban. This is not secret.
And yes, the risk does indeed depend on what you ask it to do, and where.
No, not all the companies are doing what Uber did. All the companies are putting vehicles out which make errors, and they put two safety drivers in to stop that from turning into crashes. Uber used only one, and she was seriously negligent at the job she was given.
After all, consider the millions of people out there today driving on cruise control, ACC and Tesla Autopilot. Those systems are less capable than even Uber's car. Their drivers correct them. There is a paradox that the better a system gets, the more temptation there is for the person monitoring it (ordinary Tesla owner or professional safety driver) to slack off and thus not catch a mistake. When it can't handle things all the time (like regular cruise control, which doesn't handle the car in front of you going slower than you, or steering) there is no option but to be vigilant. The better it gets, the less vigilance there is.
This is why the two safety drivers turn out to be important, and having them be serious professionals turns out to be important.
Anonymous
Mon, 2018-12-31 10:47
Permalink
Interestingly
While trying to look up your definition of "robocar" I came across a previous statement of yours about safety: "Robocars don't get approved for the road until they can demonstrate a safety record a fair bit superior to human drivers, and in fact safer than sober, alert drivers." Perhaps you've changed your opinion since writing that, but this is essentially the same standard I would use.
brad
Mon, 2018-12-31 11:12
Permalink
No change
Yes, I still feel it is likely that this is the sort of safety target many teams may decide upon. However, in this article here, I examine whether that is too conservative a philosophy, whether the right thing for society is that they release much sooner than that. It appears to be, apart from the risk, as I point out, to both the industry and the team of a backlash against early accidents: because the public doesn't look at the overwhelming math, it puts the focus on individual tragic stories.
Which path is right or wrong depends on both emotion and your moral code. Large companies (especially car companies) tend to be conservative. Startups (and Google) tend to be more aggressive. The car companies have become much more aggressive after realizing that they might cease to exist if they stayed conservative, but there's a limit.
Anonymous
Mon, 2018-12-31 11:53
Permalink
utilitarianism
There's no such thing as "the right thing for society," in my opinion. There is no "overwhelming math," because comparing the deaths of one group of people to another group of people is not a math problem.
Or to put it another way, utilitarianism is evil.
brad
Mon, 2018-12-31 14:26
Permalink
That gets argued
But nobody actually believes either view strictly. If harm to others is wrong regardless of what benefits might come, then you can't ever take any risk of harming others. You certainly can't drive, which is probably the thing you do that's most dangerous to others. Instead you decide that your convenience of mobility is more important than risk to your own life and the lives of others. The ends justify the means. And we all do it, all the time.
It is more complex than any simple analysis. When we decide if the ends justify the means, we usually talk about particularly evil means, and feel no ends can justify them. When it comes to small evil means, like the risk involved in driving the roads, we are much more flexible.
Anonymous
Mon, 2018-12-31 17:24
Permalink
No
What I've said above are things that I believe strictly. I didn't say that you can't ever take any risk of harming others, though.
You shouldn't take any risk of harming others that they have not voluntarily assumed, but what risks people are deemed to have voluntarily assumed by travelling on public roadways is up for debate.
The risk involved in a human driving non-negligently on public roads is not evil. Not even a small evil. At least it's not evil on the part of the driver. The government theft of private lands to create those public roads is perhaps evil.
brad
Mon, 2018-12-31 19:16
Permalink
Not evil
Well, clearly it's hard to make formal definitions of evil. And yes, society and our laws accept driving as a reasonable thing, and do not prohibit it or punish it. In fact, we're pretty lax. Even drunk driving is "allowed" in the sense that we don't demand that cars give you a breath test before you drive. (Some people after a DUI do get that requirement.) Instead, we punish you if you get caught.
We punish you if you hit something of course. We also punish you if you are caught taking particular high risks -- unsafe lane changes, speeding, rolling stops, and the various things known as reckless or careless driving. Or sometimes even a missing light.
Some people go out rested and sober on nice days. Other times people go out tired with 0.07 alcohol in a snowstorm. We punish neither but the latter is probably creating an order of magnitude more risk than the former, maybe two orders of magnitude. They are both legal, though.
What we voluntarily assume is curious. I mean, we know there are tons of people over the legal alcohol limit on the road tonight, New Year's Eve. I am going out driving. Am I voluntarily assuming the risk? What about tomorrow, when they are sober? The reality is that even Uber with all its negligence only moderately increased the risk on the streets of Tempe. Probably not as much as the drunks tonight.
Teens go out on the road, even though we know they are creating much more risk. As do brand new drivers. As do people with bald tires ready to blow and worn out brakes. In most cases we tolerate this because we're not sure how to stop it. In other cases, like the new drivers, we tolerate it as the path to training better drivers. Or we just tolerate it because we need to get around.
Many of these risks are much worse than Waymo is exposing the public to. And they're doing it with far more public benefit as the potential reward.
Anonymous
Mon, 2018-12-31 23:32
Permalink
Risk
I don't think people are willing to voluntarily assume the risk of drunk drivers. We deem that risk to be too high, so drunk driving is illegal. Whether or not people are willing to voluntarily assume the risk of unmanned robocars, or more specifically at what quality level we are willing to accept it, is yet to be determined. As I've said before, I'm willing to accept a level of risk roughly equal to a driver who obeys the law, but some may argue that this is too high or too low. What I don't accept, though, is a utilitarian argument that killing more people today is okay if it will save many more lives years in the future.
How much Uber increased the risk depends a lot on whether or not you count the actions of the driver as the actions of Uber. I think it's fair to attribute the actions of the driver as actions of Uber, though I could see how one could argue otherwise.
brad
Tue, 2019-01-01 12:47
Permalink
Killing more people
Obviously nobody wants to kill anybody. But we accept that driving comes with that risk.
So why do we allow new drivers on the road with almost no training and a close to useless test? They are certainly riskier than experienced drivers.
And so the utilitarian argument holds no sway when the numbers are "cost 10 lives and save 100,000"? Usually people only take that view when the 10 lives are deliberately killed, i.e. murder an innocent person to cure a disease and save many more. That view is taken because we think murder, for any reason, is a terribly immoral act. But we obviously don't think being slightly riskier on the highway is anything remotely like that.
The obvious analog as I point out is mandatory vaccination. 1 in 10,000 will suffer, but far more will be protected.
Anonymous
Tue, 2019-01-01 22:27
Permalink
Utilitarianism again
The utilitarian argument holds no sway for me ever. Maybe that's why I don't support mandatory vaccination. If you could secretly vaccinate a million people without their consent, and it would kill 10 people but save 10,000 lives, that would be wrong. The key is consent.
If a robocar can drive as well as a new driver who follows the law, I think that is acceptable. I would consent to that. What you seem to be presenting is something different though. You seem to be presenting a scenario where we have robocars that are significantly less safe than that, for the purpose of helping robocar companies (for-profit corporations, incidentally) come out with robocars sooner. I wouldn't consent to that.
Yes, public roads are communal property, so maybe I'll get outvoted. But my vote is that robocars should not be allowed until they're safer than humans.
brad
Wed, 2019-01-02 11:05
Permalink
Significantly less safe
As I write in the article, we fortunately don't have to worry about the scenario of "significantly less safe." Waymo has demonstrated that it's possible with good safety driver procedures to match human safety levels, with only one minor accident and no injuries due to machine error. In fact, Uber's fatality is not due to machine error; it's completely and entirely due to serious human error, on the part of the safety driver and those who hired her and trained her. That's because the machine errors are expected and frequent, and the safety driver's job is to correct them. When I use my adaptive cruise control, its errors are expected and frequent, and it's my job to correct them. If I don't, it's not the fault of the ACC designer.
The question of consent is interesting. One could make the argument that when using the roads, you consent to the risk. You certainly expose yourself to it knowingly. That does not mean you consent to any specific tort done to you. Because our roads are public spaces, we have a whole different system of expressing this. As you know, if you ever engage in risky private activity like skiing or hang gliding or boating on somebody else's land or vehicle, you almost always have to agree to a little waiver which actually consents to much more than the risk; it usually waives all sorts of things it should not, like negligence on their part, etc.
Society handles these situations in lots of ways, and they generally involve a mix of utilitarian and other moral codes, but mostly utilitarian. Almost all liability for torts in society comes down to money. We really do put prices on lives. In doing so, we make the corporations that expose us to risk do the almost purely utilitarian analysis using those numbers. Except when we don't, like the McDonald's coffee case. We don't have one answer.
Anonymous
Thu, 2019-01-03 05:35
Permalink
Yes
Yes, I think we do, as a society, consent to the risk level of a human driver, at least if that human driver carries a certain level of insurance. It's unfortunate that we have to make the decision communally, but that's the nature of public roads. So long as all the robocar companies follow the conservative lead of Waymo I think we'll be okay. It's only if companies get more aggressive under the guise that it's okay to kill more people today to save more people's lives in a few years that I am concerned.
Uber's fatality was due to a lot of human errors, but the biggest three were: the choice to cross the road illegally, the choice of the safety driver not to pay attention to the road for an extended period of time, and the choice of the Uber executive to disable emergency braking. All of those choices were, I believe, but-for causes of the woman's death, and all were, in my opinion, proximate causes of the death, because it was foreseeable beforehand that each of them had a very high risk of causing a serious car crash. Uber employees were responsible for 2 out of those 3 human decisions. The victim was responsible for the 3rd one. That is fortunate for Uber, as if the victim had been legally crossing or been a child, the payout probably would have been greater. They still probably would have gotten away with no criminal charges and have been back on the roads again, though, which is a travesty.
brad
Thu, 2019-01-03 10:52
Permalink
Disabling
Yes, the three causes were the improper crossing (which bears half the legal responsibility), the safety driver (and the procedures under which she was selected and trained), and the technology failure. However, it is important to understand that they did not "disable" emergency braking. They did not have workable emergency braking, so could not use it. Today, you can buy cars which have emergency braking and cars which don't. I own one car that has it and one that doesn't. If I drive the one that doesn't and hit somebody, would you blame the death on my failure to drive my newer car? Or what about a person with one car who decided in the showroom whether to add AEB as an option or not? Are they culpable for not buying it if they hit somebody? (They are to blame for hitting the person, presumably, of course; the question is whether their showroom decision was an immoral act.)
And, in the future, when you can choose to drive yourself or ride in robo-Lyft, and you have an accident while driving yourself, will we punish you extra? And if robo-Lyft takes 10 years longer to get here than it could have because of conservative approaches, who will we blame?
FKA Anonymous
Thu, 2019-01-03 16:17
Permalink
Disabling
You've repeated many times that Uber did not "disable" emergency braking. However, you have presented zero evidence of this, and all the evidence leads to the conclusion that Uber had emergency braking in place that would have prevented the crash, but that they turned it off. Multiple news stories have reported that Uber disabled emergency braking. That's what they did. Doing so was a but-for cause of the crash. It was, in my opinion, the human decision out of the three that was most foreseeable to cause a fatality.
You say they did not have workable emergency braking. That is wrong. They had workable emergency braking. It didn't produce a "smooth" enough ride, so they turned it off. They traded a smoother ride for a woman's life. They chose to fail-deadly instead of fail-safe. It should have been obvious that they were making the wrong decision, but they did it anyway. Maybe they made a utilitarian argument for it: Increase the risk of killing someone now for the sake of not getting the project shut down by the CEO, thereby saving more lives in the future. They guessed wrong, as they killed someone before the demo even could take place.
"Today, you can buy cars which have emergency braking and cars which don't. I own one car that has it and one that doesn't. If I drive the one that doesn't and hit somebody, would you blame the death on my failure to drive my newer car? Or what about a person with one car who decided in the showroom whether to add AEB as an option or not? Are they culpable for not buying it if they hit somebody?"
Possibly. Have you ever heard of the Learned Hand Formula? If not, look it up. In short, whether or not you are culpable for not having a safety feature is a factor of how expensive it would be to install the safety feature and how likely the safety feature is to save a life.
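For reference, the Learned Hand formula says a precaution was negligently omitted when its burden B is less than the probability P of the harm times the magnitude L of the loss it would have prevented. A tiny sketch with made-up numbers:

```python
# Learned Hand formula: negligence when B < P * L.
#   B = burden (cost) of the precaution
#   P = probability of the harm without it
#   L = magnitude of the loss
# The numbers below are made up purely to illustrate the comparison.
def negligent(burden: float, probability: float, loss: float) -> bool:
    return burden < probability * loss

# A cheap precaution against a small-but-real chance of a catastrophic loss:
print(negligent(burden=50_000, probability=0.01, loss=10_000_000))  # True: 50k < 100k
```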
In this case, Uber *already had an emergency braking system in place*. They turned it off in an effort to trick their CEO into thinking that the car was performing better than it actually was.
> And, in the future, when you can choose to drive yourself or ride in robo-Lyft, and you have an accident if you drive yourself, we will punish you extra?
At some point in the future I expect that driving yourself will be illegal. But that's probably a *long* way in the future, as driving yourself is already extremely unlikely to kill someone, so long as you follow the rules of the road.
Uber's car, on the other hand, was extremely likely to kill someone. This is obvious in hindsight, but according to reports there were Uber employees who already knew that before the deadly crash happened.
brad
Thu, 2019-01-03 19:24
Permalink
Disable
The built-in Volvo emergency braking was, like all Volvo ADAS, disabled. Everybody does that.
While we don't have the final report, what we do know is that Uber felt their system triggered too many false positives to be usable. You could say they 'turned it off' but the reality as far as we know is that it was not yet safe to deploy it, so it was turned off. If you turn off something that never worked, are you disabling it? Or just deciding it never worked and making sure it won't cause a problem?
So the key question is, did Uber in fact turn off a working braking system for a demo with the CEO, or did they never have a working braking system in the first place? We'll probably learn the answer in the NTSB report. The leaks and rumours do offer some hints.
FKA Anonymous
Thu, 2019-01-03 20:32
Permalink
Doesn't matter, really
We do not know that Uber felt their system triggered too many false positives to be usable. What we know is that they disabled emergency braking. We don't know why, though the evidence suggests that they did it for reasons other than safety.
Either way, it really doesn't matter. If your car is so bad that it's safer to disable hard braking than it is to keep it on, you shouldn't be putting it on the road in autonomous mode anyway.