Tesla sued over fatality and will probably prevail, but other issues are exposed.

NTSB diagram of horrific autopilot crash

Tesla was just sued by the family of the driver who was killed at 85&101 while using Autopilot last year. Their lawsuit isn't that well laid out, but it does touch on some other interesting issues, such as whether making driver assist too good causes complacency, even among the informed, and whether cruise control should speed up when you change lanes.

Read about those issues in my new Forbes.com article:

Tesla lawsuit may not win, but it uncovers real issues

Comments

Here's the issue: Beta-testing on innocent and unaware drivers and passengers.

In meteorology, we can create a completely off-line system that doesn't affect any users for alpha and beta testing. Putting beta testers of Tesla and other vehicles' automated systems on the roads puts uninvolved people in jeopardy. That is immoral and ought to be illegal. Clearly, these risks could be mitigated if the testing were done exclusively at night, or on Sundays, or on less congested roads. But California during the day?? It needs to stop.

The very definition of beta testing is that it is done with users, generally aware users. It is not possible to beta test an offline system that doesn't affect any users.

There are two questions. First, are the users adequately informed about what they are testing? I think Tesla can establish that they are. Though there is the classic concern that people don't read contracts and warnings and just click "I agree" -- that is a problem everywhere.

The more interesting question is, "Is it impossible to truly inform the users?" As in, are some, or all users, incapable of understanding and correctly following the procedures to beta test, and if so, what should be done about that?

The next question is, "If some users will use it properly, and some will not, how should that be handled?"

Note that because Autopilot is constantly being upgraded, Tesla refers to it as a beta test. Silicon Valley now has a pattern of the "never-ending beta." However, it could also be deployed as a non-beta driver assist product.

Note that a driver assist product is not intended to handle all situations. It will veer off the road from time to time. It will fail to notice obstacles and not brake for them some of the time. That is how it is designed; it is only designed to handle some of the driving task. It is not defective because it only handles some situations. At least that's how it works today.

So the question is not even really about the beta. With a beta, it's always clear to the beta tester that they are trying a probably buggy product. Driver assist tools are not "buggy" when they fail to notice a crash barrier in front of them and don't stop for it, because doing so is not part of their design. A bug or flaw occurs when the product doesn't do what it's designed and advertised to do, not when it doesn't do everything.

However, the fact that safety and lives are involved causes concern about the rules above.

Interesting article. The question of safety and the greater good will be the source of many arguments. It is a difficult one to solve if relying on public perception and opinion. Every fatality becomes personalized and newsworthy which is perfectly natural, but can also create distortions at odds with colder logic.

It has some parallels with how medical science makes decisions for the greater good. It too must deal with some tragic side effects of new medications and techniques. Their ideal goal is to explain the pros and cons honestly and then ask the public to put their trust in the decisions of experts. It doesn't always work, as the anti-vaccination movement shows. But in the end, good communication (including faster communication, in the case of the NTSB), openness and honesty will be critical.

Note: small typo in the article, "This has indeed been a subject of public debate over might might be called the paradox of driver assist"

No doubt Tesla will be found at least partially liable, and little doubt that they've already tried to settle. The real question is how much the damages will be, and with the decedent being an Apple employee with a spouse and two children, the damages are going to be very high.

First of all, even if they did lose, as you say, they would only be partially liable. The role Huang played in his own death, including using autopilot in an area where he knew it was unreliable, and not turning his car away from the barrier, either because he had his hands on the wheel and didn't look, or panicked, or because he did not have his hands on the wheel -- that share is going to be fairly high.

But I can't say Tesla will be found liable. To be found liable, you generally need one of the following:

  • The product was defective, and acted outside of its design constraints
  • The design was defective and dangerous
  • There was a failure to warn about these flaws

I don't see any failure to warn. It is well understood that the state of the art in lanekeeping and AEB is not the perfection the lawsuit pretends it is, or even close to it.

The main claim they could try is that the design itself is flawed, i.e., the decision to deploy a non-perfect autopilot. That's untested territory, but remember that in all of Tesla's materials, autopilot is described as ACC+AEB+lanekeep. ACC, AEB and lanekeeping are well-established products, and AEB is effectively required by Euro NCAP and soon will be required by NHTSA. Courts aren't going to rule that the standard design of AEB is defective.

The main shot they have is saying that combining the three well-understood products is a defective idea. Far from a sure shot.

I think the claim that Huang shouldn't have used autopilot in an area where he knew it was unreliable is going to backfire. Tesla shouldn't have enabled autopilot in an area where they knew it was unreliable. And unlike Huang, Tesla actually knew what was in the updates they had released after Huang initially reported the problem, and Tesla actually had the technical expertise to know whether the car was reliable enough to use autopilot in that area.

There are obviously a lot of facts that still have not been released, so it's hard to draw definitive conclusions, but I can't see any circumstance under which it wasn't a flaw in the lanekeeping system (either the design or the product) that caused the crash. If Huang's hands weren't detected on the wheel, he wasn't the one who caused the car to leave its lane.

That's their best argument: Defective design and/or product. They could win on some of the other arguments, depending on the facts and the jury, but defective design and/or product seems pretty clear just from the information we already know.

That said, Huang was probably at fault also. We'll have to see what facts come out, but he probably looked away from the road briefly, and that period coincided with the moment the car decided to veer out of the lane and straight into the safety barrier. I'm sure some experts will calculate how long it takes, but probably not more than a second or two. I'm not sure you can show that he looked away long enough to definitively establish comparative fault. And unless one of the witnesses testifies about some distraction that happened at the time of the crash, we can only speculate about why he looked away. There are lots of reasons that people look away from the roadway for a few seconds. As long as you have an adequate following distance, and your car doesn't decide to veer out of the lane on its own, it's a fairly safe thing to do.

By the way, in the article you say that Tesla's new crash avoidance system sounds like it'll be a good feature for all but those who drive carelessly. I don't know. What happens when you try to swerve into the car next to you in order to avoid crashing into the little old lady crossing the street, and the car won't let you? That's going to be a nasty lawsuit against them if something like that happens.

Tesla has declared that as long as Autopilot can get a basic grasp of the road, it operates, and it is up to the driver to make the final decision on whether it is suitable to use it on that road. Up to this point, that's how every car feature works. Car vendors have had speed limit readers and geo-databases of speed limits for many years now, but none limit the car or its cruise control to that speed. It is left to the driver. The car company is not considered liable for letting the owner set the cruise control to 20mph above the limit. (Tesla limits Autopilot to 60mph on lower classes of roads, regardless of the speed limit.)
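
To illustrate the kind of vendor-side cap described above, here is a minimal sketch in Python; the road-class labels are invented, the 60mph figure is just taken from the paragraph above, and this is not Tesla's actual logic:

```python
# Illustrative sketch only -- not Tesla's code. It contrasts a cap by road class
# (as described above) with limiting the driver to the posted speed limit,
# which vendors generally do not do.

FREEWAY = "freeway"
LOWER_CLASS = "lower_class"  # undivided / non-freeway roads

def allowed_set_speed(requested_mph: float, road_class: str) -> float:
    """Return the cruise set speed the system will actually accept."""
    if road_class == LOWER_CLASS:
        # Hard cap regardless of the posted limit, per the policy described above.
        return min(requested_mph, 60.0)
    # On freeways, the driver's request is honored, even above the posted limit.
    return requested_mph

print(allowed_set_speed(75, LOWER_CLASS))  # 60.0
print(allowed_set_speed(75, FREEWAY))      # 75.0
```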

So, why was the lanekeeper defective? It's a 2018-level, computer-vision-based lane-mark detector. We know those are not perfect. We know they miss lines. The Tesla manual says they do. So how is it defective when it misinterpreted some worn-out lane markers? Does a lanekeeper need to be perfect before it can be sold?

I don't know what he did. He might have panicked. My impression is he looks like somebody who keeps his hands on the wheel, though maybe not. I know people who don't.

The new Tesla system (I could not get it to activate) only activates when you are not turning the wheel. If you swerve by turning the wheel, it will obey you, as far as I know.

The lanekeeper was defective because a reasonable consumer would find a lanekeeper that leaves the lane and accelerates into a safety barrier defective. The lanekeeper "did not perform as safely as an ordinary consumer would have expected it to perform when used or misused in an intended or reasonably foreseeable way." https://www.justia.com/trials-litigation/docs/caci/1200/1203/

A lanekeeper does not need to be perfect before it can be sold, but a manufacturer is liable for damage caused by defects in the lanekeeper.

You don't know what he did. Neither do I. If the jury finds that the product or design was defective, it'll be up to Tesla to convince the jury that Huang probably did something wrong, in order to prove his comparative fault. They probably will succeed there. Whether it'll be 50/50 or 80/20 or 20/80, it's hard to say. Depends on the facts. Depends on the jury. (And actually, there may very well be three entities at fault. The State of California is also named in the lawsuit.)

There are two features that were recently activated. Lane Departure Avoidance and Emergency Lane Departure Avoidance. Lane Departure Avoidance only kicks in when the person's hands are off the steering wheel. I believe that Emergency Lane Departure Avoidance kicks in even when the person's hands are on the steering wheel (interestingly, the person's hands are on the steering wheel in the picture on the page describing it, but that might just be misleading). https://www.tesla.com/blog/more-advanced-safety-tesla-owners This would make sense, and would prevent a lot of crashes, but if it isn't coded perfectly it might cause other crashes, or it might make the wrong decision in a case where a crash is unavoidable.

Most early lanekeepers -- I have not tracked every model -- would not keep you in the lane if the turning radius got too short, for example. Were those vendors liable if you went around a corner and it didn't keep you in the lane? It was not expected to. All lanekeepers fail if the lane markers are mostly worn away (as was the case in this accident). Is that a defect?

Strictly, LDA/ELD only kick in when you aren't applying torque to the wheel. Tesla does not have any way to determine whether your hands are on the wheel other than detecting that you are applying force to it. They do have the camera but don't use it.

It's a much safer feature. It only does something when the wheel is not being used by the driver and the car is about to hit something or go off the road. Of course, it could also suddenly decide you are about to go off the road when you are not.
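
To put that trigger condition in concrete terms, here is a minimal sketch; the signal names and the torque threshold are invented for illustration, and this is not Tesla's implementation:

```python
# Illustrative sketch of the trigger described above -- not Tesla's code.
# Signal names and the torque threshold are invented.

TORQUE_THRESHOLD_NM = 0.5  # below this, the system treats the driver as not steering

def should_intervene(driver_torque_nm: float,
                     departing_lane: bool,
                     collision_ahead: bool) -> bool:
    """Act only when the driver is not applying torque AND a hazard is imminent."""
    driver_steering = abs(driver_torque_nm) >= TORQUE_THRESHOLD_NM
    return (not driver_steering) and (departing_lane or collision_ahead)

print(should_intervene(0.0, True, False))  # True: no torque, car leaving the lane
print(should_intervene(1.2, True, False))  # False: driver is steering, defer to them
```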

Autopilot is ADAS. The driver is still responsible. That's not new. If Tesla loses this lawsuit, it could effectively forbid all ADAS that could make a mistake, which is a problem since NHTSA is about to mandate AEB in new cars.

I don't think they can win, but you can't be certain, so they probably get a settlement, which is probably what they are after. Indeed, if Tesla is smart, they have budgeted for settlements for accidents with Autopilot involved.

No one is going to forbid anything. Tesla is being asked to pay for its mistakes. They have $2 billion in cash right now and are in the process of raising billions more. I'm not sure if settlements are in the budget, but this one, even if everything goes wrong for them, is not going to be much more than a rounding error, if that. As I said before, it's almost certain that they already tried to settle, and probably offered a significant settlement. But this guy's life was probably worth a lot (according to the calculations that will no doubt be used to calculate the "value" of his life), so his family can get a good lawyer to take the case on contingency.

We can play with other hypotheticals, but when your car suddenly swerves out of a straight lane and then accelerates into a safety barrier, I'd say that's clearly a defect. Your hypothetical is less well defined, and probably would be more of a tough call even if it were as clearly defined as what we know about this case.

Probably Tesla's lawyers will try to distract from the clear deviation of the car from safe operation, by trying to separate the system into parts and make analogies to other cars with some of those features (the alternative would be to admit the defect and focus on comparative fault). But the plaintiff's lawyers will keep the jury focused on the whole picture. And I think the jury will get it. The plaintiff will keep the kinds of techies who tend to miss the forest for the trees off the jury. (Hopefully this accident helped Tesla move away from such a deconstructive, and defective, design. Lanekeeping as an isolated feature that only knows about lines, is, and always has been, stupid. Tesla has already matured a lot since then, and hopefully they will continue to do so. If nothing else they can afford to give this guy's family a nice healthy paycheck for his contribution to beta-testing the system. Of course they can't call it that. But c'mon, cut the family a check. I'm sure they've already offered one. The family probably is asking for an unreasonable amount of damages.)

I don't think Tesla's system would currently make this mistake. So your comment that all lanekeepers fail in this situation is incorrect. Maybe all of them did at the time. But that's not the test in California. I linked to the jury instructions for the test in California. It also explains a little bit about the affirmative defenses. In this case, I think Tesla will be successful in finding comparative fault, but only as a partial defense. What portion, and what the damages are, is part of why this hasn't been settled yet. Now that I think about it, this being a three-party case is probably also complicating things. Because it seems that California may very well also have been partly at fault.

I didn't say all lanekeepers fail on these particular lines. Rather, I said that all lanekeepers have situations where they can't see the lines or misread them.

But the question is this. If the lanekeeper is sold as "This will keep you in your lane as long as the lane is well defined, but let us warn you up front, there will be times and places where it makes a mistake and will swerve you out of your lane" -- then does it have a defect if it does that?

The answer is maybe. What the product is "sold as" is highly relevant in a lawsuit about an implied warranty, and possibly in a case about a failure to warn, but it's not conclusive for determination of whether or not there's a design defect.

What counts as "sold as," anyway? Just something buried in the fine print, somewhere?

Do you agree the court will use the consumer expectations test in this case? Or do you think they'll use the risk-benefit test?

The plaintiffs will try whatever arguments they can think of. The complaint mainly tries to claim the product is defective; it barely touches on failure to warn (but does mention it, and does mention consumer expectations).

The interesting question is whether the court would blame Tesla for the hype. Tesla is clear on what autopilot is in all their marketing materials, all their press statements, the manuals, and the agreements you have to click to turn on autopilot. But there is hype about it being more, and also people who just presume it's more.

The two tests are the two possible ways to prove the product design is defective under California law. I'd say it was defective under either standard. CET, because people don't expect their car to do this, and risk-benefit, because if the car had checked a simple map showing the number of lanes, it could have realized this was not a lane (among other things; as the plaintiff points out, fixes were made after the crash, so it's hard to argue that fixes were impossible). Tesla will no doubt try to argue that the CET can't be used, and they might win on that point. I'm not familiar enough with CA law to say.
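
To make the map-check point concrete, here is the kind of cheap sanity check I have in mind (a hypothetical sketch; the map lookup, road identifier, and lane counts are invented, and this is not a description of any shipping system):

```python
# Hypothetical sketch of a map-based sanity check on a vision lanekeeper.
# The map data and road identifier are invented; no claim about Tesla's system.

def accept_detected_lane(lane_map: dict, road_id: str, detected_lane_index: int) -> bool:
    """Reject a camera-detected 'lane' (e.g., a gore area) the map says isn't there."""
    mapped_lane_count = lane_map.get(road_id, 0)
    return detected_lane_index < mapped_lane_count

lane_map = {"US-101-S@CA-85": 5}  # the map says five lanes at this location
print(accept_detected_lane(lane_map, "US-101-S@CA-85", 3))  # True: a real lane
print(accept_detected_lane(lane_map, "US-101-S@CA-85", 5))  # False: not a real lane
```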

The hype is relevant in that it changes consumer expectations. But I think this will be a relatively easy case if the plaintiffs can use the consumer expectations test.

"I didn't expect my car to do this"

"It says in every manual, everything published by Tesla that it does this. When you turned it on it asked you to agree that you knew it does this. The customer says he came into our store to complain about how he doesn't like that it does this at that very off ramp. In every press communication we remind people it does this."

What more could Tesla do? Or is it impossible to sell a product of this form and adequately warn?

Warnings don't eliminate the duty to design a product in a way that protects consumers, to the extent feasible, from foreseeable misuse.

This is why most car manufacturers that are doing these sorts of things are using eye-tracking and similar techniques. It's not because consumers demand to be treated like children who can't follow directions. It's because the laws of plaintiff-friendly states essentially require it.

With that said, there's a lot that Tesla could do. Not calling the thing "autopilot," for instance. While not relevant to this case, they could also stop calling the other thing "full self-driving" and describing it as "automatic driving from highway on-ramp to off-ramp including interchanges and overtaking slower cars."

And really, what they should do, is set aside money for lawsuits, and be willing to pay for mistakes like this one. The crash that killed Huang was preventable. Tesla screwed up. They essentially admitted it when they released updates in response to the crash (though the jury likely won't be allowed to hear about that).

I'm fairly sure they did offer to settle this case. Chances are the family is asking for too much, and that's why it wasn't yet settled. It still probably will be settled at some point.

It's not a problem at all if they pay for specific mistakes, of course. The issue is whether you can make driver assist that is explicitly not going to handle certain things.

You can't make driver assist that is explicitly not going to handle certain things, when those things are foreseeable, could be handled inexpensively, and cause injuries when not handled, without being liable for those injuries.

Maybe tort law is flawed in that way. Some people believe that if an adult misuses a product in a way that violates the operating instructions, that should be an absolute bar to suing the manufacturer. However, that's not what the law is.

Tesla's lawyers have, I am sure, studied how their marketing materials and user agreement and warnings are written. Extensively. But there may be a limit on what they can do.

I think the good thing for Tesla, though, is that Huang knew Autopilot could not handle that lane, complained about it, and then used it there anyway. That's going to hurt the family's case.

I think Tesla can legitimately ask, 'what more could we have done, other than the impossible task of making the product perfect or vastly more expensive?' (You are allowed to say the expensive part.)

They could have added LIDAR, of course, but that's too expensive, especially on a 2017 Model X. They could have had the maps, but they weren't there yet.

The fact that Huang knew about the problem (although he probably thought it had been fixed) will no doubt hurt the plaintiff's case. It is an argument for comparative fault. It doesn't make the case that there is a design defect go away, though.

The suggestions of Tesla's lawyers no doubt are overridden frequently. Musk is a loose cannon. And yes, there's only so much you can do. Marketing materials that say a tire explodes if driven above 70 miles per hour aren't going to prevent liability for damages if someone is injured when one explodes at 71 mph. Not in plaintiff-friendly states, anyway.

What has Tesla done since the crash? A lot. And the product has only gotten less expensive. I have a hard time seeing how you are serious in asking this question. You outlined several things they could do yourself.

I'm not sure if this will be relevant in court, but the fact that Huang reported the problem cuts both ways. They could have, and should have, either fixed it then or disabled autopilot in that stretch of road. That probably will be relevant, if not for the products liability claim, then for the negligence claim.

Yes, there are many things they could have fixed, but there are also many flaws today, and many ways to fix them; in fact they fix more of them every month, or even more often, and have done so since they fixed the car. There is a conflict between a doctrine of "you are punished if you have a problem you could have fixed" and the common and normal flow of software products, which are in a state of constant improvement. The "could have fixed it" doctrine doesn't allow for the continuous improvement process. You know about bugs, you are working to fix them, and you have different schedules for fixing them.

Of course, courts could rule that continuous improvement is to be punished. It would be wrong and make the world less safe.

Strict liability is not punishment. It's just liability. You don't have to be perfect. You can make mistakes. But you're liable for them.

You may or may not be able to buy insurance to cover that liability. But you can certainly self-insure, and include the cost of the self-insurance in the price of the vehicle. If going with that self-insurance option, it might make sense to charge for autopilot (and FSD, even more so) by the mile. I wouldn't be at all surprised if Tesla and others go that route.

Another alternative would be for legislators (not courts) to change the law. That might make sense. Then wealthier families can buy insurance policies with higher limits, and poorer families can go underinsured. Keep in mind, though, that a lot of these cases are going to be brought by someone other than the driver.

Yes, if all Tesla has to pay is basic strict liability, they should be fine. They have only had a few fatalities, and they can handle that. One presumes that they offered a settlement of some sort, but whatever they offered was not accepted. Or perhaps they want to take a hard line, and say it is not defective if they always said it would sometimes crash if you didn't intervene -- and they have said that.

They can tolerate it more because they've had so few fatal crashes. If somebody made a cruise control and said, "This just goes at the speed you set. If the car in front is going slower, it will hit it unless you take control," and then somebody sued calling it defective for doing that, a company would need to go to the wall to defend that.

If the car had just proceeded forward at a constant speed (like it would have with regular cruise control), there wouldn't be much of a case. On that point, I think the cases where a Tesla crashed into a stationary object in the road, like a trailer or a fire truck, are more like your regular cruise control hypothetical.

That said, eventually advanced collision avoidance will be so ubiquitous that all new cars will have to have it. That day is not today, and it's probably quite far in the future.

Another possible reason Tesla might be fighting this case is to try to set favorable precedents for the future. This is probably a good case for that, as the only person injured was the driver, and he was apparently not acting as carefully as he should have been. The argument that a continuous improvement process should be treated differently than your run-of-the-mill products liability case is mostly an argument to be made to legislatures, but there might be an argument that autopilot is not a product but rather a service contract separate from the product of the car. The details of California law there are way beyond my knowledge, both as to whether that's a viable argument and as to whether it would be favorable to Tesla. But if there are arguments that can be made, this might be a good case to make them, as the damages are likely to be very high anyway, and the behavior of Tesla was not egregious (although the fact that they specifically knew about the problem with that particular stretch of road is not a great fact for them).
