An alleged Tesla Autopilot failure makes you wonder how they train it


Another Tesla crash, allegedly on Autopilot, teaches us something about how well (or poorly) Tesla is doing with its claimed ability to use its fleet of cars to quickly learn to identify unusual obstacles and situations. Here, a Tesla on Autopilot crashed into a tow truck sticking out into the right lane, injuring the Tesla driver. The driver says it was on Autopilot but that he was distracted for a few seconds. That makes it the driver's fault, but why did the Tesla, whose Autopilot is supposedly just months from turning into "feature complete full self driving," miss this sort of thing, when it has happened before and Tesla has great tools for understanding things that have happened before? New Forbes article in comment #1.

Plus, I look into new details about an old Tesla accident, where it seems the driver was completely abusing Autopilot, treating it like self-driving and just wiggling the wheel every few minutes to keep Autopilot engaged. But he missed a wiggle, and Autopilot was off when he probably thought he had kept it on.

Read some analysis in the Forbes article, "Alleged Tesla Autopilot failure raises questions on how they train it."


The video, which was not mentioned in this article, showed the Tesla's brakes engaging just before it hit the tow truck. This likely reduced the impact; it could have been a lot worse. The Tesla also would likely have gone right if not for the vehicle to its right, which it hit after hitting the tow truck. The problem for the Tesla, as I see it, was that the vehicle behind the tow truck blocked the line of sight to a degree until the Tesla came up on the truck. For not paying attention, the dad was lucky he was in a Tesla that partially responded and protected his children. The subsequent fire was not a great thing, but there was enough time for everyone to escape before it took hold.

The Tesla has an independent collision warning and collision braking system that is always on, not just in autopilot. This is probably what engaged the brakes.

However, that truck had been there a while, and all the other cars were navigating easily around it; traffic was not even slowed very much. To be a full self-drive product, Tesla's system needs to be able to match that ability: to perceive the problem well before reaching it (by noticing that other vehicles are braking and moving right for something, among other cues) and be ready. Humans in this situation will slow, and they will move right, crowding the car in the lane to the right, which will make that car move right too.

It's not an Autopilot failure. It's a failure of "Autopilot, the thing whose next release is supposed to have some level of full self driving."

This crash is a good example of why it's a mistake to judge ADAS as though it's unmonitored self-driving. The types of evasive maneuvers you take in the latter situation are not equivalent to the types of responses you take in the former situation.

There's not enough information that I can find to determine what, if anything, the Tesla did wrong here. Yes, it probably could have squeezed in between the truck and the car in the lane to its right, but I'm not sure that would be the right thing for an ADAS to do, as opposed to an unmonitored self-driving system. The more likely mistake, if it was in fact on Autopilot, was not slowing down or changing lanes well before reaching the truck, but without the forward-facing camera view it's hard to say whether the view was obstructed. Also, Autopilot is, probably correctly, tuned not to slow down too much in potentially hazardous situations (as opposed to imminent crashes), because the assumption is that the driver is a better judge in those situations. Except for evasive maneuvers in the case of an imminent crash, standard Autopilot doesn't make automatic lane changes at all. A vehicle in unattended self-driving mode would have full control over speed and lane decisions.

The human behaviour I describe in the comment above, attempting to move right and claim more space on the right, is not something an ADAS product would do. However, slowing in any situation with a significant risk of high-speed impact, or at least issuing an audible warning, is the right thing for an ADAS tool.

I'm not sure what situations you would slow in. It'd have to be limited to situations where the car could be sure that the driver either wasn't paying attention or had made a mistake, or else it'd be really annoying.

Ditto for an audible warning. It's arguably even worse to have an audible warning go off in too many unnecessary situations, as that will have a "boy who cried wolf" effect.

My Tesla's forward collision warning goes off frequently for semi-false alarms, and you can tune that in the car's settings.

However, naturally you want to catch as many actual impacts as you can. That's why you try to make the systems better using the techniques Tesla has promoted that it uses.

The main excuse here is that Russian tow trucks don't look like ones in the USA. But that is only an excuse to the extent that you are ready to have it hit any new brand of tow truck.

Who is using that excuse?

Without the forward camera view I'm not sure how we can even say that a human would have seen and recognized the tow truck before the Tesla did.

Which mode it was in would also be relevant, and we don't know that either, though we know definitively that it wasn't in fully autonomous mode.

Why does it matter that it was a tow truck?

The actual impact was caught. It just was caught after it was already too late to avoid a crash.

To catch it before it's too late, you have to balance that against warning people of things they already see. Another question is what setting this car was tuned to for that, although the first question is whether a warning went off at all, and when.
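That balance can be made concrete with a toy sketch. Everything here is my own invention for illustration (the event list, the thresholds, and the idea of keying the warning to time-to-contact), not Tesla's actual FCW logic:

```python
# Toy model: warn when estimated time-to-contact (TTC) drops below a
# threshold. A higher threshold warns earlier and catches more genuine
# hazards, but also fires on benign events the driver already sees.

events = [
    # (ttc_seconds, is_real_threat) -- invented sample data
    (0.8, True), (1.5, True), (2.5, True),     # genuine hazards
    (1.9, False), (2.2, False), (3.5, False),  # benign near-misses
]

def warning_stats(threshold: float):
    """Return (hazards caught, false alarms) for a given TTC threshold."""
    warned = [(ttc, real) for ttc, real in events if ttc < threshold]
    caught = sum(1 for _, real in warned if real)
    false_alarms = sum(1 for _, real in warned if not real)
    return caught, false_alarms

print(warning_stats(1.0))  # (1, 0): quiet setting, but misses two hazards
print(warning_stats(3.0))  # (3, 2): early setting catches all three, but nags
```

On these toy numbers no threshold catches every hazard with zero false alarms, which is exactly the "boy who cried wolf" tension described above.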

There is a fair bit of traffic, though not enough to be stop-and-go. That means every other human driver who passed these stalled vehicles recognized and handled the situation.

Tesla FCW mostly warns me about things I already see.

It matters that it was a Russian tow truck, in that there is the suggestion that it doesn't much look like the tow trucks Tesla will have used in training its models in the USA, Asia and Europe.

We don't see on the video how many other cars handled the situation, nor how. A last-minute swerve into the lane to the right, maybe, which the Tesla could have done, and maybe without causing a crash with one of the cars in that lane, but I'm not sure you'd want to program an ADAS to make evasive maneuvers like that when there is traffic in the lane to the right. ADAS is not autonomous driving. There's no requirement to handle these sorts of situations, and imperfectly trying to do so would quite possibly create more liability than doing nothing. Braking hard and staying straight is a fairly safe response from a liability standpoint. Liability-wise, it's likely the tow truck driver would be found 100% liable for this crash (at least under US laws). If the Tesla had swerved and caused a crash, maybe not.

Why does it matter whether or not the type of tow truck was used in training models? Does Tesla's vision system not make a 3D map of obstacles in the road regardless of whether or not its model has been trained on them? Training on types of vehicles is useful to predict behavior, but shouldn't be necessary just to avoid crashing into stationary objects.

They certainly have enough data to make a 3D map of the limited area they will be travelling through. Maybe not enough processing power with the old hardware? Or maybe this is just not something they do at all?

The aspect I'd be more interested in is what could have been done, long before the Tesla hits its brakes, to recognize the situation. In particular, when you see cars ahead of you swerving, you should slow down (and, if safe, probably change lanes). Were there warning signs like that, which the Tesla could have noticed? If so, I would expect an autonomous car to react to that. Maybe not an ADAS, though, because of the false positives.

Well, as is obviously the case in this video, you want to swerve as much as you can. You are, ideally, aware of the vehicles to the right. In this case, the Tesla hits the tow truck and then slams hard into the vehicle on its right. It would have been better to gently push that vehicle from the side than to do what happened, though neither is a great choice. The best (and human) approach is to notice, from the motions of other cars around the obstacle, that something is up, and to slow and exercise more caution around it.

Vision systems try to get a 3D map but they aren't perfect, particularly on stopped objects. But that's part of the debate between computer vision and LIDAR, which is inherently 3D while vision is inferred 3D. Motion parallax stuff is ancient (and is often just called machine vision because it's a lot simpler than computer vision) but it has limits.
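As a concrete example of what "inferred 3D" from motion means, here is a minimal sketch of the classic time-to-contact calculation from image "looming." This is my own illustration of the general technique, not Tesla's code, and the numbers are invented:

```python
# Under pure forward motion, a point at radius r (pixels) from the focus
# of expansion drifts outward at rate dr/dt, and time-to-contact is simply
# tau = r / (dr/dt) -- no explicit depth estimate required.

def time_to_contact(r_prev: float, r_curr: float, dt: float) -> float:
    """Estimate seconds to contact from two image-plane radii dt apart."""
    dr_dt = (r_curr - r_prev) / dt
    if dr_dt <= 0:
        return float("inf")  # object not expanding, so not approaching
    return r_curr / dr_dt

# A car stopped 50 m ahead, closing at 25 m/s: a feature 20 px from the
# focus of expansion expands by only ~0.4 px per 40 ms frame at first.
print(time_to_contact(20.0, 20.4, 0.04))  # roughly 2 seconds to contact
```

The weakness noted above shows up directly here: for a distant stopped object the expansion rate is tiny and easily lost in pixel noise, so the signal only becomes strong when the object is already close, whereas LIDAR measures the range directly.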

This is a tough situation. Cars parked like this are risky even for human drivers. But human drivers handle them the vast majority of the time. Perhaps Teslas do too, but I don't believe they are able to follow the human strategy here. But they didn't even build a good 3D map and slow down just from the first car, let alone the truck. Their classifier should have seen that first car.

I don't think you're going to convince me that an ADAS should swerve in this situation. An ADAS isn't a driver. The human driver is the driver, and is the one who should make the decision to swerve or not swerve, if there's a significant risk that swerving will cause a crash.

Without seeing more video, I can't say whether or not a human driver would have been likely to avoid this crash. (Other than the fact that in this case a human was driving and that human driver didn't avoid the crash.) Tow trucks parked like this are extremely risky. I bet crashes like this one occur quite frequently.

It's not clear when the Tesla would have seen even the first car. The view of both vehicles was no doubt blocked by cars ahead of the Tesla until relatively soon before the crash.

Perhaps the lack of a good 3D map played a role in the crash. It's hard to say, and we'll likely never know. One thing we can say is that the 3D mapping ability of the Tesla will increase dramatically in cars that have the new hardware. It's not clear if the Tesla in this crash had that new hardware, but even if it did, I don't believe the software to take advantage of the new hardware has been fully deployed, yet.

There is conflicting information about whether Autopilot was engaged or not, plus no information on whether the car had been updated in recent months, or whether it had Autopilot at all.
Google this:
Как рассказал владелец Tesla, в машине был включен не полноценный автопилот, а режим "ассистент водителя".

(Translation: The Tesla owner said that it was not the full autopilot that was engaged, but a "driver assistant" mode.)

Most sources have quoted the driver as saying he was in Autopilot. Autopilot is "driver assist mode," though Tesla also lets you turn on just adaptive cruise control, and the car always has forward collision warning (which may or may not have gone off) and forward collision braking (which does appear to have engaged just before the crash). These are all considered driver assist features.

Tesla, when it gets logs from a vehicle, knows if it was in autopilot at the time of a crash. However, since this vehicle probably did not communicate with the cellular network in Russia, and was burned to a cinder, we will probably not get any firm answer on that.

I'm curious how relevant today's autopilot is in predicting the next-gen FSD that Tesla plans to release. From what I gather, the order of magnitude greater processing power in the chip will let the car use the full camera resolution and accelerate/improve decision making.

The bigger question is: if robo-driving (Tesla or other) can eliminate all the dumb human accidents, will it be OK if it messes up an occasional corner case while being 5x (pick your number) safer on average? It will be interesting to see when society is ready to make that trade-off. We will never eliminate all weird accidents, so when will we get used to them?
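A back-of-envelope calculation makes that trade-off concrete. All numbers here are invented placeholders for illustration, not real crash statistics:

```python
# Toy comparison of human vs. robot crash rates. The question raised above:
# even if total crashes drop 5x, some residual robot crashes will be
# "weird" ones a human would likely have avoided.

HUMAN_CRASH_RATE = 4.0     # assumed crashes per million miles (invented)
ROBOT_IMPROVEMENT = 5.0    # the "5x safer" figure from the comment above
NOVEL_FRACTION = 0.25      # assumed share of robot crashes a human avoids

robot_rate = HUMAN_CRASH_RATE / ROBOT_IMPROVEMENT
novel_rate = robot_rate * NOVEL_FRACTION      # "weird" corner-case crashes
net_avoided = HUMAN_CRASH_RATE - robot_rate   # crashes prevented overall

print(robot_rate)   # robot crashes per million miles
print(novel_rate)   # of those, ones a human would likely have avoided
print(net_avoided)  # crashes avoided per million miles, net
```

On these invented numbers the robot prevents 16 crashes for every "weird" one it adds, yet public debate tends to weigh the novel crashes far more heavily, which is exactly the acceptance question posed above.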

No matter how much better the processing speed gets, it's going to be nowhere near the depth that LIDAR offers. One mode of sight vs. twelve makes a big difference. If anything, as processing gets better, his sensors vs. LIDAR is going to look like a Pinto vs. a Ferrari.

This is simply summed up as Elon's attempt to cut corners and do things his way. Tesla is the ONLY company not using LIDAR for its Level 4 and 5 autonomous efforts. LIDAR is 10x more effective and safer for consumers at scale; however, it is more expensive. Instead of adopting the technology, he is running an experiment with his customers to see if he can get away with a cheaper solution. By selling his cars on a global scale and collecting significantly more data, he believes his tech should be able to function in that capacity. As he collects more data his cars will get better; however, there is only so far cameras can go. Their ability to collect quality data is just not sufficient to perform pinpoint-accurate perception and prediction analytics at 1/20000 of a second. It plateaus, and his consumers pay the price. He has decided to gamble lives in his pursuit.
