NTSB lets us look inside a new Tesla accident, what does it tell us?


Because Tesla Autopilot is driving tons of miles, it's having accidents, and the NTSB is investigating them. That gives us a window we would not otherwise have into what's happening. The NTSB report on a non-injury Autopilot accident came out recently, so let's look at what Tesla's Autopilot didn't perceive.

I have some analysis in a new article on the Forbes site: Tesla autopilot accident shows what's inside -- and it's not pretty for full self drive

Comments

The article's author asserts that the use of LIDAR in this scenario would have prevented a collision. Please explain why.

LIDAR would have unambiguously detected the presence of the truck in the lane. Radar returns are ambiguous; the system could not figure out what they were. Vision is also ambiguous to today's systems. The LIDAR would have shown the truck extremely clearly, and the vehicle would have issued an alert immediately, then come to a stop. (It does not swerve into the next lane as yet.)
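
To make "unambiguously detected" concrete: with a lidar point cloud you only have to ask whether a cluster of returns sits inside the lane corridor, above the road surface. The sketch below is a minimal illustration of that idea, not anyone's actual pipeline; the corridor width, height cutoff and point counts are made-up values.

```python
import numpy as np

def obstacle_in_lane(points, lane_half_width=1.8, max_range=60.0,
                     min_height=0.3, min_points=20):
    """Crude check for an obstacle ahead using a lidar point cloud.

    points: (N, 3) array of x (forward), y (left), z (up) in metres,
            in the vehicle frame with z = 0 at road level.
    Returns the distance to the nearest qualifying cluster, or None.
    All thresholds here are illustrative, not tuned values.
    """
    ahead = points[(points[:, 0] > 0) & (points[:, 0] < max_range)]
    in_lane = ahead[np.abs(ahead[:, 1]) < lane_half_width]
    above_road = in_lane[in_lane[:, 2] > min_height]   # ignore road-surface returns
    if len(above_road) < min_points:
        return None
    return float(above_road[:, 0].min())

# Toy example: a wall of points ~30 m ahead, roughly where a stalled truck would be.
truck = np.column_stack([
    np.full(200, 30.0) + np.random.rand(200) * 0.5,   # x: ~30 m ahead
    np.random.uniform(-1.2, 1.2, 200),                # y: spread across the lane
    np.random.uniform(0.5, 3.0, 200),                 # z: well above the road
])
print(obstacle_in_lane(truck))   # ~30.0 -> alert and brake
```

The point of the sketch is that with direct range measurements there is no classification step needed before deciding something solid occupies the lane; radar and vision both require disambiguation first.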

Lidar would have prevented the situation in that Tesla would have gone bankrupt by now if they had put lidar in all of their cars.

It points out explicitly that no LIDAR is available for Tesla at this time. The point, instead, is that there might be a reason that almost everybody else working on full self driving is going to wait for an affordable LIDAR before releasing. Do you not think that is a relevant issue to cover, along with the fact that Tesla's systems, even in 2019, are missing this sort of problem?

There's probably also a reason why no one built rockets the same way as SpaceX. We'll see if Tesla's "crazy idea" pays off. I think it will, and I think that crashes like this one are not relevant to judging that (for reasons that I've explained in other posts).

But actually, as far as I know, they are at the core very similar to other rockets, because there's only so much you can change in the basic fundamentals. Which doesn't mean they can't do a lot at the edges. They are not different from other rockets the way a nuclear rocket would be different from an H2/O2 rocket, or an H2/O2 rocket is different from an SRB. Or a microwave-powered rocket like a friend of mine tried to build. Those are the real crazy ones.

SpaceX produced the "first orbital rocket to vertically land its first stage on the ground." A whole lot of people predicted that they'd never succeed.

They will change the code and update the sensing capabilities, and within months this exact accident will never happen again. They'll push the new code out to the fleet, and suddenly all the Teslas will no longer get into this type of accident.

It's a terrible accident and should never have happened, but that said at least with autonomous vehicles we'll be able to react and improve.

That's how I thought it would be. But they've had this accident before. And they've had two fatal accidents hitting a truck crossing the highway 2 years apart. They definitely are not following the pattern that we both think they should.

Tesla does use maps with data about radar-detected stationary objects that appear to be in the roadway, introduced after the Joshua Brown fatal accident. See https://www.tesla.com/blog/upgrading-autopilot-seeing-world-radar . It is puzzling what happened with the second fatal accident with a crossing tractor-trailer, though, assuming that technology was present and operating.

This does make the 2nd Florida fatality even stranger. The Mountain View fatality is harder to analyse. This learned map would know there was a crash attenuator (and rising ramp) at that location and might disregard the radar from it. It would have been a different map there -- a lane level map -- that said, "no, the lane does not suddenly veer to the left here, it goes straight and there is a gore to the left," which would have prevented that fatality. Though the radar resolution of 5 degrees should possibly have allowed the car to say, "Wow, I see the bright radar return from the crash attenuator, so I will ignore that, but instead of looking like it's off a little to the side, it really seems more and more like it's dead center."
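
To illustrate the reasoning in that last sentence: a hypothetical radar whitelist would let the car ignore a strong stationary return only while its bearing stays where the map says the known object (the attenuator) should be; once the return starts looking dead center at close range, it should be treated as something in the car's path. Everything here (names, bearings, tolerances) is invented for illustration, not Tesla's actual logic.

```python
# Hypothetical whitelist entry: bearing (degrees, 0 = dead ahead) at which the
# mapped stationary object (a crash attenuator) should appear from the correct
# lane. For simplicity it's fixed; in reality it would be recomputed from the
# map and the car's pose at every step.
MAPPED_ATTENUATOR_BEARING_DEG = -4.0
RADAR_BEARING_RESOLUTION_DEG = 5.0   # coarse angular resolution, per the discussion above

def classify_return(measured_bearing_deg, range_m):
    """Decide whether a strong stationary radar return can be safely ignored."""
    matches_map = abs(measured_bearing_deg - MAPPED_ATTENUATOR_BEARING_DEG) \
        <= RADAR_BEARING_RESOLUTION_DEG
    dead_center = abs(measured_bearing_deg) < 2.0
    if dead_center and range_m < 50.0:
        return "obstacle in path: alert and brake"
    if matches_map:
        return "ignore: matches mapped stationary object"
    return "unknown stationary return: slow and re-evaluate"

# As the car closes in, the same physical object resolves from "slightly left"
# to "dead ahead", and the whitelist stops excusing it.
for rng, bearing in [(120, -4.5), (80, -3.0), (40, -1.5), (25, -0.5)]:
    print(rng, "m:", classify_return(bearing, rng))
```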

As you can see here https://teslamotorsclub.com/tmc/threads/tesla-autopilot-maps.101822/page-6 , especially post #104, Tesla's fleet-learning radar maps are created by Teslas driving over routes. It's possible that the two diverging lanes were included in such a map but the gore area between them was not. In that case, driving into the gore area may have left the Tesla with no "fleet learning for radar" help in detecting the stationary-object barrier in time, especially with the vision system looking in the direction of the sun (as I believe was reported).

The accident would have been prevented by lane maps, which Tesla does not wish to have. A lane map would have said, "the lane here goes straight, and to the left is a gore, and left of that is an off ramp." With a map saying that, it would not have concluded the gore was a lane at all. However, a radar map would also have helped, because the car would have realized that the radar return it was seeing -- one presumes a strong, fairly frontal return from the crash barrier -- was at odds with what the map was saying, which would place the crash barrier more to the left. The resolution of radar makes that harder to figure out, though it should be up to the task, especially as the car gets closer to the barrier. At the start of the gore, the angular distance between the crash barrier and the correct lane would be small, and radar resolution is low. The closer you get, the more it doesn't look right, and at some point you can decide to slow (that car sped up, thinking it had a wide open "lane") and to sound a crash alert, allowing the human to wake up and veer to the right (or left if doable) even if there is not enough room to stop.
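
As a concrete picture of what that lane map buys you, here is a tiny sketch (with an entirely made-up map format, not any vendor's) of the check that would have mattered: look up what the map calls the pavement the vision system thinks is a lane, and refuse to speed up into anything the map calls a gore.

```python
from dataclasses import dataclass

@dataclass
class MapRegion:
    kind: str          # "lane", "gore", "shoulder", ...
    left_m: float      # lateral bounds relative to the road reference line, metres
    right_m: float

# Hypothetical map content near the gore: off-ramp lane, paved gore, through lane.
MAP_AT_THIS_LOCATION = [
    MapRegion("lane", left_m=-7.2, right_m=-3.6),   # off ramp diverging left
    MapRegion("gore", left_m=-3.6, right_m=0.0),    # paved gore, ends at a barrier
    MapRegion("lane", left_m=0.0,  right_m=3.6),    # through lane goes straight
]

def region_kind(lateral_offset_m):
    """What does the map call the strip of pavement at this lateral offset?"""
    for region in MAP_AT_THIS_LOCATION:
        if region.left_m <= lateral_offset_m < region.right_m:
            return region.kind
    return "off-road"

# Vision says "wide open lane ahead" centred at -1.8 m; the map disagrees.
perceived_center = -1.8
if region_kind(perceived_center) != "lane":
    print("Map conflict: perceived lane is a", region_kind(perceived_center),
          "-> do not speed up, alert the driver, prepare to stop")
```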

I don't disagree about the advantages of lane maps. I was simply examining what may have happened with the technology in the vehicle that crashed.

I also agree with the behavior you identify when the "fleet learning for radar" is present and working correctly. I believe that is why the Tesla did so much better than any other vehicle tested recently by the European New Car Assessment Program, with exactly the type of behavior you describe. You can see the Tesla tests here https://youtu.be/_5aFZJxuJGQ . Description of the tests and videos of all cars tested are here https://www.euroncap.com/en/vehicle-safety/safety-campaigns/2018-automated-driving-tests/ .

Do you have a cite for Tesla not wanting to have lane maps? All I've been able to find is that they don't want to have "HD maps."

Not having any sort of lane map (an actual lane map or something with similar information) would be a huge mistake. Humans drive much more safely on roads they know, because they have mapped them mentally.

What they have said, including on Autonomy Day, was a disdain for "lane level maps," because those change as you repaint.

I think it's safe to say that this is a view that will change as the company gets a more well-developed product. Hopefully there is someone high up in the company that realizes that while "lane-level maps" are, of course, imperfect, the whole trick of driving is dealing with a number of imperfect inputs. Yes, maps are sometimes wrong. But as Tesla employees should be well aware, vision systems (and radar systems, and all the other sensors) are sometimes wrong too.

Either that, or maybe the quote is being taken out of context. Accounts of what someone said are sometimes inaccurate too, especially (but not exclusively) when you can't verify it with the original source.

"high precision maps and lanes are a really bad idea"

"any change and it can't adapt."

"we briefly barked up the tree of high precision lane line [maps], but decided it wasn't a good idea."

Not out of context, I was watching and wrote these down.

I very much agree that high precision lane line maps are not a good idea.

The knowledge that there are X lanes in this section of highway, and that one lane splits off to the left while X-1 lanes go straight, leaving a paved gore area that grows wider until it ends with a barrier, is very good to have, though.

Probably the best solution in the end game is to let the AI decide how much precision to remember in any particular section of road.

I would consider that high precision. Most of the teams that do high precision maps do it for localization. They then include a simple vector description of the center of the lane, which is where they plan to drive. I am not sure if they say, "Here I should be 12 inches from the left lane marker."

Yes, storing the (highly) precise shape of the gore would be "high precision."

I don't think it's a good idea. Imprecise information like "there's a gore between lanes 1 and 2 shortly after mile marker 72.5," when combined with live data from the cameras looking at where the gore actually is, should be fine.

I'm glad that my suspicion that Tesla isn't abandoning the latter seems to be correct.

High precision maps are expensive robocar-specific infrastructure. One of the key features of the path we ought to take to put robocars on the highways (in my opinion, and at one time in your opinion https://www.templetons.com/brad/robocars/vision.html), is that we should minimize the need for specialized infrastructure.

Here's the way full detailed maps actually work. You don't keep the full details to try to interpret them at the time of driving over the road. Rather, the high precision map lets you recognize a road area, so you can know where you are, and, very importantly, know whether the world still looks like your map -- and thus that the pre-calculated information (like the presence of the lanes and the gore and the angle of departure etc.) is correct -- or whether additional calculation is needed to understand the changed road.

But when the road has not changed, which is 99.999% of the time, you can then make use of an understanding of the road that was computed based on multiple passes from different viewpoints, with unlimited computing resources and unbounded time, which has then been vetted and improved by a human and tested by other cars. Which is a nice thing to have. Of course, you still must drive the other 0.001% of the time, when you are the first car in the fleet to encounter an unscheduled, unannounced road change, but then you won't be quite as good as you can be the rest of the time.
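
A minimal sketch of that split between "the world still matches the map" and "the road changed," with invented matching features and thresholds: compare a few live lane-edge measurements against the stored ones, and only rely on the precomputed, vetted geometry when they agree.

```python
def map_agreement(live_lane_edges, mapped_lane_edges, tolerance_m=0.5):
    """Fraction of mapped lane-edge sample points that the live view confirms.

    Both arguments are lists of lateral offsets (metres) sampled at the same
    longitudinal stations. Purely illustrative; real systems match far richer features.
    """
    hits = sum(1 for live, mapped in zip(live_lane_edges, mapped_lane_edges)
               if abs(live - mapped) <= tolerance_m)
    return hits / len(mapped_lane_edges)

def choose_geometry(live_lane_edges, mapped_lane_edges, precomputed_geometry):
    score = map_agreement(live_lane_edges, mapped_lane_edges)
    if score > 0.9:
        # The common case: the world still matches the map, so use the
        # carefully precomputed, vetted understanding of this road.
        return precomputed_geometry
    # Rare case: repaint or construction. Fall back to interpreting the
    # road in real time, and flag this segment for re-mapping.
    return {"source": "live", "needs_remap": True}

mapped = [3.6, 3.6, 3.7, 3.7, 3.8]
live_ok = [3.6, 3.7, 3.7, 3.6, 3.8]
live_repainted = [3.6, 3.0, 2.4, 1.8, 1.2]
print(choose_geometry(live_ok, mapped, {"source": "map"}))        # uses the map
print(choose_geometry(live_repainted, mapped, {"source": "map"})) # falls back to live
```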

"when the road has not changed, which is 99.999% of the time"

That number was obviously made up, and bears no resemblance to reality.

--

"understanding of the road that was computed based on multiple passes from different viewpoints, with unlimited computing resources and unbounded real-time, which has then be vetted and improved by a human and tested by other cars"

Which is incredibly expensive and doesn't scale to the entire globe. It serves only to cover up for the fact that you haven't created a self-driving car. You're just faking it with software 1.0 methodologies.

I'm very glad Tesla isn't doing it.

The cost of mapping a street is several orders of magnitude less than the cost of building it, or even repainting it. Why does that suddenly not scale? Why do you need to do the entire globe at once? (Not that it has bothered Google to do all the streets of a country with Street View, but they are pretty rich.)

You do it one road at a time, in the streets you want to drive. If you have a car that can drive entirely without the map, you can drive with it in areas you didn't map, and you can drive better in the areas you did map. Why wouldn't you do that?

In order to have a car that can drive entirely without the map, you need to build a car that can drive entirely without a map. That's exactly what Tesla is working on.

Once you have that, why would you bother with human curated high precision maps?

If you have a car that can drive without a map, you can have a car that can make a map in real time (a pretty impressive feat). So why would you want to forget everything each car learns as it drives the road rather than store it in a map?

The answer is, a car will drive better with a map even if it can drive OK without one. So the non-map car might get confused by lane markers and drive into a gore, and the map car won't get confused.

For example, you could maintain detailed maps of roads that are difficult or complex, and more minimal maps of roads that are simple and easy. Certain roads will be so complex you don't drive them without a correct map, but that might be a subset. Or you may start out needing a map on all roads, and then need it less and less as your real-time analysis improves, though I doubt you ever get to needing zero maps for a long, long time.
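
One way to picture "let the system decide how much to remember," with tiers and cutoffs that are purely illustrative:

```python
def map_tier_for_segment(complexity_score, realtime_confidence):
    """Pick how much map detail to retain for a road segment.

    complexity_score:     0..1, e.g. derived from lane splits, gores, odd geometry.
    realtime_confidence:  0..1, how well the live system handles this segment unmapped.
    Both the tiers and the cutoffs are illustrative only.
    """
    if complexity_score > 0.8 or realtime_confidence < 0.5:
        return "full lane-level map (don't drive here without a current map)"
    if complexity_score > 0.4:
        return "lane topology only (lane counts, splits, gores)"
    return "minimal map (road exists, speed limit, little else)"

print(map_tier_for_segment(0.9, 0.7))   # complex interchange
print(map_tier_for_segment(0.3, 0.95))  # straight rural highway
```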

"Tesla Full Self Driving...will almost surely use most of the core components found in Autopilot."

That's where I think you're very wrong. You're thinking of the problem in a "software 1.0" mindset. Tesla is clearly moving toward solving the problem primarily in a "software 2.0" way. Furthermore, this "software 2.0" method is, in my humble opinion, the only way to solve the problem. As Tesla should be solving the problem, and seemingly as they are solving the problem, the Autopilot we see today is little more than a method of collecting data to use to train the real autonomous vehicle software. Virtually none of it will be used at the time Tesla actually produces an autonomous vehicle (which I don't think will happen any time in 2019 or 2020).

https://m.youtube.com/watch?v=zywIvINSlaI&feature=youtu.be is a video using the terms "software 1.0" and "software 2.0" as I have used them above.

I think this mistake in understanding how Tesla is building its software is also reflected in your statement that they don't have "stereo vision." It's true in the sense that Tesla doesn't have traditional stereo vision ("software 1.0" stereo vision). But Tesla has several forward facing cameras, plus radar, which gives more than enough information for the car to determine where objects are located in 3D space. With enough processing power the cars will have better than binocular "vision."

Tesla is clearly taking the "software 2.0" approach to solving computer vision, and ultimately, to solving autonomous driving. Under this approach, feeding the individual views from multiple cameras into the AI systems is superior to creating a 3D model through stereo vision (or lidar, for that matter) and then feeding that 3D model into the AI. The former is much more processor intensive, though, and that's one of the reasons why FSD requires a hardware upgrade.
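
For what it's worth, here's a toy sketch of the shape of that approach, written with PyTorch as an assumed stand-in. The class name and layer sizes are invented; this is not Tesla's network, just the general pattern of feeding raw per-camera views into a learned model and letting it infer geometry, rather than hand-building a stereo/3D stage first.

```python
import torch
import torch.nn as nn

class MultiCamEncoder(nn.Module):
    """Toy 'software 2.0' style fusion: each camera view is encoded by a shared
    CNN, the features are concatenated, and a head predicts object distance
    directly -- no hand-built stereo or explicit 3D model in between."""
    def __init__(self, num_cameras=3):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=4), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Sequential(
            nn.Linear(32 * num_cameras, 64), nn.ReLU(),
            nn.Linear(64, 1),   # e.g. distance to the nearest in-lane obstacle
        )

    def forward(self, views):            # views: (batch, num_cameras, 3, H, W)
        feats = [self.backbone(views[:, i]) for i in range(views.shape[1])]
        return self.head(torch.cat(feats, dim=1))

# Three forward-facing views per sample; the network is free to learn whatever
# geometric cues (including implicit stereo) help it predict range.
model = MultiCamEncoder()
dummy = torch.randn(2, 3, 3, 128, 256)
print(model(dummy).shape)   # torch.Size([2, 1])
```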

In that talk, he's saying that machine learning is software 2.0, and software 1.0 was classical algorithms. Autopilot is already a "2.0" system, as FSD will be. Which thus seems not to prove your theory that they are two different systems.

If I heard evidence that the FSD team was told, "OK, throw out everything you have for Autopilot and start from scratch" then I would say we can't look at the quality of Autopilot in judging progress on FSD. I have not seen that evidence yet.

The funny thing is, it would not be a strange move with old school software 1.0. In that field, you often get the temptation to throw out and completely rewrite. With machine learning tools, the code part is much smaller, less likely to be in need of complete rewrite. And the old training data is generally always useful training data.

Next time I see Andrej I will ask.

The baseline between Tesla's 3 cameras -- which have different fields of view -- is quite small. This is not to say you can't get some stereo from that. However, I would be surprised if you would get much neural network stereo from such cameras at the distance of 40 meters, which is where the Tesla decided the road ahead was clear and it was time to speed up. On the other hand, you would have thought you should have had some stereo, even from that baseline, before 7.5 meters, when the car changed its mind and issued the FCW beeps. Decent baseline stereo should identify the distance to a truck at 40m, though that's on the edge from what I understand.
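
To put rough numbers on that: with the pinhole stereo relation d = f*B/Z, a short baseline gives only a few pixels of disparity at 40 m, which is why depth at that range is "on the edge." The baseline and focal length below are guesses for illustration, not Tesla's actual camera geometry.

```python
def disparity_px(baseline_m, focal_px, distance_m):
    """Stereo disparity d = f * B / Z under the pinhole camera model."""
    return focal_px * baseline_m / distance_m

BASELINE_M = 0.15    # assumed: a short baseline between closely spaced cameras
FOCAL_PX = 1000.0    # assumed: ~1000-pixel focal length for a typical automotive camera

for z in (7.5, 40.0):
    d = disparity_px(BASELINE_M, FOCAL_PX, z)
    print(f"distance {z:5.1f} m -> disparity {d:5.1f} px")

# At 40 m the disparity is only a few pixels, so a fraction-of-a-pixel matching
# error translates into many metres of range error; at 7.5 m the signal is far stronger.
```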

Did you watch the whole video? There are graphics in it that show how 2.0 code is replacing 1.0 code. (They're not throwing out 1.0 code and rewriting it.) Start at 15:45 if you can't be bothered to watch the whole thing.

There's enough distance between the two outside forward facing cameras to get some stereo vision. But there's not enough processing power in the old hardware to process it properly. No doubt the FSD system that requires the upgraded hardware will handle this better, once it is ready for production release.

But my impression from the summary is that Autopilot is mostly 2.0 (machine learning), and so the transition from Autopilot to FSD is not a 1.0 to 2.0 transition of the sort you are talking about. I am sure it partly is; I would very much hope there is a ton of new stuff in the FSD project.

But that doesn't mean that how well they build and improve Autopilot doesn't reveal something about how well they will build and improve FSD.

Autopilot is the best of its kind. A very wise man once said that "Autopilot goes beyond anything ever offered in a car before." Autopilot saves lives. Billions of miles have been driven on Autopilot, and very few crashes (if any) have been the result of Autopilot when used correctly.

I think that lends some evidence that FSD will likely also go beyond anything ever offered in a car before, and that FSD will be the best of its kind. But if FSD is synonymous with a vehicle that can drive without human supervision, then it doesn't lend that much evidence, because that's a very different system. (Almost all evidence points to the fact that FSD will not be a system that can drive without human supervision, at least not initially.)

--

Even if Autopilot were 100% "2.0 code" (once you watch the video I think you'll agree it isn't), don't you think that the neural networks will be completely redone for the new hardware? The old hardware processes frames at 110 frames per second. The new hardware processes frames at 2,300 frames per second. Surely you have to throw the old neural networks out (figuratively speaking; they'll still be used for Autopilot on the old hardware) and retrain new ones, in order to best take advantage of that increase in processing power.

If that's true, how does the fact that a Tesla failed to recognize a fire truck nearly two years ago tell us...anything...about how a Tesla using a completely new neural network running on new hardware in the future will perform?

You need to make up your mind. Either FSD is an evolution of Autopilot, and thus we can judge it based on Autopilot, both for good and bad, or it's a completely different product and we're wasting our time looking at Autopilot.

"Completely redone?" Not at all. They won't be identical. They will be bigger, they will have new training data, new approaches. But they will use what they learned and they will also use what they built.

Autopilot had a well known problem with stalled vehicles much more than two years ago. The fact that it still had the same problem in early 2018, when it should have been a very high priority on their list (below hitting the broadside of trucks, but pretty high), tells us a bad thing. Because the real problem of real-true-actual-full-self-driving is in dealing with things you've never seen before. If you're not dealing with the things you have seen before, the things you know you have to deal with, that doesn't breed confidence.

I don't think we can judge how well FSD will handle stalled vehicles based on Autopilot any more than we can judge how well FSD will stop at red lights based on Autopilot.

You suggest it should be high on Tesla's list to make Autopilot do things that it wasn't designed to do. If it weren't for the fact that Tesla has admitted that they can't do FSD with hardware less than 3.0, then I might agree. But they have admitted that, so I don't think we can say what should be a priority. Wasting time working on Autopilot features that can be solved on hardware 3.0 but can't easily be solved on lesser hardware would be a mistake.

They can't do FSD with Hardware 3.0, so I agree that they can't do it with 2.5. But they can do more with 2.5, and the more crucial things in particular.

The broadside of a truck is not one of those crucial things. Those accidents required completely inattentive drivers.

The suddenly revealed stalled vehicle is a risk for any driver. A human looking away for just a second can crash into that. So it's squarely the sort of thing you want an FCW/AEB system to see, and FCW/AEB is a core function in Autopilot. They should at least be backporting anything they learn.

I'm not sure what you mean when you say they can't do FSD with hardware 3.0.

I'm sure they have been backporting what they can. But to the extent the limited processing power they have can be used for scenarios that are more likely to occur when Autopilot is being used correctly, they need to prioritize that.

There's also the issue of false positives. If there's a 0.1% chance that there's a stalled car in the lane you're in, it might make sense to not hit the brakes under Autopilot even though you'd surely hit the brakes in autonomous mode.
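
That trade-off is basically an expected-cost threshold, and the threshold legitimately shifts depending on whether a human is supervising. The probabilities and costs below are invented purely to show the shape of the argument:

```python
def should_brake(p_obstacle, cost_missed_obstacle, cost_false_brake):
    """Brake when the expected cost of not braking exceeds the expected cost of braking.

    Not braking risks a crash with probability p_obstacle; braking for nothing
    has its own (much smaller, but nonzero) cost: rear-end risk, driver distrust.
    """
    expected_cost_no_brake = p_obstacle * cost_missed_obstacle
    expected_cost_brake = (1 - p_obstacle) * cost_false_brake
    return expected_cost_no_brake > expected_cost_brake

# With a supervised system the human is the backstop, so phantom braking is
# relatively costlier; unsupervised, a missed stalled car is catastrophic.
print(should_brake(0.001, cost_missed_obstacle=1_000, cost_false_brake=5))    # False: don't brake
print(should_brake(0.001, cost_missed_obstacle=100_000, cost_false_brake=5))  # True: brake
```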

Surely you must have noticed that I am skeptical that they can do a real "full self driving" product with just the hardware in the car, including the 3.0 processor.

However, it is possible that they can produce what Elon is very incorrectly labelling "full" self driving with that hardware, which is a human supervised autopilot that works off highways. Nobody else on the planet would call that full self driving.

Some day in the more distant future it will be possible to do a real full self driving product with just cameras and some type of fast processor (be it Tesla HW 3.0 or something better). Nobody knows when that date is, but most people bet it's not for a while.

I actually wonder about doing it with the cameras they have because there is no way to clean the cameras, other than the front facers. It's not going to be good to have a car that shuts down with a splash of mud on the side.

I generally read "FSD" as being a brand name for a particular Tesla product. Has anyone used the term "FSD" or "full self-driving" prior to Tesla using it as a product name?

Good point about the mud. I agree with you that unsupervised driving (which I guess is what you mean by "real full self-driving") is a while away. I think even Tesla officially agrees with you. It will surely take years after building a feature-complete release to work out the bugs, demonstrate reliability, and get the necessary regulatory approval to do unsupervised driving.

It may be a Tesla name but it's also an English phrase. Tesla is using it in a way different from everybody else. In fact, many people simply use "self driving" to mean "real actual full self driving," the so-called "level 4" and find Tesla calling a product "full self driving" when it's really a supervised autopilot very silly.

However, Elon Musk has said that (fake) full self driving will be "feature complete" this year (he is certain of it) and that next year it will be able to operate unattended, but the regulators still might not allow it, so they probably can't sell it (even though it will work). Or so he says.

If Tesla succeeds in getting people to refer to their product as full self driving, we will need actual full self driving and real actual full self driving etc.

It is silly.

I think it's even sillier to pretend that they're not being silly, or to pretend that Elon Musk doesn't sometimes say things that are wildly over-optimistic.

I think I should point out that the accident occurred in January of 2018, not 2019 as the article says. My thoughts are very similar to yours, that FSD without lidar, or stereo cams, or HD maps, or a good DMS doesn't seem too wise. Nor is there strong evidence that they're making good progress towards FSD. Where's the autonomous cross-country drive promised by the end of 2017?

That said, is there any reasonable way to know if Autopilot would perform any better in September 2019 in this scenario than it did in January 2018?
