NTSB lets us look inside a new Tesla accident, what does it tell us?
Submitted by brad on Fri, 2019-09-06 12:47
Because Tesla Autopilot is driving tons of miles, it's having accidents, and the NTSB is investigating them. That gives us a window we would not otherwise have into what's happening. The NTSB report on a non-injury Autopilot accident came out recently, and it lets us learn just what Tesla's Autopilot didn't perceive.
I have some analysis in a new Forbes site article Tesla autopilot accident shows what's inside -- and it's not pretty for full self drive
Comments
Martin Winlow
Sat, 2019-09-07 03:39
RADAR Vs LIDAR
The article's author asserts that the use of LIDAR in this scenario would have prevented a collision. Please explain why.
brad
Sat, 2019-09-07 10:15
LIDAR
LIDAR would have unambiguously detected the presence of the truck in the lane. Radar returns are ambiguous; the system could not figure out what they were. Vision is also ambiguous to today's systems. The LIDAR would have shown the truck extremely clearly, and the vehicle would have issued an alert immediately, then come to a stop. (It does not swerve into the next lane as yet.)
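For a rough sense of scale (the speed, reaction time, and braking figures below are assumptions for illustration, not numbers from the NTSB report), here is a back-of-the-envelope sketch of how far out a stopped object must be positively confirmed to allow an alert followed by a full stop:

```python
# Rough stopping-distance sketch: how far out must a stopped object be
# confirmed for the car to alert and brake to a halt?
# Assumed illustrative numbers, not figures from the NTSB report.

def stop_distance(speed_mps, reaction_s=1.0, decel_mps2=6.0):
    """Distance covered while reacting plus braking to zero."""
    return speed_mps * reaction_s + speed_mps ** 2 / (2 * decel_mps2)

for mph in (45, 65):
    v = mph * 0.44704          # mph -> m/s
    d = stop_distance(v)
    print(f"{mph} mph: need roughly {d:.0f} m of confirmed detection range")

# A sensor that only yields an ambiguous return until the object is much
# closer than this leaves no room for an alert plus a full stop.
```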
FKA
Sun, 2019-09-08 09:11
LIDAR
Lidar would have prevented the situation in that Tesla would have gone bankrupt by now if they had put lidar in all of their cars.
brad
Sun, 2019-09-08 12:52
The article says that
It points out explicitly that no LIDAR is available for Tesla at this time. The point, instead, is that there might be a reason that almost everybody else working on full self driving is going to wait for an affordable LIDAR before releasing. Do you not think that is a relevant issue to cover, and isn't it relevant that Tesla's systems, even in 2019, are missing this sort of problem?
FKA
Sun, 2019-09-08 15:02
Here's to the Crazy Ones
There's probably also a reason why no one built rockets the same way as SpaceX. We'll see if Tesla's "crazy idea" pays off. I think it will, and I think that crashes like this one are not relevant to judging that (for reasons that I've explained in other posts).
brad
Mon, 2019-09-09 01:39
SpaceX's rockets are great
But actually, as far as I know, they are at the core very similar to other rockets, because there's only so much you can change in the basic fundamentals. Which doesn't mean they can't do a lot at the edges. They are not different from other rockets the way a nuclear rocket would be different from an H2/O2 rocket, or an H2/O2 rocket is different from an SRB. Or a microwave powered rocket like a friend of mine tried to build. Those are the real crazy ones.
FKA
Mon, 2019-09-09 16:30
They've done a lot
SpaceX produced the "first orbital rocket to vertically land its first stage on the ground." A whole lot of people predicted that they'd never succeed.
The Virginian
Sat, 2019-09-07 10:46
They will change the code and
They will change the code and update the sensing capabilities, and within months this exact accident will never happen again. They'll push the new code out to the fleet, and suddenly all the Teslas will no longer get into this type of accident.
It's a terrible accident and should never have happened, but that said at least with autonomous vehicles we'll be able to react and improve.
brad
Sat, 2019-09-07 17:39
Alas, I fear not
That's how I thought it would be. But they've had this accident before. And they've had two fatal accidents hitting a truck crossing the highway 2 years apart. They definitely are not following the pattern that we both think they should.
BenFar
Sat, 2019-09-07 19:28
Tesla does use maps with data
Tesla does use maps with data about radar-detected stationary objects that appear to be in the roadway, introduced after the Joshua Brown fatal accident. See https://www.tesla.com/blog/upgrading-autopilot-seeing-world-radar . It is puzzling what happened with the second fatal accident with a crossing tractor-trailer, though, assuming that technology was present and operating.
brad
Sun, 2019-09-08 13:15
Thanks for correction
This does make the 2nd Florida fatality even stranger. The Mountain View fatality is harder to analyse. This learned map would know there was a crash attenuator (and rising ramp) at that location and might disregard the radar from it. It would have taken a different map there -- a lane-level map that said, "no, the lane does not suddenly veer to the left here, it goes straight and there is a gore to the left" -- to prevent that fatality. Though the radar resolution of 5 degrees should possibly have allowed the car to say, "Wow, I see the bright radar return from the crash attenuator, so I will ignore that, but instead of looking like it's off a little to the side, it really seems more and more like it's dead center."
BenFar
Tue, 2019-09-10 19:30
As you can see here https:/
As you can see here https://teslamotorsclub.com/tmc/threads/tesla-autopilot-maps.101822/page-6 especially post #104, Tesla's fleet learning for radar maps are created by Teslas driving over routes. It's possible that the two diverging lanes were included in such a map but the gore area between them was not. In that case, driving into the gore area may have left the Tesla with no "fleet learning for radar" help in detecting the stationary-object barrier in time to help, especially with the vision system looking in the direction of the sun (as I believe was reported).
brad
Tue, 2019-09-10 21:49
In the case of the gore
The accident would have been prevented by lane maps, which Tesla does not wish to have. Lane maps would have said, "the lane here goes straight, and to the left is a gore, and left of that is an off ramp." With a map saying that, it would not have concluded the gore was a lane at all. However, a radar map would also have helped, because it would have realized that the radar return it was seeing -- one presumes a strong, fairly frontal return from the crash barrier -- was at odds with what the map was saying, which would put the crash barrier more to the left. However, the resolution of radar makes that harder to figure out, though it should be up to the task, especially as the car gets closer to the crash barrier. At the start of the gore, the angular distance between the crash barrier and the correct lane would be small, and radar resolution is low. The closer you get, the more it doesn't look right, and at some point you can decide to slow (that car sped up, thinking it had a wide open "lane") and to sound a crash alert, allowing the human to wake up and veer to the right (or left if doable) even if there is not enough room to stop.
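To put rough numbers on that (the 2.5 m lateral offset is an assumed illustrative figure; the 5 degrees is the radar resolution mentioned above), here is a quick sketch of how the bearing separation grows as the car approaches:

```python
import math

# How the bearing to a crash attenuator separates from the bearing to the
# adjacent (correct) lane centre as the car approaches.
# The 2.5 m lateral offset is an assumed illustrative figure; the 5 degree
# figure is the radar angular resolution mentioned in this thread.

LATERAL_OFFSET_M = 2.5       # assumed sideways distance, barrier vs. lane centre
RADAR_RESOLUTION_DEG = 5.0

for range_m in (150, 100, 50, 30, 20):
    bearing_deg = math.degrees(math.atan2(LATERAL_OFFSET_M, range_m))
    status = "resolvable" if bearing_deg > RADAR_RESOLUTION_DEG else "blurred together"
    print(f"{range_m:>4} m out: {bearing_deg:4.1f} deg of separation -> {status}")

# Far away, the barrier and the lane sit within one radar beam width; only in
# the last few tens of metres does the return clearly look "dead center".
```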
BenFar
Wed, 2019-09-11 06:33
I don't disagree about the
I don't disagree about the advantages of lane maps. I was simply examining what may have happened with the technology in the vehicle that crashed.
I also agree with the behavior you identify when the "fleet learning for radar" is present and working correctly. I believe that is why the Tesla did so much better than any other vehicle tested recently by the European New Car Assessment Program, with exactly the type of behavior you describe. You can see the Tesla tests here https://youtu.be/_5aFZJxuJGQ . Descriptions of the tests and videos of all cars tested are here https://www.euroncap.com/en/vehicle-safety/safety-campaigns/2018-automated-driving-tests/ .
FKA
Wed, 2019-09-11 17:10
Cite?
Do you have a cite for Tesla not wanting to have lane maps? All I've been able to find is that they don't want to have "HD maps."
Not having any sort of lane map (an actual lane map or something with similar information) would be a huge mistake. Humans drive much more safely on roads they know, because they have mapped them mentally.
brad
Fri, 2019-09-13 01:15
Maps
What they have said, including on Autonomy Day, was a disdain for "lane level maps," because they change as you repaint.
FKA
Sat, 2019-09-14 05:47
Interesting
I think it's safe to say that this is a view that will change as the company gets a more well-developed product. Hopefully there is someone high up in the company that realizes that while "lane-level maps" are, of course, imperfect, the whole trick of driving is dealing with a number of imperfect inputs. Yes, maps are sometimes wrong. But as Tesla employees should be well aware, vision systems (and radar systems, and all the other sensors) are sometimes wrong too.
Either that, or maybe the quote is being taken out of context. Accounts of what someone said are sometimes inaccurate too, especially (but not exclusively) when you can't verify it with the original source.
brad
Sat, 2019-09-14 10:29
Specific quotes from Elon Musk
"high precision maps and lanes are a really bad idea"
Not out of context, I was watching and wrote these down.
FKA
Sat, 2019-09-14 17:10
high precision
I very much agree that high precision lane line maps are not a good idea.
The knowledge that there are X lanes in this section of highway, and that one lane splits off to the left while X-1 lanes go straight, leaving a paved gore area that grows wider until it ends with a barrier, is very good to have, though.
Probably the best solution in the end game is to let the AI decide how much precision to remember in any particular section of road.
brad
Sat, 2019-09-14 22:11
Shape of a gore
I would consider that high precision. Most of the teams that do high precision maps do it for localization. They then include a simple vector description of the center of the lane, which is where they plan to drive. I am not sure if they say, "Here I should be 12 inches from the left lane marker."
FKA
Sun, 2019-09-15 05:55
Rough shape of a gore
Yes, storing the (highly) precise shape of the gore would be "high precision."
I don't think it's a good idea. Imprecise information like "there's a gore between lanes 1 and 2 shortly after mile marker 72.5," when combined with live data from the cameras looking at where the gore actually is, should be fine.
I'm glad that my suspicion that Tesla isn't abandoning the latter seems to be correct.
High precision maps are expensive robocar-specific infrastructure. One of the key features of the path we ought to take to put robocars on the highways (in my opinion, and at one time in your opinion https://www.templetons.com/brad/robocars/vision.html) is that we should minimize the need for specialized infrastructure.
brad
Sun, 2019-09-15 16:37
Rough shape
That's the way full detailed maps work. You don't keep full details to try to interpret them at the time of driving over them. Rather, the high precision map lets you recognize a road area, so you can know where you are, and very importantly, know if the world still looks like your map, and thus that the pre-calculated information (like the presence of the lanes and the gore and the angle of departure etc.) is correct, or if additional calculation is needed to understand the changed road.
But when the road has not changed, which is 99.999% of the time, you can then make use of understanding of the road that was computed based on multiple passes from different viewpoints, with unlimited computing resources and unbounded real-time, which has then been vetted and improved by a human and tested by other cars. Which is a nice thing to have. Of course, you still must drive the other 0.001% of the time, when you are the first car in the fleet to encounter an unscheduled, unannounced road change, but in those moments you won't be quite as good as you can be the rest of the time.
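As a minimal sketch of that division of labour (the class names and the tolerance below are my own illustrative assumptions, not anybody's actual system): drive on the vetted map when live perception agrees with it, fall back to real-time interpretation when it doesn't.

```python
from dataclasses import dataclass

# Minimal sketch of "map as prior, live perception as check."
# All names and thresholds are illustrative assumptions.

@dataclass
class MapTile:
    lane_count: int
    lane_headings_deg: list    # precomputed offline, human-vetted

@dataclass
class LiveObservation:
    lane_count: int
    lane_headings_deg: list    # estimated from cameras right now

def map_still_valid(tile: MapTile, obs: LiveObservation, tol_deg=3.0) -> bool:
    """True when the world still looks like the map (the common case)."""
    if tile.lane_count != obs.lane_count:
        return False
    return all(abs(a - b) <= tol_deg
               for a, b in zip(tile.lane_headings_deg, obs.lane_headings_deg))

tile = MapTile(lane_count=3, lane_headings_deg=[0.0, 0.0, 0.0])
obs = LiveObservation(lane_count=3, lane_headings_deg=[0.2, -0.1, 0.4])

if map_still_valid(tile, obs):
    print("drive on the pre-computed, vetted lane geometry")
else:
    print("fall back to understanding the changed road in real time")
```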
FKA
Mon, 2019-09-16 04:16
Sounds like a crutch
"when the road has not changed, which is 99.999% of the time"
That number was obviously made up, and bears no resemblance to reality.
--
"understanding of the road that was computed based on multiple passes from different viewpoints, with unlimited computing resources and unbounded real-time, which has then be vetted and improved by a human and tested by other cars"
Which is incredibly expensive and doesn't scale to the entire globe. It serves only to cover up for the fact that you haven't created a self-driving car. You're just faking it with software 1.0 methodologies.
I'm very glad Tesla isn't doing it.
brad
Mon, 2019-09-16 15:45
Why doesn't it scale?
The cost of mapping a street is several orders of magnitude cheaper than building it, or even repainting it. Why does that suddenly not scale? Why do you need to do the entire globe at once? (Not that it has bothered Google to do all the streets in a country with Streetview at once, but they are pretty rich.)
You do it one road at a time, in the streets you want to drive. If you have a car that can drive entirely without the map, you can drive with it in areas you didn't map, and you can drive better in the areas you did map. Why wouldn't you do that?
FKA
Mon, 2019-09-16 21:23
Why wouldn't you do that?
In order to have a car that can drive entirely without the map, you need to build a car that can drive entirely without a map. That's exactly what Tesla is working on.
Once you have that, why would you bother with human curated high precision maps?
brad
Tue, 2019-09-17 05:51
It's the reverse
If you have a car that can drive without a map, you have a car that can make a map in real time (a pretty impressive feat). So why would you want to forget everything each car learns as it drives the road rather than store it in a map?
The answer is, a car will drive better with a map even if it can drive OK without one. So the non-map car might get confused by lane markers and drive into a gore, and the map car won't get confused.
For example, you could maintain maps of roads that are difficult or complex, and more minimal maps of roads that are simple and easy. Certain roads will be so complex you don't drive them without a correct map, but that might be a subset. Or you may start out needing a map on all roads, and then need maps on fewer and fewer of them as your real-time analysis improves, though I doubt you ever get to needing zero for a long, long time.
FKA
Thu, 2019-09-19 12:05
A self-driving car may or may
A self-driving car may or may not be able to make a high precision map in real time. Making a map requires different hardware and software than driving.
That said, as I said above, I do expect that self-driving cars will remember details about the roads they drive. As I said, humans do this too, and they drive better, all else equal, on roads that they are more familiar with. (All else equal including things like speed, attention to the road, etc. I don't know about others, but I drive much more carefully on roads that I'm not familiar with. Despite that, I probably still drive less safely on such roads, at least when there are uncommon features such as a very long gore area that isn't marked with chevrons.)
A Tesla is not "a non-map car." They have maps. I'm not sure the extent to which they use those maps in the current navigate on Autopilot, but they must use them at least a little bit (and they must contain some lane level detail) for the navigate part.
brad
Thu, 2019-09-19 16:45
To drive
You must at a minimum look at the road and figure out its geometry, where you belong in that geometry, and where other cars are likely to travel in it. In other words, you make a map. If you are in the left lane you can possibly not bother to map the right lane, but you still should, to understand what cars are going to do in that lane or next to it.
This is what a map is -- the coalesced understanding of the road, its lanes, how they connect, what's static. You must build it as you drive if you are doing more than keeping in one lane. You need it even then because driving involves predicting what everybody else on the road might do, and that depends on the map.
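As a minimal illustration of what that coalesced understanding might look like (the structure and field names are illustrative assumptions only, not any particular system's representation):

```python
from dataclasses import dataclass, field
from typing import List

# A minimal sketch of the "map" a car must build in real time just to drive:
# lanes, how they connect, and what is static. Illustrative assumptions only.

@dataclass
class Lane:
    lane_id: int
    centerline: List[tuple]                               # (x, y) points ahead
    successors: List[int] = field(default_factory=list)   # how lanes connect

@dataclass
class LocalMap:
    lanes: List[Lane]
    static_obstacles: List[tuple]    # (x, y) of things that won't move

    def likely_paths(self, vehicle_lane_id: int) -> List[int]:
        """Lanes another car in vehicle_lane_id can plausibly end up in --
        the kind of query prediction needs, and the reason even a
        'map-free' car has to hold something like this in its head."""
        lane = next(l for l in self.lanes if l.lane_id == vehicle_lane_id)
        return [vehicle_lane_id] + lane.successors

m = LocalMap(
    lanes=[Lane(1, [(0, 0), (50, 0)], successors=[2]),
           Lane(2, [(50, 0), (100, 5)])],
    static_obstacles=[(60.0, -3.0)],
)
print(m.likely_paths(1))   # [1, 2]
```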
FKA
Fri, 2019-09-20 06:21
Maps
Sure, you make a "map" of sorts. It's not necessarily a high precision one. It's not necessarily a highly accurate one. It's accurate, and precise, in the places that are relevant to the moment. You don't necessarily store it anywhere long term. In fact, since the "map" to a large and growing extent exists as part of the state of the neural network (in the short term, and the neural network itself in the longer term), it's not even obvious how you would store it (eventually the AI itself will figure that out).
I'm struggling to figure out what your point is.
brad
Fri, 2019-09-20 14:12
Pure neural network design
Ah, you are talking about the idea of pure neural network designs, which attempt to have the neural network understand the scene as a scene, rather than using the neural network to segment the scene, and using other tools (including other networks) to then understand it further and make decisions.
I agree, a pure neural network would not have a "map" to remember. However, as yet, nobody has any serious progress on a pure neural network approach and I am very, very skeptical of that approach, as are most others, but not everybody.
I am talking about approaches where neural networks figure out where the lanes are (i.e., making a map) and other software then decides what lane to drive in, and how to drive in the lane, and when to change lanes, etc.
FKA
Fri, 2019-09-20 20:35
And?
I'm still not sure what the relevance of this is.
brad
Sat, 2019-09-21 16:16
And
I don't think pure neural network driving is on any near horizon. So the cars that want to drive without a map are the cars that understand the road well enough to make a map in real time. Thus, doing that and then forgetting all you learned is foolish.
FKA
Sun, 2019-09-22 04:14
And?
I can agree with all that. I just don't think it has any relevance to anything.
FKA
Sun, 2019-09-08 08:35
Software 2.0
"Tesla Full Self Driving...will almost surely use most of the core components found in Autopilot."
That's where I think you're very wrong. You're thinking of the problem in a "software 1.0" mindset. Tesla is clearly moving toward solving the problem primarily in a "software 2.0" way. Furthermore, this "software 2.0" method is, in my humble opinion, the only way to solve the problem. As Tesla should be solving the problem, and seemingly as they are solving the problem, the Autopilot we see today is little more than a method of collecting data to use to train the real autonomous vehicle software. Virtually none of it will be used at the time Tesla actually produces an autonomous vehicle (which I don't think will happen any time in 2019 or 2020).
https://m.youtube.com/watch?v=zywIvINSlaI&feature=youtu.be is a video using the terms "software 1.0" and "software 2.0" as I have used them above.
I think this mistake in understanding how Tesla is building its software is also reflected in your statement that they don't have "stereo vision." It's true in the sense that Tesla doesn't have traditional stereo vision ("software 1.0" stereo vision). But Tesla has several forward facing cameras, plus radar, which give more than enough information for the car to determine where objects are located in 3D space. With enough processing power the cars will have better than binocular "vision."
Tesla is clearly taking the "software 2.0" approach to solving computer vision, and ultimately, to solving autonomous driving. Under this approach, feeding the individual views from multiple cameras into the AI systems is superior to creating a 3D model through stereo vision (or lidar, for that matter) and then feeding that 3D model into the AI. The former is much more processor intensive, though, and that's one of the reasons why FSD requires a hardware upgrade.
brad
Sun, 2019-09-08 13:08
Whole new version
In that talk, he's saying that machine learning is software 2.0, and software 1.0 was classical algorithms. Autopilot is already a "2.0" system, as FSD will be. Which would thus seem not to prove your theory that they are two different systems.
If I heard evidence that the FSD team was told, "OK, throw out everything you have for Autopilot and start from scratch" then I would say we can't look at the quality of Autopilot in judging progress on FSD. I have not seen that evidence yet.
The funny thing is, it would not be a strange move with old school software 1.0. In that field, you often get the temptation to throw out and completely rewrite. With machine learning tools, the code part is much smaller, less likely to be in need of complete rewrite. And the old training data is generally always useful training data.
Next time I see Andrej I will ask.
The baseline between Tesla's 3 cameras -- which do different fields of view -- is quite small. This is not to say you can't get some stereo from that. However, I would be surprised if you would get much neural network stereo from such cameras at the distance of 40 meters when the Tesla decided the road ahead was clear and it was time to speed up. On the other hand, you would have thought you should have had some stereo, even from that baseline, before 7.5 meters when the car changed its mind and issued the FCW beeps. Decent baseline stereo should identify the distance to a truck at 40m, though that's on the edge from what I understand.
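For a rough sense of those numbers, the usual stereo range-error estimate is error ~ z^2 * disparity_error / (focal_length * baseline). The focal length, baseline, and matching error below are assumed round figures for illustration, not Tesla's camera specs:

```python
# Rough stereo range-error sketch: error ~ z^2 * disparity_error / (f * B)
# All numbers are assumed illustrative values, not Tesla's camera specs.

FOCAL_PX = 1200          # focal length in pixels (assumed)
BASELINE_M = 0.15        # distance between the two cameras (assumed, small)
DISPARITY_ERR_PX = 0.5   # how precisely the matcher localises disparity (assumed)

def range_error_m(z_m):
    return z_m ** 2 * DISPARITY_ERR_PX / (FOCAL_PX * BASELINE_M)

for z in (7.5, 20, 40):
    print(f"at {z:>4} m: roughly +/- {range_error_m(z):.1f} m of range uncertainty")

# At 7.5 m the uncertainty is centimetres; at 40 m it is metres -- enough to
# blur a stopped truck into the background for a narrow-baseline pair.
```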
FKA
Sun, 2019-09-08 14:04
2.0
Did you watch the whole video? There are graphics in it that show how 2.0 code is replacing 1.0 code. (They're not throwing out 1.0 code and rewriting it.) Start at 15:45 if you can't be bothered to watch the whole thing.
There's enough distance between the two outside forward facing cameras to get some stereo vision. But there's not enough processing power in the old hardware to process it properly. No doubt the FSD system that requires the upgraded hardware will handle this better, once it is ready for production release.
brad
Mon, 2019-09-09 01:37
Not yet
But my impression from the summary is that Autopilot is mostly 2.0 (machine learning), and so the transition from Autopilot to FSD is not a 1.0 to 2.0 transition of the sort you are talking about. I am sure it partly is; I would very much hope there is a ton of stuff in the FSD project.
But that doesn't mean that how well they build and improve Autopilot doesn't reveal something about how well they will build and improve FSD.
FKA
Mon, 2019-09-09 16:56
Best of its kind
Autopilot is the best of its kind. A very wise man once said that "Autopilot goes beyond anything ever offered in a car before." Autopilot saves lives. Billions of miles have been driven on Autopilot, and very few crashes (if any) have been the result of Autopilot when used correctly.
I think that lends some evidence that FSD will likely also go beyond anything ever offered in a car before, and that FSD will be the best of its kind. But if FSD is synonymous with a vehicle that can drive without human supervision, then it doesn't lend that much evidence, because that's a very different system. (Almost all evidence points to the fact that FSD will not be a system that can drive without human supervision, at least not initially.)
--
Even if Autopilot were 100% "2.0 code" (once you watch the video I think you'll agree it isn't), don't you think that the neural networks will be completely redone for the new hardware? The old hardware processes frames at 110 frames per second. The new hardware processes frames at 2,300 frames per second. Surely you have to throw the old neural networks out (figuratively speaking; they'll still be used for Autopilot on the old hardware) and retrain new ones, in order to best take advantage of that increase in processing power.
If that's true, how does the fact that a Tesla failed to recognize a fire truck nearly two years ago tell us...anything...about how a Tesla using a completely new neural network running on new hardware in the future will perform?
brad
Mon, 2019-09-09 17:58
ADAS and self driving
You need to make up your mind. Either FSD is an evolution of Autopilot, and thus we can judge it based on Autopilot, both for good and bad, or it's a completely different product and we're wasting our time looking at Autopilot.
"Completely redone?" Not at all. They won't be identical. They will be bigger, they will have new training data, new approaches. But they will use what they learned and they will also use what they built.
Autopilot had a well known problem with stalled vehicles much more than two years ago. The fact that it still had the same problem in early 2018, when it should have been very high on their problem list (below hitting the broadside of trucks, but pretty high), tells us a bad thing. Because the real problem of real-true-actual-full-self-driving is in dealing with things you've never seen before. But failing to deal with the things you have seen before, that you know you have to deal with, doesn't breed confidence.
FKA
Mon, 2019-09-09 20:05
judgements, wontfixes, and priorities
I don't think we can judge how well FSD will handle stalled vehicles based on Autopilot any more than we can judge how well FSD will stop at red lights based on Autopilot.
You suggest it should be high on Tesla's list to make Autopilot do things that it wasn't designed to do. If it weren't for the fact that Tesla has admitted that they can't do FSD with hardware less than 3.0, then I might agree. But Tesla has admitted that they can't do FSD with hardware less than 3.0. So I don't think we can say what should be a priority. Wasting time working on Autopilot features that can be solved on hardware 3.0 but can't be easily solved on hardware less than 3.0 would be a mistake.
brad
Tue, 2019-09-10 09:28
Can't do
They can't do FSD with Hardware 3.0, so I agree that they can't do it with 2.5. But they can do more with 2.5, and the more crucial things in particular.
The broadside of a truck is not one of those crucial things. Those accidents required completely inattentive drivers.
The suddenly revealed stalled vehicle is a risk for any driver. A human looking away for just a second can crash into that. So it's squarely the sort of thing you want an FCW/AEB system to see, and FCW/AEB is a core function in autopilot. They should be backporting anything they learn at least.
FKA
Tue, 2019-09-10 19:00
Hardware 3.0 is the hardware that will be used for FSD
I'm not sure what you mean when you say they can't do FSD with hardware 3.0.
I'm sure they have been backporting what they can. But to the extent the limited processing power they have can be used for scenarios that are more likely to occur when Autopilot is being used correctly, they need to prioritize that.
There's also the issue of false positives. If there's a 0.1% chance that there's a stalled car in the lane you're in, it might make sense to not hit the brakes under Autopilot even though you'd surely hit the brakes in autonomous mode.
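To put that trade-off in rough expected-cost terms (the probabilities and costs below are made-up illustrative values, not anyone's actual tuning):

```python
# Toy expected-cost view of the false-positive trade-off described above.
# Probabilities and costs are made-up illustrative values.

def should_brake(p_obstacle, cost_collision, cost_false_brake):
    """Brake when the expected cost of ignoring the detection exceeds the
    expected cost of braking for nothing."""
    expected_cost_ignore = p_obstacle * cost_collision
    expected_cost_brake = (1 - p_obstacle) * cost_false_brake
    return expected_cost_ignore > expected_cost_brake

# A driver-assist system, where a phantom hard brake on the highway is itself
# dangerous, may rationally ignore a 0.1% detection...
print(should_brake(0.001, cost_collision=1000, cost_false_brake=5))    # False
# ...while a car with no human backup must weigh the same detection far more
# heavily (modelled here as a much higher collision cost).
print(should_brake(0.001, cost_collision=100000, cost_false_brake=5))  # True
```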
brad
Tue, 2019-09-10 21:57
Can't do FSD with 3.0
Surely you must have noticed that I am skeptical that they can do a real "full self driving" product with just the hardware in the car, including the 3.0 processor.
However, it is possible that they can produce what Elon is very incorrectly labelling "full" self driving with that hardware, which is a human supervised autopilot that works off highways. Nobody else on the planet would call that full self driving.
Some day in the more distant future it will be possible to do a real full self driving product with just cameras and some kind of fast processor (be it Tesla HW 3.0 or something better). However, nobody knows when that date is, but most people bet it's not for a while.
I actually wonder about doing it with the cameras they have because there is no way to clean the cameras, other than the front facers. It's not going to be good to have a car that shuts down with a splash of mud on the side.
FKA
Wed, 2019-09-11 17:28
I see
I generally read "FSD" as being a brand name for a particular Tesla product. Has anyone used the term "FSD" or "full self-driving" prior to Tesla using it as a product name?
Good point about the mud. I agree with you that unsupervised driving (which I guess is what you mean by "real full self-driving") is a while away. I think even Tesla officially agrees with you. It will surely take years after building a feature-complete release to work out the bugs, demonstrate reliability, and get the necessary regulatory approval to do unsupervised driving.
brad
Fri, 2019-09-13 01:18
Tesla term
It may be a Tesla name but it's also an English phrase. Tesla is using it in a way different from everybody else. In fact, many people simply use "self driving" to mean "real actual full self driving," the so-called "level 4" and find Tesla calling a product "full self driving" when it's really a supervised autopilot very silly.
However, Elon Musk has said that (fake) full self driving will be "feature complete" this year (he is certain of it) and that next year it will be able to operate unattended, but the regulators might still not allow it, so they probably can't sell it (even though it will work). Or so he says.
If Tesla succeeds in getting people to refer to their product as full self driving, we will need actual full self driving and real actual full self driving etc.
FKA
Sat, 2019-09-14 05:54
Silly
It is silly.
I think it's even sillier to pretend that they're not being silly, or to pretend that Elon Musk doesn't sometimes say things that are wildly over-optimistic.
Aaron
Mon, 2019-09-09 11:09
I think I should point out
I think I should point out that the accident occurred in January of 2018, not 2019 as the article says. My thoughts are very similar to yours, that FSD without lidar, or stereo cams, or HD maps, or a good DMS doesn't seem too wise. Nor is there strong evidence that they're making good progress towards FSD. Where's the autonomous cross-country drive promised by the end of 2017?
That said, is there any reasonable way to know if Autopilot would perform any better in September 2019 in this scenario than it did in January 2018?
brad
Mon, 2019-09-09 11:45
Thanks, corrected that
Obviously, we can't say. We don't get reports on cars that are successfully braking for this situation, and indeed people may not even notice them. Tesla should consider, now that it is able to speak about the event with the publication of the report, doing a test track demonstration. Alternately, they could put code in cars to look for this situation and grab video of the car doing the right thing, and get permission to use that video in order to say, "we have addressed this."
Raptor
Mon, 2019-09-23 21:23
The AP often does not brake/stop for a frontal collision
I took ownership of my Tesla model 3 in June 2019 and now 3 months later it is clear that the AP has some serious flaws when it comes to recognising stationary objects in front of the car - even up close.
- example one: a series of cones are standing on the temporary line in a construction zone. I waited to the very last moment before taking over and steering the car away from the cones - else it would have hit them.
- example two: similar to above but this time with concrete slabs making a sharp 90 degree turn in a construction zone. I was less bold in how close I would go before braking, but again the AP did not warn me or try to brake to avoid a frontal collision into the concrete blocks (and no cars in front of me).
- third example: not mine but a video I saw where the Tesla hits (and continues) through a series of barrel sized traffic “cones” similar to my example 1.
I am not surprised about the accident with the fire truck. An AP system that has 4 seconds to stop an impending frontal collision with a massive object like a firetruck, and fails to do so, has a serious flaw - that goes beyond discussion. I am hopeful it will improve with the V10 update - but I remain sceptical. Maybe Brad has a point about the choice of sensors not being optimal.
FKA
Fri, 2019-09-27 05:23
A Tesla is not a self-driving car
These anecdotes, I think, are more relevant than the extremely rare crashes that occur, when it comes to deciding if Tesla has succeeded yet in creating a self-driving vehicle.
Obviously they haven't. At least not with the production software they've released. Whether or not they can, and whether or not they will, with the sensors they have, is something we really can only speculate about, though.
Brad points to what he claims "most experts" or "almost all experts" say. But almost all of those experts are going to fail at building a self-driving car. It's something that so far absolutely no one has demonstrated that they know how to do.
brad
Sat, 2019-09-28 02:38
Experts are wrong
Yes, the predictions of experts are not an assurance of the future, far from it. However, when you are going against the prevailing wisdom, then you have to show why you know something the others don't. Tesla has made its case for that, but it's not a very strong one. As I wrote earlier, some day computer vision will be enough to get depth and make lidar superfluous. Some day. Who knows, maybe that day is close and Tesla's team is the one that will do it. It's not impossible. It just doesn't seem like a very likely outcome at present.
FKA
Sat, 2019-09-28 04:33
Have to show?
They don't have to show anyone anything. Not until they're done.
Why should they?
I don't know how long it's going to take. What I think is a good bet, though, is that a computer that can't make sense of data coming from cameras without lidar can't properly predict human behavior enough to be trusted to drive itself in general (as opposed to extremely limited) driving scenarios. Sure, for simple situations it's enough to know that there's a large object in front of you, and nothing else. But being able to handle those simple scenarios just gives you a false sense of security.
Great, you've detected a large object stopped in the right lane. But you have no clue what it is. Do you go into the other lane, or just stop in your lane? Do you enter the lane with oncoming traffic to go around it?
I guess the Waymo answer is to stop and wait for a human in some central control room to take over. Maybe lidar combined with mediocre computer vision can work for that level 3 system. Maybe level 4/5 is far away, and level 3 using lidar is what we'll have to accept for the next several years.
brad
Sun, 2019-09-29 01:53
Yes, that's Tesla's argument
If correct though, it means that self-driving is much further away than we hoped, not that Tesla is going to solve both full prediction of other road users and full perception from camera images "real soon now."
I see the prediction problem and the vision perception problem as important, but different. It is not clear that they get solved at the same time, or even which one is harder. (And even if one is harder than the other, that the easier one is necessarily solved sooner, though that would be the best guess.)
But since LIDAR solves the depth map problem today, it eliminates that question. It does not eliminate the need for understanding of the scene and predicting what the others on/near the road are going to do.
FKA
Sun, 2019-09-29 11:48
Billions of miles
I suspect self-driving is much further away than we hoped, although I'm not sure how much further.
Solving the prediction problem and solving the perception problem can very much go hand in hand. The cars can drive around (with hundreds of thousands of drivers doing this for them for free). To predict correctly you need to perceive correctly. Your score on predictions acts both as a score on your ability to predict and as a score on your ability to perceive. And remember, you're not just predicting what you'll see in the future with the cameras, you're also predicting what you'll "see" with the radar and the ultrasonics.
That said, they may still need to have more hardware. It would certainly be easier to do this with 360 degree lidar. At the least they might need more cameras, and/or a camera that can quickly turn 360 degrees. And it's not clear whether or not the processing power they have even with the latest hardware is enough (fortunately this last problem is relatively inexpensive to fix).
Fortunately again, even adding more sensors to their cars will be cheaper than hiring drivers to drive billions of miles, in nearly every single driving condition. They'll take a big PR hit if they have to do this, but the cost of the additional hardware will be borne by the car owners except (probably) for those who have already purchased FSD (any idea what percentage that is?). And the time to retrofit all those cars will be non-trivial.
But in the meantime you still have a car that is one of the best, if not the best, in terms of ADAS. Hopefully that'll be enough to keep the orders coming long enough to survive until the software can be completed.
What was really disappointing to me today was to see the videos of people getting into crashes (or near-crashes) using summon. One of the most disturbing was a Tesla that pulled out in front of cross traffic. I'm not sure which cameras they use to check for cross traffic at an intersection (probably the forward-looking side cameras), and I'm not sure any of them have as good of a view as a human with the ability to move and turn his/her head. Hopefully I'm wrong about that, and it was just a software error (or that the video didn't show what it claimed to show). The ultrasonics are not adequate for this. They might sometimes be enough for a parking lot, but not for pulling onto a 45-mph (or even higher) roadway. And unfortunately the reality is that people build fences (illegally, but it isn't enforced enough) in places that force you to enter the intersection in order to check for cross traffic. Where are the forward-facing side cameras relative to the driver? Further back, I believe, which could be a big problem. (Edit: They're on the b-pillar. I don't think this is far enough forward to handle rather common intersections, including one I travel almost every work day, where an undoubtedly illegal fence blocks the view. I guess the Tesla could take a different, longer route...) (Edit 2: They're close to where my head usually is. A little further back, and with no ability to lean forward to see around an obstacle, but that intersection near-crash is likely a software error, and not a hardware error.)
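As a toy model of that fence-at-the-corner geometry (all the distances below are assumed illustrative values, not measurements from a Tesla):

```python
# Toy geometry for the "fence right up to the corner" problem: how far the
# car's nose must poke past the sight obstruction before a viewpoint can see
# along the cross street. Distances are assumed illustrative values.

NOSE_TO_DRIVER_EYES_M = 2.0    # assumed: typical eye position behind the nose
NOSE_TO_B_PILLAR_CAM_M = 2.3   # assumed: b-pillar camera slightly further back
DRIVER_LEAN_M = 0.3            # a human can lean forward; a fixed camera cannot

def nose_protrusion_needed(viewpoint_setback_m, lean_m=0.0):
    """With a fence running right up to the corner, the viewpoint must clear
    the fence line, so the nose protrudes by the viewpoint's setback (minus
    any ability to lean forward)."""
    return viewpoint_setback_m - lean_m

print(f"human driver : nose ~{nose_protrusion_needed(NOSE_TO_DRIVER_EYES_M, DRIVER_LEAN_M):.1f} m past the fence line")
print(f"b-pillar cam : nose ~{nose_protrusion_needed(NOSE_TO_B_PILLAR_CAM_M):.1f} m past the fence line")
```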
Lidar would be great, of course. But unless you can figure out how to get billions of miles of real world data from cars with lidar, it's irrelevant, because you can't solve the prediction problem without it. Simulated miles don't help you solve the prediction problem. Simulated miles just confirm the biases of the simulation.
brad
Sun, 2019-09-29 12:20
Unsupervised learning
To use millions of miles you would need an unsupervised learning approach, which nobody really has at present. At least for perception. There is interesting potential for it in prediction, if your perception system works.
FKA
Sun, 2019-09-29 14:08
How Tesla uses its billions of miles
Your assertion is highly oversimplified.
See https://towardsdatascience.com/teslas-deep-learning-at-scale-7eed85b235d3
Especially, see https://youtu.be/A44hbogdKwI and https://youtu.be/v5l-jPsAK7k
"We don't write explicit code for, 'is the right blinker on,' 'is the left blinker on.'"
"In shadow-mode, the vehicle is always making predictions.... And then we look for mispredictions."
"While you are driving the car, you are actually annotating the data, because you are steering the wheel."
"The network is predicting paths it can't even see, with incredibly high accuracy."
Tesla can use, and is using, its billions of miles of data. There are other ways that other companies could be utilizing this much data, in this broad of a range of conditions and locations, but to my knowledge there is no way to get anywhere near this amount and quality of data from cars with lidar sensors attached to them. Lidar is just too damn expensive.
You've criticized Tesla before because they've said they find little use for simulators. The thing is, they don't need simulators. They have the real world. And the real world tends to be much more realistic than any simulator (though with all the data Tesla is processing in shadow mode, they could make the world's most realistic simulator, if that were what they wanted to do).
(I originally used the word "collecting" rather than "utilizing" and "processing." This was inaccurate. Most of the data Tesla is processing is being processed within the car, in what they call "shadow mode," which I believe is a term they use to refer both to times when a human is driving and the system is watching and to times when part of the system is driving and another part is watching. It's only the interesting parts, no doubt including rare events and when they get a highly confident prediction wrong, and from the videos including data for features they're working on, like the X seconds of sensor data prior to the Tesla getting cut off, that are actually being sent back to HQ. They refer to "switching a bit" when they move a feature from shadow mode, where they collect data, to driving mode, where the feature is used to contribute to driving decisions.)
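A minimal sketch of that loop as I understand it from the talks (the function name, error measure, and thresholds are illustrative assumptions, not Tesla's implementation): the car keeps predicting, the human keeps driving, and only confident mispredictions get flagged for upload.

```python
# Sketch of a shadow-mode misprediction trigger. Illustrative assumptions only.

def worth_uploading(predicted_path, human_path, confidence,
                    error_threshold_m=1.5, confidence_threshold=0.9):
    """Return True when this moment is worth sending back for training:
    the network was confident, yet the human did something quite different."""
    error_m = max(abs(p - h) for p, h in zip(predicted_path, human_path))
    return confidence >= confidence_threshold and error_m >= error_threshold_m

# Driver steered where the network expected: nothing flagged.
print(worth_uploading([0.0, 0.1, 0.2], [0.0, 0.1, 0.3], confidence=0.95))  # False
# Network was sure, driver diverged sharply: flag the clip.
print(worth_uploading([0.0, 0.1, 0.2], [0.5, 1.2, 2.0], confidence=0.95))  # True
```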
brad
Tue, 2019-10-01 03:42
Supervised
Yes, it is great that they can learn from the drivers, though that is not pure unsupervised learning. As you might guess, people are pretty skeptical about whether you can build a sufficiently safe and testable driving decision engine based just on that. Indeed, such an engine may drive too much like humans; you need a way to split out the good and bad human behaviours. But it's an interesting approach to be sure. It's less clear that Tesla actually uploads all this data -- in fact we know they don't -- but the question is how good they can be at selectively uploading it to train from.
We need not just bet-your-life reliability from this, and not just bet-your-kids reliability, but bet-bystanders'-kids'-lives reliability. Which is, as everybody knows, very hard.
FKA
Tue, 2019-10-01 12:09
Imitation
Once again I think you are oversimplifying. Imitating humans is only one of the techniques used (and in the video they specifically said that they have ways of splitting out the data from the good drivers). I'd argue that some form of imitation of humans is necessary, if you're going to have robots and humans driving on the same roadways. Waymo learned that very early on, when they gave up on trying to handle four-way stops properly and started just imitating how humans handle them. But imitation learning is not the only technique that Tesla is using.
Yes, the problem is very hard, and solving it without getting billions of miles in is not going to be possible.
I'm not sure what you're saying about uploading. They don't upload everything. Most of the data is processed within the car. Only some of the data is uploaded to HQ. I think I said that in my prior post. That doesn't mean they don't use the data that they don't upload, though. They do use it, they just do much of the processing within the car, rather than in a centralized location.
How well they can do that in-car processing is, of course, key. But I'd say it's just a matter of time before they get it right, and I'd also say that their approach of using non-self-driving cars to gather and filter through billions of miles of real-world driving data is the only way to solve the problem. Perhaps some lidar-based company will go that route. For instance, Waymo could build an international, human-driven taxi service and collect lidar data on the side (what they're already doing in very limited areas). They also are, I suppose, (through their parent/sister company?) getting a lot of miles in through their mapping efforts, and maybe they're collecting the kinds of data that Tesla is collecting through that (e.g. info about highway cut offs). But without billions of miles of real-world data, I don't think you can build a self-driving car.
brad
Thu, 2019-10-03 07:13
Yes, not the only thing
But learning from humans is one of the key advantages of having that large fleet of drivers. In the car, they can test how algorithms are working (comparing one to another) but to learn from the event, they have to upload it, and label it, and put it into a training data set.
You do have to learn some human driving patterns, but I would not agree it's settled whether the best way to do that is to base your path plans on how humans drove that situation, or to start with a more "robotic" path planner and then have it use machine learning tools to "humanize" its plans, but in explicitly decided ways.
For example, the Tesla TACC is rather jerky today and I would like it to be smoother, the way a human drives. You could do that by watching how humans follow other cars, and trying to eliminate those who got too close. But you can also just improve the algorithm to minimize jerk forces and do a better job.
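As a sketch of that second approach (the jerk and acceleration limits below are assumed comfort numbers, not anything from Tesla): limit jerk directly in the follower rather than imitating human car-following.

```python
# Minimal jerk-limited acceleration command. Limits are assumed comfort values.

MAX_JERK = 2.0     # m/s^3, assumed comfort limit
MAX_ACCEL = 2.0    # m/s^2
MIN_ACCEL = -3.0   # m/s^2

def jerk_limited_accel(desired_accel, current_accel, dt=0.1):
    """Move toward the desired acceleration, but never change it faster than
    the jerk limit allows, so speed changes feel smooth rather than jerky."""
    max_step = MAX_JERK * dt
    step = max(-max_step, min(max_step, desired_accel - current_accel))
    return max(MIN_ACCEL, min(MAX_ACCEL, current_accel + step))

# The controller asks for a sudden -2.5 m/s^2; the command ramps there smoothly.
a = 0.0
for _ in range(5):
    a = jerk_limited_accel(-2.5, a)
    print(round(a, 2))   # -0.2, -0.4, -0.6, -0.8, -1.0
```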