Tesla smart summon is a self-inflicted wound

I tried out smart summon on my Tesla yesterday. Both times it got confused and stuck for so long that it blocked parking lot traffic and I had to run over to it and move it. Videos have surfaced of the cars (gently) hitting things. Even if it's working well for many people, these results erode confidence in the capability of Tesla's systems. Tesla has driven over its own foot releasing this product in this state, and for nothing, since it's not at all useful.

See Forbes.com story: Tesla rolls over own feet with pointless smart summon

Comments

Brad,
StupidSummon has zero redeeming social value. It should be forbidden from being used in any public spaces and treble damages should be awarded to anyone harmed by its use, half of which should be paid by Tesla for flagrant neglect of the public good.
Teslas already enjoy preferential parking in many lot (even though their utilization of marginal electricity means that the marginal electricity is being provided by keeping on the most dirty (usually coal) electrical generator to furnish that marginal electricity.)
If you want to use StupidSummon on your own property with your own family at risk should you get too cute in using it, be my guest. On public property, be prepared to pay a very heavy price.

Also, what is it about SiliconValleyXXs? Are they afraid of getting wet? I thought that it never rained out there? Are they incapable of walking??? How entitled are they???? Here Tesla; here Model 3; here, here. Aren't they all so rich that they have chauffeurs? Alain

Smart Summon is a beta release, meant to be used only on private property. That means it is not finished. It will certainly get better. If you clearly can’t understand that, then simply do not use it. But do not threaten smart users with “paying a very heavy price”. That will only incite others who may say “my attorney will beat your attorney”.
Beyond that, the rest of your rant is filled with inaccuracies and grammatical mistakes (Teslas already enjoy preferential parking in many lot)... Who, sir, is actually the stupid one?
Further, “marginal electricity” provided by the most dirty (usually coal)? Where do you live? I live in an area which uses clean energy. If you don’t like where you live, move! Or pay the few extra cents to get clean energy like the “Smart” people do.
As far as entitled Tesla Owners, man do you sound like a very jealous kid! Grow up. Get a job that pays well and learn that the real reason there are so many of these cars on the road (and soon to be way more than all others), is because they simply are better than all of the others. Even if they were recharged daily by the “dirty” electrical power grid in your neighborhood, they would still be far cleaner than your ICE vehicle.
It is best that you learn a little bit more about what you say before you speak. Then, perhaps (but not guaranteed), people may think you are smarter.

A very well written article dealing with real-world issues about Tesla's smart summon not covered anywhere else. There doesn't seem to be any real value for the Tesla owner. At some point, Tesla owners must realize Autopilot is not their answer to sleeping while driving or summoning their car across a packed parking lot. Summoning their car from the privacy of a private home is one thing, but as Elon Musk warned, it is not to be used in public. Ignorance seems to prevail among some Tesla owners, with vehicle damage brought on by expectations higher than what Tesla warned against.

Autopilot is actually a very nice product, and it makes driving much nicer, especially in traffic jams. Smart summon is of very limited use. I could see using it in an empty parking lot in very heavy rain -- if it works in such rain. I could also see a person with limited ability to walk using it. But for most people, not.

Fortunately, the damage reports are very few and I suspect Tesla will even pay if they remain few.

But the videos I've seen have been surprisingly bad. It's not just corner cases that the car is unable to handle. It's pretty much everything.

I'm not sure why they released this.

You don't get it. Smart Summon is released so that Tesla can collect data about the performance of Smart Summon. Tesla will use this data to train its neural networks, in order to improve Smart Summon.

The problem is, it's not worth wrecking your reputation and confidence to get that data.

I think you're wrong about that. As long as no one (and no one's pet) is seriously injured, most people will forget about this very quickly, to the extent that they care at all about it. In fact, I wouldn't be surprised if sales have already gone up due to the increased publicity. Tesla has always offered bleeding-edge tech. They don't force anyone to use it.

On the other hand, they don't seem to be at the point where lack of this sort of data is the problem. The car is misbehaving in very common situations in which they can collect plenty of data in shadow mode. It also was clear from the limited release that the car just wasn't ready. It doesn't behave anything like a human driver would, and that's just unacceptable.

I think the real reason they pushed this release is so that they can raise the price on FSD like they said they would. They might wind up pushing that back again given that the rollout has been somewhat disastrous. But that's no doubt where the pressure came from.

My concern is not their reputation. It's that they don't seem to have very good technology to handle low speed driving in locations with informal, unwritten rules. Yes, parking lots are hard, but so are neighborhoods, and the difficulties of each overlap quite a bit.

Well, no, it's not wrecked, but it's tarnished. As is Waymo's because they have some issues with left turns. Everybody is aiming at a target where the vehicle has a quality level you can bet your life on. When the vehicles do things that make people say, "I would not bet my life on that" it makes people judge the product as further from reality.

I haven't seen any effect on the stock price, which is significantly up since then (for largely unrelated reasons).

Smart summon will no doubt get better and better over time, just like Autopilot has.

People, including you, continue to bet their lives on their Teslas every day. Or have you sold yours?

I do expect it to improve. I am saying they released it far too early, as it has eroded confidence in their capabilities. At least a lot of people seem to be saying that, and since it's a subjective opinion thing, that makes it true.

I watch the road with autopilot on. Last week I had to intervene to prevent a crash with the new version 10 at a lane merge. It's still got lots of distance to go.

I agree they released it too early. They made a mistake releasing it in the state it was in.

I just don't think it was a very significant mistake, as long as there are no damages of the sort that can't be fixed (serious injuries to people or pets). There is likely some damage to Tesla's reputation, but it should be short-lived. Plus, don't forget that postponing the release indefinitely would have damaged their reputation too.

You watch the road with Autopilot on. So does just about everyone. That's going to be the case until Tesla clearly demonstrates that this isn't necessary. Whether or not they released Smart Summon too early (which will be something that happened years in the past by the time this is an issue) is irrelevant.

I think your anecdotes about what people are saying have more to do with who you talk to. As I pointed out, the stock didn't seem to be affected at all. The people you talked to, did they have confidence in Tesla to begin with?

Yes, I even had one of my biggest Tesla-boosting friends write me to apologize for being critical of my predictions. Yes, that's just an anecdote, but I see the same sentiment in Tesla forums.

Strangely, I don't think the value of Autopilot and FSD tech is reflected much in Tesla's stock price. When Elon started making promises of each Tesla being able to pull in $200K from Tesla Network fees, that didn't move the stock. If people believed those claims, they should be pushing the stock even further through the roof.

I think the market is mostly betting on Tesla being dominant in electric cars, not self-driving ones. And the uncertainty is not over when they deliver FSD, but how many orders they get and whether they can deliver them.

I think the market for the most part realizes that FSD is just going to be an incremental step above the current Autopilot. I'm fairly certain that the market mostly ignores Elon when he says ridiculous things.

The release of Smart Summon was abysmal, and it did lower my expectations of Tesla's ability to deliver. I was surprised that the market didn't react, seemingly at all. But my expectations had already been lowered a lot by the videos I saw of the early access participants. It's why I say that I think Tesla would have had mostly the same loss in confidence from further postponing the Smart Summon release as it had by releasing it.

The reality is that all but the most starry-eyed seem to accept that Elon's predictions of functionality real soon now are quite unreliable. As Elon himself said about his own predictions, sometimes he's late but he gets it done. He knows he over-predicts timelines, and so do Tesla owners, so I think delaying smart summon until it was useful would not have been so upsetting.

I did see one use of smart summon that is (very modestly) useful. Some people are using it to summon the car while they are walking to it, hoping to meet it partway there. That saves you some walking and the time to pull out of a spot, and of course it's actually easier to control pulling out from outside the car.

To make that work better, I would want it as a keyfob function, or have some way to enable it without going through a lot of UI on my phone -- pull out phone, unlock, invoke Tesla app, wait for it to start, wait some more, select summon, start the summon etc. Hard to do while walking.

I could see using it in a fairly empty parking lot to save me the trouble of pulling out. Problem is that it has to be fairly empty, because right now summon (probably wisely) is very conservative backing out and moving, and that means that unexpected traffic shows up in the lane and gets blocked by the car. You don't want to do that.

In some ways, smart summon might be more useful if it just pulled out of a space and stopped with the car pointed the right direction for you to get in, as long as you aren't unloading a shopping cart.

It's hard to see how to make Smart Summon into more than just a gimmick so long as it requires someone with line-of-sight to operate it. One thing that would help a lot would be a better designation of the pick up location. It's expected in a parking lot that cars will temporarily be blocked when others are pulling out. But it's generally unacceptable (albeit with some exceptions) to completely block other cars when loading and unloading.

They probably need good maps to do this well. Then they can present a plan for how to get to the precise pick up location and the operator can confirm or maybe even modify the plan. Ideally they'd be able to build the maps while navigating the parking lot while arriving, if the particular parking lot wasn't mapped already.

I think they'll eventually get there. And eventually you won't need line-of-sight. It'll take some time, though.

Yes, a valet-style park where you can summon the car before you leave a building is a useful thing. That needs maps, and also rules -- i.e. there will be only so many "waiting spots" by the door for cars to wait for their masters, and so cars have to go there only when they have reserved a spot, and hold it for only a few minutes for their master to come out.

I think that in effect owners of busy parking lots will contract with a provider who offers "parking lot management" to them. This includes keeping the lot map up to date and managing cars that want to come in and valet within it. The car will ask its HQ for information on parking in a lot it's going to, and get told the policies and map of the lot, where to go in the lot, and when it can come forward to wait for its master. This is not actually a very complex thing, and I don't expect stores and employers to have to pay very much for this service. They can also have simpler service and policies (just a map or just some policies) and will do so to please the customers who want to come in robocars. It's a win, actually. You can tell the self-parking cars to park at the back of the lot to leave room for the HDVs. You can tell them to park valet-dense back there, increasing lot capacity. In some cases you can tell them to leave if the lot fills up and go to a satellite lot. Well worth it to the lot owner.
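
A minimal sketch of what such a lot-management policy might look like to a car, assuming a hypothetical service and invented field names (nothing here is a real API):

```python
# Hypothetical record a car might fetch from a "parking lot management"
# provider before entering a lot. All names are invented for illustration.
from dataclasses import dataclass
from typing import Optional

@dataclass
class LotPolicy:
    lot_id: str
    map_url: str                      # aerial map with lanes/spaces annotated
    robocar_zone: str                 # e.g. "rear rows 10-14, valet-dense"
    pickup_spots: int = 4             # waiting spots near the door
    max_wait_minutes: int = 5         # how long a car may hold a pickup spot
    overflow_lot: Optional[str] = None  # satellite lot if this one fills up

def request_pickup(policy: LotPolicy, spots_in_use: int) -> str:
    """Decide whether the car may come forward to wait for its owner."""
    if spots_in_use < policy.pickup_spots:
        return "proceed to pickup area"
    return "hold in robocar zone until a spot frees up"

policy = LotPolicy("lot-42", "https://example.com/lot42.png", "rear rows 10-14")
print(request_pickup(policy, spots_in_use=4))  # hold in robocar zone ...
```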

There's no need for maps or rules or other infrastructure beyond the maps and rules and other infrastructure we have for Uber drivers.

Not for personally-owned cars, anyway.

Maps and rules are virtual infrastructure, not physical, and because virtual infrastructure is cheap and flexible, most robocar teams are big fans of it. It solves a lot of problems, including the complex question of handling parking lots, which humans actually kinda suck at, including Uber drivers, who end up being given rules about when and how they can go to crowded parking areas like airports.

It's compounded by the fact that the cars can't see the whole lot very well from inside a parking space, so if you summon one, it can't easily tell if there is a free spot by the door to wait for you, and if too many of them go they will congest the lot.

Unlike the roads, parking lot rules are often very ad hoc, and are just written in English on the pavement and signs, in non-standard ways that require human-level intelligence to interpret. That can be solved with a map.

The vast majority of parking lots are not fundamentally any harder than the street that I live on, where rules are often equally ad hoc.

The maps and rules you talk about above are not cheap. In fact, they would be devastating. Imagine if we didn't allow Uber drivers to drop people off places without getting specialized maps and rules created by the property owner. Uber wouldn't just be more expensive. It simply wouldn't exist. (We can leave whether or not that'd be a good thing for another debate.)

Yes, some places, like airports (and many stadiums, and Disney World) have special rules for Uber drivers. In many cases these are largely for profiteering purposes (Disney, for instance, bans Uber from its properties in order to sell exclusive access to Lyft). But they ostensibly are for the same reasons you list.

No doubt there will be some special rules, and some special maps, for autonomous vehicles. I don't think there needs to be much beyond the special rules and special maps for Uber drivers. Not for personally-owned vehicles, anyway. For robotaxi companies, there likely will be a lot more in the way of rules, as there is otherwise essentially no limit to the amount of traffic they will create. This is especially true for parking. Robotaxi companies should have explicit permission to park on private property. Personally-owned robocars don't need this. They have implicit permission to park in the usual places where people have implicit permission to park - e.g. in a supermarket parking lot while the owner-operator is personally shopping at the supermarket.

I agree that parking lot rules are often very ad hoc. I don't agree at all that they are written in English. They're not written at all, and having a human try to write them down would be impossible. Figuring the rules out does require intelligence, and artificial intelligence will be able to figure out the rules and encode them in machine-readable format much more efficiently than humans ever could. That's how software 2.0 works. You don't hand-code the rules. You hand-code something that can figure out what the rules are, and feed it the billions of miles of data it'll need to figure it out. You don't hand-code where cars are allowed to drive in parking lots. You get the software to figure out on its own where cars are allowed to drive in parking lots.
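
As a toy illustration of that idea (purely a sketch of the approach, not anyone's actual pipeline), you could accumulate observed human-driven trajectories into a grid and treat well-travelled cells as drivable:

```python
# Toy sketch of "learning where cars drive" from logged trajectories instead
# of hand-coding lot rules. Grid size, threshold, and data are illustrative.
import numpy as np

GRID = (50, 50)            # 50x50 cells covering a hypothetical lot
counts = np.zeros(GRID)

def observe(trajectory):
    """trajectory: list of (row, col) cells a human-driven car passed through."""
    for r, c in trajectory:
        counts[r, c] += 1

# Feed in logged drives (here, fake ones all along one lane).
for _ in range(200):
    observe([(25, c) for c in range(50)])

drivable = counts >= 20    # cells seen often enough are treated as drivable
print(drivable[25, 10], drivable[10, 10])   # True False
```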

That's how it'll work for parking lots, and that's how it'll work for navigating the street that I live on. There are idiosyncrasies on that very street, some of which are somewhat general to suburban developments in the area of the country I live in, and some of which are specific to two particular intersections (on one of which I witnessed a crash caused by a driver who had just moved to the area and didn't know about the "rules" of how to handle that intersection, and on the other of which I have avoided a crash on multiple occasions by following the "local rules" that completely contradict the vehicle code).

Autonomous driving is hard. Many companies are struggling now just dealing with following the vehicle code using lidar sensors. If they'll ever be usable in large portions of the streets of the world, they're going to have to have human-level intelligence. Or more likely, superhuman-level intelligence. (Domain-specific intelligence will suffice, I hope. But it'll be superhuman domain-specific intelligence.)

Go is hard too.

The robotaxi (Uber) will drop you off anywhere on the roads it is allowed to drop you off. They don't go into parking lots very often today.

No, the street rules are not the same as the parking lot rules. The street rules are written down in a law book and must be encoded in the signs and road according to written standards. A parking lot can have signs saying "This space reserved for employee of the month" or anything else desired, with no standards.

These maps will not be expensive at all. For outdoor lots, it will be a matter of taking an aerial photo of the lot and drawing some lines on it saying where the spaces are, where the lanes are and what direction they go, where the pick-up/drop-off spaces are, and perhaps a few other things. It would probably take the owner of the lot 10-20 minutes, and a reviewer a couple of minutes to confirm, if there are no strange rules. The cars will still read the lines marking the spaces.

But probably less. If you imagine a world of cars that can figure out a lot on their own, well, then a server can certainly do that very well from the aerial photo, so the owner takes perhaps 20 seconds to confirm or modify that -- and say this is the region for valet cars, here are the pick-up spots, here's the direction to go.

Hardly prohibitive. The better the software gets, the easier it is.

Yes, robotaxis will need permission -- and will have to pay to park. They won't need to do that to pick up and drop off customers though.

Naturally, you want the cars to be able to figure out as many of the local variations on streets and parking lots as they can. But until they can, it's not that expensive for a human to review the software analysis and tweak and confirm it.

The law cannot be written in code. Even just the statutes can't be written in code, and the law is much more than just the statutes.

What is the law, anyway? Can we agree that it should be defined in terms of legal positivism?

Knowing what rules you're allowed to break, what rules you're not allowed to break, and what rules you're required to break requires human-level intelligence.

Perhaps a perfect, human level of handling all those aspects of the rules and law requires human-level intelligence. But you don't need a perfect human level of performance here. You just need to stay within the law, and play reasonably well with others. Not perfectly well, not as well as a smart human would. Just well enough that they don't get too upset at you. And people are possibly cutting you some slack in the early days, too -- at least some people, and probably the law.

But more to the point, you can get human-level intelligence with maps, because a human reviews the map and figures out what "Reserved for employee of the month weekdays" means, or "No parking 3-6pm." And that can be turned into code.
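
For instance, a sign like "No parking 3-6pm" could be encoded by the human reviewer into something machine-readable along these lines (an invented encoding, purely for illustration):

```python
# Illustrative only: one way a reviewer might encode a parking sign into a
# machine-readable restriction. Field names and format are made up.
from datetime import time

no_parking_3_to_6 = {
    "space_ids": ["A12"],
    "restriction": "no_parking",
    "days": ["Mon", "Tue", "Wed", "Thu", "Fri"],
    "start": time(15, 0),
    "end": time(18, 0),
}

def may_park(rule, day, now):
    """Return False if the rule forbids parking at this day/time."""
    if day in rule["days"] and rule["start"] <= now < rule["end"]:
        return False
    return True

print(may_park(no_parking_3_to_6, "Tue", time(16, 30)))  # False
print(may_park(no_parking_3_to_6, "Sat", time(16, 30)))  # True
```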

You do agree that dealing with parking rules and dealing with driving rules are completely different things, right?

No, you don't just need to stay within the law, but depending on what you mean by staying within the law (the actual law has numerous not-well-defined exceptions), it's not even clear if it's possible to stay within the law.

Necessity is a defense to violating a statute. When you say that an autonomous car should "just stay within the law," are you saying that it should, or should not, sometimes violate statutes based on necessity?

Presumably you agree that it's okay for an autonomous car to violate a statute sometimes. When those times are is very complicated, and cannot be hand coded into computer language.

Yes, I am very familiar with this and have written articles about speeding, 4-way stops, and many other things. Better to say "stay within the rules."

Some day, the vehicle code should be changed to "stay safe, and don't unfairly get in others' way." Or almost that simple. But we're a long way away.

I don't think you grasp the problem if you think I'm talking about speeding (which is almost never necessary, and is almost never legal outside of emergency vehicles).

"Stay within the law" is not well-defined, and unless you are defining "the law" as a pragmatic concept, it's terrible advice.

Fortunately, no serious player in the robocar industry is taking such a simplistic approach to things.

Speeding is just one of many things I've covered (robocars should be allowed to speed when humans are doing it, to increase public safety) but of course it's much more than that.

Actually, many teams are still wrestling with how to deal with driving like a human, including going outside the bounds of the law from time to time. It's easy for a human to play loose with the law, the consequences are usually zero. If a company decides to break the law in the same way it's usually OK, until something happens. If there's an accident at the same time as a conscious decision to break the law it could be ruin.

The point, which you seem to be ignoring, is that "the law" does not equal "the statutes."

Edit: Nevermind. Whatever.

I gave a very simple example, by the way. The statutes say that if you're at a stop sign, you should yield to vehicles already in the intersection. The statutes define the intersection as including everything past the stop sign. So when you're at a four-way stop, and the other cars have crept into the intersection slightly, you are supposed to yield to them.

Of course, no one follows this statute. Moreover, you'd probably have a defense to not following it, if any idiot cop ever tried to charge you with violating it.

Should autonomous cars follow it?

Of course the answer is no, as Waymo (then Google) found out. Instead, what you should do is use machine learning to figure out how people handle four-way stops in practice, and act like them.

That's just one example, that was solved early on. There are numerous others that need to be solved before we can have unsupervised autonomous vehicles.

The first time I used it was at a gas station, and it promptly hit the nearest gas pump, scratching the front of the car badly... Just a parlor trick that doesn't work yet.
Released too early, I think.

Um, gas station?

Are you going to ask Tesla to reimburse you for the damages?

If you do, I'd be interested in how they respond.

Only a very few people (not me) have had crashes with smart summon. Tesla says it's the fault of the operator, who has to have a finger on the go switch to make it drive, but the reality is that the whole point of smart summon is that you are some distance from the car, and you can't see all angles of it. It often is doing stuff on the side you can't see, and you can't tell how close it is to hitting something over there. You could walk over to your car to see, but that kinda defeats the point.

Harry says his car hit the gas pump and got scratched.

When did Tesla say it's the fault of the operator? Was this online, in an interview you did with them (presumably on the record?), or something else?

I don't think the "go" button is enough. In addition to not being able to see the angles, you don't know which way it's going to go.

How long is the delay between letting go of the button and stopping?

I'm a supporter of Tesla and a Model S owner for 3 years but I have to agree that the smart summon at this point has limited functionality. What it can do is amazing BUT I would NOT risk denting my car, someone else's car, person, pet, what have you. Parking lots are just too hazardous. What I don't agree with is that it is Tesla's responsibility to judge the overall stupidity of the public. It is the owner's responsibility to keep their property and others safe.

The point of the smart summon beta is to feed data to its deep learning AI to improve its self driving abilities. Gotta walk before you can run. The parking lot is the perfect place to start.

Sorry to hear it didn’t work for you. I tried it twice and it worked both times. Not terribly useful right now, but you have to start somewhere. What FSD is building up to is getting a car across the country, from one parking space to another, without human intervention. The highway part is pretty much done, parking is next, then stop signs and traffic lights and the rest of city driving.

Smart Summon is the parking part, and releasing it is part of the timeline. If you wait too long, the whole thing will be delayed. It’s a beta, it’s not perfect, but plenty of people find it works quite well. It has been used millions of times, with only a handful of minor accidents. Plus, Tesla collects a lot of valuable data. It is Elon’s modus operandi to get things out early and iterate, for better or worse.

Mostly better, in the long run, judging from past experience. Remember all the crashes at SpaceX leading up to rocket landings? That went pretty well in the end.

My 2 cents

On what do you base your claim that the highway is pretty much done? I have Autopilot and use it regularly on the highway, and would say that the highway is not even 1% done. Where are you getting your numbers? Pretty much done would mean the typical driver never saw a need to intervene in around 40 years of driving, maybe 60 years. Have you driven yours for 40 years without needing to take over?

What I meant is that Navigate on Autopilot is able to drive the car from entrance to exit on highways, with no human intervention, most times.

You can’t be serious about that 40/60 year standard. It is obviously impossible to achieve, definitely not by humans. Even the very best human drivers get this wrong much more often, getting lost or into an accident.

You could argue how far on the way to “done” “most times” really is, but 1% seems way low. “Almost” is much more reasonable, in my opinion.

The average human driver has a tiny accident about every 10 years of driving (small fender benders not reported to police). They have one reported to the police about every 50 years of driving. Many people never have such an accident in their lives. I have driven for over 40 years without such an accident, and this is not unusual.

However, the interval for the highway is much longer, since the rate of accidents on the highway is, per mile, about 1/3rd of the rate on city streets. Since Navigate on Autopilot is only for the highway, and accidents on the highway tend to be of the more serious kind that get reported to police, that suggests a product like that would need a human takeover perhaps only once every 150 years of driving to equal human performance.
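
Roughly where those figures come from, assuming the average driver covers something like 12,000 miles a year (a back-of-the-envelope assumption, not an official statistic):

```python
# Back-of-the-envelope numbers behind the comment above. Miles per year is an
# assumed figure; US averages are commonly quoted in the 10,000-13,500 range.
miles_per_year = 12_000
years_per_police_reported_accident = 50

miles_per_accident = miles_per_year * years_per_police_reported_accident
print(miles_per_accident)   # 600000 -- i.e. on the order of 500,000 miles

# Highway accidents happen at roughly 1/3 the per-mile rate of city streets,
# so a highway-only system would need to go ~3x as long between takeovers.
highway_years = years_per_police_reported_accident * 3
print(highway_years)        # 150 -- roughly 150 years of driving
```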

So you're right, I am not serious about 40 years -- it's much longer.

But why do you think it's impossible to achieve? Waymo has done it, for example. It is simply that Tesla is about 1% of the way there.

There is a huge difference between “a typical driver sees the need to intervene” (from your standard) and “accident reported to police”. Surely, requiring 40 years between the former is stricter than between the latter. By so much that the comparison is meaningless.

Generally, "need to intervene" means that without intervention, some sort of safety incident would have occurred, in particular either a "contact" or a high risk of a contact. (That could use some better defining.) So yes, you can compare rates of needing to intervene (where intervening prevented a contact) with rates of actual accidents. We have information on the rates of different types of accidents for humans, and I have stated some of them. Generally, any departure from the highway in highway driving at speed will count as an accident, though not all are of the sort reported to police. Temporary lane departures may not be accidents, though most teams consider an unplanned lane departure a serious issue.

Fortunately, due to the naturalistic driving studies there is data on the frequency of other safety events like lane departures that caused no harm for humans to compare with.

I do often intervene in my Tesla when it probably would not have resulted in an accident. I think I've done 2 interventions this year that would have otherwise been accidents, though of course I can't be sure. Both were in merges, one with nav-on-autopilot on. The rest of my interventions have been things like getting much too close to another vehicle or object, and in one case driving into tree branches in an off ramp.

Not that it's an excuse, but I'm curious if you would have been at fault for one or both of those two avoided accidents, or if they were situations where the other driver was in the wrong.

In one case the lane to the right of me was ending, and the driver beside me was not slowing down to merge in behind me. Instead I hit the brakes and let him in, probably what he wanted, though in that case I would be in the right, I think. You do not have ROW if you are merging onto a highway, but I am less clear who has it when a lane that was not an on-ramp is just vanishing on the highway.

The other situation was trying to merge onto a highway. I am not sure it would have been an accident but I suspect I would have been at fault if there had been any impact since I was the one merging on. It was my first attempt at nav-on-autopilot doing a freeway to freeway transfer.

I just realized that we are, in one thread, discussing whether or not the law can be perfectly written into code, and in this thread you are saying that you're not sure who would have been at fault in two real-life scenarios.

I guess your argument would be that while you don't know the law, it is knowable.

In any case, I'd cover both of those situations not by a rule of law, but by a rule of ethics: Try not to get into crashes.

That, plus a good neural network that can predict how other cars (and other "obstacles") will actually behave (regardless of how they're supposed to behave), plus the sensors to recognize where the other cars (and other "obstacles") are, plus, I guess, a basic knowledge of the physics of driving, is what is needed to drive in those two scenarios. (Unfortunately there are many other scenarios that are more complicated, and "a good neural network that can predict how other cars will actually behave" is easier to say than it is to build.)

Who has the right of way just doesn't matter. (You've been successfully driving for many years and you don't even know yourself.)

In the scenarios I described, I intervened, there was no accident, and I don't know for sure if there would have been one. I can tell you who would have been at fault if there had been one, and it would have been the other car in one case. In the case of me merging on, what I was afraid of is less certain. The car might have merged where there was not room -- its fault. It might have slammed hard on the brakes (as I have often seen it do in merge areas), which could cause a crash that was not technically its fault but would be, morally, in the view of most.

Neural networks don't understand the physics. They understand the patterns of what they have seen.

So I do know myself (though it varies from place to place) but what I don't know is what the geometry of any impact would have been.

You said above, "I would be in the right, I think" and "I am less clear who has [the right of way] when a lane is just vanishing on the highway that was not an on-ramp." That doesn't sound like you know who would have been at fault. It sounds like you don't know who had the right of way.

You also say "You do not have ROW if you are merging onto a highway." You seem confident about that, but I'm not sure you're right about it. I think it depends on the state. It also depends on the judge, probably. See https://www.pe.com/2011/01/03/who-has-right-of-way-when-merging-onto-freeway/ . Even the experts can't agree on who has the right of way. The DMV and the CHP don't agree, and that's just in one state. (What is your definition of "highway" for which you made that comment, anyway? How would you encode the distinction between the two scenarios?)

The only merge situation that is fairly clear and universal is when one of the roads has a "yield" sign.

(Neural networks could implicitly understand physics, but I didn't suggest doing that. The basic physics of driving are pretty easy to code by hand, though you'd probably benefit from neural networks to learn how to drive in various different environments, like ice, dirt, etc., as the physics of that sort of stuff gets complicated. I believe Tesla has said that they use neural networks to figure out things like how to navigate curves and how hard to hit the brakes. I'd be uncomfortable using neural networks for things like crash avoidance without a backup system that was hand-coded, though. I could be wrong about that, though. Eventually, I will be. Maybe many years from now, maybe much much sooner.)
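
A toy example of the kind of hand-coded backstop meant here (illustrative numbers, not any vendor's actual system): a time-to-collision check that can veto whatever the learned planner proposes.

```python
# Toy hand-coded safety backstop: if time-to-collision to the lead obstacle
# drops below a threshold, brake regardless of what the learned planner
# wants. Thresholds and deceleration values are illustrative, not tuned.
def time_to_collision(gap_m: float, closing_speed_mps: float) -> float:
    if closing_speed_mps <= 0:          # not closing on the obstacle
        return float("inf")
    return gap_m / closing_speed_mps

def safe_command(planned_accel: float, gap_m: float, closing_speed_mps: float,
                 ttc_threshold_s: float = 2.0) -> float:
    """Pass through the planner's acceleration unless TTC is critical."""
    if time_to_collision(gap_m, closing_speed_mps) < ttc_threshold_s:
        return -6.0                     # hard braking, m/s^2
    return planned_accel

print(safe_command(planned_accel=0.5, gap_m=15.0, closing_speed_mps=10.0))  # -6.0
print(safe_command(planned_accel=0.5, gap_m=60.0, closing_speed_mps=10.0))  # 0.5
```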

(Physics is patterns. Actually, all knowledge is patterns. In my opinion, as an empiricist, anyway.)

This all makes sense, but what you said originally is: Years between interventions should be 40 because years between accidents is 40. It would have been more correct had you said “should be 40 times X”, where X is the fraction of interventions that actually prevent a police reported accident. Sounds like I’m splitting hairs, but I don’t think so, because I believe X is quite small.

To match human safety levels, you do indeed need to get necessary interventions (those needed to prevent a police reported accident) down to about one in 500,000 miles, which is about 50 years of driving for the average human.

Most people currently believe they want to get the robocars to be a fair bit better than the average driver, however.

In addition, as noted, the per mile accident rate on highways is about 1/3rd that of city streets. However, my intuition is that more highway accidents are police reported so that will balance it out somewhat.

Tesla has not released any data on their frequency of necessary interventions. Or of interventions at all. They have released a report on "accidents" while using Autopilot without saying what an accident is. I strongly suspect they are using "airbag deployment," which is rarer than police-reported. And of course, that's the record for supervised Autopilot, which would be vastly better than what unsupervised Autopilot would do.

Anyway, while these are round figures, they suggest Tesla isn't even remotely close as yet.

You are right about what’s needed. You are also right that Tesla does not provide data on how close they are. NoA does not claim to be self-driving, anyway. A rigorous analysis needs to be done, and observations by you or me cannot provide even a rough estimate. I am pretty sure Tesla is doing such analysis internally to measure progress, but they won’t publish them until they are good enough for regulatory submission. How far along they are towards that goal, we don’t know, but I would be very surprised if it was as low as 1%. Tesla knows, and they have been saying end of 2020, but we are probably talking Elon Time, there.

Tesla claims that this year they will have "feature complete full self driving" that needs supervision on city streets, and that next year it will be safe to use it without supervision, and sometime after that, that regulators would approve such operation.

Right, use without supervision by end of 2020, ready for regulatory submission, is what I also understood.

As self-driving systems get better, “need to intervene” becomes a bad measure of success. For example, if the supervising driver is my spouse, I myself flunk this test after a few minutes, tops. More to the point, when the autopilot is a better driver than the supervisor, “need to intervene” becomes a measure of the supervisor’s ability, not the autopilot’s. So, we need a better standard, such as “mistakes made per mile”, with a suitable definition of mistake that treats man and machine fairly.

When I say, "need to intervene" I mean it in the sense the advanced teams use -- they intervene any time things look too risky, then they replay it back in simulator without the intervention to see what would happen, and if they truly needed to intervene.

If you really meant it that way, then it doesn’t support your argument, because neither of us has the means to evaluate our “need to intervene” that way.

That is the metric Waymo uses internally. They used to report it, they stopped because nobody else was reporting it. It is often talked about as the proper metric, since a good team tells safety drivers to intervene if they suspect a problem, and you don't want to punish teams for being cautious like that and saying they have lots of interventions.

Need to intervene should mean need to intervene, which is what I say and many others do too. We are pushing for this to be what is reported. Trouble is, not everybody has simulation as good as Waymo and the big ones. And for Teslas (which are not self driving, even the theoretical "full self driving" coming out) they definitely don't at present.

Rereading your original comment (“Pretty much done would mean the typical driver never saw a need to intervene in around 40 years of driving, maybe 60 years. Have you driven yours for 40 years without needing to take over?”), there is no mention of the more elaborate measures that Waymo may be using, and in fact neither you nor I would be in a position to judge which of our interventions qualify as “needed” under those rules. So, clearly, the conclusion you are drawing is unwarranted.

We don't have enough data to score Tesla, but in examining the performance of the vehicle, and what it does show us, we can be pretty confident the score is not anywhere close to the score we need to see. You can see that from accident reports, from the perception visualizations, and from the people who have hacked into the Tesla to pull out more detail about what its perception system is seeing.

If Tesla were close to what is needed, then we could have a debate of whether they are over or under whatever threshold we wish to name. But they are not anywhere near close. What evidence do you have that they are close? What evidence have they given? The burden of proof is on them.

I thought you had concluded that NoA was no more than 1% complete because we are that far from 40 years without need of the “typical driver”, or myself, to intervene. Forgive me if I misunderstood. I think that particular conclusion is not warranted for the reasons I explained.

Whether highway driving is “pretty much done” is debatable, but I feel it is working quite well overall, and the work that remains is around edge cases. Maybe you’ll allow me to say it is “much more done” than parking and city driving?

It's complex. If we learn that Teslas have accidents 100 times more often than people would, you could call that 1% of the way there. In terms of safety record it is. In terms of time, it's better, more like 10% of the way there because you keep improving faster and faster.
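
The "further along in time than in safety" point follows if you assume roughly exponential improvement; for example, with an assumed (illustrative) halving of the error rate every year:

```python
# If the error rate is 100x too high but halves every year (an assumed,
# illustrative improvement rate), parity is only ~6.6 halvings away.
import math

gap = 100                          # accident rate relative to human drivers
halvings_needed = math.log2(gap)
print(round(halvings_needed, 1))   # 6.6 -- years to parity at one halving/year

# So being at "1%" of the target safety level can still mean being well past
# 1% of the way there in calendar time.
```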

But the reality is we don't know what Tesla's actual safety number is. Tesla itself may not know. I am just pretty confident that a Tesla could not come anywhere near to driving 500,000 miles on the highway without supervision without having an accident of the sort the police get involved in. I don't think it's close. Elon claims otherwise.

The release of Smart Summon also allowed Tesla to recognize $30 million of deferred revenue on its latest quarterly report, which helped it show an unexpected profit that sent the stock up significantly.

$30 million is such a small fraction of the profit that it wouldn’t have made a difference in the stock price rise. I think a much more likely motivation is to press forward on the way to FSD, to release a beta so it can be improved. It can be very hard to improve a product when it isn’t in use.

$30 million is 21% of the profit reported for the quarter.

Exactly. Small compared with the range of expectations.

In your driving on the highway (how many miles?), you've had between 0 and 2 times (you're not sure) that your interventions prevented a crash.

Somehow you've extrapolated from this that Tesla is about 1% of the way toward completion of autonomous driving on the highway.

But that 1% is "in terms of safety record" and not in terms of time. In terms of time, I'm not sure you've released a number (is it 10%?), but you do feel that number can be derived from the number of necessary interventions per mile. In particular, you feel, based on the current number of necessary interventions per mile using the production version of the software as of the time of those maybe-necessary interventions (and the latest version of the hardware?), it's not possible that Tesla could be ready for highway driving without supervision by the end of 2020, as Elon Musk (who is admittedly and notoriously bad at estimating timelines) has claimed.

Is this correct?

Tesla has fallen for a mistake that many in the field fall for, which is imagining that once you get it to drive 1,000 miles it's not much more work to get it to drive 100,000 or a million. Turns out it's "easy" to get a basic demo going, as in a large number of competent teams can get there in a year or two of work. And Waymo was at a level of highway performance superior to Tesla many years ago. And they have been doing the hard slogging from there to actual production, and are the only ones close to it.

Yes, Tesla is trying some different approaches, so perhaps one of those will produce a new and surprising result. But it won't be because Tesla has better knowledge of neural networks than Google has (as they would freely admit) or because Tesla has a better processor (they don't.) Tesla vehicles drive more miles but gather much less data in doing so.

As I've said, to be deployable, then most drivers should be finding they don't need to intervene in a whole lifetime of driving. Of course, decent Autopilot has only been out a short time, so reports of needing to intervene should be extremely rare. They are not. 20% of Tesla Model 3 drivers surveyed report that Autopilot caused a dangerous situation for them, and almost all of them had used it for well under a year of driving.

So yes, that says "they are nowhere close." The claim that they are makes everybody else in the industry not just doubtful, but literally laugh. They all laugh because most of them have been there, where they had something that could drive for a while on the highway, and imagined they were close.

I'm not sure where you are getting any of that from. Musk is not making the mistake of thinking that a 1,000 mile demo is meaningful. His mistake is likely the one that a lot of engineers make: He is likely assuming that his employees are much better engineers than they actually are. 100 Elon Musks with Tesla's current code and data could deliver a truly autonomous car in a year. Whether or not Tesla's current engineering team can do it, I don't know enough to know.

Yes, the Autopilot that is released today is not deployable as a car that can drive itself without supervision. No one is claiming that it is. Your claim seems to be that you can extrapolate from the number of necessary interventions needed over the past year to the number of interventions that will be needed in 2021, for a product that is released in late 2020. I'm not sure why. How many interventions are needed today is, quite simply, irrelevant, except to the extent it helps Tesla sell more cars and gather more data.

Would love Tesla to show me more. Current beta testers are under NDA, however. Most teams are pretty secretive and only give out hints of where they are. Teams that want to hype how good they are sometimes reveal more, and if they don't reveal more, it makes me not believe they have it until they show it.

Maybe they have, in secret, gone far beyond everybody else, including teams with more skills, more money and more time. Maybe. But we should presume they have not until we see evidence. The only evidence we get is how the new releases of Autopilot do.

All I'm saying is that we don't know. Tesla seems to have all the ingredients to make a self-driving car within a few years, but it's not clear whether or not they'll be able to execute on it.

Other teams may have more skills, more money, and more time than Tesla, but Tesla has the upper hand on the most important ingredient needed for building an AI car: Billions of real world miles of actual driving. They only need a tiny fraction of the time or money of most other companies, because they have a large and growing fleet of cars owned and operated by individuals that spend their own time and money for the benefit of Tesla.

One huge question mark is whether or not this huge advantage that Tesla has will make up for its deficit in skills. I think it will, though. Both AI software itself and the hardware to run it on are quickly advancing. Tesla will almost surely not be the company to initially develop the technologies, but the techniques will relatively quickly leak out from the companies that are developing them and become well-known, standard techniques. Nearly all software technologies, outside of military technologies, work that way. Hiring skilled engineers is much easier and can be done much more quickly than building three gigafactories and a fanbase of hundreds of thousands of eager data collectors. Once you know the techniques, the AI itself does most of the actual coding. Welcome to Software 2.0.

Tesla has bet it can be done without LIDAR or HD maps. Obviously this has been argued a lot, on this site and others. Most people think you are going to need those; Tesla hopes not. If they have guessed wrong, it slows them down a lot. They can add the maps. They can't add the LIDAR to the old cars.

The other teams are able to reconfigure their prototype cars at will as they learn, to change sensor brands, positioning etc. They can add cleaning to their sensors. They can do anything, because they didn't lock down their hardware several years ago.

Tesla also has locked down their computing hardware, and can replace it perhaps every 3 years, and has already learned they will have to retrofit the old cars. They may learn that again. That is more doable but they will have resistance to it.

Tesla has more miles, but do they have more labeled training data than Cruise, Zoox, Waymo or others? Not clear.

I'm not sure you can call it them making a bet. They didn't really have a choice. Virtually no one would have bought a Tesla with a lidar sensor on it. And HD maps aren't really useful if you don't have lidar. Moreover, Tesla doesn't have to produce a level 4 or level 5 car to be a success. Their level 2 car is already one of, if not the, best of its kind. They're already preventing crashes and probably saving lives today. Tesla's bet was that people would buy overpriced electric vehicles if those vehicles had decent range and could perform well. They won that bet.

I do think that if level 4/5 autonomy becomes a reality, Tesla will have it within 1-3 years after whoever gets it first. The part I'm not sure about is how far away we are from that.

Tesla could add some form of lidar to the old cars. I'm not sure if they'd want to, but they could. I think they'd benefit more from adding more cameras than from adding lidar.

Yes, the other teams can do "anything." Anything except manufacture 100,000 vehicles a quarter, that is. How many cars does Waymo have? 600 is what the stories from March say. And I see that Waymo is moving out of Austin.

Obviously Tesla's approach is drastically different from most, or maybe even all, other companies. That's precisely why I think they have a shot at being one of the first companies to succeed.

Tesla has more miles, by orders of magnitude. As far as having "more labeled training data," I think that question is too narrow-minded. There are lots of things that you can do with miles beyond gathering training data. Moreover, the sheer quantity of labeled training data is not the only factor. I think it's guaranteed that Tesla has more labeled training data on the types of situations it has chosen to capture training data on. I think it's guaranteed that Tesla has more training data labeled with "this car is about to cut us off" vs. "this car is not about to cut us off." What other company has the ability to ask its fleet to capture instances of rare situation X and come back with thousands of results in a few days? Maybe some other car companies can do this. Not ones that have lidar sensors on them, though.

What other company can propose feature X, upload feature X to run in ghost mode on its fleet of cars, and a couple weeks later have hard data on how often feature X correctly predicts what is going to happen? Again, none, except maybe some car companies that have large fleets of cars similar to Tesla's (one of those similarities being a lack of lidar sensors).
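
A bare-bones sketch of what that shadow-mode scoring amounts to (invented data and field names; nothing here reflects Tesla's actual telemetry):

```python
# Shadow-mode ("ghost mode") evaluation in miniature: the candidate feature
# makes predictions that are logged but never acted on, then compared with
# what actually happened. Data and field names are invented.
events = [
    {"predicted_cut_in": True,  "cut_in_happened": True},
    {"predicted_cut_in": True,  "cut_in_happened": False},
    {"predicted_cut_in": False, "cut_in_happened": False},
    {"predicted_cut_in": False, "cut_in_happened": True},
]

true_pos  = sum(e["predicted_cut_in"] and e["cut_in_happened"] for e in events)
false_pos = sum(e["predicted_cut_in"] and not e["cut_in_happened"] for e in events)
false_neg = sum(not e["predicted_cut_in"] and e["cut_in_happened"] for e in events)

precision = true_pos / (true_pos + false_pos)
recall    = true_pos / (true_pos + false_neg)
print(precision, recall)   # 0.5 0.5
```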

Without a doubt Tesla will have to retrofit older cars with newer computing hardware multiple times. So will all the teams. That's just the nature of how quickly computing hardware (especially AI computing hardware) is improving.

I very much do question the way Tesla is marketing "FSD." I think they have a big lawsuit on their hands over it. On the other hand, there's probably enough fine print to save them from losing too big, and if they figure out how to build a driverless car, they'll have so much revenue on their hands that the lawsuit will be no more than a blip.

Yes, they certainly can't put a LIDAR on production Teslas, and neither can anybody else.

The plan everybody else has taken is to build a LIDAR car that they will make in the future. Tesla could do that. They could also gather computer vision data from all the Teslas out there. Finally, they could stick LIDARs on a small fraction of cars belonging to heavy Autopilot users, to gather full data and test in shadow mode anything they want to build.

I don't dispute that Tesla can do some useful things with their fleet. If the right answer is to build a car that uses 8 cameras and a radar, they have some advantages from that fleet.

If that's not the right answer, they have advantages at doing the wrong thing.

What would Tesla be doing differently if they wanted to build a LIDAR car that they will make in the future?

I guess they wouldn't be telling people who buy their cars today that they have all the hardware they need, but then, maybe they would.

I don't know if 8 cameras will be enough, and I especially don't know if their current 8 cameras are positioned perfectly. (I guess 2 cameras, or even just 1 camera, would be enough if you mounted them in some way that they'd have the same degrees of freedom as a human head, and also gave the car hands so that it could do things like block out direct sunlight; but with all the cameras being fixed, maybe 8 isn't enough.) I do think in terms of bang for the buck that you get more out of multiple cameras than you do out of lidar. As long as you have enough processing power to process multiple cameras simultaneously, anyway, and it seems like they either do now with the FSD hardware, or will within the next few years with the next hardware upgrade. I used to be skeptical about this, but I've seen lots of evidence that we have the technology to do just about everything with multiple cameras that we can do with lidar. (And the exception that makes me say "just about" everything is things like see in the dark, which aren't necessary for driving.)

I'm not sure what you mean by "they have advantages at doing the wrong thing." What they are doing is very much at least a part of what any self-driving car needs to do.

Lidar is, quite simply, a crutch. It can help you cover up for the fact that you have really crappy software, but once you have good software, it offers very little advantage when driving in situations where humans are capable of driving.

Every day that goes by that none of the companies trying to build lidar-based robocars release a fully autonomous vehicle, lidar becomes less useful. Lidar was much more useful two years ago. As hardware improves, and software improves, it becomes less and less useful. Eventually, it won't be necessary at all. The only real question is whether or not all the other parts of a self-driving car will be ready before or after we reach that point.

But hey, I'm glad to see that your opinion that Tesla is not even close to coming out with an autonomous vehicle is just a repetition of your belief that lidar is necessary, and not based on anything more significant.

But computer vision has only one leg, at least today.

The question of whether CV can do all that LIDAR does is not at all a settled question, or rather, it's fairly settled that it is not currently true, and the debate is over if, and when, it might become true. And over who it is that will make it true.

For Tesla to have a shot, it has to be soon, and to some degree, it has to be Tesla that figures it out, though it's not out of the question they could license it from others, though that negates their competitive advantage.

I am concerned that my Tesla has no way to clean its cameras, and about the blind spots they have. I get the warnings about how autopilot is degraded due to water on the cameras. Tolerable today but not in any robotaxi.

I think you're right that computer vision has only one leg today.

But here's the thing: I don't think you can build a self-driving car without it.

Yes, a Model S/3/X/Y doesn't make a good robotaxi. But a Model S/3/X/Y isn't being sold as a robotaxi (yes, Elon's dumb "appreciating asset" comments aside; I think most people know that's puffery). If Tesla solves self-driving except for not having rain wipers on all the cameras, they can come out with a Model T (buy the trademark from Ford) for their robotaxi model. Maybe they'll even put lidar on it, though I doubt it. (Maybe an Audi-style 140-degree forward-facing lidar? I guess you could put one in the rear too, if you're having trouble backing into things. The "model T" will no doubt be quite expensive. Any idea how much the Audi lidar sensors cost to make?)

If a Tesla Model S/3/X/Y only achieves level 3 autonomy in bad weather, and you have to buy a Model T for level 4/5, it'll be okay.

Or maybe level 4/5 is just not possible in the near future. Can Waymo even say it's working on something beyond level 3 if the car sometimes requires a remote driver to take over? I guess it can if it can safely come to a stop on its own? But what qualifies as safely coming to a stop? Is sitting in the road blocking traffic while waiting for a human to take over okay?

(I repeatedly say 4/5 because it's not clear to me how narrow of a range of driving conditions 4 can encompass. By 4/5 I mean 4 in a broad range of driving conditions -- enough to run a decent taxi service -- though not technically 5 since you're still going to have limitations. You might not cross certain jurisdictional borders. Some conditions that humans can drive in might be too extreme, though not many. You might not go off-road, except to follow official instructions or avoid an imminent crash, but you can handle most common pick-up/drop-off points even if they're located on private property and/or involve unpaved roads.)
