Why Tesla's Autopilot and Google's car are entirely different animals


In the buzz over the Tesla autopilot update, a lot of commentary has appeared comparing this Autopilot with Google's car effort and other attempts at what I would call a "real" robocar -- one that can operate unmanned, or with a passenger paying no attention to the road. We've seen claims that "Tesla has beaten Google to the punch" and other similar errors. While the Tesla release is a worthwhile step forward, the two should not be confused as being all that similar.

Tesla's autopilot isn't even particularly new. Several car makers have had similar products in their labs for several years, and some have released them to the public, at first in a "traffic jam assist" mode, but reportedly in full highway cruise mode outside the USA. The first companies to announce such products were Cadillac with "Super Cruise" and VW with its "Temporary Autopilot," but both delayed their releases until much later.

Remarkably, Honda showed off a car doing this sort of basic autopilot (without lane change) ten years ago, sold only in the UK. They decided to discontinue it, however.

That this was actually promoted as an active product ten years ago should give you some clue that it's very different from the bigger efforts.

These cruise products require constant human supervision. That goes back to cruise control itself. With regular cruise control, you could take your feet off the pedals, but you might have to intervene fairly often, either by using the speed-adjust buttons or by taking full control. Interventions could come several times a minute. Later, "Adaptive Cruise Control" arose; it still required you to steer and fully supervise, but would only rarely require intervention on the pedals on the highway. A few times an hour might be acceptable.

The new autopilot systems allow you to take your hands off the wheel but still demand full attention. Users report needing to intervene rarely on some highways, but frequently on other roads. Once again, the product is useful: if you only intervene once an hour, it might make your drive more relaxing.

Now look at what a car that drives without supervision has to do. Human drivers have an accident roughly every 2,500 to 6,000 hours, depending on which figures you believe. That's a minor accident, and it comes after around 10 to 20 years of driving. A fatal accident takes place about every 2,000,000 hours of driving -- around 10,000 years for the typical driver. (It's very good that this is much more than a lifetime.)

If a full robocar needs an intervention, that means it's going to have an accident, because there is nobody there to intervene. Just as with humans, most of the errors that would cause an accident are minor: running off the road, fender benders. Not every mistake that could cause a crash or a fatality actually causes one. Indeed, humans make mistakes that might cause a fatality far more often than every 2,000,000 hours, because we "get away" with many of them.

Even so, the difference is staggering. A cruise autopilot like Tesla and the others have made is a workable product if you have to correct it a few times an hour. A full robocar product is only workable if you would need to correct it once in decades, or even lifetimes, of driving. This is not a difference of degree, it is a difference of kind. It is why there is probably no evolutionary path from the cruise/autopilot systems based on existing ADAS technologies to a real robocar. Doing many thousands of times better will not be achieved by incremental improvement. It almost surely requires a radically different approach, and probably very different sensors.
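To put rough numbers on that difference of kind, here is a back-of-the-envelope calculation using the figures above. The "three corrections per hour" rate and the 4,000-hour minor-accident interval are illustrative assumptions within the ranges quoted, not measured values:

```python
# Back-of-the-envelope comparison: supervised cruise autopilots vs. the
# reliability an unsupervised robocar needs. All figures are rough.

cruise_hours_per_correction = 1 / 3      # assume ~3 corrections per hour
minor_accident_hours = 4_000             # human minor accident: every ~2,500-6,000 hours
fatal_accident_hours = 2_000_000         # human fatal accident: every ~2,000,000 hours

# An unsupervised car's "intervention" is an accident, so it must match
# human accident intervals, not human attention spans.
factor_minor = minor_accident_hours / cruise_hours_per_correction
factor_fatal = fatal_accident_hours / cruise_hours_per_correction

print(f"~{factor_minor:,.0f}x better to match the minor-accident rate")
print(f"~{factor_fatal:,.0f}x better to match the fatal-accident rate")
# → ~12,000x better to match the minor-accident rate
# → ~6,000,000x better to match the fatal-accident rate
```

Even with generous assumptions, the required improvement spans four to nearly seven orders of magnitude, which is why incremental refinement of ADAS is unlikely to close the gap.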

To top it all off, a full robocar doesn't just need to be this good, it needs a lot of other features and capabilities once you imagine it runs unmanned, with no human inside to help it at all.

The mistaken belief in an evolutionary path also explains why some people imagine robocars are many decades away. If you wanted evolutionary approaches to take you to 100,000x better, you would expect to wait a long time. When an entirely different approach is required, what you learn from the old approach doesn't help you predict how the other approaches -- including unknown ones -- will do.

It does teach you something. By being on the road, Tesla will encounter all sorts of interesting situations they didn't expect. They will use this data to train new generations of software that do better. They will learn things that help them make the revolutionary unmanned product they hope to build in the 2020s. This is a good thing. Google and others have also been out learning that, and soon more teams will.


Another difference is that autopilot does not threaten the private car ownership model that is the lifeblood of the auto industry. However, fully autonomous cars could be highly disruptive if they develop into Uber-style public transport systems, which, with higher passenger utilization, will need far fewer vehicles to move the same number of people.

There may end up being plenty of politics behind the scenes as the different technologies evolve into two different transport models fighting over the same customer base.

The one difference between Tesla's autopilot and most other cars' lane-keeping systems is that Tesla's is a learning system. It sends road data and GPS data back to Tesla so that the entire fleet can learn the roads better, and it also learns from driver input (when a driver overrides the autopilot, the autopilot will learn that part of the road).

I would say Tesla's system is somewhere between true self-driving cars and the advanced lane-keeping technology developed by other manufacturers. And with constant OTA updates, and Elon's stated goal of autonomous driving, I see the system getting closer to true self-driving in the future.

I have read statements that the system learns, but I would like to see a more explicit description of this from Tesla. I believe it is likely that what it means is that indeed, data from cars with the system is uploaded, where it is used to train and improve the software, and after those improvements are studied and tested, new revisions of the software make use of them.

I would be extremely surprised if data simply flows from one car to others in an automatic way, without testing by the team, which is what some imply this means.

The point is that learning from vehicles on the road is something every project does -- there is probably nothing special about what Tesla does here. But I would be interested to hear if that's incorrect.

As far as I know, the Google and Tesla cars both have a PC-like device on board with a bunch of GPUs (like the graphics card of a PC).

This isn't used for doing any visual stuff directly; it's used for machine learning (which does visual stuff, but other things as well).

Here is a talk from a conference this year by Nvidia:


I haven't seen any other manufacturer mention that they are doing the same.

So I think Google and Tesla are actually in the same category (with Tesla being first to market, but probably less far along technology-wise).

And it's the other manufacturers, with their advanced cruise control/lane keeping/whatever, who are in the other category.

Deep-learning-based visual systems are getting better, and I expect that to continue -- some day they may even become useful for unsupervised driving. But that doesn't mean your statement is correct. Tesla is entirely doing supervised driving here, and Google never has been (other than for testing purposes). Tesla does hope to move to unsupervised driving, I agree, but that's a different project.

What I mean is: they are using the same underlying technology.

This puts them in the same category in my book.

I'm also 100% certain that the goal of Tesla is unsupervised driving as well.

They are just using a more incremental approach.

Anyway, maybe we can at least agree to disagree. :-)

What do you mean by "the underlying technology" they are both using? Obviously there is some tech in common -- both cars have wheels and engines and brakes and so on. But unless you have some specific citation otherwise, let me assure you the self-drive systems use very different technologies.

You might not agree this is the way to get there, but they are going to try anyway:


As I have written several times before, there is not a good evolutionary path from ADAS to full self-driving, but the one virtue of the ADAS based cruise products is that they gather data about what goes on when on the road, and that's very useful for any project.

Tesla's early release of their product, with the cars reporting back their sensor logs, is a great thing for them. One of the biggest challenges -- I sometimes think the biggest current challenge -- in making a robocar is testing and QA. This data will help, though it will not provide all the data you want, which demands a fancier sensor suite.

Convolutional neural networks are improving in capability at a surprising rate, which may be what is convincing Tesla to try things without a LIDAR. But I believe that leaving LIDAR (or other sensors) out of your first car just to save money does not make sense. In 10 years, the time to save money will come.

I have driven a Tesla with autopilot, and now several other cars that also have some lane-keeping features. I disagree with your claim that other car companies are essentially offering the same thing as Tesla. I don't see that at all. The other cars I drove were not in the same league as the Tesla in terms of accuracy, smoothness, or ability to let me relax a bit more when doing highway driving.

If you watch the many videos available on YouTube, you will see that this car, while not yet completely allowing you to read the paper while it drives, requires much less continual attention than other systems. This kind of trust should not necessarily be given by default; it comes from driving the car over the same route day in and day out in autopilot mode, in varying conditions, and seeing how it in fact does.

I used to think, like you, that LIDAR would be a requirement, but now I am not so sure. I do think radar is a requirement, but not LIDAR. It may be moot, since there is now a low-cost solid-state LIDAR available (see: http://laautoshow.vporoom.com/QuanergySystems/index.php?s=35910&item=122534), so this can be added to the sensor suite. But I now think a vision-based system will handle almost all the work of driving. I know this is a vendor presentation, but I suggest you watch this video from Mobileye, who is really leading this space and who I think will be the big player in self-driving cars:


The Teslas only have one of these cameras, as well as a radar in front and some short-range sonic sensors... clearly not enough for full autonomy, but for allowing me to relax on the PA turnpike for very long stretches of road, there is no better car right now. I am hoping Cadillac and others change that next year, since Teslas are darn expensive.

For full autonomy you may need as many as 6 or more cameras and several radars, and maybe even your LIDAR too. But I like Tesla's approach of minimal (but ever-improving) mapping + real-time decision making versus Google's you-must-know-every-inch-of-the-world approach to autonomy.

Yesterday I read a blog post (probably linked from Hacker News) by a Tesla owner who commuted daily and found that the Autopilot improved over the weeks he drove using it in its navigation of his route: at first it wanted to take a number of exits and had to be overridden, but it gradually got better, and now runs the highway almost flawlessly.

And I would like more information on it. I would not be surprised if Tesla's team is learning from the data they get from cars, and from customer feedback, and tuning the system in response, though I would be a bit surprised if they would send out an update that changed the car's behaviour without a fair amount of testing (i.e. not just a week).

I would be even more surprised if they set it up to "learn" and alter the heuristics on its own, uploading that to other cars without evaluation and testing by Tesla's team.

In the middle, one can imagine a car learning from its driver and adapting itself in certain ways: noticing that the driver normally takes a specific exit, or likes to take corners a certain way, or something like that. However, that would not be propagated to other cars.

I do expect deep learning and similar technologies to play a role, but I expect them to use LIDAR data as well as images. And I remain very cautious about systems which work but you can't explain how they work, making it harder to figure out how to test them, and how to assure they still work after you retrain them.


I wonder what kind of research you did on this piece. The announcement from Musk of the 7.0 software that included the autopilot feature answers the questions that you are wondering about. In response to a question he states that the vehicles could improve slightly every day but that drivers may only notice the improvements on a weekly basis.

He also states explicitly that the intent of Tesla is to improve this feature to the point of full autonomy with some additional redundancy but no LIDAR (to the point of taking a nap while on your way to work). I'm sure you disagree, but your point loses a lot of validity with me when you didn't bother to listen to the full announcement and weren't even aware of the full capabilities of the system before declaring your opinion.

Musk does state a lot of outlandish goals (a Mars colony being one), but you must admit, from his track record so far, that he has put thought and calculation behind them. Many times his time horizons are off, but aren't they all? So knowing what you do now, what do you think of Musk's claim that he can do full autonomy with this system?

I have seen Tesla's announcement, which does not say it improves every day, and the specific quotes from Elon Musk I have seen say "the system is getting better with each passing week." Please point me to the statement about improvement every day. Improvement every day is a very tall order; it's more plausible if it means that improvements derive from learning on past data on a regular basis. To push out improvements a day after gathering the data, without QA, is something I don't think would be wise, and nobody else I have talked to thinks it is likely. I have not asked Elon Musk himself. And yes, he and I disagree on what role LIDAR should play.

As noted, it is in theory possible to make a working car with just cameras in the future. We know it's possible because humans do it, but getting to the human level of ability requires significant breakthroughs that most people judge to be years away. That prediction, of course, could be wrong.

Here is the announcement and subsequent press conference.


The following applies directly to our discussion:

25:21 - question about autonomous vs autopilot mode

29:50 - learning system, aggregated driving data.

34:55 - data updates and frequency

46:30 - full autonomy time frame

I think what is different about Tesla's approach is the large amount of aggregated data from the entire Tesla fleet. Even if one car does something stupid, the majority of cars will do what makes sense and is normal or best.
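As an illustration of that idea -- a purely hypothetical sketch, since Tesla has not published how its fleet learning actually works -- aggregating fleet reports with a robust statistic such as the median means a single misbehaving car cannot drag the fleet's estimate of a road segment:

```python
import statistics

def fleet_lane_offset(reports):
    """Combine lateral-offset reports (in metres) from many cars driving
    the same road segment. The median ignores a single outlier."""
    return statistics.median(reports)

# Nine cars track the lane normally; one swerves by 1.5 m.
reports = [0.02, -0.01, 0.03, 0.00, -0.02, 0.01, 0.02, -0.03, 0.01, 1.50]
print(fleet_lane_offset(reports))  # → 0.01 (the outlier has no effect)
```

A plain mean, by contrast, would be pulled to about 0.15 m by the same single outlier, so the choice of aggregation matters as much as the volume of data.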

The question that I have, especially about the illustration of bad lane markers on the 405, is how the car is able to lane-keep solely by using GPS navigation. It doesn't seem accurate enough for that. But he clearly states that it is being done now. It would be interesting to find out more.

"I think what is different about Tesla's approach is the large amount of aggregated data from the entire Tesla fleet. Even if one car does something stupid, the majority of cars will do what makes sense and is normal or best."

They are doing it on a smaller scale right now (with only their own cars driving), but they have been collecting all the data from all the years they've been driving: https://www.youtube.com/watch?v=7Yd9Ij0INX0

But they have everything in place so that once they have customer cars on the roads, they can learn every road. They already have incredibly detailed information about the region where they are driving now.

Please stop calling them "accidents". Here's why.


The phrase “car accident” is so common that many of us use it without even thinking about it.

However, once you do think about it, you begin to realize how silly it is to default to the word "accident" in the context of something that involves police investigation, property damage, injury or death -- not to mention something that is often the inevitable consequence of a crime. Therefore, Transportation Alternatives has launched a campaign to stop calling traffic crashes "accidents," and to instead call them crashes.

Because, you know, that’s what they are.

But many are also accidents, because they were unintended. But indeed, if somebody drives recklessly, the resulting wreck should not be called an accident.
