I get and review Tesla FSD -- and give it an F
Well, I finally got to try Tesla FSD, and it was a big disappointment. From a robocar developer's viewpoint, it sucks and I give it an F.
I made a video review and a text one. The text one contains the review part of the video and lots more information. The video has the 3.5 mile sample ride around Apple HQ, full of mistakes.
Read the text review on Forbes.com at I get and review Tesla FSD -- and give it an F
Quick answers to some viewer/reader questions
Isn't this just a beta? Do you know what a beta is?
There is a section on this in the Forbes article. Suffice to say with 45 years in the software industry and running multiple software companies and projects, I know what a beta is. It's less clear that Tesla wants to use the conventional definition of a beta. Tesla FSD is better described as a prototype than a beta -- the only way it's like a beta is that it's been given out to some early adopter customers.
As a prototype it has many bugs, of course. But Tesla on the one hand keeps declaring that it will be in production very soon (they first promised it would ship years ago), and on the other that portions have undergone "complete rewrites," which never happens in a real beta.
The term "beta" has seen a lot of flux in how it is used over the course of my decades in software development, including some very loose usages. But just because it's a prototype or beta, doesn't mean it's not fair to judge it and compare it with other prototypes, or to measure how far along it is on the path to production "very soon." And it's wanting -- not against anything from other car OEMs, which don't even have efforts of this sort, but against the self-driving teams.
I've ridden in many of the prototype cars of the self-driving teams. I've actually mostly stopped, because they all get the same review if you track this... "boring." It's supposed to be boring, but in a good way. Tesla FSD is not boring, and that puts it way behind the others. Necessary interventions on other than straight roads are very frequent. Not boring means an "F."
In 2018, an Uber prototype killed a pedestrian in Tempe, Arizona. At the time, Uber was doing about one intervention every 13 miles, and most people felt that was far too poor a record to have them switch to one safety driver instead of two. That vehicle, which killed a woman, rated an F, and Tesla FSD's performance at what it is trying to do is not as good!
Isn't an "F" unfair? I mean it's amazing!
People who have not seen the other vehicles will think it's amazing. It's amazing that it does it at all, and amazing that it even does it poorly without maps. But I'm not grading it on that, I'm grading it as an effort to make a full self-driving car.
Imagine you were taking your driving test and you made 3 wrong turns, ran 2 red lights, swerved into two obstacles so the tester had to grab the wheel, stalled in crosswalks for long periods and got honked at and gave a jerky and uncomfortable ride. You would not just get an "F," the tester would stop the test early on and ask to drive the car back to the DMV. You would be told not to come back for a while. You have to perform as well as a teenager to get a passing grade in this game.
I know this because I failed my first driving test, when I was 16, because I stopped at an intersection, couldn't see and advanced into the crosswalk where I sat because I couldn't go due to traffic, while pedestrians arrived and swarmed around me. I didn't run any red lights.
Real self-driving is very hard. Doing it 99.9% of the time may seem amazing if you're a newcomer to the field, but you've got to do it 99.9999% of the time, and reach the level of humans who, bad as they are, have a ding every 100,000 miles, an insurance claim every 250,000 miles and involve police every 500,000 miles. Tesla FSD on roads with any challenge doesn't seem able to go more than a few dozen miles on average without something that would cause a ding. (That it sometimes goes further may impress, but it's the average rate that matters.)
Members of the public see a car drive 100 miles with one mistake and think it's impressive. Robocar engineers would see that and call it a terrible failure. Going 10,000 miles with one serious mistake would be a failure. Going 100,000 miles with one very minor mistake is where they would start to consider it doing OK. Having multiple serious mistakes in 30 miles would result in, "Why are you even here?"
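The gap between these rates can be made concrete with some back-of-the-envelope arithmetic. This sketch uses the human-driver figures cited above; the 30-miles-per-incident prototype rate is an assumed illustrative number, not a measured one:

```python
# Illustrative comparison of incident rates: human drivers (figures
# cited above) vs. an assumed immature prototype. Rough numbers only.

human_rates = {
    "ding": 100_000,             # miles per minor ding
    "insurance claim": 250_000,  # miles per insurance claim
    "police-reported": 500_000,  # miles per police-involved incident
}

prototype_miles_per_incident = 30  # assumed: a serious mistake every ~30 miles

for kind, miles in human_rates.items():
    ratio = miles / prototype_miles_per_incident
    print(f"Human {kind}: 1 per {miles:,} miles "
          f"(~{ratio:,.0f}x the prototype's {prototype_miles_per_incident} miles)")
```

Even against the most forgiving human benchmark (the minor ding), the assumed prototype rate is more than three orders of magnitude short, which is why engineers treat one mistake in 100 miles as a failure rather than an achievement.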
Won't it get better?
In fact, 10.9 just dropped and has improved -- in fact, I suspect Tesla saw this video and filed tickets for some of the problems. It understands the bollards and dedicated lane of the right turn now, and so far has not failed on the forced left lane in 3 tries (though it only failed occasionally before). At the place it took the sudden left and ran the light, I drove 3 loops, and twice it swerved into that left turn lane but then corrected and swerved out of it. The 3rd time, when cars were in the lane, it did not swerve into it. It's unclear if this has improved.
We should hope each release will improve, and I'll drive it more, but it's still too rough -- and inconsistent -- to get a passing grade on this route. Of course, if this came about because Tesla saw the video (that's not confirmed, of course), then it means little, other than that they fix bugs that are reported. The reason I suspect they saw it is that no minor update fixes things this specific in this way.
But Tesla drives on any road, how can you compare it to cars with limited service areas?
Driving without a map is an incredible stretch goal, which Tesla has failed to come remotely close to so far. You don't get points for what you are trying to do, you get points for what you succeed at. Almost every other team believes Tesla has taken the wrong path here (and on LIDAR, but that's a different story.)
Maps are super useful. Any car that could drive without a map is a car that can make a map. In that case, maps are just having a memory: knowing what things look like up close and from other angles because a cousin car drove this road before. The world changes, and every map-making company knows that. When the map is wrong, you take a step down -- and drive like the Tesla tries to drive all the time. Or rather, you do a bit better, because your map is not entirely wrong, and if it's detailed, you know where it's wrong and where it's not. There are many more advantages to maps than this, but even if this were all you did, it would be worth it.
Maps don't scale, you imagine? The first team to build a working map-based car was at Google. I worked on that team. The people who did it had just built Google Streetview. When people asked, "Wouldn't we have to drive every road in the country to map it?" they could answer, "We did it last month, we'll do it again." It's a big project, but very scalable and doable for the likes of the big players in this game.
Or not even. Several companies such as Intel/MobilEye are building their maps just by having the cars with their gear drive the roads and report what they see. MobilEye is in over 100 million cars, dwarfing anybody's fleet. They can't get as much data from these cars as Tesla gets from its much smaller fleet, and Tesla doesn't get as much as Google/Waymo do from their even smaller fleets, but it's enough. If Tesla wanted to, their fleet and sensors are enough.
Even if maps cost a lot of money to make, the amount per mile is still quite low, and the amount per trip over a segment of road is as well. Even if it cost $1,000 to map a mile of road (it won't), that map will serve thousands of people driving it every day. The cost of mapping will not make any trip too expensive. But again, quite useful maps can be made for close to free.
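The amortization works out to a tiny number per trip. This sketch uses the article's hypothetical worst-case $1,000/mile figure; the trip volume and refresh interval are assumed values for illustration:

```python
# Amortizing a hypothetical worst-case mapping cost over trips.
# $1,000/mile is the article's deliberately pessimistic figure;
# trip volume and map lifetime are assumed for illustration.

cost_per_mile = 1_000.00   # hypothetical worst-case mapping cost ($/mile)
trips_per_day = 2_000      # assumed daily trips over that mile of road
map_lifetime_days = 365    # assume the map is refreshed yearly

total_trips = trips_per_day * map_lifetime_days
cost_per_trip = cost_per_mile / total_trips
print(f"Mapping cost per trip-mile: ${cost_per_trip:.4f}")
```

At these assumed volumes the cost comes to a fraction of a cent per trip-mile, which is the sense in which mapping cost cannot price anyone out.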
Most people believe driving without maps is a fool's errand. It may be possible in the future, but it's not the path to getting to success sooner. And in the end, when grading how well the car does, it mostly matters how it does, not what it wants to do. A car that can drive badly everywhere is not better than a car that drives very well in a few cities. Not when it comes to full self-driving.
Those who think maps need to be perfect to work and are useless because they get out of date don't really understand how maps work. While being the very first to encounter a street that has changed since it was mapped is actually extremely rare, everybody knows you still have to handle it.
To those who say, "Driving without a map works everywhere" the reality is that driving without a map works poorly "everywhere." Driving with a map does work well everywhere, and it's not that expensive -- certainly not so expensive that you would give up safety to save a small amount of money. See the video on mapping above for a full explanation with examples.
There are other limitations to those cars, like turning left
Waymo is not confident of 100% safety on unprotected lefts, so one fault of their service is that it sometimes takes a longer route to avoid them. That's definitely something on their todo list. But understand that the Tesla can't do these turns either. Yes, it often pulls them off, but in full self-driving, "often" or anything less than 100% means "can't." The Waymo could almost certainly do those left turns as well as or better than the Tesla, but it's still not good enough for self-driving.
All the companies building robotaxis -- companies like Waymo, Cruise, Argo, Zoox, AutoX, Baidu, WeRide, Motional, Aurora and more -- are operating in limited areas. That is not their long term plan, though. They have the money and plan to expand to lots of cities. They don't plan to drive everywhere because they are taxi companies, not trying to be car companies like Tesla. Making a car you can sell that drives everywhere is a much, much, much harder problem, and most people think Tesla is biting off more than anybody can chew. The real world-changing stuff is in robotaxis that change the nature of car ownership anyway. Even Tesla sees that and talks about how they want to deploy their cars in this way. You can make a great robotaxi service without driving every road, so why delay trying to get there? We'll see who gets there first.
Is driving timidly so bad?
It's not. In fact it's what all early prototype products have to do. But you must graduate from that to be a full self-driving product. It's part of what is holding even Waymo back. If you're too timid to go into production, you've failed. You can't be out there blocking traffic, getting honked at. Your project may get pulled from the roads if you do.
How is it OK or legal to do this?
Every robocar team has put a car on the road that was at this poor level when it first started, but they had safety drivers in the car to intervene if it made a mistake. I myself have done this with the Google car. With one glaring exception from Uber, the safety record of this technique has been exemplary -- superior to the safety record of regular human drivers.
As long as we aren't seeing accidents that hurt other people in other cars at a rate above that for normal driving, there is no reason to stop this, even though Tesla is definitely skating near the edge, and has some key differences:
- Other teams have professional employee safety drivers with some training. Tesla just has ordinary owners. Uber had an employee who completely neglected her job.
- Other teams start with 2 safety drivers, though the goal is someday to get to zero, which Waymo and a few others have done. (Uber switched to one, which was tragic because with two, the one at the wheel would not watch TV.)
- Tesla's drive quality is quite a bit poorer than other cars were after several years of development -- but again, as long as people are not hurt that's not a key issue.
There are reports of people getting very minor issues like curb dings and scrapes. Elon Musk claims no accidents, but he means serious ones, where an airbag deploys. As long as the record stays like that, and the public is not at greater risk than from ordinary driving, calls to stop Tesla are not appropriate. Though in California, there is an argument that what they are doing needs a permit under California law.
Is this FUD? Do you hate Tesla?
As I said, I love Tesla. I like Elon. I don't support everything they do, because they make mistakes. I am not afraid to call those out. Yes, I think Waymo is #1 and that other companies are far ahead of Tesla. Yes, I am friends with people at Waymo and many other companies, but I stopped working there in 2013, so I have no financial incentive. I own GOOG and TSLA stock, but don't imagine for a second that what I write would ever move the needle on either of those stock prices. FUD requires a motive. I own a Tesla and would love for them to succeed, but I'll be critical when they take the wrong path to success. They are taking a very long bet -- it might even pay off, which would be great. But it's the wrong bet from what we know today.
Humans drive with just their eyes
Yes, everybody knows this very old (in the field) phrase. And birds can fly just by flapping their wings. We are not even remotely close to matching human general intelligence, though. We're not even up to matching a horse or a bird. So yes, clearly, if we got AI that approached humans, or even the part of humans that drives, you could do it with just a camera.
In fact, humans do it with just a single camera (we can do it with one eye) that we swivel around but mostly point forward. By that argument, Tesla is stupid to have 8 cameras on their car, and ultrasonics. Humans don't need those. We don't build airplanes to fly like birds, and it's not at all inherently right to say that driving systems are best built to work like humans.