I get and review Tesla FSD -- and give it an F

Well, I finally got to try Tesla FSD, and it was a big disappointment. From a robocar developer's viewpoint, it sucks and I give it an F.

I made both a video review and a text review. The text one contains the review part of the video plus a lot more information. The video has the 3.5-mile sample ride around Apple HQ, full of mistakes.

Read the text review on Forbes.com at I get and review Tesla FSD -- and give it an F

Quick answers to some viewer/reader questions

Isn't this just a beta? Do you know what a beta is?

There is a section on this in the Forbes article. Suffice it to say that with 45 years in the software industry and running multiple software companies and projects, I know what a beta is. It's less clear that Tesla wants to use the conventional definition of a beta. Tesla FSD is better described as a prototype than a beta -- the only way it's like a beta is that it's been given out to some early adopter customers.

As a prototype, it has many bugs, of course. But Tesla on the one hand keeps declaring that it will be in production very soon (they first promised it would ship years ago), and on the other that portions have undergone "complete rewrites," which never happens in a real beta.

The term "beta" has seen a lot of flux in how it is used over the course of my decades in software development, including some very loose usages. But just because it's a prototype or beta, doesn't mean it's not fair to judge it and compare it with other prototypes, or to measure how far along it is on the path to production "very soon." And it's wanting -- not against anything from other car OEMs, which don't even have efforts of this sort, but against the self-driving teams.

I've ridden in many of the prototype cars of the self-driving teams. I've actually mostly stopped, because they all get the same review if you track this... "boring." It's supposed to be boring, but in a good way. Tesla FSD is not boring, and that puts it way behind the others. Necessary interventions on anything other than straight roads are very frequent. Not boring means an "F."

In 2018, an Uber prototype killed a pedestrian in Tempe, Arizona. At the time, Uber was doing about one intervention every 13 miles, and most people felt that was far too poor a record to have them switch to one safety driver instead of two. That vehicle, which killed a woman, rated an F, and Tesla FSD's performance at what it is trying to do is not as good!

Isn't an "F" unfair? I mean it's amazing!

People who have not seen the other vehicles will think it's amazing. It's amazing that it does it at all, and amazing that it can even do it poorly without maps. But I'm not grading it on that, I'm grading it as an effort to make a full self-driving car.

Imagine you were taking your driving test and you made 3 wrong turns, ran 2 red lights, swerved into two obstacles so the tester had to grab the wheel, stalled in crosswalks for long periods, got honked at, and gave a jerky and uncomfortable ride. You would not just get an "F," the tester would stop the test early on and ask to drive the car back to the DMV. You would be told not to come back for a while. You have to perform as well as a teenager to get a passing grade in this game.

I know this because I failed my first driving test, when I was 16, because I stopped at an intersection, couldn't see, and advanced into the crosswalk, where I sat, unable to go due to traffic, while pedestrians arrived and swarmed around me. I didn't run any red lights.

Real self-driving is very hard. Doing it 99.9% of the time may seem amazing if you're a newcomer to the field, but you've got to do it 99.9999% of the time, and reach the level of humans who, bad as they are, have a ding every 100,000 miles, an insurance claim every 250,000 and police involvement every 500,000 miles. Tesla FSD on roads with any challenge doesn't seem able to go more than a few dozen miles on average without something that would cause a ding. (That it sometimes goes further may impress, but it's the average rate that matters.)

Members of the public see a car drive 100 miles with one mistake and think it's impressive. Robocar engineers would see that and call it a terrible failure. Going 10,000 miles with one serious mistake would be a failure. Going 100,000 miles with one very minor mistake is where they would start to consider it doing OK. Having multiple serious mistakes in 30 miles would result in, "Why are you even here?"
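
To make those numbers concrete, here is a small back-of-envelope sketch in Python. The human benchmarks are the ones cited above; the prototype's "few dozen miles" figure is an illustrative assumption, not a measurement:

    # Miles per incident: human benchmarks cited above vs. an assumed
    # prototype rate. The prototype figure is an illustrative assumption.
    human_miles_per_event = {
        "minor ding": 100_000,
        "insurance claim": 250_000,
        "police involvement": 500_000,
    }
    assumed_prototype_miles_per_ding = 30  # hypothetical "few dozen miles"

    for event, miles in human_miles_per_event.items():
        gap = miles / assumed_prototype_miles_per_ding
        print(f"{event}: humans ~{miles:,} miles; a prototype at "
              f"{assumed_prototype_miles_per_ding} miles/ding is ~{gap:,.0f}x short")

The point of the arithmetic is that the gap is measured in factors of thousands, not percentages.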

Won't it get better?

In fact, 10.9 just dropped and has improved -- indeed, I suspect Tesla saw this video and filed tickets for some of the problems. It understands the bollards and dedicated lane of the right turn now, and so far has not failed on the forced left lane in 3 tries (though it only failed occasionally before). At the place where it took the sudden left and ran the light, I drove 3 loops; twice it swerved into that left turn lane but then corrected and swerved out of it. A 3rd time, when cars were in the lane, it did not swerve into it. It's unclear if this has improved.

We should hope each release will improve, and I'll drive it more, but it's still too rough -- and inconsistent -- to get a passing grade on this route. Of course, if this came about because Tesla saw the video (that's not confirmed, of course) then that means little, other than that they fix bugs that are reported. The reason I suspect they saw it is that no minor update fixes things this specific in this way.

But Tesla drives on any road, how can you compare it to cars with limited service areas?

Driving without a map is an incredible stretch goal, which Tesla has failed to come remotely close to so far. You don't get points for what you are trying to do, you get points for what you succeed at. Almost every other team believes Tesla has taken the wrong path here (and on LIDAR, but that's a different story.)

Maps are super useful. Any car that could drive without a map is a car that can make a map. In that case, maps are just having a memory: knowing what things look like up close and from other angles because a cousin car drove this road before. The world changes, and every map-making company knows that. When the map is wrong, you take a step down -- and drive like the Tesla tries to drive all the time. Or rather, you do a bit better, because your map is not entirely wrong, and if it's detailed, you know where it's wrong and where it's not. There are many more advantages to maps than this, but even if this were all you did, it would be worth it.

Maps don't scale, you imagine? The first team to build a working map-based car was at Google. I worked on that team. The people who did it had just built Google Streetview. When people asked, "Wouldn't we have to drive every road in the country to map?" they could answer, "We did it last month; we'll do it again." It's a big project, but very scalable and doable for the likes of the big players in this game.

Or not even. Several companies such as Intel/MobilEye are building their maps just by having the cars with their gear drive the roads and report what they see. MobilEye is in over 100 million cars, dwarfing anybody's fleet. They can't get as much data from these cars as Tesla gets from its much smaller fleet, and Tesla doesn't get as much as Google/Waymo do from their even smaller fleets, but it's enough. If Tesla wanted to, their fleet and sensors are enough.

Even if maps cost a lot of money to make, the amount per mile is still quite low, and the amount per trip over a segment of road is as well. Even if it cost $1,000 to map a mile of road (it won't), that map will serve thousands of people driving it every day. No trip will become too expensive because of the cost of mapping. But again, quite useful maps can be made for close to free.
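
As a rough sketch of that arithmetic (both the $1,000/mile figure from above and the traffic volume are illustrative assumptions):

    # Amortizing an assumed mapping cost over the trips that use it.
    cost_to_map_mile = 1_000.00   # assumed (and pessimistic) one-time cost
    trips_per_day = 2_000         # assumed daily trips over that mile
    refresh_days = 365            # assume the map is redone yearly

    cost_per_trip = cost_to_map_mile / (trips_per_day * refresh_days)
    print(f"${cost_per_trip:.5f} per vehicle-mile")  # ~$0.0014, a small fraction of a cent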

Most people believe driving without maps is a fool's errand. It may be possible in the future, but it's not the path to getting to success sooner. And in the end, when grading how well the car does, it mostly matters how it does, not what it wants to do. A car that can drive badly everywhere is not better than a car that drives very well in a few cities. Not when it comes to full self-driving.

Those who think maps need to be perfect to work and are useless because they get out of date don't really understand how maps work. While being the very first to encounter a street that has changed since it was mapped is actually extremely rare, everybody knows you still have to handle it.

To those who say, "Driving without a map works everywhere," the reality is that driving without a map works poorly "everywhere." Driving with a map does work well everywhere, and it's not that expensive -- certainly not so expensive that you would give up safety to save a small amount of money. See the video on mapping above for a full explanation with examples.

There are other limitations to those cars, like turning left

Waymo is not confident of 100% safety on unprotected lefts, so one fault of their service is that it sometimes takes a longer route to avoid them. That's definitely something on their todo list. But understand that the Tesla can't do these turns either. Yes, it often pulls them off, but in full self-driving, "often" or anything less than 100% means "can't." The Waymo could almost certainly do those left turns as well as or better than the Tesla, but it's still not good enough for self-driving.

All the companies building robotaxis -- companies like Waymo, Cruise, Argo, Zoox, AutoX, Baidu, WeRide, Motional, Aurora and more -- are operating in limited areas. That is not their long term plan, though. They have the money and plan to expand to lots of cities. They don't plan to drive everywhere because they are taxi companies, not trying to be car companies like Tesla. Making a car that you can sell and that drives everywhere is a much, much, much harder problem, and most people think Tesla is biting off more than anybody can chew. The real world-changing stuff is in robotaxis that change the nature of car ownership anyway. Even Tesla sees that and talks about how they want to deploy their cars in this way. You can make a great robotaxi service without driving every road, so why delay trying to get there? We'll see who gets there first.

Is driving timidly so bad?

It's not. In fact it's what all early prototype products have to do. But you must graduate from that to be a full self-driving product. It's part of what is holding even Waymo back. If you're too timid to go into production, you've failed. You can't be out there blocking traffic, getting honked at. Your project may get pulled from the roads if you do.

How is it OK or legal to do this?

Every robocar team has put a car on the road that was at this poor level when it first started, but they had safety drivers in the car to intervene if it made a mistake. I myself have done this with the Google car. With one glaring exception from Uber, the safety record of this technique has been exemplary -- superior to the safety record of regular human drivers.

As long as we aren't seeing accidents that hurt other people in other cars at a rate above that for normal driving, there is no reason to stop this, even though Tesla is definitely skating near the edge and has some key differences:

  • Other teams have professional employee safety drivers with some training. Tesla just has ordinary owners. Uber had an employee who completely neglected her job.
  • Other teams start with 2 safety drivers, though the goal is someday to get to zero, which Waymo and a few others have done. (Uber switched to one, which was tragic because with two, the one at the wheel would not watch TV.)
  • Tesla's drive quality is quite a bit poorer than other cars were after several years of development -- but again, as long as people are not hurt that's not a key issue.

There are reports of people getting very minor issues like curb dings and scrapes. Elon Musk claims no accidents, but he means serious ones, where an airbag deploys. As long as the record stays like that, and the public is not at greater risk than from ordinary driving, calls to stop Tesla are not appropriate. Though in California, there is an argument that what they are doing needs a permit under California law.

Is this FUD? Do you hate Tesla?

As I said, I love Tesla. I like Elon. I don't support everything they do, because they make mistakes. I am not afraid to call those out. Yes, I think Waymo is #1 and that other companies are far ahead of Tesla. Yes, I am friends with people at Waymo and many other companies, but I stopped working there in 2013, so I have no financial incentive. I own GOOG and TSLA stock, but don't imagine for a second that what I write would ever move the needle on either of those stock prices. FUD requires a motive. I own a Tesla and would love for them to succeed, but I'll be critical when they take the wrong path to success. They are taking a very long bet -- it might even pay off, which would be great. But it's the wrong bet from what we know today.

Humans drive with just their eyes

Yes, everybody knows this very old (in the field) phrase. And birds can fly just by flapping their wings. We are not even remotely close to matching human general intelligence, though. We're not even up to matching a horse or a bird. So yes, clearly, if we got AI that approached humans, or even the part of humans that drives, you could do it with just a camera.

In fact, humans do it with just a single camera (we can do it with one eye) that we swivel around but mostly point forward. By that argument, Tesla is stupid to have 8 cameras on their car, and ultrasonics. Humans don't need those. We don't build airplanes to fly like birds, and it's not at all inherently right to say that driving systems are best built to work like humans.

Comments

“Tesla’s performance is beyond what Waymo could do a decade ago”

Is “beyond” meant to be “behind” here?

(Yes, this was fixed, thanks... Brad)

The blogosphere has not bombarded and bashed the column or the columnist as of this moment.

bradtem

Certainly I was expecting a much lower intervention rate. Highways and arterials (which Autopilot does reasonably well) are low intervention. This is what surprised me. I mean I just left my home and did a simple loop about the area on roads I don't think are particularly unusual, and it was intervention after intervention. I expected much better performance. Maybe as I drive more miles I'll see a better record. (But my sweetie prefers not to be in the car while testing, I guess she wants me to die alone in the fiery crash. :-) However, a good system almost never drives any road like this drive, let alone the first road you drive leaving your house.

...
Tesla's perception is limited, and of course they refuse to use LIDAR and now radar. The side-rear cameras should show this traffic, but there are limits to what you can do with these cameras, particularly in sun or rain.

I don't understand some of the planning errors, particularly the sudden veer into a left turn lane. We often see Tesla's planner wobbling about considering very different plans, and that doesn't inspire confidence. Why it selects plans that are not at all along the planned nav route is odd, though in some cases it's because it now feels that route is not driveable, so it does what it can.

I would hope with a better map it could make clear plans and stick to them. As long as it builds the map on the fly, the plan it picks is going to be a very dynamic thing and lead to surprises. On Tantau, the street of Apple HQ, the lane is marked with giant left arrows. It usually sees those in its perception system but doesn't here, and I don't know why.

Your feedback to others on the YouTube post is helpful as well.

"...Hey, I'm a Telsa fan. They are the best car company, that's why I gave them $55,000 of my hard earned money. I'm an Elon Fan too, but that doesn't mean everything he does is perfect. ...

The other automakers are not really even attempting this (other than through subsidiaries) any more. As for the companies who are attempting it, there may be some who are performing as badly as Tesla, some who are even a little worse. The biggest players now -- Waymo, Cruise, Zoox, Motional, AutoX, WeRide, Baidu, Argo, TuSimple, Plus.AI, MobilEye and a few others -- well, Tesla isn't even in their league at present. I was expecting FSD to be better, to at least compete with some of these, but sadly, it can't. "

All your comments and answers, Reddit / Youtube / Twitter / Forbes etc consolidated together help greatly when read as a whole.

99% of people just want to comment where they read the post. You can try to designate a central location and they ignore it.

Picking the wrong technology sets a program back years. Will the FSD chip mated with the D1 Dojo supercomputer and AI/NNs succumb to a sensor suite choice (lack of radar, lidar, maps)? Is it easier to add than to take away?

Quote:
"If I was an AI engineer, that's (Tesla) exactly where I would go. Probably NOT, honestly. There's a reason they're holding such a "recruiting" event, and it's not because the top talent is banging down their door for a job. You'd much more likely find yourself in a research program, or at one of the major behemoths in the industry like Google, Amazon, IBM, every military contractor, hell even Tencent. You'll notice that Tesla is missing from things like the NIPS conferences, too, so it's basically a zero percent chance someone would go work there."

We have had our Model 3 for 2.4 years and love Autopilot. If you live in Palm Desert, the best test is Palms to Pines from here to San Diego! It does the switchbacks with ease with the set speed at the posted 55, and it slows down for sharp turns... the only issue is it slows before the curve, much like a sports car on a track, and folks following seem to be in a huge hurry, wanting to run up on the car... they should learn a little more about driving safely in switchbacks. I noticed the demo was a 2018. Why use a car that is at least 3-4 years old????

The 2018 Tesla is pretty much identical to current ones when it comes to the self-driving system. It has the latest computer, of course, and the same sensor suite, though it has a radar that the newer cars do not. (The FSD system does not use the radar, according to reports.) The main differences with newer models are minor -- heat pumps, charging controller and a few other things, not affecting FSD. Tesla regularly claims that all cars back to about 2016 have the hardware they need for FSD, though they do require the processor board upgrade, and probably will require another one.

A sub-$35,000 BEV with SAE Level 4 included would be the most disruptive event in the history of the automotive industry. Elon, build a Model 2 with Level 4 for $30,000. Mobileye, join forces with Nissan and build a BEV with Level 4 for $30,000. The camera guys will forever be worshipped by the working class.

I have been involved with the design and manufacturing of simple electronic systems, both hardware and software. Certainly you may be able to make a driverless car drive as well as a highly skilled human for, say, a few years. But there is a major problem: all the extra complex hardware and software, sensors, wiring and god knows what adds extra chances of faults occurring, usually intermittent faults. I have worked on late model Range Rovers, and after about 10 years so many frustrating faults have occurred that the computer diagnostic system could not uncover. Sometimes the vehicles are just binned. I have ended up with the binned ones. Yes, driverless cars will have a place, but there is one thing I have that a computer system does not: FEAR. There is no way you can make highly complex systems 100% reliable. Look at even the aviation industry, which is a best practice industry.

I think you are way too harsh, way out of line. Tesla FSD is not available to anyone who has not jumped through a bunch of hoops to become, not a user, but a tester. This is not a finished product and is not being presented as such. Frankly, I think Elon had a great deal of nerve legally, knowing that there would be "reviewers" like you, or people who are openly or otherwise representing the entrenched ICE world, who would pop up and try to review this system as if it were being promoted for prime time. It's nothing like that. It is in beta testing. How could Tesla make that clearer? I too have this system, and it is teaching me just how difficult learning to drive, and then teaching a machine, can be. Yes, this system has miles to go, but it needs to be in the field in some form to learn what it is you do when you drive a car. This job is incredibly hard. Progress will be incremental and very slow, but without beta testers, we may not see it in our lifetime. Relax, learn more while the car learns. Progress will necessarily be slow; please be patient. Mike Heaton

It’s hard to take you seriously (even if you might be right in some respects) because you think Brad supports ICE.

I have a history of 45 years in software development and 15 years in robocars. I am fully aware that the product is a prototype. It only gets a review because it is being disseminated to a wider audience and because Tesla has said (for some time) that it will be a commercial production product "very soon." Many people seem to take that claim of being ready soon as credible. It is reasonable to report evidence that it is not. If Tesla didn't annoyingly claim that this is a product on the verge of release, I would treat it as the prototype it is.

That said, though, I've ridden in several prototype self-driving systems, though this is the first claiming to drive mapless on any street. In contrast to the others it scores quite poorly, which is why it gets the "F" at what it is as well as what it is pretending to be. It is very worthy of comment whether trying to drive mapless on arbitrary roads is a bold stroke, or an error.

Interesting 72 hrs - Grade of F by BT, and The Dawn Project (with full page NYT ad and website).

Crash Test Dummy.

You are correct in your criticism that this should not be called "Full Self Driving". And you are correct that Elon has been dead wrong about how soon real FSD can be delivered.

You say, "Most people believe driving without maps is a fool's errand. It may be possible in the future, but it's not the path to getting to success sooner."

You have no way of knowing this. Success in AI tends to sneak up on us. Tesla is doing something far more ambitious than the others. Nobody on the planet really knows if they will succeed or not. And nobody really knows how soon they could succeed.

A few years ago, Lex Fridman told his MIT students, "There exists a neural net that can drive better than a human." This is obviously true and there is no reason to believe that Tesla will not be the first to develop it. Conversely, there is no reason to believe they will be either.

We just don't know. It wasn't long ago that nobody thought a computer could ever beat a world class Go player. The combinatorics of the problem were just too massive for computers to handle. We saw how that one turned out.

I have said many times (including above) that what Tesla is doing is placing a longshot bet on a breakthrough. It might pay off. It probably won't. That is not a statement that one approach is completely wrong, just that it's more likely to be.

Tesla is being more "ambitious" in some ways, but actually even that might be wrong. Waymo could do what Tesla is trying. Probably do it better than Tesla -- there is certainly better neural network skill at Google than anywhere else in the world, and by a good margin. But they don't think that's the most likely path to success. Their ambition is quite high, actually.

I am not sure I agree with Lex on this. Or rather I don't think it's a true statement that "there exists a neural net that can drive better than a human ... on the compute hardware available today and in the next few years." That's not at all demonstrably true. Nor that we know or will know how to train that network.

It is definitely true that the neural net Tesla is after can be built. That's all Lex was saying. It's not (yet) demonstrably true. But it is unequivocally true nonetheless. We don't know if it really requires a true breakthrough or just more and more iterations and innovations.

Google is no better than Tesla at neural networks, at least in the autonomous driving space. They are both very, very good. Both have top AI talent and both are developing cutting edge hardware.

The difference I see is that Waymo has to figure out how to build a robotaxi network that is profitable. It looks to me like Waymo has a long way to go and I'm just as skeptical about Waymo's profitability as you are about Tesla's FSD. (Actually, I'm quite skeptical about both. It's a really hard problem.)

Tesla doesn't need robotaxi. If they can just make FSD work as well on city streets as it does on the highway then FSD will be extremely profitable for Tesla. Robotaxi would be a bonus.

But despite Elon's embarrassing missed predictions, Tesla actually has a luxury of time that Waymo does not. Waymo can't burn money forever. But Tesla isn't really spending all that much on FSD. In fact, FSD is already generating significant revenue. Because Tesla's cost structure for the project is smaller, they can keep working on the problem much longer.

You must use a different definition of that phrase than I do. Perhaps you think the human brain is nothing but neural networks. That presumes facts not in evidence.

No, Google's AI talent is leaps and bounds over Tesla's. Tesla is not inventing this stuff. They use the best that other people invent. Not saying they are bad at doing that, and they are making their own innovations, but there is no evidence they even approach the kind done by people at Google, including DeepMind and Geoff Hinton.

Tesla doesn't need a robotaxi, but the personal car is a much harder problem than a robotaxi. So by aiming for it, they put themselves at a disadvantage. However, I will say of all the car makers, Tesla is certainly better equipped to handle that task. But considering what the other car makers are like, that's not a high bar. (Though the car makers who have been willing to just have a startup subsidiary like Argo/Cruise/Motional do it have a better chance than the others.)

Don't count out MobilEye. They believe in neural networks and evolved ADAS like Tesla does. They are in an order of magnitude more cars than Tesla is, though they don't have the fine control over them that Tesla has. They are building LIDARs and imaging radars and doing it well. Being part of Intel, they are a level above Tesla in being able to make custom silicon. They have embraced maps and are doing them well, exploiting their much larger fleet of cars.

Look at the sheer number of employment openings at Cruise Automation. Then consider that Waymo recently spoke of rebuilding the complete stack and sensor suite.
Lex Fridman podcast (start at the 2:04 mark)
#241 – Boris Sofman: Waymo, Cozmo, Self-Driving Cars, and the Future of Robotics

BTW, it's also obvious that vision and neural nets are all that is needed to perform the driving task. That's how humans do it.

So to say that driving without maps is a fool's errand is quite wrong. All of us human fools still manage to drive.

Again, we don't know which approach gets us there sooner. But we do know that if Tesla is successful, then Tesla gets us there cheaper.

Yes, people have been saying this line for a very long time, long before Elon even dreamed of it. They imagine that somehow people are not aware that humans can drive in the human way (with one eye missing, in fact.)

It is possible, if you have some fraction of natural human intelligence. Which we don't have, not even remotely close. So it's not really relevant. Birds can fly fine but we don't make aircraft flap their wings.

(Yes. My grandfather was blind in one eye and he drove very well.)

People, many who are AI experts, have been saying that line because it is an excellent argument.

Vision plus a neural net can definitely drive a car. What we don't know is how hard it will be to create an artificial neural net that can drive better than a human. We only know it is possible.

Our whole argument comes down to the idea that you say Tesla's approach is likely to fail and I say we don't know. We can't possibly know the probability of success or failure at all. I don't even think we can possibly have an informed opinion.

Frankly, I think both Waymo and Tesla look like they are making progress. Then I take another look and I think both Waymo and Tesla are in "fake it 'til you make it" mode.

There are a lot of things that are possible that are still intractable for foreseeable engineering. And it is far, far, far, far, far from proven that anything a human brain can do can be done by a camera and a neural network. People debate it a lot, but there is certainly no consensus that this is true. (In fact, there is a larger consensus that it is not.) Which does not mean it's false, but that you can't assert it as an accepted fact. It just isn't.

It's not a fact that a neural net can drive a car. And it's not a fact that if a neural net can drive a car, that we know how to build one. These are both speculations without a lot to back them up, caused by the fact that neural nets have conquered a number of interesting AI pattern matching and statistical problems which previously could not be done as well. That trend makes some people extrapolate that there is no limit.

Maybe there is no limit. Maybe it can be done. Some are betting it can, others that it can't or is too hard. Nobody can declare "a neural net can definitely drive a car." Well, not with any force behind it.

I read you saying their approach is likely to succeed, and I say we don't know. I find it odd that you represented my view as the reverse of what it is.

Can we know the probability? Well, no, not with breakthroughs. They happen or they don't. You can have intuitions on the speed of progress toward them, but they are, by definition, breakthroughs.

Waymo and Tesla actually both deserve credit for being willing to let members of the public try their systems. That's an important bar few others clear. That's not faking it.

When I say a neural net can definitely drive a car, I mean the one in our heads. Whether a human brain is "more than a neural net" is, to me, a matter of semantics.

I generally agree with you on what you said in this latest post.

While humans are not a neural net (not in the AI and computer science meaning) even if we were, one could point out that we're not very good, on average, at driving cars. Of course, some humans drive a whole life without an accident, though on the whole, our species has far too high a death toll for its driving. Such a system could never get adopted today.

Though to disagree with myself, most of our accidents are due to failed attention, which machines are less likely to suffer from.

I do think we are machines. Neural nets are a very useful abstraction inspired by biological brains, but they are not the same.

I agree with this thought as well. And given that humans are not very good at driving, this gives much credence to Lex's statement that there exists an artificial neural net that can drive better than a human.

I personally think Lex's statement is obviously true from a theoretical standpoint, but we don't know exactly how hard it is to create that NN in the real world.

Perhaps I will get around to forming a reply to the other thread where we have more disagreement.

I would certainly say that there can be a computer system that drives better than a human. Whether it is a 2022 neural net or not, or something derived from a 2022 neural net, is a different question which can't be stated with certainty (either way.)

I suspect the answer will include lots of neural net tech (though not necessarily built or trained as we do today) and a bunch of other techniques. Some suspect it can be done entirely with machine learning. That's not yet shown. Machine learning will play a big role, and most teams out there are working on that assumption.

My uneducated view is that true autonomy is so hard because it must "do everything." That is, with ADAS I can keep adding features... BSD, lane-keeping, automatic emergency braking, etc. And as Aptiv has said, get to 80% of full-autonomy safety benefits at 20% of the cost. But in this approach the driver is supposed to keep doing everything as before, with the system intervening as needed. In full autonomy the system is inverted: it does everything and the driver is out of the loop. (I do understand the SAE level system, I am making a broader point.) Thus its processing burden is immense. BSD = focusing a camera at the blind spot and beeping if there is a vehicle present. Full autonomy: "Human, you are attempting to enter the airport parking garage via the exit side... I must stop that." In full autonomy, the "edge cases" become all that is truly important, because instead of building up from limited functions (additive ADAS), full autonomy must "solve for the world." I was struck to read Waymo's paper about how its system would avoid the vast majority of fatalities in Chandler etc., not because the tech is amazing (it is!) but because the keys to this high safety level were mostly such incredibly mundane things: don't run stop signs, don't break the speed limit, don't turn left into traffic.

The hard part is not telling the human they are going in the wrong entrance of the parking garage. It's taking the car with nobody in it and driving it in the right entrance while making no mistakes.

There are two camps. Most are in the "ladder to the moon" camp and think you should focus only on full autonomy (in a defined useful subset of situations and streets). The other camp thinks you can start with ADAS and just keep making it better. This camp is smaller but has MobilEye, Tesla, Tim Kentley-Klay's new startup and a few others in it. Most are in the former camp.

Why would a CEO of a reputable company author an exaggeration of this nature?

Dan O'Dowd on Twitter in the last hour.

"I am worried about the Billions that will die if FSD goes live. It will be almost a Billion in the first few days."

If you look, it is a fake account.

It's over the top, that's for sure.

Regarding traceability, validation and verification, how can Tesla make constant changes to a subsystem like beta FSD without significant internal testing first? Even Waymo states simulation is only good to an extent. Are the Tesla employees who are active in the FSD program on the payroll when they are in the vehicle with FSD active?

I would presume, if it were my team, that it would have a very large regression suite with lots of simulation scenarios and recorded data streams for any new build. That would be followed by some test track and on-road drives by staff -- Tesla has thousands of employees with Tesla cars -- and then you might consider release to testers.
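
As a sketch of what such a gate might look like in the abstract -- this is purely hypothetical, not a description of Tesla's actual tooling, and the scenario names and functions are invented for illustration:

    # Hypothetical regression gate for a new driving-stack build.
    # Scenario and run_scenario() are invented names for illustration.
    from dataclasses import dataclass

    @dataclass
    class Scenario:
        name: str                # e.g. a recorded drive or synthetic scene
        max_interventions: int   # interventions allowed in simulation

    def run_scenario(build: str, scenario: Scenario) -> int:
        """Replay the scenario against the candidate build and return
        the number of interventions the simulator logged. (Stubbed.)"""
        return 0

    def gate(build: str, suite: list[Scenario]) -> bool:
        failures = [s.name for s in suite
                    if run_scenario(build, s) > s.max_interventions]
        if failures:
            print("Build rejected; regressions in:", failures)
            return False
        print("Build passed; promote to track and staff road testing.")
        return True

    gate("candidate-build", [Scenario("unprotected_left", 0),
                             Scenario("lane_ending_merge", 0)])

The design point is that recorded data streams make such a suite cheap to grow: every field intervention can become a new scenario.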

Not sure what you mean by on the payroll. I presume they are salaried, most of them.

It is my understanding that this is exactly what Tesla is doing. It's regression suite + live testing.

And they are live-testing multiple versions at once. Some employees are live-testing the latest minor version while others are live-testing the next major version.

I believe that right now the next major is version 11 with the "one stack to rule them all" architecture. I think this is also the version where the system has some understanding of object permanence. I expect it to be delayed quite a bit because it is so different and needs lots of extra testing.

Part of Yann LeCun's comments on Twitter. Read the entire thread and then tweet your review.

"In 2016, MobilEye "divorced" from Tesla, and Tesla had to start building its own driver's assistance system.

It took them 2 years to build a system that matched the MobilEye performance."

"It's not an autopilot. It's a driver's assistance system.
And for that, it's great.
It warns you when you don't hold the wheel for more than a few seconds.
Your cognitive load is much lower (much less tired after long trips), and the driving is safer overall. "

Hard to believe these are comments from an AI expert.
Disjointed, vague, not explicit, possible innuendo.
At least BT's article is coherent.

Tesla is almost a trillion dollar company in value, worth more than all of the below.

Will Toyota, Volkswagen, Mercedes, GM, Ford, BMW, Hyundai, Nissan, SAIC, Volvo resort to public beta testing of similar capabilities to the FSD program?

Why not, then, is the real question.

It is mostly about their position in EVs. Some of it will come from AV dreams. Ford, Hyundai and GM are testing their vehicles with superior capabilities to FSD -- far superior as far as I know -- on public roads.

Given the list of Ford, Hyundai and GM, why not mention Baidu, Volvo, Mobileye or Mercedes?

I don't think self-driving efforts constitute much of the valuation of Volvo and Mercedes. They do constitute some, but not a grand part, of the value of Tesla, GM and possibly Ford and Hyundai though less sure about those.

When you claim "leaps and bounds" in terms of talent, but you only worked at Google almost 10 years ago and (perhaps) never at Tesla, I wonder where your info is from?
Also, Waymo is WAY older and has spent WAY more money than Tesla FSD. And, confirmed on Lex Fridman's podcast, they are rewriting their code base just like Tesla.
Given that you don't have information bias (like, not from your Google friends), with your network, it would be best for self-driving supporters if you did public reviews of all the other self-driving systems that you claim are "an order of magnitude" better than Tesla FSD.
Nonetheless, what is your scale for SD systems? Waymo not being able to make a left is okay, Cruise not being able to go on city streets is okay, all the map-based systems not being able to cross cities/states is okay. To a robocar specialist, don't they all deserve an "F"?
BTW, you can pick at Tesla's words saying they only see serious accidents as "mistakes." When you say "MobilEye... are in an order of magnitude more cars than Tesla," what is the actual number of miles with the system activated?
Dare I say Tesla has an order of magnitude more miles with FSD activated than others? I am not sure, just saying, but I am not a specialist, right?
I do think you are one of the very few critics who are actually trying to be objective; I just think that with your background and years of experience, coming out with reviews of the whole SD car industry with deep research would give a more awesome insight into the industry.
A good review, but I will give it a D, for "Do it better next time."

As I have said, only Waymo and Tesla are willing to let people ride their own routes in them, which is what is needed for a review. Props to both of them. I have taken rides in several and watched many rides for the press, and the moment I turned on FSD it was clear it was a step behind just from the amount of jerk. Videos released by companies are cherry picked and not useful (Tesla had a video in 2016 that looked fine, of course). Ideal is public directed rides, and failing that, personal rides and press rides where they selected the route but not the circumstances.

Of course Waymo is still busy coding, as is everybody else. As they should be.

The "F" is for what Tesla claims they have -- a beta that is very close to release. Tesla does not have that. That they call it that is odd. So if they say, "Try our beta that will be released this year" (after predicting that for several years) it has to be something matching that. Cruise, whom I have not ridden in (except a long time ago, a different product which wasn't that good) is clear on their limits now, though I am skeptical of their 2023 release. Not being good at left turns is a flaw, though only in ride time, which is pretty far down the problem list.

Not using maps is a bug, not a feature. It's better to get a car that can actually pull it off in a few cities than a car that drives badly in all cities. The latter has no value.

But I do have plans to review the others that will let me. Those who will not let me will be classed as not ready for a review. Tesla is also not ready, but at least they let people do it.

OP here.
Will look forward to your reviews of other systems.
I think the point here is that, after reading through your thought process, it seems like the problematic statement is "Tesla is way behind others," while the fact is, objectively speaking, hard to say due to their unique approach to the situation.
When you make statements with your background, they will get turned into a hit-piece by those mass media who have agendas against Tesla. I guess great ability comes with greater responsibility?
On the topic of Tesla's approach, I do side with you in the sense that they aren't following the straightest path: they want more than robotaxis in cities, they want an AI on wheels that can even go without maps (btw, I think they do use maps and plan to implement locale memory; Elon mentioned it, maybe on Lex's podcast). And after years of struggles, they finally figured they actually need to reach AGI to get there, thus why they announced the plan to make AI humanoid robots.
Elon does what he does; he is always ambitious but always late (but gets things done in the end nonetheless).
Cut down the ambiguous statements, present as an objective specialist, and if you do have fair views on the whole industry, I am sure your research will do well for the greater good.

Phil Koopman's blog Safe Autonomy is worth reading to understand the background.

If the "F" is based on Tesla's claims, then I agree with Brad. Elon has always massively over-promised on his timeline. So for that, Elon gets an "F".

But I really don't care that Tesla's definition of "beta" is different from the usual software definition. Anyone who, like me, participates in the FSD Beta program knows what they are getting. Labels don't really matter.

The fact that FSD is jerky is irrelevant (for now) and it does not indicate that Tesla is behind. Given that Tesla is solving a more ambitious problem, we can say, "Yes, it's a little jerky, but it operates everywhere." We really only care if the jerkiness gets better over time, which it has rather quickly. This is an indication that they are making progress solving the driving task for the general case, but it doesn't really tell us how close they are.

Brad says, "It's better to get a car that can actually pull it off in a few cities then a car that drives badly in all cities. The latter has no value."

But Waymo hasn't shown it can pull it off in any city. Waymo has to figure out how to make a profit, and they are failing at that so far. It looks to me like they are far, far away from profitability. If they can't make a profit, then Waymo has no value.

Tesla will improve over time and even Waymo's director of engineering thinks Tesla will get to level 4 eventually. At that point, Tesla's system will be extremely valuable even if they are unable to launch a single robotaxi.

Anything that makes any claim to being any form of a self-driving system (prototype, beta or otherwise) that makes wrong turns and runs red lights and veers towards obstacles is going to get an F, if it does it frequently in just a few miles of driving. Hesitant driving, jerky driving, getting honked at -- those wouldn't be a failure as long as they are reasonably infrequent. Stopping for very long periods unsure of what to do or where to go -- that's an F if it's not quite rare.

So yes, jerky driving is not a failure, but it does indicate you are behind systems which drive comfortably, because that's a must-have for the final checklist.

While Waymo will not bother to make it, I strongly suspect they could make a car that "drives everywhere" but imperfectly, and better than Tesla drives everywhere. They have no wish to make that because it's not a useful thing to release.

Tesla's approach may indeed work some day. Most people think it's an uncertain, and probably much later, day. Except they would use maps, because it makes zero sense not to use them when they cost nothing but storage and bandwidth. Going without LIDAR is doable some day.

You can indeed debate how profitable a robotaxi business is. Tesla wants to get in that business too. But Waymo and many others feel that the robotaxi business comes first, and is much more likely to be profitable during that period than the consumer car business that comes much later. Waymo is making a driving system. They can partner with any car OEM if they have made the best one.

Maybe you need to try the FSD beta a little more. I've never seen it run a red light or come close to running a red light. And it doesn't tend to veer toward obstacles. If anything, it's too careful with obstacles and veers away too much.

Again, I disagree with you about jerkiness. This is not an indication Tesla is behind. It is just an indication that Tesla is taking a different approach from Waymo. I could just as easily say, "Well, put a Waymo in my unmapped neighborhood and see how poorly it drives. Tesla does much better, so Waymo is behind."

If Waymo could make a system that "drives everywhere" better than Tesla then they could sell that system to OEMs and make a fortune. It would be unbelievably useful.

It is impossible to tell who is ahead at this point because nobody has a viable system that really works. You haven't solved the problem until it is actually solved. The difference is that Waymo has a very limited time to prove that they can develop the technology and build a profitable business out of it. Tesla can keep iterating for another 10 years if that's what it takes.

This is a common misunderstanding. "It drove fine in my experience" means nothing. I know that's hard to accept if it impressed you. The only thing that would matter is "I drove it for a whole lifetime and I never had to intervene and it never had the sort of crash the police would get involved in." It's hard to comprehend what that means.

On the other hand, personal experience of errors does matter. Because if it makes that mistake, be it in the first 10 minutes (as in my case) or the first year, we know it is never going to make the lifetime. If it makes a dozen mistakes in an hour, you know it's not even close.

It's not what it can do, it's what it can't do, and in particular can't do fully reliably. This level says it's not yet a beta. A beta wouldn't be at the whole-lifetime level, but it would be getting closer to it.

Why do you think Waymo couldn't make a vehicle that drives badly on all roads, or in fact drives better than a Tesla on all roads? I am pretty sure they could make that, but they aren't going to bother, because driving without a map is a bug, not a goal.

Just as "It drove fine in my experience" means nothing, "It drove poorly in my experience" means nothing. And again, the label "beta" also means nothing. It's just a label.

Waymo could indeed make a vehicle that drives badly on all roads. Also, Tesla could make a vehicle that drives like Waymo on a mapped road. The point is, each company is taking a different approach and has different goals. They need to be judged on how close they are to meeting their own goals. You seem to be judging Tesla based on Waymo's goals.

The next goal for Tesla is not robotaxi. Robotaxi comes later. The next goal is for FSD to drive as well on city streets as it currently drives on the highway. We don't know how close they are to that goal but I am seeing improvement with each release. If they reach that goal, they will see the next level of monetization.

The only goal that matters for Waymo is robotaxi. Waymo has not yet shown it is anywhere close to having a profitable robotaxi. If you see signs that I'm wrong, I'd love to hear about it.

For both companies, it is next to impossible to know how close they are to reaching those goals. I'm not sure they even know themselves. We can't give either a fair grade.

No, while driving fine tells you nothing, driving poorly tells you almost all you need to know. Since it must drive well for a human lifetime, if it can't even go around the block it's very far from ready. You can't tell exactly how far, but you know it's very far.

Yes, Tesla's plan is consumer car first, robotaxi later. It's not a bad plan if you can actually do consumer car first. Consumer car is harder in many ways.

Note that it's OK on the highway, but far from robocar level. It's good ADAS level. If they said they felt they could get it soon to standby driver ("level" 3) that might be a credible statement. You can even pull off freeways without maps, but it's also easy to map them, and they will help, so why not? (Several of Autopilot's freeway crashes would have been prevented by maps, and the rest by LIDAR.)

No, Waymo is not focused on profit right now. Nice to have Google's money! They are betting on something that's a reasonable bet -- that if you get robotaxi working, it's better than collecting underpants.

I said from the beginning that progress in AI sneaks up on you. And I believe I am right about that. At Tesla AI day, they showed many new techniques that are not yet being used in the beta software. Any of them could cause a huge jump in capability.

And because progress is not incremental, you can't judge that the solution is far away from doing a few shaky laps around the block. FSD version 11 could be a massive leap forward. And that could get released any day.

But you also can't judge from dozens of smooth drives that the solution is close because you might be at a local maximum and not know it.

Google's funding is not infinite and neither is their patience. Already, Waymo has taken a lot of money from outside sources. If Google thought Waymo was anywhere close to profitability, Google would have provided all the cash and kept all the equity for themselves. So Waymo must indeed be focused on getting to profitability. From what I can tell, it looks like they are hemorrhaging money, and who knows how long that can go on?

But that's different from thinking it's inevitable that it will sneak up on you. Tesla's plan involves AI breakthroughs. They can come. They probably will come at some point in the future. Nobody has the magic 8 ball to name the date.

Tesla's strategy is to bet on those breakthroughs with a fairly mediocre camera suite. Other teams are also interested in and working on those breakthroughs too, but with better sensor suites and maps. Google doesn't just have money, they invent the stuff that Tesla uses when it comes to machine learning.

I never said success was inevitable for either company.

If you agree that success in AI can sneak up on you then you must agree that you can't do a fair evaluation of Tesla's progress based on a few trips around the block. What may look like failure can actually be something very close to success. The Wright Brothers crashed a lot of planes before they came up with the one that flew.

The fact that AI gets breakthroughs doesn't interfere with the ability to evaluate where it is today. It just increases the uncertainty about the future. It doesn't make the future impossible to talk about.

You certainly can evaluate the quality of work so far to judge how likely breakthroughs are for a team. You can also evaluate things like resources available (where Tesla scores well, of course.)

And in particular, you can look at two teams and say, "This team looks like they can do it with more hard work and no major breakthroughs" and "This other team needs some breakthroughs" and judge progress and opportunities for success that way.

All the well-funded teams are doing lots of machine learning work and experiments. The few things that Waymo has let out in that area blow the doors off what Tesla has let out. We don't know what happens inside, of course.

To understand what's happening inside you have to study clues that come from the inside. One of those clues is to see how well the team understands its problem and its own progress. Elon's ability to predict the progress of his own project has become a running joke, unfortunately.

You said, "The fact that AI gets breakthroughs doesn't interfere with the ability to evaluate where it is today. It just increases the uncertainty about the future."

You are dead wrong about that. Think about the last flight that preceded the successful Wright brothers plane. It crashed. Anyone looking from the outside would say that they were nowhere near having a real flying machine. But in reality, it just took a few tweaks to get it right on the next try.

What I'm saying is that in this case, the uncertainty of your observation is so high that it is impossible to make a fair assessment based on a few trips around the block. I would be just as wrong to give Tesla an "A" based on a few perfect trips around the block (which happens quite often, btw).

Also, I will point out that Waymo used to make the same predictions about success being right around the corner. And like Tesla, Waymo was wrong over and over. It's just that Elon never let failure squash his optimism. (Waymo's CEO gave up and left.) All the self-driving companies will continue to look like fools until they solve the problem.

Nothing I've seen from Waymo convinces me that they are any closer to meeting their respective goals than Tesla. Nothing I've seen from Waymo convinces me that they are any better at AI than Tesla. If you have some specific information that might convince me otherwise, I'd love to hear it.

I am curious, where have you seen Waymo be wrong over and over? It has made very few official pronouncements, precisely because it knows how hard it is to predict this sort of thing. I have never seen anybody make the kind of predictions Elon makes, with words like "certain" and "highly confident" used routinely in short term predictions that failed utterly. I don't know why he does this. Usually if the head of a project says he is certain it will release this year, they are supposed to know what they are talking about, know when to use the word certain.

Waymo has made more basic predictions. Like all software predictions they have missed some deadlines, but not by much. And in particular, when they have said "we will do X by date Y" they appear to have most, but not all of X on date Y. Tesla, being breakthrough dependent, still has almost none of X when date Y rolls around for them.

When you forecast something 5 years out, it is almost expected that this can slip a couple years. When you forecast it 6 months out, it is not expected to slip 5 years.

I think people who knew aeronautics would have looked at the Wright Brothers' failed flights and felt they were on to something. The public did not, but who expects that.

Anyway, I don't dispute it's hard to predict breakthroughs. Hard for Musk as for anybody, maybe harder. But you can debate if it's a single breakthrough or not -- it probably isn't -- and if it isn't you can get a sense of the quality.

You may be trying to argue, "nobody knows enough to give it a grade" and if you think a grade is a final pronouncement of where it might go, that could be a correct view. But that's not what the grade is. Rather, it's a contrast to the very common posts by newcomers who say, "This is amazing, I'm with Elon and think it's almost done."

I'm not sure that's true about "people who knew aeronautics". Those who did would have to take a really close look at what the Wright Brothers were doing. They would need to look at their calculations and the results of their wind tunnel tests. They would need deep knowledge of their previous glider tests. Then maybe they could make a good guess about how close they were to success. But even the best expert wouldn't be able to give them a grade from just watching a few failed test flights.

The same is true with FSD. To actually understand where they are, you need a deep knowledge of what Tesla is doing behind the scenes and the new techniques they are employing. You need to see the performance of alpha code which nobody outside of Tesla can see. You need access to things like disengagement statistics, which Tesla says is going down quickly, but they don't give the numbers.

But with all that, I'm not sure that even the insiders can tell where they will be in another year. Elon is now hedging by saying, "I would be shocked if we do not achieve Full Self-Driving safer than human this year." That's actually a pretty low bar if we are still talking about a system with human supervision.

So yes, I am arguing that "nobody knows enough to give it a grade".

I don't think anyone using the FSD beta thinks it's almost done. But for those who mistakenly believe that, a grade of "F" is not going to make the point. Your previous article about chasing the 9's makes that point a lot better.

By this standard you can't grade anything. You can certainly grade how it performs now, though. You might argue there's a large cone of uncertainty over where it will go (though it shows no sign of being on the verge of "suddenly all falling into place").

It's hard to tell with Elon, but if he means "Safer than a human while being supervised" that's a strange thing for him to mean, because it's not that hard to get to that level. He might backtrack and say, "I meant supervised" but that's a cop-out. Many times he has said he's talking about self-driving, not ADAS.

It's not good ADAS. I took a very short drive with 10.9 to the grocery store and back. I had to intervene 3 times to prevent a collision. 3 times! In 5.6 miles.

By this standard, you can grade something if you understand it well enough to know what you are observing.

I just found out that a follow-up with Tesla investor relations revealed that Elon meant "unsupervised." So it's actually a higher standard than I thought. Tesla just raised the price for FSD, and they really talked it up during the conference call. I'm still quite skeptical, but they are sounding more confident than ever. It's not just Elon; other Tesla reps on the call sounded confident as well.

I doubt that you prevented collisions. You don't know what would have happened without your intervention. Three interventions in 5.6 miles does sound high. How long have you been using the beta? I know that my interventions have gone way down as I've used it more and started to trust the system.

Elon recently reported that there have been no collisions using FSD Beta. That's pretty impressive given that there are almost 60,000 users. If we were having a near miss three times every 5.6 miles, I'm sure the news would be littered with "FSD caused a crash" stories.

If you want to see something really amazing (and scary), check out this short video. IMO, this is a case of someone who trusted FSD too much. But it does show that some people are letting FSD Beta do whatever it wants and they aren't crashing.
https://www.youtube.com/watch?v=eHkpBhxUnug

There are credible reports -- with video -- of minor impacts. Tesla tends to say "no airbag deployments" when they say "no accidents" and that's a very poor bar.

Yes, I can't know if the car would have avoided the accident. At Waymo, they know, because every intervention like that is re-run in the simulator to find out. I can't do that.

But in the 3 incidents on that recent drive (coming as a video soon) we saw:

  1. Stopped at a red light with heavy cross traffic in front of me, the vehicle lurched forward and was clearly about to have an impact. I hit the brakes hard; perhaps it would have too.
  2. Moving into a right-turn lane that had a green light, it continued at full speed even though several cars were stopped in the lane making their turns. Yes, it could have done a hard brake (as I did), but nobody is going to "trust it."
  3. One minute later, in a lane that is ending, it drives at full speed because it doesn't know the lane is ending (it has no map, and the heavy traffic makes the lane's end harder to see). The lane to the left is also quite full, and merging at speed would take great skill, or be impossible. I decide again not to trust it.

So in any situation it might be possible for it to recover because, hey, I managed it. But that's a poor standard. It's clear it was not time to trust it and that's a failure.

In that "holy balls" movie, this is something many humans will do to cross stopped traffic. We do it when it's clear to us the traffic won't resume. Tesla's side perception is not good enough for that yet, so yes, intervention would be smart there.

Remember, the standard is not that the system "might have handled it." It's confidence that it will always handle it, every time it's in that situation.
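
For the curious, here's a minimal sketch of the counterfactual replay idea mentioned above: re-run a logged intervention with the human takeover removed, and ask whether the planner alone would have avoided contact. Everything here is hypothetical toy code -- made-up names, a one-dimensional scenario -- not Waymo's (or anyone's) real tooling:

    # A hypothetical toy replay: roll a logged scenario forward using
    # only the planner, with the human takeover removed.
    from dataclasses import dataclass

    @dataclass
    class State:
        ego_pos: float    # ego position along the lane, meters
        lead_pos: float   # position of a stopped obstacle, meters

    def planner_accel(state: State) -> float:
        # Toy stand-in planner: brake hard once the gap drops below 10 m.
        return -6.0 if (state.lead_pos - state.ego_pos) < 10.0 else 0.0

    def replay_without_intervention(log: dict, dt: float = 0.1) -> str:
        """Roll the logged scenario forward using only the planner."""
        state, speed = log["initial_state"], log["initial_speed"]
        for _ in range(int(log["horizon_s"] / dt)):
            speed = max(0.0, speed + planner_accel(state) * dt)
            state.ego_pos += speed * dt
            if state.ego_pos >= state.lead_pos:
                return "collision"
        return "no collision"

    # One logged near-miss: ego at 15 m/s with a stopped car 30 m ahead.
    log = {"initial_state": State(0.0, 30.0), "initial_speed": 15.0, "horizon_s": 5.0}
    print(replay_without_intervention(log))   # -> "collision": the takeover was needed

In this toy scenario the planner reacts too late and the replay reports a collision -- exactly the kind of verdict such a re-simulation exists to deliver, and the kind of data an individual beta tester has no way to generate.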

You say, Tesla tends to say "no airbag deployments" when they say "no accidents" and that's a very poor bar.

But you have no evidence to back that up. And it's a pretty strong accusation. I expect better of a journalist.

I basically agree with the rest of your post.

I do expect Tesla to be able to make that "holy balls" move, but not on version 10. Newer versions will include object permanence and better predictions of cross traffic speed and trajectory. So the cameras will be able to look between cars just like a human would and know that the other side of the road is going to be clear. I suspect that version 10 didn't actually have good enough perception to make that maneuver, but version 11 might be able to pull it off safely every time.

This is well documented, by Tesla. Admittedly in their fine print, but it's very clear. Why do you say there is no evidence? Read any of Tesla's quarterly Autopilot safety reports; it's right there.

I apologize. The evidence you were working from is there, but it just wasn't cited. It's not just airbag deployments, but any active restraint, such as (I presume) seat belts. They say that, essentially, Tesla counts any accident over 12 mph.

So we have about 60,000 beta testers and still not a single accident above 12 mph. This even includes cases where you are on Autopilot and some other car is 100% at fault.

I think that's pretty good, but maybe Tesla is just getting lucky.

Other automated driving companies are having their fair share of crashes. The problem is in getting good data so that you aren't comparing apples and oranges. It would be impossible to grade these companies based on accident data. I really hope regulators put some standards in place so we can see real accident rates.

It's not entirely clear if they catch anything over 12 mph. Two accidents I have seen video of look like they might have been at greater speed, but would not trigger any active safety (sideswiping a curb and hitting a rubber bollard). The rubber bollard could easily have been a concrete one, with different results.
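
To make the two bars concrete, here's a toy illustration -- not anyone's real methodology, just the counting rules as described in this thread:

    # Fine-print standard discussed above: a crash counts when an airbag
    # or other active restraint deploys, roughly 12 mph and up per Tesla.
    def counts_in_safety_report(restraint_deployed: bool) -> bool:
        return restraint_deployed

    # The broader "ding" standard used elsewhere in this discussion:
    # any contact counts, even a curb sideswipe or a rubber-bollard
    # strike that never fires a restraint.
    def counts_as_ding(made_contact: bool) -> bool:
        return made_contact

    # A curb sideswipe at speed: contact, but no restraint deployment.
    print(counts_in_safety_report(False))   # False -- absent from the report
    print(counts_as_ding(True))             # True  -- still a ding

The gap between those two predicates is where incidents like the curb and bollard strikes disappear from the statistics.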

But yes, humans are pretty good at intervening -- some, of course, better than others. And FSD is poor enough now that you will not get too complacent. A bigger issue will arise when it gets better and people get more complacent, I suspect.

Companies have internal data to help them grade, but we don't see it.

Repetition with a different 3-mile loop.

Was the statement below from Mobileye a misspoken comment? Hard to believe there will be a robotaxi launch in 2022 given this statistic. Can you transcribe the interview?

The key that remains to be seen is just how good their software is. Shashua said they are still working at getting their system to 1,000 miles between accidents but they are confident they will get there soon. That’s not very good — you probably need to be 200 to 300 times better. We’ll be watching to see how they do.

Did VW just undermine Mobileye with the Bosch alliance? Three weeks after the Mobileye and VW CEO exchange, VW announced the Bosch alliance. Without more information to evaluate the statement today, maybe a follow-up discussion would help.

Bosch and Volkswagen Subsidiary Cariad Form Alliance for Volume Production of Level 3 AVs.

Herbert Diess:

We are acquiring the IP and capabilities to design our own software. And we are forming a world-class team with Bosch to join forces with our own software engineers who’ve been working on developing VW’s proprietary 2.0 software stack. Congrats to all who’ve made this happen!

The alliance nicely complements last year’s Hella Aglaia camera software acquisition and what we’re doing in the AD space with argoai and the ID BUZZ robotaxi.

In the end, it’ll be a game of vertical integration: From sensors to the latest and most precise road data, software and AI training loops for adequate perception and the compute power to take the right decisions in very diverse traffic situations worldwide.

If a vehicle has an accident every 1,000 miles, no company would continue testing with a fleet of 200 cars on the road. A statement from CEO Amnon Shashua after your interview, verifying that this is what he meant to say, is warranted and should have been part of the article. 200 cars driving 5 miles each is 1,000 miles. Whether those are real-world miles or simulator miles, the statement seems incorrect.

1,000 or 100,000 or 100,000,000?

The CEO may mean almost 10,000 miles.

He said 1,000 hours, but an earlier version of my article incorrectly quoted it as 1,000 miles. That's closer to 40,000 miles, quite a difference.

Sorry about that, I wrote down 1,000 miles when he said 1,000 hours -- very different.
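
For anyone checking the arithmetic, the conversion rests on an assumed average speed -- roughly 40 mph is what makes 1,000 hours come out near 40,000 miles:

    # The ~40 mph average speed is my assumption, chosen because it is
    # what makes 1,000 hours come out "closer to 40,000 miles" as above.
    ASSUMED_AVG_SPEED_MPH = 40
    hours_between_accidents = 1_000
    miles_between_accidents = hours_between_accidents * ASSUMED_AVG_SPEED_MPH
    print(miles_between_accidents)   # 40,000 -- 40x what a misread "1,000 miles" implies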

If self-supervised learning is the future of AI, why did Tesla enlarge the data annotation team to 1,000 people in the last 9 months? Isn't Andrej Karpathy, a leading self-supervised-algorithm proponent with access to an unlimited volume of data, resorting to classic supervised machine learning?

If you can make it work, you will love it. You can't always make it work.
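
To illustrate the usual shape of the answer, here is a minimal, hypothetical sketch (toy dimensions, random stand-in data, not Tesla's pipeline) of the common two-stage pattern: a self-supervised pretext task trains the encoder with no labels at all, but the task head is still fit on human-labeled targets -- which is where an annotation team comes in:

    import torch
    import torch.nn as nn

    encoder = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 32))

    # Stage 1: self-supervised pretext task -- reconstruct masked inputs.
    decoder = nn.Linear(32, 64)
    opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()))
    for _ in range(100):
        x = torch.randn(16, 64)                      # "unlimited" unlabeled data
        mask = (torch.rand_like(x) > 0.25).float()   # hide ~25% of each input
        loss = nn.functional.mse_loss(decoder(encoder(x * mask)), x)
        opt.zero_grad(); loss.backward(); opt.step()

    # Stage 2: supervised fine-tuning -- this is where annotators earn their keep.
    head = nn.Linear(32, 10)                         # 10 hypothetical classes
    opt2 = torch.optim.Adam(head.parameters())
    for _ in range(100):
        x = torch.randn(16, 64)
        y = torch.randint(0, 10, (16,))              # stand-in for human labels
        feats = encoder(x).detach()                  # encoder frozen for simplicity
        loss = nn.functional.cross_entropy(head(feats), y)
        opt2.zero_grad(); loss.backward(); opt2.step()

Stage 1 consumes unlabeled data; stage 2 still wants human labels, so a growing annotation team isn't in tension with a self-supervised strategy.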

Tesla's data-labeling group labels video in vector space. The group can process 10,000 video segments per week employing state-of-the-art algorithms. An interview with Karpathy discussing the labeling is on the net, on the Robot Brains podcast.

Mercedes safety vs. Tesla safety

Read the 150-plus-page "Safety First for Automated Driving" report; Google the title in combination with Mercedes.

FSD videos on curvy blacktop roadways do not exist.

Best guess as to how long the geofence limits of both Volvo Ride Pilot and Mercedes Drive Pilot to California will be in effect, if they are available in 2022?

60,000 enlisted FSD beta testers simply generate mileage for future arguments, and voices to stymie regulators. There are only about 300 engineers total in the FSD group, so their capacity to filter through this data is extremely limited.

AI and Data is a separate group from FSD.

The Self-Driving Coalition just renamed itself the Autonomous Vehicle Industry Association (AVIA) and is trying to distance the industry coalition from Tesla.

Carmakers: 9,600. Tesla: 133. Apple: 0.

"Detroit carmakers collectively have more than 9,600 dealerships scattershot across the United States versus Tesla's 133 as of early 2021."

Dr. Urtasun described the Waabi World simulator as a foundational proprietary tool that Waabi won't license out or share externally -- except with regulators, so they can test other self-driving systems. "This is our first big milestone," she said.

A Douma interview from 3 days ago discusses Karpathy's AI training of Tesla FSD. Karpathy avoids the use of photorealistic simulation in training.
