Tesla makes a custom neural network chip, is that wise?


Tesla announced it has built its own custom neural network processor to use in Autopilot 3.0 in 2019.

Tesla started out mainly using Mobileye's vision chip, but that relationship ended after the first Autopilot fatality. Since then they have been using NVIDIA GPUs in Autopilot 2.0, and now they plan to use their own ASIC.

This is an interesting but risky choice. When you look for hardware like this, one option is an ASIC (a custom IC) built specially for your problem. An ASIC does much better at the problem you design it for -- it's faster, it uses less power, and in very large quantities it can be cheaper.

The other option is a general purpose chip. The most common chips for neural networks today are GPUs. GPUs are really collections of general purpose processors (lots and lots of very simple ones), but they also spend a lot of their silicon on graphics functions a neural network can't make much use of. As general purpose computers, they do the neural network math in software, more slowly and using more power than dedicated hardware would. Because they are mass produced, they are often cheap, especially for something low volume like a car. (Cars are very low volume compared to consumer electronics -- millions of cars but billions of phones and laptops.)
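To see what both kinds of chip are actually racing to accelerate: the core of neural network inference is dense multiply-accumulate work. Here is a minimal illustrative sketch (not Tesla's or anyone's actual code) of one fully-connected layer, whose cost is dominated by a single matrix multiply:

```python
import numpy as np

# A toy fully-connected layer. The dominant cost is the matrix multiply:
# a GPU runs it as a general-purpose parallel kernel, while an ASIC can
# hard-wire exactly this multiply-accumulate pattern and nothing else.
def dense_layer(x, weights, bias):
    return np.maximum(0.0, x @ weights + bias)  # matmul + ReLU activation

rng = np.random.default_rng(0)
x = rng.standard_normal((1, 1024))       # one input vector
w = rng.standard_normal((1024, 4096))    # layer weights
b = rng.standard_normal(4096)

y = dense_layer(x, w, b)
print(y.shape)  # (1, 4096)
```

The multiply-accumulate count here -- 1024 × 4096 for a single layer, repeated over many layers and many camera frames per second -- is why dedicated multiply-accumulate arrays (the approach Google took with the TPU) can beat a general purpose chip on speed and power.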

The problem is that building an ASIC is expensive and time consuming. You have to guess right about what you will need, because it takes years to go from starting a brand new chip to putting it in production. If you guess wrong, you have sub-optimal hardware and you can't change it. Designers of general purpose chips, even GPUs, also have to make guesses about what sort of instructions and data paths to include, but most decisions happen later -- in software and microcode. GPU designers also have a lot of experience making their chips, so new revisions come out much faster, partly because they have a constant pipeline of new chips always in the works.

When it comes to neural networks, everybody uses GPUs today, but lots and lots of companies are trying to make chips dedicated to those neural networks. Even the GPU makers are doing such chips. Google started early with its TPU chip, which is optimized for its TensorFlow neural network tools; it used the TPU in its own servers and now offers it to the public. You can bet they have better ones in the pipeline.

Neural network chips will fall somewhere on a spectrum between fully custom (with hardware aimed at exactly how current neural networks are executed) and general purpose (designed to stay flexible for many different approaches).

If Tesla has made a chip, perhaps they feel that the way they are doing their networks will be so different from everybody else that the general purpose designs will be too inferior. That seems unlikely to me, but we don't know much about their internal plans.

The big risk comes here: What if they're wrong in deciding, 3 years in advance, what they are going to need? That's an easy thing to be wrong about in fields that move as fast as machine learning and robocars. If they are wrong, they don't die -- they just buy somebody else's chip that is closest to what they really need -- but they have wasted a lot of resources and a lot of planning.

I write this knowing no details about Tesla's chip -- so I hope we'll see more about what it does in the future. Tesla has made a big bet that computer vision plus radar will be at the center of its strategy. Elon Musk has famously called LIDAR a "crutch" which gets in the way of solving the real problem. There are rumours in the wind that Tesla's view on LIDAR has softened a bit, but for now they are constrained by the need to do things in a production car, and there are no production automotive robocar LIDARs shipping in 2018. Every other team presumes that such units will ship shortly. (And of course Waymo built their own, and since they have ordered 80,000 cars, it must be in production.)


Because they started on it 2 years ago, already have the chip driving in cars in the field in preproduction hardware, and have application results showing a 10x speedup with no power, footprint, or cost downside. There is no GPU on anybody’s roadmap with remotely comparable performance.

So yeah, apparently it was a wise decision.

Weren’t you saying, a few years ago, that there would be a $200 lidar for high volume vehicle applications shipping in 2017? What happened to that?

2 years ago (and more) lots of other companies also started work on different neural network oriented chips. Google of course already had them by that point, but for everybody else, including Tesla, developing a new chip takes years, and Tesla won't have its chip until 2019.

Quanergy did forecast that it would have a $250 lidar in 2017, but now says 2018. I don't recall saying that was a guaranteed prediction (no prediction in computers and hardware ever is), but the broad trend is correct -- there are now large numbers of companies developing more powerful and lower cost lidars using lots of different technologies.
