Affordable robocars -- will it be cameras or LIDAR?


There have been a wide variety of announcements of late giving the impression that somebody has "solved the problem" of making a robocar affordable, usually with camera systems. It's widely reported how the Velodyne LIDAR used by all the advanced robocar projects (including Google, Toyota and many academic labs) costs $75,000 (or about $30,000 for a smaller model), and since that's more than the cost of the car, it is implied that this is a dead-end approach.

Recent stories include a ride in MobilEye's prototype car by the New York Times, a number of reports of a claim from the Oxford team (which uses LIDAR today) that they plan to do it for just $150, and many stories about a Romanian teen who won the Intel science fair with a project to build a cheaper self-driving car.

I have written an analysis of the issues comparing LIDARS (which are currently too expensive, but reliable in object detection) and vision systems (which are currently much less expensive, but nowhere near reliable enough in object detection) and why different teams are using the different technologies. Central is the question of which technology will be best at the future date when robocars are ready to be commercialized.

In particular, many take the high cost of the Velodyne, which is hand-made in small quantities, and incorrectly presume this tells us something about the cost of LIDARs a few years down the road, with the benefits of Moore's Law and high-volume manufacturing. Saying the $75,000 LIDAR is a dead-end is like being in 1982, and noting that small disk drives cost $3,000 and declaring that planning for disk drive storage of large files is a waste of time.


Cameras or Lasers in the robocar

I will add some notes about Ionut Budisteanu, the 19-year-old Romanian. His project was great, but it's been somewhat exaggerated by the press. In particular, he mistakenly calls LIDAR "3-D radar" (an understandable mistake for a non-native English speaker), and his project was to build a lower-cost, low-resolution LIDAR, combining it with cameras. However, in his project, he only tested it in simulation. I am a big fan of simulation for development, learning, prototyping and testing, but alas, doing something in simulation, particularly with vision, is just the first small step along the way. This isn't a condemnation of Mr. Budisteanu's project, and I expect he has a bright future, but the press coverage of the event was way off the mark.


Great read Brad. LIDAR is good for driverless taxi fleets. They have all the liability, so the added costs shouldn't be a problem. They will pass the cost on to their passengers anyway.

Two IR cameras can fully construct a 3D image of the forward scene, detecting all objects and computing their actual size and direction of motion. Collisions can be predicted and a warning (alert) given or action taken under appropriate conditions. My website provides a lot of information.

Two cameras used for stereo, even with a longer baseline (which is problematic, as you need them behind the windshield wipers), are generally not considered reliable for doing 3D at longer distances. For urban driving, you want around 100m of reliable 3D perception, and for the highway, 200 to 250m is desired. SWIR LIDAR does not give you much more than 100m, but other LIDARs can go further.

If you have a camera system able to perform at 100m, tell us more about it.
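To see why long-range stereo is hard, consider the standard error model: depth Z = f·B/d for focal length f (in pixels), baseline B and disparity d, so depth uncertainty grows with the square of distance. A minimal sketch of the arithmetic follows; the 0.3 m baseline, 1400-pixel focal length and quarter-pixel matching error are illustrative assumptions, not figures from any particular system:

```python
def stereo_depth_error(z_m, baseline_m, focal_px, disparity_err_px):
    """Depth uncertainty dZ ~ Z^2 * e_d / (f * B) for a stereo pair."""
    return (z_m ** 2) * disparity_err_px / (focal_px * baseline_m)

# Quarter-pixel matching error, 0.3 m baseline behind the windshield:
print(stereo_depth_error(10.0, 0.3, 1400.0, 0.25))   # ~0.06 m at 10 m
print(stereo_depth_error(100.0, 0.3, 1400.0, 0.25))  # ~6 m at 100 m
```

Several metres of uncertainty at 100m, under even these generous assumptions, is why stereo alone is not trusted at the ranges described above.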

This time-of-flight sensor/camera from Toyota that was in JSSC a few months ago made me more optimistic:

The full paper is behind the IEEE firewall, but hopefully the abstract is free to read for all...

Understand that this is a 32-sensor array. The Velodyne has 64 sensors, but they are individual components -- putting it all on one chip is a good path to making it cheaper. The 320x96 image they describe is, I presume, the result of some sort of scanning, which is why it's 10 Hz: they would have to make 960 scans with the 32x1 array to do this.
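The scan arithmetic behind that guess can be checked in a couple of lines (the 320x96 resolution and 32-element array are from the part as described above; everything else is just division):

```python
# Checking the scanning arithmetic for the Toyota time-of-flight part.
pixels_per_frame = 320 * 96   # reported depth-image resolution
array_elements = 32           # sensing elements read out per laser shot

shots_per_frame = pixels_per_frame // array_elements
print(shots_per_frame)        # 960 shots to cover one frame
print(shots_per_frame * 10)   # at 10 Hz, 9600 shots per second
```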

As noted, today's high resolution LIDARs tend to be done with a large array of individual lasers and an array of photodiodes. Each photodiode is aimed where the laser will fire so that you can read them all at once.

Although it is presumably short-ranged, it can only be good news for robotics advancement that the next Xbox's Kinect has a high-resolution flash LIDAR with mm accuracy; bringing such a device into cheap mass production will be great for robotics experiments. The PS4 also has a stereoscopic camera, ensuring that some of the large and established game industry's time and money will be going into 3D robot vision.

Great read Brad. Thanks for your insights. You mention "advanced localization" in passing. Can you please elaborate on the trends in advanced localization: technologies, pros/cons, challenges, price points, requirements for HD maps, etc.?

Big fan of your blog Brad!

Clearly LiDAR is much more robust. Cameras can be used for semi autonomous systems, but do you see any way that a fully autonomous car can function without LiDAR (using only a camera + a sensor suite)? In your opinion is that at all possible? If not today, do you think this could happen in the next 5 years?

It's not possible today -- not if you want to go more than about 25mph, and difficult even under that speed. You just are not going to be reliable enough to run unmanned.

In the future? It's obviously possible for vision to work, since that's what humans do. (And not even stereo, we can drive with one eye.) However, we do that by having an incredible pattern matcher and classifier and "understander" that is vastly beyond any computer system today. We see a pedestrian and we don't just know what they are and how far away they are and which way they are going, but much more.

Some day, there will be vehicles with just cameras. Nobody knows when that day is, because it requires not just evolutionary progress, but breakthroughs. Perhaps it comes in less than 3 years, though most would doubt that. If not, the first cars that go out are going to need LIDAR, because nothing else does what you need.

Certain low-speed applications -- valet parking, low-speed shuttles -- might work with just a camera, or camera+radar+ultrasonic.

Hello Brad,
At first I want to thank you for all the sharing of knowledge and visions with us.
I'd like to know your opinion about the following:

Tesla's Elon Musk has stated that their autonomous cars won't need LIDAR in the future, not even for the highest HAD levels. Yet there are many manufacturers miniaturizing LIDAR and also making low-cost, small solid-state (and less precise) models. I asked TomTom about this, and they also told me that due to the high cost of LIDAR nowadays, they think they can handle self-driving cars with the use of cameras only. I don't know if you're familiar with their localization product called RoadDNA, which is a local reference made with LIDAR and then transformed into a 2D picture.
My question is: if it is obvious that Lidar isn't needed, why are there so many initiatives in making Lidar smaller and cheaper? It just doesn't make sense.

First of all, it is far from obvious that you can do it without LIDAR. Certainly nobody is even close to doing so today. There are those who hope they can do it without LIDAR, but in truth, it's a silly hope, because even if you could do that, it still makes no sense to not use LIDAR and do even better, when LIDAR is assured to get cheap very soon. LIDAR will be cheap in just a couple of years. Cameras will some day be good enough on their own, but not before LIDAR is cheap. Why make a car less safe and capable than you can just to save a few hundred dollars? Perhaps in the distant future, but not today.

It is true that we seem to be getting closer, thanks to deep learning, to boosting the ability of cameras. But closer is still not close. Not even remotely close.

What do you think of Tesla Autopilot? No lidar.

The Tesla is great, but it's a highway autopilot that requires human supervision. A full self driving car is orders of magnitude harder, and today needs LIDAR. Perhaps someday it won't.

What is your opinion of 3D cameras, or more precisely, depth information synchronized with IR or RGB feed? Isn't this a sweet spot, giving rise to interesting algorithms?

Yes, sensor fusion of some depth source and RGB is indeed worthwhile and many researchers have used this.

Thanks for the wonderful article! I'm very puzzled why LIDAR cannot give accurate speed information in autonomous driving systems, while LIDAR police guns could measure speed quite well. Would you mind giving me some pointers why this is the case? Many Thanks!

You can do it, but it's more challenging, and for LIDAR you would need to do it on all your lasers. There are other ways to measure the speed of things if you have a point cloud; they are not as fast and easy as Doppler would be, but they are cheaper.
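One such point-cloud approach is simply to track an object between successive scans and divide its displacement by the scan interval. A rough sketch, assuming the object's points have already been segmented out of each scan (the function name and inputs are hypothetical, for illustration only):

```python
import numpy as np

def object_speed(points_t0, points_t1, dt):
    """Rough speed estimate for one tracked object: displacement of its
    point-cloud centroid between two scans, divided by the scan interval.
    Assumes the same object was segmented out of both scans."""
    c0 = np.mean(np.asarray(points_t0, dtype=float), axis=0)
    c1 = np.mean(np.asarray(points_t1, dtype=float), axis=0)
    return np.linalg.norm(c1 - c0) / dt
```

Unlike a Doppler return, this gives you nothing until you have at least two scans and a correct association between them, which is part of why it is slower and less direct.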

Thanks so much for the great article. It summarized driverless sensor issues very well. Much of the article was spent on the high cost of LIDAR. The $500 LIDAR has since been announced, so much of that can be updated.

And my pals at Quanergy have a $250 unit with 8 solid state beams coming this year as well. Not much detail on the Velodyne yet.

Would love to hear your thoughts on this argument: If every car has LiDAR on the road, then each individual LiDAR may get confused by the signals from other LiDARs

Any validity to this?

That does not happen. There can be occasional mild interference, but it's not hard to deal with. A LIDAR sends a laser pulse at a very specific spot and waits for one microsecond with a lens focused on that specific spot to see the return pulse. To interfere, another LIDAR would have had to shine its laser on that same spot at the exact same microsecond. It can happen, but it's a one-in-a-million thing.
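That one-in-a-million intuition can be sketched as a back-of-envelope product of three factors. All three numbers below are illustrative assumptions, not measured values for any real unit:

```python
def interference_probability(spot_fraction, window_s, other_rate_hz):
    """Per-return chance that a second LIDAR fires into the same spot
    during our listening window (assumes independent, uniform firing)."""
    return spot_fraction * window_s * other_rate_hz

# One spot out of ~100,000 the scene divides into, a 1-microsecond
# listening window, and a neighbour firing a million pulses per second:
print(interference_probability(1e-5, 1e-6, 1e6))  # 1e-05 per return
```

Even under these pessimistic assumptions, an interfering return is a rare, isolated bad point, which is why it is easy to filter out.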

Brad, outstanding report! Can I ask what date it was published? Thank you, Bob

The best analysis I've seen, Brad. Do you or anyone on this forum know anything about the potential of a newly commercialized technology that exploits the polarization of light waves reflected off surfaces to ascertain the 3D shape of objects? It's a passive system that supposedly can use any light frequency. Vision Systems Design 2017 Platinum Award Winner: Teledyne DALSA (Waterloo, ON, Canada): "Polarization. Line scan polarization cameras using nanowire-based micropolarizer filters have significant advantages over area-based cameras using the same technology. Teledyne DALSA's Piranha4 line scan polarization camera provides improved image quality and enables high-speed, real-time detection of birefringence, stress, surface roughness, film, and other physical properties that cannot be detected with conventional imaging in industrial environments."

I have not looked at them (though they are from one of my former hometowns), but I get the impression this does not see the world in 3D, but rather determines surface types better. That can be quite useful in segmenting an image, telling cars from background, etc., as a step toward a form of 3D, but I have not heard about a lot of use of this in robots.
