All about sensors: Advanced radar and more for the future of perception
Earlier this week I talked about many of the LIDAR offerings of recent times. Today I want to look at two "up and coming" sensor technologies: Advanced radar and thermal cameras.
I will begin by pointing readers to a very well done summary of car sensor technologies at EE Times, which covers almost all the sensor areas. For those tracking the field, it is a worthwhile resource.
Advanced radar
Robocars have used radar from the earliest days. It's not that expensive, and has many superhuman capabilities -- it sees through fog and all other forms of weather, it has very long range, and it tells you how fast every target is moving.
What's not to love? The resolution, which is very poor. Even a very good automotive radar today will tell you where a target is only within several degrees of azimuth (horizontal), and it is even worse in elevation. Radar is also noisy, and full of "multipath" returns that bounced off something else in the environment. (That can actually be a big feature, as when a signal from a car you can't see bounces off the road surface.)
Until Mobileye did it with cameras, car ADAS systems like adaptive cruise control and forward collision warning all used radar. The low resolution meant they sometimes could not be sure what lane a car was in, and in the early days strange actions by adaptive cruise controls were common. Radars also have a problem with fixed objects such as stalled cars. Their Doppler says they are stopped, and you're getting tons of radar returns from all the fixed objects of the world, which you mostly have to ignore. Which means you ignore the stopped cars and pedestrians too. Famously, Tesla's radar (like most automotive radars) ignored the strong reflections coming from a truck crossing the road because they had the Doppler of a fixed object -- resulting in a fatality when the driver didn't intervene as expected.
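To make that failure mode concrete, here is a minimal sketch (with made-up speeds and a hypothetical clutter threshold, not any real radar's logic) of why a simple Doppler filter that throws away everything standing still on the ground also throws away a truck crossing your path:

```python
# Minimal sketch: why a Doppler-based "ignore stationary clutter" filter
# also drops a crossing truck.  All numbers are illustrative assumptions.

EGO_SPEED = 30.0  # ego vehicle speed, m/s (~108 km/h)

# Each detection: (label, radial_velocity), where radial_velocity is the
# measured rate of change of range in m/s (negative = closing on us).
# A crossing truck moves sideways, so almost none of its motion shows up
# along the radar's line of sight -- it closes at roughly our own speed,
# exactly like a sign or a bridge would.
detections = [
    ("overhead sign",          -EGO_SPEED),        # fixed object
    ("car ahead, same speed",    0.0),             # moving with traffic
    ("stalled car in lane",    -EGO_SPEED),        # fixed object
    ("truck crossing the road", -EGO_SPEED + 0.3), # lateral motion barely shows
]

for label, v_radial in detections:
    ground_radial = v_radial + EGO_SPEED   # target's own speed along the beam
    stationary = abs(ground_radial) < 1.0  # crude clutter threshold, m/s
    print(f"{label:24s} ground-relative radial {ground_radial:+5.1f} m/s "
          f"-> {'ignored as clutter' if stationary else 'tracked'}")
```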
Efforts are underway to make radar with much more resolution. There are a few ways to do this.
- You can use much more bandwidth. That's not allowed in the usual radar bands, which are limited to 4 GHz, but there is some potential for ultra-wideband radar to get that bandwidth in the very high bands. (A rough sketch of what bandwidth and antenna size buy you follows this list.)
- You can have multiple radars and compare the returns from overlapping ones. This is how most automotive radars get their resolution today.
- You can sweep, not like the classic aviation radars, but by using a phased array antenna system and digital processing. As you "steer" the phased array beam in fine increments, you can get more information about where a target is.
- You can make a wider antenna array, as wide as the car, to gain some resolution.
- In some cases you can create a synthetic aperture from the movement of the car, though usually that only works when looking to the left or right, not directly forward. Still, when crossing fast streets you do need to look to the sides.
- You can use new, smarter software to decode radar signals and learn what they are reflecting off by looking not just at one reflection but at how the returns change over time. Neural networks are a new tool to help with that.
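For a rough sense of the physics behind the first and fourth items, here is a small sketch using the standard textbook relations: range resolution of roughly c/2B, and beamwidth of roughly wavelength over aperture. The 77 GHz band and the particular bandwidths and apertures are just illustrative assumptions:

```python
import math

C = 3.0e8  # speed of light, m/s

# More bandwidth means finer range resolution: delta_R = c / (2 * B)
for bw_ghz in (0.5, 1.0, 4.0):
    delta_r_cm = C / (2 * bw_ghz * 1e9) * 100
    print(f"{bw_ghz:3.1f} GHz bandwidth -> range resolution ~{delta_r_cm:.1f} cm")

# A wider antenna means a narrower beam: theta ~ lambda / D
wavelength = C / 77e9  # 77 GHz automotive band, wavelength ~3.9 mm
for aperture_m in (0.10, 0.50, 1.80):  # small module vs. car-width array
    theta_deg = math.degrees(wavelength / aperture_m)
    print(f"{aperture_m:.2f} m aperture -> beamwidth ~{theta_deg:.2f} degrees")
```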
I saw two companies at CES promoting these techniques. Metawave of Palo Alto is using a combination of beamforming with antenna arrays, antennas made from less-exotic metamaterials, and machine learning.
In Israel, Arbe Robotics has an approach using similar techniques. They claim 1 degree of horizontal resolution and 4 degrees in elevation. (Note: I have a small interest in Arbe through a venture fund.)
At the magic distance of 250m that you want for high speed highway driving, one degree spans around 4m, about the width of a highway lane, so it's right on the edge of being useful for spotting a stalled car in your lane -- you will hopefully not confuse that car with something stopped on the shoulder, or with a bridge or sign there. The 4 degree vertical resolution is more problematic: you don't want to confuse the car with a bridge or overhead sign above it.
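A quick check of that arithmetic (the 3.5 to 3.7m lane width is a typical figure, not from the article):

```python
import math

RANGE_M = 250.0  # the "magic" highway look-ahead distance

for beam_deg in (1.0, 4.0):
    span = RANGE_M * math.radians(beam_deg)  # cross-range footprint of the beam
    print(f"{beam_deg:.0f} degree beam at {RANGE_M:.0f} m covers ~{span:.1f} m "
          f"(a highway lane is roughly 3.5-3.7 m wide)")
```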
Further hope comes from neural networks. Some things are very obvious when you look at their pattern over time. For example, the legs of a cyclist are constantly moving back and forth, towards you and away, which shows up clearly in radar Doppler. The same is true for pedestrians. Identifying stalled cars will be more interesting.
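As a toy illustration of that idea, here is a sketch (with invented Doppler tracks, not real radar data) of how the oscillating Doppler of swinging legs separates a pedestrian from a stalled car, using nothing more than the spread of the signal over time:

```python
import math

# Hypothetical, simplified Doppler tracks (radial speed over the ground, m/s)
# sampled over roughly one second.  Legs and pedals swing towards and away
# from the radar, so their Doppler oscillates; a stalled car's stays near zero.
def leg_doppler(t):
    return 1.3 + 0.9 * math.sin(2 * math.pi * 2.0 * t)  # ~2 Hz leg swing

samples = [i * 0.05 for i in range(20)]
tracks = {
    "pedestrian (legs swinging)": [leg_doppler(t) for t in samples],
    "stalled car":                [0.0 for _ in samples],
}

for label, doppler in tracks.items():
    mean = sum(doppler) / len(doppler)
    spread = (sum((d - mean) ** 2 for d in doppler) / len(doppler)) ** 0.5
    kind = "articulated / alive" if spread > 0.3 else "rigid / static"
    print(f"{label:28s} Doppler spread {spread:.2f} m/s -> {kind}")
```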
Another big challenge is debris on the road, ranging from roadkill to tires to things that have fallen off trucks. This is still a research problem.
Radar is the only sensor that sees much of anything in thick fog. Humans drive (foolishly) in thick fog and robots will be more cautious -- perhaps too cautious, making humans say, "I am in a hurry, I will take the wheel." In certain areas, the ability to drive in poor visibility could be a differentiating factor.
Ground penetrating radar
I have talked about it before, but another interesting radar technique is to point the radar down into the ground and read reflections from the stones and gravel buried beneath the roadbed. These patterns are unique, so by reading them you can figure out just where you are on the road, even if the road is completely covered with snow.
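Here is a minimal sketch of how such localization could work, reduced to one dimension with synthetic data: slide the live scan along a stored subsurface "fingerprint" of the road and take the best normalized match. This is my own illustration of the general idea, not any vendor's actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stored "fingerprint" of subsurface reflections along 100 m of road,
# one value per 10 cm (a purely synthetic stand-in for a real GPR map).
road_map = rng.normal(size=1000)

# A live scan taken somewhere along that stretch: 5 m of readings starting
# at a position unknown to the matcher, plus some sensor noise.
true_offset = 623
live_scan = road_map[true_offset:true_offset + 50] + 0.2 * rng.normal(size=50)

# Slide the scan along the map and pick the best normalized correlation.
scores = [
    np.dot(live_scan - live_scan.mean(), seg - seg.mean())
    / (np.std(live_scan) * np.std(seg) * len(seg))
    for seg in (road_map[i:i + 50] for i in range(len(road_map) - 50))
]
best = int(np.argmax(scores))
print(f"estimated position: sample {best} (truth {true_offset}), "
      f"i.e. {best * 0.1:.1f} m along the road")
```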
Thermal cameras
For many years I have also been interested in the potential of thermal cameras. The main kind here is the microbolometer camera, a MEMS device that sees light in the range of 8 to 14 microns. This light is usually called thermal radiation, and its intensity depends on the temperature of the object emitting it.
Because they use emitted radiation, they don't care about changing lighting conditions which are the bane of regular vision. (LIDAR uses light emitted by the laser and also doesn't care.)
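Wien's displacement law shows why the 8 to 14 micron band is the right place to look: objects at everyday temperatures radiate most strongly right in that window. A quick calculation (the object temperatures below are rough assumptions for illustration):

```python
# Wien's displacement law: a warm body radiates most strongly near
# (2898 micron-kelvin) / temperature.
WIEN_B = 2898.0  # micron * kelvin

objects_c = {
    "human skin":          33.0,
    "warm tire":           45.0,
    "deer":                38.0,
    "asphalt, hot day":    50.0,
    "asphalt, cold night": -5.0,
}

for name, temp_c in objects_c.items():
    peak_um = WIEN_B / (temp_c + 273.15)
    print(f"{name:20s} ~{temp_c:5.1f} C -> peak emission near {peak_um:4.1f} microns")
```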
Consider some of the advantages:
- As noted, they work regardless of outdoor lighting -- equally well day or night -- and they are not bothered by glare when pointed into the sun. (Sunlight still causes temperature differences, but those change slowly.)
- Exposed human skin, with its particular temperature, can be readily identified.
- On cars that have been running a while, the tires and tailpipe (if not electric) are warm and obvious.
- Animals -- a major source of accidents -- are also quite visible, even hidden in bushes. Since most animals cross at dusk, they stand out well against the cooling background.
- They can see better through weather and light fog.
But there are some issues:
- They are much more expensive than regular cameras, though prices are coming down and will come down further
- Their resolution is much better than LIDAR's but much worse than that of cheap visible-light cameras
- High resolution units, with their night vision abilities, are export controlled and viewed as military devices in some cases
- They don't see through glass, so they must be mounted outside and independently protected from weather. (Regular cameras usually get mounted behind the rear-view mirror, where the windshield is cleared by the wipers.)
- Tires and tailpipes are not warm for the first minute or so of car operation
- In many places, the background temperature of the environment can reach the temperature of human skin
- They don't see traffic lights or turn signals (nor does LIDAR)
- Computer vision on thermal images still has its limitations when fully reliable operation is required
Even so, they offer another type of superhuman vision.
At CES, I met with AdaSky, which is the first maker of these cameras to promote their use for self-driving cars. Such cameras have been used in the past for some basic ADAS functions and to provide night vision to drivers, though only with modest success.
Fancier cameras
Another company I met at CES was pushing superior regular cameras. Epilog's approach is to take the image from a lens, split it in two, and project it onto two checkerboard arrays of low cost image sensors. With this approach, every pixel is picked up by a sensor in one or both of the arrays, allowing the creation of a very high resolution, wide aspect ratio image. This much resolution is actually more than most neural networks can handle, but it allows a digital zoom to get decent resolution on faraway objects.
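Here is a toy model of how I understand the checkerboard idea: two sensor arrays cover complementary tiles of the image plane, and the full-resolution image is reassembled by taking each pixel from whichever array saw it. The layout, tile size, and lack of overlap here are hypothetical simplifications, not Epilog's actual design:

```python
import numpy as np

# Toy checkerboard split-and-merge.  Two arrays each see complementary
# 2x2 tiles of the image plane, so together they cover every pixel.
H, W, TILE = 8, 8, 2
scene = np.arange(H * W, dtype=float).reshape(H, W)  # stand-in for the image

tile_index = np.add.outer(np.arange(H) // TILE, np.arange(W) // TILE)
mask_a = (tile_index % 2 == 0)   # array A sees the "black" squares
mask_b = ~mask_a                 # array B sees the "white" squares

view_a = np.where(mask_a, scene, np.nan)  # what each array actually captures
view_b = np.where(mask_b, scene, np.nan)

# Merge: take each pixel from whichever array saw it.
merged = np.where(mask_a, view_a, view_b)
assert not np.isnan(merged).any() and np.array_equal(merged, scene)
print("full-resolution image reassembled from the two checkerboard views")
```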
They claim that this seamless image is better than simply stitching images from multiple low cost cameras. It clearly is better, but it's not clear that it's necessary, or whether the artifacts of such joins would really be that bad on medium distance objects.