Tricking LIDARS and robocars

Much has been made in the press of Jonathan Petit's recent disclosure of an attack on some LIDAR systems used in robocars. I saw Petit's presentation on this in July, but he asked me to keep it confidential until their paper was released in October. Since he has now decided to disclose it, there has been a lot of press, with a mix of truth and misconceptions.

There are many security aspects to robocars. By far the greatest concern would be compromise of the control computers by malicious software, and great efforts will be taken to prevent that. Many of those efforts will involve having the cars not talk to any untrusted sources of code or data which might be malicious. The car's sensors, however, must take in information from outside the vehicle, so they are another source of compromise.

There are ways to compromise many of the sensors on a robocar. GPS can be easily spoofed, and there are tools out there to do that now. (Fortunately, real robocars will only use GPS as one clue to their location.) Radar is also very easy to spoof -- far easier than LIDAR, Petit agrees -- but their goal was to see if LIDAR is vulnerable.

The attack is a real one, but at the same time it's not, in spite of the press, a particularly frightening one. It may cause a well-designed vehicle to believe there are "ghost" objects that don't actually exist, so that it might brake for something that's not there, or even swerve around it. It might also overwhelm the sensor, so that the system concludes the sensor has failed, and thus the car would go into a failure mode, stopping or pulling off the road. This is not a good thing, of course, and it has some safety consequences, but it's also a fairly unlikely attack. Essentially, there are far easier ways to do these things that don't involve the LIDAR, so it's not too likely anybody would want to mount such an attack.

Indeed, to do these attacks, you need to be physically present near the target car, and you need a solid object that's already in front of the car, such as the back of a truck it's following. (It is possible the road surface might work.) This is a higher bar than attacks which might be done remotely (such as computer intrusions) or via radio signals (such as with hypothetical vehicle-to-vehicle radio, should cars decide to use that tech).

Here's how it works: LIDAR works by sending out a very short pulse of laser light, and then waiting for the light to reflect back. The pulse is a small dot, and the reflection is seen through a lens aimed tightly at the place the pulse was sent. The time it takes for the light to come back tells you how far away the target is, and the brightness tells you how reflective it is, like a black-and-white photo.
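
To make the time-of-flight arithmetic concrete, here is a minimal sketch in Python; the function name and structure are mine, for illustration only:

    # Time-of-flight: the pulse travels to the target and back, so the
    # one-way distance is half the round trip at the speed of light.
    C = 299_792_458.0  # speed of light, m/s

    def distance_from_return(round_trip_s):
        """Distance implied by a return arriving round_trip_s after the pulse."""
        return C * round_trip_s / 2.0

    print(distance_from_return(1e-6))  # a 1 microsecond round trip => ~150 m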

To fool a LIDAR, you must send another pulse that comes from, or appears to come from, the target spot, and it has to come in at just the right time: before (or on some designs, after) the real return from what's actually in front of the LIDAR arrives.

The attack requires knowing the characteristics of the target LIDAR very well. You must know exactly when it is going to send its pulses before it sends them, and thus precisely (to the nanosecond) when a return reflection (a "return") would arrive from a hypothetical object in front of the LIDAR. Many LIDARs are quite predictable: they scan a scene with a rotating drum, and you can watch the pulses coming out and know when the next ones will be sent.
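
A toy model shows just how predictable such a unit can be. The rotation and fire rates below are invented for illustration, not any real model's spec; the point is that once an attacker has observed one pulse and the rate, every future pulse time is simple arithmetic:

    # Toy model of a spinning LIDAR that fires pulses at a fixed rate.
    REV_HZ = 10.0              # rotations per second (assumed)
    PULSES_PER_REV = 100_000   # pulses fired per rotation (assumed)
    PERIOD_S = 1.0 / (REV_HZ * PULSES_PER_REV)  # time between pulses

    def predict_pulse_time(t_observed_s, n_ahead):
        """Predicted emission time of the n-th pulse after an observed one."""
        return t_observed_s + n_ahead * PERIOD_S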

In the simplest version of the attack, the LIDAR is scanning something like a wall in front of it. There are no such walls on the highway, but there are things like signs, bridges and the backs of trucks and some cars.

The attack laser sends a pulse of light at the wall or other object, timed so that it hits the wall in the right place but earlier than the real pulse from the target LIDAR. This pulse will then bounce back, enter the lens of the LIDAR, and make it appear that there is something closer than the wall. (The legitimate pulse will also bounce back and arrive later, but many LIDAR designs might ignore that second return.)
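
Here is a sketch of the timing arithmetic, assuming for simplicity that the attacker's laser sits right beside the victim's LIDAR; the names and parameters are mine:

    C = 299_792_458.0  # speed of light, m/s

    def attack_fire_time(victim_pulse_s, wall_range_m, ghost_range_m):
        """When to fire at the wall so the reflection reaches the victim's
        lens at the moment a real object at ghost_range_m would have
        produced a return. Assumes the attacker is co-located with the
        victim, so the attack pulse's round trip is 2 * wall_range_m / C."""
        ghost_return_s = victim_pulse_s + 2.0 * ghost_range_m / C
        return ghost_return_s - 2.0 * wall_range_m / C

    # Ghost at 20 m against a wall 50 m away: fire about 200 ns *before*
    # the victim's own pulse -- hence the need to predict its timing.
    print(attack_fire_time(0.0, 50.0, 20.0))  # ~-2e-7 seconds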

The attack pulse does not have to be bright; it should be similar to the pulse from the LIDAR so that the reflection looks the same. The attack pulse must be at the very same wavelength, and launched at just the right nanosecond.

If you send out lots of pulses without good timing, you won't create a fake object, but you can create noise. That noise would blind the LIDAR to the wall in front of it, but would be very obviously noise. Petit and his team tested this noise attack successfully.

The fancier attack knows the timing of the target LIDAR perfectly, and paints a careful series of pulses so it looks like a complex object is present, closer than the wall. They were able to do this on a small scale.

This attack can only make the ghost object appear in front of another object, like another vehicle, or perhaps a road sign or bridge. It could also be reflected off the road itself. The ghost object, being closer than the road in question, would appear to be higher than the road surface.

(There are LIDAR designs which favour the strongest pulse or the last pulse, or even report multiple returns. A multiple-return design would probably be able to detect a spoof. A last-return design -- if any exist -- could allow the creation of a false object behind a real object, or even hide the real object from view. The latest Velodyne models offer a choice of whether you wish the strongest or last return, or even both.)

The author also spoke of an attack from in front of the target LIDAR, such as from a vehicle further along the road. Such an attack, if I recall correctly, was not actually produced, but is possible in theory. In this attack, you would shine a laser directly at the LIDAR. This is a much brighter pulse, and in theory it might shine on the LIDAR's lens from an angle, yet be bright enough to generate reflections inside the lens which would be mistaken for the otherwise much dimmer return pulse. (This is similar to the bright spots known as "lens flare" when a typical camera shoots close to the sun.) In this case, you could create spots where there is nothing behind the ghost object, though it is unknown at how wide an angle from the attack laser (which will itself appear as a real object) you could create the flare pulse. The flare pulse would have to be aimed quite precisely to hit the lens, and not hit other lenses if you want to paint a complex object.

As noted, all this can do is create a ghost object, or create noise that temporarily blinds the LIDAR. The researchers also demonstrated a scarier (but less real) "denial of service" attack against the LIDAR processing software from the LIDAR's manufacturer. This software's job is to take the cloud of 3-D points returned by the sensor and turn it into a list of perceived objects. Cars like Google's and most others don't use the standard software from the manufacturer.

The software they tested here was limited, and could only track a fixed number of objects, I believe around 64. If there are 65 objects (which is actually quite a lot), it will miss one. This is a much more serious error, called a false negative: there might be something right in front of you, and your LIDAR would see it, but the software would discard it, so it would be invisible to you. Thus, you might hit it if things go badly.

The attack involves creating lots of ghost objects, so many that they overload the software and go over this limit. They demonstrated this, but again, only on fairly basic software that was not designed to be robust. Even software that can only track a fixed number of objects should detect when it has reached its limit and report that, so the system can mark the result as faulty and take appropriate action.
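
The fix described in that last sentence is cheap. Here is a hypothetical sketch: the 64-track limit comes from the software they tested, but everything else is invented for illustration:

    MAX_TRACKS = 64  # capacity of the tested software, per the article

    def update_tracks(detections, tracks):
        """Naive fixed-capacity tracker. The defensive detail: when capacity
        is hit, report saturation rather than silently dropping objects, so
        the rest of the system can mark the frame as unreliable."""
        dropped = 0
        for det in detections:
            if len(tracks) < MAX_TRACKS:
                tracks.append(det)
            else:
                dropped += 1
        saturated = dropped > 0  # caller should fail safe when True
        return tracks, saturated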

As noted, most attacks, even an overwhelming number of ghost objects, would only cause the car to decide to brake for nothing. This is not that hard to do. Humans actually do this all the time, braking by mistake, or for lightweight objects like tumbleweeds, animals dashing across the road, birds, or even mirages. You can make a perfectly functioning robocar brake by throwing a ball on the road. You can also blind humans temporarily with a bright flash of light or a cloud of dust or water. This attack is much more high-tech and complex, but amounts to the same thing.

It is risky to brake suddenly on the road, as the car behind you may be following too closely and hit you. This happens even if you brake for something real. Swerving is a bit more dangerous, and should normally only be done when there is very high confidence that the path being swerved into is clear. Still, it's always a good idea to avoid swerving. But again, you could also trigger this with the low-tech example of a balloon thrown on the road.

It is possible to design a LIDAR to return the distance of the first return it sees (the closest object, after all) or the last (perhaps the most solid). Some may report the strongest, or report multiple returns. The Velodyne used by many teams reports only one return.

If a LIDAR reports the first, you can fake objects in front of the background object. If it reports the last, more frighteningly, you can fake an object behind the background and possibly hide the background (though that would be quite difficult and require making your fake object very large). If it reports the strongest, and you are not as worried about eye safety, you can always be the strongest, and put your fake object in either place.
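
One way to picture the difference is the sketch below; this is no vendor's actual firmware, just an illustrative policy function over (range, intensity) pairs I made up:

    def select_return(returns, policy="first"):
        """returns: list of (range_m, intensity) pairs from one pulse.
        The policy determines where a spoof can place a ghost: 'first'
        favours fakes in front of the real object, 'last' behind it,
        'strongest' wherever the attacker can outshine the real echo."""
        if not returns:
            return None
        if policy == "first":
            return min(returns, key=lambda r: r[0])
        if policy == "last":
            return max(returns, key=lambda r: r[0])
        if policy == "strongest":
            return max(returns, key=lambda r: r[1])
        raise ValueError("unknown policy: " + policy)

    # A real wall at 50 m plus a spoofed return at 20 m:
    print(select_return([(50.0, 0.4), (20.0, 0.5)], "first"))  # ghost wins
    print(select_return([(50.0, 0.4), (20.0, 0.5)], "last"))   # wall wins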

How to defend against it

Because this attack is not that dangerous, makers of LIDARs and software may not bother to protect against it. If they do bother, there are a number of potential fixes:

  • If their LIDAR is not predictable about when it sends pulses, the attack can only create noise, not coherent ghost objects. The noise will be seen as noise, or as an attack, and responded to directly. For example, even a few nanoseconds of random variation in when pulses are sent would cause any attempted ghost object to explode into a cloud of noise (see the sketch after this list).
  • LIDARS can detect if they get two returns from the same pulse. If they see that over a region, it is likely there is an attack going on, depending on the pattern. Two returns can happen naturally, but this pattern should be quite distinctive.
  • LIDARS have been designed that scan rather than spin. They could scan in an unpredictable way, again reducing any ghost objects to noise.
  • In theory, a LIDAR could send out pulses with encoded data, and expect to see the same encoded data back. This requires fancier electronics, but would have many side benefits -- a LIDAR would never see interference from other LIDARs, and might even be able to pull its signal out of the background illumination (i.e. the sun) with a better signal-to-noise ratio, meaning more range. However, total power output in the pulses remains fixed, so this may not be doable.
  • Of course, software should be robust against attacks, and detect their patterns should they occur. That's a relatively cheap and easy thing to do, as it's just software.
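
To illustrate the first defence on the list, here is a small simulation of what a few nanoseconds of jitter does to a spoofed point; the jitter amount and ghost range are arbitrary choices of mine:

    import random

    C = 299_792_458.0  # m/s

    def spoofed_range_with_jitter(ghost_range_m, jitter_ns):
        """The attacker times the fake return for the *predicted* pulse time.
        If the LIDAR randomly delays each pulse by 0..jitter_ns, the fake
        return's apparent round trip is off by that delay."""
        delay_s = random.uniform(0.0, jitter_ns * 1e-9)
        apparent_round_trip_s = 2.0 * ghost_range_m / C - delay_s
        return C * apparent_round_trip_s / 2.0

    # With just 5 ns of jitter, a "20 m" ghost scatters over ~0.75 m of
    # depth from point to point -- a cloud of noise, not a solid surface.
    print([round(spoofed_range_with_jitter(20.0, 5.0), 3) for _ in range(5)])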

Why it's not that scary

In general, I don't raise great alarm over this attack because:

  • It requires physical equipment near the car. As such:
    • It does not scale, at most affecting only the cars close to it.
    • It is hard to do anonymously; you may get caught doing it.
  • It is somewhat complex to do, and there are less complex attacks that do something similar.
  • It gains the attacker very little of value; mainly the jollies of having done it.

This is true of most localized attacks, including spoofing or jamming of other sensors (radar, GPS, cameras) or even throwing balls in front of cars. Even V2V attacks, which can work over a larger range, are still localized. Far greater concern should be given to remote computer intrusion attacks, which can be global, anonymous, and can gain the attacker something.

What about LIDAR interference

This topic brings up another common question about LIDARs, which is whether they might interfere with one another when every car has one. The answer is that they can interfere, but only minimally. A LIDAR sends out a pulse of light, and waits about a microsecond to see the bounce back. (A return after a full microsecond means the target is 150 meters away.) In order to interfere, another LIDAR (or attack laser) has to shine on the tight spot being watched by the LIDAR's return lens during that exact same microsecond. Because any given LIDAR might be sending out a million pulses every second, this will happen -- but rarely, and mainly in isolated spots. As such, it's not that hard to tune it out as noise. To our eyes, the world would be painted with lots of laser light on a street full of LIDARs, but to the LIDARs, which are looking for a small spot to be illuminated for less than a nanosecond during a specific microsecond, the interference is much smaller.
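
A back-of-envelope estimate, using the paragraph's own numbers plus one figure I invented (SPOT_FRACTION, the chance another unit's beam happens to land on the tight spot our lens is watching):

    PULSES_PER_S = 1_000_000  # the other LIDAR's fire rate (from the text)
    WINDOW_S = 1e-6           # our listening window per pulse (from the text)
    SPOT_FRACTION = 1e-4      # assumed chance its beam hits our watched spot

    # In time alone the windows overlap constantly (the first product is 1),
    # but the spatial factor makes actual corruption rare.
    p_per_window = PULSES_PER_S * WINDOW_S * SPOT_FRACTION
    print(p_per_window)  # ~1e-4: isolated stray points, easy to reject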

Radar is a different story. Radar today is very low resolution, and it's quite possible for somebody else's radar beam to bounce off your very wide radar target at a similar time. Auto radars are not like the old "pinging" radars used for aviation. They send out a constantly changing frequency of radiation, and look at the frequency of the return to figure out how long the signal took to come back -- they must also solve for how much that frequency changed due to the Doppler effect. This gives them some advantage, as two radars using this technique should differ in their patterns over time, but interference is still a bigger problem for radar. Attacking radar is also much easier, because you don't need to be nearly as accurate and you can often predict the radar's pattern. Radars which randomize their pattern could be robust against both interference and attack.
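
Here is a sketch of the range math for such a chirped ("FMCW") radar, ignoring Doppler by assuming a stationary target; the sweep parameters are illustrative, not any product's spec:

    C = 299_792_458.0    # m/s
    BANDWIDTH_HZ = 1e9   # chirp sweep width (assumed)
    CHIRP_S = 1e-3       # chirp duration (assumed)
    SLOPE_HZ_PER_S = BANDWIDTH_HZ / CHIRP_S

    def range_from_beat(beat_hz):
        """The echo's delay shows up as a frequency offset (the "beat")
        between the transmitted chirp and the return: delay = beat / slope."""
        delay_s = beat_hz / SLOPE_HZ_PER_S
        return C * delay_s / 2.0

    print(range_from_beat(1e6))  # a 1 MHz beat => ~150 m
    # An attacker who knows the slope can synthesize a beat for any range;
    # randomizing the chirp per frame, as suggested above, breaks that.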

Comments

"You can make a perfectly functioning robocar break by throwing a ball on the road" -- did you mean meake it brake?

It is the "unknown unknowns" that will be important to plan for. Who would have thought that creating a system for generating black inky smoke, to be roiled onto an adjacent car, would have any appeal to a truck owner.....but apparently it does. As an example, perhaps a vehicle ahead of you erects a piece of plywood above its roofline, such that it would strike an oncoming bridge if it were "real". It "blows down" at the last moment, and doesn't strike the bridge, but your onboard intelligence says, "Probably, an accident has occurred directly in front of you, as object taller than overpass was on a collision course. Pull over." The accident doesn't materialize, traffic is still moving fast. Sit tight? Resume travel?

I think a creative person can come up with many of these "confusing dangers". What if, on a two-lane road, the vehicle ahead of you drives in the "wrong lane" for an extended period? Pull over? Go slower? Ignore?

There will always be dicks.

I had not imagined cars might measure the car in front of them to judge if they will hit a bridge -- bridge hits do happen but are extremely rare -- but I get what you're saying, because you would not want to be behind such a vehicle when it happens. You would probably also honk loudly at it.

But in any event, the normal instinct will be to play safe. If events are rare, and you slow down on these rare events, I am not sure that's a big problem for anybody.

On the vehicle in front of you, I would not be bothered unless they did it with a blind turn coming up (which is crazy so it's not going to happen) or if I saw oncoming traffic heading for them, in which case I would back off as a human or as a robot. The car's radar will see oncoming traffic about 250-300m out if the road is straight, but I can't imagine the car doing anything more than I would do as a human seeing this.

A couple of things. Robocars (I am trying to promote a better name, something to get rid of "robot" connections) will have a lot of mapped landscape and features available to them for building the decision tree that will loop and refresh continuously, with a lot more information than we currently use as people...especially about surrounding vehicles and the landscape. So the robocar may be making decisions that a human won't even consider. It may know....it =will know=, that a block away, approaching by a cross-street, is that out-of-control trolley going 80 mph, and it will arrive in five seconds, though it cannot be seen. In fact, it cannot be seen, because in reality, it is some "dick" (as you describe) who has rigged up a flying drone giving off the signature of a five ton trolley car. Reading the signature of nearby vehicles is oh so useful, but can also have "unknown unknown" consequences.

Another problem may be comparative landscapes: that is, what is in the roadmap in memory, and what may be viewed by in-car LIDAR and radar. Our prodigious memory and evaluation system will be very tough for autonomous systems to match. For instance, if a line of telephone poles line a street, and LIDAR has detected one leaning, mismatched to the roadmap in memory, is that lean of the pole worthy of ignoring, or, instead, stopping and changing routes? If a pedestrian is standing there, with a 20-foot aluminum pole, is it because he wishes to drop it on our vehicle, or because he is installing it, ten feet behind his current position, as a flagpole. What does the robocar's memory know about flagpoles? The guy holding it? How does one make a judgement? And, how fast or slow does the robocar process these unusual circumstances? And does the judgment of one robocar, create a ripple effect, with all nearby vehicles reacting to the reaction of one robocar?

This is why my personal vision of autonomous vehicles begins with "Passenger-less" vehicles: small, two-wheeled autonomous vehicles (wheels side-by-side, like a Segway...not motorcycle-style) that make point-to-point deliveries, traveling 5-10mph, between 4AM and 6AM, to grocery stores and retail (and home) sites, from distribution centers. The value of the goods, and the speed of delivery, increase by increments, as "mapping" and "decision knowledge" grows and grows...'moar flagpoles!' The system needs to gain real-time systemwide evaluation data.

Since I don't believe cars will talk to cars for quite a long time, I am not sure how a drone would give off such a signal. However, I do agree that people will try to play tricks on cars, and they will succeed. I believe they will be few in number, and effort will be made so that tricks do not cause safety problems, and it is tolerable if the griefers can cause cars to stop for no reason from time to time.

That's because that's already the case today. Anybody can disrupt traffic and easily fool and spook human drivers. Tons of traffic jams are caused by almost anything untoward on the road. Something as simple as a crew cleaning trash often causes a traffic jam. So I don't think we need to make the cars completely immune to being fooled and slowing down, as long as they don't do something unsafe. It's a lot of work and griefers can adapt so the benefit is not that great.
