Saturating and Spoofing LiDAR Attacks Cast Doubt on the Safety of Driverless Cars

July 01, 2017 by Cesco Saavedra

Researchers have carefully studied techniques that could be used to interfere with driverless systems. The conclusion: Don’t fall asleep at the wheel.

A research team at the Korea Advanced Institute of Science and Technology has demonstrated—if you can believe it—that driverless cars aren’t safe. Actually, they’ve merely demonstrated that one aspect of driverless systems is unsafe; the other two dozen risk factors will have to be addressed in additional studies.

Yes, it turns out that LiDAR is highly susceptible to malicious behavior carried out by the modern equivalent of the mafia guys who, back in the old days, simply cut the brake line. The conclusion of this study is fabulously candid:

  • Compromised LiDAR systems could “endanger human lives”.
  • Experimentation has confirmed that two types of attacks can “severely degrade” LiDAR performance.
  • Known countermeasures are either “infeasible” or of questionable efficacy.
  • Serious attempts to address attacks of this nature are currently “absent”.
  • If we insist on pursuing driverless vehicles, automakers need to get their act together “before [it is] too late”.

Credit to the researchers for acknowledging the severity of the problem. Just for the record, they don’t “advocate the complete abandonment of the transition toward autonomous driving”. At least that’s their official position; the study was supported by Hyundai.


An example of a LiDAR image. Presumably this is more impressive than it looks. Image courtesy of NASA.


From Neurons to Electrons (and Photons)

LiDAR is one type of sensor involved in the immensely complex system required to replace the human beings that previously operated motor vehicles. A LiDAR device emits light (visible, infrared, or ultraviolet) and detects objects by analyzing the reflected signals. This classifies it as an active sensor: it emits energy in order to perform its sensing task. A passive sensor, such as an ambient light detector, only receives energy.
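The core of LiDAR ranging is time of flight: the sensor measures how long a pulse takes to return and converts that delay into a distance. A minimal sketch of that conversion (illustrative only; function name and pulse timing are my own, not from the paper):

```python
# Speed of light in vacuum, m/s.
C = 299_792_458.0

def range_from_echo(round_trip_s: float) -> float:
    """Distance to a reflector, given the pulse's round-trip time.

    The pulse travels out and back, so the one-way distance is half
    the total path length.
    """
    return C * round_trip_s / 2.0

# A return arriving 200 ns after emission corresponds to roughly 30 m.
print(range_from_echo(200e-9))
```

Everything the sensor "knows" about an obstacle comes from that one measured delay, which is exactly why the attacks described below work.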

The LiDAR, being a machine, doesn’t have intuition about when the car in the other lane might be preparing for an assassination attempt. Thus, it is easily fooled into interpreting malicious return signals as normal return signals.

The study identifies two types of LiDAR attacks: “saturating” and “spoofing.”


A LiDAR unit. Image courtesy of Velodyne.


An amplifier cannot produce a meaningful signal if its output is saturated at the positive or negative supply rail. Likewise, a LiDAR cannot produce meaningful data if a malefactor is flooding it with light of the relevant wavelength. The authors of the study describe saturation as “unavoidable” and saturation attacks as “powerful”. They are also “stealthy”—vehicle LiDAR uses infrared, so someone could be shining the equivalent of a floodlight at the LiDAR receiver without anyone noticing.

It is possible for a LiDAR to detect saturation, but it cannot prevent the resulting loss of data. I suppose this means that the system could at least advise the driver to put down his smartphone and start driving the confounded vehicle.
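Detection, at least, is straightforward in principle: a flooded receiver sits pinned near full scale instead of showing discrete return pulses. A rough sketch of that check (the ADC resolution and threshold here are assumptions for illustration, not values from the study):

```python
def is_saturated(samples, adc_max=4095, frac=0.95):
    """Flag a receive window whose digitized output is pinned near full
    scale, as it would be if an attacker were flooding the aperture with
    in-band infrared light. Assumes a hypothetical 12-bit ADC.
    """
    if not samples:
        return False
    pinned = sum(1 for s in samples if s >= adc_max) / len(samples)
    return pinned >= frac
```

Flagging the condition lets the system warn the driver, but, as the researchers note, it does nothing to recover the data lost while the receiver is blinded.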



A spoofing attack is a bit more complicated. The general idea is to fool the LiDAR into thinking that obstacles are present when in reality there are none. LiDAR detection is based on the delay between emitting light and receiving the reflected light. A nearby vehicle could receive the LiDAR pulse, wait for a specific period of time, then transmit the delayed signal back to the “victim” LiDAR. This process could be used to create spurious obstacles, described in the following graphic as “fake dots”:


Diagram taken from the research paper.
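Because the victim trusts its measured delay, every nanosecond the attacker holds the pulse translates directly into phantom distance. A sketch of the arithmetic (function names and the scenario are my own, assuming a simple capture-hold-retransmit spoofer):

```python
# Speed of light in vacuum, m/s.
C = 299_792_458.0

def spoofed_distance(true_distance_m: float, attacker_delay_s: float) -> float:
    """Distance the victim LiDAR infers when a spoofer sitting at
    true_distance_m captures the pulse, holds it for attacker_delay_s,
    and retransmits it back toward the victim.
    """
    # Legitimate round trip to the spoofer, plus the injected hold time.
    round_trip = 2.0 * true_distance_m / C + attacker_delay_s
    return C * round_trip / 2.0

# A spoofer 10 m away adding a 100 ns hold conjures an "obstacle"
# roughly 15 m beyond its own position.
print(spoofed_distance(10.0, 100e-9))
```

Note that a pure delay can only place fake dots at or beyond the spoofer's own range; making an obstacle appear closer requires anticipating the victim's pulse timing and firing early, which is harder but, per the paper, not impossible.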


The researchers are of the opinion that a successful spoofing attack could be “far more dangerous” than a saturating attack.


Closing Thoughts

LiDAR is a powerful means of gathering high-precision data about the surrounding physical environment. Research carried out in Korea indicates that it is also absurdly vulnerable to deliberate interference. The authors actually compare a compromised LiDAR system to a blind driver. How does that make you feel about autonomous vehicles?


This article is considered an opinion piece. The views expressed herein belong to the author and do not necessarily represent those of the site or site management.

  • profbuxton July 03, 2017

    I bet they won’t publicize this too widely. I would NEVER want a self-driving car. See how easy it is to hack PCs. I guarantee there will be hackers getting into these systems as soon as there are a number of them on the road! Who is going to be the first fatality from this?

  • toddneff July 03, 2017

    This is why autonomous vehicles will have multiple sensors. Not a reason to dismiss lidar as fatally flawed or change one’s “feeling” toward autonomous driving (how do you “feel” about sharing the road with drunks/people texting? Because there are a lot more of those out there than there will be villains with lidar-spoofing hardware hoping to foil your morning commute). Quanergy, as an example, plans to embed video into their solid-state lidars; no automaker is considering a lidar-only autonomous driving system.

    The paper dismisses sensor fusion with this: “Likewise, the fusion of multiple types of sensors cannot be an ultimate solution either. Radars [44], cameras [30, 44], and ultrasonic sensors [44] have all been revealed to be vulnerable to either blinding/jamming or spoofing.” I would disagree with the “cannot be an ultimate solution” bit. No human-built system is ever going to be totally foolproof, but combining sensors can lower the risk of riding in an autonomous vehicle to levels far below what we subject ourselves to on the road now.

  • DP27 November 22, 2017

    I don’t see this being any different than shining a spotlight at a human driver, except you would have to be halfway intelligent to do this.
