A research team at the Korea Advanced Institute of Science and Technology has demonstrated—if you can believe it—that driverless cars aren’t safe. Actually, they’ve merely demonstrated that one aspect of driverless systems is unsafe; the other two dozen risk factors will have to be addressed in additional studies.
Yes, it turns out that LiDAR is highly susceptible to malicious behavior carried out by the modern equivalent of the mafia guys who, back in the old days, simply cut the brake line. The conclusion of this study is fabulously candid:
- Compromised LiDAR systems could “endanger human lives”.
- Experimentation has confirmed that two types of attacks can “severely degrade” LiDAR performance.
- Known countermeasures are either “infeasible” or of questionable efficacy.
- Serious attempts to address attacks of this nature are currently “absent”.
- If we insist on pursuing driverless vehicles, automakers need to get their act together “before [it is] too late”.
Credit to the researchers for acknowledging the severity of the problem. Just for the record, they don’t “advocate the complete abandonment of the transition toward autonomous driving”. At least that’s their official position; the study was supported by Hyundai.
An example of a LiDAR image. Presumably this is more impressive than it looks. Image courtesy of NASA.
From Neurons to Electrons (and Photons)
LiDAR is one type of sensor involved in the immensely complex system required to replace the human beings who previously operated motor vehicles. A LiDAR device emits light (visible, infrared, or ultraviolet) and detects objects by analyzing the reflected signals. This classifies it as an active sensor: it emits energy in order to perform its sensing task. A passive sensor, such as an ambient light detector, only receives energy.
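The detection principle is time-of-flight: the device measures the delay between emitting a pulse and receiving its reflection. A minimal sketch of that arithmetic (the function name and values are illustrative, not from the study):

```python
# Hypothetical time-of-flight ranging calculation for a pulsed LiDAR.
C = 299_792_458  # speed of light in m/s

def range_from_echo(round_trip_delay_s: float) -> float:
    """Distance to a reflecting object, given the round-trip delay of a pulse.

    The pulse travels to the object and back, so the one-way distance is
    half of (speed of light x total delay).
    """
    return C * round_trip_delay_s / 2.0

# A reflection arriving ~200 ns after emission implies an object ~30 m away.
distance_m = range_from_echo(200e-9)
```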
The LiDAR, being a machine, doesn’t have intuition about when the car in the other lane might be preparing for an assassination attempt. Thus, it is easily fooled into interpreting malicious return signals as normal return signals.
The study identifies two types of LiDAR attacks: “saturating” and “spoofing.”
A LiDAR unit. Image courtesy of Velodyne.
An amplifier cannot produce a meaningful signal if its output is saturated at the positive or negative supply rail. Likewise, a LiDAR cannot produce meaningful data if a malefactor is flooding it with light of the relevant wavelength. The authors of the study describe saturation as “unavoidable” and saturation attacks as “powerful”. They are also “stealthy”—vehicle LiDAR uses infrared, so someone could be shining the equivalent of a floodlight on the LiDAR receiver and no one would notice.
It is possible for a LiDAR to detect saturation, but it cannot prevent the resulting loss of data. I suppose this means that the system could at least advise the driver to put down his smartphone and start driving the confounded vehicle.
The second type of attack, spoofing, is a bit more complicated. The general idea is to fool the LiDAR into perceiving obstacles where none exist. LiDAR detection is based on the delay between emitting light and receiving the reflected light. A nearby vehicle could receive the LiDAR pulse, wait for a specific period of time, then transmit the delayed signal back to the “victim” LiDAR. This process could be used to create spurious obstacles, described in the following graphic as “fake dots”:
Diagram taken from the research paper.
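The arithmetic behind the trick is just time-of-flight run in reverse: to plant a phantom obstacle at a chosen distance, the attacker replays the pulse after the round-trip delay the victim LiDAR would expect for that distance. A simplified sketch (ignoring the attacker's own standoff distance, and using an invented function name):

```python
# Hypothetical spoofing arithmetic: the delay an attacker adds before
# retransmitting a captured pulse, so that the victim's time-of-flight
# calculation reports an obstacle at `fake_distance_m`.
C = 299_792_458  # speed of light in m/s

def spoof_delay(fake_distance_m: float) -> float:
    """Round-trip delay (seconds) corresponding to a fake obstacle distance.

    Simplification: assumes the attacker sits right next to the victim's
    emitter, so the replay delay equals the full expected round trip.
    """
    return 2.0 * fake_distance_m / C

# Planting a "fake dot" 15 m ahead requires a delay of roughly 100 ns.
delay_s = spoof_delay(15.0)
```

The tight timing is the hard part in practice, which is presumably why the researchers treat spoofing as the more sophisticated of the two attacks.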
The researchers are of the opinion that a successful spoofing attack could be “far more dangerous” than a saturating attack.
LiDAR is a powerful means of gathering high-precision data about the surrounding physical environment. Research carried out in Korea indicates that it is also absurdly vulnerable to deliberate interference. The authors actually compare a compromised LiDAR system to a blind driver. How does that make you feel about autonomous vehicles?