
Tricking Autonomous Driving Systems Could Be as Simple as Subtly Altering Street Signs

February 16, 2018 by Chantelle Dubois

Recent research indicates that it may be easier to dupe a vision system in an autonomous vehicle than previously thought.

Reliable and accurate vision systems are critical to the success of autonomous vehicles. But these systems are vulnerable to third-party influence, rendering them unreliable and unsafe. Recent research indicates that it may be easier to dupe a vision system than previously thought.

Autonomous driving systems use a full suite of tools to detect their environment and navigate safely, including proximity sensors, GPS, and vision systems. Vision systems may be composed of a single kind of sensor (e.g., optical sensors, cameras, LiDAR, radar) or a combination of several types. There isn't yet a consensus on the optimal system for allowing autonomous vehicles to gather and process visual data.

Strides are quickly being made in developing more advanced vision systems. Along with these advancements, however, come new dangers. Vision systems can be vulnerable to outside influences that may hinder or prevent their ability to accurately gather and process visual data. As we've seen, aggressors can target LiDAR systems with spoofing and saturation attacks. But recent research suggests that much more primitive methods of interfering with vision systems are actually very effective.

One specific type of vulnerability for vision systems is the “physical world attack,” also known as an “adversarial perturbation”. This occurs when input data, such as an image, is altered in a way that tricks the system into interpreting it as something else. For example, a stop sign could be mistaken for a speed limit sign.
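To make the idea concrete, many adversarial perturbations are generated by nudging an image's pixels in the direction that most increases a classifier's error. The sketch below shows the well-known fast gradient sign method in PyTorch as a generic illustration; it is not the specific technique used in the studies discussed below, and `model` is an assumed image classifier.

```python
# Generic adversarial-perturbation sketch (fast gradient sign method).
# `model` is assumed to be an image classifier that returns class logits.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, true_label, epsilon=0.03):
    """Nudge every pixel slightly in the direction that raises the classifier's loss."""
    image = image.clone().detach().requires_grad_(True)
    logits = model(image.unsqueeze(0))                    # forward pass on a single image
    loss = F.cross_entropy(logits, true_label.unsqueeze(0))
    loss.backward()                                       # gradient of the loss w.r.t. the pixels
    perturbed = image + epsilon * image.grad.sign()       # small signed step per pixel
    return perturbed.clamp(0.0, 1.0).detach()             # keep pixels in a valid range
```

To a person, the perturbed image often looks nearly identical to the original, yet the classifier's output can flip to a completely different label.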

Several teams of researchers are investigating this issue in hopes of better understanding the threats behind vulnerable vision systems and how to protect them from attackers. Here is a look at some of their findings and what they mean for this ever-growing industry.

Robust Physical Perturbations

Researchers from the University of California, Berkeley, the University of Michigan, Ann Arbor, the University of Washington, and Stony Brook University published a demonstration late last year in which an autonomous driving system using deep neural networks to identify objects was tricked into reading a sign incorrectly. 

To demonstrate reliably that vision systems using deep neural networks could be attacked, the researchers had to identify the conditions in which an attack would be effective and reliable. 

Typically, physical world attacks are not very effective because environmental conditions, such as lighting or viewing position, are always changing; the altered image has to be seen in just the right way for the attack to work.

While any possibility that a vision system could be tricked is problematic, producing effective perturbations is extremely difficult, making them generally unlikely to be encountered in practice. What this research aims to determine, however, is whether there are conditions under which an everyday person could modify a sign (with typical tools) in a way that effectively and reliably tricks a vision system. If such a thing were possible, it would represent a realistic and much more concerning danger.

 

Abstract patterns look like graffiti to human eyes but can be detected as a speed limit sign by the autonomous car's vision system. Image courtesy of Arxiv.

 

Also taken into account is the fact that successful perturbations must trick two systems: the vision system on the car and the human “driving” it. 

The team identified several factors that would need to be considered to make attacks reliable and effective. A successful perturbation generally would need the following characteristics:

  1. Located on the object itself, rather than in the background
  2. Resilient to environmental conditions
  3. Detectable by sensors and not by humans
  4. Robust against fabrication processes—relying on very specific colors, for example, is not reliable since typical printers have a limited range of colors they can print in
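The published attack searches for a perturbation that satisfies these constraints; as a rough, hypothetical sketch (not the authors' formulation), such constraints might be folded into a single optimization objective like the one below, where `random_transform` and `printable_penalty` are assumed placeholder functions.

```python
# Rough, hypothetical sketch of an objective reflecting the four constraints above.
# `random_transform` (simulated viewing conditions) and `printable_penalty`
# (penalizes colors a printer cannot reproduce) are assumed placeholders.
import torch
import torch.nn.functional as F

def perturbation_objective(model, sign, delta, mask, target_label,
                           random_transform, printable_penalty, lam=0.05):
    losses = []
    for _ in range(8):                                   # sample several viewing conditions (2)
        view = random_transform(sign + mask * delta)     # perturbation confined to the sign (1)
        logits = model(view.unsqueeze(0))
        losses.append(F.cross_entropy(logits, target_label.unsqueeze(0)))
    robustness = torch.stack(losses).mean()              # misclassify across many conditions
    subtlety = lam * (mask * delta).abs().sum()          # keep the change visually modest (3)
    return robustness + subtlety + printable_penalty(delta)  # stick to printable colors (4)
```

Minimizing an objective of this shape pushes the perturbation to fool the classifier across simulated lighting and viewing changes while staying small, confined to the sign, and physically printable.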

For road signs, the image being read is fairly simple: usually large text or a simple symbol on a distinctively shaped sign. It is easy to teach a neural network to identify these, and it is also easy for a human driver to see when a sign has been tampered with. So complex and subtle perturbations are hard to hide on a road sign.

The attack algorithm the researchers developed supports two types of attacks: poster printing and sticker perturbation. With a poster-printing perturbation (called the “subtle perturbations”), an overlay of the sign is printed and pasted over the real sign. With a sticker perturbation (called the “camouflage perturbations”), stickers are placed in a specific pattern that tricks the vision system. To the human driver, a sign overlay may not be visually obvious, and the stickers would be mostly inconspicuous.

To test both attacks, two different convolutional neural network road sign classifiers were used: one trained on the German Traffic Sign Recognition Benchmark (GTSRB) and one trained on the LISA dataset of US road signs.
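For context, road sign classifiers of this kind are usually small convolutional networks trained on labeled images of signs. The toy PyTorch model below only illustrates that general setup; the layer sizes are assumptions, and the 43-class output matches GTSRB's sign categories rather than either network evaluated in the paper.

```python
# Toy convolutional sign classifier, illustrative only -- not the paper's architectures.
import torch.nn as nn

class SignClassifier(nn.Module):
    def __init__(self, num_classes=43):          # GTSRB defines 43 sign classes
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(64 * 8 * 8, num_classes)  # for 32x32 RGB inputs

    def forward(self, x):
        x = self.features(x)                     # extract spatial features
        return self.classifier(x.flatten(1))     # map features to per-class scores
```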

In tests using the LISA classifier, the attacks were 100% successful in the lab and 84.8% successful in the field. For the GTSRB classifier, there was an 80% lab test success rate and 87.5% field test success rate.

 

Success rates were fairly high in different scenarios for the attacks. Image courtesy of Arxiv. 

 

You can peruse the University of Michigan's FAQ about the experiments here.

In a related study, researchers from Princeton University and Purdue University developed and demonstrated another method of attacking an autonomous vehicle's machine learning system. They call these "sign embedding" attacks, wherein "benign" (or "innocuous") signs are modified so that they are detected and classified as traffic signs.

Machine learning helps autonomous vehicles process visual data. Based on the training that the system receives (that is, the dataset it's exposed to and from which it can "learn"), an autonomous vehicle system detects and then classifies physical objects. Current systems classify objects with a weighted "confidence" level that reflects how likely it is that the object has been classified correctly. Based on these high confidence classifications, the car will "decide" what action to take.
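As a minimal sketch of that pipeline, a classifier's softmax output can be treated as a confidence score and gated by a threshold before the vehicle acts on a detection; the 0.8 threshold here is illustrative, and `model` is an assumed classifier.

```python
# Sketch of confidence-gated classification; the 0.8 threshold is illustrative.
import torch
import torch.nn.functional as F

def classify_with_confidence(model, image, threshold=0.8):
    with torch.no_grad():
        probs = F.softmax(model(image.unsqueeze(0)), dim=1).squeeze(0)
    confidence, label = probs.max(dim=0)         # most likely class and its probability
    if confidence.item() < threshold:
        return None, confidence.item()           # low confidence: detection is rejected
    return int(label), confidence.item()         # high confidence: the car acts on this label
```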

In this research, the goal was to show that sign embedding attacks can dupe vision systems into not only misclassifying objects but misclassifying them with high confidence. This can be accomplished with signs that aren't part of the dataset used in the system's "training", meaning that any innocuous object could become a misidentified sign.

According to the paper, "In the virtual setting, our attack has a 99.07% success rate without randomized image transformations at test time and 95.50% with. We also conduct a real-world drive-by test, where we attach a video camera to a car’s dashboard and extract frames from the video for classification as we drive by (Figure 4). The Sign Embedding attack has a success rate of over 95% in this setting, where the success rate is the number of frames in which the adversarial image is classified as the target divided by the total number of frames."
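The success rate quoted above is simple bookkeeping over the extracted video frames; a sketch of that calculation, with placeholder inputs, might look like this.

```python
# Sketch of the quoted drive-by metric: frames classified as the attacker's
# target divided by total frames. `frame_predictions` is a placeholder list.
def drive_by_success_rate(frame_predictions, target_label):
    hits = sum(1 for prediction in frame_predictions if prediction == target_label)
    return hits / len(frame_predictions)

# Example: 96 of 100 extracted frames classified as the target gives a 96% success rate.
```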

 

On the left, a "benign logo" classified as a bicycle crossing sign with low confidence and rejected. On the right, an "adversarial logo" classified as a stop sign with high confidence and accepted as such. Image via Arxiv.

 

An example of this becoming a problem would be if the KFC sign in the image above were classified as a stop sign, causing an automated vehicle to stop in the middle of a busy road. A human passenger would be unlikely to detect anything amiss with the sign in question.

Adding Context, Seeking Solutions

The results of both studies are concerning because these attacks are difficult for humans to detect and highly reliable. This spells trouble for the burgeoning autonomous vehicle industry. After all, the adoption of autonomous vehicles depends on users and the general public fully trusting that these systems will be safe against any sort of attack or hacking attempt.

How can these issues be overcome? Some suggestions rely on the use of context. For example, based on a sign's location, should there be a stop sign or a speed limit sign? Has this type of sign appeared at this location before? If an autonomous vehicle were able to broaden its computations to include such contextual information (and consult, say, its navigation system in the process), perturbations could be less effective.
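As a purely hypothetical sketch of that idea, each detection could be cross-checked against what map or navigation data expects at the vehicle's current location; `expected_sign_at` is an assumed helper backed by such a database, not an existing API.

```python
# Hypothetical context check: down-weight detections that map data does not expect.
# `expected_sign_at` is an assumed map/navigation lookup, not an existing API.
def contextual_confidence(detected_sign, confidence, location, expected_sign_at):
    expected = expected_sign_at(location)        # e.g. "speed_limit_50", "stop", or None
    if expected is not None and expected == detected_sign:
        return confidence                        # map data corroborates the detection
    return confidence * 0.5                      # unexpected sign: treat with more suspicion
```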

Regardless, the relative ease of carrying out these "attacks" adds yet another element that needs to be taken into account when it comes to safe and reliable autonomous driving. This will be an important issue in 2018 as automakers and tech giants alike look to new business models around autonomous vehicles, such as Mobility as a Service.

Autonomous vehicle security, hardware, and safety standards will continue to evolve quickly, but it's a good bet that those looking to interfere with these systems will evolve as well. These studies suggest that hindering autonomous vehicle vision systems may not be as complicated as we thought.

1 Comment
  • MisterBill2, February 16, 2018

    The obvious and effective solution is also very expensive, which is to either put all of the signs in an on-board database, or put in local transponders that remind a vehicle that a given sign message is at a specific location. But all of the local transponders would then be life-critical devices and hence not cheap. And the database would need to be incredibly inclusive and updated frequently. Both the “incredibly inclusive” part and the “updated frequently” part are quite expensive, especially if they need to be secure, accurate, and reliable. This is a cost that I have not heard any of the touts admitting.
