Ditching Radar: Tesla Bets on Computer Vision for the Future of ADAS

June 04, 2021 by Jake Hertz

Last week, Tesla announced that it will abandon radar in favor of camera-based vision for the future of its autonomous driving system (ADS). What are the benefits and drawbacks of such a system?

At its core, the effort toward developing advanced driver-assistance systems (ADAS) and autonomous driving systems (ADS) is a challenge of vision, both literally and figuratively. Whether monitoring the driver or the road, all ADAS/ADS rely heavily on modern advances in sensing technology and machine learning to interpret the environment.

 

Different levels of automation will require a combination of radar, LiDAR, and cameras. Image used courtesy of NXP

 

To this end, the most popular sensing technologies have been LiDAR, radar, and cameras, and more often than not a combination of the three is required.

While this generalizes the field, every company does things differently based not only on what it thinks performs best today but also on what it believes holds the most potential for the future.

Tesla is a somewhat notorious defector from the mainstream direction of sensing in autonomous vehicles, famously rejecting LiDAR altogether in favor of radar and camera solutions.

Now, Tesla has made big news with its newest bet for the future: ditching radar altogether for a purely camera-based ADS.

 

Radar vs. LiDAR vs. Camera

As previously mentioned, there is no single “right” solution for vision. Most systems end up using a combination of radar, LiDAR, and cameras, as each comes with its own benefits and drawbacks.

Starting with LiDAR, one significant advantage is that it is the only one of the three that provides high resolution at long range. LiDAR also benefits from immunity to natural lighting disturbances such as shadows and glare, which severely plague camera-based solutions. Finally, LiDAR provides the system with highly detailed depth information, allowing it to map out its environment with a high degree of accuracy.

Despite those benefits, one of LiDAR’s major drawbacks is that it cannot detect colors or interpret text, limiting its use cases; for example, how is LiDAR supposed to interpret road signs or traffic lights?

Beyond this, the technology, while getting cheaper, has historically been costly, which can hinder its development and integration.

 

Radar, LiDAR, and cameras each have strengths and weaknesses. Image used courtesy of Analog Devices

 

Camera-based sensors, on the other hand, offer many advantages where LiDAR fails. For starters, camera vision can recognize colors and interpret text, allowing it to analyze its environment in a way much closer to how a human driver does.

Camera systems also offer incredibly high throughput and resolution, delivering more bits per second than radar or LiDAR. While this is certainly beneficial, it also means cameras require significantly more computing power than the other solutions. In a power-constrained design, a purely camera-based solution may not be entirely feasible, which is one reason many OEMs choose to combine cameras with other sensors.
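To put that throughput gap in rough numbers, consider a back-of-the-envelope sketch. The resolution, frame rate, camera count, and radar figure below are illustrative assumptions for a generic multi-camera ADAS suite, not Tesla’s published specifications:

```python
# Back-of-the-envelope comparison of raw sensor data rates.
# All figures below are illustrative assumptions, not published Tesla specs.

CAMERA_WIDTH = 1280   # pixels per row (assumed)
CAMERA_HEIGHT = 960   # pixel rows (assumed)
BITS_PER_PIXEL = 12   # raw sensor bit depth (assumed)
FRAME_RATE = 36       # frames per second (assumed)
NUM_CAMERAS = 8       # a typical multi-camera ADAS suite (assumed)

camera_bps = (CAMERA_WIDTH * CAMERA_HEIGHT * BITS_PER_PIXEL
              * FRAME_RATE * NUM_CAMERAS)

# An automotive radar's processed object list is far smaller;
# ~1 Mbit/s is a rough order-of-magnitude placeholder.
radar_bps = 1_000_000

print(f"Cameras: {camera_bps / 1e9:.1f} Gbit/s of raw pixel data")
print(f"Radar:   {radar_bps / 1e6:.1f} Mbit/s")
print(f"Ratio:   roughly {camera_bps // radar_bps:,}x more data to process")
```

Even before any neural network runs, a camera suite like this produces a few gigabits of raw data every second, which is why the computing-power requirement scales the way it does.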

Finally, radar has the benefit of detecting reliably in bad weather, something neither LiDAR nor cameras can do. Also, unlike LiDAR, radar does not require moving mechanical parts, simplifying design and increasing reliability. While radar does provide range information, its resolution at range is inferior to LiDAR’s. Radar also suffers from false detections on highly reflective objects and is generally less accurate than LiDAR and cameras.

With these benefits and drawbacks outlined, it will be interesting to see how Tesla adapts and attempts to overcome the challenges of camera systems without radar to fall back on.

 

Tesla’s Big Switch 

Last week, Tesla announced that it would be ditching radar in its current and future models. Instead, the company plans to use an approach based purely on camera vision and neural network processing to deliver the future of Tesla’s Autopilot and, eventually, fully autonomous driving.

 

Tesla’s use of cameras and ultrasonic sensors for “Autopilot.” Image used courtesy of Tesla

 

When asked about the company’s decision to switch, Elon Musk noted that cameras tend to have a higher data throughput than radar and LiDAR. On top of this, Tesla believes that computer vision and neural network processing will only continue to improve. In the eyes of Tesla, these two facts will eventually render non-camera-based solutions obsolete.

As Musk put it: “When radar and vision disagree, which one do you believe? Vision has much more precision, so better to double down on vision than do sensor fusion.” 
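Musk’s point is easy to illustrate in code. The following is a minimal, hypothetical sketch of the disagreement problem, not Tesla’s actual fusion logic; the confidence values and the arbitration rule are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """One sensor's estimate of the distance to a lead vehicle."""
    sensor: str
    range_m: float     # estimated distance in meters
    confidence: float  # self-reported confidence in [0, 1] (hypothetical)

def fuse(radar: Detection, vision: Detection,
         max_disagreement_m: float = 5.0) -> float:
    """Toy confidence-weighted fusion rule (illustrative only).

    When the two sensors roughly agree, blend their estimates.
    When they disagree badly, the system is forced to pick a side,
    which is exactly the ambiguity Musk describes.
    """
    if abs(radar.range_m - vision.range_m) <= max_disagreement_m:
        total = radar.confidence + vision.confidence
        return (radar.range_m * radar.confidence
                + vision.range_m * vision.confidence) / total
    # Conflict: trust whichever sensor reports higher confidence.
    best = max(radar, vision, key=lambda d: d.confidence)
    return best.range_m

# Example: radar returns a phantom off a reflective object while
# vision tracks the actual car ahead.
radar = Detection("radar", range_m=12.0, confidence=0.6)
vision = Detection("vision", range_m=48.0, confidence=0.9)
print(fuse(radar, vision))  # 48.0: the rule had to choose a side
```

By dropping radar, Tesla removes this arbitration step entirely, at the cost of giving up radar’s bad-weather coverage described earlier.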

 

Questions Ahead 

While many other companies still subscribe to multiple sensing solutions, Tesla has decided to commit exclusively to cameras for the time being. As one of the first companies to take this approach, Tesla undoubtedly leaves many unanswered questions about the plan’s viability, questions that only time will answer.

It will be interesting to see how Tesla’s camera-only approach to ADAS and ADS technology develops, and whether it spurs growth or change across the industry over time.

Interested in what other companies are creating for ADAS and ADS technology? Find out more in the articles below.

Why the Industry is Demanding FPGAs for Advanced Driver-Assistance Systems (ADAS)

NXP is Revving Up ADAS Technology with 16 nm FinFET Processors

New Automotive SoCs Provide a Window to ADAS Trends