Ambarella Disrupts 4D Imaging Automotive Radar with Centralized Approach
Pushing an alternative to edge-based radar processing, Ambarella is rolling out a centrally processed solution for 4D automotive imaging radar.
In an era when many applications are moving processing to the edge, there’s a strong argument to be made for centralized processing in automotive radar image processing. Doing exactly that, today, Ambarella is unveiling what it claims is the industry’s first centralized 4D imaging radar architecture. “4D” here means the three physical dimensions plus time.
The solution includes the company’s Oculii radar software technology running on its 5 nm CV3 AI Domain Control SoCs. Target applications include AI-based advanced driver assistance systems (ADAS) and L2+ to L5 autonomous driving systems, along with autonomous robots and drones.
In this article, we compare centralized and edge radar processing in cars, discuss the details of the Oculii technology, and share insights from our interview with Steven Hong, Ambarella’s VP and General Manager of Radar Technology.
Drawbacks of the Edge Sensor Approach
As Hong explains, automotive imaging radar data is traditionally processed at the edge. In those architectures, all of the processing for a radar sensor lives inside of the sensor module itself. “That really limits how much you can do with the processing because typically you've got power, size, and heat dissipation constraints with the sensor,” says Hong.
Traditional edge radar (left) needs thousands of MIMO antennas, while the Oculii VAI radar (right) uses an order of magnitude fewer antennas and consumes significantly less power.
Meanwhile, there’s a fundamental issue with edge-based radar sensing in that scaling up resolution is problematic. A traditional radar needs more MIMO (multiple input multiple output) antennas in order to achieve higher resolution. But for every antenna you add, you generate more data.
“If you want a high-resolution radar, you typically need thousands of antennas,” says Hong. “Thousands of antennas translates into tens of terabytes of data per second that you generate. And that amount of data is just prohibitively large to move somewhere else.”
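The scaling problem Hong describes can be sketched with a back-of-envelope calculation: raw radar data rate grows linearly with the number of antenna channels. The sample rate and bit depth below are illustrative assumptions, not Ambarella’s specifications.

```python
# Illustrative sketch: raw radar data rate scales linearly with the
# number of antenna channels. Sample rate and bit depth are assumed
# example values, not actual radar specs.

def raw_data_rate_gbps(num_antennas, sample_rate_msps=50, bits_per_sample=16):
    """Approximate raw ADC data rate in gigabits per second."""
    return num_antennas * sample_rate_msps * 1e6 * bits_per_sample / 1e9

# A modest array vs. a hypothetical thousands-of-antennas imaging array:
print(raw_data_rate_gbps(192))    # 153.6 (Gb/s)
print(raw_data_rate_gbps(2000))   # 1600.0 (Gb/s)
```

The exact figures depend heavily on waveform and ADC parameters, but the linear scaling is the point: every antenna added to reach higher resolution multiplies the data that must either be processed in the sensor module or moved off it.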
The result, according to Hong, is what he calls a “non virtuous cycle” in radar designs, where you’re stuck keeping all the processing inside the module. As more processing and sensing are added to the module, heat dissipation climbs into the 30 W to 50 W range. That’s especially a problem considering where these sensor modules need to be placed on a vehicle.
“This is something that goes behind the bumper. It's in a location with still air. So heat dissipation becomes a major issue. This is really what's limiting how much resolution and performance that traditional radar architectures are able to achieve.”
In contrast to the edge radar approach, Ambarella, with its new centralized paradigm, uses software to achieve higher resolution. “We do this in a way that allows you to actually reduce the number of antennas significantly, centralize the data, and then use AI software to compensate for what we haven't measured,” says Hong. “We do this in a way that allows us to still achieve very high resolution.”
To do that, Hong says they leverage a concept they call “sparsity” when capturing the sensor data. “Rather than capturing all the data from thousands of antennas, we take a selective few number of measurements,” he says. “We then use an intelligent adaptive waveform to compensate and generate the information adaptively across time. This allows us to effectively couple the computing capabilities with how much resolution can actually be generated.”
The centralized radar architecture approach relies on AI software running on a central CV3 SoC processor.
In contrast to traditional edge radar architectures, Hong says this centralized approach is a “virtuous cycle” because the more computing you have, the sparser the sensor array can be. “The sparser our array is, the less data that we generate, and the more likely it is that we can centralize this data into a more powerful computing engine,” he says.
Ambarella’s centralized approach achieves a high angular resolution of 0.5 degrees, a dense point cloud of up to tens of thousands of points per frame, and a long detection range of over 500 meters. Ambarella offers a YouTube video that demonstrates its centralized 4D imaging radar architecture.
Oculii Software on the CV3 SoC
Ambarella’s 4D radar processing solution comprises both hardware and software. This includes the company’s Oculii radar software running on its new CV3 AI domain control SoC. Ambarella engineers optimized the Oculii algorithms for its CV3 AI domain controller SoC family.
Hong says that the software-defined centralized architecture enables dynamic allocation of the CV3’s processing resources. This can be done based on real-time conditions, both between sensor types and among sensors of the same type. “In extreme rainy conditions where long-range camera data isn’t reliable, the CV3 can reallocate some of its resources to improve radar inputs,” he says.
In a similar way, if it is raining while driving on a highway, the CV3 processor can focus on data coming from the front-facing radar sensors. This extends the vehicle’s detection range while providing faster reaction times. An edge-based radar architecture can’t operate that way because, at the edge, each sensor’s processing capacity is specified for worst-case scenarios, so much of that processing often goes unused.
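The condition-based reallocation described above can be sketched as a simple budget split. The function name, weightings, and rain trigger are hypothetical illustrations, not Ambarella’s API; the point is that a central SoC can shift a fixed compute budget between pipelines at runtime, which a fixed per-sensor processor cannot.

```python
# Hypothetical sketch of condition-based compute reallocation on a
# central SoC. Weightings are assumed example values.

def allocate_compute(heavy_rain=False, total_units=100):
    """Split a fixed compute budget between camera and radar pipelines.

    In heavy rain, long-range camera data is less reliable, so a larger
    share of the budget is shifted to radar processing.
    """
    radar_share = 0.7 if heavy_rain else 0.4   # assumed weightings
    radar = round(total_units * radar_share)
    return {"radar": radar, "camera": total_units - radar}

print(allocate_compute())                  # {'radar': 40, 'camera': 60}
print(allocate_compute(heavy_rain=True))   # {'radar': 70, 'camera': 30}
```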
Built on a 5 nm process, the CV3 AI Domain Control SoCs are the first chips with Ambarella’s next-gen CVflow architecture, with a neural vector processor and a general vector processor—both of which include radar-specific signal processing enhancements. (Click image to enlarge)
According to the company, the CV3 SoC marks the first product with Ambarella’s next-generation CVflow architecture. This includes a neural vector processor and a general vector processor. Both were designed by Ambarella engineers from the ground up to include radar-specific signal processing enhancements, says the company.
The CV3 SoC’s processors work in tandem to run the Oculii advanced radar perception software with far higher performance than traditional edge radar processors—up to 100x more. The centralized approach also brings other benefits. One is easier over-the-air (OTA) software updates. A single OTA update can be loaded to the CV3 SoC and aggregated across all of the system’s radar heads.
Meanwhile, with the centralized approach, the radar heads no longer need their own processor. This shrinks costs, including both the initial bill of materials (BOM) for the design and the cost of replacements in the event of damage from an accident. Because radar edge sensors are typically placed behind the vehicle’s bumper, the accident issue is not trivial.
Ambarella says that its new centralized architecture will be demonstrated at an invitation-only event taking place during CES 2023 next month.
A Contrary Embedded Processing Trend?
The market opportunity is huge. According to a report by Yole Group, around 100 million radar units were manufactured in 2021 for automotive ADAS. The report forecasts this volume to grow 2.5x by 2027, attributing the growth to more demanding safety regulations and more advanced driving automation systems hitting the road. This includes a shift from the current one to three radar sensors per car to OEMs adopting five radar sensors per car as a baseline, according to Yole Group.
For some years now, we’ve seen a movement toward doing more processing, including AI computing, at the edge. You see this particularly in markets such as the Internet of Things (IoT). Ambarella's new 4D radar sensor processing solution is perhaps an example that, for some situations, a centralized processing architecture has benefits that not only surpass the edge approach but also offer a roadmap to future scalability. Perhaps this scheme will open new capabilities for autonomous vehicles.
All images used courtesy of Ambarella