Is Stereovision the Key to Fully-Autonomous Vehicles?
Advanced driver-assistance systems have evolved rapidly over the past few years. The next step in processing technology, stereovision, is pushing ADAS into the future.
Several months ago, Xilinx announced that it would team up with Subaru to bring the next generation of advanced driver-assistance systems (ADAS) to market with the Xilinx automotive-qualified (XA) Zynq UltraScale+ multiprocessor system-on-chip (MPSoC).
The new Subaru Levorg, currently sold only in Japan, will include the new UltraScale+ MPSoC.
Subaru tapped Xilinx for its Zynq UltraScale+ MPSoC to play a pivotal role in its new ADAS. Image used courtesy of Xilinx
ADAS encompasses a multitude of functions, including adaptive cruise control, lane-keep assist, lane-sway warning, pre-collision detection, and pre-collision throttle management. Subaru’s proprietary ADAS, called EyeSight, is based on stereovision technology and is available in several of its existing 2020 and 2021 vehicles.
What exactly is stereovision? And in what ways do electronic designers in the automotive space use this system for ADAS?
The Future of ADAS is Stereovision
Stereovision technology produces dense 3D digital representations of reality using algorithmic processors. According to researchers at the Technical University of Cluj Napoca, the general process flow of a dense stereovision system is image acquisition, stereo processing, disparity to 3D mapping, and finally, application of detection algorithms.
Stereovision hardware utilizes two cameras that acquire a synchronized series of images at a maximum frame rate of 24 fps. A dedicated hardware board performs the 3D reconstruction and produces one of two outputs: a disparity map between the two processed images, or a Z-map used to generate an X-Y coordinate system referenced to the left camera.
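The disparity-map computation can be sketched with simple sum-of-absolute-differences (SAD) block matching over a rectified stereo pair. This is a toy illustration of the general technique, not Subaru's or Xilinx's actual pipeline; the synthetic images, window size, and disparity range are invented for the example:

```python
import numpy as np

def sad_disparity(left, right, max_disp=16, win=3):
    """Toy SAD block matching on rectified grayscale images.

    For each pixel in the left image, slide a window leftward across
    the right image and pick the horizontal shift (disparity) with
    the lowest sum of absolute differences.
    """
    h, w = left.shape
    half = win // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half, w - half):
            patch_l = left[y - half:y + half + 1, x - half:x + half + 1].astype(np.int64)
            best_cost, best_d = None, 0
            for d in range(min(max_disp, x - half) + 1):
                patch_r = right[y - half:y + half + 1,
                                x - d - half:x - d + half + 1].astype(np.int64)
                cost = np.abs(patch_l - patch_r).sum()
                if best_cost is None or cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp

# Synthetic pair: a horizontal intensity gradient, with the right image
# shifted 4 px leftward -- so matching features are 4 px apart and the
# expected disparity in the interior is exactly 4.
left = np.tile(np.arange(40, dtype=np.int32), (20, 1))
right = np.roll(left, -4, axis=1)
d = sad_disparity(left, right, max_disp=8)
```

Real ADAS hardware runs a far more robust version of this search (with sub-pixel refinement and cost aggregation) in parallel across the whole frame, which is exactly the workload FPGAs excel at.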
Demonstration of object tracking, lane detection, and distance measurements using an ADAS. Image used courtesy of the Technical University of Cluj Napoca
As discussed in a Texas Instruments white paper on stereovision, disparity is inversely proportional to distance. A disparity error of a single pixel translates into a significant range error at long distances, but the error shrinks at shorter distances (<100 meters), yielding data suitable for ADAS.
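This inverse relationship follows from the standard rectified-stereo depth equation Z = f·B/d, where f is the focal length in pixels, B the baseline between the cameras, and d the disparity in pixels. The camera parameters below are invented for illustration, not Subaru's specs:

```python
# Depth from disparity for a rectified stereo rig: Z = f * B / d.
# f_px and B_m are made-up example values.
f_px = 1000.0  # focal length, pixels
B_m = 0.35     # baseline between the two cameras, meters

def depth_m(disparity_px):
    """Distance to a point given its disparity."""
    return f_px * B_m / disparity_px

def depth_error_m(disparity_px, disp_err_px=1.0):
    """Depth change caused by a one-pixel disparity measurement error."""
    return abs(depth_m(disparity_px) - depth_m(disparity_px + disp_err_px))

# A nearby object has a large disparity; a distant one, a small disparity.
near = depth_m(35.0)   # 10 m away
far = depth_m(3.5)     # 100 m away

# The same +/-1 px error costs far more depth accuracy at range:
err_near = depth_error_m(35.0)  # roughly 0.3 m of error at 10 m
err_far = depth_error_m(3.5)    # roughly 22 m of error at 100 m
```

Because depth varies with 1/d, the error grows roughly with the square of the distance, which is why single-pixel disparity resolution is adequate at short range but degrades quickly beyond it.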
Stereovision systems generate two types of environmental data: a density map of complex driving environments based on elevation or a series of geometric elements consisting of parametric lanes, tracked cuboids, and pedestrians.
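The first kind of output, an elevation-based map of the environment, can be sketched by binning the reconstructed 3D points into a top-down ground grid and keeping the highest point per cell. This is a toy illustration; the grid resolution and the point cloud are invented:

```python
import numpy as np

def elevation_map(points_xyz, cell_m=0.5, extent_m=10.0):
    """Bin 3D points (x lateral, y height, z forward, all in meters)
    into a top-down grid and keep the maximum height in each cell."""
    n = int(extent_m / cell_m)
    grid = np.full((n, n), -np.inf)  # -inf marks empty cells
    for x, y, z in points_xyz:
        i, j = int(z // cell_m), int(x // cell_m)
        if 0 <= i < n and 0 <= j < n:
            grid[i, j] = max(grid[i, j], y)
    return grid

# Two invented obstacles: a 1.5 m object 3 m ahead and a 0.2 m curb 7 m ahead.
pts = [(2.0, 1.5, 3.0), (2.1, 1.2, 3.1), (4.0, 0.2, 7.0)]
emap = elevation_map(pts)
```

Downstream detection logic can then threshold cell heights to separate drivable ground from obstacles, while the second output type (parametric lanes and tracked cuboids) is produced by fitting models to the same point data.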
These computations are time- and resource-intensive. To make real-time decisions based on incoming environmental data, a system needs significant data bandwidth and processing power, so parallelism is a must. This is where the Xilinx UltraScale+ SoC comes into play.
High-Precision 3D Point Clouds on Xilinx IP Cores
A closer look at Xilinx's automotive-grade Zynq UltraScale+ MPSoC can illustrate how such devices produce stereovision.
Xilinx claims the Zynq UltraScale+ MPSoC provides key features for ADAS, including high-speed parallelized video and image processing on a Xilinx FPGA with algorithmic processing handled by an Arm Cortex-A53. Real-time event handling occurs on the Arm Cortex-R5.
Generalized block diagram of the automotive-grade Zynq UltraScale+ MPSoC. Screenshot used courtesy of Xilinx
“Stereo cameras are at the heart of Subaru’s ADAS applications," explains Subaru's chief technology officer Tetsuo Fujinuki. "Unlike common approaches, the image processing technology adopted in our new generation system scans everything captured by stereo cameras and creates high-precision 3D point clouds, enabling us to offer advanced features such as pre-collision braking at an intersection and assisting with hands-off driving in traffic congestion on a highway."
He concludes, "Because Xilinx automotive devices contain built-in capabilities that allow us to meet strict ASIL requirements, they are unquestionably the best technology to implement Subaru’s new ADAS vision system.”
In addition to its processing muscle, the UltraScale+ boasts an impressive array of features, from power management and error correction to hardened crypto-accelerators and secure storage of cryptographic keys.
How UltraScale+ Helps Automotive Engineers
The level of integration provided by the UltraScale+ eliminates a great deal of design work for hardware integration engineers while opening broad headroom for firmware and machine learning engineers.
Other engineering sub-disciplines that may benefit from this SoC include digital signal processing experts for FPGAs, firmware programmers for the real-time Cortex-R5 and Arm Mali graphics processors, and programmers developing algorithms for the Cortex-A53 application processors.
Stereovision: the Key to Fully Autonomous Vehicles?
Stereovision is one part of a mesh of interconnected technologies paving the way for fully autonomous vehicles.
A complete vision of an advanced driver-assistance system marrying several RF technologies with stereovision to produce a full 360° digitally processed environment. Image used courtesy of Synopsys
VLSI system-on-a-chip technology is providing the multi-core parallel processing capability to parse the immense amount of real-time data required for ADAS.