In an autonomous vehicle, reliable and accurate perception of the environment is critical to enable safe driving decisions. The output from the various sensors needs to be fused without loss of information in order to produce an accurate model of the environment that captures every surrounding object. The standard approach used by perception platforms currently available in the ADAS and AD market is object fusion, in which the object detections produced by each sensor type are combined to support driving decision-making.
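To make the object-fusion idea concrete, here is a minimal, hypothetical sketch of object-level fusion: each sensor reports its own object detections, and detections from different sensors are associated by proximity and merged. The function name, detection format, and gating distance are illustrative assumptions, not part of any actual perception platform described above.

```python
import math

def fuse_detections(camera_dets, radar_dets, gate=2.0):
    """Greedy nearest-neighbor object-level fusion (illustrative sketch only).

    Each detection is a dict with 'x', 'y' (metres, vehicle frame) and a
    'conf' score in [0, 1]. Camera/radar pairs closer than `gate` metres
    are merged by a confidence-weighted average of their positions;
    unmatched detections from either sensor pass through unchanged.
    """
    fused, used = [], set()
    for c in camera_dets:
        best, best_d = None, gate
        for i, r in enumerate(radar_dets):
            if i in used:
                continue
            d = math.hypot(c['x'] - r['x'], c['y'] - r['y'])
            if d < best_d:
                best, best_d = i, d
        if best is not None:
            r = radar_dets[best]
            used.add(best)
            w = c['conf'] + r['conf']
            fused.append({
                'x': (c['x'] * c['conf'] + r['x'] * r['conf']) / w,
                'y': (c['y'] * c['conf'] + r['y'] * r['conf']) / w,
                'conf': max(c['conf'], r['conf']),
            })
        else:
            fused.append(dict(c))
    # Radar detections with no camera match are kept as-is.
    fused.extend(dict(r) for i, r in enumerate(radar_dets) if i not in used)
    return fused
```

Note that because each sensor has already reduced its raw measurements to an object list before this step, information discarded by an individual sensor's detector cannot be recovered at fusion time; this is exactly the limitation that motivates fusing earlier in the pipeline.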
In this presentation we will explore how LeddarVision, the LeddarTech sensor-fusion and perception platform, combines AI and computer-vision technologies as well as deep neural networks with computational efficiency to scale up the performance of the AV sensors and hardware essential for planning the driving path.
This approach allows for safer autonomous driving with better detection and helps to overcome some of the limitations of single-sensor approaches. As this landscape continues to evolve, automakers are realizing that their specific design requirements often call for a more open, flexible, and scalable sensor-fusion and perception solution that accommodates a vast array of sensor options. Join us for this one-hour webinar as we explore the ADAS and AD automotive landscape, the challenges, and the technology presently being employed.
In Partnership with Vicor