
Intel, Waymo Compare Vision Systems in Wake of Uber’s First Fatal Autonomous Car Crash

April 07, 2018 by Donald Krambeck

In the wake of Uber's fatal autonomous car crash, companies such as Intel and Waymo are insisting that their autonomous vehicle systems would have prevented this tragedy. What are these companies doing differently, and will those differences be enough to keep public confidence in the safety of these vehicles?

On March 19th in Tempe, Arizona, one of Uber's test vehicles struck and killed a pedestrian, 49-year-old Elaine Herzberg, despite a safety driver sitting behind the wheel of the car. This is the first fatal autonomous vehicle collision involving a pedestrian; Tesla's Autopilot was involved in the first fatal autonomous crash with another vehicle in 2016.

There are clear ramifications for Uber directly, including Arizona's immediate suspension of Uber's autonomous vehicle testing privileges. Other autonomous vehicle manufacturers, however, are also anticipating backlash as the industry as a whole is cast in a negative light.

How does Uber's sensing technology compare to systems designed by competitors? How are these competing companies reacting to this historic and tragic milestone in autonomous vehicle history?

Uber's Approach to Autonomous Vehicle Sensor Systems

If you've seen the distressing video of the fatal crash, you'll notice that Uber's car, operating in autonomous mode, did not brake or attempt to avoid the pedestrian at all. This is a clear-cut failure of the system's ability to detect and avoid obstacles in its path, though it hasn't yet been established what precisely went wrong.

One obvious possibility is that the vehicle's sensors weren't operating properly, or weren't adequate for the driving conditions. For environment sensing, Uber uses Velodyne's LiDAR sensors. At its most basic level, LiDAR works by emitting laser pulses that reflect off objects in the environment; measuring how long each pulse takes to return reveals how far away each object is.
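The time-of-flight arithmetic behind LiDAR ranging can be sketched in a few lines. This is a minimal, hypothetical illustration of the principle only; the function name and constant are ours, not part of any Velodyne API:

```python
# Hypothetical illustration of the LiDAR time-of-flight principle:
# distance = (speed of light x round-trip time) / 2.
# The division by 2 accounts for the pulse traveling out and back.

SPEED_OF_LIGHT_M_S = 299_792_458  # meters per second

def lidar_distance_m(round_trip_time_s: float) -> float:
    """Convert a laser pulse's round-trip time into a distance in meters."""
    return SPEED_OF_LIGHT_M_S * round_trip_time_s / 2

# A pulse that returns after 200 nanoseconds hit an object ~30 m away.
print(round(lidar_distance_m(200e-9), 1))  # → 30.0
```

A real sensor sweeps thousands of such pulses per second across the scene to build a 3D point cloud; the hard part is not this arithmetic but interpreting the resulting cloud.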

Velodyne is one of several companies developing solid-state LiDAR for automotive use, aiming to reduce both the sensors' footprint and their cost. These advancements represent further investment in the future of autonomous driving that sensor companies are incentivized to protect.

 

Two iterations of the Velodyne Velarray sensor. Left: Velarray as pictured in its press release (image courtesy of Business Wire). Right: Velarray on display at CES 2018.

 

It remains uncertain, however, whether the incident was caused by this sensing technology, the decision-making systems that interpret sensor data, or some other portion of the process that autonomous vehicles use to make decisions.

Velodyne's President, Marta Hall, stated in no uncertain terms that "we do not believe the accident was due to LiDAR." She went on to say that Velodyne doesn't have anything to do with the decision-making systems that interpret the visual data their LiDAR sensors gather.

Also trying to distance itself from the fallout of the incident is NVIDIA, which was quick to point out that its partnership with Uber is limited to GPUs. Notably, NVIDIA offers its own AI platform for autonomous driving, NVIDIA DRIVE, which was not involved in the fatal incident. Even so, NVIDIA has decided to halt its own testing of autonomous vehicles on public roads. NVIDIA co-founder and CEO Jen-Hsun Huang believes it's extremely important that all autonomous vehicle companies take a step back and try to learn from this accident.

Intel Demonstrates Proprietary ADAS Over Uber Crash Footage

In a statement from Intel's newsroom, Professor Amnon Shashua, Senior Vice President at Intel Corporation and CEO and CTO of Mobileye (an Intel company), responded to the incident in broad strokes. He believes this is the right time to step back and analyze the current hardware and software designs in these vehicles. To ensure that autonomous vehicles are safe for pedestrians and drivers alike, Shashua argues, the field needs to scrutinize both sensing and decision-making.

Intel's response as a whole pointed out how its ADAS design differs from Uber's: its system includes features such as automatic emergency braking (AEB) and lane-keeping support that could have helped prevent this accident from ever occurring.
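AEB systems are commonly described in terms of time-to-collision (TTC): the seconds until impact at the current closing speed. The sketch below illustrates that idea with hypothetical thresholds; it is not Intel's or Mobileye's actual logic, and the function names and numbers are assumptions for illustration:

```python
# Hedged sketch of time-to-collision (TTC) based emergency-braking logic.
# Thresholds are illustrative only, not Mobileye's production values.

def time_to_collision_s(distance_m: float, closing_speed_mps: float) -> float:
    """Seconds until impact if neither vehicle nor obstacle changes speed."""
    if closing_speed_mps <= 0:
        return float("inf")  # not closing in on the obstacle
    return distance_m / closing_speed_mps

def aeb_decision(distance_m: float, closing_speed_mps: float) -> str:
    """Map a detected obstacle to a (hypothetical) system response."""
    ttc = time_to_collision_s(distance_m, closing_speed_mps)
    if ttc < 1.5:        # hypothetical hard-braking threshold
        return "brake"
    if ttc < 3.0:        # hypothetical driver-warning threshold
        return "warn"
    return "monitor"

# A pedestrian 20 m ahead with the car closing at 17 m/s (~61 km/h)
# leaves about 1.2 s to impact, so this sketch commands full braking.
print(aeb_decision(20, 17))  # → brake
```

The point of the demonstration dispute is upstream of this logic: an AEB rule like the one above can only fire if the perception layer detects the obstacle in the first place.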

In a bold demonstration, Intel ran its ADAS technology over the footage of the fatal accident in Tempe. In the demonstration, Intel's software detected Ms. Herzberg and her bicycle. The three images below include green detection boxes generated by pattern recognition and a "free-space" detection module.

 

The footage from Uber's crash overlaid with Mobileye's ADAS system response. Image courtesy of Intel.

 

According to Intel, this software comes standard in ADAS-equipped vehicles, which have collectively been driven billions of miles in testing.

Along with this software, Mobileye's Road Experience Management™ (REM) mapping system uses a high refresh rate to keep its Time to Reflect Reality (TTRR) low. On the hardware side, Mobileye's system-on-chip (SoC) comes from the EyeQ® family.

 

The Mobileye family of EyeQ chips and which level of autonomy each supports. Image from Mobileye.

 

What sets Mobileye apart is their proprietary computation cores (or accelerators). These accelerators are used for various computer-vision, signal-processing, and machine-learning tasks. Below is a list of what each programmable accelerator core provides.

  • The Vector Microcode Processors (VMP): a VLIW SIMD processor that provides hardware support for fine-grained operations common in computer-vision applications.
  • The Multithreaded Processing Cluster (MPC): designed to be more versatile than a GPU and more efficient than a typical CPU for its target workloads.
  • The Programmable Macro Array (PMA): enables computation density approaching that of fixed-function hardware accelerators without sacrificing programmability.
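To illustrate what a SIMD-style vector processor buys for vision workloads, here is a sketch in NumPy contrasting pixel-at-a-time scalar processing with a single vectorized operation. This is purely illustrative of the SIMD idea on a general-purpose CPU; it has nothing to do with actual EyeQ hardware or code:

```python
# Illustrative only: contrasts a scalar loop with a vectorized (SIMD-style)
# operation on image data, the kind of workload VMP-like cores accelerate.
import numpy as np

# A tiny 2x3 grayscale "image" with 8-bit pixel intensities.
image = np.array([[12, 200, 45], [90, 255, 3]], dtype=np.uint8)

# Scalar approach: threshold one pixel per loop iteration.
scalar_mask = np.zeros_like(image, dtype=bool)
for i in range(image.shape[0]):
    for j in range(image.shape[1]):
        scalar_mask[i, j] = image[i, j] > 128

# Vectorized approach: one operation thresholds the whole array at once,
# the way a SIMD unit applies one instruction to many pixels in parallel.
vector_mask = image > 128

print(np.array_equal(scalar_mask, vector_mask))  # → True
```

On dedicated silicon, that "one instruction, many pixels" structure is what lets a small, low-power core keep up with camera frame rates.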

Parallels with Waymo

Waymo CEO John Krafcik addressed the Uber accident as well, telling Forbes that "What happened in Arizona was a tragedy. It was terrible." Like Intel, Krafcik said he feels very confident that Waymo's car would have handled that situation.

Waymo, like Uber, uses LiDAR technology in their autonomous vehicles. In fact, the two companies have been engaged in a major lawsuit regarding patents on LiDAR sensing technology.

A differentiating factor, however, is that Waymo develops its hardware and software under the same roof. Waymo's sensors and the AI software that interprets their data are designed together and built into a single integrated system.

Waymo intends to open an autonomous ride-sharing business in Phoenix this year, putting even more scrutiny on its LiDAR technology and on Arizona's willingness to open public roadways to autonomous vehicles.

The Fate of an Industry

This incident is not the first to cast doubt on the safety of autonomous vehicles. Aside from lapses in sensor hardware or in the vision-processing software, plenty of issues can arise from actively malicious third parties interfering with these systems. From altered street signs to spoofing attacks that directly target autonomous systems, autonomous vehicles face many challenges.

Herzberg's death represents a major event for this industry, which has seen extraordinary interest from sensor developers and automakers alike. Will more companies, like NVIDIA, press pause on their testing programs? Will regulatory bodies respond with new rules? Given the response to Uber's first fatal crash so far, it's clear that 2018 will be a pivotal year for the autonomous vehicle industry.

16 Comments
  • MisterBill2 April 08, 2018

    The fact is that regardless of the vision system there is an intrinsic flaw in the concept, which is that the control computer will not recognize a potential hazard, only an actual one. The result is that much of the time the computer will not decide to take any corrective action until too late. The Tempe tragedy is a perfect example, in that a competent human driver would have anticipated a possible problem and moved over, either to the edge of the lane or possibly into the next lane, if it was clear. But the computer program saw no potential problem and thus maintained speed and concentrated on staying in the lane. The second failure is that the computer did not attempt to swerve and avoid the lady, probably because that was not a choice included in the software. This second flaw is also a show-stopper, in that there will never be an adequate number of options available for the computer to be able to select the best one, even if it was fast enough.

    Like. Reply
    • Alin Popescu April 10, 2018
      MisterBill2, you do realize that you just prove you have low IQ with this post? You have absolutely no understanding of what you are talking about, but somehow you stil act like you do. Actually those system are "Artificial intelligence", if you had any experience in the domain, you would know they aren't call "INTELLIGENCE" for nothing. These are programmed and taught to actually learn to predict accidents way better than humans drivers. Also waymo is already safer than human drivers. So what exactly was the point of you post?
      Like. Reply
      • virtualmo April 13, 2018
        @Alin Popescu – Wow, you're clearly the big man, here. Do you feel better about yourself now that you've bashed someone who may have less knowledge than you? And after that, how has your post forwarded the discussion? Let me help you: it hasn't. Lest you lack an understanding of what it's like to be flamed for having less knowledge than someone else, let me point out that you have a poor grasp of written English. How does that feel, big man? Congratulations – six out of six of your sentences exhibit poor grammar and/or spelling, and that equates to a 100% failure rate. With that track record, I hope you're not working on autonomous sensors or software. But, is any of this pertinent in this discussion? No. Neither is it pertinent to flame MB - even if he might be less knowledgeable than you. So, what exactly is the point YOU are trying to make, big man? I'm making the point that you're a troll. But I digress, so a return to the topic at hand. . . While "those systems" may be called "INTELLIGENT" for a reason, they are also ARTIFICIAL. Historically this technology will improve over time, but don't make the mistake of comparing the current state of this technology to the eons of evolution of NATURAL intelligence we humans enjoy – which has the distinct advantage of thinking outside of programmatic responses; as well as, the much less understood, but highly significant component of intuition. I've driven an estimated 300,000 accident free miles in my lifetime. I've done this by anticipating the actions of other drivers. While my attention is on the cars immediately in front and to the sides of me, I'm observing and analyzing the traffic several cars ahead. Since I am human, I can think like a human and therefore, can anticipate how other humans are going to act or react to a dynamic set of conditions.
        I think what MB is getting at is that there is no practical way to program the infinite variables that a modern driver, artificial or not, must process in real time. For machine learning to work, it has to be fed conditions to learn from (data, or what we might call "experience"). We contemporary humans are fed that data over 15+ years as passengers before we get behind the wheel. That's a lot of experience, even if it's second hand. Autonomous vehicles don't have that experience base; thus, vehicle engineers have to feed that experience, that data, into the system. Simply identifying a hazard and slamming on the brakes is not adequate – even if it slams the brakes on 100 times faster than a human can. But, it's a false dichotomy to pit all human drivers against all AI drivers. I will be the first to state that most humans aren't equipped mentally to be really good drivers – IMO. If they were, my 300,000 miles of accident free driving wouldn't be remarkable. There have been times, once this winter in fact, when I have had to do the unnatural thing of accelerating through a hazard to avoid a collision when my body's "automatic" reaction favors hard braking. I challenge an autonomous vehicle and its engineering team to program that kind of possible reaction. Those unconventional responses, of which there are many in my history, happen too fast to think about, which is why I earlier introduced intuition into this discussion. What I think would be awesome would be a BigBlue vs. Kasparov event applied to this arena: let's put a GoogleMobile in the Indy500. Ultimately, it's ignorant to claim that AI is superior to human intelligence at this stage of the game. Pump the brakes on the techno-chauvinism.
        Like. Reply