In the wake of Uber's fatal autonomous car crash, companies such as Intel and Waymo are insisting that their autonomous vehicle systems would have prevented this tragedy. What are these companies doing differently—and will those differences be enough to keep public confidence in the safety of these vehicles?

On the night of March 18th in Tempe, Arizona, one of Uber's test vehicles struck and killed a pedestrian, 49-year-old Elaine Herzberg, despite a safety driver sitting behind the wheel of the car. This is the first pedestrian fatality caused by an autonomous vehicle; Tesla's Autopilot was involved in the first fatal crash of a partially autonomous vehicle back in 2016, when a Model S collided with a tractor-trailer.

There are clear ramifications for Uber directly, including Arizona's immediate suspension of Uber's autonomous vehicle testing privileges. Other autonomous vehicle manufacturers, however, are also anticipating backlash as the industry as a whole is cast in a negative light.

How does Uber's sensing technology compare to systems designed by competitors? And how are those competitors reacting to this tragic milestone in autonomous vehicle history?


Uber's Approach to Autonomous Vehicle Sensor Systems

If you've seen the distressing video of the fatal crash, you'll notice that Uber's car, operating in autonomous mode, did not brake or attempt to avoid the pedestrian at all. This is a clear-cut failure of the vehicle's ability to detect and avoid obstacles in its path, though it hasn't yet been established what precisely went wrong.

One obvious possibility is that the vehicle's sensors weren't operating properly or weren't adequate for the driving conditions. For environment sensing, Uber uses Velodyne's LiDAR sensors. At its most basic level, LiDAR works by emitting laser pulses and timing how long each pulse takes to reflect back off objects in the environment.
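
To make the time-of-flight principle concrete, here is a minimal sketch, illustrative only and not Velodyne's implementation, of converting a pulse's measured round-trip time into a range estimate:

```python
# Minimal LiDAR time-of-flight sketch (illustrative; not Velodyne's implementation).
# Range follows from the round-trip time of a laser pulse:
#   distance = (speed of light * round-trip time) / 2
SPEED_OF_LIGHT_M_PER_S = 299_792_458

def range_from_round_trip(round_trip_s: float) -> float:
    """Return the one-way distance in meters for a measured round-trip time."""
    return SPEED_OF_LIGHT_M_PER_S * round_trip_s / 2

# A pulse returning after ~66.7 nanoseconds puts the target at roughly 10 m.
print(f"{range_from_round_trip(66.7e-9):.2f} m")
```

A real scanning unit fires many such pulses per second across multiple channels, building a 3D point cloud of the surroundings from the individual range measurements.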

Velodyne is among several companies developing solid-state LiDAR for automotive purposes, aiming to reduce both the sensors' footprint and their cost. These advancements represent further investment in the future of autonomous driving, an investment that sensor companies are incentivized to protect.


Two iterations of the Velodyne Velarray sensor. Left: Velarray as pictured in its press release (image courtesy of Business Wire). Right: Velarray on display at CES 2018.


It remains uncertain, however, whether the incident was caused by the sensing technology, by the decision-making systems that interpret sensor data, or by some other part of the pipeline the vehicle uses to turn sensing into action.

Velodyne's president, Marta Hall, stated in no uncertain terms that "we do not believe the accident was due to LiDAR." She went on to say that Velodyne has nothing to do with the decision-making systems that interpret the data its LiDAR sensors gather.

Also trying to distance itself from the fallout of the incident is NVIDIA, which was quick to point out that its partnership with Uber is limited to GPUs. Notably, NVIDIA offers its own AI platform for autonomous driving, NVIDIA DRIVE, which was not involved in the fatal incident. In a show of proactivity, however, NVIDIA has decided to halt its own testing of autonomous vehicles on public roads. NVIDIA co-founder and CEO Jen-Hsun Huang believes it is extremely important that all autonomous vehicle companies take a step back and try to learn from this accident.


Intel Demonstrates Proprietary ADAS Over Uber Crash Footage

In a statement from Intel's newsroom, Professor Amnon Shashua, Senior Vice President at Intel Corporation and CEO and CTO of Mobileye (an Intel company), responded to the incident in broad strokes. He believes this is the right time to step back and analyze the current hardware and software designs in these vehicles. To ensure that autonomous vehicles are safe for pedestrians and drivers alike, Shashua argues, the field needs to scrutinize both sensing and decision-making.

Intel's response as a whole pointed out how its ADAS design differs from Uber's: the Mobileye system includes features such as automatic emergency braking (AEB) and lane keeping support that could have helped prevent this accident from ever occurring.

In a bold demonstration, Intel ran its ADAS software over the footage of the fatal accident in Tempe. The software detected Ms. Herzberg and her bicycle. The three images below include green detection boxes generated by pattern recognition and a "free-space" detection module.


The footage from Uber's crash overlaid with Mobileye's ADAS system response. Image courtesy of Intel.


According to Intel, this software comes standard in ADAS-equipped vehicles, which have collectively been driven billions of miles in testing.
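
For readers curious what such an overlay loop looks like in practice, here is a rough sketch built on generic OpenCV calls and a hypothetical detector stub; it is not Mobileye's software, whose models and interfaces are proprietary:

```python
# Rough overlay sketch (hypothetical detector; not Mobileye's proprietary software).
# Reads video frames, runs a detector, and draws green boxes around anything
# classified as a pedestrian or bicycle, mimicking the demonstration imagery.
import cv2  # OpenCV for video I/O and drawing

def detect_objects(frame):
    """Hypothetical stand-in for a trained vision model.
    Returns a list of (label, x, y, w, h) detections."""
    return []  # a real ADAS stack would run pattern recognition here

cap = cv2.VideoCapture("crash_footage.mp4")  # hypothetical file name
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break  # end of footage
    for label, x, y, w, h in detect_objects(frame):
        if label in ("pedestrian", "bicycle"):
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)  # green box
    cv2.imshow("ADAS overlay", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
        break
cap.release()
cv2.destroyAllWindows()
```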

Along with this software, Mobileye's Road Experience Management™ (REM) mapping relies on an ultra-high refresh rate to keep its Time to Reflect Reality (TTRR), the lag between a change in the real world and that change appearing in the map, comfortably low. In terms of hardware, Mobileye's system-on-chip (SoC) comes from its EyeQ® family.


The Mobileye family of EyeQ chips and which level of autonomy each supports. Image from Mobileye.


What sets Mobileye apart is its proprietary computation cores (or accelerators), which handle various computer-vision, signal-processing, and machine-learning tasks. Below is a list of what each programmable accelerator core provides, followed by a sketch of the kind of data-parallel workload such cores are built for.

  • The Vector Microcode Processor (VMP) is a VLIW SIMD processor that provides hardware support for the kinds of operations common in computer vision applications.
  • The Multithreaded Processing Cluster (MPC) is, according to Mobileye, more versatile than any GPU and more efficient than any CPU.
  • The Programmable Macro Array (PMA) enables computation density nearing that of fixed-function hardware accelerators without sacrificing programmability.
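
To give a flavor of the data-parallel structure such cores exploit, the sketch below implements a 2D image convolution, a staple computer-vision operation, in generic NumPy; it illustrates the workload pattern only and says nothing about EyeQ internals:

```python
# Generic data-parallel convolution sketch (unrelated to EyeQ internals).
# SIMD-style accelerators thrive on exactly this pattern: one multiply-accumulate
# applied across an entire image at once.
import numpy as np

def convolve2d(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Valid-mode CV-style convolution (correlation without kernel flipping)."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(kh):
        for j in range(kw):
            # Each kernel tap updates every output pixel in one vector operation.
            out += kernel[i, j] * image[i:ih - kh + 1 + i, j:iw - kw + 1 + j]
    return out

sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]])      # horizontal-gradient edge kernel
frame = np.random.rand(480, 640)     # stand-in for a camera frame
edges = convolve2d(frame, sobel_x)   # result shape: (478, 638)
```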


Parallels with Waymo

Waymo CEO John Krafcik addressed the Uber accident as well, telling Forbes that "What happened in Arizona was a tragedy. It was terrible." Like Intel, Krafcik said he feels very confident that Waymo's car would have handled that situation.

Waymo, like Uber, uses LiDAR technology in its autonomous vehicles. In fact, the two companies were recently embroiled in a major lawsuit over LiDAR sensing technology.

A differentiating factor, however, is that Waymo develops its hardware and software under the same roof: its sensors are designed alongside the AI software that interprets their data, and both are built into a single integrated system.

Waymo intends to launch an autonomous ride-sharing service in Phoenix this year, putting even more scrutiny on its LiDAR technology and on Arizona's approach to autonomous vehicles on its public roadways.


The Fate of an Industry

This incident is not the first to cast doubt on the safety of autonomous vehicles. Aside from failures in sensors and vision systems or in the software that processes their data, plenty of issues can arise from actively malicious third parties interfering with these systems. From altered street signs to spoofing attacks that directly target autonomous systems, autonomous vehicles face many challenges.

Herzberg's death represents a major event for an industry that has seen extraordinary interest from sensor developers and automakers alike. Will more companies follow NVIDIA and press pause on their testing programs? Will regulatory bodies respond with new rules? Given the reaction to Uber's fatal crash so far, it's clear that 2018 will be a linchpin year for the autonomous vehicle industry.


Comments



  • MisterBill2 2018-04-08

    The fact is that, regardless of the vision system, there is an intrinsic flaw in the concept: the control computer will not recognize a potential hazard, only an actual one. The result is that much of the time the computer will not decide to take any corrective action until too late. The Tempe tragedy is a perfect example, in that a competent human driver would have anticipated a possible problem and moved over, either to the edge of the lane or possibly into the next lane, if it was clear. But the computer program saw no potential problem and thus maintained speed and concentrated on staying in the lane. The second failure is that the computer did not attempt to swerve and avoid the lady, probably because that was not a choice included in the software. This second flaw is also a show-stopper, in that there will never be an adequate number of options available for the computer to be able to select the best one, even if it were fast enough.

    • Alin Popescu 2018-04-10

      MisterBill2, you do realize that you just prove you have low IQ with this post?

      You have absolutely no understanding of what you are talking about, but somehow you stil act like you do.

      Actually those system are “Artificial intelligence”, if you had any experience in the domain, you would know they aren’t call “INTELLIGENCE” for nothing. These are programmed and taught to actually learn to predict accidents way better than humans drivers.

      Also waymo is already safer than human drivers.  So what exactly was the point of you post?


      • virtualmo 2018-04-13

        @Alin Popescu – Wow, you’re clearly the big man, here.  Do you feel better about yourself now that you’ve bashed someone who may have less knowledge than you?  And after that, how has your post forwarded the discussion? Let me help you: it hasn’t.  Lest you lack an understanding of what it’s like to be flamed for having less knowledge than someone else, let me point out that you have a poor grasp of written English.  How does that feel, big man?  Congratulations – six out of six of your sentences exhibit poor grammar and or spelli, and that equates to a 100% failure rate.  With that track record, I hope you’re not working on autonomous sensors or software.  But, is any of this pertinent in this discussion? No.  Neither is it pertinent to flame MB - even if he might be less knowledgeable than you.  So, what exactly is the point YOU are trying to make, big man?  I’m making the point that you’re a troll.  But I digress, so a return to the topic at hand. . .

        While “those systems” may be called “INTELLIGENT” for a reason, they are also ARTIFICIAL.  Historically, this technology will improve over time, but don’t make the mistake of comparing the current state of this technology to the eons of evolution of NATURAL intelligence we humans enjoy – which has the distinct advantage of thinking outside of programmatic responses, as well as the much less understood but highly significant component of intuition.  I’ve driven an estimated 300,000 accident-free miles in my lifetime.  I’ve done this by anticipating the actions of other drivers.  While my attention is on the cars immediately in front and to the sides of me, I’m observing and analyzing the traffic several cars ahead.  Since I am human, I can think like a human and therefore can anticipate how other humans are going to act or react to a dynamic set of conditions.  I think what MB is getting at is that there is no practical way to program the infinite variables that a modern driver, artificial or not, must process in real time.  For machine learning to work, it has to be fed conditions to learn from (data, or what we might call “experience”).  We contemporary humans are fed that data over 15+ years as passengers before we get behind the wheel.  That’s a lot of experience, even if it’s second hand.  Autonomous vehicles don’t have that experience base; thus, vehicle engineers have to feed that experience, that data, into the system.  Simply identifying a hazard and slamming on the brakes is not adequate – even if it slams the brakes on 100 times faster than a human can.

        But, it’s a false dichotomy to pit all human drivers against all AI drivers.  I will be the first to state that most humans aren’t equipped mentally to be really good drivers – IMO.  If they were, my 300,000 miles of accident-free driving wouldn’t be remarkable.  There have been times, once this winter in fact, when I have had to do the unnatural thing of accelerating through a hazard to avoid a collision when my body’s “automatic” reaction favors hard braking.  I challenge an autonomous vehicle and its engineering team to program that kind of possible reaction.  Those unconventional responses, of which there are many in my history, happen too fast to think about, which is why I earlier introduced intuition into this discussion.

        What I think would be awesome would be a Deep Blue vs. Kasparov event applied to this arena: let’s put a GoogleMobile in the Indy 500.

        Ultimately, it’s ignorant to claim that AI is superior to human intelligence at this stage of the game.  Pump the brakes on the techno-chauvinism.

        • virtualmo 2018-04-13

          PS: my misspelling of “spelling” was irony, in case that was missed.

    • kjmclark 2018-04-13

      Don’t know about the low IQ part, but you do have to wonder what in the world MB is talking about. 
      - “competent human driver would have anticipated…” ???  A competent human driver might have slowed down because it was dark, but the rest of that sentence makes no sense.  Leaving your lane because “something might happen!” is dangerous and illegal.
      - “... computer did not attempt to swerve and avoid… ” the system should have braked first.  Believe it or not, hitting the brakes buys you time to come up with other alternatives. 
      - “... never be an adequate number of options available…”  ???  There’s always hitting the brakes, which those systems are better at than people are, and that SUV in particular has terrific brakes. 

      Really, this is just Uber’s system’s incompetence.  You can just stop there.

      • Thenextman 2018-04-13

        Don’t forget the assistance driver. I feel bad laying any type of blame at his feet, because I can see myself being completely distracted as well while babysitting a self-driving car. That said, the video is misleading in that it makes viewers feel like they would not have reacted even if they were paying attention. The road was simply not as dark as the Uber video suggests - https://www.youtube.com/watch?v=gM-OsmGRh3k - sure, the camera on the left could be enhancing visibility somewhat, but it is a LIT street, not some rural two-lane highway with no lights. I’d like to think that Uber’s training for backup drivers stressed that it was experimental technology that NEEDED to be watched very attentively - i.e., don’t be looking at your phone.

        Beyond this, yes, Uber’s tech failed catastrophically - this is exactly the type of situation where we would expect the autonomous vehicle to perform better than a human driver. If I had to bet, I would say the LiDAR system definitely saw her, and that the flaw lies in Uber’s hardware or software.

        What is especially distressing is that, in my mostly uninformed opinion, Volvo’s stock driver assistance technology, before being removed by Uber, would have stopped or at least slowed the vehicle before collision.

      • virtualmo 2018-04-13

        “Really, this is just Uber’s system’s incompetence.  You can just stop there.” 

        No, YOU can stop there.  In fact stop before that.  Oops, unless of course you are a forensic investigator with deep, firsthand knowledge of the vehicle, its operating system, the sensor arrays, and the software matrix installed on this vehicle.  AND, unless you are part of the investigative team that has concluded that, “. . . this is just Uber’s system’s incompetence.”

        C’mon people! I expect us engineers, technicians, and generally tech-savvy audience of this forum to think MUCH more critically.  Use some logic, use some reason, use some part of your brain besides the impulse to rush to judgement. Don’t be a sucker to the media and propaganda. Isn’t that type of behavior melting down our culture enough? Check yourself for Pete’s sake.

        For the record, I’m not a fan of autonomous vehicles - period.  Besides the fact that I really enjoy driving, I have driven ~300K miles over 36 years ACCIDENT FREE.  So, I’ll take on ANY current generation of autonomous vehicles, and possibly the next one as well.  I challenge these readers to think more broadly.  One thing to consider is this: the POTENTIAL safety and convenience of this technology is a collateral benefit at best.  The real motivation behind autonomous vehicles is the centralized control of all technology for the ultimate aim of controlling the movement and activities of people.  Do your homework. Think beyond CNN and MSNBC – OPEN YOUR MIND AND EYES - dig into UN Agenda 2030.  (Sure, roll your eyes all you want, but please do so after reading through that document and interpreting its vagaries. It explicitly details the effort to concentrate people into “population centers” and to use autonomous vehicles to limit access beyond urban boundaries.)  (https://sustainabledevelopment.un.org/post2015/transformingourworld)

        But I digress. . .

        You think there’s nothing fishy in this incident and the “news” coming out of it?  You don’t see the release of the “official dash-cam video” as odd unto itself?  Why would you take it at face value?  (See THENEXTMAN’s link below.) You don’t think it’s weird that the “news” reported that the investigative team concluded that no human would have been able to avoid hitting this jay-walking pedestrian?  Bullshit.  To your point, KJMCLARK, this appears to be a system issue that indicates the technology is not ready for primetime.  But instead of there being some voice of reason out there saying, “there appears to be more development time needed before this technology is unleashed on the public,” the tech is absolved of responsibility because “no human could have prevented the accident.”  Again: B.S. Tell me this: if the tech is supposed to be so awesome – and superior to human action – why is it held to such a low threshold as that of human capability?  Why isn’t the tech held to a higher standard that reflects how superior it is?  I’ll tell you why: because it’s not superior.

        As for leaving one’s lane being “dangerous and illegal,” what planet do you drive on? Lane changes – especially those to avoid danger or collisions – are an everyday practice for the rest of us.  What cop is going to ticket you for an improper lane change because you swerved to avoid a cross-walking pedestrian?!?  Your argument sounds almost as moronic as Alin P.’s.

        Lay off the armchair quarterbacking.  You weren’t there, and you don’t appear to know squat about the system, its competencies, OR its incompetencies.  But it does seem to be a system flaw or failure, I’ll give you that – but that’s not what the hyper-critical news is reporting.  Am I the only one here who finds that suspicious?

        • kjmclark 2018-04-20

          I find it suspicious that you’re off your medication.  You don’t seem to have read the article, the comment I was responding to, or my comment for the most part.  Probably should have stopped before commenting.

  • ronsoy2 2018-04-13

    The point of that post is that any human driver that was paying attention would have slammed on the brakes. The machine didn’t.

  • Heath Raftery 2018-04-13

    This tendency to see-a-problem-solve-a-problem is flawed. Agile is a wonderful technique for designing webpages, but when Silicon Valley interacts with the physical world, the cracks start appearing. What if the solution to these “flaws” is not to whack-a-mole and wait for the next one? What if the problem is actually the assumption that technology companies should use commercial techniques to aim for autonomous cars? Sooner or later we need to decide whether we want programmers playing computer games on our roads.

    Driver assistance technologies have made a tremendous improvement to the safety and comfort of driving. Autonomous technologies have a place in the lab, or in other domains where they don’t interact with unassuming humans, such as planes and trains. Why jeopardise the progress of driver assistance techniques for the moonshot of autonomous techniques?

    • virtualmo 2018-04-13

      While I generally agree with the thoughtfulness of the questions you postulate, I don’t agree that DA technologies are an improvement to the traffic landscape as a whole.  What I see is that as technology replaces the need for drivers to be alert and to react competently, they are becoming worse at the skill of driving overall.  Just try driving in Massachusetts; you’ll see exactly what I see on a daily basis: distracted drivers drifting in and out of lanes, extreme tailgating at high speeds, and countless near-miss and not-so-near-miss collisions.  Just look at the television advertisements for cars with these technologies: they almost always show the technology saving the ass of a distracted driver.  What’s the message? Get driver assistance and enjoy more time poring over your Facebook feed.

      If we want to increase safety on our roads and reduce injuries and deaths from automobile accidents, we need to set much higher standards for driver education and institute much more driver training (possibly even a nationalized standard, instead of state-to-state standards, which are CLEARLY lower in some states than in others).

  • gnagy 2018-04-13

    The problem is that current AI is at least 3 levels of abstraction below human drivers. They analyse the surroundings and make decisions at a very low level, no matter how much data they were trained with.
    They suck at extrapolating into the future. I have driven a car with such a system, and as soon as it couldn’t see the lane dividers beyond a slight uphill, it gave up and demanded that I take the steering wheel.
    A human driver has no problem understanding that the highway continues beyond a small hump in the road, even if there are no other cars in sight, or anything that can be used as a reference.
    Not to mention that a human driver understands much higher-level concepts, like anticipating the actions of other drivers, pedestrians, bicyclists, etc., based on an understanding of how other human minds act.
    Understanding human nature like that is far beyond the capabilities of today’s AI.

    • gnagy 2018-04-13

      I would barely trust an AI to drive a car in a static environment, let alone in one with complex, thinking agents (humans).
      It’s (relatively) trivial to train AI for the “easy” situations (clear day, clearly visible signs and markings), but don’t be fooled into thinking it will be able to handle the difficult cases (bicyclists, or drunk, unpredictable pedestrians in the dark; kids playing on the sidewalk and jumping onto the road chasing a ball; missing lane markings; rain; fog; vandalized traffic signs; etc.).
      We are at least 15-20 years away from AI that can match a human in difficult driving situations. Those pretty much require human-level AGI (Artificial General Intelligence).

    • kjmclark 2018-04-20

      1) You’re assuming that most human motorists are attempting to intuit what other people around them are doing.  That’s kind of a puzzling and unnecessary assumption.  It’s like assuming that you need to be able to read the mind of another chess player to beat them at chess, which we know isn’t true.
      2) These situations you think are so tricky are pretty easily resolved - maintain a safe distance, have some basic understanding of the type of things that can go wrong, react much faster than any human can. 
      3) A human driver has no problem assuming without evidence that nothing will go wrong beyond that small hump in the road, and driving as though nothing could go wrong.  You’ve never seen someone slamming on their brakes and skidding out of control because they assumed things that they couldn’t see? 

      Software-driven vehicles will succeed not because they have the best human capabilities of the best empaths and deep thinkers, but because they don’t have the poor assumptions, skills, and inattention of the typical human driver.  It’s operating a machine, not acting on stage.  It actually requires about the brains of a songbird, but faster reaction times and better attention skills, which these systems are amply capable of.

      • virtualmo 2018-04-21

        Damn, dude, are you STILL being bullied at 49? (Are you even 49, because you act a lot like my 13 year old niece.)  Or maybe your Real Doll™ has a headache tonight.  Whatever your issue, the more you type, the more your inner troll just oozes out.  Off my meds?? Seriously? IS that the best you can do?  Advice for forums:  Try not to melt down over other well-articulated responses – even if you choose to see them as tangential.  It’s quite clear that I read the post and all subsequent replies. I even quoted you to clearly identify what I was countering in your reply above.  Just like I’m going to do again here:

        Regarding your item #1: Gnagy makes a pretty simple statement: he doesn’t believe that the current state of this technology is sophisticated enough to be let loose on the streets.  That’s about the full extent of it.  Yet you go on to extrapolate a whole series of assumptions.  How do you know what gnagy is assuming?  It’s YOU who is doing all the assuming - and assuming quite a bit. In fact, you use the word assume in 3/3 sentences.  That’s a lot of assuming - on YOUR part. Mind-reading chess? What? That’s such a poor analogy.  There’s no mind reading in driving or chess, but both depend on extrapolating probabilities given a set of possibilities.  Mind reading?!?

        #2: What are you even talking about?!? Have you spent a minute behind the wheel yourself?  Did daddy let you do more than steer the car around the driveway, Raymond?  Of course the act of maneuvering a vehicle through traffic is extremely complex. That might explain why so many people die each year doing it, duh! “...are pretty easily resolved.” What?!?  Wake up, your Voltron doll is not driving you to school each day, junior.  Maintaining a safe distance - sure, that’s pretty straightforward tech - I’ll give you that. But I love how you gloss over the complexity of “a basic understanding of the kinds of things that can go wrong.” Yeah, sounds like somebody needs a basic understanding of the kinds of things that can go wrong.  Dude, newsflash: this is where it gets REALLY complex, REALLY quickly.

        #3. I’m going to just skip this one altogether. I have no idea what you’re saying here. Are you being facetious?

        “Software-driven vehicles will succeed not because they have the best human capabilities of the best empaths and deep thinkers. . .”  Let’s deconstruct this insanity: first of all, no vehicle, software driven or not, has human capabilities.  They have hardware and software capabilities.  They aren’t empathic, as they don’t feel. They aren’t deep thinkers - when’s the last time a vehicle pondered its own consciousness or its place in the universe? In fact, they don’t ‘think’ at all; they regurgitate routines that they are pre-programmed with.  Your predilection for machine love is laughably unrealistic and your arguments incoherent. But you’re still entitled to post such blather, I guess.

  • catalin_cluj 2018-04-20

    I saw a post about this at Shouldopia.com:
    “Self driving cars should have a standard minimum amount of sensors and processing capability”.
    I feel like we’re still in the Wild West here, and much benefit would come from competing companies joining forces until self-driving cars are safe enough.
    They can compete afterwards, or on extras.