Industry Article

The Ethical Minefield for Autonomous Vehicles

April 04, 2017 by Rudy Ramos, Mouser

As autonomous vehicle technology progresses, public acceptance lags behind.

As autonomous vehicle technology progresses, one area lags behind: public acceptance. This article looks at some of the ethical dilemmas and current lawsuits hindering consumers' acceptance of, and trust in, autonomous cars.

The job title “elevator operator” is rarely heard these days, but in the 1940s, New York alone had more than 23,000 elevator operators in 2,000 buildings. When they went on strike in 1945, demanding higher wages, newspapers reported that the elevators across the city were unusable and “vast throngs of office workers … struggled up stairways that seemed endless, including those of the Empire State Building.”


Will public acceptance lag decades behind engineering capability when it comes to autonomous vehicles?


Today, the concept seems bizarre: men and women being paid to stand or sit inside elevators and “drive” them from floor to floor. Perhaps one day, people will think it is equally bizarre that we once steered our cars ourselves.

What’s even more bizarre is the fact that, by 1945, fully automated elevators had already been available for decades, but passengers remained reluctant to use them, mainly due to safety concerns.

Will the development of autonomous vehicles follow a similarly slow path? Will public acceptance lag decades behind engineering capability? The answers to these questions hinge on ethical and liability issues that seem intensely complicated – and ultimately, on public perception of the industry’s solutions to those issues.


The Trolley Problem

One popular way to highlight these issues is to consider “The Trolley Problem,” a thought experiment in which a subject is forced to choose between two unpleasant outcomes.


The Trolley Problem. Image courtesy of McGeddon [CC SA-4.0]


Imagine that the lane ahead of an autonomous vehicle is suddenly blocked by a falling boulder. A collision would be fatal. The car must choose between running into oncoming traffic in the other lane or driving off the road into a river. Ideally, we would like the vehicle to choose the least harmful outcome. Perhaps we would prefer the car to run into the river and risk the life of its elderly driver, instead of crashing into a busload of school kids (the driver might not agree with us, of course).

Plainly, though, that is asking the car to have a depth of understanding of the world far beyond any current-generation AI. In fact, it is a choice that a human driver could not fully weigh in the brief moments available, either. This in itself suggests that the impact of adding current-generation AI into the safety equation is perhaps overestimated.

It is important to realize that these tough ethical and liability questions are not new. This is because we already live in a world where complex systems, involving humans, machines, and nature, interact and produce unpredictable outcomes. We already live in a world where we usually do not have the resources or information to make optimal decisions at all times – we can only do our best. Extensive standards and codes of best practice have already been developed to guide developers through this complex world.


The FTC vs. D-Link

Thought experiments like The Trolley Problem do have a role to play in providing a theoretical framework for understanding ethical and liability questions. But it is real-world examples, such as the recent case in which the FTC filed a lawsuit against IoT hardware manufacturer D-Link for negligence in protecting its customers from hacking attacks, that illustrate how the legal system assesses liability in practice.

Here are some key phrases from that case: according to the FTC, consumers’ “sensitive data” was at risk. The FTC says that the company claimed that its products were “easy to secure,” but actually “failed to take reasonable steps to” avoid “widely known and reasonably foreseeable risks.”

The FTC vs. D-Link case is a single example, but it suggests how companies can reduce their potential liability by identifying risks, building robust systems to cope with those risks – and, finally, being cautious in their claims about product safety.

Autonomous vehicle systems developers can start by becoming familiar with international standards for safe software and hardware development, such as the base IEC 61508 standard and its automotive derivative, ISO 26262. These standards lay out guidelines for the design of safety-critical products and systems throughout the entire development cycle – starting with initial risk assessment.
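To give a flavor of that initial risk assessment: ISO 26262's hazard analysis classifies each hazardous event by Severity (S1–S3), Exposure (E1–E4), and Controllability (C1–C3), and maps the combination to an Automotive Safety Integrity Level (QM, then ASIL A through D). The standard's published classification table follows an additive pattern, which the simplified sketch below reproduces – this is an illustration for orientation only, not a substitute for the normative text of ISO 26262-3.

```python
# Simplified sketch of ISO 26262 hazard classification (after ISO 26262-3).
# Severity S1-S3, Exposure E1-E4, Controllability C1-C3. The standard's
# table follows an additive pattern: higher combined scores -> stricter ASIL.

def asil(severity: int, exposure: int, controllability: int) -> str:
    """Return the ASIL for one hazardous event.

    severity: 1-3, exposure: 1-4, controllability: 1-3.
    """
    if not (1 <= severity <= 3 and 1 <= exposure <= 4
            and 1 <= controllability <= 3):
        raise ValueError("classification parameter out of range")
    total = severity + exposure + controllability
    # Sums of 7, 8, 9, 10 map to ASIL A, B, C, D; anything lower is QM
    # (quality management only, no ASIL-specific requirements).
    return {7: "A", 8: "B", 9: "C", 10: "D"}.get(total, "QM")

# A fatal, highly probable, hard-to-control hazard gets the strictest level:
print(asil(3, 4, 3))  # -> D
# A low-severity, rare, easily controlled hazard needs no ASIL at all:
print(asil(1, 1, 1))  # -> QM
```

Each ASIL then dictates how rigorous the downstream development process must be – ASIL D hazards (for example, unintended full braking at highway speed) carry the heaviest verification and validation requirements.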



We might hope that one day, tests will prove that autonomous vehicles are significantly safer than human drivers and will save lives, and that they will then be swiftly accepted. However, the decades it took the public to accept automatic elevators suggest otherwise. Elevators won acceptance because the technology had been tested and proven over many years – and because, as the 1945 New York elevator operators’ strike showed, the rising cost of labor was making human operators uneconomic.

Industry Articles are a form of content that allows industry partners to share useful news, messages, and technology with All About Circuits readers in a way editorial content is not well suited to. All Industry Articles are subject to strict editorial guidelines with the intention of offering readers useful news, technical expertise, or stories. The viewpoints and opinions expressed in Industry Articles are those of the partner and not necessarily those of All About Circuits or its writers.