News

What Does the Dallas Shooting Mean for Ethics in Robotics?

July 11, 2016 by Kate Smith

The decision of Dallas police officers to use a robot with intent to kill set a historic precedent.

Stopping an Active Shooter in Dallas

On July 7th, Micah Xavier Johnson opened fire on police officers on duty at a peaceful protest in downtown Dallas. Five police officers were killed in the ensuing gunfight and twelve more were injured.

The details of the incident are still being reported, but we do know that Johnson was ultimately killed after he was cornered and negotiations failed.

The method by which the Dallas Police Department killed Johnson has since come under immense scrutiny. According to a statement released by the department, officers used a REMOTEC Andros Mark V-A1 bomb disposal robot to deliver C4 explosives to the suspect and detonate them, killing him.

This is the first intentional use of lethal force by police using such a device on US soil.

Video: Dallas Police Chief David Brown explains the situation.

It's important to note that a "robot" is defined as a machine that is autonomous or semi-autonomous. Because the REMOTEC Andros Mark V-A1 does not fall into this category, it is not technically a robot at all. However, because Northrop Grumman (the machine's manufacturer), the US military, and the media all refer to these machines as robots, the word may soon be co-opted into the public lexicon.


Robotics Used by the Police and Military

Make no mistake: unmanned vehicles have been used by police many times in the past. Their primary functions in police work are to investigate suspected bombs, deactivate bombs, and occasionally deliver tear gas. In one instance, a REMOTEC Andros F6A was used to deliver pizza and a phone to a man on a Silicon Valley overpass who was feared to be suicidal.


An Andros F6A delivers pizza and a phone to a possibly suicidal man. Image courtesy of IEEE.


They’ve also been used regularly in military applications, especially to find and detonate dangerous IEDs (improvised explosive devices). Semi-autonomous robots are increasingly used as medics, pack mules, and surveillance drones.

Robots have also been used to deliver lethal force, particularly in the drones deployed for airstrikes.

When it comes to ground-based robotics, however, there have been few reports of robots delivering lethal force. There is at least one case, covered by modern warfare expert Peter W. Singer in his book, Wired for War, where US troops in Afghanistan rigged explosives to an EOD robot to neutralize an enemy combatant.

But this is the first time that such a tactic has been used on an American citizen.


Concerns from Robotics Experts and Roboethicists

Beyond the immediate and often emotionally charged responses to the incident that have appeared in the media, robotics experts and ethicists have also publicly come forward to assess the situation. Some bridge the two worlds, operating in a field that’s come to be known as “roboethics”.

For example, Patrick Lin is the Director of the Ethics + Emerging Sciences Group, based at California Polytechnic State University. He recently wrote an article on the IEEE website posing the question, “Should the Police Have Robot Suicide-Bombers?”

(Read more of Patrick Lin’s writings on technology and ethics here.)

Robotics and ethics continue to be interwoven disciplines as robots become more prevalent and more advanced.

This is hardly the first time robotics has faced questions regarding its ethical implications. For example, experts have been contemplating the ethics of allowing robots to replace human workers for years.


A prototype unmanned medic. The teddy bear face was intended to soothe wounded soldiers. Image courtesy of the US Army MRMC.


There’s also extraordinary interest in the subject of autonomous robots. Scientists and philosophers alike have been researching and discussing how to make machines moral.

Organizations such as the International Society for Ethics and Information Technology, the IEEE Robotics and Automation Society’s Technical Committee on Robot Ethics, and the Center for Ethics in Science & Technology lead important discussions about the use of technologies, including robots, and their effects on humans and society.

Possible Implications for Developing Robotics

It is far too soon to jump to conclusions about what this incident will mean for future robots and the laws that govern building them.

It is possible, however, that the circumstances of Johnson’s death could inspire new regulations on how robots are used.

No matter how Johnson’s death affects the discussion regarding ethics and robotics, it is a discussion worth having. It’s also crucial that this discussion take place before artificial intelligence is integrated into military or police robotics.


Update 7/22/2016: Some language has been changed and added to reflect the definitive difference between a robot and an unmanned vehicle. 

30 Comments

  • Buddy343, July 12, 2016

    Using robots to kill a madman means the bad guys lose. The police need every advantage possible to save lives. My best wishes to the Dallas PD and all law enforcement.

    • AEKron, July 12, 2016

      Using robots to kill anybody means we all lose. Peace.

      • Buddy343, July 14, 2016

        Would it have been different if they had used a gun to stop him from killing more people? Use any means necessary.