The decision of Dallas police officers to use a robot with intent to kill set a historic precedent.

Stopping an Active Shooter in Dallas

On July 7th, 2016, Micah Xavier Johnson opened fire on police officers on duty at a peaceful protest in downtown Dallas. Five police officers were killed in the ensuing gunfight and twelve more people were injured.

The details of the incident are still being reported, but we do know that Johnson was ultimately killed after he was cornered and negotiations failed.

The method by which the Dallas Police Department killed Johnson has since come under immense scrutiny. According to a statement released by the department, officers used a REMOTEC Andros Mark V-A1 bomb disposal robot to deliver C4 explosives to the suspect and detonate them, killing him.

This is the first intentional use of lethal force by police using such a device on US soil.

Watch Dallas Police Chief David Brown explain the situation:


It's important to note that a "robot" is generally defined as a machine that operates autonomously or semi-autonomously. The REMOTEC Andros Mark V-A1 does not fall into this category: it is remotely operated by a human at all times, so it is not technically a robot at all. However, because Northrop Grumman (the machine's manufacturer), the US military, and the media all refer to these machines as robots, the looser usage may well stick in the public lexicon.
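
To make the distinction concrete, here is a minimal Python sketch, purely illustrative and not based on any real device's control software, contrasting a teleoperated machine, which only relays a human operator's commands, with a semi-autonomous robot, which chooses its own actions from sensor input:

    # Illustrative sketch only -- not the control software of any real device.

    def teleoperated_step(operator_command):
        """A remotely operated machine (like the Andros Mark V-A1)
        executes exactly what the human operator commands."""
        return operator_command  # no onboard decision-making

    def semi_autonomous_step(sensors):
        """A robot in the stricter sense selects its own action from
        sensor input; the human sets a goal, not each motion."""
        if sensors.get("obstacle_ahead"):
            return "turn_left"  # the machine decides for itself
        if sensors.get("at_goal"):
            return "stop"
        return "advance"

    # The Dallas device worked like the first function: every action
    # originated with a human operator.
    print(teleoperated_step("advance"))                    # advance
    print(semi_autonomous_step({"obstacle_ahead": True}))  # turn_left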


Robotics Used by the Police and Military

Make no mistake: police have often used unmanned vehicles in the past. Their primary functions in police work are to investigate suspected bombs, disarm them, and occasionally deliver tear gas. In one instance, a REMOTEC Andros F6A was used to deliver pizza and a phone to a man feared to be suicidal on a Silicon Valley overpass.


An Andros F6A delivers pizza and a phone to a possibly suicidal man. Image courtesy of IEEE.


They’ve also been used regularly in military applications, especially to find and detonate improvised explosive devices (IEDs). Semi-autonomous robots are increasingly used as medics, pack mules, and surveillance drones.

Robots have also been used to deliver lethal force, most notably the drones deployed for airstrikes.

When it comes to ground-based robotics, however, there have been few reports of robots delivering lethal force. There is at least one case, covered by modern warfare expert Peter W. Singer in his book Wired for War, in which US troops in Iraq rigged explosives to a robot to neutralize an enemy combatant.

But this is the first time that such a tactic has been used on an American citizen.


Concerns from Robotics Experts and Roboethicists

Beyond the immediate and often emotionally charged responses to the incident that have appeared in the media, robotics experts and ethicists have also publicly come forward to assess the situation. Some bridge the two worlds, operating in a field that’s come to be known as “roboethics”.

For example, Patrick Lin is the Director of the Ethics + Emerging Sciences Group, based at California Polytechnic State University. He recently wrote an article on the IEEE website posing the question, “Should the Police Have Robot Suicide-Bombers?”

(Read more of Patrick Lin’s writings on technology and ethics here.)

Robotics and ethics continue to be interwoven disciplines as robots become more prevalent and more advanced.

This is hardly the first time robotics has faced questions about its ethical implications. For example, experts have been contemplating the ethics of allowing robots to replace human workers for years.


A prototype unmanned medic. The teddy bear face was intended to soothe wounded soldiers. Image courtesy of the US Army MRMC.


There’s also extraordinary interest in autonomous robots. Scientists and philosophers alike have been researching and debating how to make machines moral.
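
One strand of that research treats machine morality as a filtering problem: software vetoes any candidate action whose predicted effects would violate encoded constraints. Below is a minimal, hypothetical sketch of the idea; the constraint names and data structures are illustrative assumptions, not drawn from any deployed system:

    # Hypothetical "ethical governor" sketch: candidate actions are
    # filtered against encoded constraints. All names are illustrative.

    FORBIDDEN = {"harm_human", "disobey_human", "destroy_self"}

    def permitted_actions(candidates, predicted_effects):
        """Keep only the actions whose predicted effects violate none
        of the encoded constraints."""
        return [action for action in candidates
                if FORBIDDEN.isdisjoint(predicted_effects.get(action, set()))]

    # Example: of three candidate actions, two survive the filter.
    effects = {
        "fire_weapon": {"harm_human"},
        "hold_position": set(),
        "retreat": set(),
    }
    print(permitted_actions(["fire_weapon", "hold_position", "retreat"], effects))
    # -> ['hold_position', 'retreat']

The hard part, of course, is not the filter but producing trustworthy predictions of an action's effects; much of the research effort lies there.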

Organizations such as the International Society for Ethics and Information Technology, the IEEE Robotics and Automation Society’s Technical Committee on Robot Ethics, and the Center for Ethics in Science & Technology lead important discussions about the use of technologies, including robots, and their effects on humans and society.

Possible Implications for Developing Robotics

It is far too soon to jump to conclusions about what this incident will mean for future robots and the laws that govern building them.

It is possible, however, that the circumstances of Johnson’s death could inspire new regulations on how robots are used.

No matter how Johnson’s death affects the discussion of ethics and robotics, it is a discussion worth having. It’s also critical that this discussion take place before artificial intelligence is integrated into robots used in military or police applications.


Update 7/22/2016: Some language has been changed and added to reflect the distinction between a robot and an unmanned vehicle.




  • Buddy343 2016-07-12

    Using robots to kill a madman means the bad guys lose. The police need every advantage possible to save lives. My best wishes to the Dallas PD and all law enforcement.

    • AEKron 2016-07-12

      Using robots to kill anybody means we all lose.

      • Buddy343 2016-07-14

        Would it have been different if they had used a gun to stop him from killing more people? Use any means necessary.

      • LEhenson 2016-07-22

        Grow up and read my post above

      • KLRJUNE 2016-07-23

        It was not a robot. It was an RC vehicle. Not much different than tying a long string to a trigger.

    • drhowarddrfine 2016-07-22

      Agree. Bad guys are an affront to society. Bad guys put themselves into this situation themselves. How these things end is their own responsibility brought onto themselves by themselves. Such things don’t happen to decent, moral law-abiding citizens.

      • kjmclark 2016-07-22

        That’s right, and when we have our coup in a decade or two like Turkey just did, I’m sure the people who win won’t have battlefield robots shooting their political opponents.

  • Steve Spence 2016-07-22

    It’s a tool, nothing else.

    • That’s what I was going to say. The robot, just like a gun, is performing a task as directed by the human controlling it. This is no different from a police officer intentionally crashing his police car into a fleeing suspect in order to protect the children in the neighborhood the bad guy is about to drive into. In each instance, the human weighs the risk vs. reward of each tool at his disposal and then chooses the right tool for the job. I am all for giving the police any tools that help them stop the bad guy and protect themselves and the good guys without infringing on the rights of citizens. I suspect the next step in the ethics discussion will come when the robot can act autonomously. For example, when “robot v2” comes out with (for example) a “loaded gun detector” and you can send it autonomously into a mall to hunt down and kill the bad guy holding hostages, the question would appear to become more complex. But in fact, to me it is again just a tool being deployed by a human. That human must still weigh the risks vs. rewards of using the tool in that specific case, and it is still the judgment of the human deciding whether lethal force was warranted that we must trust or question. Some BODY gave the robot the command to proceed on whatever task is to be done, whether the robot is largely autonomous or the police officer pulls the “remote trigger” manually.

  • Keith Bradley 2016-07-22

    Until they manufacture a robot with the capabilities of rational thought, able to make rational decisions, this article is meaningless. A remote-controlled device is not a robot.

    • kjmclark 2016-07-22

      Since when did Asimov’s laws only apply to robots capable of rational thought?  So, if it’s only as smart as a hyena, it’s OK if it kills a room full of people?

  • Paul Kruger 1 2016-07-22

    Let’s be clear about how we define “robotics” here.  This was a remote-controlled vehicle that is conceptually not much different than a remote-control toy car…just beefier and able to carry a payload. You could load the same one pound of C4 onto your kid’s RC car and do basically the same thing.

    Their “Robot” did not make any decisions to kill; it was totally controlled by people and amounted to nothing more than the “long arm” of the law being extended.  Throwing a grenade would be no different in terms of this standoff.

    This would be a discussion if AI were involved that put decision-making in the hands of a computer or the device itself, but that is not the case…it is not a “robot”, it is an advanced RC toy.

  • ThatAintWorking 2016-07-22

    This is really a stupid question. The “ethics” of the robot in question don’t exist; it didn’t make any decisions in this case because it was little more than a remote-control car. Just like with guns, it’s about the person, not the tool they are using.

    • KLRJUNE 2016-07-22

      Exactly true.

    • kjmclark 2016-07-22

      So try this one instead - the ethics of using a remotely controlled device to kill someone when the police normally would have just waited them out.  How intelligent does that device have to be before you think the ethics of the programmers and the people giving direction become involved?  Even something as dumb as a yellow jacket will sting you if it thinks you’re a problem.  So you’re OK with the police sending in yellow jacket level intelligence drones to kill people if the police are the ones telling them where to go?  If the dumb drones then kill 11 people instead of one?

  • Robert VA3ROM 2016-07-22

    Isaac Asimov codified three laws of robotics:
    1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
    2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
    3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

    • kjmclark 2016-07-22

      Don’t those already seem quaint?  Some day, after many people have died, they’ll actually make those suggestions some kind of requirement.  But it may be too late by then.

  • KLRJUNE 2016-07-22

    I don’t have a problem with what they did considering the situation.

  • Robert VA3ROM 2016-07-22

    This wasn’t a “robot”; it was a human-operated, radio-controlled device, and it was a human who made the “ethical” decision when and where to pull the “kill” trigger. Humans make the decision to kill humans, including controlling machines to do it for us. In this case, what’s the difference if a SWAT marksman took him out with a head shot, or a human-controlled or programmed device did the same? There was no “robot” involved here, and people who write articles about “robots” should first read Asimov and the “Three Laws of Robotics”:
    1.  A robot may not injure a human being or, through inaction, allow a human being to come to harm.
    2.  A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
    3.  A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

    • kjmclark 2016-07-22

      So industrial “robots” aren’t robots either?  Funny how high the bar rises for intelligence when people like the outcome.

  • fortney 2016-07-22

    Good end for a terrorist. The ethics is to kill him before he can shoot anyone else.

  • ilane 2016-07-22

    Meaningless, imo. The fact that the mass media, some games organizers, and even some techies call the vehicle used in Dallas a robot does not make it one. It is merely a grown-up remote-control toy. As long as all its actions are fully controlled by a human, it is not a robot, and it has nothing to say about the future ethics of robots. Discussion of that subject is sorely needed, and the Dallas incident is as good a trigger as any, but it would be nice to educate all that a remote-controlled gizmo is not a robot.

    • kjmclark 2016-07-22

      Yeah, and those industrial robots putting people out of work aren’t really anything more than toys with a tiny bit of programming!  After all, they don’t do anything without someone writing a program, which is just modern verbal instructions written in code.

      • windadct 2016-07-22

        Or cars, they put a lot of people out of work.  We used to need a team of people to care for and manage horses for the people that could afford them… progress is a disaster.

  • chrisbaron 2016-07-22

    This is idiotic.  What do you think a guided missile is?  This police “robot” had far less autonomy than an ICBM or radar guided missile yet somehow this is a huge unprecedented ethical milestone.  Please give me a break from breathless overheated unthinking commentary!

    • kjmclark 2016-07-22

      Wait, so we have guided missiles pointed at US citizens suspected of crimes?  Thanks, I totally didn’t know that.

      • mishaonmac 2016-07-24

        A US citizen already seems to be a different case from a human. The article above contains a similar statement. My congratulations, dear citizens of the US!

  • kjmclark 2016-07-22

    If we didn’t know full well that the military will soon have semi-autonomous battlefield robots killing people *without* direct control, the distinctions people are making would be meaningful.  As it is, they’re just making the on-ramp to the slippery slope. 
    - “It wasn’t a real robot, and the guy deserved to die” - in this instance, apparently, we couldn’t wait him out like we usually do.
    -> “It was a *semi* autonomous drone, and the commanders directed it to fire, and those militants deserved to die”.
    -> “That robot was *directed* to fire those tear gas grenades into the crowd of protesters - it didn’t decide to do it by itself.  It’s nobody’s fault if one of the protestors was hit in the head and died.”
    -> “It’s too bad that robotic tank killed those libertarians, but they were armed, and it was their leader who started the coup. We wouldn’t want law-abiding people dying to deal with treasonous scum like that.  And those socialists better get in line, or they’re next.”
    -> “Madame Secretary, can you please comment on why our drone aircraft decided to level those 67 villages in North Korea?  Are you aware that the Chinese are threatening to send their own autonomous aircraft to retaliate in South Korea?”

  • pooroldplowboy 2016-07-22

    Why do bleeding hearts concern themselves with the death of a crazed man who had just killed 5 officers in cold blood and wounded 12 more?  Some people don’t have any sense of right and wrong.  He was given a chance to give himself up, and it was totally his decision that caused it to end the way it did.  The shooter was a terrorist by definition; he killed innocent American citizens with a plan to terrorize the others.  We blow up other terrorists by the hundreds (if not thousands) with drones and every conceivable weapon, and now the “robo-ethicists” cry because a robot was used to defuse this situation before anyone else had to die.  What if they had thrown C4 into the building by hand? Would that be OK?  As good ole Charlie Brown would say, “Good grief.”

    • jssamp 2016-08-04

      No. I don’t think it would have been OK. My issue isn’t that the police killed him; he clearly needed killing. I am also not concerned about their use of a remote-control toy to deal with him; they avoided putting any more officers at risk. But C4?! Are sidearms, shotguns, and assault rifles no longer up to the task? Not deadly enough? I guess when they get REALLY pissed off they pull out missile launchers and tactical bombers. Isn’t the use of high explosives against a human target the very definition of excessive force? I wasn’t there and don’t know the layout or the location of innocents, but an RC suicide bomber has a wide area of effect; not exactly the contained, targeted force of a bullet.