
Elon Musk and the Future of Life Institute Issue Warnings About Lethal AI Technology

August 25, 2017 by Chantelle Dubois

Elon Musk's name draws a lot of attention, especially when he offers opinions on AI, a field experiencing astronomical growth. But he's not the only one concerned about the future of AI, particularly as it relates to weaponry.

Recently, a series of news outlets published headlines about SpaceX CEO Elon Musk urging the United Nations to take action against “killer robots”. While this captures the spirit of the message, what actually occurred is slightly different, and Musk was certainly not acting alone.

The Future of Life Institute (FLI) is a coalition of technology experts and other famous figures working to ensure the safety of humanity's future against existential threats. The organization issued a statement in support of the United Nations Convention on Certain Conventional Weapons (CCW), which recently established a Group of Governmental Experts (GGE) to review and advise on the dangers and use of lethal autonomous weapons systems. Creating such focused groups within United Nations committees is the typical method for tackling specific issues with the potential to impact the world.

The statement in question, released in the form of a letter on the Institute's website, was signed by many high-profile figures, including Musk, physicist Stephen Hawking, and actor Morgan Freeman.

The letter references a meeting that was supposed to occur on August 21, marking the first time the GGE would have met to discuss this topic, and also commends the appointment of disarmament and regional security expert Amandeep Singh Gill as the chair of this group. However, it seems as though this meeting will now convene in November.

Elon Musk's efforts include revolutionizing rocket technology, electric cars, and solar energy. Photo by Dan Taylor, used courtesy of Heisenberg Media.

While the headline that Musk is warning the world about killer robots is appealing, the underlying concern about misuse of technology is real, even if it is not nearly as sensational. Humans using technology against other humans is a consistent and regular threat, with the nuclear arms race serving as both a historical example and a continuing danger.

In early 2015, the FLI also published an open letter encouraging the international scientific and technology community to ensure that artificial intelligence is used for the benefit of humanity. Since then, over 8,000 people have signed the letter in support.

Today's Nuclear Disarmament, Tomorrow's Killer Robots

There are arguments that autonomous weapons can save lives, limit casualties, and hasten the end of a conflict that might otherwise drag on for years. Part of these arguments is the assertion that the automated nature of such weapons would remove human error and judgement, as well as keep humans (on the aggressor's side, at least) out of harm's way.

However, within the ethical, political, and military realms, one of the theories used to guide warfare is “jus bellum iustum”, or just war theory. In a broad sense, this is a philosophical framework that contemplates the nature of warfare and how we justify engaging in it. Conversations about autonomous weapons invoke the part of this theory that covers how warfare is conducted once it has begun, known as “jus in bello” (literally, "justice in war"). The term is used by many organizations, including humanitarian groups like the International Committee of the Red Cross.

This philosophy generally asserts that war should be fought proportionally and waged only out of military necessity. It also holds that combatants should not target non-combatants (such as civilians), should treat prisoners of war fairly, and should not commit acts that are evil in themselves (known as “malum in se”).

It is this final point, malum in se, that has been raised in discussions of nuclear, biological, and chemical weapons. It is also what many of the experts involved with the FLI are concerned about when it comes to lethal autonomous weapons.

The Geneva Conventions are an important example of treaties supporting jus in bello. Image courtesy of the Swiss Federal Archives.

While technology and technological advancements have always been used for advantage in war (think of the Greek phalanx or the catapult), the concern is that current technology can already make fighting a war massively destructive. Making such weaponry autonomous could exacerbate these issues by surrendering decision-making to automated systems, and could specifically violate the proportionality requirement of jus in bello.

According to the FLI's statement, "Lethal autonomous weapons threaten to become the third revolution in warfare. Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend."

The FLI also mentions a few of its major concerns about lethal autonomous weapons in particular: that such weapons could be used to terrorize, could be hacked and turned to undesirable ends, and could operate on timescales faster than humans can respond.

The Role of Drone Technology

Advanced technology is already used regularly in military efforts by many countries. While the word “drone” may bring to mind quadcopters and other consumer-type devices, the term also refers to unmanned aerial vehicles (UAVs) that are operated remotely to conduct military missions.

Currently, only the USA has long-range drone technology capable of such operations; several other countries have more limited drone technology with shorter ranges. In a publication titled “The Consequences of Drone Proliferation: Separating Fact from Fiction”, the authors (academics from the University of Pennsylvania, Cornell, and Texas A&M) argue that it will be years before other nations have similar capabilities.

The MQ-9 Reaper drone. U.S. Air Force photo by Staff Sgt. Brian Ferguson, via the USAF Photographic Archives.

However, this drone technology is still operated by a “pilot”, even if from a distance. The drones aren't making fully automated decisions about who or what to target. They are also still limited in range and in how large a payload (i.e., how much weaponry) they can carry.

On the other hand, drone and UAV technology is also used frequently in humanitarian efforts: tracking migrants at sea, delivering medicine, observing and studying storms, and imaging coastlines to help scientists track phenomena like erosion.

Because of their remotely controlled nature, UAVs in particular raise many questions about how AI could be used in warfare.

Other Organizations Supporting the Cause

Besides the Future of Life Institute, a few other organizations are actively campaigning to promote the peaceful and beneficial use of technology. The Campaign to Stop Killer Robots is an international coalition that networks various NGOs, companies, and individual experts from around the world to advocate against lethal autonomous technology.

Companies like Canada's Clearpath Robotics have agreed to support the Campaign's mission of ensuring the peaceful use of robotics. Similarly to the FLI, the Campaign to Stop Killer Robots issued a statement expressing its disappointment at the postponement of the GGE's first meeting, originally scheduled for August 21.

The International Committee for Robot Arms Control is a non-profit that also advocates for preventing lethal uses of robotic technology. Its members include experts in robotics, engineering, international relations, international security, and arms control.

Technology has the potential to make positive impacts on humanity, and it appears there are many people around the world working hard to ensure that remains the case. How the efforts of these organizations will affect the decisions of regulatory bodies remains to be seen. In the meantime, the enormous growth of the AI industry continues, as thousands of companies in hundreds of fields pursue the novelty and efficiency of neural networks.
