Academics and Research

Quick questions: the Korbel School’s Heather Roff Perkins on lethal autonomous weapons systems

Heather Roff Perkins, a visiting professor at the Josef Korbel School of International Studies, will travel to Geneva, Switzerland, in April to attend the United Nations meeting of member states under the Convention on Conventional Weapons. There, she will speak as an invited expert on lethal autonomous weapons systems (LAWs).


Q: What are lethal autonomous weapons systems?

A: Lethal autonomous weapons systems are weapons systems that can target and fire without the control of a human operator.


Q: Are they in use now by militaries? 

A: Most state militaries claim that they are not currently in use. However, much depends on how one defines “control” by a human operator and when target selection must occur. For example, the U.S. uses the Aegis system on several of its naval ships, and that system can track, cue and fire automatically without human intervention.

Israel uses the Iron Dome, which also has this capacity, and the United Kingdom uses the Brimstone missile, which has the potential to select individual targets, from a preselected class, on its own. Lockheed Martin in the U.S. also produces the Long Range Anti-Ship Missile (LRASM), which has similar functions. Others, like South Korea, have stationary border sentry systems that can detect a person through heat sensing and fire automatically.


Q: What is the argument for using them in combat situations?

A: There are many arguments. Depending upon the domain (air, land, sea or cyber), autonomous weapons may permit forces to navigate and act in denied environments, that is, environments where the U.S. is unable to communicate or where there is little freedom of overt action. They may also act as force multipliers and provide intelligence, surveillance and reconnaissance capacities and a forward presence where a large military footprint is unacceptable. In ground situations, some argue that the machines will be better at discriminating between combatants and civilians, and thus better uphold the laws of war.


Q: What are the objections to using them?

A: Again, there are many. Chief among them is the claim that the machines are unable to uphold the laws of war, particularly the principles of discrimination and proportionality, and that their use would violate the Martens Clause, which prohibits weapons or methods of war that offend the public conscience.

Some make the claim that using weapons that involve little cost to the possessor will lower the barriers to conflict, and thus war will become more likely. Others also warn that the development and deployment of autonomous weapons will start an arms race between major powers, and that the older and less accurate technology will proliferate to the middle and small states.


Q: These systems have been called “killer robots” by some critics. Is that a fair description?

A: These systems were termed “killer robots” in a 1983 Newsweek article called “The Birth of Killer Robots.” In 2013, 30 years after the first mention of “killer robots,” Human Rights Watch launched a Campaign to Stop Killer Robots, which is a coalition of more than 50 NGOs fighting to preemptively ban the development and use of autonomous weapons systems.

The critics of autonomous weapons systems are referring to their lethal nature and the fact that they are robots: they have sensors, actuators and processors and are, for all intents and purposes, robots. So if the question is one of definition, then yes, it is a fair one.

While proponents of autonomous weapons systems tend to dismiss the idea that they are “killer robots,” they do so by treating the critics’ worries as mere science fiction. The worries are not science fiction: autonomous weapons systems have been on the U.S. Department of Defense’s docket since the early 1980s, and research and development on unmanned systems has been going on since the 1950s.

One Comment

  1. Great topic and article. I am not a pacifist. I am not a war hawk either. I am 100% against weaponized drones and all types of lethal autonomous weapons systems. My reason is this: I think we need to know exactly how horrible war and combat really are, and what it is to be charged with killing another human being in that capacity, because only by knowing the true reality of that will we keep war and combat as the absolute last resort, used only when there is no alternative left but to defend ourselves by those means. These systems all but remove the humanity, the human element. If we make killing clean, distant and sterile, like a computer game or simulation, it doesn’t register. Operators of these systems will never have to look into the eyes of their enemy or witness the bloodshed. The article mentions science fiction, so I will go there. There is an episode of Star Trek: The Original Series in which people are at war through a computer simulation; those “killed” via computer then walk into a chamber to be “vaporized,” and in the show this has been going on for generations. That episode possibly makes the point I am trying to make better than I can. My other argument is all the things that could go wrong: a system malfunctions, falls into the wrong hands or is hacked, and targets innocents. Just because the technology exists to build these weapons doesn’t mean it should be used.
