In Opposition to Killer Robots

By Joanne Dufour

Work to disarm our planet continues. Some good news of late: support for the Treaty on the Prohibition of Nuclear Weapons keeps growing. South Africa has become the 22nd country to ratify it, with the distinction of being the only country that gave up its own nuclear weapons program and is now a strong supporter of banning such weapons. In addition, Washington, DC has joined other cities in supporting the treaty, its City Council voting unanimously to do so.

Recently, attention to “killer robots” has been heightened by the publication of a new resource guide from the Women’s International League for Peace and Freedom and its disarmament programme, Reaching Critical Will: "Killer robots are fully autonomous weapon systems. These are weapons that operate without meaningful human control. The weapon itself can make decisions about when, where, and how it is used; what or whom it is used against; and the effects of its use.”

Campaigners calling on the UN to "Stop Killer Robots" pose in front of the United Nations Headquarters in New York City, wearing white jumpsuits emblazoned with letters that spell out the campaign's name.

The previous and following selections are taken from this new Resource Guide with the kind permission of its author, Ray Acheson.

There’s a distinction between lethal drones and fully autonomous weapons, or “killer robots.” “Drones, or ‘uncrewed aerial vehicles’ (UAVs), are remotely piloted by humans who select targets and choose when to fire upon them. A fully autonomous weapon would be programmed so that once it is deployed, it operates on its own. It would be able to select and fire upon targets independently. In essence, this means that machines would have the power to make life-and-death decisions over human beings.”

“While the countries that want to develop killer robots say they could have positive benefits [such as lack of emotional responses in a wartime setting; no vengeful rampages or rape], many roboticists, scientists, tech workers, philosophers, ethicists, legal scholars, human-rights defenders, peace and disarmament activists, and governments of countries with less-advanced militaries have called for an international ban on the development of such weapons. They are concerned that these weapons will result in more civilian deaths, be unable to comply with international humanitarian law or human rights law, make war more likely, encourage an arms race, destabilize international relations, and have moral consequences such as undermining human dignity.”

Operating on algorithms programmed to kill, these weapons “would create a perfect killing machine, stripped of the empathy, conscience, or emotion that might hold a human soldier back.” They would carry out their orders unquestioningly, “…and if this includes massacring everyone in a village, they will do so without hesitation.”

Graphic from the Campaign to Stop Killer Robots: a red-and-black illustration of an adult and a child (holding a stuffed bear) looking off the balcony of a tall building as an armed drone hovers before them, as if poised to strike.

International humanitarian law (IHL) comprises a set of rules that seek to protect those who do not take part in hostilities and to restrict the means and methods of warfare. It calls upon militaries to respect the rights of civilian non-combatants, the wounded, those surrendering, prisoners of war, and protected persons such as medical, religious, and relief personnel, and to evaluate the proportionality of any attack. Every country has signed on to upholding the core treaties of IHL, the Geneva Conventions. While government militaries are the monitoring agents, guaranteeing enforcement is essentially up to us.

The Guide continues with the warning, “Fully autonomous weapons risk lowering the threshold for war. They present a perception of ‘low risk’ and ‘low cost’ to the military deploying the weapon. This perception increases the scope for the deployment of weapons into situations and to carry out tasks that might otherwise not be considered possible. Replacing troops with machines could make the decision to go to war easier. The implications of having an amoral algorithm determine when to use force means that we will likely see more conflict and killing, not less.” They could be (and have been) “unleashed upon populations that might not be able to detect their imminent attack and might have no equivalent means with which to fight back. Thus the features that might make autonomous weapons attractive to technologically advanced countries looking to preserve the lives of their soldiers will inevitably push the burden of risk and harm onto the rest of the world.”

This changes the nature of war. “The increasing automation of weapons systems helps to take wars and conflict outside of the view of the deploying countries’ citizenry. If its own soldiers are not coming back in body bags, will the public pay attention to what its government does abroad? Does it care about the soldiers or the civilians being killed elsewhere? From what we have seen with the use of drones, it seems that it is easier for governments to sell narratives about terrorism and victory if their populations cannot see or feel the consequences themselves.”

Is this what we really want?