Autonomous Weapon Systems: An Ethical Basis for Human Control?

This article by Neil Davison was originally published by the International Committee of the Red Cross (ICRC) in April 2018 as part of its series on autonomous weapon systems, and sharing was encouraged. It coincides with ongoing work at the United Nations, where the second meeting this year under the UN Convention on Certain Conventional Weapons (CCW) focuses on autonomous weapons, and with the work of other organizations studying developments in the creation and use of autonomous weapon systems. It is with appreciation that we share it with you.

[Photo: an armed aerial drone in the sky]

The Requirement for Human Control

The risks of functionally delegating complex tasks – and associated decisions – to sensors and data-driven algorithms are among the central issues of our time, with serious implications across sectors and societies. Nowhere are these risks more acute than in relation to decisions to kill, injure and destroy. Concerns about loss of human control over these decisions underlie current international work by governments and civil society to address increasingly autonomous weapon systems, as well as broader reflections on ‘war algorithms’.

International discussions have tended to focus on the compatibility of autonomous weapon systems (broadly defined by the ICRC as weapons with autonomy in their critical functions of selecting and attacking targets, and including some existing weapons) with the rules governing the conduct of hostilities under international humanitarian law and the use of force under international human rights law. These rules require that a minimum level of human control be retained over weapon systems and the use of force, variously characterized as ‘meaningful human control’, ‘effective human control’ or ‘appropriate human involvement’.

Ethical Debates

Ethical questions have often seemed something of an afterthought in these discussions. But, for many, they are at the heart of what increasing autonomy could mean for the human conduct of warfare and the use of force more broadly. It is precisely fear about the loss of human control over decisions to kill, injure and destroy – a fear clearly felt outside the confines of UN disarmament meetings – that goes beyond issues of compliance with our laws to encompass fundamental questions of acceptability to our values.

Can we allow human decision-making on the use of force to be effectively substituted with computer-controlled processes, and life-and-death decisions to be ceded to machines?

With this in mind, the ICRC held a round-table meeting with independent experts in August 2017 to explore the issues in more detail… Based on discussions at the meeting, and from the ICRC’s perspective, the ethical considerations most relevant to current policy responses are those that transcend context – whether armed conflict or peacetime – and transcend technology – whether ‘dumb’ or ‘sophisticated’ (AI-based) autonomous weapon systems.

Human Agency

Foremost among these is the importance of retaining human agency – and intent – in decisions to kill, injure and destroy. It is not enough to say ‘humans have developed, deployed and activated the weapon system’. There must also be a sufficiently close connection between the human intent of the person activating an autonomous weapon system and the consequences of the specific attack that results.

One way to reinforce this connection is to demand predictability and reliability, both in the weapon system’s functioning and in its interaction with the environment in which it is used, while taking into account unpredictability in the environment itself. However, these demands are challenged by the very nature of autonomy. All autonomous weapon systems – which, by definition, can self-initiate attacks – create varying degrees of uncertainty as to exactly when, where and/or why a resulting attack will take place. Increasingly complex algorithms, especially those incorporating machine learning, add a deeper layer of unpredictability, heightening these underlying uncertainties.

So, what is ‘meaningful’ or ‘effective’ human control from an ethical perspective? One way to characterize it would be the type and degree of control that preserves human agency and intent in decisions to use force. This does not necessarily exclude autonomy in weapon systems, but it does require limits in order to maintain the connection between the human intent of the user and the eventual consequences.

Limits on Autonomy

To a certain extent, ethical considerations may demand constraints on autonomy similar to those needed for compliance with international law, in particular with respect to: human supervision and the ability to intervene or deactivate; technical requirements for predictability and reliability; and operational constraints on the tasks, the types of targets, the operating environment, the duration of operation and the scope of movement over an area.

These operational constraints – effectively the context of use – are critical from an ethical point of view. Core concerns about loss of human agency and intent, and related concerns about loss of moral responsibility and loss of human dignity, … are most acute with the notion of autonomous weapon systems used to target humans directly.

It is here that ethical considerations could have the most far-reaching implications, perhaps precluding the development and use of anti-personnel autonomous weapon systems, and even limiting the applications of anti-materiel systems, depending on the associated risks for human life. Indeed, one could argue that this is where the ethical boundary currently lies, and that it is a key reason (together with legal compliance) why the use of autonomous weapon systems to date has generally been confined to specific tasks – anti-materiel targeting of incoming projectiles, vehicles, aircraft, ships or other objects – in highly constrained scenarios and operating environments.

Human-Centred Approach

What is clear is that – from both ethical and legal perspectives – we must place the role of the human at the centre of international policy discussions. This contrasts with most other restrictions or prohibitions on weapons, where the focus has been on specific categories of weapons and their observed or foreseeable effects. The major reason for this – aside from the uncertain trajectory of military applications of robotics and AI in weapon systems – is that autonomy in targeting is a feature that could, in theory, be applied to any weapon system.

Ultimately, it is human obligations and responsibilities in the use of force – which cannot, by definition, be transferred to machines, algorithms or weapon systems – that will determine where internationally agreed limits on autonomy in weapon systems must be placed. Ethical considerations will have an important role to play in these policy responses, which – given the rapid pace of military technology development – are becoming increasingly urgent.