This paper provides an initial sketch of responses to AI and automated decision-making in wider society, while contextualising these responses in relation to autonomy in weapons systems.
For more than nine years, autonomous weapons systems have been the subject of international discussion in various fora, including in the UN Human Rights Council, the UN General Assembly First Committee on Disarmament and International Security, and the Convention on Certain Conventional Weapons (CCW) Group of Governmental Experts on Emerging Technologies in the Area of Lethal Autonomous Weapons Systems (GGE on LAWS). In these discussions, states, United Nations agencies, international organisations, and non-governmental organisations have highlighted the various and serious ethical, moral, humanitarian and legal implications of artificial intelligence (AI) and autonomous weapons systems. Despite a majority of states supporting negotiation of a legal instrument, the Sixth Review Conference of the CCW in December 2021 failed to agree a mandate to work towards any form of regulation.
In compiling the report, forty states were identified as having publicly released specific policy documents or other strategies on the domestic development and use of artificial intelligence. The report assesses relevant national AI strategies and positions, EU-level reports and regulations, international guidelines, and other documents, in order to draw out core themes and concerns regarding the adoption and use of AI and automated decision-making technologies in the civil space.
- The recognition by many states and international bodies of the serious risks and challenges presented by the use of AI and automated decision-making technologies in the civil space should be taken as validation of parallel and related concerns in the military space.
- Given the nature and scale of harms at stake in automated processing in the context of military targeting, and the difficulties in applying civilian oversight mechanisms in the military space, the challenges associated with autonomous weapons systems are particularly acute.
- The development of human rights-focussed responses to AI and automated decision-making in the civil space should impel states to attend to the rights of those affected as a fundamental starting point for the generation of necessary rules in the military space.
- The importance of accountability and responsibility for the use of autonomous weapons systems has implications for how certain rules need to be drawn in order to avoid an erosion of accountability, and to ensure the protection and fulfilment of international humanitarian law rules and fundamental human rights.