The United States’ Defense Advanced Research Projects Agency (DARPA) has a number of programmes underway to further develop artificial intelligence and algorithms for combat and war-fighting purposes, raising concerns about the removal of human decision-making and judgement from warfare.
‘Assured Neuro Symbolic Learning and Reasoning’ (ANSR) programme
The ‘Assured Neuro Symbolic Learning and Reasoning’ (ANSR) programme will develop ‘hybrid AI techniques to enable new mission capabilities’. The guiding challenge for the ANSR programme is the execution of an unaided intelligence, surveillance and reconnaissance (ISR) mission ‘in a highly dynamic dense urban environment’. The system will ‘deliver insights that help characterize friendly, adversary, and neutral entities’, and will also ‘carry an effects payload to reduce sensor-to-effects delivery time.’
The programme announcement notes that intelligence, surveillance and reconnaissance missions are currently ‘conducted by warfighters either through forward presence, or through teleoperated ISR assets such as drones. The ISR asset in these cases simply provides a video feed to the warfighter, who then has to process and analyze video feeds. The warfighter needs to distinguish adversaries from noncombatants, understand adversarial activities, analyze the scene to identify additional scan paths and focus areas, and maneuver the ISR asset to maximize stealth and safety.’ It adds that ‘These are challenging activities that currently impose high cognitive burden on the warfighter and require them to continuously be in-the-loop.’
As the International Committee of the Red Cross (ICRC) notes, such systems raise numerous concerns, and ‘could lead to increased risks for civilian populations’. The ICRC points out that ‘the use of AI and machine learning for targeting decisions in armed conflict, where there are serious consequences for life, will require specific considerations to ensure humans remain in a position to make the context-based judgements required for compliance with the legal rules on the conduct of hostilities’ – whatever the cognitive burden on the warfighter.
‘In the Moment’ programme
Meanwhile, DARPA’s ‘In the Moment’ programme, announced in March 2022, ‘will research and develop technology to support building, evaluating, and fielding algorithmic decision-makers that can assume human-off-the-loop decision-making responsibilities in difficult domains, such as medical triage in combat’. According to the In the Moment programme manager Matt Turek, ‘The DoD needs rigorous, quantifiable, and scalable approaches to evaluating and building algorithmic systems for difficult decision-making where objective ground truth is unavailable. Difficult decisions are those where trusted decision makers disagree, no right answer exists, and uncertainty, time-pressure, and conflicting values create significant decision-making challenges.’
AI ethicists have raised numerous concerns regarding the use of AI to make medical decisions in combat settings, and the ICRC has argued that ‘applications for humanitarian action also bring potential risks, as well as legal and ethical questions’.