Our news briefings are emailed to our newsletter subscribers every week. Sign up via the contact form below if you’d like to receive them.
Autonomy in weapons systems, and AI and autonomy in security and defence
- UK: Majority of British public support ‘laws and regulations’ to guide the use of AI, according to a new nationwide survey: A new survey of 4,000 British adults by the Alan Turing Institute and the Ada Lovelace Institute has found that ‘People are most concerned about advanced robotics such as driverless cars (72%) and autonomous weapons (71%)’. The full report on the survey can be read here.
- International: Autonomous weapons: ICRC urges states to launch negotiations for new legally binding rules: Following the recent meeting of the Convention on Certain Conventional Weapons Group of Governmental Experts on Lethal Autonomous Weapons Systems (CCW GGE on LAWS) in May, the ICRC has reiterated its call for states to ‘formally launch negotiations for new and legally binding international rules on autonomous weapon systems.’ For more on the May CCW GGE meeting, listen to Article 36’s recent podcast, in which Richard Moyes provides a summary of the content of the report, and highlights that it fails to reflect the real momentum and energy we have already seen in 2023 towards a legal instrument on autonomous weapons.
- International/US: No, a rogue U.S. Air Force drone did not just try to kill its operator (but it might as well have done): This article notes that ‘The omnipresent Frankenstein/Terminator narrative’ around AI is ‘drowning out discussion about the real issues involved with autonomous weapons such as ethical considerations, accountability, lowering thresholds, algorithmic bias and “digital dehumanisation.”’
- Antigua & Barbuda: Autonomous weapons – a real and urgent danger to people: Writing in the St Kitts & Nevis Observer, Sir Ronald Sanders, Ambassador of Antigua & Barbuda to the United States and the Organization of American States, says that ‘small states in the Caribbean and everywhere should immediately join the growing international movement to prohibit and regulate autonomous weapons systems’, and argues that small states should also ‘adopt laws, prohibiting the importation and use of autonomous weapons and applying stiff penalties for violations.’
- Palestine: Palestinian forum highlights threats of autonomous weapons: Human Rights Watch’s Susan Aboeid writes that ‘Autonomous weapons systems could help automate Israel’s uses of force. These uses of force are frequently unlawful and help entrench Israel’s apartheid against Palestinians. Without new international law to subvert the dangers this technology poses, the autonomous weapon systems Israel is developing today could contribute to their proliferation worldwide and harm the most vulnerable.’
- Palestine: How AI is intensifying Israel’s bombardments of Gaza: This piece notes that there are a ‘litany of dangers’ posed by AI military systems, ‘from digital dehumanization that reduces human beings into lines of code for a machine to determine who should live or die, to a lowered cost and threshold for warfare that replaces ground troops with algorithms’, and further notes that ‘As Mona Shtaya, advocacy director at 7amleh, explained: “if the data is biassed, then the product’s end result is going to be biassed against Palestinians.”’
- New Zealand: AUKUS is already trialling autonomous weapons systems – where is NZ’s policy on next-generation warfare? Jeremy Moses, Associate Professor in International Relations, and Sian Troath, Postdoctoral fellow, both at the University of Canterbury in New Zealand, write that there is a ‘lack of clarity’ on ‘just what New Zealand’s policy position on AWS currently is.’
- US: Can Congress bar fully autonomous nuclear command control?: This piece notes that U.S. senators Edward Markey, Elizabeth Warren, Jeff Merkley, and Bernie Sanders have ‘recently released a draft bill to safeguard nuclear command and control from future policy changes that might allow an artificial intelligence (AI) system to make nuclear launch decisions. Specifically, the bill (which we term the Markey proposal) would prohibit the use of federal funds to use an autonomous weapons system that is not subject to meaningful human control to either launch a nuclear weapon or select or engage targets for the purposes of launching a nuclear weapon.’
- EU: Milrem hails conclusion of EU robotics program amid security review: Defense News reports that ‘A prominent European robotics program came to a close last month, with lead contractor Milrem Robotics celebrating the results while awaiting the verdict of a European Union security review over its new owners in the United Arab Emirates.’
- US: Marines betting big on “critical” air-launched swarming drones: The Drive reports on a U.S. Marine Corps media roundtable, at which the Marine Corps said that ‘the development of a new family of relatively low-cost, long-range air-launched loitering munitions, or kamikaze drones, is “critical” to its new expeditionary and distributed concepts of operations.’
- US: Senators plan briefings on AI to learn more about risks: This year, US Senators have agreed to attend educational briefing sessions on ‘AI and machine learning’, including ‘a classified session dedicated to AI employment by the U.S. Department of Defense and the intelligence community’.
- Russia: Russia Taps Power Of Tech Startups For New Kamikaze Drones: Forbes reports that according to The Russian News Agency (TASS), ‘Russia’s Privet-82 kamikaze drones have successfully undergone combat tests.’
Research and reports:
- Loitering Munitions and Unpredictability: Autonomy in Weapon Systems and Challenges to Human Control: This new report, published by the Center for War Studies, University of Southern Denmark and the Royal Holloway Centre for International Security and written by Dr Ingvild Bode and Dr Tom F.A. Watts, examines how integrating automated, autonomous, and AI technologies into loitering munitions presents new challenges to human control over the use of force, and urges ‘states to develop and adopt legally binding international rules on autonomy in weapon systems, including loitering munitions as a category therein.’
Regulation and enforcement
- EU: EU’s AI ambitions at risk as US pushes to water down international treaty: EURACTIV reports on recent developments, noting that ‘the US government has been pushing to limit the scope of the AI Convention to only public bodies, leaving out the private sector.’ The concern here is that ‘this would mean lowering the AI Convention below the Council of Europe standards.’
- UK: Britain to remove Chinese surveillance gear from government sites: The Guardian recently reported that the British government has major security concerns about the use of Chinese-made surveillance equipment. This week the government ‘announced the removal of equipment such as CCTV cameras from “sensitive buildings”.’
- US: Judges decide cops can’t use black-box biometrics systems in criminal case: BiometricUpdate reports that ‘In an enforceable blow for transparency in facial recognition, an appeals court in the U.S. state of New Jersey has ruled that a defendant in a criminal trial must be shown information about systems being used to incriminate him.’ The court ruled that ‘defendants have the right to know certain raw information about the AI algorithm used to match his face against a database of faces.’
Facial recognition, biometric identification, surveillance
- International: As AI Act vote nears, the EU needs to draw a red line on racist surveillance: AI systems are increasingly a feature of the EU’s approach to migration. Of great concern is that this ‘exposes people of colour to more surveillance, more discriminatory decision-making in immigration claims, and more harmful profiling.’ Experts from European Digital Rights (EDRi) and the Platform for International Cooperation on Undocumented Migrants (PICUM) are calling for an EU AI Act that ‘must prohibit and prevent discriminatory AI profiling tools used in migration procedures.’
- Ireland: Facial recognition technology row is Hamlet without the biometric prints: In an op-ed in The Irish Times, Liam Herrick, executive director of the Irish Council for Civil Liberties, writing on the debate around the introduction of facial recognition technology, argues that ‘the widespread public concerns about facial recognition technology engage key human rights standards under Irish and European law’, and that ‘even as the technology becomes more sophisticated, the risks to rights only increases.’
- UK: Facial recognition ‘debate needed’ says police commissioner: Thames Valley’s police and crime commissioner has said that ‘he was working to build a CCTV partnership between the police and local authorities’, and that ‘discussions with people when out and about have led him to believe that facial recognition could be used to aid investigations.’
Algorithms and autonomy in the wider world
- International: AI Doomerism Is A Decoy: In this piece in The Atlantic, the author addresses the AI & existential risk debate, noting that the divide ‘is not over whether AI is harmful, but which harm is most concerning – a future AI cataclysm only its architects are warning about and claim they can uniquely avert, or a more quotidian violence that governments, researchers, and the public have long been living through and fighting against – as well as who is at risk and how best to prevent that harm.’ More on the ‘existential risk’ framing here.
- International: Gulf states spending big on AI: Opportunity or oppression?: Deutsche Welle reports that digital rights activists are becoming increasingly ‘concerned about data security, surveillance, as well as the potential for “dual use” of certain AI-linked technologies.’ This comes at a time when the Gulf region is said to be heavily investing in AI technology ‘because it is an important part of future plans to develop their national economies away from oil income.’
- US: Banking chatbots have ‘harmed’ consumers, according to new Fed report: A report from the Consumer Financial Protection Bureau has found that ‘The deployment of deficient chatbots by financial institutions risks upsetting their customers and causing them substantial harm, for which they may be held responsible.’
Other items of interest
- The Sociological Review: Artificial Intelligence: The June 2023 issue of The Sociological Review focuses on AI, with a number of interesting contributions, including on AI-assisted risk assessments in criminal justice decisions; on AI’s algorithmic violence and how ‘there is a single social thread running through contemporary AI from end to end: algorithmic violence, which encompasses both symbolic violence and predictive intervention’; and on the use of AI by the U.S. military, arguing that ‘like war itself, and like “great powers”, nothing about this moment and its technologies is inevitable. The hype, predictions and technologies themselves are instead the result of intrinsically political and, moreover, locatable investments.’
- Webinar: Artificial intelligence and human rights: the case of Israeli surveillance of Palestinians: On 15 June, ‘The Arab Center Washington DC is convening a panel of experts to explore the role of artificial intelligence in the Israeli occupation and its biometric surveillance of Palestinians, and to discuss the implications for human rights and international humanitarian law.’ The session will be hosted on the Zoom platform and you can register here.
Sign up for our email alerts
We take data privacy seriously. Our Privacy Notice explains who we are, how we collect, share and use Personal Information, and how you can exercise your privacy rights. Don’t worry: we hate spam and will NEVER sell or share your details with anyone else.