Autonomy in weapons systems, and AI in security and defence
- International: Landmark joint call from UN Secretary-General and ICRC President urging states to establish prohibitions and regulations on autonomous weapons: The UN Secretary-General and the President of the ICRC have issued a significant joint appeal urging states to take ‘decisive action’ in creating new prohibitions and regulations for autonomous weapons systems. States must ‘come together constructively to negotiate new rules that address the tangible threats posed by these weapon technologies’, write the Secretary-General and the ICRC President.
- International/US: AI and the Future of Drone Warfare: Risks and Recommendations: Writing in Just Security, Brianna Rosen, a Senior Fellow at Just Security and a Strategy and Policy Fellow at Oxford University’s Blavatnik School of Government, argues that the most immediate threat from AI is ‘not the “AI apocalypse” – where machines take over the world – but humans leveraging AI to establish new patterns of violence and domination over each other’, and that ‘As AI reduces human involvement in killing, drone warfare will most likely become less explainable and transparent than it is now.’
- US: AI Security Center to Open at National Security Agency: The U.S. National Security Agency has announced that it will open an AI Security Center, which will become ‘the focal point for developing best practices, evaluation methodology and risk frameworks with the aim of promoting the secure adoption of new AI capabilities across the national security enterprise and the defense industrial base.’
Facial recognition, biometric identification, surveillance
- US: Predictive Policing Software Terrible At Predicting Crimes: An investigation by The Markup has found that ‘a software company sold a New Jersey police department an algorithm that was right less than 1% of the time’, and that ‘a major problem with Geolitica’s system, as it was used in Plainfield, is that there were a massive number of predictions compared to a relatively small number of crimes’. Dillon Reisman, founder of the American Civil Liberties Union of New Jersey’s Automated Injustice Project, commented: ‘I think that what this shows is just how unreliable so many of the tools sold to police departments are’.
- UK: Police using passport images for facial recognition would be a ‘setback’ for trust in AI: Following British policing minister Chris Philp’s comments that police facial recognition technology could be given access to Britain’s passport database of more than 45 million photos, privacy campaigners, experts, and others have called on the government to roll back such plans, noting that ‘If the state routinely runs every photograph against every picture of every suspected incident of crime simply because it can, there is a significant risk of disproportionality and damaging public trust.’
AI, algorithms and autonomy in the wider world
- International: Workers could be the ones to regulate AI: This opinion piece in The Financial Times notes that ‘workers who have an everyday experience with the new technology are in a good position to understand how to curb it appropriately’, and that ‘most people understand that if AI isn’t human-centred, and ultimately human labour-enhancing, we’re in for some very ugly politics.’
Research and reports
- International: Digitally Divided: Technology, Inequality, and Human Rights: A new briefing from Amnesty US highlights that ‘It is clearer than ever that digital technologies, particularly in the absence of robust regulation, can amplify and exacerbate underlying social, racial, and economic inequalities, helping to re-entrench patterns of structural exploitation.’
- International: The Repressive Power of Artificial Intelligence: Freedom House’s 2023 digital rights report finds that ‘advances in artificial intelligence (AI) are amplifying a crisis for human rights online. While AI technology offers exciting and beneficial uses for science, education, and society at large, its uptake has also increased the scale, speed, and efficiency of digital repression. Automated systems have enabled governments to conduct more precise and subtle forms of online censorship. Purveyors of disinformation are employing AI-generated images, audio, and text, making the truth easier to distort and harder to discern. Sophisticated surveillance systems rapidly trawl social media for signs of dissent, and massive datasets are paired with facial scans to identify and track pro-democracy protesters.’ A podcast conversation with two of the report’s authors (with transcript) is available here.