Autonomy in weapons systems, and AI in security and defence
- International: UN Secretary-General António Guterres calls on states to negotiate a new, legally binding instrument on autonomous weapons: In his New Agenda for Peace, released on 20 July, the Secretary-General called on states to ‘conclude, by 2026, a legally binding instrument to prohibit lethal autonomous weapon systems that function without human control or oversight, and which cannot be used in compliance with international humanitarian law, and to regulate all other types of autonomous weapons systems.’ The New Agenda for Peace also states that the ‘design, development and use of these systems raise humanitarian, legal, security and ethical concerns and pose a direct threat to human rights and fundamental freedoms’.
- Israel: Israel Quietly Embeds AI Systems in Deadly Military Operations: Writing for Bloomberg, Marissa Newman reports on Israel’s use of AI to ‘select targets for air strikes and organize wartime logistics’. While the IDF would not comment on specific operations, the IDF said that it ‘now uses an AI recommendation system that can crunch huge amounts of data to select targets for air strikes. Ensuing raids can then be rapidly assembled with another artificial intelligence model called Fire Factory, which uses data about military-approved targets to calculate munition loads, prioritize and assign thousands of targets to aircraft and drones, and propose a schedule.’
- Russia: Russia Boosts Production And Displays New ‘Swarming’ Version Of Lancet-3 Kamikaze Drone: Forbes reports that Russia is increasing production of its Lancet 3 loitering munition, having become Russia’s ‘preferred weapon for counter-battery strikes’. As this report notes, while the Lancet has a level of autonomous decision making, ‘it is not clear whether this extends to identifying and selecting its own targets — yet.’
- Ukraine and Russia: Drone Boats Used In Kerch Bridge Strike: Reports: The recent attack on the Kerch bridge, which links Russia to Crimea, was reportedly carried out by the Security Service of Ukraine (SBU) in cooperation with the Ukrainian navy, using uncrewed surface vessels (USVs) which ‘were directed to the bridge and then detonated from beneath the roadway.’
- South Africa: Militarisation of AI has severe implications for global security and warfare: Professor Tshilidzi Marwala, the seventh Rector of the United Nations University and a UN Under-Secretary-General, writes in an op-ed that ‘As AI continues to advance, the need to effectively regulate these algorithms becomes more crucial’, and notes that ‘The militarisation of AI has profound implications for global security and warfare.’
- International: A Battlefield AI Company Says It’s One of the Good Guys: Wired reports on Helsing AI, a defence technology company whose software ‘maps the electromagnetic spectrum, the invisible space where different machines send electronic signals between one another to communicate’, in order to ‘help soldiers make faster, better-informed decisions and will be accessible on a variety of devices, so soldiers in frontline trenches can see the same information as commanders in control centers.’
- Hungary: Hungary Awards Major Contract to Rheinmetall and UVision for Hero Loitering Munitions: Hungary has awarded a contract ‘in the three-digit million euro range’ to Rheinmetall AG and UVision for the delivery of Hero loitering munitions by 2024-25. The Hero loitering munitions reportedly ‘possess autonomous target engagement capabilities, including reconnaissance and surveillance. They can locate, track, and engage emerging enemy targets with low signatures beyond the line of sight.’ More on the Hero loitering munition, and on Hungary’s relationship with Rheinmetall, here.
- US: DoD partners call for central database for training military AI: A number of experts testifying before the House Armed Services Committee ‘said the US Department of Defense urgently needs to streamline its lagging data collection practices if it wants to maintain its lead’ in the ‘global AI arms race’. This piece further notes that ‘nowhere during the hearing was there any meaningful discussion of whether or not the military should continue its investment in AI-enabled weapons systems in the first place.’
Facial recognition, biometric identification, surveillance
- US: This AI Watches Millions Of Cars Daily And Tells Cops If You’re Driving Like A Criminal: Thomas Brewster, writing for Forbes, reports that ‘Automatic License Plate Recognition (ALPR) technology is typically used to search for plates linked to specific crimes.’ In 2020, however, the technology was used to track a suspected drug trafficker based on his routes and driving patterns. The man pleaded guilty, and his lawyer later warned that the danger of this technology is that ‘with no judicial oversight this type of system operates at the caprice of every officer with access to it.’
- US: New Report Warns Of Growing Surveillance Threat For Abortions Or Gender-Affirming Care: A report published this week by the Surveillance Technology Oversight Project (S.T.O.P.) notes that ‘pregnant people could be tracked and prosecuted’ for seeking care after the Supreme Court upended federal abortion access in 2022. The advocacy group is deeply concerned that ‘such practices could increase as state officials and law enforcement utilise new data points to track patients travelling out of state for abortion or gender-affirming care.’
- International: OpenAI Worries About What Its Chatbot Will Say About People’s Faces: In this piece, The New York Times writes on how AI may reshape privacy, focusing on OpenAI, the developer of ChatGPT. The company has disclosed concerns about an advanced, not-yet-public version of the tool that can analyse images and recognise people’s faces. OpenAI worries that this would raise legal questions, and that ‘making such a feature publicly available would push the boundaries of what was generally considered acceptable practice by U.S. technology companies.’ Sandhini Agarwal, an OpenAI policy researcher, says the company ‘is figuring out how to address this and other safety concerns before releasing the image analysis feature widely.’
- Australia: AFP pauses use of controversial surveillance tech Auror found in Woolworths, Bunnings: The Australian Federal Police (AFP) has suspended its use of Auror, a retail crime intelligence and loss prevention platform currently operational ‘in 40% of Australian retailers, including Woolworths and Bunnings.’ Auror reportedly ‘uses machine learning to aggregate data sources’, and privacy concerns have been raised that ‘the data is accessible to police without any oversight.’
- US: Facial Recognition and Racial Bias: Dr. Dédé Tetsubayashi, founder of Incluu, writes on her concerns about the use of facial recognition technologies, and in particular their impact on Black, Indigenous, and People of Color in the US, which in some cases results in racial bias and wrongful arrests. She notes that ‘participation occurs without consent, or even awareness, and is bolstered by a lack of legislative oversight.’ She suggests that ‘technological design should be just, equitable, and relationship-centered; it should be built with not just for all users and those impacted by its use.’
AI, algorithms and autonomy in the wider world
- International: The workers at the frontlines of the AI revolution: The global freelance workforce is feeling the competitive pressure of generative AI, at a time when companies worldwide are ‘implementing cost cutting measures.’ Uma Rani, senior economist at the International Labour Organization, says: ‘This is very clearly a new way of offshoring services today.’ In this article, Rest of World shares the experiences of ‘freelance workers around the world’, describing how they use generative AI and how it has affected their work and income.
- International: Want agency in the AI age? Get ready to fight: MIT Technology Review writes on the various fronts where battles are being waged against generative AI, whose developers are accused of illegally using data scraped from the internet to train their models.
Government Regulation
- International: First Draft of UN Cybercrime Convention Drops Troubling Provisions, But Dangerous And Open-Ended Cross Border Surveillance Powers Are Still on the Table: The Electronic Frontier Foundation writes on the recently released ‘zero draft’ of the UN Cybercrime Convention. The EFF is concerned that the instrument, which is slated for adoption in January 2024, grants a wide net of powers for international surveillance and data collection, and raises concerns about infringements of privacy and free speech rights.
- US: Sen. Casey rolls out bills to protect workers from AI surveillance and ‘robot bosses’: Democratic Senator Bob Casey has introduced two bills aimed at regulating automated decision-making and surveillance in the workplace. The Senator’s office has said that these bills would address the fact that ‘systems and software, not humans, are increasingly making decisions on whom to interview for a job, where and when employees should work, and who gets promoted, disciplined, or even fired from their job.’
- France: France lawmakers pass bill allowing remote surveillance despite civil liberties concerns: The Jurist reports that the French National Assembly passed a bill which would allow for remote surveillance, including access to ‘devices’ cameras, microphones and location services for investigations of terrorism, organised crime or crimes punishable by five or more years in prison.’ Although such access would have to be approved by a judge on a case-by-case basis, critics have voiced concerns about privacy and civil liberties.
- EU/Australia: Regulating artificial intelligence: How the EU just got us closer: This piece draws parallels between the EU’s AI Act and Australia’s attempts at regulating AI, suggesting that the EU’s risk-based categorisation of AI applications is a practice Australia could replicate.
Research and Reports
- UK: Regulating AI in the UK: This report by the Ada Lovelace Institute ‘contextualises and summarises the UK’s current plans for AI regulation, and sets out recommendations for the Government and the Foundation Model Taskforce.’
- US: Automated Firing & Algorithmic Management: Mounting a Resistance, with Veena Dubal, Zephyr Teachout and Zubin Soleimany | AI Now Salons: This piece is the latest addition to the AI Now Salon Series by AI Now Institute. It covers a conversation with three prominent U.S.-based lawyers and activists on the scope and extent of privacy invasion by the use of automated decisions at the workplace.