Autonomy in weapons systems, and AI in security and defence
- International: Oppenheimer’s warning lives on: international laws and treaties are failing to stop a new arms race: Alexander Gillespie, Professor of Law at the University of Waikato, Aotearoa New Zealand, writing on AI in weapons systems, notes that ‘international law and regulation are left scrambling to catch up with the march of technology – to govern what Oppenheimer called “the relations between science and common sense”’, and notes also that more than 90 countries support the negotiation of a new, legally binding instrument on autonomy in weapons systems.
- Ukraine/Russia: The war in Ukraine is spurring a revolution in drone warfare using AI: The Washington Post reports on the development of AI technology in drones, spurred by Russia’s war on Ukraine. The Post states that ‘AI technology, under development by a growing number of Ukrainian drone companies, is one of several innovative leaps underway in Kyiv’s domestic drone market’, with ‘improvements in speed, flight range, payload capacity and other capabilities’ having ‘an immediate impact on the battlefield’.
- Ukraine/Russia: Roles and Implications of AI in the Russian-Ukrainian Conflict: Samuel Bendett, adjunct senior fellow with CNAS, writes on the emergence of AI ‘as a significant asset in the ongoing Russian-Ukrainian conflict. Specifically, it has become a key data analysis tool that helps operators and warfighters make sense of the growing volume and amount of information generated by numerous systems, weapons and soldiers in the field.’ He also notes that ‘So far, Ukraine has managed to maintain a human-centric approach toward AI use, with operators making the final decisions.’
- Russia: Russia prepares an ‘avalanche’ of FPV kamikaze drones: In Forbes, David Hambling reports on claims circulating on Russian Telegram that Russian forces ‘will soon see an avalanche-like increase in strikes using this weapon’, with one user predicting that ‘The Lancet will be the long arm and a flagship kamikaze drone at operational depth, while the FPV drones will take over tactical depths.’
- US: More battlefield AI will make the fog of war more deadly: In Wired, Will Knight writes that ‘greater use of AI will create a growing number of military encounters in which humans are removed or abstracted from the equation’. While some people have compared AI to nuclear weapons, he argues that ‘the more immediate risk is less the destructive power of military AI systems than their potential to deepen the fog of war and make human errors more likely.’
- US: The AI-powered, totally autonomous future of war is here: Also in Wired, Will Knight has a long piece on the adoption of AI and autonomous systems by the US Naval Service, highlighting that ‘One need only look to the civilian world to see how this technology can go awry—face-recognition systems that display racial and gender biases, self-driving cars that slam into objects they were never trained to see. Even with careful engineering, a military system that incorporates AI could make similar mistakes. An algorithm trained to recognize enemy trucks might be confused by a civilian vehicle. A missile defense system designed to react to incoming threats may not be able to fully “explain” why it misfired.’
Facial recognition, biometric identification, surveillance
- Calls for Policing Board to investigate ‘secret surveillance of journalists in Northern Ireland’: Amnesty International and the Committee on the Administration of Justice (CAJ) ‘have jointly written to the [Northern Irish] Policing Board, asking it to launch an investigation’ into ‘police surveillance of journalists following revelations that the PSNI accessed a prominent reporter’s phone.’
- US/Canada: ESRB proposes facial recognition age verification for parental consent: The Entertainment Software Rating Board has proposed a new verification mechanism that ‘would use facial age assurance software to verify the age of the parent’ giving consent to the collection of personal information from children under the age of 13.
- UK: Live Facial Recognition Technology – Data Protection Reminders: The UK’s Information Commissioner’s Office has issued a set of ‘reminders’ regarding the use of live facial recognition technology, noting that the use of live facial recognition ‘must be strictly necessary for the law enforcement purposes’ and that ‘There should be periodic testing and reviews of the technology to ensure that it remains accurate and effective towards understanding and eliminating bias.’
- US: Is Artificial Intelligence Anti-Black?: This piece provides an overview of racist outcomes from the use of AI across a range of technologies, including facial recognition, recruiting and hiring platforms, and generative AI.
- US: Should police use facial-recognition technology? Ann Arbor to vote on issue: The city of Ann Arbor, Michigan, is ‘considering restricting police use of a facial-recognition technology’, with city officials due to vote on the issue soon.
- EU: Ryanair challenged by Noyb over ‘invasive’ facial recognition: Noyb is a non-profit that aims to launch strategic court cases and media initiatives in support of the General Data Protection Regulation (GDPR). This week the organisation ‘filed a complaint against Ryanair for its use of facial recognition technology.’ The complaint stems from allegations that ‘a customer was given the choice of verifying her booking through facial recognition or going to the check-in counter at the airport more than two hours before departure and was also charged a fee for the verification process.’
AI, algorithms and autonomy in the wider world
- India: Tech companies say no to more laws, statutory body to regulate AI: In India, the technology industry has ‘strongly opposed the creation of more laws or a statutory authority to regulate AI in the country’, arguing that it could ‘impede the growth of the evolving AI ecosystem in India’. This follows a report from the Telecom Regulatory Authority of India recommending that an independent statutory authority ‘be established immediately for development of responsible AI and regulation of use cases in the country.’
- International: It’s high time for more AI transparency: MIT Technology Review writes on the likely consequences of the lack of transparency around recently released generative AI systems. Because of this opacity, any changes or tweaks to an underlying generative AI model may cause systems built on top of it to glitch.
- International: UN Human Rights Council Resolutions Offer Crucial Safeguards For Civil Society In AI-Driven Digital Age: This piece by the European Center for Not-for-Profit Law discusses the importance of two UN Human Rights Council resolutions that bear on protecting the rights of individuals against possible infringement by various applications of AI.
- International: Why actors are really worried about the use of AI by movie studios: Last week Reuters reported that ‘Since June, Hollywood studios and performers have debated the use of artificial intelligence in film and television.’ Actors are concerned that data collected from their performances can be reused with AI. This article builds on the Reuters piece, explaining how ‘new processes such as machine learning – AI systems that improve with time – could turn an actor’s performance in one movie into a new character for another production, or for a video game.’
- South Africa: Algorithm bias — Synthetic data should be option of last resort when training AI systems: In an op-ed for the Daily Maverick, Professor Tshilidzi Marwala, the seventh Rector of the United Nations (UN) University and UN Under-Secretary-General, argues that ‘Fake data gives impaired AI systems. Therefore, when synthetic data is used to train AI systems, it must be used cautiously.’
- US: AI Boom Creates Concerns for Recent Graduates: Cengage Group, a global education technology company, has published its 2023 Employability Report, which it says ‘Exposes New Hiring Trends and Shaky Graduate Confidence.’ The report details graduates’ concerns about how prepared they are for the workforce given the ‘emergence of generative AI platforms, like ChatGPT’. Graduates are split on ‘whether their job would be replaced by AI, with 46% feeling threatened and 55% believing their job could never be fully replaced by AI.’
Government Regulation
- International: The Perils and Promise of AI Regulation: In this piece, Just Security writes about the global ‘rapid deployment of AI’ over the past year and how this has led to discussions in multiple forums and to new legislation being developed. The authors note clear areas of both consensus and divergence between the US and the EU on the legal aspects of developing regulation in this area.
Research and Reports
- Report: Adding Structure to AI Harm: An Introduction to CSET’s AI Harm Framework: The Center for Security and Emerging Technology has created a framework for analysing the harms of AI systems. The framework takes into consideration individual and collective harms, tangible and intangible harms, and harms that are probable as opposed to harms that have already occurred. The report is an attempt to create ‘a standardized conceptual framework to support and facilitate analyses of AI harm.’
- The academic journal Science as Culture has published a special issue on border technologies, with the introduction (which is free to access) noting that ‘Border technologies constitute and stabilise hegemonic border-control regimes but rely on particular securitised and techno-optimistic understandings of migration control. These notions are based on exerting epistemic control over the agendas that determine what and how migration and security research receives funding and feeds back into policymaking. The kinds of epistemic control exercised by EU institutions and security agencies marginalise divergent directions of critical migration and security research and make contestation inherently difficult.’