Autonomy in weapons systems, and AI in security and defence
- International: War is messy. AI can’t handle it: This article by Ian Reynolds and Ozan Ahmet Cetin, published by the Bulletin of the Atomic Scientists, delves into the links between software and war. Earlier this year, ‘technology company Palantir released a demo of a large language model (LLM)-enabled battle management software called Artificial Intelligence Platform (AIP) for Defense.’ The authors point out that ‘there are ongoing discussions about how AI will be used for military planning and command decision-making more generally’ and suggest how some tools ‘can help mitigate the problem of explainability in AI systems.’
- International: What are the AI Act and the Council of Europe Convention: Stop Killer Robots: This week, Francesca Fanucci, Senior Legal Advisor at the European Center for Not-for-Profit Law, and Automated Decision Research manager Catherine Connolly write about the EU’s AI Act and the Council of Europe’s Framework Convention on AI. The piece focuses on ‘what needs to happen to ensure against the unrestricted use of AI systems on national security & defence grounds, and the need for a new international legal instrument on autonomous weapons systems.’
- International: As Militaries Adopt AI, Hype Becomes a Weapon: JSTOR Daily reports on the global AI race and the hype created around applications of AI for defence and military purposes. The piece outlines how countries have hyped their AI capabilities and ambitions, and asks: ‘Which is more dangerous for the future of humanity: a military AI that hums with seamless efficiency, or one that is riddled with errors and wielded by a resentful officer corps?’
- US: ‘Digidogs’ are the latest in crime-fighting technology. Privacy advocates are terrified: Politico reports on ‘digidogs’, robotic dogs used by law enforcement agencies in New York, and highlights the concerns of civil society advocates, who note that ‘More departments are using more tools that can collect even more data for less money’. The piece also highlights other concerning technologies deployed by the NYPD, including automated licence plate readers and facial recognition, and discusses the weaponisation of robots elsewhere.
- US: AI pilots US Air Force Valkyrie combat drone for the first time: New Atlas reports that the ‘US Air Force’s XQ-58A Valkyrie experimental combat drone has made its first flight under the control of artificial intelligence’ at the Eglin Test and Training Complex in Florida. The flight is an extension of the Skyborg Vanguard program, which focuses on ‘developing the ability of an artificial intelligence/machine learning agent to fly safely while solving tactically relevant challenges.’
- International: Inside the messy ethics of making war with machines: This piece by MIT Technology Review discusses the ethical and legal questions raised by the integration of autonomy into military functions and operations. Human-machine teaming now features in a number of new weapon systems, where machine learning has ‘set off a paradigm shift in how militaries use computers to help shape the crucial decisions of warfare.’ Weapon systems capable of ‘human target detection and identification’ might dilute the complex processes involved in human decision-making. Beyond the other challenges this automation might bring, it also raises the risk that ‘something ineffable about the act of war could be lost entirely.’
- Ukraine: After ‘Army of Drones,’ Ukraine Now Wants ‘Army of Robots’: This piece by The Defense Post reports on plans by Ukraine’s Ministry of Digital Transformation to ‘launch an electronic warfare army and an “Army of Robots” to better respond to more sophisticated threats’ in cooperation with industry partners. This comes at a time when various international civil society organisations have ‘asserted that killing people based on data collected by sensors and processed by machines would violate human dignity.’
- Australia: A swarm of technological advances: The Australian Army’s Robotic and Autonomous Systems Implementation & Coordination Office (RICO) conducted a human-machine teaming demonstration at Puckapunyal in June. The demonstration included ‘optionally crewed combat vehicles (OCCV) – enhanced M113AS4s’ and armed (simulated) drones. The piece states that it was the ‘first time the Army fired a remote weapon system from the remote-controlled vehicle.’ It adds that ‘In theory, robots operating on advanced artificial intelligence software are then let loose to clear the last 300 metres, reducing risk to soldiers. Using image recognition and context awareness, they would identify dead, injured and surrendering enemy personnel while supported by tanks.’
Facial recognition, biometric identification, surveillance
- Czech data protection authorities question police use of facial recognition: Biometric Update reports that the Czech Office for Personal Data Protection is assessing the Czech Police’s use of a facial recognition system named DOP (Digitálních podob osob), which has been in operation for almost a year. The system ‘relies on images from government identity card and travel document registers’ and could ‘be used for monitoring people’s activities, including those online.’ Raising concerns about citizens’ privacy and data rights, lawyers have suggested that ‘mass use of CCTV surveillance requires a new legal framework, even if the images are only used by the facial recognition system retroactively.’
- US: US regulatory body set to unveil measures for ‘Surveillance Industry’: Wion reports that the Consumer Financial Protection Bureau (CFPB) is set to introduce regulation aimed at companies ‘collecting and selling personal data.’ CFPB head Rohit Chopra said it was worrisome that ‘data brokers’, aided by algorithms, could obtain the personal data of vulnerable groups such as military personnel and people with dementia.
- Facial recognition technology should be regulated, but not banned: This piece in Euronews by Tony Porter, Chief Privacy Officer at Corsight AI (a facial recognition software company), and Dr Nicole Benjamin Fink, Founder of Conservation Beyond Borders, discusses the debate around banning facial recognition technologies in the European Union. The authors argue that, despite concerns expressed by privacy advocates and studies pointing to the inefficacy of facial recognition technologies, a ‘2020 NIST report claimed that FRT performs far more effectively across racial and other demographic groups than widely reported, with the most accurate technologies displaying “undetectable” differences between groups.’
- India: “Surveillance Of Citizens”: Editors Guild on Data Protection Bill: NDTV reports on the Editors Guild of India’s response to certain provisions of the Digital Personal Data Protection (DPDP) Bill, which the Guild says ‘creates an enabling framework for surveillance of citizens, including journalists and their sources.’ Sections of the bill empower the government to ‘ask any public or private entity (data fiduciary) to furnish personal information of citizens, including journalists and their sources.’ It also empowers the government to exempt any ‘instrumentality of state’ from the data protection provisions, and allows the government to hold personal data for an unlimited amount of time.
- China: China releases plans to restrict facial recognition technology: CNBC reports on new guidelines restricting the use of facial recognition technologies in China. Use for ‘administrative purposes’ and ‘public safety’ is allowed, while other uses ‘require individual consent, and a specific purpose.’ The guidelines also encourage the use of ‘national systems.’
- US: Montana Passes Law Regulating Facial Recognition Use by Police: Biometric Update reports that Montana has passed the Facial Recognition for Government Use Act, which allows government agencies, including law enforcement, to use facial recognition ‘to look for suspects, victims of, or witnesses to serious crimes.’ The law, however, prohibits the use of continuous facial recognition and establishes human review and audit procedures to ensure compliance.
AI, algorithms and autonomy in the wider world
- New York: Rules for AI hiring tools begin to take shape: Politico reports on New York City’s law on the use of AI in hiring decisions, which came into force on July 5th and is enforced by the Department of Consumer and Worker Protection (DCWP). The report cites DCWP guidance stating that there is no ‘direct requirement that an employer stop using a given tool if it fails a bias audit.’ However, ‘existing municipal, state and federal anti-discrimination laws’ could still apply, as seen in a recent Equal Employment Opportunity Commission settlement over an ‘AI hiring tool accused of illegally screening out older applicants.’
- Australia: Australia Needs AI Regulation: In this piece, the Australian Human Rights Commission expresses concern about emerging harms of AI applications, including threats to privacy, algorithmic discrimination, automation bias, misinformation and disinformation. As these harms may grow with the ‘increasing interoperability’ of these technologies, the Commission calls for ‘Immediate steps’ to ‘regulate AI and protect individuals from these unique risks.’ It further suggests that the government should ‘commit to reviews of existing legislation, and then address shortfalls in AI-specific legislation’ as a way forward.
- UK/ International: UK Plans AI Summit to Assert Global Leadership in Artificial Intelligence: Cryptopolitan reports on an upcoming AI summit hosted by the UK, which is expected to bring together ‘prominent global leaders and top AI executives.’ The summit could help position the UK as a ‘hub for AI talent and adoption’ while also putting in place ‘guardrails’, including computing power thresholds, AI chip regulation and the watermarking of AI-generated material.
- International: Vatican announces that artificial intelligence will be theme of next World Day of Peace: The Catholic News Agency reports on a statement from the Vatican’s Dicastery for Promoting Integral Human Development announcing that Pope Francis’ annual peace message for 2024 will focus on artificial intelligence. While highlighting the urgent need for guidance and ethical reflection on the technology, the statement also said that ‘open dialogue on the meaning of these new technologies, endowed with disruptive possibilities and ambivalent effects’ is necessary.
- US: States try to control flood of AI tech: This piece focuses on the ‘flood’ of legislation being brought in by legislatures at various levels across the US to regulate and govern emerging AI technologies, noting that at least 25 states have introduced bills this year, while 14 have adopted resolutions or enacted legislation. It adds nuance on the various uses of these technologies, and the challenges legislatures face in properly categorising and assessing risk.
- International: Zoom denies training AI on calls without consent: BBC News reports that Zoom launched new AI-powered features in June; experts have questioned the wording of its terms of service, calling them ‘opaque’ and warning of loopholes that could allow the company to ‘access more user data than needed.’ The terms of service have since been updated to request users’ consent ‘to use their data to train AI models.’
- International: Rules to keep AI in check: nations carve different paths for tech regulation: This article provides a succinct ‘guide to how China, the EU and the US are reining in artificial intelligence.’ In June this year, the European Parliament passed the AI Act — ‘a giant piece of legislation that would categorise AI tools on the basis of their potential risk.’ The author notes that ‘in contrast to the EU, the United States has no broad, federal AI-related laws nor significant data-protection rules. China has so far issued the most AI legislation although it applies to AI systems used by companies, not by the government. A 2021 law requires firms to be transparent and unbiased when using personal data in automated decisions, and to let people opt out of such decisions.’
- US: Meet the hackers who are trying to make AI go rogue: Writing for the Washington Post, Will Oremus covers a ‘red-teaming’ event taking place this week at the Def Con hacker convention in Las Vegas, United States, noting that ‘chatbots can be biased, deceptive or even dangerous.’ Competing teams of hackers will try to find weaknesses and exploit loopholes in the systems.
- International: Why it’s impossible to build an unbiased AI language model: Melissa Heikkilä, a senior reporter at MIT Technology Review, writes about how artificial intelligence is changing our societies. She points out that ‘it is dangerous to push a narrative that AI is unbiased…it will only exacerbate the problem of humans’ tendency to trust computers, even when the computers are wrong. In fact, AI language models reflect not only the biases in their training data, but also the biases of people who created them and trained them.’ Chan Park, a PhD researcher at Carnegie Mellon University, says of the findings of a recent study: ‘we believe no language model can be entirely free from political biases.’
- International: AI and the environment: What are the pitfalls?: The Indian Express carries a story, originally published by Deutsche Welle, about the significant impact AI is having on the environment. For example, Anne Mollen, a researcher at the Berlin-based NGO AlgorithmWatch, says ‘data centre infrastructure and data submission networks account for 2-4% of global CO2 emissions.’ Benedetta Brevini, associate professor of political economy of communication at the University of Sydney, Australia, asks an important question: ‘Why are we not having a conversation about how to reduce this carbon footprint?’ More on this here, noting for example that a one-hour video call uses between 2 and 12 litres of water, and that turning your video off and going audio-only can reduce the carbon footprint of your call by as much as 96%.
Research and reports
- How modern militaries are leveraging AI: This report by Julia Siegal and Tate Nurkin of the Scowcroft Center for Strategy and Security at the Atlantic Council explores opportunities and challenges in the US military’s wide-ranging adoption of human-machine teaming (HMT). HMT could offer a military advantage in various use cases; however, the report says, ‘AI agents’ would need to overcome issues of reliability, data security and transparency. It also urges the US Department of Defense to thoroughly consider scenarios in which AI agents may target humans directly, or which might otherwise lead to escalation.
- Ready but irresponsible? Analysis of the Government Artificial Intelligence Readiness Index: This research by Stany Nzobonimpa and Jean-François Savard, published in the journal Policy & Internet, analyses country scores on the Oxford Insights AI Readiness Index, alongside further indicators such as privacy, inclusion, accountability and transparency. The research finds that the ‘world’s top AI innovators are ready but are not prioritising and promoting responsible AI.’