Our news briefings are emailed to our newsletter subscribers every week. Sign up via the contact form below if you’d like to receive them.
Autonomous weapon systems, and AI and autonomy in security & defence
- UK/Australia/US: World first as UK hosts inaugural AUKUS AI and autonomy trial: In a press release, the UK’s Ministry of Defence and Defence Science and Technology Laboratory (DSTL) announced that it held the first joint Australian, UK and US AI and autonomy trial, which ‘saw the initial joint deployment of Australian, UK and US AI-enabled assets in a collaborative swarm to detect and track military targets in a representative environment in real time.’ The trial, which was organised by DSTL, ‘achieved world firsts, including the live retraining of models in flight and the interchange of AI models between AUKUS nations. The AUKUS collaboration is looking to rapidly drive these technologies into military capabilities.’ The press release notes that ‘More than 70 military and civilian defence personnel and industry contractors were involved in the exercise in April 2023.’ DSTL has released a video of the trial on Twitter.
- Israel: Israel aims to be ‘AI superpower’, advance autonomous warfare: Reuters reports that the Israeli Defence Ministry director-general Eyal Zamir has stated that their ‘mission is to turn the State of Israel into an AI superpower and to be at the head of a very limited number of world powers that are in this club.’ More on this story here, noting Zamir’s comments that ‘AI technologies will create many additional capabilities including the operation of platforms in groups and swarms and independently operated combat systems. These technologies will integrate into the battlefield and provide an advantage to those who know how to develop them and use them operationally.’
- US: Start-Ups Bring Silicon Valley Ethos to a Lumbering Military-Industrial Complex: The New York Times carries a feature on small U.S. technology firms and the U.S. Department of Defense, noting that Ukraine has become a testbed for some of these smaller firms, many of whom are building systems with autonomous functions – including AeroVironment, who make the Switchblade drones, and Anduril.
- US/International: AI may pull the trigger in war, but it shouldn’t call the shots: In an op-ed in The Boston Globe, Seth Moulton, a Democratic member of the U.S. Congress who sits on the House Armed Services Committee and a former U.S. Marine Corps Officer, writes that ‘AI has no moral compass. It cannot weigh the ethical costs and benefits of an action’, and argues that the risks of autonomous weapons ‘make it even more urgent that the world’s leading military powers work together to limit their use.’ However, and as noted in last week’s news briefing, Moulton is one of the Congressmen who has introduced a bill to ‘create an office inside the U.S. Department of Defense to align and accelerate the delivery of autonomous technologies for military use.’
- US/International: AI on the battlefield: Next stop for Peter Thiel after Paypal, Hulk Hogan, Trump and Facebook: El País features a piece on the billionaire Peter Thiel and Palantir (he is a co-founder of the firm). The piece focuses on Palantir’s Artificial Intelligence Platform, ‘the software at the center of the company’s efforts to take AI into the arena of warfare.’ More on this Palantir platform here and here.
- US: NGA making ‘significant advances’ months into AI-focused Project Maven takeover: The US National Geospatial-Intelligence Agency (NGA) has said that it has made ‘important strides’ since taking over Project Maven several months ago: ‘We work closely with the combatant commands to integrate AI into workflows, accelerating operations and speed-to-decision. This benefits maritime domain awareness, target management and our ability to automatically search and detect objects of interest.’
Government regulation and enforcement
- UK: Worker-focused AI Bill introduced by backbench MP Mick Whitley: Labour MP Mick Whitley ‘introduced a bill to regulate the use of artificial intelligence (AI) in the workplace, with the goal of creating “a people-focused and rights-based approach” to ensure all workers are better protected against deployments of the technology.’ Computer Weekly notes that ‘Although 10-minute rule motions rarely become law, they are often used as a mechanism to generate debates on an issue and test opinion in the Parliament.’
- Italy: Exclusive: Italy watchdog to review other AI systems after ChatGPT brief ban: Italy’s data protection authority will ‘review other artificial intelligence platforms and hire AI experts, a top official said, as it ramps up scrutiny of the powerful technology after temporarily banning ChatGPT in March.’
- US: Biden-Harris administration takes new steps to advance responsible artificial intelligence research, development and deployment: The Biden-Harris administration has announced a suite of new efforts that ‘will advance the research, development, and deployment of responsible artificial intelligence (AI) that protects individuals’ rights and safety and delivers results for the American people.’
Facial recognition, biometric identification, surveillance
- St. Maarten: Police chief advocates the use of artificial intelligence in policing: At a symposium in Aruba, the police chief of St. Maarten spoke to attendees on ‘the importance of AI for policing and law enforcement’, including facial recognition software.
- US: Opposition gathers but biometric privacy laws grow: BiometricUpdate carries a brief overview of ‘forces for and against biometric information protection laws in the United States.’
- US: Using surveillance to punish and evict public housing tenants is not new: This piece by The Washington Post follows up on its 16 May 2023 report on the use of surveillance by public housing authorities, which revealed that they ‘often use surveillance technology, including facial recognition, to punish and evict public housing residents who incur minor lease violations.’ The new piece traces the history of similar policies and practices targeting public housing residents.
- International: Android biometric safeguards fail to withstand brute-force attack: Biometric Update reports on the findings of a research paper titled ‘BrutePrint: Expose Smartphone Fingerprint Authentication to Brute-force Attack’ by a team at Zhejiang University and Tencent Labs. The researchers showed that brute-force attacks could gain ‘easy access to biometric fingerprint data stored on the devices or acquired through online databases’ and could also circumvent the ‘feature designed to limit the number of unsuccessful fingerprint matches.’
- UK: Understanding live facial recognition statistics: Big Brother Watch writes on the accuracy of live facial recognition technology deployed in London, reporting that 85% of matches were false and only 15% were true matches. The piece further questions even that 15% success rate, given the verification methods used by police officials.
AI, algorithms, and autonomy in the wider world
- International: AI is steeped in Big Tech’s digital colonialism: In an interview with Wired, academic and AI expert Abeba Birhane discusses digital colonialism, noting that ‘As technology is exported to the global south, it carries embedded Western norms and philosophies along with it. It’s sold as a way of helping people in underdeveloped nations, but it’s often imposed on them without consultation, pushing them further into the margins.’
- US: Eating disorder helpline fires staff, transitions to chatbot after unionization: Four days after workers unionised, executives at the U.S. National Eating Disorders Association (NEDA) ‘decided to replace hotline workers with a chatbot named Tessa’. According to Abbie Harper, one of the helpline associates, ‘We asked for adequate staffing and ongoing training to keep up with our changing and growing Helpline, and opportunities for promotion to grow within NEDA. We didn’t even ask for more money. When NEDA refused [to recognize our union], we filed for an election with the National Labor Relations Board and won on March 17. Then, four days after our election results were certified, all four of us were told we were being let go and replaced by a chatbot.’
- UK: Automated UK welfare system needs more human contact, ministers warned: The Guardian reports that UK government ministers have been warned that ‘More human contact is needed in the UK’s automated welfare system’, as ‘350 low-paid workers every day are raising complaints about errors in welfare top-ups, causing financial hardship and emotional stress.’
- US: AI scanner used in hundreds of US schools misses knives: The BBC reports that ‘a security firm that sells AI weapons scanners to schools is facing fresh questions about its technology after a student was attacked with a knife that the $3.7m system failed to detect.’
- International: ‘There was all sorts of toxic behaviour’: Timnit Gebru on her sacking by Google, AI’s dangers and big tech’s biases: In an interview with The Guardian, Distributed AI Research Institute founder Timnit Gebru says: ‘There’s a lot of exploitation in the field of AI, and we want to make that visible so that people know what’s wrong. But also, AI is not magic. There are a lot of people involved – humans.’
- International: What you need to know about generative AI and human rights: AccessNow has released an explainer ‘to get to the truth about what generative AI can (and can’t) do, and why it matters for human rights worldwide.’
Research, reports, and other items of interest
- Life, love and lethality: history and delegating death on the battlefield: In the Lieber Institute’s Articles of War blog, Dr Helen Durham, a global expert in international humanitarian law, humanitarian action and diplomacy, and Dr Kobi Leins, an Honorary Senior Fellow of King’s College, London, write that ‘Automation of killing potentially creates greater power imbalances, destabilizes our global order, and may dehumanize us further. Past experiences and carefully listening to the warnings of experts, are the best data sets we have to navigate the future. We also need to ensure that we continue to be able to apply the salient equilibrium required by IHL with military necessity fairly balanced against the principle of humanity – a principle that hasn’t been stripped of the human(e).’