Our news briefings are emailed to our newsletter subscribers every week. Sign up via the contact form below if you’d like to receive them.
Research and reports
- Convergences in state positions on human control: This week, Automated Decision Research published our latest report, which examines convergences in state positions on human control in autonomous weapons systems. We also held a side event for diplomats at the CCW GGE on LAWS at the UN in Geneva; more information on our side event here, here and here.
Autonomous weapons systems, and AI and autonomy in military/defence
- US: Lawmakers seek special focus on autonomy within Pentagon’s AI office: Two members of Congress have introduced a bill which would ‘create an office inside the U.S. Department of Defense to align and accelerate the delivery of autonomous technologies for military use.’ The bill, titled the Autonomous Systems Adoption & Policy Act, ‘would nest a so-called Joint Autonomy Office within the Chief Digital and Artificial Intelligence Office.’
- UK: Artificial intelligence, drones, and technology focus of First Sea Lord’s keynote speech: A press release from the UK’s Royal Navy states that ‘In a wide-ranging speech at his annual Seapower Conference at Lancaster House in London, First Sea Lord Admiral Sir Ben Key announced more striking power for future warships and continuing investment in drones and autonomous systems.’
- Denmark: Denmark must take the lead in getting an international treaty against autonomous weapons: Several Danish AI researchers recently sent an open letter to the Danish Foreign Minister, ‘calling on the Danish government to take the lead in efforts to have an international treaty against autonomous weapons formulated and adopted.’ This article from one of the signatories argues that ‘Through international negotiations, states can decide which weapons to ban and which to regulate and how. The goal must be a legally binding text which, after signing, becomes international law.’ (in Danish, translated using Google Translate)
- US: USAF Sees ‘100 Roles’ for Its Robot Wingmen—and Firms Are Lining Up to Make Them: The US Air Force envisions a ‘hundred different roles for its new drones that will accompany fighter pilots into combat—with dozens of companies already lining up to build the wingmen drones.’
- US: Regulate AI to boost trustworthiness and avoid catastrophe, experts tell lawmakers: DefenseOne covers the U.S. Congressional hearings on AI, noting experts’ calls for regulation and that ‘Some key military officials share those concerns about the trustworthiness of AI tools even as the military seeks to use AI in a wide variety of areas.’
- UK/Netherlands/Germany/Hungary/Israel: Rheinmetall and Elbit Systems conduct live-fire demonstration of automated 155mm L52 wheeled self-propelled howitzer: In a press release, Rheinmetall announced that it successfully conducted a live-fire demonstration of an automated, self-propelled howitzer. The demonstration took place in Israel and was attended by ‘high-ranking officials of the armed forces of the United Kingdom, Germany, the Netherlands and Hungary.’ Rheinmetall and Elbit Systems cooperated on the system, and a ‘technically mature system is already available, enabling the integration of a Rheinmetall gun into the unmanned, fully robotic artillery turret of the Elbit system.’
- US: Defense Business brief: DefenseOne’s business brief reports on increasing ties between established defence firms and specialised tech startups – ‘most aimed at serving a Pentagon craving new ways to automate decades-old processes’ – and notes a number of examples, including American Rheinmetall, who are ‘installing Anduril’s software inside its Lynx armored vehicle’, and Northrop Grumman, who have teamed ‘with Shield AI to compete for the Army’s Future Tactical Unmanned Aircraft System, a new drone that will replace the RQ-7B Shadow.’
- Israel: IAI’s new BlueWhale XLUUV breaks cover: IAI (Israel Aerospace Industries) recently unveiled its BlueWhale System, ‘an autonomous submarine system designed to carry out Anti-Submarine Warfare (ASW), Mine CounterMeasure (MCM) and Intelligence, Surveillance and Reconnaissance (ISR) missions.’ The BlueWhale system can ‘detect and track targets, both above and below surface, and perform onboard sensor data processing. Actionable intelligence is relayed in real-time to the command and control over a dedicated broadband-secured satellite channel.’
- International: Are killer robots the future of war? Al Jazeera features a piece on autonomous weapons, noting that ‘a growing chorus of voices — especially from the Global South — is calling for their regulation, and experts believe a global taboo of the kind that is in place against the use of chemical weapons is possible. Major military powers may be intrigued by the potential battlefield advantages such systems could give them, but there seems to be little appetite for them outside governments and generals.’
- US: Border Industry Peddles Robot Dogs and AI Surveillance Amid End of Title 42: At a recent border security technology expo in Texas, ‘More than 1,700 DHS and industry officials pushed for the deployment of more militarized technology’ at the U.S. border, with talks focussing on ‘how best to procure new technologies and accelerate the bipartisan buildout of a digital “smart” wall powered by biometric data, artificial intelligence, facial recognition, aerial drones, infrared cameras, motion sensors, license plate-readers, radar, vehicle-mounted mobile surveillance’ and more.
- US: Sailor talks to Phalanx CIWS as it targets a 737 like a dog about to bite the mailman: The Drive covers an incident of an MK-15 Phalanx Close-In Weapon System ‘drawing a bead on a 737 passing over what appears to be a Harpers Ferry or Whidbey Island class amphibious dock landing ship. Sailors nearby laugh as they tell the sinister-looking Phalanx “no… No… NO!” as if it’s a dog about to do something it shouldn’t before it drops its barrel and forgets about the juicy target passing overhead.’
Regulation and enforcement
- EU: EU draft legislation will ban AI for mass biometric surveillance and predictive policing: The Verge carries a piece on the latest draft of the EU’s AI Act, which includes ‘prohibitions on mass facial recognition programs in public places and predictive policing algorithms that try to identify future offenders using personal data’. The piece also features reaction from numerous civil society organisations involved in campaigning on the Act, including AccessNow and European Digital Rights.
- France: French data protection authority lays out action plan on AI, ChatGPT: The French data protection authority has developed ‘an action plan for the deployment of AI systems that respect the privacy of individuals’.
- US: Sam Altman: CEO of OpenAI calls for US to regulate artificial intelligence: Testifying before a U.S. Senate Committee, OpenAI CEO Sam Altman called for government action on AI regulation. The BBC reports that ‘What was clear from the testimony is that there is bi-partisan support for a new body to regulate the industry.’
Facial recognition, biometric identification, surveillance
- UK: UK policing minister pushes for greater use of facial recognition: The Financial Times reports that the UK policing minister ‘has pushed for facial recognition to be rolled out across police forces nationally in a move that would ignore critics who claim the technology is inaccurate and some of its applications illegal’, with The FT noting that ‘The use of facial recognition has faced widespread criticism and scrutiny over its impact on privacy and human rights. The European Union is moving to ban the technology in public spaces through its upcoming Artificial Intelligence Act.’
- US: FTC warns about misuse of biometric information and harm to consumers: The US Federal Trade Commission has ‘issued a warning that the increasing use of consumers’ biometric information and related technologies, including those powered by machine learning, raises significant consumer privacy and data security concerns and the potential for bias and discrimination.’
- US: Eyes on the poor: surveillance cameras, facial recognition watch over public housing: The Washington Post reports that ‘In public housing facilities across America, local officials are installing a new generation of powerful and pervasive surveillance systems, imposing an outsize level of scrutiny on some of the nation’s poorest citizens. Housing agencies have been purchasing the tools — some equipped with facial recognition and other artificial intelligence capabilities — with no guidance or limits on their use, though the risks are poorly understood and little evidence exists that they make communities safer.’
- US: White House to Scrutinize Discrimination in AI Hiring Decisions and Problems of Automated Workplace Surveillance: Four federal agencies will scrutinise the use of AI in hiring decisions and automated workplace surveillance, pledging to ‘protect individuals’ rights regardless of whether legal violations occur through traditional means or advanced technologies’.
- France: More than 50 aerial surveillance operations authorised by prefects in one month: Le Monde reports that since the publication of a decree on 19 April, French prefects can authorise police and gendarmes to fly remotely piloted aircraft to monitor demonstrations. Several civil society organisations have protested against the decree. (in French)
- UK: Police to use facial recognition technology in Cardiff during Beyoncé concert: This week, police in the UK used facial recognition technology during a Beyoncé concert; ‘Daragh Murray, a senior lecturer of law at Queen Mary University in London, said the normalisation of invasive surveillance capability at events such as a concert was concerning, and was taking place without any real public debate.’
- US: Police Facial Recognition Technology Can’t Tell Black People Apart: In an opinion piece in Scientific American, Thaddeus L. Johnson, a former police officer and a senior fellow at the Council on Criminal Justice, and Natasha N. Johnson, a faculty member at Georgia State University and director of its M.I.S. program in Criminal Justice Administration, write that ‘Our research supports fears that facial recognition technology (FRT) can worsen racial inequities in policing. We found that law enforcement agencies that use automated facial recognition disproportionately arrest Black people. We believe this results from factors that include the lack of Black faces in the algorithms’ training data sets, a belief that these programs are infallible and a tendency of officers’ own biases to magnify these issues.’
AI, algorithms, and autonomy in the wider world
- International: How to deal with an AI near-miss: look to the skies: Writing in the Bulletin of the Atomic Scientists, Kris Shrishak, senior fellow at the Irish Council for Civil Liberties, writes that ‘There is a growing realization around the world that AI systems need to be regulated. In the case of AI systems, the risks go beyond serious injury or death. Mundane use of AI systems such as navigation apps can have safety implications, but the harms don’t stop there. AI systems can harm the fundamental rights of people. AI systems have contributed to wrongful arrests (Hill 2020), enabled housing discrimination (US Department of Justice 2022), and racial and gender discrimination (BBC 2020). These harms can ruin the lives of people. Such harms should be treated as serious incidents.’
- International: No need to wait for the future: the danger of AI is already here: In a short ‘expert comment’, Oxford Internet Institute’s Professor Sandra Wachter and Associate Professor Dr Brent Mittelstadt write that ‘AI poses real risks to society. Focusing on long-term imagined risks does a disservice to the people and the planet being impacted by this technology today.’ In a comment from her Meet the Press appearance shared on Twitter, Meredith Whittaker, president of Signal, similarly notes that AI is being ‘shaped to serve’ the economic interests of the ‘handful of companies’ that can develop it.
- International: WHO calls for safe and ethical AI for health: The World Health Organisation is calling for ‘caution to be exercised in using artificial intelligence (AI) generated large language model tools (LLMs) to protect and promote human well-being, human safety, and autonomy, and preserve public health.’
- EU: Who killed the EU’s translators? Politico reports that ‘High-tech machines that can run through Eurocratic jargon at record speed have replaced hundreds of translators working for the EU, downsizing one of the largest and oldest departments among the multilingual Brussels institutions.’
- International: Why shareholders don’t trust Big Tech — and how to fix that: AccessNow reports that companies such as Amazon, Alphabet, and Meta ‘have clearly failed to earn shareholder trust regarding how they conduct risk management and mitigation.’ The author identifies three core themes that these companies need to address.
Other items of interest
- The AutoNorms project is seeking a postdoctoral fellow to join their project on autonomous weapons systems and the use of force. Closing date 9th June.
- On 27th June, UNIDIR will hold its 2023 Innovations Dialogue on the impact of AI on future battlefields. The event will be held both in-person and online. Registration is free.
Sign up for our email alerts
We take data privacy seriously. Our Privacy Notice explains who we are, how we collect, share and use Personal Information, as well as how you can exercise your privacy rights. But don’t worry — we hate spam and will NEVER sell or share your details with anyone else.