Our news briefings are emailed to our newsletter subscribers every week. Sign up via the contact form below if you’d like to receive them.
Autonomous weapons systems, and AI and autonomy in military/defence
- CCW GGE on LAWS: Proposals Related to Emerging Technologies in the Area of Lethal Autonomous Weapons Systems: A Resource Paper (updated): This publication by the United Nations Institute for Disarmament Research (UNIDIR) lists and summarises proposals made by states at meetings of the Convention on Certain Conventional Weapons Group of Governmental Experts on Lethal Autonomous Weapons Systems.
- US: US Military Now Has Voice-Controlled Bug Drones: DefenseOne reports that Teledyne FLIR, developer of the Black Hornet nano drone, has teamed up with AI startup Primordial Labs to add voice control to the system. According to Mick Adkins, who runs product and business development for Primordial Labs, the software could be used for ‘just about any’ kind of drone or system.
- Pakistan: Seminar on ‘Lethal Autonomous Weapon Systems’ held: Government representatives, members of civil society and academia, and youth leaders convened this week to discuss the working papers submitted by Pakistan and the State of Palestine to the CCW GGE on LAWS in March 2023. Both papers call for a legally binding instrument on autonomous weapons systems, including clear prohibitions and regulations.
- International: Peter Thiel’s Palantir is seeing ‘unprecedented’ demand for its military A.I. that its CEO calls ‘a weapon that will allow you to win’: Palantir Technologies is seeing ‘unprecedented’ demand for its new AI platform, which the developers have said ‘can be used by militaries to tap the kinds of AI models that power ChatGPT to aid in battlefield intelligence and decision-making.’ Vice recently reported on the platform, noting that ‘While there is a “human in the loop” in the AIP demo, they seem to do little more than ask the chatbot what to do and then approve its actions.’
- US: AI solutions dominate on SOF Week exhibit floor: At the SOF Week 2023 exhibition, defense companies showcased a range of products, including uncrewed aerial systems and counter-UAS products featuring machine learning and AI software. Some of the products are equipped with ‘advanced target detection and high-fidelity tracking solutions.’
Regulation and enforcement
- EU: Facial-recognition ban gets lawmakers’ backing in AI Act vote: This week, members of the European Parliament’s internal market and civil liberties committees passed their compromise text for the EU’s AI Act, agreeing to a blanket ban on remote biometric identification in publicly accessible spaces. As noted by EDRi, ‘This vote comes at a crucial time for the global regulation of AI systems’. A number of civil society organisations, such as the Border Violence Monitoring Network and Fair Trials, also highlighted the importance of the vote, calling it ‘the first of its kind in Europe.’ More on the background to this week’s vote here, in The Guardian.
- France: Clearview fined again in France for failing to comply with privacy orders: TechCrunch reports that Clearview AI has been fined once again ‘over non-cooperation with the data protection regulator.’ France’s CNIL (the independent administrative body whose mission is to ensure that data privacy law is applied) found that Clearview had ‘breached a number of requirements set out in law’.
Biometric identification and surveillance
- US: Neighborhood Watch Out: Cops Are Incorporating Private Cameras Into Their Real-Time Surveillance Networks: The Electronic Frontier Foundation reports that US law enforcement agencies are promoting the sale and use of a private company’s surveillance cameras, which give law enforcement agencies within the camera owner’s jurisdiction, and even beyond it, access to the camera. EFF has documented the use of these systems in over 150 jurisdictions. The piece raises a range of concerns about the expansion of police surveillance into the private realm.
- US: Bad Input: Three short films produced by Consumer Reports as part of its project titled ‘Bad Input’ map and highlight the effects of bias in medical devices, in the financial sector, and in facial recognition use.
- US: Lawsuit Alleges Coinbase Violated Illinois Biometric Data Law: CPO Magazine reports on a suit filed in Illinois against Coinbase, which requires users to authenticate with their face or fingerprint when logging in. The suit claims that the company does not comply with the data handling and retention schedule requirements of the Illinois Biometric Information Privacy Act (BIPA).
AI & automated systems in the wider world
- International: Who’s afraid of AI? This Slate podcast features an interview with Meredith Whittaker, president of the Signal Foundation and co-founder of the AI Now Institute at NYU, in which Whittaker discusses the need for AI regulation.
- International: AI machines aren’t ‘hallucinating’. But their makers are: Naomi Klein, Professor of Climate Justice and Co-director of the Centre for Climate Justice at the University of British Columbia, writes in The Guardian that ‘lofty claims’ about what AI can do for humanity disguise ‘mass theft as a gift – at the same time as they help rationalize AI’s undeniable perils.’
- International: ‘We shouldn’t regulate AI until we see meaningful harm’: Microsoft chief economist to WEF: Last week, while representing Microsoft at a World Economic Forum summit, Microsoft’s Corporate VP and Chief Economist Michael Schwarz argued that ‘We shouldn’t regulate AI until we see some meaningful harm that is actually happening — not imaginary scenarios’. Reactions to this statement include this thread from Toby Ord, Senior Research Fellow at Oxford University.
- US: A Radical Plan to Make AI Good, Not Evil: Wired reports on Anthropic’s chatbot ‘Claude’, which the company claims has a set of ethical principles built in that define what it should consider right and wrong.
- US: AI is moving fast. Washington is not: Vox reports on the need for comprehensive legislative action on artificial intelligence in the US and warns against reliance on self-regulation by the tech industry. While pointing to legal challenges to applications of AI such as automated driving and generative AI, the article also surveys legislative and policy measures on these developments around the world.
Research and reports
- Research shows mobile phone users do not understand what data they might be sharing: This study by the University of Bath’s School of Management finds that smartphone users hold many misconceptions about privacy and data protection, pointing to a lack of understanding of how applications track users, what data they collect, and what that data is used for.
Other items of interest
- Online event: How are new technologies impacting human rights?: 16th May 2023: This conversation, hosted by The Minderoo Centre for Technology and Democracy at the University of Cambridge, brings together AI and human rights experts, a forensic consultant, and an anthropologist of genocide and digital technologies to ask what rights-promoting technology looks like. Registration is free.
Sign up for our email alerts
"*" indicates required fields
We take data privacy seriously. Our Privacy Notice explains who we are, how we collect, share and use Personal Information, and how you can exercise your privacy rights. Don’t worry: we hate spam and will NEVER sell or share your details with anyone else.