A lengthy news briefing this week, with stories on international discussions of military AI, on AI regulation, and on the use of remote and biometric surveillance systems, including worrying uses of facial recognition technologies. A reminder too that we launched our ‘weapons systems of concern’ monitor at the beginning of last month. We’ll be continuing to update and add to the monitor.
Autonomy in weapons systems, and AI in security and defence
- International: UN disarmament body calls for global action on autonomous weapons: Computer Weekly covers the recent UN First Committee resolution on autonomous weapons systems, which received 164 votes in favour, and quotes the UK Campaign to Stop Killer Robots and Drone Wars, both noting the importance of this resolution for the negotiation of legally binding rules and for meaningful human control over the use of force.
- International: The US and 30 Other Nations Agree to Set Guardrails for Military AI: This piece notes that 30 states have joined the U.S. nonbinding political declaration on military AI, which focuses on having military use of AI be “responsible” and “ethical”.
- International: We need hard laws on military use of AI — and soon: Branka Marijan, senior researcher at Project Ploughshares, writes that ‘The political declaration and the first-ever U.N. resolution on autonomous weapons are important steps forward, as is the expected bilateral agreement between the U.S. and China. But more governance, including hard laws and complementary processes on military AI and autonomous weapons, is needed.’
- China & US: The US Wants China to Start Talking About AI Weapons: Wired writes on US-China discussion on risk and safety of AI in weapons and military use, likely with a focus on AI in nuclear weapons. More here from Breaking Defense.
- International: The global race and repercussions of autonomous warfare: Aditya Sinha, Officer on Special Duty, Research, Economic Advisory Council to the Prime Minister of India, writes that ‘There is a palpable need for a globally recognised treaty on the minimal and responsible usage of AI in military and warfare’, noting the numerous risks raised by the development and use of autonomous weapons systems.
- South Africa: South Africa Calls For Parliaments To Work With Governments To Ban Uncontrolled Autonomous Weapons Systems: At the 147th Assembly of the Inter-Parliamentary Union last month, held in Angola, South Africa ‘called for parliaments to work closely with governments to champion the establishment of a legally binding instrument that unequivocally bans the development, production and deployment of autonomous weapons systems not under human control or supervision’, noting that a ‘legally binding instrument must be firmly grounded in the principles of international humanitarian law, fundamental human rights and ethical considerations.’
- US: Tech Start-Ups Try to Sell a Cautious Pentagon on AI: This piece looks at the work of Shield AI, which makes auto-pilot software, describing it as ‘one of a handful of start-ups demonstrating the potential of cutting-edge technology to revolutionize war-fighting tools and help the United States keep its military advantage over China’, noting that ‘the company, and others like Anduril Industries, Autonodyne, EpiSci and Merlin Labs are developing new and more powerful ways for the Pentagon to gather and analyze information and act on it, including flying planes without pilots, creating swarms of autonomous surveillance and attack drones, and making targeting decisions faster than humans could.’
- US: Military leaders are already using AI tools to help make decisions: Defense One reports that ‘Artificial-intelligence technologies are already helping key military personnel streamline their operations, and officials are working to ensure that data quality and ethical uses guide the Pentagon’s implementation of them.’
Regulation and enforcement
- EU: EU’s AI Act negotiations hit the brakes over foundation models: Euractiv reports that the ‘whole legislation’ of the EU’s proposed AI Act is at risk, after large EU countries asked to retract the proposed approach for foundation models.
- International: AI: the world is finally starting to regulate artificial intelligence – what to expect from US, EU and China’s new laws: This piece provides an overview of recent regulatory efforts and developments on AI, in the EU, UK, China, and the US.
Facial recognition, biometric identification, surveillance
- International: UN rights chief: Remote biometric surveillance of protests brings ‘unacceptable risks’: The UN’s High Commissioner for Human Rights, Volker Türk, has recommended that the EU’s forthcoming AI Act ban systems that process biometric data to categorize people according to the color of their skin, gender, or other protected characteristics. In addition, a ban should be in place for AI systems that recognize emotions and predict crime, as well as untargeted scraping tools used for building facial recognition databases, reports Biometric Update.
- Palestine/Israel/China: How Chinese firm linked to repression of Uyghurs aids Israeli surveillance in West Bank: This report from The Guardian examines Israel’s use of Chinese firm Hikvision’s surveillance cameras (covered in previous news briefings, e.g. see here and here) in the Occupied Palestinian Territories, and quotes from Amnesty International’s Automated Apartheid report, which found dozens of Hikvision cameras around the old city in East Jerusalem. Further reporting on Israeli use of surveillance technologies, also noting Amnesty’s Automated Apartheid report, is available here.
- Aotearoa New Zealand: Facial recognition: Government rolls out new tech despite racial bias concerns: Government authorities will start to roll out ‘new facial recognition technology next week despite an “untested risk” around racial bias.’
- France: French police accused of using facial recognition software illegally: French national police have been illegally using facial recognition software since 2015, according to reports published earlier this week, even though ‘the use of facial recognition software by law enforcement authorities is prohibited in France.’
- US: Does A.I. Lead Police to Ignore Contradictory Evidence? In the New Yorker, Eyal Press writes on police use of facial recognition systems, and discusses cases of false matches and wrongful arrests due to police use of and reliance on facial recognition technologies, noting that ‘as with other forms of artificial intelligence, the use of facial-recognition software has outpaced a willingness to regulate it.’
- Ukraine: Ukraine’s ‘Secret Weapon’ Against Russia Is a Controversial U.S. Tech Company: This piece discusses Ukraine’s use of facial recognition technology from the U.S. company Clearview AI, despite the concerns raised by numerous Ukrainian civil society groups, particularly as ‘there are no signs that the Ukrainian government is eager to wind down its use of Clearview when the war is over.’
- US: AI ethics board in US DOJ would advise on facial recognition deployment: Biometric Update reports on the potential creation of an AI compliance board in the U.S. government.
- US: It’s Time to Oppose the New San Francisco Policing Ballot Measure: The Electronic Frontier Foundation writes on a new San Francisco ballot initiative which, if approved, would allow police to use any surveillance technology ‘for a full year without any oversight, accountability, transparency, or semblance of democratic control.’
AI, algorithms and autonomy in the wider world
- International: OECD updates definition of Artificial Intelligence ‘to inform EU’s AI Act’: Euractiv reports on a new definition of AI that is set to be incorporated in the EU’s new AI rulebook. The new definition, adopted last week by the Organisation for Economic Co-operation and Development’s Council, reads: ‘An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that [can] influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.’
- UK: Algorithms are deciding who gets organ transplants. Are their decisions fair?: Madhumita Murgia, the artificial intelligence editor of the Financial Times, highlights how algorithms are making life-or-death decisions. Murgia recounts the experience of patient Sarah Meredith, who spent years on a liver transplant waiting list before it was discovered that ‘an algorithm determines who goes to the top of the waiting list’.
- International: What Are Human Rights in an A.I. World?: Rebecca Stropoli, writing for the Chicago Booth Review, captures a range of views on the growing tensions over the use of A.I. surveillance in various global sectors. She notes that ‘experts warn that without the right implementation, A.I. could have profoundly negative effects, such as diminishing worker privacy and facilitating wage and employment discrimination.’
- International: The Bletchley Declaration and the future of AI: In ‘pursuit of harmonised AI governance’, the Bletchley Declaration (BD) was adopted by 28 countries at the AI Safety Summit held in the United Kingdom in early November 2023. The author notes the significance of the BD and offers some critiques aimed at improving it ‘that are grounded in philosophical, ethical, and governance perspectives’.
- International: Eastern nations more receptive to AI, hints UN tech advisor: In this piece, Cybernews interviews UN artificial intelligence advisor Neil Sahota, who acknowledges ‘the fears that have multiplied around AI and its potential misuse’ while also expressing optimism about the technology. Sahota also shares the view that Eastern countries are ‘actually wired to look for opportunities’ surrounding AI.