Autonomy in weapons systems, and AI in security and defence
- International: Russia, China ditch US proposal, swap notes on the military use of AI: Noting that China and Russia have opted not to join the United States’ Political Declaration on the Responsible Military Use of Artificial Intelligence and Autonomy, this piece reports that, according to SCMP, ‘Russia and China have now agreed to set up coordination under the Group of Government Experts (GGE) Convention on Conventional Weapons. They will hold bilateral and multilateral discussions on LAWS within the framework of the GGE.’ However, it also highlights that ‘the statement from the Chinese side on these discussions did not refer to the military use of AI. Instead, it included an expanded scope of discussions about “outer space, biosecurity, and AI”’, further noting that ‘This aligns with China’s long-held position of calling for a ban on autonomous weapon systems and the potential for their improvement with AI.’
- International: EDITORIAL: Global rules must restrict AI-driven weapons before they start to kill: ‘Experts say technologies needed to create AI-enabled autonomous weapons are already available, as demonstrated by the use of AI for military unmanned drones.’ This editorial raises a number of questions about the future of AI and ‘the danger of AI malfunctioning or going out of control’, and stresses the importance of achieving regulation. It notes that ‘Russia insists there is no need for new regulations, whereas China is open to a binding framework but demands a very strict definition of the weapons to be banned’.
- International: Eric Schmidt on Global Security in the Age of Artificial Intelligence: A few weeks ago it was reported that ex-Google CEO Eric Schmidt is working on a secret military drone project, though no details of the project have been publicly revealed. This week, NTI Co-Chair and CEO Ernest J. Moniz hosted Schmidt at the inaugural NTI Innovation Forum to share his thoughts. Schmidt says that, in addition to the need for human control, we need ‘strong guardrails and robust monitoring and regulatory frameworks to mitigate threats, including large-scale “recipe-based” attacks’. He concluded his remarks by stating that ‘It’s the beginning of a very long journey.’
- International: Australia, UK, US Demo AI in Autonomous Military Systems: A demonstration named ‘Trusted Operation of Robotic Vehicles (TORVICE)’ was carried out by the Australian military, in partnership with the UK and the US, to ‘showcase the operability of autonomous assets with artificial intelligence (AI).’ The unmanned robotic vehicles are said to be able to ‘complete missions through AI software while maintaining network connectivity in complex land-based scenarios.’ TORVICE ‘saw scientists employ electro-optical laser, electronic warfare, and navigation and timing challenges to validate the unmanned ground vehicles’ resilience.’
- International: AI chatbots tend to choose violence and nuclear strikes in wargames: New Scientist reports that in ‘wargame simulations, AI chatbots often choose violence’. Anka Reuel at Stanford University in California notes that ‘understanding the implications of such large language model applications becomes more important than ever.’ New Scientist notes that ‘these results come at a time when the US military has been testing such chatbots based on a type of AI called a large language model (LLM) to assist with military planning during simulated conflicts, enlisting the expertise of companies such as Palantir and Scale AI.’
Facial recognition, biometric identification, surveillance
- Canada: Ontario Privacy Commissioner Urges Transparency, Safeguards for Police FRT: This piece in Find Biometrics covers a report by the Ontario Privacy Commissioner in response to a ‘joint statement by Canadian privacy commissioners in May 2022, calling for a legal framework to govern the use of facial recognition by police due to concerns over privacy and fundamental rights.’ The document lists ‘ensuring lawful authority, adhering to guiding principles, conducting privacy impact assessments, and engaging the public transparently’ as crucial for police use of facial recognition technology.
- Palestine: West Bank Palestinians ‘exhausted’ by omnipresent Israeli surveillance: France24 reports on the use of automated facial recognition technologies for surveillance in Palestine. It elaborates on how the ‘blue wolf’ and ‘red wolf’ systems are used by the IDF at checkpoints, and raises concerns about the constant anxiety of being surveilled and the dehumanisation the process propagates.
- South Africa: Surveillance and the state: South Africa’s proposed new spying law is open for comment – an expert points out its flaws: Jane Duncan, Professor of Digital Society at the University of Glasgow, writes for The Conversation on South Africa’s proposed spying law. She argues that the proposed law opens avenues for bulk surveillance and does not take into account international frameworks for the protection of people’s privacy and rights.
- International: Face recognition technology follows a long analog history of surveillance and control based on identifying physical features: Sharrona Pearl, associate professor of medical ethics and history at Drexel University, writes on her book ‘Do I Know You? From Face Blindness to Super Recognition.’ She comments that despite the increase in accuracy of facial recognition technology through technological advancements, biases ‘remain deeply embedded into the systems and their purpose, explicitly or implicitly targeting already targeted communities.’
- International: Primer: Defending the rights of refugees and migrants in the digital age: Amnesty International has released this primer to highlight ‘pervasive and rapid deployment of digital technologies in asylum and migration management systems across the globe.’ The document places focus on technologies that can process ‘large quantities of data’ and the impacts this may have on human rights.
- Bahamas/Guyana: Bahamas, Guyana eye facial recognition surveillance projects to fight crime: Biometric Update reports on moves by the Bahamas and Guyana to deploy facial recognition surveillance in support of their law enforcement agencies.
AI, algorithms and autonomy in the wider world
- International: A culture of ethical AI research can counter dangerous algorithms designed to deceive: This piece in the Daily Maverick by Professor Tshilidzi Marwala, seventh Rector of the United Nations (UN) University and UN Under-Secretary-General, describes the ability of AI systems to ‘deliver false information to gain an advantage’ and engage in ‘strategic ambiguity.’ He covers the complex balancing of openness, explainability and accuracy needed to counter the possible ethical risks of these AI capabilities.