Autonomy in weapons systems, and AI in security and defence
- International: On the warpath – AI’s role in the defence industry: BBC reports on the ‘dizzying array of AI-assisted tools under development or already in use in the defence sector,’ with quotes from Alexander Kmentt, disarmament director of the Austrian Foreign Ministry, and Catherine Connolly, Automated Decision Research manager with Stop Killer Robots. Kmentt notes that ‘technology is developing much faster than the regulations governing it’, and that the goal of regulation should be to ensure that ‘there is an element of meaningful human control on the decision over life and death.’
- International: Killing By Algorithm: Matilda Byrne, National Co-ordinator of the Australian Stop Killer Robots campaign, writes in Declassified Australia on the widespread research, development and deployment of AI-enabled autonomous capabilities by the Australian Army. She raises concerns about systems such as the Cerberus GLH, the Warfighter Unmanned Ground Vehicle, and targeting systems by Athena AI that track, identify and select targets. Byrne argues that Australian policy ‘is lacking on limits to how autonomy is used in weapons and the amount of human control required, in particular over the ‘critical functions’ of selecting targets and deciding to attack.’
- International: Fill the blanks – putting gender into military AI: This brief by Shimona Mohan at Observer Research Foundation points out that conflict situations exacerbate setbacks to gender equality and that ‘women’s underrepresented and disadvantageous position vis-à-vis emerging technologies further narrows their prospects to progress towards equality.’ The brief suggests consistent and thorough efforts to debias data throughout its lifecycle, as well as introducing a gendered lens into weapon reviews.
- International: Convergence – Artificial intelligence and the new and old weapons of mass destruction: This piece in the Bulletin of the Atomic Scientists is by Emilia Javorsky, Director of the Futures Program, and Hamza Chaudhry, US policy specialist, at the Future of Life Institute. The piece elaborates on concerns expressed by Secretary-General António Guterres on the ‘interaction between AI and nuclear weapons, biotechnology, neurotechnology, and robotics.’ The authors call for further research on these convergences and their likely impact on the global security environment, alongside active policy formulation to address the legal and ethical questions they raise.
- International: Future cockpits – flying with the blink of an eye: Paul E Eden writes for the Royal Aeronautical Society on the exploration of a ‘gamut of future fast jet cockpit technologies’ to ‘balance combat efficiency with pilot health and living space.’ The Future Combat Air System (FCAS) by BAE Systems aims to present pilots with unprecedented volumes of data, which could also increase the risk of ‘helmet fires’, in which a pilot is overwhelmed by the amount and interface of the data being presented. To combat this, Jacob Greene, a human factors engineer at BAE Systems, suggests autonomous capabilities that are ‘capable of determining which information is valuable and how to present it in a manageable form while considering the operational context.’
- Russia/Ukraine: Russian Lancet drone destroys US-supplied Ukrainian AN/TPQ-36 counter-artillery radar: Army Recognition reports that ‘a Russian ZALA Lancet loitering munition targeted and destroyed a Ukrainian AN/TPQ-36 counter-artillery radar.’ The radar is ‘specifically engineered for rapid detection’ of the sources of artillery fire and ‘formulates potential counter-fire strategies.’
- US: Don’t ditch soldiers for machines, combine them, Rainey says: TechNet Augusta 2023 took place last week in Georgia, where participants shared their thoughts and explored the intricacies of the cyber domain. Given that the US ‘Defense Department is heavily investing in AI, machine learning and autonomy’, Gen. Rainey, head of US Army Futures Command, presented his views, suggesting that what needs to be done ‘is take the right combination of human beings and machines and build formations that optimize both.’ He also noted that ‘we’ve got to make sure that we’re aiding commanders’ decision-making, not trying to make decisions for the commander because that will be disastrous.’
- Australia: Uncrewed autonomous vehicle in weapons fire test: AuManufacturing reports that the Australian Army’s Robotic and Autonomous Systems Implementation & Coordination Office (RICO) recently conducted a demonstration and ‘test fired a weapon system from an autonomous uncrewed armoured vehicle.’ RICO is reported to have also ‘tested a range of emerging technology projects including drones, robots and optionally crewed combat vehicles fitted with remote weapon systems.’
Facial recognition, biometric identification, surveillance
- International: Proposed UN Cybercrime Treaty Threatens to be an Expansive Global Surveillance Pact: Delegates from UN Member States have convened at UN Headquarters for talks on the proposed UN Cybercrime Convention. The meeting commenced on 21 August and will continue until 1 September in New York. The Electronic Frontier Foundation (EFF), an organization defending civil liberties in the digital world, will be participating and engaging throughout the talks. At this stage, the EFF has expressed that the current text may be ‘too broad, ambiguous’ and coupled with ‘nonspecific international cooperation measures with few conditions and safeguards.’ This could ‘potentially put basic privacy and free expression rights at risk.’
- UK: Scotland’s watchdog sends fresh warning over storing biometric data on Microsoft cloud: Brian Plastow, Scotland’s Biometrics Commissioner, was recently interviewed by the Sunday Post, where he questioned the adequacy of the Digital Evidence Sharing Capability (DESC) project. He warns that ‘storing sensitive data of UK citizens on a cloud server run by a United States company opens questions about U.S. government access to Scottish data.’
- Iran: Tech-enabled ‘Hijab and Chastity’ law will further punish women: ‘The Islamic Republic has a known history of using surveillance technology to aid repression against its population,’ notes this article. At a time when it is still unclear what type of technology is being used to identify and police women’s hijab and chastity, Article 19 has raised alarming concerns because Iran’s parliament is currently considering a draft of the ‘Chastity and Hijab’ bill. The bill sets out a range of concerning provisions that will have ‘huge implications’ for women in Iran. The commander-in-chief of the police force, Ahmadreza Radan, has confirmed that, if the bill is approved and ratified, ‘individuals who breach the hijab requirement will be swiftly identified through the system.’
- International: Gov’ts debate AI rules but the results don’t command: Biometric Update takes note of a guide published by Reuters this week that provides a snapshot of regulations and plans that are underway from countries such as Australia, Ireland, Israel, the United States and the Group of Seven (G7). These countries are ‘seeking input on AI regulations.’ The guide also states that ‘China has implemented temporary regulations and Italy, Japan, Spain and France are investigating possible data and privacy breaches involving AI.’
AI algorithms and autonomy in wider world
- International: Bletchley Park to host AI safety talks in November: Tom Gerken, Technology reporter for BBC News, highlights that the UK government announced this week that the dates have been confirmed for an important global event that aims to consider the risks of AI and map out how these can be mitigated on a global scale through coordinated action. The event will be hosted on 1-2 November this year at Bletchley Park in Buckinghamshire, a significant location in the history of computer science. The CEO of the Bletchley Park Trust stated that ‘we are incredibly excited to be providing the stage for discussions on global safety standards, which will help everyone manage and monitor the risks of artificial intelligence.’