Autonomy in weapons systems, and AI in security and defence
- International: 164 states vote against the machine at the UN General Assembly: On 1 November 2023, the First Committee of the UN General Assembly adopted the first-ever resolution on autonomous weapons, stressing the “urgent need for the international community to address the challenges and concerns raised by autonomous weapons systems”. Building upon the joint statement by 70 states delivered at last year’s First Committee, the resolution demonstrates that states see a need for action. The text was developed by a cross-regional group of states through consultations in Geneva and New York. It has provided a much-needed infusion of energy and purpose into ongoing international discussions on autonomous weapons, and an avenue to bring this important issue to the attention of all UN members.
- International: ADR participates in two UNGA side events: Automated Decision Research (ADR) was at the UNGA this month as part of the call to #VoteAgainstTheMachine, supporting a wider effort to garner support from states for the first-ever resolution on autonomous weapons. The ADR team participated in two side events, hosted by Austria and Belgium, where the panels ‘discussed the risks and challenges associated with autonomous weapons systems, and the importance of meaningful human control over the use of force.’ The ADR team also presented the state positions monitor on a legally binding instrument on autonomous weapons, which now indicates that over 100 states support the negotiation of such an instrument. The weapons systems monitor was also launched.
- International: A humanitarian perspective on military AI: Last month the 10th Beijing Xiangshan Forum convened. The agenda of this forum aimed to ‘keep abreast of the current international security reality, which not only focus on the key and hot security issues of common concern but also highlighted the prioritised cooperation of the Global Security Initiative.’ Neil Davison, Senior Scientific and Policy Adviser of the International Committee of the Red Cross (ICRC), shared perspectives from the ICRC ‘on contemporary challenges in armed conflict, including in relation to emerging technologies.’
- US: Shield AI unveils V-Bat Teams drone swarm tech, with eye to Replicator: Shield AI has recently launched a new drone-swarming capability called V-Bat Teams. V-Bats are vertical take-off-and-landing unmanned aircraft systems. Defense News reports that ‘V-Bat Teams will operate with minimal instruction from human operators, beyond the point where the humans tell them what target or mission to pursue.’
- International: Column: We don’t know how Israel’s military is using AI in Gaza, but we should: This year in particular has seen a drastic increase in discussion of the role of AI in general, as well as its role in military technologies. The author asks a crucial question: ‘to what extent is Israel relying on artificial intelligence and automated weapons systems to select and strike targets?…given the myriad practical and ethical questions that continue to surround the technology, Israel should be pressed on how it’s deploying AI.’ Paul Scharre, autonomous weapons expert and author, has also said that ‘militaries should be more transparent in how they’re assessing or approaching AI’.
Facial recognition, biometric identification, surveillance
- International: In this episode of the Good Robot Podcast, the hosts interview Dr. Matt Mahmoudi on ‘how AI is being used to survey Palestinians in Hebron and East Jerusalem.’ The podcast transcript is available here.
- US: ‘Wholly ineffective and pretty obviously racist’: Inside New Orleans’ struggle with facial-recognition policing: Last year in New Orleans, the city council voted in favour of the use of facial-recognition software by police. POLITICO has analysed records that indicate that ‘computer facial recognition in New Orleans has low effectiveness, is rarely associated with arrests and is disproportionately used on Black people.’
- International: Why Big Tech, Cops, and Spies Were Made for One Another: In this article, the author argues that ‘America has a privacy law deficit’ and that commercial surveillance has increased in recent years. He asserts that surveillance capitalism is not a hypothetical but an existing reality, and one that needs to be addressed.
- US: Adtech Surveillance and Government Surveillance are Often the Same Surveillance: The Electronic Frontier Foundation sheds light on a recent investigation conducted by the Wall Street Journal and on the urgent need for comprehensive consumer data privacy legislation: ‘If companies harvest less of our data, then there will be less data for the government to buy from those companies.’
- UK: Government creating ‘worrying vacuum’ around surveillance camera safeguards: The Biometrics and Surveillance Camera Commissioner, an independent monitoring body of the Home Office, could be abolished. A study conducted by the Centre for Research into Information Surveillance and Privacy (CRISP) will be published soon and ‘warns that the plan to abolish biometrics and surveillance safeguards will leave the UK without oversight.’
- International/ US: Police love Google’s surveillance data. Here’s how to protect yourself: This piece by the Washington Post warns against the growing legal precedent allowing law enforcement to make non-specific data requests to Google, including for location history, search histories and more. In this context, the piece calls for accountability over what information Google collects and stores, and suggests practices and policies for the retention of users’ personal data.
- Australia: Australia’s biometric, ID verification systems have been operating illegally for 4 years: Biometric Update reports that Australia’s identity verification services have functioned for the past four years in a legislative and policy vacuum, effectively operating without any legal basis. The Australian Government is presently pushing for legislation on this issue, while concerns have been raised that it might undermine simultaneous attempts to protect the privacy and human rights of Australian citizens.
Regulation and enforcement
- US: FACT SHEET: President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence: This executive order on Safe, Secure, and Trustworthy Artificial Intelligence creates new guidelines directed toward AI safety and security, privacy protections, equity and civil rights, consumers’ and workers’ rights, and innovation and competition. Provisions of the order also leverage the power of the Defense Production Act to require certain companies developing AI products that could impact national security, economic security or public health and safety, to regularly report to the government about training their models and security measures, as well as to share the results of all red-team safety tests.
- US: Guarding the AI frontier: A proposal for federal regulation: This opinion piece in C4ISR by Col. (ret.) Joe Buccino calls for the ‘development of a licensing structure for General-Purpose AI’ which establishes ‘regulatory thresholds for computational prowess, developmental cost, and benchmark performance.’ Acknowledging that the Blumenthal-Hawley legislation does not offer any significant details, he suggests that a ‘concept must develop around regulation of the three key resources in AI development: computing hardware, talent, and data.’
- International: We need to focus on the AI harms that already exist: This excerpt from Unmasking AI: My Mission to Protect What Is Human in a World of Machines by Joy Buolamwini was posted by MIT Technology Review. Buolamwini mentions that the focus on ‘x-risk’ or ‘hypothetical existential risk posed by AI’ instead of the active structural violence of being ‘excoded’, in reality ‘shifts the flow of valuable resources and legislative attention.’ While supporting the prevention of creating ‘fatal AI systems’ or autonomous weapons, she suggests that states adopt the protections ‘long championed’ by Stop Killer Robots, noting that the campaign addresses military use of AI ‘without making the hyperbolic jump that we are on a path to creating sentient systems that will destroy all humankind.’
- UK: How the UK’s emphasis on apocalyptic AI risk helps business: The Guardian reports that the AI Safety Summit, with its focus on frontier AI (‘the term for the most advanced AI models’) and its existential threats, pulls attention away from meaningful action and regulation to ‘mitigate the existing ills that AI tools can exacerbate, including the surveillance of marginalised groups, inequity in hiring and housing and the proliferation of misinformation.’ Damini Satija, Head of the Algorithmic Accountability Lab at Amnesty International, has said that the summit did not address the ‘discriminatory nature and frequent misuse’ of systems that are presently being used and promoted. Warning against technical fixes to societal problems, Satija describes these as, in reality, ‘cost-cutting measures which exacerbate punitive policies against marginalised people.’
AI, algorithms and autonomy in the wider world
- Africa: ‘A goldmine at our fingertips’: the promise and perils of AI in Africa: This piece by The Guardian highlights the various opportunities and risks presented by the use and popularisation of AI in Africa. While the use of AI for disease surveillance, agriculture and disaster management has shown promise, the use of these very technologies for surveillance and policing presents risks to human rights that are unprecedented in scale.
- International: Transcript: The Futurist Summit: The Battlefields of AI with Scale AI CEO Alexandr Wang: This transcript of a conversation between Gerrit De Vynck and Alexandr Wang on artificial intelligence was published by the Washington Post. Wang says that artificial intelligence ‘was one of the very few technologies that had the ability to impact the balance of power globally.’ While he agrees that there are dangers and possible misuses of these technologies, he notes that militaries across the globe are embracing AI to gain a geopolitical advantage.
- International: CDT, Civil Society Reps to UK AI Safety Summit Urge Focus on AI Risks to People’s Rights: In light of the UK AI Safety Summit 2023, the Center for Democracy & Technology (CDT) has called for ‘regulatory action to ensure the current and future trajectory of AI serves the needs of the public.’ They further ‘call for governments to prioritise regulation to address the full range of risks that AI systems can raise, including current risks already impacting the public.’