Autonomy in weapons systems, and AI in security and defence
- International/ France: France Should Commit to an International Treaty on Autonomous Weapons: This English translation of an op-ed in Le Monde, published by Human Rights Watch and signed by Amnesty International France, Handicap International, Human Rights Watch and l’Observatoire des armements, calls on France to be a driving force in the negotiation of an international treaty on autonomous weapons.
- International: OpenAI quietly removes ban on military use of its AI tools: CNBC reports that OpenAI’s previous policy, which prohibited the use of its models for activities ‘that have high risk of physical harm, such as weapons development or military and warfare,’ has now been changed to permit ‘military use cases the company does agree with.’ Anna Makanju, OpenAI’s VP of global affairs, clarified that the company’s policy still doesn’t allow their tools to be used to ‘harm people, develop weapons, for communications surveillance, or to injure others or destroy property.’
- China: Tech firm Baidu denies report that its Ernie AI chatbot is linked to Chinese military: ABC News reports on a South China Morning Post article citing an academic paper from a university affiliated with the People’s Liberation Army’s cyberwarfare division. The paper claimed that researchers had tested their artificial intelligence system on technology company Baidu’s chatbot, Ernie Bot. Baidu issued a statement refuting the claim, saying that ‘they would have used the functions available to any user interacting with such AI tools.’
- EU: LEAK: European Defence Fund to fund next-gen helicopters, cargo planes: Euractiv reports that ‘the European Commission plans to invest €335 million in 16 research projects and €630 million in 17 development projects for the year 2024, for a total of €935 million, according to a 2024 draft list of projects to be financed under the European Defence Fund.’ This includes a ‘project that aims to launch a programme for collaborative helicopter development by 2030’ and a ‘large array of projects focusing on autonomy and unmanned systems including developing unmanned collaborative combat aircraft (U-CCA) systems (€15 million), multipurpose unmanned ground systems (€50 million), functional smart system-of-systems for future naval platforms (€45 million), and autonomous heavy mine sweeping systems (€30 million).’
Regulation and enforcement
- EU/ International: 2024 holds more talk, more hope, more finger-pointing on AI regulation: Biometric Update reports on the prospects and forums for AI regulation in 2024, with a special focus on the World Economic Forum. University of Toronto law professor Gillian Hadfield has said that governments need greater visibility into the whole of AI, which could be achieved through mechanisms such as a registry.
- International: The Urgent but Difficult Task of Regulating Artificial Intelligence: This piece by David Nolan, Hajira Maryam & Michael Kleinman discusses the challenges of regulating artificial intelligence. The authors emphasise that in the coming year, it is time for policymakers to ‘ensure that AI systems are rights-respecting by design, but also to guarantee that those who are impacted by these technologies are not only meaningfully involved in decision-making on how AI technology should be regulated, but also that their experiences are continually surfaced and are centred within these discussions.’
Facial recognition, biometric identification, surveillance
- Ireland: Civil rights groups warns of danger of facial recognition technology: In this podcast, Senior Policy Officer Olga Cronin of the Irish Council for Civil Liberties claims that ‘legislation proposing the introduction of facial recognition technology lacks sufficient safeguards to ensure it complies with humanitarian law.’ ‘The organisation is calling for a complete ban on the use of facial recognition technology by the Gardaí’ because it is said to be an ‘inherently flawed system’.
- EU: AI Act threatens to make facial surveillance commonplace in Europe: EUreporter captures the views of digital freedom fighter and Member of the European Parliament Patrick Breyer (Pirate Party), who warns that ‘the law paves the way for the introduction of biometric mass surveillance in Europe where EU governments decide to steer this course.’ The author concludes that ‘this law legitimises and normalises a culture of mistrust. It leads Europe into a dystopian future of a mistrustful high-tech surveillance state.’
- International: Google lawsuit for collecting biometrics without consent revived in Canada: Yeremia Situmorang in British Columbia filed a civil claim alleging that ‘pictures of his children taken on his Android phone were automatically uploaded to Google Photos without his consent.’ Situmorang argues that ‘facial biometric data is intrinsically sensitive personal information, akin to an individual’s DNA or fingerprints,’ and that collecting said data without consent constitutes a violation of privacy. A statement from Google says ‘it is not employing facial recognition, but merely using an algorithm to identify similar faces and create templates via its “face grouping” function, which can be opted out of at any time.’
AI, algorithms and autonomy in the wider world
- Australia: Action to help ensure AI is safe and responsible: ‘Australia has strong foundations to be a leader in responsible AI.’ The government recently took a step ‘to help ensure that AI is safe and responsible’ by releasing an interim response to ‘Safe and Responsible AI in Australia.’ The response is available here and covers perspectives from stakeholders as well as ‘how the government will ensure AI is designed, developed and deployed safely and responsibly.’
- International: Which Company Will Ensure AI Safety? OpenAI Or Anthropic: Tima Bansal, writing for Forbes, says ‘AI presents a real and present danger to society.’ Bansal is concerned about ‘recent changes in OpenAI’s board’ and what that could possibly mean for the ‘company’s commitment to AI safety’. The author notes that Anthropic (an AI safety and research company) is taking AI safety seriously by incorporating as a Public-Benefit Corporation and establishing a Long-Term Benefit Trust.
- UK: ICO launches consultation on data protection law and artificial intelligence: The Information Commissioner’s Office (ICO) ‘has launched a consultation series on generative Artificial Intelligence (AI), examining how aspects of data protection law should apply to the development and use of the technology.’ The ICO has invited ‘a range of stakeholders, including developers and users of generative AI, legal advisors and consultants working in this area, civil society groups and other public bodies with an interest in generative AI’ to share their views by 1 March 2024.
- International: If 2023 was the year of AI hype, will 2024 be the year of AI governance and responsibility? Reflecting on the extensive publicity of AI during 2023, Theodora Lau, the founder of Unconventional Ventures, asks a number of important questions related to AI hallucinations and deepfakes: ‘how then should we benefit from generative AI technologies? Where do we draw the lines, and when things go wrong, whose responsibility is it?’ She also asks whether these questions will be addressed this year.
- International: Big tech firms recklessly pursuing profits from AI, says UN head: The World Economic Forum (WEF) Annual Meeting 2024 is currently being hosted in Davos, Switzerland, where UN secretary general António Guterres addressed the forum, highlighted the risks AI poses and stressed that ‘the international community has no strategy to deal’ with the matter. He also warned that ‘big technology companies are recklessly pursuing profits from AI and urgent action is needed to mitigate the risks from the rapidly growing sector.’
- China: China issues draft guidelines for standardising AI industry: Reuters reports on a statement released this week by the Chinese government on proposed AI guidelines that will ‘form more than 50 national and industry-wide standards for AI by 2026’. The statement also mentioned that ‘China aimed to participate in forming more than 20 international standards for AI by that time’.