Autonomy in weapons systems, and AI in security and defence
- US: US Air Force uses Mideast ops to test tech for potential China fight: ‘The U.S. Air Force is using the Middle East as a testbed to explore new technologies that the service might need in a future fight against China’, reports DefenseOne. Lt. Gen. Alexus Grynkewich, who leads Air Forces Central Command (AFCENT), said the command is ‘using industry solutions to improve how it identifies targets it wants to strike, a process that’s historically been “cumbersome” and “slow.”’ He also said that AFCENT is using ‘artificial intelligence to help us enable target discovery, to look at a variety of classified and open source available feeds of information, and cue us to where particular targets might be’, and that with human oversight, AI could rapidly give kinetic and non-kinetic targeting options to senior leaders and automate the process.
- Ukraine: Ukraine Declares Hikvision and Dahua “Sponsors of War”: IPVM reports that ‘the Ukrainian Government’s National Agency for the Prevention of Corruption (NACP) announced adding both Hikvision and Dahua to its list of International Sponsors of War.’ The designation follows reports that both companies have supplied ‘drones, thermal imagers, and anti-drone guns’ to Russia for use against Ukraine. The classification is reputational only and carries no legal consequences. Hikvision has been covered in previous briefings, notably for advertising ‘ethnicity recognition features’ in its security cameras.
- Israel: From rockets to recruitment, Israel’s military refocuses on AI: This Reuters report on the use of AI in the Israeli military quotes Colonel Eli Birenbaum, chief of the Israeli military’s operational data and applications unit, and notes that ‘around half of Israel’s military technologists will be focused on AI by 2028’.
- US/China: U.S. seeks talks with China on military AI amid tensions: Nikkei Asia reports that ‘The Biden administration plans to encourage China to work with the U.S. on international norms for artificial intelligence in weapons systems’ during a visit to China by U.S. Secretary of State Antony Blinken this week.
Regulation and enforcement
- EU: EU Parliament calls for ban of public facial recognition, but leaves human rights gaps in final position on AI Act: This week, the European Parliament ‘voted in favour of strong fundamental rights protections in their official position on the Artificial Intelligence Act’, with the vote also upholding ‘red lines against unacceptably harmful uses of AI, including decisively protecting people against live facial recognition and other biometric surveillance in public spaces, emotion recognition in key sectors, biometric categorisation, predictive policing and social scoring.’ However, the list of adopted prohibited practices ‘does not include the use of AI to facilitate illegal pushbacks, or to profile people on the move in a discriminatory manner.’ More on this from Amnesty Tech. As regards security and defence, the ADR has previously written on gaps in proposed European AI regulation.
- EU: With new legislation, Europe is leading the world in the push to regulate AI: The Los Angeles Times reports on the passing of the AI Act by the European Parliament, and quotes Kris Shrishak, a technologist and senior fellow at the Irish Council for Civil Liberties, who noted that ‘The fact this is regulation that can be enforced and companies will be held liable is significant’, because other places such as the US, Singapore and Britain have merely offered ‘guidance and recommendations’.
Facial recognition, biometric identification, surveillance
- US: When your body becomes the border: This piece sheds further light on the plight of people seeking refuge in the US, and the ‘digital wall’ that is denying them access. Pointing to the CBP One mobile application and the e-monitoring used in the Alternatives to Detention program, the article suggests that surveillance of refugees and asylum seekers has moved beyond borders into an ever-present panopticon.
- US: This Surveillance System Tracks Inmates Down to Their Heart Rate: Wired reports on new tracking software being deployed in jails in Atlanta, Georgia. The technology includes sensors installed throughout the jail, wearables that inmates must keep on their person at all times, and software that helps correctional officers track and monitor their movements.
- EU: Facial recognition use in public spaces under the microscope in EAB event: Biometric Update reports on a workshop organised by the European Association for Biometrics which focused on the DATAFACE project that ‘explores the impact of facial recognition deployed for surveillance purposes on rights to privacy and data protection, as enshrined in EU law.’ One of the speakers, Catherine Jasserand, said during the workshop that ‘impact assessments for the AI Act have not demonstrated that public deployments of facial recognition meet the criteria for necessity.’ She added that facial recognition technologies ‘have not demonstrated a lack of viable alternatives, or the effectiveness of the technology for realising its stated purpose.’
- France: French Senate votes in favour of testing public facial recognition technologies: This piece reports on legislation passed by the French Senate which ‘aims to create a legal framework to test, for three years, the use of biometric recognition by judiciary investigators and intelligence authorities.’ The report also notes that these provisions could be used by intelligence agencies against terror suspects, and by the judiciary in ‘extremely grave criminal cases.’
- UK: Campaigners urge London food banks to end use of face scans: Privacy advocates, including the campaign group Big Brother Watch, are calling on London food banks to stop using facial recognition, arguing that such use violates users’ ‘privacy, dignity, and security’.
AI, algorithms and autonomy in the wider world
- International: UN chief backs idea of global AI watchdog like nuclear agency: Reuters reports that UN Secretary General Antonio Guterres ‘backed a proposal by some artificial intelligence executives for the creation of an international AI watchdog body like the International Atomic Energy Agency.’
- EU/US: The harm from AI is already here. What can the US do to protect us?: This piece by The Guardian offers a comparative analysis of AI regulation in the EU and the US. It also highlights the automated harms that have already surfaced as automated decision-making has permeated sectors such as social media, banking, finance and social security.
- EU/International: Why making AI safe isn’t as easy as you might think: BBC reports on the challenges of creating safe AI in light of the upcoming vote on the EU’s AI Act. The piece identifies characterising AI, reaching global agreement on rules, building public trust, ensuring accountability and keeping pace with the technology as the major challenges.
- International: AI must not become a driver of human rights abuses: In light of recent calls for a moratorium on AI development, particularly generative AI, Al Jazeera reports on the human rights concerns raised by AI and calls for a three-pronged approach: ‘implementing a rigorous human rights due diligence framework’; proactively engaging ‘with academics, civil society actors, and community organisations, especially those representing traditionally marginalised communities’; and, lastly, the recommendation that ‘human rights organisations should take the lead in identifying actual and potential harm.’
Research and reports
- Jordan: Automated Neglect – How The World Bank’s Push to Allocate Cash Assistance Using Algorithms Threatens Rights: This report by Human Rights Watch highlights automated harms caused by the Unified Cash Transfer Scheme in Jordan, supported by the World Bank. The report finds that ‘this algorithm is leading to cash transfer decisions that deprive people of their rights to social security.’ It further elaborates that ‘the problem is not merely that the algorithm relies on inaccurate and unreliable data about people’s finances’ but also that ‘its formula also flattens the economic complexity of people’s lives into a crude ranking that pits one household against another, fueling social tension and perceptions of unfairness.’ More on the report here.
Other items of interest
- UNIDIR 2023 Innovations Dialogue: The Impact of Artificial Intelligence on Future Battlefields: 27 June, 09:00-17:45 (CEST). This year’s UNIDIR Innovations Dialogue ‘will provide a platform for military, technical, legal, and ethical experts to explore the impact of AI, including but not limited to autonomous weapons, across traditional domains of warfare, namely land, naval, and air warfare as well as on new domains such as cyber, space, and cognitive.’ Attendance is both in-person and online, and registration, which closes on 25 June, is free.