Autonomy in weapons systems, and AI in security and defence
- US: Kendall: Air Force studying ‘military applications’ for ChatGPT-like artificial intelligence: Breaking Defense reports on recent comments by Air Force Secretary Frank Kendall regarding generative AI and its military applications. He reportedly asked the Department of the Air Force Scientific Advisory Board to look into possible applications, while also saying that he sees ‘limited utility’ for this kind of AI in the military. He also said that the Defense Department’s interest in AI was focused on ‘pattern recognition, targeting, and sorting through a lot of intel functions.’
- US: The Army’s next armored troop transport will have AI target recognition: Kelsey D. Atherton reports on the U.S. Army’s new XM30 Mechanized Infantry Combat Vehicle, which will ‘include waypoint navigation, Artificial Intelligent Target Recognition (AiTR), and Advanced fire control systems’. The Army is also ‘working on ways to develop software that is independent of hardware, enabling each side of the equation to be upgraded independently. If better targeting comes from better software on the same hardware, the XM30 should be able to incorporate that’.
- UK: Autonomous weapons can transform the nature of warfare but without regulation the risks are huge: This piece is by Lord Lisvane, chair of the House of Lords Select Committee on AI in Weapon Systems. It explores pertinent questions on the characterisation and possible regulation of autonomous weapon systems, as well as the risks associated with a lack of such regulation. He also gives an update on the committee’s inquiry: apart from the final evidence sessions, the committee ‘will be visiting research establishments in Cambridge, Glasgow and Edinburgh.’
- UK: UK has no intention of developing fully autonomous weapon systems, defence minister says: In response to a question in the House of Commons, the Minister of State for Defence Procurement, James Cartlidge, said that while there are plans to incorporate AI into weapon systems, the UK ‘does not possess fully autonomous weapon systems and has no intention of developing them.’ He also highlighted the importance of context-appropriate human involvement ‘in the identification, selection and targeting of potential threats.’ At present, the UK does not support the negotiation of a legally binding instrument on autonomy in weapons systems.
- International: Organization of American States recognises the challenges and risks of autonomous weapons: Following recent presentations to the Organization of American States (OAS) on the issue of autonomous weapons, the ICRC notes that, thanks to a resolution led by Mexico and Costa Rica, OAS member states have recognised the challenges and risks posed by these weapons.
- US: How 5G Is Enabling Autonomous Military Inspector Drones in the Pacific: This piece by Defense One reports on the use of 5G bandwidth to support autonomous military inspector drones. The drones could ‘cut down on the time and complexity of inspections’ to as little as 30 minutes. They use image recognition and artificial intelligence to analyse aircraft and refer identified problem spots to a human maintainer; the system is accurate around 70% of the time, compared with an accuracy rate of around 50% for human inspectors. An experimental effort dubbed ATOM, which pairs the 5G-enabled system with a virtual reality headset, is underway.
Facial recognition, biometric identification, surveillance
- UK: The tech flaw that lets hackers control surveillance cameras: This BBC report explores the susceptibility of cameras produced by Hikvision and Dahua to hacking, and its implications. BBC’s Panorama worked with US-based IPVM to resurface a vulnerability in the system discovered in 2017. While the BBC and IPVM hint at the possibility of this being deliberate, Hikvision denies it and says ‘it released a firmware update to address it almost immediately after it was made aware of the issue.’
- Serbia: Social Controls: China-Style Surveillance is Coming to Serbia: This opinion piece in Balkan Insight draws attention to the fact that ‘Serbian security institutions have procured many digital surveillance tools, including the most intrusive equipment, capable of secretly penetrating and controlling users’ devices and analysing huge amounts of data in detail.’ The author notes that there are several reasons for this, but ‘there is no legal ground for their introduction and use.’
- Cameroon: Cameroon launches video surveillance with live facial recognition in largest city: Last week, the government of Cameroon launched a project called Cameroon Intelligence Cities ‘which aims to deploy artificial intelligence-based technologies.’ Given that Cameroon has no specific legislation on personal data protection, ‘digital rights activists have raised concerns about data privacy and security.’ Activists have noted that ‘with the analytical processing of data by artificial intelligence, crowds will no longer be anonymous and could infringe on fundamental human rights.’
- Malta: Discredited biometric surveillance project in Malta closed, but is it?: Earlier this year, the Safe City Malta project was discontinued after the contract with Huawei expired. Concerns have been raised that there is a contradiction here, because according to a publication from The Shift, ‘Malta Strategic’s directors and a company secretary have subsequently been appointed for another year until June 2024.’
AI, algorithms and autonomy in the wider world
- International: Stop talking about tomorrow’s AI doomsday when AI poses risks today: This editorial in Nature says that if tech companies ‘are serious about avoiding or reducing AI risks, they must put ethics, safety and accountability at the heart of their work’, and argues that the doomsday talk dominating debates about AI is problematic because it ‘works to the advantage of tech firms: it encourages investment and weakens arguments for regulating the industry.’ The editorial also offers suggestions for a way forward: for example, ‘tech firms must formulate industry standards for responsible development of AI systems and tools, and undertake rigorous safety testing before products are released’, while governments ‘must establish appropriate legal and regulatory frameworks, as well as applying laws that already exist.’
- International: Microsoft chief says AI is ‘not an existential risk’ to mankind, but human oversight needed: Microsoft’s president Brad Smith told Euronews that ‘We need safety brakes that will ensure that AI remains under human control’.
- US: Military AI’s Next Frontier: Your Work Computer: Wired raises concerns about spyware-type tools originally intended for security and defence purposes now being employed for workplace surveillance, noting that these systems ‘can now be widely deployable by anyone able to pay’ and that ‘American workers could be targeted.’
- International: AI Is a Lot of Work: This article by Josh Dzieza, a collaboration between New York Magazine and The Verge, explores the ‘tedious and repetitive’ human labour behind much-hyped AI technologies. Dzieza examines the work of annotators – those who ‘process the raw information used to train’ AI – noting that ‘annotation remains a foundational part of making AI’ and that ‘annotation is never really finished’. The author also shares views from a number of interviews, including with Sonam Jindal, the program and research lead of the nonprofit Partnership on AI, who says that ‘human intelligence is the basis of artificial intelligence, and we need to be valuing these as real jobs in the AI economy that are going to be here for a while.’
- International: The nuclear governance model won’t work for AI: Expert commentary from Chatham House emphasises that ‘from early on in their development, nuclear weapons posed a known, quantifiable existential risk’, whereas with AI ‘many of the concerns remain hypothetical and are derailing public attention from the already-pressing ethical and legal risks stemming from AI and their subsequent harms.’
- US: New York City’s Automated Employment Decision Tool Law Fair Game For Enforcement Beginning July 2023: In July this year, New York City’s Automated Employment Decision Tool (AEDT) Law will be enforced for the first time. The law makes it ‘unlawful for employers to use automated decision-making tools to screen individuals for employment decisions unless the law’s requirements regarding notice, bias audits and disclosure are satisfied.’
Other items of interest
- Report: Digital exclusion: This report by the House of Lords Communications and Digital Committee examines digital exclusion at a time when services are rapidly shifting online. The report found that ‘1.7 million households have no mobile or broadband internet at home. Up to a million people have cut back or cancelled internet packages in the past year as cost of living challenges bite. Around 2.4 million people are unable to complete a single basic task to get online, such as opening an internet browser. Over 5 million employed adults cannot complete essential digital work tasks. Basic digital skills are set to become the UK’s largest skills gap by 2030.’