CARICOM conference on the human impacts of autonomous weapons
- Both the Trinidad & Tobago Guardian and the Trinidad and Tobago Newsday cover the CARICOM conference on the human impacts of autonomous weapons, noting the Trinidad and Tobago Attorney General Reginald Armour’s comments on the dangers of autonomy in weapons systems and the urgent need for new international law. Armour told the press that Trinidad and Tobago intends to ‘take a leadership role’ in order ‘to encourage an international treaty that will regulate the responsible use of these new weapon systems, which without regulation have the potential to do massive harm to our civilians and our human dignity.’
Autonomy in weapons systems, and AI in security and defence
- Ukraine: Ukraine is now using AI-powered drones with some amazing capabilities: The Kyiv Post reports on Ukrainian troops’ use of the SAKER SCOUT system, which allegedly ‘autonomously detects and pinpoints enemy equipment, even when concealed under camouflage, and instantly relays this critical information, including geolocation coordinates, to command posts for swift decision-making.’ The SAKER SCOUT system ‘comprises a flagship reconnaissance drone and several first-person-view kamikaze drones, which can be coordinated by the flagship drone’. Each drone in the system can carry up to four munitions.
- Sweden: Signal: Saab acquires Bluebear Systems to exploit AI swarm systems: Saab, Sweden’s largest defence supplier, has ‘acquired Bluebear, a British artificial intelligence enabled autonomous swarm system manufacturer’, in order to ‘benefit from Bluebear’s expertise in autonomy and swarming, as well as command and control systems.’
- US: Shield AI’s ‘Hivemind’ controlled three V-BATs as a team: Shield AI has successfully completed a ‘teaming demonstration’ of three Northrop Grumman V-BAT unmanned aerial systems ‘to monitor and surveil simulated wildfires’. As a result of the test, Shield AI is now reportedly ‘on track to deploy V-BAT teaming capabilities in environments where GPS and communications may be limited within the following year.’ Hivemind can be customised for different missions and trained to perform various tasks, including counter-air operations and beyond-visual-range strikes.
- Ukraine: Cyber-teams fight a high-tech war on front lines: BBC News carries a story on the work of Ukraine’s ‘cyber-operators’, which includes using ‘AI visual recognition systems to analyse information gathered from aerial drones…to provide targets for the military’. The piece further notes that drones ‘have been at the leading edge of innovation in this conflict’, with Ukraine’s cyber-team deploying sensors to detect Russian drones, ‘so operators cannot just jam them but try to take control, sending commands to make them land.’
- International: What happens when machines can decide who to kill? The Red Cross Red Crescent magazine’s podcast has an interview with the ICRC’s Neil Davison on autonomy in weapons systems, covering the ICRC’s call for new rules on autonomous weapons, and the serious challenges posed by autonomy in weapons systems.
- US: Anduril acquires drone fighter maker Blue Force Technologies: Forbes reports on the acquisition of Blue Force Technologies by Anduril. Blue Force Technologies has been developing a next-generation drone fighter called Fury. The Anduril press release links the acquisition to the US DOD’s plan to rely on ‘large quantities of smaller, lower-cost, more autonomous systems.’
- US: Palmer Luckey says ChatGPT sold the Pentagon on AI weapons he’s now peddling: Vice reports on comments made by Palmer Luckey, Anduril’s founder, in which he claims that ‘ChatGPT has probably been more helpful to Anduril with customers and politicians than any technology in the last 10 years’, adding that ‘There’s so many people in the Pentagon and on Capitol Hill who have had that come to Jesus moment just because of the hype cycle around ChatGPT, which I am very happy to leverage in getting them excited about the future.’
- US: Pentagon’s new AI drone initiative seeks ‘game-changing shift’ to global defense: The Hill carries further remarks from U.S. Deputy Secretary of Defense Kathleen Hicks on the recently announced Replicator program (covered in last week’s news briefing), which Hicks says will mark a ‘game-changing shift’ in defence and security, arguing that autonomous weapons ‘can help a determined defender stop a larger aggressor from achieving its objectives, put fewer people in the line of fire and be made shielded and upgraded at the speed war fighters need without long maintenance.’
- US: Marines considering autonomous systems for ‘almost everything’: The U.S. Marines are reportedly ‘looking to push as many tasks as possible to autonomous systems as the service aims to operate across wide swaths of the Pacific’. On the issue of autonomous systems using lethal force, Lt. Gen. Karsten Heckl, Marine Corps deputy commandant for combat development and integration, said that ‘We’re going to have to have that discussion of man-in-the-loop and man-on-the-loop’.
- Aotearoa New Zealand: New Zealand faces dilemma over AUKUS ‘replicator’ drone swarm plans: Describing the pressures faced by Aotearoa New Zealand because of the Australia, U.K. and U.S. (AUKUS) military alliance and its experimentation with autonomy in weapons systems and drone swarming, this piece notes that Aotearoa New Zealand’s Ministry of Foreign Affairs and Trade has reiterated that the country advocates ‘for international, legally binding rules and limits on autonomous weapons systems.’
- International: From killer robots to AI-powered killer robots: A short piece in The Messenger notes concerns around autonomy in weapons systems, and highlights the UN Secretary-General’s call for new binding rules on autonomous weapons by 2026.
Regulation and enforcement
- International: G7 countries commit to AI code of conduct: Politico reports that the G7 countries have formulated a voluntary international code of conduct for artificial intelligence. As part of these commitments, companies should aim to ‘stop potential societal harm created by their AI systems; to invest in tough cybersecurity controls over how the technology is developed; and to create risk management systems to curb the potential misuse of the technology.’
Facial recognition, biometric identification, surveillance
- UK: Home Office secretly lobbied for facial recognition ‘spy’ company: The UK’s Home Office ‘secretly lobbied the UK’s independent privacy regulator to act “favourably” toward a private firm keen to roll out controversial facial recognition technologies’, according to emails seen by The Observer. Big Brother Watch, a UK civil liberties group, said that it is ‘unbelievable that a UK minister is batting for private companies selling dystopian mass surveillance tech’.
- UK/International: IBM promised to back off facial recognition — then it signed a $69.8 million contract to provide it: The Verge reports that despite writing a letter to the U.S. Congress in 2020 ‘announcing that the company would no longer offer “general purpose” facial recognition technology’, the company has signed a $69.8 million deal with the British Government to ‘develop a national biometrics platform that will offer a facial recognition function to immigration and law enforcement officials.’ Though the company claims that it has not deviated from its 2020 commitment, Matt Mahmoudi, PhD, tech researcher at Amnesty International, has said that ‘there is no application of one-to-many facial recognition that is compatible with human rights law.’
AI, algorithms and autonomy in the wider world
- International: “You know what to do, boys”: Sexist app lets men rate AI-generated women: Vice reports on a website in which users are invited to rate AI-generated images of women, which ‘exhibit many of the telltale signs of the sexist bias common to image-based machine learning systems’.
- International: Google: Political adverts must disclose use of AI: Google has introduced new rules that ‘require that political ads on its platforms let people know when images and audio have been created using artificial intelligence.’ This extends existing policy banning the ‘manipulation of digital media to deceive or mislead people about politics, social issues, or matters of public concern.’
- International: UNESCO: Governments must quickly regulate Generative AI in schools: UNESCO has issued guidance on regulation and teacher training to ensure a ‘human-centred approach to using Generative AI in education.’ The seven-point guidance, released on 7 September during UNESCO’s Digital Learning Week, calls for an ‘age limit of 13 for the use of AI tools in the classroom.’