Autonomy in weapons systems, and AI in security and defence
- Australia/US: Drones with AI targeting system claimed to be ‘better than human’: In New Scientist, David Hambling writes that the US military ‘could soon be equipped with an artificial intelligence that is claimed to be better than humans at identifying targets’, but notes that ‘the classified nature of the work makes it difficult to verify this claim.’ Athena AI, the Australian company developing the technology, said that it can ‘check whether people are wearing a particular type of uniform, if they are carrying weapons and whether they are giving signs of surrendering’, and claimed that ‘We have worked extensively with military legal officers and gone through extensive scientific testing, which has shown that our system performed better than a human at identifying targets and non-targets in dynamic targeting scenarios, and resulted in a better legal outcome than human operators alone.’ The system will ‘assist human drone operators’, and is currently being trialled on Red Cat’s Teal 2 drone.
- UK: Coverage of House of Lords committee on AI and weapons systems: Experts warn that the absence of human error in autonomous weapons “is a myth”: Two articles this week cover the ongoing evidence sessions of the House of Lords select committee on AI in weapon systems, with witnesses challenging the narrative that autonomy on the battlefield will reduce human error, since human decisions and inadvertent human errors are present throughout the lifecycle of weapons systems. Witnesses also urged caution against an arms race for dominance in autonomous weapons. Further reporting notes the risks associated with automation bias in algorithms, and how these risks are exacerbated when life-and-death decisions are made on their basis.
- International: First UN meeting on the threats of artificial intelligence to be held in the UK: As part of the United Kingdom’s presidency of the United Nations Security Council, it will host a meeting focusing on ‘major risks that would arise from governments using AI to develop autonomous weapons or control nuclear weapons.’ Announcing the session, Dame Barbara Woodward, the UK’s ambassador to the UN, called for ‘a multilateral approach to managing both the huge opportunities and the risks that artificial intelligence holds for all of us.’
- International: UN Secretary-General’s remarks to Shanghai Cooperation Organization: The United Nations Secretary-General, Antonio Guterres, addressed a virtual summit of the Shanghai Cooperation Organisation. He raised concerns that humanity is unprepared for artificial intelligence, and noted that autonomous weapons are an issue on which ‘governance capacities are falling far behind.’ He proposed global cooperation on these technologies, and announced the formation of a high-level advisory group on AI.
- International: Loitering munitions: flagging an urgent need for legally binding rules for autonomy in weapon systems: This piece by Dr Ingvild Bode and Dr Tom Watts on the ICRC’s humanitarian law and policy blog, based on their recently published report on loitering munitions, calls for legally binding rules on autonomous weapons while elaborating the risks and challenges such systems pose, using loitering munitions as an example. The piece discusses issues of precision, certainty and human control over loitering munitions featuring autonomy, and raises ethical concerns about designing such systems to be usable in populated areas.
- US: Future of artificial intelligence-dominated air combat showcased in new Air Force video: This piece reports on the Air Force Research Laboratory’s (AFRL) Autonomous Aircraft Experimentation (AAx) initiative, as featured on the Defense Visual Information Distribution Service (DVIDS). The report says that this initiative is ‘heavily tied to AFRL’s Skyborg advanced drone program, which the Air Force has said in the past is a key “technology feeder” for the Collaborative Combat Aircraft (CCA) program under the Next Generation Air Dominance (NGAD) initiative.’
- Estonia/UAE: The military robots are coming – at some point: Defense News reports on trials of the THeMIS 4.5 UGV, made by Milrem Robotics, a ten-year-old company founded in Estonia and recently acquired by the UAE’s Edge Group. The piece notes some of the challenges involved in UGV development, and the progress being made on artificial intelligence and autonomy in these systems. Further coverage of the trial here.
- US: US military calls for better weapons to fight artificial intelligence: Marine Corps Times reports that ‘the ongoing conflict in Ukraine has underscored just how effective smart tech accessible to anyone may be in neutralizing the elaborate tools of conventional warfare’, and as a result, ‘the U.S. undersecretary of defense for research and engineering published a need statement in May, calling on industry to help develop new weapons that can effectively meet the new threat from unmanned, autonomous and AI defense-driven systems.’
Government Regulation
- International: Council of Europe must not water down its human rights standards in Convention on AI: This week, a number of civil society organisations submitted a joint statement to the Council of Europe Committee on Artificial Intelligence (CAI) highlighting their concerns about the exclusion of ‘both civil society observers and Council of Europe member participants from the formal and informal meetings of the drafting group of the Convention.’ The letter sets out eight key objectives for the CAI drafting group to prioritise, and invites organisations and individuals to endorse the joint statement before Monday, July 10, 2023.
- EU: AI Act: Spanish presidency sets out options on key topics of negotiation: At the end of June, the Spanish presidency of the EU Council of Ministers circulated a paper that aims to ‘inform an exchange of views on four critical points of the AI rulebook.’ A critical discussion between the EU Council, Parliament and Commission will be held later this month, on 18 July.
- Japan: Japan leaning toward softer AI rules than EU – source: This week Reuters spoke to a Japanese government official about Japan’s plans ‘to work out an approach for AI that will likely be closer to the U.S.’ The source notes that Japan is of the opinion that the EU rules are ‘a little too strict’. The official has not been named.
- US: Three things to know about how the US Congress might regulate AI: US Senate majority leader Chuck Schumer recently delivered a speech outlining three key themes that ‘should help us understand where US AI legislation could be going.’ He also emphasised that ‘technology, and AI in particular, ought to be aligned with “democratic values.”’
Facial recognition, biometric identification, surveillance
- Russia: Russia illegally used facial recognition to arrest protesters, human rights court rules: Politico reports on a ruling by the European Court of Human Rights, which found that the use of facial-recognition technology against a protester in Moscow in 2019 was incompatible with ‘the ideals and values of a democratic society governed by the rule of law.’ Some brief expert commentary on the decision here (on Twitter).
- Australia: Australian music venues criticised for use of facial recognition technology: CHOICE, a consumer advocacy group in Australia, warns that facial recognition technology carries privacy and human rights risks, amid reports of its use in major sports venues in Australia. Kate Bower of CHOICE emphasises that ‘this is of particular concern given the sensitivity of the data being gathered. In fact, biometric data is already considered sensitive data under Australian privacy legislation.’ ABC News covers the interview here, in which Bower also points out that consumers are not aware of the risks, raising questions of consent. She calls for regulation and an impact assessment on privacy and cybersecurity risks.
- International: Russian Firms Push Surveillance Tech In Central Asia: This piece reports on an investigation by The New York Times of Russia’s efforts to ‘harness the internet and tighten control over internal dissent.’ These new technologies can reportedly ‘determine when individuals send data or speak over encrypted channels.’ They can also ‘capture passwords from unencrypted platforms and give authorities an improved ability to use cellphones to track users’ movements.’
- US: POV: The U.S. government is already buying citizens’ personal information. AI could make that process even easier: This piece reports on a recently declassified internal report from the Office of the Director of National Intelligence, which revealed large-scale purchases of private data by various US government departments. Private data aggregators are able to build huge databases on many aspects of a person’s life, including video and audio recordings, purchase histories, health data and much more. While laws and rules exist to protect against breaches of privacy by governments, the declassified report ‘warns that the increasing volume and widespread availability of commercially available information poses significant threats to privacy and civil liberties.’
- International: New study: Data practices and surveillance in the world of work: A report by AlgorithmWatch draws on interviews with trade union representatives and academics to highlight workplace surveillance practices. The report provides country profiles and a literature review, and recommends a participatory and transparent approach to the use of surveillance, while urging reconsideration of the ‘far reaching effects of such pervasive surveillance.’ Another research project, by Cracked Labs, ‘examines and maps how companies use personal data on (and against) employees.’
- Israel: Automated Apartheid: How Israel’s occupation is powered by big tech, AI, and spyware: In a report published in May 2023, Amnesty International writes about ‘Automated Apartheid’ and ‘explores how facial recognition technology is used extensively by the Israeli authorities to support their continued domination and oppression of Palestinians.’ The author further links automated apartheid to the development of AI surveillance technology in oppressive regimes.
AI, algorithms and autonomy in the wider world
- US: TechScape: Self-driving cars are here and they’re watching you: Manufacturers of automated vehicles promote safety as a key selling point, and in self-driving systems ‘cameras play a crucial role.’ Albert Fox Cahn, an anti-surveillance activist and director of the Surveillance Technology Oversight Project, says ‘for years we’ve had growing numbers of features that are turning our cars into policing tools.’ Privacy experts also warn that ‘surveillance technology and systems which collect user data that are vulnerable to law enforcement requests disproportionately harm marginalised groups and are a violation of constitutional rights to privacy.’
- US: Maxine Waters: ‘We don’t know the dangers of AI’: Jennifer Schonberger, reporting for Yahoo! Finance, interviews Maxine Waters, a senior member of the House Financial Services Committee, who says ‘we really need to get started on making sure that we have regulations that will protect everyone’. Waters is determined to ‘make sure AI doesn’t adopt biases that hurt certain types of consumers.’ A Congressional AI Task Force has raised ethical and legal concerns related to algorithmic bias and discrimination, particularly where automated programs do not perform as intended or have adverse effects on members of protected classes.
- France: France wants to become Europe’s capital for AI: President Macron of France has strong ambitions to establish France as Europe’s hub for AI. TechMonitor notes that ‘the country certainly has plenty of technical talent, but it’s facing a challenging and costly market. Some, moreover, fear that the EU’s far-reaching regulatory framework might stunt the continent’s innovation, leaving European startups trailing in the dust of the US and China.’
- International: Public-private engagement essential for AI regulation: Cisco executive Jeetu Patel: Jeetu Patel, executive vice president at Cisco, recently shared his views on regulating AI. He commented that ‘I think AI should be regulated. We should have public private policy, engagement…on a very deep level. It should be not just a national but a global conversation.’
- US: In NYC, companies will have to prove their AI hiring software isn’t sexist or racist: On July 5, 2023, the Department of Consumer and Worker Protection (DCWP) began enforcing a law under which ‘anyone who wants to use an automated employment decision tool must conduct a bias audit first and notify job candidates.’