Our news briefings are emailed to our newsletter subscribers every week. Sign up via the contact form below if you’d like to receive them.
Autonomous weapon systems, and AI and autonomy in security and defence
- International: Cluster Munition Convention Offers Roadmap for New Autonomous Weapons Treaty: In Just Security, Human Rights Watch’s Bonnie Docherty writes that ‘proponents of an autonomous weapons systems treaty can look to the Convention on Cluster Munitions for guidance and motivation. It shows that an effective, efficient, and inclusive process can lead to life-saving results.’
- US: US Army may ask defense industry to disclose AI algorithms: US Army officials are ‘considering asking companies to give them an inside look at the artificial intelligence algorithms they use to better understand their provenance and potential cybersecurity weak spots’, reports Defense One.
- US: ARC awarded US Air Force contract for scalable training capability: The US Air Force has awarded Armaments Research Company (ARC) a $15 million contract ‘to provide a human-machine teaming solution to optimize airman training. The project seeks to fuse data from miniaturized artificial intelligence (AI) enabled computing sensors integrated with other battlefield data sources, such as unmanned autonomous systems. Insights gathered from this setup will be consolidated and relayed in near real-time to develop customized training approaches.’
- US: Army charts path ahead for Project Linchpin AI/ML initiative, while wary of its scope: Breaking Defense reports on a recent industry technical exchange meeting for the US Army’s Project Linchpin, which is ‘aimed at developing the service’s artificial intelligence and machine learning (AI/ML) operations pipeline.’
- International: Regulating artificial intelligence is a 4D challenge: In an op-ed in The Financial Times, the paper’s Innovation Editor, John Thornhill, writes that ‘Incorporating AI into lethal autonomous weapons systems (LAWS), or killer robots, is a terrifying prospect. The principle that humans should always remain in the decision-making loop can only be established and enforced through international treaties.’
Regulation and enforcement
- US: Federal Trade Commission says Ring employees illegally surveilled customers, failed to stop hackers from taking control of users’ cameras: The U.S. Federal Trade Commission has ‘charged home security camera company Ring with compromising its customers’ privacy by allowing any employee or contractor to access consumers’ private videos and by failing to implement basic privacy and security protections, enabling hackers to take control of consumers’ accounts, cameras, and videos.’ More on this decision here, in a Twitter thread from The Bureau of Investigative Journalism’s tech reporter, Niamh McIntyre.
- Australia: Australia considers ban on ‘high-risk’ uses of AI such as deepfakes and algorithmic bias: The Australian government is ‘considering a ban on “high-risk” uses of artificial intelligence and automated decision-making, warning of potential harms including the creation of deepfakes and algorithmic bias’, reports The Guardian.
- US: The White House AI R&D Strategy Offers a Good Start – Here’s How to Make It Better: Tech Policy Press offers an overview of the White House’s research and development strategy, with some recommendations as to how the strategy could go further.
- EU/US: EU-US Terminology and Taxonomy for Artificial Intelligence: The European Union has released an initial draft of AI terminologies and taxonomies prepared by a group of experts: ‘A total number of 65 terms were identified with reference to key documents from the EU and the U.S.’ Some commentary on this here from Andrew Strait, Associate Director at the Ada Lovelace Institute, who notes that civil society organisations should be consulted on the document.
- China: China’s Xi Jinping calls for greater state control of AI to counter ‘dangerous storms’: The Guardian reports that ‘Chinese leader Xi Jinping and top officials have called for greater state oversight of artificial intelligence as part of work to counter “dangerous storms” facing the country’.
Facial recognition, biometric identification, surveillance
- India: Indian police to share fingerprint data, more biometrics introduced in prisons: ‘India’s central jails across the country are currently setting up biometric systems for attendance and security monitoring.’ The move follows an increase in gang violence in some of the prisons. The aim is for ‘law enforcement agencies to access and share biometric fingerprint data with the country’s National Automated Fingerprint Identification System.’
- UK: BID shines a light on life under surveillance: This piece profiles a film, created in collaboration with the charities Bail for Immigration Detainees and Privacy International, that captures the experiences of immigrants who have been forced to wear a surveillance tag. The film exposes the ‘emotional and social impacts of the Home Office’s GPS surveillance on non-British citizens on immigration bail, including asylum seekers and many born or raised in the UK.’
AI, algorithms, and autonomy in the wider world
- International: New and emerging technologies need urgent oversight and robust transparency: UN experts: UN experts have ‘called for greater transparency, oversight, and regulation to address the negative impacts of new and emerging digital tools and online spaces on human rights. “New and emerging technologies, including artificial intelligence-based biometric surveillance systems, are increasingly being used in sensitive contexts, without the knowledge or consent of individuals,” the experts said ahead of the RightsCon summit in Costa Rica from 5 to 8 June 2023.’ They also stressed that ‘Specific technologies and applications should be avoided altogether where the regulation of human rights complaints is not possible’.
- US: Cop Out: Automation in the Criminal Legal System: Georgetown Law’s Center on Privacy & Technology has released a new website ‘which explores the ways algorithmic technologies increasingly inform decisions made in the criminal legal system’. More on this new site here, from the Center’s Twitter account.
- US: Eating disorder helpline disables chatbot for ‘harmful’ responses after firing human staff: As covered in last week’s news briefing, executives at the U.S. National Eating Disorders Association (NEDA) had ‘decided to replace hotline workers with a chatbot named Tessa’ after the workers decided to unionise. Vice now reports that ‘The National Eating Disorder Association (NEDA) has taken its chatbot called Tessa offline, two days before it was set to replace human associates who ran the organization’s hotline’, following ‘a viral social media post displaying how the chatbot encouraged unhealthy eating habits rather than helping someone with an eating disorder.’
- International: Tech layoffs ravage the teams that fight online misinformation and hate speech: CNBC reports that a number of tech companies, including Meta, Amazon, Alphabet and Microsoft, have cut the jobs of ‘wide swaths of people tasked with protecting the internet’s most-populous playgrounds’, noting that, amid budget constraints and a push for higher profit margins, ‘The slashing of teams tasked with trust and safety and AI ethics is a sign of how far companies are willing to go to meet Wall Street demands for efficiency’.
- International: ‘I do not think ethical surveillance can exist’: Rumman Chowdhury on accountability in AI: In an interview with The Guardian, Dr. Rumman Chowdhury, a pioneer in the field of applied algorithmic ethics, argues that ‘humans are not taking responsibility for the products that we build’ and that there is a clear lack of accountability, contending that ‘only through collectivism can proper regulation and enforcement occur.’
- International: Sci-fi writer Ted Chiang: ‘the machines we have now are not conscious’: In an interview with The Financial Times, the sci-fi author Ted Chiang, ‘one of the most lauded’ of his generation, discusses his fears about AI: ‘His fear isn’t about a doomsday scenario, like researchers predict, where AI takes over the world. He is far more worried about increasing inequality, exacerbated by technologies such as AI, which concentrates power in the hands of a few.’
Other items of interest
- Recording: How are new technologies impacting Human Rights? The University of Cambridge’s Minderoo Centre for Technology and Democracy recently hosted a discussion on the impact of new technologies on human rights, with speakers including Damini Satija (Head of Algorithmic Accountability Lab and Deputy Director, Amnesty Tech, Amnesty International) and Anjali Mazumder (AI and Justice and Human Rights Theme Lead, Alan Turing Institute). The full discussion is available to watch at the link above.
Sign up for our email alerts
"*" indicates required fields
We take data privacy seriously. Our Privacy Notice explains who we are, how we collect, share and use Personal Information, and how you can exercise your privacy rights. Don’t worry: we hate spam and will NEVER sell or share your details with anyone else.