Autonomy in weapons systems, and AI in security and defence
- UK: AI weapons pose risk to future of humanity and should be banned, expert dossier warns: An expert contributor, granted anonymity by the House of Lords AI in Weapon Systems Committee, has said that the use of AI in weapons ‘presents significant dangers to the future of human security and is fundamentally immoral.’ They also warn against delegating the decision to use force to a machine.
- Iran/Russia/Ukraine: Iran helping Russia build drone stockpile that is expected to be ‘orders of magnitude larger’ than previous arsenal, US says: The US Defense Intelligence Agency has warned of attempts by Russia to manufacture drones in-country for use in its war against Ukraine. This is said to be undertaken in collaboration with Iran, which has reportedly supplied Shahed 131 and Mohajer drones to Russia. The agency has also cautioned defence manufacturers to make changes to their supply chains to resist proliferation.
- China: China curbs drone exports over ‘national security concerns’: CNN reports that with effect from 01 September, China will impose export controls on drones and drone equipment. The piece mentions that these controls would ‘require vendors to seek permission to export certain drone engines, lasers, imaging, communications and radar gear, and anti-drone systems.’
- Multiple reports on the unmanned systems and weapons featuring autonomy showcased at IDEF 2023: Breaking Defense reports that the IDEF defence exhibition in Istanbul showcased a number of manned and unmanned ground vehicles. Turkish company Otokar showcased Alpar, a ‘heavy-class tracked and armoured unmanned ground vehicle mounted with guns and electro-optical systems with a laser designator.’ Defense News reports on the Pakistani conglomerate Global Industrial and Defense Solution’s Shahpur III, a Group 4+ drone capable of carrying various payloads and equipped with ‘electro-optical/infrared, synthetic aperture radar, communications intelligence and signals intelligence.’ The exhibition also featured Promavzer, which ‘enables soldiers to transform their standard rifles into mobile remote-controlled weapons’ and can be operated by wire or wirelessly within a 100m range. The system is equipped with a ‘day camera, a thermal camera, and a laser range finder’, while other compatible cameras can also be added.
- US: How Silicon Valley is helping the Pentagon in the AI arms race: This piece by the Financial Times covers the increasing investment of venture capitalists in the defence manufacturing sector, which has grown from $16 billion to $33 billion since 2019. Technological advances in automation and AI have shifted the paradigm from military development of technology to military procurement of commercial technology, a shift the institution seems ill-prepared for and which leaves it heavily dependent on venture capital or billionaire founders. One source of frustration is the ‘rigid planning, programming, budget and execution buying framework, known as PPBE, used to allocate resources across the military’, which is ‘unsuitable for the kind of software that is set to revolutionise future warfare.’
Facial recognition, biometric identification, surveillance
- Kenya: Worldcoin suspended in Kenya as thousands queue for free money: The BBC reports that the Kenyan government has ‘ordered cryptocurrency project Worldcoin to stop signing up new users, citing data privacy concerns.’ Worldcoin is a company founded by US tech entrepreneur Sam Altman, which ‘offers free crypto tokens to people who agree to have their eyeballs scanned.’ Kenya has warned its citizens to be ‘cautious about giving their data to private companies.’
- UK: Home Office secretly backs facial recognition technology to curb shoplifting: The Guardian reports on a ‘covert strategy’ that is said to have taken shape ‘during a closed-door meeting on 8 March’ between policing minister Chris Philp, senior Home Office officials and the private firm Facewatch. The meeting concerned the installation of facial recognition in shops, partly as an attempt to curb theft. Mark Johnson, advocacy manager of the campaign group Big Brother Watch, said: ‘Government ministers should strive to protect human rights, not cosy up to private companies whose products pose serious threats to civil liberties in the UK.’
- US: As Biometrics Technologies Evolve, Consumer Risks Follow, Warns FTC: A few months ago, the Federal Trade Commission released a policy statement that ‘warned of several consumer data privacy risks related to the increasing commercial use of biometrics technologies.’ The statement is the culmination of several years of work from the Commission on the ‘guidance of biometrics and best practices for facial recognition technology.’
- International: North Africa a ‘testing ground’ for EU surveillance technology: This piece by Middle East Eye examines the various ways in which surveillance technology is being tested on immigrants and asylum seekers, and in countries that are geopolitically disadvantaged in comparison to the EU and UK. Instances include the EU Emergency Trust Fund for Africa (EUTF for Africa) and now the Neighbourhood, Development and International Cooperation Instrument, which seek to equip law enforcement and border security agencies in northern Africa with these technologies.
- US: The everyday surveillance of undocumented immigrants: This piece by Princeton University Press features a book by Asad L. Asad titled ‘Engage and Evade: How Latino Immigrant Families Manage Surveillance in Everyday Life.’ This book features the myriad ways in which ‘undocumented immigrants live within a tangled web of institutional surveillance that both threatens and maintains their societal presence.’
- China: More reports that Hikvision, Nvidia still involved in China ethnic surveillance: Biometric Update reports on a recent publication by IPVM revealing a $6 million contract for surveillance software and hardware, reportedly for the ethnic surveillance of Uyghurs in China. Though the companies involved have distanced themselves from the contract and from ethnic surveillance, the contract is said to cover ‘Hikvision’s DS-1F0100-AI analysis software, noting that the algorithm can differentiate Uyghurs specifically.’
- China/International: Dahua confirms, defends ‘black’, ‘white’, ‘yellow’ skin color analytics: The Chinese firm Dahua, one of ‘the world’s largest video surveillance manufacturers’, has confirmed that it sells skin colour analytics as a ‘basic feature’ of its surveillance tools, with Dahua cameras listed in three European countries (Germany, France and the Netherlands) also including ‘skin colour analytics’ in their specifications. Charles Rollet, the journalist who wrote the story, notes that the American Civil Liberties Union has stated that such technology ‘will practically guarantee’ racial discrimination.
- India: Multiple reports of biometric records being collected from ‘illegal migrants’ in Manipur and other states of north-east India: The Ministry of Home Affairs in India has directed the National Crime Records Bureau (NCRB) to assist in collecting biometrics from ‘illegal immigrants’ amid rising communal and ethnic tensions in the state of Manipur. Similar instructions have been issued in other states in the region.
AI, algorithms and autonomy in the wider world
- International: Unregulated AI Will Worsen Inequality, Warns Nobel-Winning Economist Joseph Stiglitz: In an interview with Scientific American, Joseph Stiglitz, a Nobel prize-winning economist, argued that AI is not yet at the point where it can be trusted on its own. Though the claim has been that AI would bring about increased productivity, Stiglitz opined that it would reduce white-collar jobs and create place-based inequality depending on where these tech industries are geographically concentrated.
- International: Turns out there’s another problem with AI – its environmental toll: The Guardian reports on the environmental impact of AI technologies that use graphics processing units (GPUs) and tensor processing units (TPUs). These processors consume more electricity than their predecessors, and the data centres running them also use millions of litres of freshwater to cool servers during the training and use of these technologies.
- International: New AI systems collide with copyright law: Several artists have ‘filed a lawsuit against Stability AI, the company behind Stable Diffusion, Midjourney, and DeviantArt.’ This comes at a time when some artists feel violated by AI tools that could displace their careers. The lawsuits aim to ‘create legislation and regulation to protect copyright holders and artists from predatory AI companies.’
- International: Chatbots sometimes make things up. Not everyone thinks AI’s hallucination problem is fixable: Matt O’Brien, writing for ABC News, reports on generative AI systems that have been found to fabricate text. A number of companies are looking into how they can improve their AI systems. Daniela Amodei, co-founder and president of Anthropic, maker of the chatbot Claude 2, says: ‘I don’t think that there’s any model today that doesn’t suffer from some hallucination.’
- US: A.I. is on a collision course with white-collar, high-paid jobs — and with unknown impact: Last month, the Pew Research Center published results from a study on the impact of AI on the American workforce. For example, the study revealed that ‘19% of U.S. workers are in jobs with high exposure to AI. The study uses the term “exposure” because it’s unclear what AI’s impact — whether positive or negative — might be.’
- US: Researchers Highlight Ethical Issues for Developing Future AI Assistants: The Georgia Tech News Centre reports that ‘next-generation smart assistants aren’t on the market yet, but the research necessary to create them is underway now.’ According to AI-CARING researchers, ‘when a person relies on an AI system, that person becomes vulnerable to the system in unique ways’, and they argue that AI system designers should ‘prioritise the user’s well-being.’ The author also points out that designers should consider issues such as ‘trust, reliance, privacy, and a person’s changing cognitive abilities.’ Another issue raised is the potential inaccuracy of AI systems, which could, for example, ‘forget to tell the user to take medication or it may inform the user to take the wrong medication.’
- International: Beware of FraudGPT, the rogue AI chatbot: This piece reports on bots such as ‘FraudGPT’ and ‘WormGPT’, which have surfaced on the dark web and claim to be unhindered by ethical and legal constraints on the prompts given to them. Their possible uses include automating scams, phishing, and the production of malware.
- US: Washington rushing to put guardrails on AI – fast enough?: This piece outlines the various efforts to regulate AI in the US and the urgent need for tangible progress on regulating this technology. The piece says ‘the technology raises grave concerns about everything from data privacy and election integrity to autonomous weapons and new biological threats.’