Welcome back to The Autonomous Weapons Newsletter!
Military AI is in the news more than ever. Here's the latest.
This is Anna Hehir, FLI’s Head of Military AI Governance, and Maggie Munro, Communications Strategist, here with The Autonomous Weapons Newsletter. We’re excited to bring you the news on autonomous weapons systems (AWS) at a pivotal moment, as the world comes to terms with whether algorithms should make life-and-death decisions (spoiler alert: most people are terrified).
With this publication, we’re keeping our audience - primarily policymakers, journalists, and diplomats - up to date on the autonomous weapons space, covering policymaking efforts, weapons systems technology, and more.
If you have no idea what we’re talking about, check out autonomousweapons.org for our starter guide on the topic.
If you’ve enjoyed reading this, please be sure to subscribe and share as widely as possible.
What’s happening with military AI?
If you’ve caught a glimpse of the news over the past month, you may know that a LOT has happened in the world of military AI.
We go in-depth below, but if you’re short on time or attention, here’s the TLDR:
- A dispute between Anthropic and the US Department of War over how Anthropic’s systems could be used brought to the fore a critical discussion on the risks surrounding autonomous weapons.
- The general public’s opposition to autonomous weapons crescendoed to the point of consumer boycotts of OpenAI, which swooped in to forge its own contract with the US government.
- The norms around what is acceptable in autonomous weapons were further solidified by the 1,000+ employees from Google and OpenAI who signed an open letter in support of Anthropic.
- As the public asked for possible solutions, major news outlets such as The Financial Times and The Guardian came out in support of a global legally binding instrument on autonomous weapons.
- The use of military AI (both autonomous weapons and decision support systems) in the current Iran conflict further intensified attention, interest, and backlash towards how military AI systems can be used.
If you’re interested in a deeper dive, keep reading.
Anthropic vs. Department of War
American AI company Anthropic faced the wrath of the US Department of War following their refusal to allow their systems to be used for mass domestic surveillance and fully autonomous weapons. After Defense Secretary Pete Hegseth labelled the company a “supply-chain risk” and US federal agencies were ordered to cease all use of Anthropic technology, Anthropic is now challenging the supply-chain risk designation in a lawsuit.
While government backlash was swift and harsh, Anthropic’s commitment to their red lines - as outlined in this statement - has been widely celebrated by the public, and even by employees at other tech companies, who stated in an open letter: “We stand together to continue to refuse the DoW's current demands for permission to use our models for domestic mass surveillance and autonomously killing people without human oversight”. Anthropic has since seen a surge in Claude downloads, going from #131 on the Apple App Store to #1 over one weekend.
Meanwhile, OpenAI, which stepped in and announced a DoW contract of its own after Anthropic’s deadline to agree to the Pentagon’s demands passed, has faced intense backlash: over four million users have pledged to boycott ChatGPT according to quitgpt.org, alongside celebrities publicising their own boycotts.
What are Anthropic's actual red lines on military AI and domestic mass surveillance?
Anthropic has said they oppose their systems being used as “fully autonomous weapons”, arguing that the systems are not reliable enough and that proper oversight must be enacted by Congress. The DoW sought to impose the clause “for all lawful use”; Anthropic did not deem current US law and directives sufficient to provide proper oversight.
Anthropic is not opposed to fully autonomous weapons being used in the future. Nor are they opposed to their systems being used in many of the life-cycle functions that make up an autonomous weapon system, such as automated targeting and decision recommendations.
Anthropic is reported to have submitted a proposal to compete in a Pentagon prize challenge to produce technology for voice-controlled, autonomous drone swarming.
Dario Amodei has publicly positioned Anthropic's systems as tools to support democracies rather than authoritarian regimes. He has not publicly shared his views on the reported use of Claude in the US missions in Venezuela and Iran. Claude's constitution does explicitly prohibit “planning to seize or retain power in an unconstitutional way (e.g., in a coup)”.
Anthropic supports the use of AI for lawful foreign intelligence and counterintelligence missions, but is opposed to the use of AI for mass domestic surveillance, even if it’s legal under US law.
Their claim is that in cases such as incidental collection, the law has not yet caught up with the rapidly growing capabilities of AI. For example, it is legal for the government to purchase detailed records of Americans’ movements, web browsing, and associations from public sources without obtaining a warrant. Advanced AI systems can now allow states to use such data to develop personal profiles on Americans in a way that wasn't previously possible.
What are OpenAI's actual red lines on AWS and domestic mass surveillance?
OpenAI's current contract with the Pentagon (including the amendments from 3rd March) permits any autonomous weapons allowed by law (“for all lawful use”). This clause leaves open the possibility that both red lines - fully autonomous weapons and domestic mass surveillance - remain unrestricted.
There are concerns, and commentary, that OpenAI's language and contract clauses convey a far more permissive approach to these red lines - one not in line with Anthropic's assessment of current US law.
AI Warfare in the Spotlight
Recent military operations in Iran have brought AI-enabled targeting systems into active use at unprecedented scale. The US and Israel claim to have conducted 4,000 strikes in four days of Operation Epic Fury - more than in the first six months of the anti-ISIS bombing campaign - reportedly with plans to eventually achieve 1,000 strikes per hour.
As alleged uses of AI weapons systems emerge in the war in Iran, the Gulf, and Lebanon, one question keeps being asked: how can we tell whether AI systems are being used?
The answer is that we cannot fully know the extent to which AI systems are being used without confirmation from the user - in this case, a military. Even if a manufacturer chimes in to say their systems are being used, we would need the military's confirmation of what levels of autonomy were applied.
This cuts to the heart of the accountability and transparency gap. Victims, civilians, combatants, and eyewitnesses cannot fully know whether they are facing an attack by an AI-enabled weapon, nor whether a human exercised control over the kill chain.
There is a narrow range of clues that we, as external observers, can look for. The first major clue is the scale of attacks: AI-enabled targeting systems and decision support systems now allow militaries to conduct unprecedented numbers of strikes that would not be possible without AI. Swarms of systems can be a strong indicator of autonomous navigation, targeting, or engagement. Another clue is the absence of a stable data link or GPS signal: if a system continues to operate without one, it is highly likely to be navigating and engaging autonomously.
The phrase 'human in the loop' keeps popping up in the current discourse, but a 'human in the loop' does not by itself mean there is meaningful human control. For meaningful human control to be exercised, the following underlying principles must hold:
- Informed and adequate moral and legal assessments and responsibility
- Predictability, reliability, traceability, and explainability
- Limiting the types of targets, duration, geographical scope, and scale of the operation
- Preventing changes in systems without informed human judgement and review
Our Take
Reading the fine print, it emerges that Anthropic is not opposed to fully autonomous weapons being used in the future, nor are they opposed to their systems being used in automated targeting and other key parts of the kill chain. To date, Dario Amodei has not publicly shared his views on the reported use of Claude in the US missions in Venezuela and Iran.
Despite Anthropic's public posturing, we cannot rely on a single private company to define what is acceptable in military AI use. Governments must provide good governance, both through robust national laws and through international frameworks that carry broad multilateral buy-in.
CCW Group of Governmental Experts (GGE) Meeting on Lethal Autonomous Weapons Systems
While military AI was dominating the news cycle, states met at the UN in Geneva from 2-6 March to discuss a potential international treaty on autonomous weapons at the Convention on Certain Conventional Weapons. For a full readout, including analysis of each day's discussions, check out Reaching Critical Will's report on this latest GGE session.
Despite the acute geopolitical challenges of this moment in history, more than 70 states have expressed support for moving to negotiations based on the GGE's rolling draft text on possible elements for a treaty. On top of this, 130 states now support the concept of an international treaty on AWS in their national positions.
As the use of military AI in targeting and the use of force continues to play out before our eyes, the pressure on governments to set binding rules has never been stronger.
What we’re reading and watching:
📚 Katrina Manson from Bloomberg has just published a book, Project Maven, detailing just how the US shifted towards military AI use. It’s a deeply researched and timely read.
📚 While everyone’s attention was focussed on Iran, Airwars and The Independent uncovered the first confirmation from a government of a civilian killed in an AI-enabled attack, back in 2024 in Iraq. This is a critical read to understand how difficult it can be to attribute accountability for AI warfare.
📺 Take a break from reading and watch this interview on Amanpour and Company with Dr. Heidy Khlaaf, Chief AI Scientist at the AI Now Institute, explaining the basics of AI use in the war in Iran.
📺 Don’t miss the documentary Click to Kill: The AI War Machine from Channel 4, released April 2nd. We’ve watched the trailer and will be tuning in.
Contact Us
For tips or feedback, don’t hesitate to reach out to us at [email protected].