Anthropic’s Claude artificial intelligence system—embedded in Palantir’s Maven Smart System on classified military networks—is being used by the US military to identify and prioritize targets in the criminal war of aggression against Iran launched by the United States and Israel on February 28. The Washington Post reported Tuesday that Claude generated approximately 1,000 prioritized targets on the first day of operations alone, synthesizing satellite imagery, signals intelligence and surveillance feeds in real time to produce target lists with precise GPS coordinates, weapons recommendations and automated legal justifications for strikes.
This represents the first large-scale deployment of generative AI in active US warfighting operations. It is being used to wage a war that has already killed 787 Iranians, according to Amnesty International, including an estimated 150 schoolchildren in a missile strike on a school in the southern city of Minab on March 1, which UNESCO described as “a grave violation of humanitarian law.”
As the World Socialist Web Site previously reported, last week the Trump administration blacklisted Anthropic and designated it a “supply chain risk to national security” after CEO Dario Amodei refused Pentagon demands for unrestricted access to Claude, insisting on two narrow contractual restrictions against mass domestic surveillance of Americans and the use of fully autonomous weapons.
On February 28, just hours before the war on Iran began, Trump signed an executive order directing agencies to phase out Claude, giving the military six months to complete the transition. This renders the entire spectacle of the blacklisting functionally meaningless. While Trump publicly punishes Anthropic for maintaining two narrow technical restrictions, the same administration is using Anthropic’s technology to select targets in an illegal war. As one military source told the Washington Post, “We’re not going to let [Amodei’s] decision-making cost a single American life.”
Amodei has not publicly opposed the use of Claude in the Iran war. His silence is revealing but not surprising. His stated “red lines” against domestic surveillance and fully autonomous weapons were never directed at the functions Claude is actually performing in Iran: target identification, intelligence assessment, weapons selection and battle simulation. These operations fall entirely outside his stated restrictions.
Amodei himself declared in a public statement last week, “We have never raised objections to particular military operations,” confirming that Anthropic also raised no objection to the January 3 assault on Caracas, Venezuela, which killed between 83 and 100 people.
The military-AI kill chain
Claude’s deployment in Iran is the product of a massive military-AI apparatus constructed over years with bipartisan support. Project Maven—the Pentagon’s flagship AI warfare program, now operated by Palantir under a contract that has grown to nearly $1.3 billion—serves over 25,000 users across every US Combatant Command. Anthropic itself placed Claude on these classified networks through its November 2024 partnership with Palantir and Amazon Web Services, followed by the launch of “Claude Gov” for national security agencies in June 2025. The company pursued military integration aggressively. It cannot now plausibly claim surprise that its technology is being used for exactly what military AI systems are designed to do.
The template for AI-driven mass murder was established in Gaza. As +972 Magazine documented, Israel’s “Lavender” AI system flagged approximately 37,000 Palestinians for assassination. The systematic shift from human target selection to algorithmic target generation with human rubber-stamping is now being deployed at scale against Iran, with Claude generating hundreds of targets daily. As The New Republic observed, “Meaningful human control becomes a bureaucratic fiction rather than a genuine safeguard when hundreds of AI-generated targets are processed daily with inconsistent verification across military units.”
Defense Secretary Pete Hegseth’s January 9 “Artificial Intelligence Strategy for the Department of War” made the trajectory explicit, committing the Pentagon to becoming an “AI-first warfighting force,” requiring frontier AI models deployed to soldiers within 30 days of public release and mandating demonstrations of autonomous drone swarms and AI-driven battle management later this year. The Iran war is the first major test of this doctrine.
Hours after the Anthropic blacklisting, OpenAI CEO Sam Altman announced an expanded deal to deploy ChatGPT on the Pentagon’s classified networks. The contract language stipulates: “The Department of War may use the AI System for all lawful purposes,” precisely the formulation Anthropic refused to accept.
Altman told OpenAI staff at an internal all-hands meeting, “You do not get to make operational decisions,” informing his own employees that OpenAI has no say over how the Pentagon uses its technology. He later admitted to CNBC that the deal “looked opportunistic and sloppy.”
Even OpenAI’s nominal “safeguards”—prohibitions on “unconstrained monitoring of US persons’ private information” and on autonomous weapons—are riddled with loopholes. The qualifier “unconstrained” means that any limitation, however minimal, satisfies the prohibition. The term “private information” is undefined; the Defense Intelligence Agency and the National Security Agency already purchase bulk location and browsing data from commercial brokers without warrants.
OpenAI’s own Katrina Mulligan acknowledged the obvious: “We can’t protect against a government agency buying commercially available data sets.” These are not safeguards. They are public relations instruments designed to provide the appearance of ethical constraints while granting the military functionally unrestricted access.
The need to organize opposition to AI militarism
The public response to these developments reflects genuine popular hostility to the use of AI for mass surveillance and militarism. Across social media, thousands of comments have praised Anthropic for refusing to capitulate fully to the Pentagon and denounced OpenAI for doing exactly that. The “We Will Not Be Divided” open letter, which calls on OpenAI to defend the same provisions that Anthropic did, has grown from roughly 650 to nearly 900 signatories from OpenAI and Google. Since last Friday, ChatGPT uninstalls have spiked 295 percent in response to OpenAI’s brazen subservience to the Trump administration, while Claude rose from 42nd to 1st on the Apple App Store.
But this opposition has not taken the form of independent working-class political action. No strikes, protests or work stoppages have been reported at any AI company. The open letters appeal to corporate executives—the same executives who signed military contracts—to voluntarily adopt restrictions. The #QuitGPT movement channels opposition into consumer choices: switch apps, cancel subscriptions, sign petitions.
Deep popular opposition to the war exists. A University of Maryland poll found only 21 percent of Americans favored the attack on Iran, while 49 percent opposed it. A YouGov survey recorded 34 percent approval, the lowest for any US military action in modern history.
This mass opposition must be given conscious political expression. It will not find a vehicle in either the Democratic Party—which joined Republicans to pass the $901 billion defense budget funding these operations—or the Republican Party, or the trade union bureaucracies, or the pseudo-left organizations that function as political auxiliaries of the Democrats. It can only be organized as an independent movement of the international working class, fighting to put an end to imperialist war, mass surveillance and the threat of fascism.
Artificial intelligence is a revolutionary technology with the potential to advance human knowledge, eliminate drudgery and raise the material and cultural level of the entire world. Under capitalism, it is being transformed into an instrument of imperialist mass killing, a tool for the construction of a surveillance police state and a mechanism for the wholesale elimination of jobs and the further concentration of obscene wealth. The answer to this is the building of a revolutionary socialist movement of the working class to take political power and place this technology—along with the means of production as a whole—under public ownership and democratic control.
The World Socialist Web Site has developed Socialism AI—a unique application of artificial intelligence to the political education and preparation of the working class for this fight. Tech workers, who confront the daily transformation of their labor into instruments of war and repression, should use Socialism AI, study the history of Trotskyism and the Fourth International and take up the struggle for the independent political mobilization of the working class against imperialist war and the capitalist system that produces it.
Read more
- Trump blacklists Anthropic, orders all federal agencies to cease use of AI firm’s technology
- Pentagon gives Anthropic 3 days to drop AI safeguards or face blacklisting
- Palantir Technologies: A “CIA-backed startup”
- Google says it will not renew Project Maven—but collaboration with Pentagon will continue
