Pentagon Says Anthropic's AI Safety Limits Make It An 'Unacceptable' Wartime Risk

Source: Forbes

The U.S. Department of War said in a March 17 court filing that Anthropic's safety and usage restrictions make the company an "unacceptable risk" to national security, sharply escalating a dispute that began as a contract negotiation and has now become a litmus test for whether an AI vendor can place enforceable limits on how the U.S. military uses its models.

In its 40-page opposition to Anthropic's request for a preliminary injunction in federal court in San Francisco, the government argued that Anthropic's conduct during negotiations over military AI contracts raised doubts about whether it could be a "trusted partner" in warfighting environments.

"Throughout the negotiations, Anthropic's behavior more generally caused the Department to question whether Anthropic represented a trusted partner with whom the Department was willing to contract in this highly sensitive area," the filing stated.

"AI systems are acutely vulnerable to manipulation, and Anthropic could attempt to disable its technology or preemptively alter the behavior of its model either before or during ongoing warfighting operations, if Anthropic -- in its discretion -- feels that its corporate 'red lines' are being crossed. DoW deemed that an unacceptable risk to national security."

In other words, the filing says that because AI systems require ongoing tuning and can be modified by the vendor, Anthropic's insistence on preserving its own policy "red lines" created a risk that it could disable or alter model behavior during active operations if it believed the Pentagon had crossed them.

That, the government wrote, would introduce "unacceptable risk" into Defense Department supply chains.

The legal theory rests on 10 U.S.C. § 3252, a statute that allows the Pentagon to take procurement action when a source poses a "supply chain risk" to covered national security systems. The law defines such risk broadly, including the possibility that a system's integrity or operation could be sabotaged, degraded or otherwise subverted.

In a statement on March 5, Anthropic CEO Dario Amodei called the statute "narrow."

"It exists to protect the government rather than to punish a supplier; in fact, the law requires the Secretary of War to use the least restrictive means necessary to accomplish the goal of protecting the supply chain. Even for Department of War contractors, the supply chain risk designation doesn't (and can't) limit uses of Claude or business relationships with Anthropic if those are unrelated to their specific Department of War contracts."

The case is one of the sharpest public collisions yet between an AI company and Washington's insistence on full operational control.

Anthropic sued on March 9 after Secretary of War Pete Hegseth designated the company a supply chain risk, a move that bars it from certain national security procurements.

"Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic," Hegseth wrote on X.

Anthropic says the government is retaliating against the company for objecting to uses such as mass surveillance of Americans and autonomous lethal weapons, and has argued that the designation violates the First Amendment, the Administrative Procedure Act and due-process protections.

"We do not believe, and have never believed, that it is the role of Anthropic or any private company to be involved in operational decision-making -- that is the role of the military," Amodei stated. "Our only concerns have been our exceptions on fully autonomous weapons and mass domestic surveillance, which relate to high-level usage areas, and not operational decision-making."

The government's answer is that this is not a speech case but a procurement one. Citing long-settled precedent that the government can decide whom it does business with and on what terms, Justice Department lawyers argued that Anthropic's refusal to accept an "any lawful use" clause was commercial conduct, not protected expression. Had Anthropic accepted that contractual term, the filing says, the challenged actions "would not have occurred."

"The Government did not, for example, object to Dr. Amodei's 60 Minutes interview in 2025 about the potential dangers of AI. The Presidential Directive and the Secretary's exercise of section 3252 authority thus were not borne out of 'a desire to punish Anthropic for its views,' but rather out of concern that a private vendor is seeking to dictate the military's use of AI in the national security realm."

The backdrop includes a January push by the Trump administration to make the military more aggressively "AI-first."

According to the government's filing, the Secretary of War directed the department on January 6 to incorporate standard "any lawful use" language into AI service contracts -- part of a broader strategy to ensure the military uses AI models "free from usage policy constraints that may limit lawful military applications." The filing also cites President Donald Trump's January 23, 2025 executive order on sustaining U.S. AI dominance for national security.

Anthropic had already become deeply embedded inside Defense workflows. The government says Claude was available to the department through a Palantir contract and a separate agreement with Anthropic, and that it was the department's "most widely deployed AI model" and the only one embedded in classified systems.

That operational footprint is part of what makes the standoff so consequential: the filing says replacing Anthropic cannot happen instantly, which is why the administration has allowed a 180-day transition period while agencies migrate to alternatives.

The administration's core claim is not simply that Anthropic objected to certain uses, but that the company retained too much practical leverage over systems the military might rely on in combat. In memoranda attached to the filing, Pentagon officials argue that large language models differ from traditional software because they are probabilistic, continuously updated systems whose integrity depends heavily on vendor trustworthiness. That made Anthropic's continued access, in the department's view, a live operational risk rather than merely a policy disagreement.

The filing goes further, suggesting the Pentagon's concerns intensified during recent negotiations and under wartime conditions. Government lawyers say officials became "alarmed" that Anthropic had questioned the use of its technology in warfighting systems during active operations, and feared that any interruption or modification could endanger military missions and lives. Anthropic has publicly disputed the implication that it sought operational control, with Amodei saying in late February that military authorities, not private companies, make military decisions and that the company had not attempted to limit use "in an ad hoc manner."

Anthropic has warned that the supply-chain-risk label could imperil relationships with more than 100 commercial customers and cost it billions in revenue, because the designation carries reputational consequences far beyond Washington procurement circles. Civil liberties groups and parts of the tech industry have rallied behind Anthropic, with reported amicus support from Microsoft and researchers affiliated with OpenAI and Google.

What makes the case all the more intriguing is the government's use of a supply-chain-risk tool that outside legal observers say has historically been associated with foreign-adversary concerns -- not disputes with a U.S. AI lab over model-governance terms. A legal analysis by law firm Willkie Farr said the move "appears unprecedented" for an American company and noted that Section 3252 has previously been used in contexts involving firms such as Russia's Kaspersky Lab and China's Huawei.

"While exceptional and unprecedented, the implication of Section 3252 was itself apparently a step back from the U.S. Government's threat to invoke the Defense Production Act."
"Resort to the DPA would have theoretically given the President the authority to compel the use of Claude on the government's own terms in the name of national security."

An immediate question before Judge Rita F. Lin is whether Anthropic can win a preliminary injunction blocking the government from implementing the presidential directive and the secretary's determination while the case proceeds. The court set expedited briefing on the motion, a sign of how quickly the issue has moved from a procurement dispute into a referendum on whether frontier model companies can preserve safety constraints once their tools enter the national security stack.

At bottom, the government is arguing for a simple principle with enormous implications: in war, it says, the military cannot depend on AI systems whose maker reserves the power to second-guess lawful uses. Anthropic's counterargument is just as fundamental: an AI company does not lose its constitutional rights, or its ability to draw ethical boundaries, simply because the Pentagon wants access to its models. However the court rules on the injunction, the case will be closely watched as one of the first major judicial tests of who gets the last word when AI safety policy collides with state power.