Anthropic on shaky ground with Pentagon amid feud after Maduro raid

Source: The Hill

Anthropic has increasingly found itself at odds with the Pentagon over how its AI model Claude can be deployed in military operations, following disclosures about the model's use in the raid that captured Venezuelan President Nicolas Maduro last month.

The AI company is one of several firms that secured a massive contract with the Defense Department (DOD) last summer amid a broader push by the Trump administration to boost AI adoption.

But the latest dispute has left its relationship with the Pentagon on shaky terms, threatening the future of the contract and sparking a broader ethics battle over the safe application of AI models in warzones increasingly dominated by cyber attacks, robots and drones.

Sarah Kreps, the director of the Tech Policy Institute in the Cornell Brooks School of Public Policy, said that Anthropic has been "going in the direction of enterprise, and that, I think, does open them up to these questions of how enterprises are using these models, and it makes it much more difficult to maintain control once you've handed these models over to that kind of enterprise-level use."

"What I've seen in other tech sectors, which is when you grow so quickly, which they have, even if your goal is not to move quickly and break things, things are going to break. And then you have to figure out what to do," Kreps said in an interview with The Hill.

The Pentagon confirmed Monday that it was reviewing its relationship with Anthropic amid a long-brewing dispute over the AI firm's usage policy, which bars the use of Claude to conduct mass surveillance or weapons development.

"Our nation requires that our partners be willing to help our warfighters win in any fight. Ultimately, this is about our troops and the safety of the American people," chief Pentagon spokesperson Sean Parnell said in a Monday statement.

Anthropic was one of several leading AI companies that scored a contract with DOD for up to $200 million last July, alongside Google, OpenAI and Elon Musk's xAI.

As the Pentagon has continued its push to embrace the technology, it has also brought several models onto its new bespoke AI platform, GenAI.mil. Google and xAI joined the platform in December, while OpenAI announced last week that ChatGPT would also be made available. Anthropic notably has not been added to the platform.

The company's relationship with the Defense Department has hit a major snag in recent weeks.

A key point of contention between the two sides was the Trump administration's Venezuela raid in early January, which resulted in Maduro's capture and indictment in the U.S. on charges related to drug trafficking.

Following the raid, a senior Anthropic executive got in touch with a senior Palantir executive, asking whether Anthropic's software was used in the operation, according to a senior DOD official.

The Palantir executive ultimately told the Pentagon about the exchange because he was alarmed the question was raised in a way that would indicate that Anthropic might disapprove of the use of its model, the official noted. Palantir did not respond to a request for comment.

An Anthropic spokesperson said in a statement Wednesday that the company has not discussed the use of Claude for specific operations with the Pentagon or expressed concerns to "any industry partners outside of routine discussions on strictly technical matters."

A senior DOD official told The Hill on Wednesday that due to Anthropic's behavior, many senior DOD officials are starting to view the company as a supply chain risk -- a designation typically reserved for foreign adversaries -- and the Pentagon might require all of its vendors and contractors to certify that they do not use any Anthropic models.

In a statement Monday, an Anthropic spokesperson said it is engaged in "productive conversations, in good faith, with [the Department of War] on how to continue that work and get these complex issues right."

"Anthropic is committed to using frontier AI in support of US national security. That's why we were the first frontier AI company to put our models on classified networks and the first to provide customized models for national security customers," the spokesperson added.

Morgan C. Plummer, a senior policy director at Americans for Responsible Innovation, said the tangle over Claude's use might be one of those "weird cases" where both sides have a point.

The Pentagon can argue that its needs are different from the rest of the market's and that it could use models in "new and unique" ways to protect the country, he said, while Anthropic can contend it has a certain ethos that drives the company and red lines around its models' use.

"Both positions, to me, are perfectly reasonable positions," Plummer said in an interview with The Hill, adding that the clash highlights why the two sides should have come to an agreement over the use of the technology before it was procured.

Amid this standoff, Emelia Probasco, a senior fellow at Georgetown's Center for Security and Emerging Technology, suggested it would be a "massive loss" if the two sides failed to reach an agreement and the Pentagon no longer had access to Anthropic's tools.

"One of the top AI labs in the world is trying to help the government, and there are warfighters who are using this today who are going to be harmed if all of a sudden their access is taken away without some very clear technical explanation of what's going on," she said.

Claude is currently the only large language model that can operate on fully classified systems, a priority for the Pentagon.

The senior DOD official said Wednesday that other AI companies, including the makers of ChatGPT, Grok and Gemini, are working in "good faith" with the Pentagon to ensure their models can be utilized for "all lawful purposes," and that all have agreed to this on the military's unclassified systems.

They added that one company, which they declined to name, has agreed to have its model used across all systems, and that the Pentagon is "optimistic the rest of [the] companies will get there on classified settings in the near future."

Anthropic, through its policy views and links to Democrats, has at times clashed with the Trump administration.

Anthropic CEO Dario Amodei criticized President Trump in the lead-up to the 2024 election and recently swiped at the administration over its sales of AI chips to China at the World Economic Forum. The firm has also supported state-level AI regulations, while the administration has argued that state laws could stifle innovation.

The AI giant has brought on several former Biden administration officials, including Ben Buchanan, the former Biden AI adviser, and Tarun Chhabra, a former National Security Council official. But the company also added Chris Liddell, who was the deputy White House chief of staff during Trump's first term, to its board of directors earlier this month.

Last month, Defense Secretary Pete Hegseth said he is forming an "AI-first, war-fighting force" and appeared to take a swipe at Anthropic.

"We will not employ AI models that won't allow you to fight wars," the Pentagon chief said during a speech in Texas.