Hundreds of Google employees are pressing the company's CEO to reject any deal with the Pentagon to use the company's artificial intelligence in classified settings, echoing the warnings Anthropic raised before it was banned from military work earlier this year.
The letter, signed by more than 600 employees at Google DeepMind and Google Cloud, comes nearly two months after Anthropic was dropped by the Department of Defense for requesting guardrails around AI used for domestic mass surveillance or autonomous weapons.
The letter, sent Monday to Google CEO Sundar Pichai, argues Google currently has no way to guarantee its tools would not cause unmonitored harm.
"As people working on AI, we know that these systems can centralize power and that they do make mistakes," employees wrote in the letter, a copy of which was shared with The Hill. "We feel that our proximity to this technology creates a responsibility to highlight and prevent its most unethical and dangerous uses."
The Information reported earlier this month that Google is negotiating an agreement with the Pentagon to deploy its Gemini AI models in classified settings. The agreement reportedly would allow the Pentagon to use Google's AI for all lawful purposes.
The parties also discussed language to prevent its AI from being used for mass surveillance or autonomous weapons without human control, The Information reported, but signatories on Monday's letter argued enforcing these provisions in practice is not possible.
"The only way to guarantee that Google does not become associated with such harms is to reject any classified workloads," the letter said.
Google currently has a contract with the Pentagon to use its AI models on non-classified workloads through the genAI.mil platform. The employees warned any approval of Google's AI in classified work could cause "irreparable harm to Google's reputation, business and role in the world."
"A lot of it comes down to what technical safeguards companies can put in place; but the DoD specifically prohibits any controls," one of the letter's organizers said in a press release. "If leadership is truly serious about preventing downstream harms, they must reject classified workloads entirely for now."
The Hill reached out to the Pentagon and Google for comment.
The issue of the Pentagon's use of AI was thrown into the spotlight earlier this year after the defense agency labeled Anthropic a supply chain risk when it would not agree to let its models be used for any lawful purpose. Anthropic has sued the Trump administration over the designation, which is usually reserved for foreign adversaries.
Hundreds of Google and OpenAI employees signed a letter in support of Anthropic at the time.
Within hours of the designation in late February, OpenAI, the maker of ChatGPT, struck a deal with the DOD. The move quickly drew backlash, and CEO Sam Altman later said the company asked for additions to the contract regarding domestic surveillance, admitting the deal "looked opportunistic and sloppy."