The White House is scrambling to find its footing on AI policy, as the development of new, more powerful models forces the Trump administration to rethink its strategy on AI safety.
From the outset of President Trump's second term, the White House has promoted a pro-innovation, light-touch stance on AI regulation, prioritizing the U.S.'s competitive standing against other countries. As a result, AI battles in the White House and Congress have focused largely on efforts to preempt state AI laws deemed overly restrictive.
But the release of Anthropic's Mythos, the company's newest model capable of spotting decades-old security vulnerabilities, has shaken the administration's commitment to its typical hands-off approach, prompting discussions about heavier government involvement in new model rollouts.
Conflicting messages from administration officials and reports this week of a potential executive order on AI vetting have sparked panic from the tech industry and backlash from critics of strict AI regulation.
"The flip-flopping nature of the administration's tech response signals that there is no clear direction or leader driving the agenda," a former Trump White House official told The Hill Friday. "The whiplash distracts from the work we are doing to address the risks of AI today."
The back-and-forth began earlier this week when The New York Times reported the White House is considering vetting AI models before they are released. Politico reported a day later the White House floated an order creating a "vetting regime" that would require AI companies to be approved by the government before releasing models.
National Economic Council Director Kevin Hassett hinted at something similar Wednesday.
"We're studying possibly an executive order to give a clear roadmap to everybody about how this is going to go and how future AIs that also potentially create vulnerabilities should go through a process so that, you know, they're released in the wild after they've been proven safe, just like an FDA drug," Hassett said on Fox Business's "Mornings with Maria."
The comment immediately sparked concerns from AI industry players, many of whom argued that an FDA-like approval process would align more closely with the Biden administration's cautious approach to AI than with Trump's deregulatory posture.
"From day one, the Trump administration rejected the Biden-Harris approach to AI," Neil Chilson, head of AI policy at the Abundance Institute, and Adam Thierer, resident senior fellow with the technology and innovation team at the R Street Institute, wrote in a joint post Thursday.
"That's why Hassett's comments have caused a stir," they continued. "Adopting an FDA-style regulatory regime for AI would represent a shocking policy reversal by the Trump administration, and a major about-face on how America has approached software, online speech, and digital commerce."
Asked about these reports, a White House official said policy announcements will come directly from Trump and that "discussion about potential executive orders is speculation."
"There is no shifting messaging. The White House continues to balance advancing innovation and ensuring security in our AI policymaking," the official added in a statement.
Some in the tech policy space warned such pre-approval systems could give federal officials a "kill switch" to quash speech and stifle innovation.
"Requiring pre-launch approval was criticized as heavy-handed and anticompetitive when included in the Biden administration's executive order on AI," Jennifer Huddleston and Juan Londoño of the libertarian think tank Cato Institute said Thursday.
Hours after Hassett's comment, White House Chief of Staff Susie Wiles appeared to try to quell some of these fears, writing on X that the administration is "not in the business of picking winners and losers."
"This administration has one goal: ensure the best and safest tech is deployed rapidly to defeat any and all threats," Wiles wrote. "We appreciate the effort being made by the frontier labs to ensure that goal is met."
This is not the first time Trump has abruptly shifted his stance on a major technology policy issue. During his first term, Trump criticized TikTok and pushed for banning the social media app in the U.S. But by the start of his second term, he cut a deal to preserve the app.
He made a similar switch on cryptocurrency, dismissing it as a "scam" in his first term before welcoming its political and financial benefits during the 2024 campaign.
The White House's shift comes at a moment when Americans are increasingly worried about AI and its potential impacts on society. In a Quinnipiac poll released in late March, 80 percent of U.S. adults said they were concerned about the technology.
While the prospect of a mandatory vetting process stoked alarm, AI firms have agreed, at least voluntarily, in the past to share their models with the government for evaluation ahead of a release. The Center for AI Standards and Innovation (CAISI), housed within the National Institute of Standards and Technology, has evaluated models from OpenAI and Anthropic since 2024.
Amid this week's rumors, NIST announced three leading AI companies -- Google DeepMind, Microsoft and xAI -- agreed to also share their models for government testing ahead of release.
While it's unclear where the Trump administration will ultimately land on a vetting process, AI safety has become a more prominent issue for the White House, seemingly in part due to Mythos.
The AI model, which Anthropic did not release publicly, can help institutions spot and patch security vulnerabilities more quickly. But it may also be a double-edged sword, empowering hackers to find and potentially exploit these flaws.
"What we had in the last month was a step change in the power of one large language model," Treasury Secretary Scott Bessent told Fox Business' "Sunday Morning Futures," adding he expects to see the same from other companies.
Bessent echoed the administration's typical emphasis on the U.S. staying ahead but added it was not mutually exclusive with safety.
"Imagine if China or some non-state actor were ahead of us," Bessent continued. "So what we're determined to do is work with our AI companies to allow them to continue to innovate."
"But our charge in the U.S. government is maintaining safety," he added. "And there is a very important calculus here between innovation and safety. And the US government, we're going to make sure that things stay safe."
The Trump administration's shifting approach to AI comes as Bessent and other officials take the reins of the issue after David Sacks, Trump's AI and crypto czar, departed the White House earlier this year.
Sacks, an early PayPal executive and prominent venture capitalist, largely favored a hands-off strategy for AI regulation.
"It looks to me like that was not a...well-considered, whole-of-government, really thoroughly endorsed position on how to do AI regulation," Helen Toner, interim executive director at the Georgetown Center for Security and Emerging Technology, said during the AI+Expo in Washington Thursday, "but just he was the person in the room who had a lot of thoughts, having strong views on AI, and that sort of carried the day."
In an April call, Vice President Vance told major AI leaders "we all need to work together" on the issue, The Wall Street Journal reported Thursday.
In a sign of how seriously the administration is taking Mythos, Anthropic CEO Dario Amodei met with White House officials in mid-April, less than two months after Trump directed federal civilian agencies to stop using the company's technology following a dispute between the AI firm and the Pentagon.
The Defense Department labeled Anthropic a supply chain risk, a designation typically reserved for foreign adversaries. Even as the White House appears to extend an olive branch, the Pentagon has shown little interest in reconciliation.