The Problem with AI Secrecy | Opinion

Source: Newsweek

The problem was never the physics, but the belief that a handful of governments could control the future by withholding the bomb from others.

In the spring of 1944, Danish physicist Niels Bohr traveled through darkened London, then Washington, with a proposal so simple it sounded naïve: tell the Soviets the bomb exists; build assurances around that admission; prevent an arms race. Secrets, Bohr said, are porous where nature is concerned. What one country's scientists could figure out, other countries' could as well. As the historian Richard Rhodes wrote in The Making of the Atomic Bomb, Bohr realized that "nuclear fission and thermonuclear fusion are not acts of Parliament; they are ... beyond the power of men to patent or to hoard."

But the American decision after the war moved the other way. Congress declared atomic knowledge "born secret." Britain, cut off, built its own bomb. The Soviet Union detonated its first device four years after Hiroshima. The hydrogen bomb followed, then China's. Hungarian-born physicist Leo Szilard and others had predicted this outcome in the Franck Report: "If no efficient international agreement is achieved, the race for nuclear armaments will be on in earnest."

After the war, J. Robert Oppenheimer tried to articulate the paradox: fewer restrictions on knowledge, by easing tensions and building trust, would make the world a safer place, not a more dangerous one. But without such trust, countries locked in an arms race were like "two scorpions in a bottle," risking mutual annihilation.

Eighty years later, the "AI model" is the newest secret. Unlike a bomb, a model cannot be contained within physical boundaries. It is cheap, moves at the speed of light, and copies and proliferates without limit. Nuclear deterrence rests on controlling access to fissile material; replicating code takes little more than a password and motivated programmers.

Today the United States and China are locked in a race to hoard the best minds, GPUs, and algorithms. Export controls restrict chips and model weights, but the controls are already fraying. Meta's LLaMA model leaked in 2023 and seeded an ecosystem of imitations and refinements that moved faster than any licensing system could. China's open-source DeepSeek model replicated the capabilities of frontier American models at a fraction of the cost, using older chips.

Until now the response has been to keep piling on restrictions, in the hope of reaching some magic tipping point at which our technology finally stops leaking or being copied. But we forget our postwar lessons: exclusion acts as a stimulant, and total secrecy as an accelerant. Restricting China's access to American technology has only pushed it to double down on developing that technology itself.

We propose a system of "partial sharing" that minimizes ruinous competition and maximizes mutual benefit. Let's begin with candor about capability. Bohr did not want blueprints on the table. While sovereign nations have legitimate secrets to safeguard, the public has a right to know what systems can do and how they fail. Model builders should describe the domains where a system applies, the limits of its capabilities, and the conditions under which it breaks down. None of this requires disclosing the weights or the particular data that gave the model its power. It does require a discipline of precise description -- the opposite of the hype we see so often.

Next, create a place to look together. The nuclear age invented inspections and counting rules; the digital age can invent neutral testbeds. Models that cross agreed thresholds would be examined on common infrastructure by international observers who can reproduce claims, write down incidents in a format others can read, and pick up a phone when something dangerous appears. As with John F. Kennedy and Nikita Khrushchev's hotline, installed after the Cuban Missile Crisis, the phone should ring across borders.

Third, scrutinize the right levers. The mechanisms that enable replication -- the model weights themselves, the precise optimization subroutines, the engineering tricks that enhance model capabilities -- deserve tight custody. The mechanisms that enable governance -- the testing methods, risk summaries, and conditions under which systems are allowed or delayed -- deserve daylight. Governments could require that high-risk systems publish safety guidelines before wide release.

Finally, encourage open-source models so everyone works from a common foundation. Secret systems give bad actors a place to hide. DeepSeek did the right thing by releasing its model openly. America can become the global leader, and set the standards, by building the open-source models everyone uses.

Secrecy buys time the way a sandbag buys time against a rising river. Use it, but do not pretend it is a dam. Oppenheimer's plea for openness defined the stakes. We can choose to rely on brittle secrets and feel temporarily secure. Or we can say plainly what exists, how it is tested, and when it is dangerous -- all while keeping the blueprints in the drawer. That is not idealism. It is the least that realism requires when knowledge multiplies.

Ash Jogalekar is scientist-in-residence at the Oppenheimer Project, where he works on emerging technology risks. Trained as a chemist, he has been a researcher in biotechnology and computational chemistry for 15 years.

Charles Oppenheimer is founder and CEO of the startup Oppenheimer Energy, focused on nuclear energy deployment, and the nonprofit Oppenheimer Project, which supports J. Robert Oppenheimer's vision of international cooperation and peaceful use of fission. He spent 20-plus years in Silicon Valley with roles ranging from software programmer to CEO.