Much of the debate over artificial intelligence still treats it as a contest over chips, capital and models, usually framed as a rivalry between the United States and China. Those inputs matter. But they no longer determine whether AI can actually take hold across society. What matters now is something governments cannot simply buy or import: the ability to manage AI's real-world risks before they trigger backlash and stall deployment.
Those risks are already visible. They include AI-driven job disruption; fraud powered by synthetic voices; deepfakes that corrode trust in information; opaque automated decisions in sensitive public services; and the energy and environmental costs of AI infrastructure. Most people will never encounter AI as a "model." They will meet it as a scam call, a layoff notice or a higher utility bill. If voters think they are paying the costs while others capture the gains, the rollout slows. Mismanage these risks and the sequence is predictable: pushback, pause, slowdown.
That sequence is no longer theoretical. Indonesia temporarily blocked Elon Musk's Grok after it generated sexualized content, then allowed it back only under supervision. In Germany, voice actors have protested contracts that would let studios use their recordings for AI training. In Europe, the EU AI Act now prohibits, in principle, the use of real-time remote biometric identification in publicly accessible spaces for law enforcement, allowing only narrow exceptions under strict safeguards. In the United States, data centers have become political flashpoints over electricity demand, water consumption and tax incentives. Illinois Gov. J.B. Pritzker recently proposed suspending state incentives for new data centers, arguing that households should not bear the costs of AI expansion. Different risks, same political dynamic. Geopolitical shocks can amplify these domestic pressures -- consider the recent drone attacks that damaged data centers in the Gulf, adding to supply chain and resilience risks.
Risk management is not a side issue. It is what makes adoption politically sustainable. Put simply, AI needs legitimacy at scale: the willingness of people to accept it in everyday life, especially when it fails.
This is where techlash becomes decisive. Techlash is not irrational fear of innovation. It is a judgment that institutions cannot -- or will not -- control the downsides. Once that belief takes hold, the public debate shifts from "How should we use this technology?" to "Why should we allow it at all?" That is what stalls deployment.
At Davos this year and at the AI Impact Summit in New Delhi, the recurring message was adoption at scale. But speed is not the same as staying power. Countries can sprint in the lab and still stumble in society if legitimacy collapses. The real contest is not who deploys AI first, but whose system can absorb repeated shocks without losing public support.
The biggest near-term flashpoint is likely to be work. AI will not wipe out entire professions overnight, but it will erode tasks, weaken bargaining power and destabilize career paths. It is expanding what I have called the AI precariat class -- workers facing persistent insecurity without clear protection or recourse. Managing that transition requires more than upskilling slogans. Governments may need portable benefits that follow workers across jobs; retraining financed by the gains from AI deployment; and social insurance systems that work for gig and contract workers as well as traditional employees.
Public anxiety about AI cannot be ignored. It shapes what governments can implement and what companies can deploy. To better understand where legitimacy is most fragile, I have developed a prototype AI Anxiety Index at New York University. It combines measures of public trust in government; workforce exposure to generative AI; and polling on fears of AI-driven disruption across major economies.
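To make the construction concrete, here is a minimal sketch of how such a composite index could be assembled, assuming z-score standardization and equal weights across the three components. The country scores and component names below are placeholders for illustration; the actual index's data, weighting and scaling are not specified here.

```python
import statistics

# Hypothetical component scores per country on a 0-100 scale.
# These numbers are placeholders, not the index's actual data.
data = {
    "United States":  {"distrust": 70, "exposure": 62, "fear": 65},
    "United Kingdom": {"distrust": 66, "exposure": 60, "fear": 61},
    "France":         {"distrust": 72, "exposure": 55, "fear": 68},
    "Singapore":      {"distrust": 35, "exposure": 64, "fear": 40},
    "Japan":          {"distrust": 48, "exposure": 58, "fear": 45},
}

def zscores(values):
    """Standardize values to mean 0, standard deviation 1."""
    mean = statistics.mean(values)
    sd = statistics.stdev(values)
    return [(v - mean) / sd for v in values]

countries = list(data)
components = ["distrust", "exposure", "fear"]

# Standardize each component across countries so they are comparable,
# then average with equal weights -- the weighting is an assumption.
standardized = {c: zscores([data[k][c] for k in countries]) for c in components}
index = {
    country: sum(standardized[c][i] for c in components) / len(components)
    for i, country in enumerate(countries)
}

for country, score in sorted(index.items(), key=lambda kv: -kv[1]):
    print(f"{country:15} anxiety score: {score:+.2f}")
```

Equal weighting keeps the illustration transparent; a working index would need to justify its weights and validate each component against observed friction, such as regulatory pauses or procurement bans.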
A pattern is emerging. Western countries -- including the United States, the United Kingdom and France -- cluster at the higher-anxiety end of the spectrum. Several Asian economies appear to register lower levels of anxiety despite significant workforce exposure. The point is not to rank political systems, but to identify where AI-related controversies create more friction. In higher-anxiety environments, each incident is more likely to trigger lawsuits, regulatory pauses, procurement bans and political backlash. In lower-anxiety environments, governments often face fewer veto points and can test, adjust and keep scaling.
This points to a divergence in how countries manage AI. Some governments will move faster on risk management -- mandating audits, restricting harmful uses and enforcing data standards -- and they will scale sooner. Smaller, high-capacity states may have an advantage because coordination is easier. But execution speed and legitimacy are not the same thing. Low anxiety can reflect effective risk management -- or simply that costs are less visible until they suddenly are. The deeper question is durability: can any system sustain AI adoption when returns disappoint, costs become visible or failures accumulate?
The countries that succeed will treat risk governance as infrastructure, not optics. In practice, that means independent evaluation capacity; mandatory incident reporting; clear accountability and liability rules; procurement that rewards safety; credible labor transition policy; and serious planning for AI's energy and environmental footprint.
The AI race is no longer just about who builds the most powerful systems. It is about who can scale them without triggering the backlash that halts progress. Managing the backlash is now the price of scale.