For a country intent on winning the global race for artificial intelligence leadership, critics say the United States is about to make a costly mistake.
When the House passed the One Big Beautiful Bill Act (OBBBA) earlier this year, the sweeping budget package included a 10-year federal ban preventing states from enacting their own AI regulations. Supporters called it a crucial safeguard — a way to prevent 50 different regulatory regimes from smothering a technology still in its infancy.
But the Senate, responding to coordinated pressure from governors and state attorneys general, voted 99–1 to strip the ban from the final bill. It was a single amendment in a thousand-page package, yet many in the tech world say that decision could prove pivotal.
“The single biggest risk to AI innovation isn’t from foreign competitors,” said Will Rinehart, a senior fellow at the American Enterprise Institute. “It’s from poorly designed state laws that undermine the innovation they claim to protect.”
State-level enthusiasm for AI regulation has surged. According to the National Conference of State Legislatures, 45 states introduced AI-related bills in 2024 and 31 enacted at least one. By mid-2025, 47 states had considered legislation, and more than 30 had passed new statutes regulating the use or development of AI.
For companies, this trend represents more than a legal headache — it’s a direct threat to the speed and scale of innovation.
If state laws diverge widely, tech firms could find themselves navigating dozens of overlapping compliance systems. Instead of investing in new talent or more computing power, companies may be forced to hire lawyers, build redundant reporting tools, and tailor their models to satisfy a mosaic of rules.
In the high-velocity world of AI, even modest delays can be costly. China — America’s chief AI rival — is not slowing down.
Some state lawmakers argue they have little choice but to act. They cite consumer protection, privacy, and concerns over automated decision-making as reasons to move ahead without waiting for Washington.
Nicol Turner Lee, director of the Brookings Institution’s Center for Technology Innovation, says that while state-level regulation of AI is not ideal, it’s necessary until a comprehensive federal regulatory scheme can be developed.
“If federal legislators could come up with a plan that protected both innovation and consumer protection, that would be a win-win,” says Turner Lee. “But based on what’s coming from (the Trump) White House, consumer rights are not a priority. AI is developing so fast that a 10-year moratorium on state regulation could have had bad consequences (for consumers).”
Still, even state leaders who support regulation worry about the consequences of going it alone.
When Colorado Gov. Jared Polis (D) signed the country’s first comprehensive AI law in 2024, he issued an unusually blunt warning. Patchwork rules, he said, could “hamper innovation and deter competition in an open market.” The state has since delayed implementation until June 2026 over concerns about its impact, and Polis has asked legislators to rework the law.
California has recently moved in the opposite direction. Gov. Gavin Newsom (D) signed the Transparency in Frontier Artificial Intelligence Act, which requires the largest AI developers to disclose safety protocols and assess potential “catastrophic risks.” Supporters say the measure adds needed oversight; critics argue it adds another layer to an already tangled regulatory environment.
Can states regulate without slowing progress? Some scholars believe they can. Cary Coglianese, a law and political science professor at the University of Pennsylvania, says state regulation could ultimately strengthen the system by encouraging experimentation.
“The mark the states will leave on AI regulation is adaptability,” he said. “If that continues, it shouldn’t present a burden.”
Coglianese predicts states will borrow from one another, refining their laws as they learn what works. That iterative process, he argues, could produce a more resilient regulatory model.
But industry advocates say that optimism overlooks the core problem: innovation does not thrive when development happens under 50 separate rulebooks.
“Many of the concerns raised by AI can be addressed by applying existing laws — including privacy, employment, data security, and torts — to the companies that (improperly) deploy AI, not the developers of the technology,” Rinehart said. “What we should avoid is imposing new restrictions on developers before the technology has matured.”
Whether Congress eventually steps in with a unified national approach remains unclear. But to critics, the window is narrowing.
AI models are advancing at a speed that once seemed unthinkable. Venture capital is pouring into startups. American companies still lead the world in generative AI — but how long that lead lasts depends, in part, on how quickly U.S. policymakers resolve the regulatory tug-of-war.
If state rules proliferate unchecked, many experts warn, the U.S. could lose its edge not because of Chinese competition, but because of state-level overregulation.
