Last summer, the Republican-controlled Congress tried but failed to include a provision in its budget reconciliation bill, the “One Big Beautiful Bill,” that would have banned state-level artificial intelligence regulation for 10 years. The provision was struck from the proposed legislation as “extraneous matter.”

Even a workaround offered by Sen. Ted Cruz, R-Texas, which would have preserved each state’s ability to regulate child safety and online privacy, fell short: the Senate voted 99-1 to strip the AI provision from the bill. In the fall, House Republicans considered adding a blanket moratorium on state-level AI regulation to the chamber’s annual National Defense Authorization Act, but they pulled back on that effort as well.

State legislatures, however, were active in 2025. In its October report, “The State of State AI: Legislative Approaches to AI in 2025,” the Future of Privacy Forum (FPF), a nonprofit organization, found that 210 bills that could directly or indirectly affect private-sector AI development and deployment were introduced in 42 states.

The FPF report identified three approaches to regulating private-sector AI: use- and context-specific regulations targeting sensitive applications; technology-specific regulations; and a liability-and-accountability approach that applies, clarifies or modifies existing liability regimes for AI. The report concluded that state lawmakers have moved away from sweeping AI regulatory frameworks and toward narrower, transparency-driven approaches.

Arguing that AI companies must be free to innovate without burdensome regulations imposed by 50 different states (and that removing such barriers is essential to U.S. global leadership in AI), the Trump administration has long stated a goal of regulating AI technologies at the national level rather than the state level.

To this end, in December, President Trump signed an executive order calling for the development of a national AI legislative framework that preempts state AI laws and directing the attorney general to establish an AI Litigation Task Force to challenge unlawful state AI laws that harm innovation. The order also directs the secretary of commerce to publish an evaluation of state AI laws that conflict with national AI policy priorities and to withhold non-deployment Broadband Equity, Access, and Deployment (BEAD) funding from such states.

Other federal agencies are directed to consider making the absence of such laws, or a policy of enforcement discretion toward any existing ones, a condition of relevant discretionary grant programs. The order further instructs the Federal Trade Commission and the Federal Communications Commission to take actions limiting states’ ability to compel AI companies to deceive consumers, and to consider adopting a federal reporting and disclosure standard for AI models.

Trump’s executive order lays the foundation for the federal and state legislative and judicial battles to follow in 2026. There may not be a significant body of legal authority to enforce some of the order’s provisions, including Commerce’s authority to withhold BEAD funding from states and the Justice Department’s authority to challenge state laws as unconstitutional burdens on interstate commerce. State attorneys general, from Democratic and Republican states alike, will surely file legal challenges to the constitutionality of the order’s preemption of state AI legislation.

Also, Trump’s special adviser for AI and crypto, David Sacks, and Michael Kratsios, assistant to the president for science and technology, are directed to engage Congress on developing and passing federal legislation to establish a “uniform federal policy framework for AI” that would preempt state AI laws, with the exception of state AI laws relating to child safety protections, AI computer and data center infrastructure (other than generally applicable permitting reforms), state government procurement, and “other topics as shall be determined.”

This AI policy framework should also “ensure that children are protected, censorship is prevented, copyrights are respected and communities are safeguarded.”

Nevertheless, states will continue to pass AI legislation in 2026. Scott Babwah Brennan, the director of New York University’s Center on Technology Policy, notes that “what really harms innovation is regulatory uncertainty, rather than more complicated compliance regimes.” 

The executive order potentially creates legal uncertainty for AI firms over which state AI laws will face legal challenges and which will not. The best negotiated regulatory outcome? Consensus federal AI policy framework legislation that recognizes states’ rights to regulate in prescribed areas, allowing the tech industry to adapt, with certainty, to state law in those areas.

Thomas A. Hemphill is David M. French Distinguished Professor of Strategy, Innovation and Public Policy in the School of Management at the University of Michigan-Flint. He wrote this for InsideSources.com.