I’m a technology optimist, so when I see a sweeping piece of legislation like the “Big, Beautiful Bill,” which proposes a 10-year federal freeze on state-level AI regulation, I like to sit back, watch, wait and hope for the best.

But if you’re a healthcare executive planning for the next decade of AI-powered care delivery, you need to consider the implications of this bill. The hard truth is that AI is evolving faster than current legal and regulatory frameworks can keep up. If this bill passes, it could be a turning point that forces us to streamline AI regulations, assuming federal agencies establish a thoughtful and consistent standard.

If federal guidelines are well-designed, they could preempt state-by-state chaos; if done poorly, they could leave local governments powerless and hinder healthcare AI innovation.

AI regulation is a gnarly problem. Clinical decision support tools, such as diagnostic algorithms, have been around for years, but we are now layering on more complexity with large language models generating patient outreach, voice agents and even AI nurses, which will transform the healthcare experience.

As AI capabilities grow, states may scramble to define “responsible AI” rules, possibly without aligned technical or clinical expertise, which could lead to a fragmented web of regulations. Picture a provider network operating in California, New York and Florida. California may require model transparency rules that exceed federal standards. New York could propose complex licensing requirements for AI systems. Florida might wave through the same technology with little to no scrutiny. This isn’t regulation; it’s gridlock.

This isn’t a hypothetical concern. We’ve seen the consequences of fragmented, state-level regulation in healthcare before. Consider the early evolution of telemedicine. From the 1990s through the mid-2010s, telemedicine policy was shaped by a patchwork of inconsistent, state-driven rules.

Cross-state care was nearly impossible until the Interstate Medical Licensure Compact began issuing licenses in 2017. Texas, Georgia and Alabama required in-person visits before any virtual care could occur. Alaska permitted asynchronous communication, such as email or image uploads, while most states prohibited it. These disparities created barriers and delayed the adoption of telemedicine, which is now considered essential.

It took a global pandemic to prompt temporary federal flexibilities and eventual standardization. When foundational healthcare rules are built state by state, they could unintentionally entrench inefficiencies for decades.

We already have regulatory frameworks in healthcare for legacy AI, and we need to update them. When a company wants to bring a new stent, insulin pump or digital therapeutic to market, it is bound by FDA processes and guidelines for regulated medical devices and clinical decision support tools. Can you imagine the regulatory mess if medical device manufacturers had to seek state-level FDA approval?

The fact that the underlying technology is machine learning or statistics, rather than metal and plastic, doesn’t change the risk profile. A poorly designed AI triage model can cause just as much harm as a faulty device by misdiagnosing symptoms, delaying care or introducing bias. If it touches clinical outcomes, it needs to be treated like any other medical product.

No AI expert in healthcare recommends that AI go unregulated. What the healthcare industry needs is consistent and enforceable oversight. The FDA has already published guidance on machine learning-based software as a medical device, the legacy form of healthcare AI. Under these guidelines, the FDA has approved 900 devices that incorporate artificial intelligence or machine learning, with the majority used in radiology and imaging. For the FDA to update its AI guidelines for large language models, it will need to engage computer scientists, ethicists and domain experts who understand the new risk profiles.

Centralized, consistent and straightforward regulation may be the only way to avoid chaos and confusion in an industry that is changing this rapidly.

It’s on us, the health IT industry, which is quickly becoming the health AI industry, to help define what that looks like. This isn’t about who’s in the White House but about creating a healthcare future with member-centric intelligence at its center.

Anmol Madan is the CEO of RadiantGraph. He wrote this for InsideSources.com.