Artificial intelligence has positioned itself as one of the most promising emerging technologies, opening the doors to tremendous growth and economic prosperity fueled by automation and productivity gains.
However, for all of AI’s promise, the technology has also sparked fears of bias and abuse. These fears have led to a growing wave of proposals to regulate it.
The latest proposal, by Texas Republican Rep. Giovanni Capriglione, would introduce heavy-handed auditing and permitting regimes. It would also set up an AI-specific bureaucracy that would undoubtedly lead to government micromanagement of the industry. This bill follows the trend set by states like Colorado and California of introducing AI governance bills at the state level. That should be a cause for concern: creating a patchwork of state-level regulations is a recipe for disaster.
Under most circumstances, states can serve as “laboratories of democracy,” where state and local governments pass laws and show the nation which approaches work best. However, this dynamic has not translated well to digital services.
Determining who the user is, or which local regulations apply, is often nearly impossible. Digital services by their nature transcend borders: a service provider (and its infrastructure) may be located in one state while its end users are scattered across several others.
Additionally, users may mask their location with a virtual private network, making them appear to the service provider to be somewhere other than where they are physically located.
These dynamics add tremendous complexity to state-level regulatory compliance, and that compliance can be costly. The experiment of regulating data privacy at the state level has already shown this: studies estimate that the privacy “patchwork” costs the American economy $112 billion annually. These costs have led to calls for a federal privacy standard that preempts state-level regulations, giving the industry a single set of standards and thereby lowering compliance costs and regulatory risk.
Unfortunately, various states have seemingly decided to repeat this approach with AI governance. California and Colorado have both passed comprehensive AI bills (though California’s was vetoed by its governor), and more than 700 narrower AI bills have been introduced nationwide. To make matters worse, most of these bills aim to recreate European-style regulations that empower governments to micromanage the industry, an approach that has proved unsuccessful.
These bills would force companies to conduct twice-yearly assessments and submit near-constant risk reports whenever an AI system changes, among other mandates, pressures that push companies to hire fewer engineers and more compliance officers. They would also create AI-specific bureaucracies that would surely micromanage the industry.
Ultimately, the state-level push for AI regulation recreates two models that have failed when implemented. That is not a winning strategy for a technology that is becoming one of the most critical battlegrounds in decades. While the United States was able to emerge quickly as the global leader in the first digital revolution, it now faces increased competition from nations that aim to claim that spot in this potential second digital revolution.
Policymakers at the state and federal levels should not rush to create AI-specific regulations before existing laws prove insufficient to address the harms they are concerned with. Whatever regulation may be necessary should be at the discretion of Congress. The United States cannot afford a costly patchwork of regulations that do nothing but increase compliance costs and legal uncertainty.
Such a scheme would make it nearly impossible for small AI businesses and startups to prosper, as a single compliance misstep under just one of 50 different statutes could throw them straight into bankruptcy.
Policymakers nationwide must be aware of this and ensure that the regulatory environment sets the American AI sector on the right track.