
Artificial intelligence is developing faster than many of us can imagine and is becoming an integral part of everyday life. So far, businesses are the primary catalysts for this deployment. Studies show that within a year of a new form of AI being introduced, one-third of respondents reported their organizations were already using the technology in some form, and 40 percent expected to increase their investment in it.

As we saw with the development of computers, once the workforce gets a taste of an advantageous technology, it is unlikely to be dislodged. And that is why our policymakers must get off the sidelines and regulate AI.

Despite AI’s burgeoning usage, Americans are only starting to warm up to the technology, in large part because of the “unknown” factor. In the same vein, they are skeptical that elected officials grasp AI’s capabilities well enough to regulate it. Recent polling found that 57 percent of voters said they were extremely or very concerned about the government’s ability to regulate AI in a way that promotes innovation and protects citizens. While these voters are rightfully concerned and calling for action, it’s important that our lawmakers approach this issue delicately, neither applying too broad a brush nor letting the perfect be the enemy of the good.

Some legislators in Washington are catching on to this. Sen. Todd Young, R-Indiana, is part of the bipartisan Senate AI working group, which aims to create rules that protect Americans from AI while promoting its potential uses. At an event in October, Young said the United States should use a “light touch” when regulating AI. He emphasized the importance of harnessing the power of AI, stating that “our bias should be to let innovation flourish.”

He also argued that the government must safeguard Americans from the risks that come with AI. He said this will involve carefully filling the gaps in existing laws.

There are several possibilities for how to go about regulating AI. In October, the White House issued an executive order on AI that will require safety assessments, research on labor market effects, and consumer privacy protections. The Department of Defense also announced a Generative AI Task Force. In Congress, there is the Senate working group, while House Democrats have announced a group of their own working to enact legislation that authorizes more programs and policies geared toward harnessing the power of AI.

No matter what form it takes, the regulation of AI will be a tightrope walk: protecting national security without choking off potential innovation. Whatever future legislation and rulemaking look like, most voters have indicated they are concerned about the government’s actions so far. Discussion of how to regulate AI continues, but tangible movement is only beginning.

Americans, regardless of their political perspectives, are concerned about inaction. The ball is in Congress’s court to address these concerns and enact meaningful, bipartisan legislation that keeps pace with the development of AI by regulating its risks and capitalizing on its rewards.