The stories about bad advice from AI keep coming. One eating disorder treatment chatbot gave weight loss tips. Another offered teenagers advice on self-harm. A third, during testing, told a person with a substance use disorder to use methamphetamine.

As capable as AI systems have become, they remain deeply imperfect givers of advice and sometimes put their users in danger. In that light, it's not surprising that psychological professionals and policymakers would try to act. So far, however, some of the resulting laws appear likely to have effects opposite to the laudable safety goals their authors profess. In fact, some of them seem to ban the participation of the very professionals needed to improve AI safety.

Exhibit A is a new Illinois law, the state's "Wellness and Oversight for Psychological Resources Act." While allowing licensed mental health professionals to use AI for administrative tasks, the law outright bans AI from "directly interacting with clients in any form of therapeutic communication."

The law's definition of "therapeutic communication" is broad; it includes any guidance, emotional support or behavioral feedback aimed at promoting psychological growth. This might make sense to many at first blush: a well-trained, competent counselor, psychiatrist or psychologist can create human connections and draw on lived experience in ways no machine can. AI cannot and should not entirely supplant the psychological profession. Yet the law's approach flips safety on its head.

Here's why: Because it imposes a blanket ban, the law makes no distinction between safe and hazardous uses of AI. Under its plain language, if a therapist wanted to include even limited, controlled therapeutic AI interaction in a treatment plan (say, as a patient's journaling assistant, a role-play partner for practicing social skills, or a guide through structured anxiety exercises), that plan would be illegal, no matter how carefully the therapist monitored it.

Indeed, under the law, a licensed professional can’t legally oversee a study where people with mental health needs interact with an AI tool in a therapeutic context, even if the sole purpose is to improve the tool while catching and fixing potentially dangerous advice before release.

Illinois residents will continue to use AI chatbots on their own or, under the law's explicit safe harbors, with pastoral counselors, peer supporters, life coaches and commercial wellness apps. Incredibly, under Illinois law, a life coach with no mental health license can guide a client through an AI chatbot session, but a licensed psychologist cannot, even for research or safety testing.

Nevada has also banned AI therapy, and its law is flawed as well. Like Illinois, Nevada blocks licensed professionals from using AI in therapeutic contexts, eliminating supervised uses that could help clients. Its prohibition is somewhat narrower, allowing some educational applications that Illinois' sweeping definition of "therapeutic communication" would forbid. That makes Illinois' overreach the worse of the two, but Nevada's approach is still poor policy. Both states bar professionals from using tools that, with proper oversight, could make care safer and more effective.

Total bans are not only backward but unnecessary. Consumers had many protections before Illinois enacted its current standards. Under licensing rules nationwide, mental health professionals would face serious sanctions for letting AI work unsupervised, billing for unreviewed AI output, or using AI in a therapy plan without informed consent. AIs cannot legally prescribe drugs or make authoritative diagnoses. Utah and New York have strengthened these safeguards with disclosure requirements and other guardrails. Further clarifications and standards make sense.

Companies that develop AI models should also investigate ways for their systems to connect users directly to emergency services when a person shows signs of being a serious threat to themselves or others. To be sure, some therapeutic modalities may come to reject any use of AI. But some use of AI for "therapeutic communication" will take place whether or not licensed professionals take part, and it is far better to give those professionals a role than to ban them. Governments and professional standards bodies must determine how to establish appropriate safeguards.

Well-intentioned or not, laws banning AI in therapy make consumers less safe while stifling innovation. Illinois and Nevada should amend their laws. And policymakers should look at both laws as cautionary tales of public policies that accomplish the opposite of their authors’ intended goals.