Character.ai’s decision to cut off youth access to its chatbots, leaving teens “saying tearful goodbyes to their AI companions,” has brought the issue of youth chatbot and AI access back into public attention.

A variety of concerns have been raised about generative artificial intelligence systems such as chatbots. The suicide of a teenager who had been using Character.ai drew significant attention to the potential harms of youths’ unsupervised access to chatbots. This has led some to suggest that AI use should be restricted to adults, and now Character.ai has taken action.

While the death of any youth — particularly a preventable one — is tragic, restricting AI (or some forms of it) to adults is not a good solution. Instead, we should educate youth about what AI chatbots are, provide mechanisms for anyone — chatbots and their developers included — to issue urgent calls for help, and ensure that resources are in place to respond to every alert.

Age-restricting AI poses several challenges. The first is that it is likely to be ineffective. Youth have shown incredible tenacity in bypassing most age-restriction systems, whether by using a fake ID at a bar or by using a parent’s credit card and ID to access online content. Adult-access regulations may allow system operators to presume that minors are not using their service, relieving operators of a duty of care toward those users without actually stopping minors’ use.

As alcohol policy has shown, an age restriction may not have the desired effect. While studies have shown benefits from a minimum drinking age of 21, it has not eliminated underage alcohol use, because, simply put, “drinking is fun.” A study conducted at the time of Florida’s drinking-age change showed that the new law didn’t decrease alcohol use; instead, it changed how and where alcohol was consumed. Restricting similarly “fun” chatbots and AI tools may turn access into an aspirational goal, much as some see using a fake ID as a rite of passage.

In addition to likely being ineffective at preventing the targeted harms, AI age restrictions may reduce system operators’ concern with — and implementation of — mechanisms to aid minors, as operators can argue that any use by minors is illegal. Preventing access also keeps youth from benefiting from the positive aspects of AI systems and from learning about them. Most problematically, delaying AI use until 18 may simply shift the same challenges a few years later, to a time when young adults have less of a support system to help them learn about and address any negative consequences of AI use.

Education, not regulation, is the key to this problem. What AI is (and isn’t), how it works, and what it can and cannot do are part of coming of age in the 2020s. Students need to learn about AI in the same way they are taught about online misinformation and similar topics. This education can and should cover how AI can help students — in formal education and beyond — its limitations (such as hallucinations), and what is and isn’t real about it. These topics can be covered at age-appropriate levels throughout K-12 education and, for those who choose to pursue it, collegiate and career education. Libraries, senior centers and other public education providers can help to educate those outside of formal education.

In addition, the government can play a key role in addressing this issue without regulating the AI industry. A national, easy-to-use, liability-limited risk-reporting system should be developed that allows AI developers to report concerns and triggers an immediate response. The system should work with cellular phone providers and other internet service providers to quickly locate an individual potentially in distress — whether a teenager, a young adult or someone older — and provide aid.

Everything provided to this system should be treated as confidential, medical-type information and not be used against the individual or the reporting system operator, to encourage use and complete disclosure while also maintaining users’ privacy. A mental health first responder can then assess the situation and take appropriate action. The capability for intervention already exists (albeit with resource limitations in many areas); what a reporting mechanism adds is a link between chatbots and other AI systems and this existing resource.

AI is poised to help society tremendously through improving medicine, helping the paralyzed, combating crime, helping grow crops and making food, among many other uses. Education can help youth learn to use it effectively, as opposed to regulation attempting (likely ineffectively) to proscribe their AI use. The deployment of a national mental health “call for help” capability can turn AI systems into “good Samaritans” that identify when their users may need help and connect them with immediate support.

Jeremy Straub is the director of the North Dakota State University’s Institute for Cyber Security Education and Research. He wrote this for InsideSources.com.