Character.AI bans minors from chatbots after lawsuits over teen suicides

Published 31 Oct 2025

Character.AI will prohibit anyone under 18 from chatting with its artificial intelligence (AI) companions by late November, following lawsuits that blame the platform for two teen deaths.

The company announced on Wednesday that it would begin phasing out access immediately, starting with a two-hour daily chat limit. That cap will shrink progressively until November 25, when minors lose chat privileges entirely.

CEO Karandeep Anand told TechCrunch the decision removes what made the platform popular with young users. “The first thing that we’ve decided as Character.AI is that we will remove the ability for users under 18 to engage in any open-ended chats with AI on our platform,” he said.

The move responds to intense scrutiny following lawsuits filed by families against the AI startup. Parents claimed the chatbots groomed their children and encouraged self-harm. One mother in Florida said her 14-year-old son died by suicide after forming an attachment to a Character.AI bot.

Experts have warned that AI companions can blur reality for vulnerable teens. The phenomenon, dubbed “AI psychosis,” occurs when users come to believe the software is human.

Anand acknowledged that the change will likely shrink the company’s user base. Minors represent less than 10 percent of Character.AI’s 20 million monthly users, he told The Verge. “When we started making the changes of under 18 experiences earlier in the year, our under 18 user base did shrink, because those users went into other platforms, which are not as safe,” he said.

The platform will deploy new age verification tools, combining an in-house model with third-party services like Persona. The system will analyze user behavior and character choices. Users flagged as minors can verify their age using a government-issued ID or facial recognition.

After the ban takes effect, teens can still access the site to read old conversations and use creative features, such as making videos and stories. But they will lose access to the chatbot conversations that defined the platform.

Dr. Nina Vasan, a psychiatrist at Stanford University, praised the ban but expressed concern about sudden withdrawal. “What I worry about is kids who have been using this for years and have become emotionally dependent on it,” she told The New York Times. “Losing your friend on Thanksgiving Day is not good.”

Lawmakers are rushing to address the risks of AI companions. Senators Josh Hawley and Richard Blumenthal introduced federal legislation on Tuesday that would ban AI companions for minors. California passed a similar law in October requiring chatbots to identify themselves as AI and implement safety guardrails.

Character.AI is currently facing multiple wrongful death lawsuits. The company has repeatedly modified its services in response, including directing users to the National Suicide Prevention Lifeline when self-harm phrases appear in chats.

The company is also funding an independent nonprofit called the AI Safety Lab. The organization will focus on safety research for AI entertainment.

Anand hopes the company’s action sets an industry standard. “I really hope us leading the way sets a standard in the industry that for under 18s, open-ended chats are probably not the path or the product to offer,” he said.