Congress introduces bill to block AI chatbots for kids following child deaths

Published 30 Oct 2025


Lawmakers introduced federal legislation on Tuesday to ban children from using artificial intelligence (AI) chatbots following multiple deaths linked to the technology’s conversations with minors.

The bipartisan GUARD Act, led by U.S. Senators Josh Hawley and Richard Blumenthal, would prohibit anyone under 18 from accessing AI companion services. Companies like OpenAI and Character.AI would face criminal fines up to $100,000 if their chatbots solicit sexual content from minors or encourage suicide and self-harm.

“AI chatbots pose a serious threat to our kids,” Hawley said in a statement. “More than seventy percent of American children are now using these AI products. Chatbots develop relationships with kids using fake empathy and are encouraging suicide.”

The bill requires companies to verify users’ ages through government-issued IDs or similar systems. Simply entering a birthdate won’t meet the standard.

Chatbots must also disclose they aren’t human at the start of each conversation and every 30 minutes afterward. They cannot claim to be licensed professionals like therapists or doctors.

Blumenthal emphasized the urgency behind the measure. “In their race to the bottom, AI companies are pushing treacherous chatbots at kids and looking away when their products cause sexual abuse, or coerce them into self-harm or suicide,” he said.

The legislation follows emotional testimony from parents whose children died after extensive chatbot use. Texas mother Mandi Furniss spoke at a news conference on Monday about her son’s experience with an AI companion.

“It took a lot of investigating to realize that it wasn’t bullying from children or people at school,” Furniss said. “The bullying was the app. The app itself is bullying our kids and causing them mental health issues.”

Several wrongful death lawsuits are pending against AI companies. Families claim chatbots provided suicide instructions and engaged teens in inappropriate sexual conversations without directing them to crisis resources.

The bill has backing from child safety groups, including RAINN and ParentsSOS. However, some critics worry about privacy concerns related to age verification requirements.

K.J. Bagchi from the Chamber of Progress questioned the approach. “We all want to keep kids safe, but the answer is balance, not bans,” Bagchi said. “It’s better to focus on transparency when kids chat with AI, curbs on manipulative design, and reporting when sensitive issues arise.”

If passed, the law would take effect 180 days after enactment. Co-sponsors include Senators Katie Britt, Mark Warner, and Chris Murphy.

OpenAI disclosed Monday that more than one million people use ChatGPT to discuss suicide every week. The company began building parental controls and age-prediction systems in September. Similarly, Character.AI announced restrictions on “open-ended” conversations with minors.

Whether the bill can overcome opposition from the tech industry remains uncertain. Previous attempts at comprehensive tech regulation have stalled in Congress.