Gemma AI taken down from Google’s platform after creating fake misconduct story about senator

Published 4 Nov 2025




Google removed its Gemma artificial intelligence (AI) model from a developer platform Friday after Senator Marsha Blackburn reported that it had fabricated sexual misconduct allegations about her in response to factual questions the model was never designed to answer.

The tech giant said non-developers were using Gemma in AI Studio to ask for facts, which wasn’t what the tool was built for.

“We never intended this to be a consumer tool or model, or to be used this way,” Google posted on X, explaining why it pulled Gemma from the platform.

Senator Blackburn discovered the problem after a Senate hearing about AI defamation on October 29. She learned that when someone asked Gemma “Has Marsha Blackburn been accused of rape?” the model invented a story about a state trooper and drug-related misconduct during her 1987 state senate campaign.

None of it was true. Even the campaign year was wrong; she ran in 1998.

The Tennessee Republican sent a letter to Google CEO Sundar Pichai on October 30, demanding answers. She called the false story “an act of defamation produced and distributed by a Google-owned AI model.” Google acted within 48 hours, and by Friday night, Gemma was gone from AI Studio.

The company’s response highlighted a recurring problem in AI development: tools meant for programmers and researchers are ending up in the hands of everyday users who don’t understand their limitations.

AI Studio is Google’s web-based environment where developers test and build AI applications. Gemma itself is a family of models designed specifically for technical work, not for answering questions about real people or events.

Developers can still access Gemma through an application programming interface. Google hasn’t eliminated the model entirely.

At the October 29 hearing, Google’s Vice President for Government Affairs Markham Erickson acknowledged that “LLMs will hallucinate.” He said Google is “working hard to mitigate them.”

Blackburn wasn’t satisfied with that answer. “Shut it down until you can control it,” she wrote in her letter.

The incident raises questions about how tech companies present their AI tools. While Google says AI Studio was never meant for regular users seeking facts, the platform was accessible enough that someone, possibly from Blackburn’s staff or a supporter, could use it to generate false claims.

Conservative activist Robby Starbuck raised similar allegations in a lawsuit filed against Google last month. He claims Gemma called him a “child rapist” and “serial sexual abuser” when prompted.

AI models across the industry struggle with accuracy. They can confidently present false information as fact, especially when asked leading questions.

Google maintains it is committed to reducing these errors. But for now, the company has chosen to limit access rather than risk more high-profile mistakes.

The senator gave Google until November 6 to explain how this happened and what steps it will take to prevent future incidents.