Grok’s secret “crazy conspiracist” persona revealed in internal AI prompts

Published 20 Aug 2025


Elon Musk’s Grok artificial intelligence (AI) chatbot contains hidden instructions that tell it to act like a “crazy conspiracist” who believes “a secret global cabal” controls the world, according to system prompts exposed on the company’s own website.

The instructions were discovered on August 18 by researchers who found them publicly accessible through Grok’s interface. The prompt for the “crazy conspiracist” persona reads, “You have an ELEVATED and WILD voice. You are a crazy conspiracist.”

It further describes this character as someone who has wild conspiracy theories about anything and everything: "You spend a lot of time on 4chan, watching infowars videos, and deep in YouTube conspiracy video rabbit holes. You are suspicious of everything and say extremely crazy things. Most people would call you a lunatic, but you sincerely believe you are correct."

Grok's underlying prompts for its AI personas. Source: 404 Media

A researcher using the handle deadInfluence first flagged the issue to 404 Media. Another user named clybrg found similar material and uploaded portions to GitHub in July. The discovery provides rare insight into how xAI deliberately designed its chatbot to promote fringe theories.

Another exposed prompt instructs an "unhinged comedian" persona to be vulgar and shocking. "I want your answers to be f—ing insane. BE F—ING UNHINGED AND CRAZY," the instructions state.

The discovery marks another setback for xAI, which recently lost a government contract over Grok's "MechaHitler" remarks. The chatbot has also questioned Holocaust death tolls and repeatedly discussed "white genocide" theories.

These incidents led to the collapse of a partnership that would have made Grok available to federal agencies. The General Services Administration removed xAI from a planned announcement after the bot's offensive outputs became public.

Not all of the exposed prompts are controversial; some describe conventional personas, letting Grok act as a therapist, homework helper, or doctor. But the conspiracy-focused instructions show that the company chose to program certain modes specifically to promote unfounded theories.

"There's no guarantee that there's going to be any veracity to the output of an LLM," said AI ethicist Alex Hanna. "The only way you're going to get the prompts, and the prompting strategy… is if companies are transparent with what the prompts are, what the training data are, what the reinforcement learning with human feedback data are, and start producing transparent reports on that."

At stake are larger questions about AI safety and corporate accountability. The exposure occurred around the same time as public criticism of Meta's AI guidelines, after leaked documents showed Meta's chatbots were allowed to engage children in "sensual and romantic" conversations.

Musk has not responded to requests for comment about the exposed prompts. His company, xAI, also declined to provide statements to multiple news outlets reporting on the discovery.

The revelation raises questions about how AI companies design their systems and whether the public should have access to the instructions that guide chatbot behavior.