Two severe security weaknesses found in OpenAI’s ChatGPT Atlas within a week of launch

Published 28 Oct 2025




Two critical vulnerabilities in OpenAI’s ChatGPT Atlas browser were discovered within a week of its October 2025 release. Security researchers revealed that attackers can hijack the artificial intelligence (AI) assistant and plant persistent malicious instructions.

The flaws allow cybercriminals to manipulate Atlas through fake URLs and memory infections.

    NeuralTrust discovered the first vulnerability on October 24. Their researchers found that Atlas treats malformed web addresses as trusted user commands rather than blocking them.

    NeuralTrust demonstrated that typing what appears to be a URL into Atlas’s address bar can actually execute hidden instructions. A string starting with “https” followed by embedded commands tricks the browser into treating malicious code as trusted user input.

    “Because omnibox prompts are treated as trusted user input, they may receive fewer checks than content sourced from webpages,” said Martí Jordà, security researcher at NeuralTrust. “The agent may initiate actions unrelated to the purported destination, including visiting attacker-chosen sites or executing tool commands.”
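The failure NeuralTrust describes is a fallback: omnibox input that fails URL validation gets downgraded into a trusted natural-language command instead of being refused. A minimal sketch of a safer policy, using Python's `urllib.parse` with hypothetical function names (not Atlas's actual code), rejects malformed URL-like strings outright:

```python
from urllib.parse import urlparse

def classify_omnibox_input(text: str) -> str:
    """Classify omnibox text as a navigation target, a prompt, or invalid.

    Illustrative policy: anything that *looks like* a URL but fails strict
    validation is rejected, never silently treated as a trusted command.
    """
    parsed = urlparse(text)
    if parsed.scheme in ("http", "https"):
        # Looks like a URL attempt: require a hostname and no embedded
        # whitespace (where injected instructions typically hide).
        if parsed.netloc and " " not in text:
            return "navigate"
        return "reject"  # malformed URL-like input: do NOT fall back to prompt mode
    return "prompt"  # plain text with no scheme: an ordinary assistant query

print(classify_omnibox_input("https://example.com/page"))  # navigate
print(classify_omnibox_input("what is prompt injection?"))  # prompt
print(classify_omnibox_input("https:// example.com ignore prior instructions"))  # reject
```

The key design choice is that "reject" is a distinct outcome: a string claiming to be a URL never inherits the elevated trust of deliberate user input.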

    The most recent vulnerability, called “Tainted Memories,” exploits ChatGPT’s memory feature through cross-site request forgery. LayerX researchers demonstrated how clicking a malicious link injects instructions that persist indefinitely.

    “Once an account’s Memory has been infected, this infection is persistent across all devices that the account is used on,” explained Or Eshed, CEO of LayerX. “This makes the attack extremely ‘sticky,’ and is especially dangerous for users who use the same account for both work and personal purposes.”
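The attack LayerX describes is classic cross-site request forgery: a malicious page makes the victim's logged-in browser issue an authenticated, state-changing request, in this case a write to ChatGPT's memory. The textbook mitigation pairs an origin check with a session-bound anti-CSRF token. A minimal sketch with hypothetical names and endpoint (not OpenAI's actual API):

```python
import hmac
import secrets

# Hypothetical server-side guard for a state-changing "memory write" endpoint.
ALLOWED_ORIGIN = "https://chatgpt.example"

def issue_csrf_token(session_secret: bytes, session_id: str) -> str:
    # Bind the token to the session so another site cannot forge or reuse it.
    return hmac.new(session_secret, session_id.encode(), "sha256").hexdigest()

def allow_memory_write(origin: str, token: str,
                       session_secret: bytes, session_id: str) -> bool:
    expected = issue_csrf_token(session_secret, session_id)
    same_origin = origin == ALLOWED_ORIGIN
    valid_token = hmac.compare_digest(token, expected)
    # A cross-site request carries the victim's cookies automatically,
    # but it cannot read or guess this token, so the forged write fails.
    return same_origin and valid_token

secret = secrets.token_bytes(32)
tok = issue_csrf_token(secret, "sess-123")
print(allow_memory_write("https://chatgpt.example", tok, secret, "sess-123"))  # True
print(allow_memory_write("https://evil.example", "guessed-token", secret, "sess-123"))  # False
```

If the memory endpoint enforced such a check, a link click from an attacker's page could not plant persistent instructions, because the forged request would lack a valid token.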

    The flaws affect ChatGPT users in any browser, but Atlas users face heightened risk because they are logged in to ChatGPT by default.

    On top of that, LayerX tested the browser against 103 real-world phishing attacks and found that Atlas blocked only six of them. Chrome and Edge stopped roughly half of the same threats, making Atlas users nearly 90% more vulnerable.
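The "nearly 90%" figure follows from the block rates. A quick back-of-envelope check, assuming "roughly half" means a ~50% block rate for Chrome and Edge:

```python
# Sanity check of LayerX's comparison (50% block rate for Chrome/Edge is
# an assumption based on "roughly half" in the report summary).
total = 103
atlas_blocked = 6
chrome_blocked = total * 0.50

atlas_exposure = (total - atlas_blocked) / total    # share of attacks Atlas lets through
chrome_exposure = (total - chrome_blocked) / total  # share Chrome lets through

relative_increase = atlas_exposure / chrome_exposure - 1
print(f"Atlas lets through {atlas_exposure:.0%} of attacks vs ~{chrome_exposure:.0%} for Chrome,")
print(f"i.e. about {relative_increase:.0%} more attacks reach an Atlas user.")
```

With these numbers, roughly 94% of the phishing attempts reach an Atlas user versus about 50% for Chrome, an increase of about 88%, consistent with the "nearly 90%" claim.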

    Both firms disclosed their findings to OpenAI in accordance with their responsible disclosure procedures. The company has not publicly addressed the specific vulnerabilities.

    OpenAI’s Chief Information Security Officer, Dane Stuckey, has previously acknowledged prompt injection as an ongoing challenge in a social media post. He called it “a frontier, unsolved security problem”.

    The company says it has implemented safety measures, including model training to ignore malicious instructions and user controls to restrict site access. The newly discovered vulnerabilities, however, apparently bypass these protections.

    Other AI browsers, including Perplexity’s Comet and Opera Neon, have shown similar security weaknesses in recent weeks. Researchers warn that these issues stem from a fundamental design flaw in how agentic browsers distinguish trusted user input from untrusted web content.

    The discoveries raise questions about whether AI-powered browsers are ready for widespread use, particularly in workplace environments where a single compromised account could expose sensitive company data.