Anthropic is drawing criticism for a potentially misleading pop-up design used to collect consent for its new data training policy, a design critics say could lead millions of users to unknowingly share their private conversations.
The artificial intelligence (AI) company behind Claude announced this week that it will begin training its AI models on user chat transcripts. Users must decide by September 28 whether to opt out; those who don’t opt out will have their conversations stored for up to five years, up from the current 30 days.
Now critics say the company’s interface makes it too easy for users to accidentally agree to data sharing. The pop-up shows a large black “Accept” button while hiding the actual data sharing choice in smaller text below. A toggle switch for training permissions comes pre-set to “On.”

Anthropic’s consent pop-up. Source: Anthropic
The change affects users of the Claude Free, Pro, and Max plans. Business customers using Claude for Work, government, and education services are not covered by the policy.
Anthropic previously stood apart from competitors by not using consumer data for training. The company now states that the data will help enhance model safety and accuracy.
“By participating, you’ll help us improve model safety, making our systems for detecting harmful content more accurate and less likely to flag harmless conversations,” Anthropic stated in its blog post.
Legal experts are concerned that the design may violate European privacy rules. Such interface tricks, known as dark patterns, can render consent invalid under the General Data Protection Regulation (GDPR), and the European Court of Justice has ruled that pre-checked boxes do not constitute valid consent for data processing.
The policy mirrors broader industry trends as AI companies face pressure to find high-quality training data. OpenAI uses similar default settings for ChatGPT users, though it implemented this approach from the start.
Privacy advocates note that the timing appears driven by competitive pressure. “Like every other large language model company, Anthropic needs data more than it needs people to have fuzzy feelings about its brand,” TechCrunch editor Connie Loizos observed.
Users who want to opt out can flip the toggle when the pop-up appears or adjust the setting later under Privacy Settings. The policy applies only to new conversations started after users make their choice.
The company emphasizes that it won’t sell user data and that it uses automated tools to filter sensitive information. Still, data shared before a user opts out cannot be removed once model training on it has begun.
Anthropic has until late September to address concerns about its interface design before the deadline forces all users to make a choice.