Racist and antisemitic videos created with Google’s Veo 3 video generator have gained millions of views on TikTok since May, prompting platform removals and exposing critical failures in artificial intelligence (AI) safety measures.
One video created with Veo 3 reached 14.2 million views on TikTok before removal. Media Matters published these findings on July 1, documenting AI-generated content that depicted Black people as monkeys and criminals while promoting harmful stereotypes about multiple racial groups.
Media Matters identified the videos by their “Veo” watermarks and eight-second runtimes, both of which match the tool’s specifications. Researchers found content targeting Black Americans, Asian people, Jewish communities, and immigrants across TikTok, Instagram, and YouTube.
“It’s both disgusting and disturbing that these racial tropes and images are readily available to be designed and distributed on online platforms,” said Nicol Turner Lee, director of the Center for Technology Innovation at the Brookings Institution, to WIRED.
Google launched Veo 3 in May 2025, marketing it as a text-to-video generator with built-in safety protections. The company’s policies explicitly prohibit content that promotes hate speech or harassment. Those safeguards apparently failed when users employed indirect prompts, such as depicting animals instead of people, to slip past filters designed to catch hateful content targeting humans.
TikTok removed the remaining identified accounts after the report’s publication; many had already been banned before Media Matters completed its investigation.
“We proactively enforce robust rules against hateful speech and behavior,” said TikTok spokesperson Ariane de Selliers.
The videos used animal imagery to bypass content filters while promoting racist ideas. One clip depicting police shooting a “Black one” drew 14.2 million views. Another, showing monkeys in a restaurant with watermelon and fried chicken, reached 600,000 views.
Comments showed viewers understood the racist messages. “Bro even AI has black fatigue,” one viewer wrote on a monkey airplane video.
Some creators monetized the hateful content by selling $15 courses teaching others how to generate similar videos using Veo 3. The tutorials walked students through prompting techniques that circumvented Google’s safety measures.
The situation extends beyond individual videos to reveal systemic problems with AI development and social media governance. Content moderation systems struggle to detect AI-generated hate speech, particularly when it relies on coded language and visual symbolism.
Google has not responded to requests for comment about the Veo 3 safety failures. The company plans to integrate the video generator into YouTube Shorts, a move that could expand the reach of similar content.
The incident follows a pattern of AI tools reproducing societal biases despite technical protections. Previous research has found that AI systems consistently exhibit racial bias, assigning negative traits to speakers of African American English and perpetuating historical stereotypes.
TikTok’s enforcement challenges reflect broader industry struggles with AI-generated content. The platform removes less than 1% of published videos for policy violations, according to its own reports. Short video formats make contextual moderation particularly difficult, while algorithmic distribution can amplify harmful content.
The viral spread shows how AI tools can weaponize old propaganda techniques. Racist imagery that once required artistic or technical skill can now be generated instantly from text prompts, lowering the barrier to creating and spreading hate speech.