University of California, Riverside researchers have developed an artificial intelligence tool that detects manipulated videos by analyzing backgrounds and motion patterns rather than just faces, a response to the rapid proliferation of sophisticated tools for creating fake content.
The system, designated UNITE (Universal Network for Identifying Tampered and synthEtic videos), marks a significant advance in synthetic media detection. Working with Google, the researchers developed technology that scans entire video frames for subtle inconsistencies that reveal manipulation.
“Deepfakes have evolved,” said Rohit Kundu, the UC Riverside doctoral student who led the project. “They’re not just about face swaps anymore. People are now creating entirely fake videos — from faces to backgrounds — using powerful generative models. Our system is built to catch all of that.”
The breakthrough comes as video manipulation has become surprisingly easy. Text-to-video and image-to-video platforms now allow almost anyone to create convincing fake content with basic computer skills.
“It’s scary how accessible these tools have become,” Kundu explained. “Anyone with moderate skills can bypass safety filters and generate realistic videos of public figures saying things they never said.”
Traditional detection methods focus almost entirely on faces, making them useless when videos show no faces or alter backgrounds instead of people.
“If there’s no face in the frame, many detectors simply don’t work,” Kundu said. “But disinformation can come in many forms. Altering a scene’s background can distort the truth just as easily.”
UNITE addresses this problem through comprehensive frame analysis. The system uses a training technique called “attention-diversity loss” that forces the AI to attend to multiple regions of each video frame at once, preventing it from fixating on a single element, such as a person’s face.
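The researchers’ exact formulation isn’t reproduced here, but a minimal sketch can illustrate the idea. The PyTorch snippet below shows one plausible way to implement an attention-diversity-style penalty: it measures how much different attention heads overlap and adds that overlap to the training loss, so the heads are pushed to cover different regions of the frame. The function name, tensor shapes, and the lambda_div weighting are illustrative assumptions, not UNITE’s published code.

```python
# A minimal sketch of an attention-diversity-style loss, assuming a
# transformer whose heads each produce an attention distribution over
# spatial patches. This is NOT UNITE's published formulation; it only
# illustrates the idea of penalizing heads that all focus on the same
# region (e.g., a face) so attention spreads across the whole frame.
import torch
import torch.nn.functional as F

def attention_diversity_loss(attn: torch.Tensor) -> torch.Tensor:
    """attn: (batch, heads, patches) attention weights, rows summing to 1.

    Penalizes pairwise overlap between heads' attention maps, which
    pushes different heads toward different parts of the frame.
    """
    # Normalize each head's attention map to unit length.
    a = F.normalize(attn, dim=-1)                   # (B, H, P)
    # Cosine similarity between every pair of heads.
    sim = torch.bmm(a, a.transpose(1, 2))           # (B, H, H)
    h = attn.size(1)
    # Zero out self-similarity on the diagonal, keep cross-head terms.
    off_diag = sim - torch.diag_embed(sim.diagonal(dim1=1, dim2=2))
    # Mean off-diagonal similarity: low => heads cover diverse regions.
    return off_diag.sum(dim=(1, 2)).mean() / (h * (h - 1))

# Hypothetical usage: add the penalty to the classifier's loss.
# total_loss = ce_loss + lambda_div * attention_diversity_loss(attn_weights)
```

Penalizing pairwise overlap, rather than simply spreading a single attention map, keeps each head sharp while forcing the set of heads apart, which matches the article’s description of watching multiple parts of the frame at once.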
This breadth lets the system catch multiple categories of deepfakes: simple face swaps, complex background changes, and completely artificial videos are all flagged.
Professor Amit Roy-Chowdhury, who supervised the research, acknowledged the ongoing challenge. “Developing tools to detect fake content is a cat-and-mouse game,” he said in an earlier interview. “No system is perfectly secure.”
The team presented their findings at the 2025 Conference on Computer Vision and Pattern Recognition. Google’s partnership gave researchers access to massive datasets needed to train the AI on various types of synthetic content.
Social media companies, fact-checkers, and news organizations could eventually use UNITE to stop manipulated videos from spreading online. The tool remains in development but addresses urgent concerns about synthetic media’s impact on elections and public trust.
The researchers emphasized the broader implications of their work. As AI-generated content becomes more sophisticated and accessible, detection tools must evolve to match the threat.
“People deserve to know whether what they’re seeing is real,” Kundu said. “And as AI gets better at faking reality, we have to get better at revealing the truth.”