Elementary and high school students across Japan are using artificial intelligence (AI) tools to create fake sexual images of their classmates, prompting over 100 police reports in the past year as authorities struggle with laws that don’t cover digitally generated content.
The perpetrators range from elementary school children to high schoolers. They scrape photos from school events, graduation albums, and sports team groups to feed into easily accessible AI websites.
Police received the reports throughout 2024, with most victims being junior high and high school students. In many cases, the creators knew their victims personally.
“I used generative AI to edit a picture of the face of a girl I like,” an elementary school boy in eastern Japan told police after allegedly creating a fake nude image of a junior high school girl from his sports class.
The National Police Agency plans to investigate about 10 image-generating websites and apps that are popular among students. Officials will use their findings for criminal cases and education programs targeting young people.
Current Japanese law creates a dangerous gap. The country’s child pornography prevention law, passed in 1999, only covers real, identifiable children. AI-generated images fall outside this protection unless the child can be clearly identified.
“The current law was designed to protect real children, but generative AI has blurred the line between real and fake,” said Takashi Nagase, a lawyer and professor at Kanazawa University.
Several serious cases have led to criminal charges. In central Japan’s Tokai region, prosecutors charged a male junior high school student with defamation after he created a fake nude image from a female classmate’s photo and shared it with friends.
Police report that harassment targeting same-sex peers drives some cases. The images often spread through social media platforms like LINE and Instagram.
Other countries have moved faster to close legal loopholes. South Korea passed a bill in September 2024 punishing the creation, viewing, or possession of deepfake images. The United States enacted a federal law in May 2025 requiring social media platforms to remove such content within 48 hours. Britain and Australia also regulate the sharing of sexually explicit deepfakes.
Within Japan, only Tottori Prefecture has taken local action. Officials there revised a prefectural ordinance in August to ban creating sexually explicit fake images from children’s photographs, though the revision carries no criminal penalties.
“The psychological burden is significant, so deepfake images should be treated similarly to child pornography, and legal regulations should be considered,” said Masaki Ueda, an associate professor of criminal law at Kanagawa University.
The Internet Watch Foundation reported a 400 percent surge in AI-generated child abuse webpages globally in the first half of 2025. Most of the content was so realistic that it had to be treated under the law as actual abuse footage.
Police warn that known cases represent just “the tip of the iceberg” as AI tools become cheaper and easier to use.