AI ‘poverty porn’ is spreading through global aid campaigns

Published 21 Oct 2025

Source: Freepik (AI-Generated)

Major aid organizations are using artificial intelligence (AI) to create fake images of starving children and sexual violence survivors for fundraising campaigns, sidestepping years of ethical progress in how charities represent poverty.

In an article published in The Lancet Global Health, researchers report collecting more than 100 AI-generated “poverty porn” images used by aid agencies on social media. The pictures replicate harmful stereotypes that charities have spent years trying to eliminate from their work.

“The images replicate the visual grammar of poverty – children with empty plates, cracked earth, stereotypical visuals,” said Arsenii Alenichev, a researcher at the Institute of Tropical Medicine in Antwerp.

Organizations including the World Health Organization (WHO), Plan International, and the United Nations have all used AI-generated imagery in recent campaigns. In 2023, the WHO published an anti-tobacco campaign depicting a suffering child of presumed African heritage. That same year, Plan International released videos featuring AI-generated images of pregnant, abused adolescent girls forced into marriage; the videos gained more than 300,000 views.

Last year, the UN posted a YouTube video featuring AI-generated avatars re-enacting testimonies from survivors of conflict-related sexual violence. The video was removed after The Guardian contacted the organization for comment.

Noah Arnold, who works at Fairpicture, a Swiss organization promoting ethical imagery, said organizations are using synthetic images “because it’s cheap and you don’t need to bother with consent and everything.”

The researchers warn that the practice creates “poverty porn 2.0.” When AI models learn from historically biased imagery online, they amplify those stereotypes in the content they generate.

The images appear most often in campaigns by smaller organizations in low- and middle-income countries, which face pressure to produce content that matches the expectations of donors in wealthy nations.

Behind it all, stock photo companies are profiting from the trend. Adobe and Freepik host AI-generated images with captions like “Photorealistic kid in refugee camp” and “Asian children swim in a river full of waste.”

Adobe charges roughly $60 for some images.

The AI-generated images are overwhelmingly racialized, predominantly depicting Black and Brown bodies in states of vulnerability. When the researchers tried to invert the stereotype by prompting the AI to show Black African doctors treating suffering White children, the AI kept making the patients Black instead.

Joaquín Abela, CEO of Freepik, deflected responsibility to consumers. “It’s like trying to dry the ocean,” he said. “If customers worldwide want images a certain way, there is absolutely nothing that anyone can do.”

Kate Kardol, an NGO communications consultant, said the images frightened her. “It saddens me that the fight for more ethical representation of people experiencing poverty now extends to the unreal,” she said.

Research shows that people are less likely to donate when they know an image is AI-generated, yet other recent work suggests most people struggle to identify racial bias, even in AI training data. For now, there are no enforceable guidelines regulating the use of AI-generated imagery in global health communications.