Deepfake Pornography: A Growing Concern Amidst the AI Race

Artificial intelligence (AI) image generation has transformed industries from art and fashion to advertising. But experts are increasingly concerned about the darker side of easily accessible AI tools: nonconsensual deepfake pornography, which disproportionately harms women.

Deepfakes are videos and images that have been digitally created or altered using AI or machine learning. The issue first gained widespread attention when a Reddit user shared clips that superimposed the faces of female celebrities onto the bodies of porn actors. Since then, deepfake creators have targeted online influencers, journalists, and others with public profiles, and thousands of videos now circulate across various websites. Some platforms even let users create their own deepfake images, allowing anyone to turn a person into a sexual fantasy without their consent, or to use the technology to harm former partners.

The problem has worsened with the development of generative AI tools that are trained on vast amounts of internet data and can generate novel content from it. “The reality is that the technology will continue to proliferate, will continue to develop and will continue to become sort of as easy as pushing the button,” says Adam Dodge, founder of EndTAB, a group that provides training on technology-enabled abuse. “And as long as that happens, people will undoubtedly… continue to misuse that technology to harm others, primarily through online sexual violence, deepfake pornography, and fake nude images.”

Victims of deepfake pornography, such as Noelle Martin of Perth, Australia, have experienced its devastating consequences. Martin discovered deepfake pornographic images of herself 10 years ago when, out of curiosity, she used Google to search for images of herself. To this day, she does not know who created the fake images, or the videos of her engaging in sexual acts that she later found. She suspects someone took a picture from her social media page or elsewhere and doctored it into porn.

Martin’s efforts to have the images taken down from various websites were largely unsuccessful. Some websites did not respond, while others took the content down only for it to reappear later. “You cannot win,” Martin said. “This is something that is always going to be out there. It’s just like it’s forever ruined you.”

In addition to the emotional toll, victims of deepfake pornography often face victim blaming and shaming. Some people have told Martin that the way she dressed and posted images on social media contributed to the harassment, effectively blaming her for the images instead of holding the creators accountable.

In response to the growing concern, some AI developers and social media platforms have implemented measures to curb the spread of explicit deepfake images. OpenAI, for example, has removed explicit content from the data used to train its image-generating tool DALL-E, limiting users’ ability to create such images. The company also filters requests and blocks users from creating AI-generated images of celebrities and prominent politicians. Another AI model, Midjourney, blocks certain keywords and encourages users to flag problematic images to moderators.

Startups such as Stability AI have also rolled out updates to their image-generating tools to prevent the creation of explicit images. However, because the company releases its code to the public, users can still manipulate the software to generate illicit content. Stability AI spokesperson Motez Bishara emphasized that the company’s license strictly prohibits the misuse of its technology for illegal or immoral purposes.

Social media companies have also tightened their rules to better protect their platforms against harmful content, including deepfake pornography. TikTok, for instance, recently announced that all deepfakes or manipulated content showing realistic scenes must be labeled to indicate they are fake or altered. The platform also prohibits deepfakes of private figures and young people.

