The Oxford Internet Institute has released a study into the rise of publicly accessible "deepfake" image generators. As the study highlights, 35,000 models for creating synthetic sexual content were downloaded nearly 15 million times, illustrating that the abuse of people's likenesses is not only common but far exceeds prior expectations.
Academic research such as this is vital in shedding light on the industrial-scale proliferation of synthetic non-consensual intimate imagery (NCII), but we must use it to respond effectively and to work towards stronger protections against this abuse.
Combating AI Models
SWGfL operates the Revenge Porn Helpline, which has provided specialist support to thousands of individuals since its inception in 2015. We’ve seen first-hand how the landscape of intimate image abuse is constantly shifting. What once required technical skill can now be executed by anyone, anywhere, in minutes.
The Helpline has responded to a significant uptick in cases involving synthetic media over the past few years. What is particularly troubling is how AI models, especially those trained on only a handful of images from public social media accounts, are lowering the barrier to abuse. Synthetic sexual content does not just distort someone's image: it undermines autonomy, destroys reputations, and can lead to long-term emotional and psychological harm.
Restriction of “Deepfake” Websites
We welcomed the news this week that one of the most prolific exploiters of this technology, "Mr. Deepfake", was taken offline. This represents a milestone and a signal that those who profit from or propagate NCII are not beyond reach. However, while that takedown is significant, it is only one small brick removed from a towering wall. The OII study shows that the majority of these tools are not hidden; they are widely distributed on reputable platforms that are failing to enforce their own terms of service.
While synthetic sexual images are now covered by the Online Safety Act, and the criminalisation of their creation is proposed under the Crime and Policing Bill, regulation alone is not enough. We must continue to push for:
- A mandatory requirement for platforms hosting user-generated media to adopt StopNCII.org.
- Proactive moderation and takedown systems on platforms where NCII is hosted and distributed.
- Greater technical countermeasures to detect and flag synthetic imagery.
- Cross-sector partnerships to keep pace with this fast-moving threat.
- Enhanced public education about the risks and rights associated with intimate image abuse.
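For readers unfamiliar with how StopNCII.org works in practice: the service never uploads a person's images. Instead, a perceptual hash (a short digital fingerprint) is generated on the user's own device, and only that hash is shared with participating platforms, which compare it against uploads. The sketch below illustrates the general hash-and-compare idea with a deliberately simplified difference-hash; the real service uses far more robust algorithms, so treat this as an illustration only.

```python
# Illustrative sketch of perceptual-hash matching, the general idea behind
# StopNCII.org: only a short hash leaves the device, never the image itself.
# The toy "difference hash" below is a stand-in for the robust algorithms
# real services use, not their actual implementation.

def dhash(pixels):
    """Difference hash: one bit per horizontal neighbour comparison.

    `pixels` is a row-major grid of greyscale values (in practice the image
    is first downscaled to a small fixed grid)."""
    bits = []
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits.append(1 if left > right else 0)
    return bits

def hamming(a, b):
    """Number of differing bits; a small distance suggests the same image."""
    return sum(x != y for x, y in zip(a, b))

# Tiny 2x3 greyscale grids for illustration.
original  = [[10, 40, 20], [90, 30, 60]]
re_upload = [[12, 41, 19], [88, 29, 61]]   # re-encoded copy, slightly altered
unrelated = [[5, 5, 200], [200, 5, 5]]

h_orig = dhash(original)

# The re-encoded copy hashes identically, so it would be flagged as a match,
# while the unrelated image produces a different hash.
print(hamming(h_orig, dhash(re_upload)))   # 0 -> match
print(hamming(h_orig, dhash(unrelated)))   # nonzero -> no match
```

Because the hash survives minor edits such as re-compression, a flagged image can be blocked at upload without any human ever needing to view or store the original, which is what makes the approach viable for NCII prevention at platform scale.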
The reality is that the technology to create synthetic sexual content is more prolific and normalised than ever, but the behaviour is still abuse. We remain committed to supporting victims, advocating for systemic change, and working towards a digital world where protection is non-negotiable.
To learn more about deepfakes and the impact of synthetic sexual content, our website provides advice and guidance.