On 30th June, Ofcom published a new set of proposals under the Online Safety Act, designed to strengthen how platforms prevent illegal and harmful content from appearing and spreading online. These proposals build on earlier requirements related to the safety of children and the prevention of criminal content.
A New Phase of Online Protection for Victims of Image-Based Abuse
For the first time, Ofcom has explicitly recognised non-consensual intimate image (NCII) abuse as a form of illegal content that platforms must proactively address under the Online Safety Act. The new consultation sets clear expectations that services should deploy hash-matching technologies to prevent NCII content from being uploaded or re-circulated.
This represents a significant shift from reactive content moderation toward preventing harm at the point of upload, especially for highly personal, distressing content such as intimate image abuse or AI-generated sexual deepfakes. It also aligns with survivor-led models where the individual remains in control of what is protected, such as the approach used by StopNCII.org.
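To make the mechanics concrete, the sketch below shows the survivor-side step of such a scheme in Python. The average_hash() function is an illustrative stand-in (a simple "average hash"), not StopNCII.org's actual algorithm; what it demonstrates is that a compact numerical fingerprint can be generated on the individual's own device, so the image itself never has to be shared with anyone.

```python
# Minimal sketch of the survivor-side step of a hash-based scheme.
# average_hash() is a simplified stand-in for a production perceptual
# hash; the key property is that only the hash, never the image,
# leaves the device.
from PIL import Image

HASH_SIZE = 8  # 8x8 grid -> 64-bit hash

def average_hash(path: str) -> int:
    """Greyscale, downscale, then set one bit per pixel that is
    brighter than the mean brightness (a basic "aHash")."""
    img = Image.open(path).convert("L").resize((HASH_SIZE, HASH_SIZE))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (p > mean)
    return bits

# The survivor submits only this integer fingerprint to the hash list;
# the intimate image itself is never uploaded or viewed by anyone.
```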
A Focus on Prevention, Not Just Response
Crucially, Ofcom promotes the use of perceptual hash matching to stop illegal content from appearing on platforms in the first place. In a landmark statement, Ofcom explicitly names StopNCII.org, operated by SWGfL, as the leading global example of this approach:
“The most well-known service of this kind is StopNCII.org, operated by SWGfL. This model enables proactive prevention of NCII distribution without platforms or third parties needing to view the image. We consider this an example of good practice and encourage services to consider integration with such schemes where appropriate.”
(Annex 13–15, p. 72)
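On the platform side, prevention then reduces to comparing fingerprints at the point of upload. The sketch below assumes the illustrative average_hash() helper from the earlier example and a hypothetical blocked_hashes set populated from a scheme such as StopNCII.org; production systems use more robust perceptual hashes and carefully tuned match thresholds.

```python
# Illustrative platform-side check at upload time, assuming the
# average_hash() helper above and a hypothetical blocked_hashes set
# synced from a hash-list scheme such as StopNCII.org.
def hamming_distance(a: int, b: int) -> int:
    """Number of bits on which two 64-bit hashes differ."""
    return bin(a ^ b).count("1")

def is_blocked(upload_path: str, blocked_hashes: set[int], threshold: int = 10) -> bool:
    """Reject an upload if its perceptual hash lies within `threshold`
    bits of any hash on the blocked list."""
    candidate = average_hash(upload_path)
    return any(hamming_distance(candidate, h) <= threshold for h in blocked_hashes)
```

The Hamming-distance threshold is what makes the match perceptual rather than exact: a re-encoded or resized copy of the same image still falls within a few bits of the original hash, so known content can be stopped before it appears, without anyone viewing it.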
This recognition reinforces what we, and our global partners, have long advocated: that stopping harm before it spreads is both technically achievable and ethically essential.
Understanding the Scale of NCII
Earlier this year, our research revealed that an estimated 40 million women globally, including 370,000 in the UK, are affected annually by non-consensual intimate image abuse. The proliferation of synthetic sexual imagery and explicit deepfakes has further escalated the threat.
Sophie Mortimer, Manager of the Revenge Porn Helpline, commented:
“Ofcom’s proposals send a clear and powerful signal. Platforms must take responsibility to prevent NCII abuse. The formal recognition of StopNCII.org as best practice is a breakthrough for survivors and an endorsement of the approach we’ve championed. With the rise in AI-generated sexual content, the time for platforms to act is now. StopNCII.org is ready, trusted, and already working at scale.”
What Happens Next
Ofcom’s consultation strengthens the case for platform adoption of tools like StopNCII.org, not only as a matter of compliance but also as one of ethical duty. SWGfL remains committed to supporting platforms and survivors alike, offering technical, safeguarding and policy expertise through both StopNCII.org and the Revenge Porn Helpline.
We encourage all platforms to explore integration. There is no viable alternative that offers the same privacy, scalability, and survivor control.
To learn more, visit StopNCII.org or explore Ofcom’s full consultation.