The Secretary of State’s recent statement to the House of Commons, committing to ‘fast-track legislation, making it an offence to create non-consensual intimate images’, marks a welcome shift in tone. The Government has publicly acknowledged the harm caused by AI-enabled sexualised imagery, including non-consensual intimate image abuse, and has signalled an intention to act through existing and forthcoming legislation.
Recognition matters. For many victims, being believed and taken seriously has too often been the first barrier to protection. But recognition alone does not yet translate into meaningful change for those experiencing harm today.
What the Government Has Acknowledged
In the statement, non-consensual intimate image abuse and related harms were explicitly identified as priorities. The Government referenced the misuse of generative AI, including deepfake imagery, and confirmed intentions to strengthen criminal law and platform responsibilities through the Online Safety Act and the Crime and Policing Bill.
The Government also referenced the misuse of nudification tools and signalled an expectation that platforms take action to restrict their use.
This reflects a clear evolution in official language. For years, practitioners, researchers and civil society organisations warned that AI would amplify intimate image abuse. That risk is now being acknowledged at the highest levels of Government.
Where Recognition Ends and Victim Experience Begins
For a charity supporting victims, the critical question is not whether harm is acknowledged, but whether this changes what happens when abuse occurs.
UK law already criminalises both the sharing of intimate images without consent and threats to share such images, recognising the serious psychological and coercive harm involved. These protections are necessary and important.
However, the offence criminalising the creation of non-consensual intimate images will not come into force until 6 February. Until then, victims remain exposed to tools that are widely available, easy to use, and capable of generating sexualised imagery in seconds.
In practice, the system still relies heavily on reactive enforcement. Protection is often triggered only after risk or harm has already emerged.
Once an image has been shared, victims are typically required to discover the abuse, report it to platforms, request removal, monitor for reappearance, and repeatedly engage with enforcement processes. Even where takedowns are swift, the initial harm cannot be undone. Distress, loss of control, reputational damage and fear of further exposure frequently persist long after content is removed.
Nudification Tools Must Be Addressed
It is important to be clear about the nature of the technology itself. Tools designed to digitally undress or sexualise images of real people have no legitimate safeguarding or social purpose. Their primary function is to facilitate abuse.
Partial measures or platform-by-platform bans are insufficient. A comprehensive approach is required, with clear expectations that such functionality should not exist within consumer-facing services at all. Allowing carve-outs or technical loopholes undermines prevention and public confidence.
Prevention Must Be Central
If outcomes for victims are to change, prevention must sit alongside enforcement. The technology to reduce repeat harm already exists.
Perceptual hashing allows known abusive images to be identified and blocked before they are re-uploaded. Coordinated approaches enable platforms to act collectively rather than in isolation. These tools do not rely on victims repeatedly reporting the same content and significantly reduce the likelihood of ongoing circulation.
Through StopNCII.org, victims and survivors can generate secure digital fingerprints of their images on their own devices without uploading the content itself. Participating platforms can then use those fingerprints to prevent further sharing. This approach is preventative, privacy-preserving and victim-centred, and it is already operating at significant scale.
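To illustrate the general idea only, and not the actual StopNCII.org or any platform’s implementation, the sketch below shows a simple “difference hash”: an image is shrunk, converted to greyscale and reduced to a short fingerprint, which can then be compared against a list of known fingerprints by counting differing bits. The hash size and match threshold here are illustrative assumptions.

```python
# Illustrative sketch of perceptual (difference) hashing.
# Not the StopNCII.org or any platform's actual implementation;
# hash size and threshold are assumptions chosen for readability.
from PIL import Image


def dhash(path: str, hash_size: int = 8) -> int:
    """Shrink, greyscale, then compare adjacent pixels to build a 64-bit fingerprint."""
    img = Image.open(path).convert("L").resize((hash_size + 1, hash_size))
    pixels = list(img.getdata())
    bits = 0
    for row in range(hash_size):
        for col in range(hash_size):
            left = pixels[row * (hash_size + 1) + col]
            right = pixels[row * (hash_size + 1) + col + 1]
            bits = (bits << 1) | (1 if left > right else 0)
    return bits


def matches_known_hash(candidate: int, known_hashes: set[int], max_distance: int = 10) -> bool:
    """Flag an upload whose fingerprint is within a small Hamming distance of a known one."""
    return any(bin(candidate ^ known).count("1") <= max_distance for known in known_hashes)
```

The key property is that only the fingerprint needs to travel: the image itself stays on the victim’s device, while services compare fingerprints of attempted uploads against those already submitted.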
Alongside this, SWGfL has consistently highlighted the importance of coordination across services, including the development of shared approaches that prevent re-uploads across platforms rather than addressing harm in fragmented and repetitive ways.
Where We Still Fall Short
There is still no explicit commitment to systematically embed prevention mechanisms as a clear regulatory expectation across services.
Voluntary action by individual services is welcome, but experience shows that voluntary safeguards often follow public controversy rather than preventing harm in advance.
Recognition without timely implementation risks leaving victims exposed during the very period when harm is most acute.
What This Means for Victims
For victims of non-consensual intimate image abuse, the difference between policy and protection is tangible.
Acknowledging harm is important. Criminalising abusive behaviour is essential. But prevention is transformative.
A system that relies primarily on reporting and removal continues to place the burden on those already harmed. A system that prevents circulation in the first place reduces repetition, limits exposure and meaningfully changes victim outcomes.
The Government’s statement shows welcome progress in recognising the scale and seriousness of AI-enabled intimate image abuse. It matters that these harms are being named plainly and discussed at the highest political levels.
Recognition Is Not Resolution
Victims need protection that operates before abuse spreads, not only enforcement after harm has occurred. Delays in bringing offences into force and partial approaches to harmful technologies risk undermining that protection.
The tools to prevent harm already exist. What is needed now is timely implementation, comprehensive action, and sustained accountability. We will continue to monitor progress closely, because for victims, half measures and delayed protections are not enough.