The Government’s decision to commence the offence of creating or requesting a non-consensual intimate image is a necessary and welcome step. It reflects the seriousness of the harm now being caused by AI-enabled sexual abuse, and the urgency of the current situation, as seen in the media coverage surrounding Grok and other AI tools.
However, the House of Lords debate on 14 January 2026 made one point abundantly clear: this harm was foreseeable, was predicted, and was repeatedly raised by peers in Parliament. The crisis surrounding Grok is not the result of a sudden technological surprise. It is the consequence of regulatory and legislative delay, and of the Government’s rejection of practical safeguards that were proposed well in advance.
None of this diminishes the responsibility of the tech industry to build and operate safe products and protocols, but we must also acknowledge the role of the Government in failing to adequately criminalise the use of emerging digital tools to cause acute harm.
At SWGfL, through our operation of the Revenge Porn Helpline and our wider safeguarding work, we see the human cost of these delays daily. The parliamentary record now reflects that experience.
“It has not been shocking”
During the Lords’ debate, Baroness Kidron challenged the framing of recent events as unexpected:
“I also disagree very strongly with the Minister: it has not been shocking.”
She went on to explain why:
“We foresaw it and, to be honest, we foresaw it in the Online Safety Act, so even on the other side this is not a shock.”
Her contribution is significant because it directly rebuts any suggestion that the misuse of AI systems to create sexualised imagery was unforeseeable. It also makes clear that the gaps in enforcement were repeatedly foreseen, even though the relevant legislation had already been passed.
Baroness Kidron also pointed to the repeated rejection of amendments intended to address these risks:
“In the last few weeks, the Government have pushed back on the amendments to the Crime and Policing Bill and, before that, to the Data Bill.”
Delays Have Consequences
Peers were equally clear that the issue was not a lack of legal powers, but a failure to act on them in time.
Lord Clement-Jones asked directly:
“However, why has it taken this specific crisis with Grok and X to spur such urgency? The Government have had the power for months to commence this offence, so why have they waited until women and children were victimised on an industrial scale?”
Baroness Owen, who has consistently led work on intimate image abuse across multiple Bills, grounded that frustration in a clear timeline:
“I cannot help but feel frustrated that, along with survivors, I have been asking the Government to enforce this law since it achieved Royal Assent last June.”
She added a warning that now reads as prescient:
“I hope it is now clear to the Government that we cannot afford any similar delays.”
Parliament Was Warned This Would Happen
The Lords debate also demonstrates that peers warned not only that harm would happen, but how it would happen.
Baroness Keeley highlighted the structural risks created by opaque AI development:
“There is an issue about the lack of transparency in how chatbots such as Grok are trained.”
She then drew a clear inference about capability and risk:
“As I understand it, if an image or multimodal model can generate non-consensual sexual imagery or deepfake pornography, it is certain that the model was trained on large, uncurated web scrapes where such material is common.”
These warnings speak directly to the design and deployment choices that allow abuse to scale rapidly before enforcement can catch up.
They Ignored the Warnings
It would be misleading to suggest that any single intervention would, on its own, have prevented the Grok situation. What the parliamentary record shows instead is something more troubling: a pattern of repeated warnings about foreseeable harm, strengthened by evidence from our helplines and accompanied by specific recommendations and proposals, many of which were delayed, rejected, or accepted only in principle.
Across early 2025, peers repeatedly raised concerns that the structure of existing offences and procedures did not reflect how intimate image abuse actually occurs online. In particular, they questioned whether victims who discover abuse late would be adequately protected, and whether legal and procedural thresholds risked excluding individuals through no fault of their own. These concerns were raised in the context of emerging technologies and the growing likelihood that abuse would be created and distributed at scale before victims became aware of it.
A Timeline of Warnings
During the same period, on 7 February 2025, Baroness Owen pressed the Government on the need to reflect how abuse actually occurs online, rather than relying on narrow or outdated assumptions. Her concern was that legal and procedural gaps would be exploited, leaving victims exposed even where offences technically existed.
Warnings were also raised as Parliament turned to wider data and technology legislation. During the Data (Use and Access) Bill debates on 28 January, 5 February, and 12 May 2025, peers raised concerns about the governance of advanced systems and the absence of sufficient safeguards as new capabilities were rolled out at scale. These debates repeatedly returned to the same core issue: the pace of technological change was outstripping the pace of implementation and enforcement.
By the autumn of 2025, those concerns had become more explicit. During the Crime and Policing Bill debate on 16 October 2025, Baroness Owen warned that the law still did not adequately account for delayed discovery of abuse and the procedural consequences this creates for victims:
“However, it is vital that we further strengthen this offence, by increasing the time limits prosecutors have to bring forward charges, so that victims are not inadvertently timed out by the six month time limit of a summary offence.”
This intervention directly anticipated the long-tail nature of harm now being seen with AI-generated imagery, where individuals may only discover that content exists months after it has been created and shared.
During those same Crime and Policing Bill debates, on 16 October and 27 November 2025, Baroness Owen and others raised specific concerns about AI chatbots and automated systems, warning that such systems could be used to generate sexualised and abusive content, and that existing frameworks were not keeping pace.
On 27 November 2025, Baroness Owen explicitly framed the issue as a live and growing risk, not a hypothetical future problem, drawing attention to evidence that chatbots were already widely used and raising concerns that safeguards had not been adequately tested against misuse.
Alongside these debates, Parliament also issued a formal warning through the Women and Equalities Committee.
In its report Tackling non-consensual intimate image abuse, published on 5 March 2025, the Committee addressed the role of synthetic and AI-generated imagery directly. In Recommendation 19, it stated that there was “no legitimate reason whatsoever for the use or existence of nudification apps” and called on the Government to ensure that the use of such tools was treated as the creation of synthetic non-consensual intimate images, alongside regulatory action by Ofcom against sites and services that promote or facilitate their distribution.
The Government’s response, published on 20 May 2025, rejected ten recommendations, partially accepted seven, and accepted two in full. Recommendation 19 was one of the two accepted in full. However, the substance of the response relied largely on assertions that existing offences were technology neutral and that options were being considered, describing the issue as complicated and committing only to update the Committee “in due course”. This is significant: the recommendation was accepted, not rejected, yet no preventative action was visible before the current crisis.
Acknowledgement Without Action
This distinction matters. The Committee’s recommendation was framed in terms of urgency and prevention. The Government response acknowledged the risk but did not set out a timetable or immediate operational measures. In light of subsequent events, this reads less as decisive action and more as acceptance in principle without urgency.
This was not the only recommendation or warning issued in 2025, and no single measure would have prevented the Grok situation in isolation. But taken together, the parliamentary record shows that the nature of the harm was anticipated, the mechanisms of abuse were identified, and the need for timely intervention was repeatedly stressed.
It is this broader pattern that peers returned to in January 2026 when they said plainly that the current crisis was not shocking, but foreseen.
Survivors Should Not Suffer to Prompt Action
One of the most concerning aspects of the current moment is the renewed expectation that survivors should once again recount their experiences to drive policy change.
Survivors have already given evidence to Parliament. Specialist services have already provided data, case studies, and technical solutions. The Women and Equalities Committee inquiry and multiple Lords debates document that evidence in detail.
Listening to survivors is essential. But listening without implementing change is not protection.
In the Lords, the Minister rightly acknowledged the role of specialist support:
“The Revenge Porn Helpline is doing fantastic work in providing specialist support and help with getting images removed from the internet, and I commend it for that activity.”
Recognition matters, but it must be matched by action that reduces the need for survivors to seek help in the first place.
What the Government Must Do
If the lessons of the Grok crisis are to be learned, rather than the mistakes repeated, urgent action is required. This should include:
Revisiting recommendations from the Women and Equalities Committee that were accepted or rejected but not implemented in practice, including those addressing AI-enabled image abuse
Bringing all relevant offences fully into force without further delay
Setting clear expectations on platforms for speed and consistency of response
Addressing transparency and safety testing in AI systems, including training data governance
Ensuring victims are consistently signposted to specialist support services
Implementing preventative measures that reduce repeat and long-tail harm, including hashing-based approaches where appropriate
Revisiting previously rejected recommendations in light of the harm now evidenced
The House of Lords has been clear. This was not shocking. It was foreseen. The question now is whether the Government will act decisively before the next predictable crisis, or once again only after harm has already occurred.