The King’s Speech, delivered yesterday, set out an ambitious plan focused on national security, economic growth, infrastructure and public service reform. However, it also revealed notable and concerning gaps, particularly in addressing the growing scale and complexity of online harms.
A Missed Opportunity
Despite the UK’s global leadership in passing the Online Safety Act, the absence of any meaningful new commitments towards online safety was disappointing. The speech and accompanying legislative agenda contained no substantive reference to strengthening protections or accelerating implementation, at a time when harms are rapidly evolving.
As highlighted in the response from the Online Safety Act Network (of which SWGfL is a part), online safety is not meaningfully addressed, with artificial intelligence referenced only in passing within a broader “growth” agenda. This omission risks creating an “online safety-shaped hole” in the Government’s programme.
The Online Safety Act 2023 introduced important duties on platforms to tackle illegal content and protect children, backed by Ofcom enforcement powers. But legislation alone is not enough: ongoing political leadership, resourcing, and adaptation to emerging harms are essential. The lack of forward-looking policy, combined with the current unstable political landscape, suggests a worrying loss of momentum.
Urgent Action Needed on NCII
One of the most pressing gaps remains the lack of concerted action to tackle non-consensual intimate image (NCII) abuse. While the Online Safety Act includes provisions relating to intimate image abuse, and the latest additions to the Crime and Policing Bill propose an NCII register, the scale and speed of this harm continue to grow, particularly in the context of generative AI.
Recent technological developments have dramatically lowered the barrier to creating and distributing abusive content. The controversy surrounding the Grok AI system earlier this year illustrates the urgency. Millions of sexualised images were reportedly generated, including thousands appearing to depict children, raising profound concerns about industrial-scale abuse enabled by AI tools.
Regulators, including the ICO, have since opened investigations into the use of such systems to generate non-consensual sexual imagery, highlighting significant risks to individuals’ safety.
This context makes the absence of specific measures on NCII in the King’s Speech particularly concerning. There is a clear opportunity (and responsibility) for government to go much further.
Renewed Commitment to Tackling VAWG
The stalled focus on tackling Violence Against Women and Girls (VAWG) is equally troubling, coming at a moment of instability and uncertainty within government leadership.
The recent resignations of key ministers, Jess Phillips (Minister for Safeguarding) and Alex Davies-Jones (Minister for Victims and VAWG), raise serious questions about the future direction and prioritisation of this work. Both had been central to driving the Government’s ambition to treat VAWG as a national emergency and to halve it within a decade.
Their departures risk creating a leadership vacuum at precisely the time when renewed political commitment is most needed. Concerns have already been expressed about delays, lack of funding, and insufficient implementation of the VAWG strategy, and the worry is that this will continue.
Despite this, we look forward to working closely with the newly appointed ministers Natalie Fleet and Catherine Atkinson to continue important discussions and ensure the momentum we have already achieved does not slip.
Online abuse, including intimate image abuse, is a core component of VAWG. Any serious strategy must recognise that women and girls’ safety cannot be separated from the digital environments in which abuse increasingly takes place.
No Mention of AI
Perhaps the most significant omission in the King’s Speech is the lack of a comprehensive approach to artificial intelligence. While AI is referenced in the context of reducing regulatory burdens to support growth, there is no standalone AI safety legislation or clear regulatory framework to address emerging risks.
At a time when generative AI is fundamentally reshaping the risk landscape, this absence is difficult to justify.
The events surrounding Grok earlier this year, alongside broader concerns about misinformation, synthetic sexual imagery, and automated abuse, demonstrate that AI is not a future risk but one that is happening right now and accelerating fast.
Without proactive governance, there is a real danger that innovation will outpace safeguards, leaving individuals exposed to harm at an unprecedented scale.
Address the Gaps
SWGfL welcomes the Government’s commitment to creating a safer, fairer society; however, this ambition must extend fully into online spaces.
The UK has the opportunity to remain a global leader in online safety, but leadership requires sustained focus and a readiness to respond to existing harms and stay ahead of emerging ones.