
AI Abuse Using Grok Shows Why Legal Protections Have Completely Fallen Behind

Ofcom has just announced that it is investigating X over the recent surge in sexual imagery generated via the AI tool Grok, reported to include both non-consensual “undressing” of real photographs and entirely synthetic sexual content. This news, alongside reports of AI misuse made to our Revenge Porn Helpline, has highlighted how rapidly evolving technology has outpaced both legal protections and platform safeguards. While these images are not genuine, the distress and damage experienced by those who have been targeted is all too real, and often deeply traumatic.

Grok, which is integrated into X, has allowed users to manipulate and sexualise photos of women without their consent, creating explicit images that are then shared more widely across online spaces. Reports from regulators, the media and our NGO partners have underscored global concern about this misuse, which includes content involving adults and, in some cases, minors, raising serious ethical and legal questions about platform responsibility, compliance with the law and AI governance.

Is Grok Doing Enough?

While we condemn the current situation and urge X to do more, as a partner in the StopNCII.org programme X has both the ability and a demonstrated willingness to mitigate the harm caused by the circulation of non-consensual intimate imagery (NCII). X’s Community Guidelines, the rules governing use of the platform (termed ‘the X Rules’), prohibit the non-consensual distribution of adult content and ban the sharing of synthetic or manipulated media likely to cause harm. As such, this appears to be a failure of enforcement and of sufficient mitigation measures. StopNCII.org’s image-hashing technology allows individuals to create secure hashes of their images and helps participating platforms identify and block harmful content before it spreads.
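To illustrate the general idea, here is a minimal sketch of how perceptual image hashing and hash matching can work. This is not StopNCII.org’s actual implementation; it uses the open-source imagehash Python library purely for illustration, and the function names and distance threshold are our own assumptions.

```python
# Minimal illustrative sketch of perceptual hashing, NOT StopNCII.org's
# actual implementation. Uses the open-source `imagehash` library
# (pip install pillow imagehash); names and the threshold are assumptions.

from PIL import Image
import imagehash

def hash_image(path: str) -> imagehash.ImageHash:
    # The hash is computed locally from the image; only the hash,
    # never the image itself, would ever be shared with platforms.
    return imagehash.phash(Image.open(path))

def matches_block_list(candidate: imagehash.ImageHash,
                       block_list: set[imagehash.ImageHash],
                       max_distance: int = 8) -> bool:
    # Perceptual hashes of near-identical images differ by a small
    # Hamming distance, so matching survives resizing and re-compression
    # (unlike cryptographic hashes such as SHA-256, which change entirely).
    return any(candidate - known <= max_distance for known in block_list)
```

In a deployment like the one described above, the individual’s device computes the hash and submits only that hash; each participating platform then runs a check of this kind against new uploads before the content can spread.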

StopNCII.org also supports industry hash sharing: partners who have identified NCII content can share the corresponding hashes with StopNCII.org, which passes them on to all other partners, helping to accelerate prevention. We urge X to make full use of this available solution, to feed back its findings on matched hashes, and to use StopNCII.org to combat the proliferation of NCII across its platform with immediate effect.
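The hash-sharing step might look something like the following sketch. Everything here, the class names and the registry structure, is a hypothetical illustration of the flow described above, not StopNCII.org’s real API.

```python
# Hypothetical sketch of industry hash sharing as described above; the
# class and method names are illustrative assumptions, not a real API.

from dataclasses import dataclass, field

@dataclass
class Partner:
    name: str
    block_list: set[str] = field(default_factory=set)

@dataclass
class HashRegistry:
    partners: list[Partner] = field(default_factory=list)
    known_hashes: set[str] = field(default_factory=set)

    def submit(self, ncii_hash: str) -> None:
        # A partner that has verified NCII contributes its hash; the
        # registry fans it out so every other partner can block the
        # same image proactively, accelerating prevention across platforms.
        if ncii_hash not in self.known_hashes:
            self.known_hashes.add(ncii_hash)
            for partner in self.partners:
                partner.block_list.add(ncii_hash)
```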

This tool has already been used to protect images across a network of partners, and X’s involvement places it among a group of organisations that have been willing to adopt these proactive safeguards.

While there have been numerous media reports that Grok has changed how users access its image-generation features, including access reportedly being reserved solely for paying subscribers, we remain concerned about this situation and continue to seek direct clarification as to exactly what access and restriction changes have been made.

The Law Is Dragging Behind

Even so, voluntary implementation and internal policy changes are not enough on their own, and a lack of regulation slows the uptake of tools like StopNCII.org across the digital platform ecosystem. Victims of synthetic image-based abuse also remain poorly protected. The UK government has legislated to criminalise the creation of synthetic sexual content under the Data (Use and Access) Act, but these provisions have not yet been brought into force, meaning that those affected by AI-generated intimate image abuse currently have very limited legal redress.

On Monday 12th January, Technology Secretary Liz Kendall announced that these provisions would be commenced this week and made priority offences under the Online Safety Act. In the meantime, we need Ofcom to exercise its existing powers under the Online Safety Act to prevent the further proliferation of this content.

Without robust enforcement and clear accountability, perpetrators will continue to exploit legal grey areas and platform inconsistencies to inflict abuse.

To combat this harm at its core, we need a holistic approach that brings together:

  • Clarification on how the UK Government will enforce legislation that recognises and penalises AI-generated intimate image creation.
  • Ofcom to mobilise its current powers to hold Grok to account.
  • More accountability from platforms to implement safeguards that prevent the creation of this content in the first instance.
  • Industry collaboration on technological solutions like StopNCII.org that prevent the spread of non-consensual intimate content.
  • Government backing of the proposed NCII Register, which would allow the removal, blocking and disruption of NCII at a much larger and more efficient scale.

Sophie Mortimer, Head of Support Services, said: “Without stronger platform accountability, enforceable legislation and wider industry collaboration in tackling intimate image abuse, the kind of AI-generated harm we are seeing with Grok and similar tools will continue to proliferate. Innovation should not come at the cost of people’s safety, and if anyone has been affected by this abuse, we strongly urge you to get support through the Revenge Porn Helpline or StopNCII.org. We remain hopeful, though, that through continued efforts and persistent advocacy for stronger government action and platform accountability, we can move towards a better online environment where people actually feel they have the protections they deserve.”

If you have been affected by AI-generated intimate image abuse, support is available from the Revenge Porn Helpline and StopNCII.org. More information can be found at Responding to AI misuse and “Undressing” Images on Social Media.
