Grok Restricts Undressing Features But Only Where It Is Deemed Illegal

Last week we published a response to the developing situation regarding Grok permitting individuals to create synthetic sexual imagery of real people from images online. Since then, X has issued a public statement outlining new changes to the Grok feature and reiterating its “zero tolerance” stance on ‘child sexual exploitation, non-consensual nudity, and unwanted sexual content.’

Any step that reduces harm is welcome, but this underlines the very problem we have consistently highlighted: protection is still being delivered after the abuse has already occurred. Instead of robust, enforceable laws that prevent these situations from happening in the first place, we have to rely on reactive platform safeguards.

In its statement, X confirmed that it has now prevented Grok from editing images of real people into ‘revealing clothing’, with bikinis listed as an example. It has also limited image creation and editing to paid subscribers and will ‘geoblock the ability of all users to generate images of real people in bikinis, underwear, and similar attire’ in jurisdictions where it is deemed illegal.

These are not trivial measures, but they do not change the reality that AI tools capable of generating non-consensual sexual imagery exist at scale, and too many victims are still left relying on platforms’ goodwill in order to feel protected.

Parliament Clarifies Weak Protections

At the same time as this announcement, Baroness Owen again pressed the Government in Parliament to accept her amendments to strengthen protections against the spread of intimate image abuse content.

Central to the amendments is a requirement for the Government to put in place a regulated framework for hashing (privacy-preserving technology that allows platforms to identify and block specific images without storing or sharing the images themselves). This is the approach developed by StopNCII.org, which already enables anyone over the age of 18 to create secure, privacy-preserving image hashes that partner platforms can use to prevent the sharing of NCII.

To effectively reduce the spread of NCII, the UK needs an authoritative, regulated source of verified NCII hashes, the proposed NCII Register, which platforms and internet service providers would be required to use to block access and prevent further distribution, allowing prevention on an unprecedented scale.
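To make the mechanism concrete, the sketch below shows the core idea of hash matching in Python. It is illustrative only: the function names and the in-memory register are hypothetical, and a cryptographic hash stands in for the perceptual hashing (such as PDQ) that real services use so that resized or re-encoded copies of an image still match.

```python
import hashlib

# Hypothetical in-memory stand-in for the proposed NCII Register:
# it holds image fingerprints (hashes), never the images themselves.
ncii_register: set[str] = set()

def hash_image(image_bytes: bytes) -> str:
    """Fingerprint an image without storing or transmitting it.

    SHA-256 is used purely for illustration: it matches only exact
    byte-for-byte copies, whereas production systems use perceptual
    hashes so that altered copies of the same image still match.
    """
    return hashlib.sha256(image_bytes).hexdigest()

def submit_hash(image_bytes: bytes) -> None:
    # Victim-side step: only the hash leaves the device, never the image.
    ncii_register.add(hash_image(image_bytes))

def should_block_upload(image_bytes: bytes) -> bool:
    # Platform-side step: check an incoming upload against the register
    # before it is published or distributed.
    return hash_image(image_bytes) in ncii_register
```

The key property, and the reason the approach is privacy-preserving, is that the image itself never has to be shared: victims submit only a fingerprint, and platforms compare fingerprints of incoming uploads against the register.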

During those exchanges, the Government offered clarification on how their announced ban on nudification apps would work, with enforcement applying only to apps whose primary purpose is to permit nudification. This is simply not robust enough and creates an obvious and significant loophole: a service can claim that nudification is a feature rather than its “primary purpose” and embed it among other tools. The harm is identical, but because of how the ban is framed, many platforms and tools would fall outside the law.

Platform Promises Are Not a Substitute for Law

X’s statement ends by noting that “the rapid evolution of generative AI presents challenges across the entire industry” and that they are “actively working with users, partners, governing bodies and other platforms”. We agree that this ongoing challenge needs to be tackled holistically with the Government, industry and NGOs working together to solve these problems at scale.

Baroness Owen’s amendments recognise this reality. They seek to ensure that the law can prevent harm before it happens, rather than relying on reactive changes once individuals have already suffered. Minister Liz Kendall’s announcement this week that the offence of creating, or requesting the creation of, non-consensual intimate images will be brought into force is a significant protection, yet further clarification on enforcement is still needed to understand how far the law will reach. What is most disappointing is that the Government could have brought this offence into force last year: the strengthening of intimate image abuse protections has been brought before Parliament many times, with the Government refusing to move.

As Lord Clement-Jones said in Parliament: “Why has it taken this specific crisis with Grok and X to spur such urgency? The Government have had the power for months to commence this offence, so why have they waited until women and children were victimised on an industrial scale?”

Baroness Kidron followed with: “In the last few weeks, the Government have pushed back on the amendments to the Crime and Policing Bill and, before that, to the data Bill. We have amendments on these issues. We foresaw it and, to be honest, we foresaw it in the Online Safety Act, so even on the other side this is not a shock.”

Until those protections are in place, every platform announcement (however well intentioned) remains a reminder that we are still asking victims to depend on platforms to do the right thing rather than holding them to a legal responsibility. Safety by statement is not safety by design, and while progress has been made, there is still a long way to go. We need proactive protection and safeguards built into platforms from the get-go, not reactive responses after widespread harm.
