We strongly welcome the decisive steps Meta is taking to address the growing threat of so-called "nudify" apps (AI-powered tools that produce fake, non-consensual sexually explicit images). These apps represent a deeply concerning evolution of image-based abuse and highlight the urgent need for sustained, coordinated global action.
Meta’s announcement marks a significant milestone: not only strengthening enforcement across Facebook and Instagram by banning and restricting the promotion of nudify apps, but also taking the unprecedented step of launching legal action against the developers behind CrushAI. This sets a powerful precedent in holding perpetrators accountable and signals to others that this form of abuse will not go unchallenged.
Crucially, Meta’s approach acknowledges that tackling the issue on one platform is not enough. By sharing information through the Tech Coalition’s Lantern program, Meta is fostering cross-industry collaboration to ensure these harmful tools are removed wherever they appear. This transparency and commitment to collective responsibility mirrors the very essence of our own work through initiatives like StopNCII.org.
A Global Problem Demands a Global Solution
At SWGfL, we see every day how devastating the effects of non-consensual intimate image abuse can be. Whether images are real or synthetically generated, the emotional, reputational, and psychological damage inflicted on victims is severe and long-lasting.
That’s why we launched StopNCII.org, a platform that allows individuals to create hashes (digital fingerprints) of their intimate images on their own device, without the images ever leaving it, and prevents those images from spreading across participating platforms. To date, StopNCII.org has helped hundreds of thousands of people globally, and we continue to encourage other tech platforms to join and support this vital initiative.
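For readers curious about what on-device hashing means in practice, the sketch below uses a generic perceptual difference hash (dHash) purely as an illustration. It is not StopNCII.org’s actual implementation, and the file name example.jpg is a placeholder; the point is simply that only a short fingerprint, never the image itself, would leave the device for matching.

```python
from PIL import Image  # pip install pillow


def dhash(image_path: str, hash_size: int = 8) -> str:
    """Compute a simple difference hash (dHash) for an image.

    Illustrative only: a stand-in for the kind of on-device hashing that
    tools like StopNCII.org rely on, where the hash (not the image) is
    shared with participating platforms for matching.
    """
    # Convert to greyscale and shrink to (hash_size + 1) x hash_size pixels.
    img = Image.open(image_path).convert("L").resize((hash_size + 1, hash_size))
    pixels = list(img.getdata())

    # Compare each pixel with its right-hand neighbour; each comparison is one bit.
    bits = []
    for row in range(hash_size):
        for col in range(hash_size):
            left = pixels[row * (hash_size + 1) + col]
            right = pixels[row * (hash_size + 1) + col + 1]
            bits.append("1" if left > right else "0")

    # Pack the bits into a short hex fingerprint.
    return f"{int(''.join(bits), 2):0{hash_size * hash_size // 4}x}"


if __name__ == "__main__":
    # Hypothetical local file; only the resulting hash would ever be submitted.
    print(dhash("example.jpg"))
```

Because similar images produce similar fingerprints, a platform can recognise a matching upload without ever having received the original image, which is the principle behind preventing re-sharing at scale.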
Technology Must Be Matched by Policy and Awareness
Meta’s support for legislation such as the U.S. TAKE IT DOWN Act, and for tools like StopNCII.org and NCMEC’s Take It Down, reflects a growing recognition that legal frameworks must keep pace with technological threats. We echo Meta’s call for smart regulation: legislation that prevents harm to victims online, for example by mandating the use of hashing technology to stop the spread of NCII.
However, regulation alone will not be enough. Education, awareness, and proactive safeguarding are essential. We must continue to provide individuals with the knowledge and tools to protect themselves online, while ensuring platforms and governments uphold their responsibilities.
Progress Is Being Made, But We Must Go Further
Sophie Mortimer, Revenge Porn Helpline Manager, said: “Meta’s actions are a clear and commendable step in the right direction. But the very nature of AI-generated abuse, and its ability to cause unprecedented harm, means that a truly global, coordinated response is required. We urge more tech companies, governments, and civil society organisations to collaborate, innovate, and commit to solutions collectively. This problem needs to be solved together, not in isolation.”