Recent advances in generative imagery have significantly improved the ability to create realistic images and videos. However, this progress has also led to a troubling surge in non-consensual sexually explicit synthetic media, or “deepfakes”: digitally manipulated media that depict individuals in sexually explicit situations without their consent. The distribution of such non-consensual content online can be deeply distressing and damaging to those affected.
Recognising the rise in these incidents, Google has strengthened its policies and systems to give people better control over this type of content. It recently announced several significant updates aimed at enhancing these protections, informed by extensive feedback from experts and from those who have been impacted.
Removing Content
Google has long allowed individuals to request the removal of non-consensual explicit imagery from its Search results. It has now developed more robust systems to streamline this process, making it more effective and scalable.
When an individual successfully requests the removal of non-consensual explicit fake content from Search, Google’s systems will also aim to filter out all explicit results for similar searches involving that person. Additionally, when an image is removed from Search, the systems will scan for and remove any duplicates of that image. These protections, already in place for other types of non-consensual imagery, have now been extended to cover fake explicit images, helping to ensure that similar content does not resurface in the future.
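Google has not published how its duplicate detection works. Purely as an illustration, near-duplicate image matching is commonly done with perceptual hashing, where re-encoded or lightly altered copies of an image produce nearly identical hashes. The sketch below uses a toy average-hash on made-up pixel grids; all names and the threshold are hypothetical:

```python
# Illustrative sketch of near-duplicate image detection via average hashing.
# Assumes images are already decoded to small grayscale pixel grids.

def average_hash(pixels):
    """Return a bit string: 1 where a pixel is above the image mean, else 0."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return "".join("1" if p > mean else "0" for p in flat)

def hamming_distance(h1, h2):
    """Count the bit positions where two hashes differ."""
    return sum(a != b for a, b in zip(h1, h2))

def is_near_duplicate(pixels_a, pixels_b, threshold=2):
    """Treat two images as duplicates if their hashes differ in few bits."""
    return hamming_distance(average_hash(pixels_a), average_hash(pixels_b)) <= threshold

original = [
    [10, 200, 30, 220],
    [15, 210, 25, 215],
    [12, 205, 28, 218],
    [11, 202, 27, 216],
]
# A lightly re-encoded copy: same structure, slightly shifted pixel values.
reencoded = [[p + 3 for p in row] for row in original]

print(is_near_duplicate(original, reencoded))  # True
```

Because the hash captures the image's coarse light/dark structure rather than exact bytes, small changes from recompression or resizing do not defeat the match. Production systems use far more robust fingerprints, but the matching principle is the same.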
Improved Ranking Systems
Given the vast amount of content published online daily, the most effective protection against harmful content is robust ranking systems that prioritise high-quality information in Search results. Alongside improving its content removal processes, Google has updated its ranking systems to address queries that carry a high risk of surfacing explicit synthetic media.
Google is deploying ranking updates that lower the visibility of explicit synthetic media for many searches. For searches that explicitly seek this type of content and include a personal name, Google will aim to display high-quality, non-explicit content instead, such as relevant news articles. Google states that these updates have already reduced exposure to explicit image results on such queries by over 70% this year.
Distinguishing Real from Deepfake
A significant challenge lies in differentiating between real, consensual explicit media (such as an actor’s nude scenes) and explicit synthetic media (such as deepfakes featuring that actor). This distinction is crucial for search engines, and Google appears to be improving its systems to better surface legitimate content while downranking explicit synthetic media.
Furthermore, a high volume of pages removed under Google’s policies is a strong indicator that a website is low quality. Google incorporates this signal into its ranking algorithms, demoting such sites to reduce the prevalence of explicit synthetic media in search results. This approach has proven effective against other types of harmful content and is expected to be similarly valuable for addressing this type of harm.
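Google has not detailed how this demotion signal is computed. As a rough sketch of the general idea only, a site's ranking score could be scaled down once its share of policy-removed pages crosses a threshold. Every function name, the 10% threshold, and the scoring values below are hypothetical:

```python
# Hypothetical sketch: demote sites whose share of pages removed under
# policy exceeds a threshold. Numbers and names are illustrative only.

def demotion_factor(removed_pages, total_pages, threshold=0.10):
    """Return a score multiplier: 1.0 for sites below the removal-rate
    threshold, and a smaller multiplier as the removal rate grows."""
    if total_pages == 0:
        return 1.0
    removal_rate = removed_pages / total_pages
    if removal_rate <= threshold:
        return 1.0
    # The higher the removal rate, the stronger the demotion (floored at 0.1).
    return max(0.1, 1.0 - removal_rate)

def adjusted_score(base_score, removed_pages, total_pages):
    """Apply the site-level demotion to a page's base ranking score."""
    return base_score * demotion_factor(removed_pages, total_pages)

print(adjusted_score(100.0, 5, 1000))    # below threshold: no demotion
print(adjusted_score(100.0, 400, 1000))  # 40% removal rate: heavily demoted
```

The design choice worth noting is that the signal is site-level, not page-level: even pages never individually reported inherit the demotion, which is what makes the approach scale against sites built around this kind of content.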
If you would like to know more about synthetic media, please visit our new topic hub, suitable for professionals, teachers and parents.
If you are concerned about intimate image abuse, you can also explore how StopNCII.org can protect your intimate images from being shared online; its protection also covers synthetic sexual content. If you are based in the UK, you can contact the Revenge Porn Helpline for further advice and support.