New research published by Ofcom highlights the growing impact of AI-generated content on young people and adults. The latest report reveals that 43% of people aged 16 and over have seen synthetic content online, and explores how synthetic content is being used to cause harm.
Synthetic content, also known as ‘deepfakes’, can include any images, videos, text or audio that have been generated using AI. Find out more about the latest findings and Ofcom’s proposals to reduce the harm of synthetic media online.
Key Findings
- 43% of individuals aged 16 and older have seen at least one deepfake online in the last six months, rising to 50% among children aged 8-15.
- 14% of adults who have seen synthetic content report encountering synthetic sexual content.
- The most common deepfakes encountered by children aged 8-15 were categorised as "funny or satirical" (58%) and deepfake scam advertisements (32%).
- Only 9% of adults feel confident in their ability to identify deepfakes, while a higher proportion (20%) of children aged 8-15 report similar confidence.
Synthetic Sexual Content
Ofcom's recent research highlights the growing prevalence and impact of synthetic sexual content, as 1-in-7 users who saw any form of synthetic media also reported seeing sexual deepfakes, predominantly featuring women.
Notably, 64% of sexual synthetic content involved celebrities or public figures, while 17% were believed to depict individuals under 18. Alongside this, 15% said the synthetic sexual media was of someone they knew, whilst a further 6% said it depicted themselves.
Any adult who believes they have had synthetic sexual content of themselves shared without consent can contact the Revenge Porn Helpline for advice and support.
Synthetic Harmful Content
The latest research highlights the different ways that synthetic content is being used to cause harm online. Ofcom identifies how synthetic content is specifically being used to demean, defraud, and disinform.
The research found that most of the harmful synthetic content seen by 8 to 15-year-olds was described as ‘funny or satirical’. However, 32% also said they had encountered some form of synthetically made scam advert.
Anyone over the age of 13 who has witnessed synthetic harmful content on social media platforms can visit Report Harmful Content to find out more about community guidelines and reporting the content.
What’s Next for Synthetic Media?
Alongside its research, Ofcom considers several avenues that tech firms can take to address harmful synthetic content. The report suggests key strategies, including using filters to prevent harmful content creation, embedding invisible watermarks and metadata in AI-generated content, and implementing detection methods to support identification. Additionally, Ofcom suggests that platforms should enforce clear rules and act swiftly against violations to help reduce the risks associated with deepfakes.
Ofcom also revealed that in its draft illegal harms and children’s safety codes, it has recommended ‘robust measures that services can take to tackle illegal and harmful deepfakes’. These measures include user verification, labelling schemes, content moderation and user reporting.
Learn more about Synthetic Media and ‘Deepfakes’
With synthetic media becoming more prevalent online, SWGfL has released a new topic hub to help everyone understand what synthetic content is, and to signpost the support available for adults, parents and schools who may have been affected by harmful AI-generated content.
Visit the SWGfL Synthetic Media Hub to learn more and help you and your community stay safe online.