With the rise of generative AI, the impact of synthetic media (commonly known as deepfakes) has taken the world by surprise. With artificially generated images, voices and videos playing a concerning role in the spread of misinformation and fake news, the need for users to improve their digital literacy and critically evaluate the trustworthiness of content is more essential than ever.
While artificial intelligence will continue to become more advanced and harder to identify, there are still a number of ways to help determine whether something is real. In this piece, we explore how synthetic media can differ from genuine material and how we can all spot some of the most common signs of generated content.
Why Are Deepfakes So Accurate?
To put it simply, synthetic media is material that copies something already established and predicts how it would appear in a new scenario. In the same vein as counterfeit money, it replicates the genuine article to the best of its ability and then attempts to pass itself off as real.
One of the more impressive qualities of AI generation is that it can generate almost anything. By pulling information from all parts of the internet, it can make predictions about anything we set our minds to: a hamster riding a unicorn, a band photo of the Beatles and Slipknot, a garden paved with lava. The sky is very much the limit.
Because the internet is so full of material, these AI predictions draw on a lot of information, which is often why they appear so convincing. But however impressive the accuracy, there will still be signs that the content is not organic and genuine.
Spotting the Signs of a Deepfake
If you’ve ever looked at an image or a video and thought ‘something doesn’t seem right’, it might be synthetically generated. Like a lot of things, the devil is in the detail! From afar, synthetically generated content can be extremely convincing, but, in most cases, the closer we look, the less convincing it becomes.
Synthetic media is often highly produced, making heavy use of high-quality textures and lighting. It can feel overly professional, with subjects appearing unnatural or highly stylised. Although the overall package may seem to bring the core ingredients to life, minor details often do not translate well.
If we’re looking at a group of people, check their hands, feet and smaller features such as eyes, noses or ears. Synthetic media currently struggles to render these areas accurately, so irregular shapes and extra fingers or toes often start appearing. If we’re looking at a huddled group, check how people stand in relation to one another: are they in proportion, and are everyone’s arms and legs in the right spots?
If we’re watching a video of someone talking to us, does their voice keep in time with their mouth? Do they have natural facial expressions, and do they convey inflections, emotive tones, rhythm and cadence in their voice? AI-generated content can accurately portray an established voice, but these smaller details add the ‘personality’ factor that AI finds harder to replicate.
What Is the Purpose of Deepfake Content?
As much of our online safety guidance has already said, we ultimately have to rely on our own critical thinking. Synthetically made content can be used for a variety of positive purposes but, unfortunately, it can also be used for harm. When faced with an online image, video or voice, it’s important to ask ourselves:
- Is it asking anything of me, e.g. a financial call to action?
- Is there a bold claim or any scaremongering taking place?
- Is it conveying someone who is widely recognised in an unusual, strange or negative light?
- Is it trying to convince me of something harmful, e.g. claims of evidence or secrets?
As we said earlier, if something doesn’t seem right then you might be faced with synthetic media. Harmful synthetic media usually has a purpose, and it tends to revolve around spreading misinformation, scams, harmful rumours and slander, as well as putting individuals in sexual, compromising or intimate situations.
Consider what the purpose of the content is and critically determine whether it can be trusted. To support and protect those in your network, you have a responsibility to research and seek additional clarification on any claims made by online material before sharing it more widely. Dig deep and don’t take everything at face value.
Improving Digital Literacy
Generative AI and synthetic media present both fascinating possibilities and significant challenges. While these technologies can create highly convincing and imaginative outputs, they also pose risks in terms of misinformation and public deception.
By improving our skills in critically evaluating content, paying attention to details, and questioning the intent behind what we see and hear, we can better navigate online content with confidence and authority. As AI continues to evolve and synthetic media becomes harder to spot, our vigilance and critical thinking will be crucial in distinguishing reality from fabrication.
If you are concerned about synthetic media and want to upskill your communities with the digital skills required to safely navigate the online world, you can use ProjectEVOLVE to bring the most pressing online topics into the classroom.
ProjectEVOLVE EDU can also be used by teachers and safeguarding staff to improve educational approaches towards digital literacy.
If you would like to know more about synthetic media, please visit our new topic hub, suitable for professionals, teachers and parents.