Our brains can subconsciously detect deepfakes, even if our conscious minds are deceived, according to a team of neuroscientists from the University of Sydney.
Deepfake techniques are applied to various forms of media, such as videos, images, and audio, often by bad actors seeking to mislead viewers or sway public opinion.
Unease around the advancement of deepfakes has grown rapidly over the years, as the technology enables countless criminal applications, such as fabricating evidence of events that never happened.
Poor-quality deepfakes are easy to spot, especially videos with poor lip-syncing or patchy skin tones. As deepfake technology becomes more sophisticated, however, the resulting disinformation becomes more common and alarmingly convincing.
Understanding our subconscious
In a study published in Vision Research, scientists from the University of Sydney found that people’s brains can detect artificial intelligence-generated fake faces, even when the viewers could not verbally report which faces were real and which were fake.
The researchers measured participants’ brain activity and found that it could distinguish deepfakes 54 per cent of the time. When asked to verbally identify the artificially created faces, however, participants succeeded only 37 per cent of the time.
“Although the brain accuracy rate in this study is low – 54 per cent – it is statistically reliable,” said senior researcher Thomas Carlson, an associate professor from the University of Sydney’s School of Psychology.
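Carlson’s point that a modest 54 per cent rate can still be “statistically reliable” comes down to testing the hit rate against chance (50 per cent). A minimal sketch of such a check, using a one-sided binomial test; the trial count below is a hypothetical placeholder, since the article does not report the study’s actual numbers:

```python
from math import comb

def binom_p_value(successes, trials, p0=0.5):
    """One-sided p-value: P(X >= successes) if true accuracy were chance (p0)."""
    return sum(
        comb(trials, k) * p0**k * (1 - p0) ** (trials - k)
        for k in range(successes, trials + 1)
    )

# Hypothetical numbers: 54% accuracy over 1,000 trials
# (the real trial count is not given in this article).
p = binom_p_value(successes=540, trials=1000)
print(f"p = {p:.4f}")  # a small p-value means 54% is unlikely to be pure chance
```

With enough trials, even a rate only a few points above 50 per cent yields a small p-value, which is the sense in which a low accuracy can still be statistically reliable.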
“The fact that the brain can detect deepfakes means current deepfakes are flawed. If we can learn how the brain spots deepfakes, we could use this information to create algorithms to flag potential deepfakes on digital platforms like Facebook and Twitter,” Carlson added.
The study comprised two experiments: one behavioural and one neuroimaging. In the first, participants were shown 50 images of real and computer-generated fake faces and were asked to identify which were real and which were fake.
A second group was then shown the same images while their brain activity was recorded using electroencephalography (EEG); these participants were not told that half of the images were fake.
Comparing the results of the two experiments suggested that people’s brains were better at detecting deepfakes than their conscious judgments.
Race against deepfakes
The researchers say their findings could serve as a springboard in the fight against deepfakes and even lead to the development of technology that could alert people to deepfake scams in real time.
An infamous example of the extreme dangers of deepfakes was seen during the 2016 US presidential election, when a Russian troll farm deployed over 50,000 bots on Twitter, using deepfakes as profile pictures in a bid to influence the outcome of the polls. Some research suggested that this contributed an estimated 3 per cent boost to Donald Trump’s vote.
More recently, a deepfake video of Ukrainian President Volodymyr Zelensky surfaced on social media platforms, in which he appeared to urge Ukrainian troops to surrender to Russian forces.
The researchers noted that given the novelty of this field of research, their study is just a starting point and will not immediately lead to a foolproof way to detect deepfakes.
Associate Professor Carlson said more research must be done to combat the looming threat of deepfakes. “What gives us hope is that deepfakes are created by computer programs, and these programs leave ‘fingerprints’ that can be detected,” he said.