Introduction/Background
South Korea's president-elect, Yoon Suk-yeol, used an unusual strategy during his campaign earlier this year. His campaign team used deepfake technology to create an "AI avatar" of him, which helped him win the election. The avatar appealed to younger voters and boosted their engagement, making Yoon look like a more modern candidate than his competitors.
Since 1 January, the avatar's videos have been viewed by millions of people and covered widely by the media. The deepfake offered an interactive experience: potential voters could ask Yoon questions, and the avatar would respond with amusing answers. To gather the data needed to build it, the real Yoon recorded roughly 3,000 sentences, amounting to 20 hours of audio and video.
Deepfakes Are Becoming More Widespread And Not All Are Humorous
This matters because deepfakes are becoming more widespread. In 2018, Oscar-winning filmmaker Jordan Peele made a deepfake video of President Obama insulting President Trump, as a warning not to trust everything seen online. As the technology improves, there are fears that deepfake videos will be used to spread fake news, posing an ever-greater threat to national security. The numbers bear this out: between December 2018 and 2019, the number of deepfake videos online nearly doubled, from 7,964 to 14,678.
It is clear that more and more people are using deepfakes, whether for PR stunts or to attack others, and as use of the technology grows, attacks exploiting it are likely to rise in step.
Deepfakes Pose A Growing Threat To KYC Firms That Offer Biometric Authentication
Deepfake attacks threaten both consumers and the KYC (Know Your Customer) companies that provide biometric authentication services. The danger comes from fraudsters using deepfake technology to impersonate a customer, most likely to gain access to their finances.
Because this is the most effective approach for mimicking a customer's identity online, it is the biggest threat to KYC organizations. This was demonstrated in 2020, when a fraudster used audio deepfake technology to impersonate a corporate director and steal $35 million. If high-ranking members of well-known organizations are at risk, then so are the everyday clients of KYC firms. According to iProov's 2020 deepfake research, 72 percent of consumers believe that authenticating one's identity is more vital than ever. If KYC firms can persuade their customers that they have a powerful instrument to combat this threat, client engagement with their services would likely skyrocket.
How We Could Detect Yoon Suk-Yeol’s Deepfake
When our algorithm takes a reading from a live subject, it uses a method called photoplethysmography (PPG), which quantifies changes in how light is absorbed by the skin and the blood beneath it. From this we can extract changes in blood volume and recover real-time physiological data. We then use a spectrogram to measure the strength of the extracted signal. A spectrogram is a graph that plots a signal's frequency in Hertz (Y-axis) over time (X-axis). For a real human it appears as a single bright, steady band; deepfakes, on the other hand, frequently destroy the spectrogram.
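To make the idea concrete, here is a minimal sketch of how a remote-PPG signal and its spectrogram might be computed with OpenCV and SciPy. It is an illustration only, not our production algorithm; the file name "subject.mp4" and the fixed face box are placeholders, since a real pipeline would detect and track the face in every frame.

```python
# Minimal remote-PPG sketch: average the green channel over a face region
# per frame, then inspect the signal's spectrogram. Illustrative only.
import cv2
import numpy as np
from scipy.signal import spectrogram

def rppg_signal(video_path, face_box):
    """Crude blood-volume proxy: mean green intensity of the face region
    in each frame (haemoglobin absorbs green light most strongly)."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    x, y, w, h = face_box
    samples = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        roi = frame[y:y + h, x:x + w]
        samples.append(roi[:, :, 1].mean())  # green channel (OpenCV uses BGR)
    cap.release()
    sig = np.asarray(samples)
    return sig - sig.mean(), fps  # remove the DC offset

# Placeholder inputs: a real system would locate the face automatically.
sig, fps = rppg_signal("subject.mp4", face_box=(300, 120, 200, 200))

# Frequency in Hz (Y-axis) over time (X-axis): a live subject shows a
# steady bright band near the pulse rate; deepfakes scatter the energy.
freqs, times, power = spectrogram(sig, fs=fps, nperseg=int(fps * 8))
```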
This strategy lets us determine the authenticity of a video with confidence. In fact, when we ran our algorithms on footage of Yoon Suk-yeol's deepfake, the outcome was a destroyed spectrogram (see Figure 1), which told us that the person in the video was not real. With this strategy we can counter the threat of deepfakes, perhaps even defusing the national security issues they raise. Deepfake generation methods are unquestionably more advanced than deepfake detection approaches, so governments and organizations must put countermeasures in place before the technology grows any further.
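Continuing the sketch above, a toy scoring rule could check how much of the spectrogram's power falls inside a plausible heart-rate band (roughly 0.7 to 3 Hz, i.e. 42 to 180 beats per minute). The band limits and the 0.5 threshold are arbitrary placeholders for illustration, not calibrated values from our system.

```python
import numpy as np

def liveness_score(freqs, power, lo_hz=0.7, hi_hz=3.0):
    """Fraction of total spectrogram power inside the heart-rate band.
    A live subject concentrates power near the pulse frequency; a
    'destroyed' spectrogram smears it across the whole spectrum."""
    band = (freqs >= lo_hz) & (freqs <= hi_hz)
    return power[band].sum() / power.sum()

# freqs and power come from the spectrogram in the previous sketch.
score = liveness_score(freqs, power)
print("likely live" if score > 0.5 else "destroyed spectrogram: possible deepfake")
```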
Security For Ourselves And Future Generations
The growing threat of deepfake attacks is an issue that receives too little attention, and few resources are being devoted to ready countermeasures. Our method of reading live physiological data to determine the authenticity of a subject is readily available and scalable through Microsoft's Azure cloud platform. This puts us in a very real position to be the first line of defense against this dangerous threat.
If you are a KYC firm looking to improve your biometric authentication method and protect against these types of attacks, contact us to learn more about our solution.