In the acclaimed HBO series “Game of Thrones,” the warning that “the white walkers are coming” refers to a menacing race of ice creatures threatening humanity. In a similar vein, Ajay Amlani, president of biometric authentication company iProov, argues that we should regard deepfakes with equal urgency.
“There’s been growing concern over deepfakes in recent years,” Amlani explained. “Now, the winter is here.” According to a recent iProov survey, 47% of organizations have encountered deepfakes, and 70% believe generative AI-created deepfakes will significantly impact their operations. Alarmingly, only 62% are taking this threat seriously.
“This is a genuine concern,” Amlani stated. “You can create entirely fictitious individuals who look, sound, and react as if they were real.”
Deepfakes—realistic, fabricated avatars, voices, and media—have rapidly advanced and are often nearly indistinguishable from reality. This sophistication poses serious risks to organizations and governments alike. In one widely reported case, a finance employee was deceived into paying out $25 million after joining a deepfake video call with someone posing as the company's chief financial officer. In another, cybersecurity firm KnowBe4 revealed that a new hire turned out to be a North Korean operative who had used deepfake technology to bypass its screening process.
Amlani noted significant regional disparities in deepfake incidents: 51% of organizations in Asia Pacific, 53% in Europe, and 53% in Latin America report encounters with deepfakes, compared to just 34% in North America. “Many malicious actors operate internationally, often targeting local regions first,” he added.
The survey ranks deepfakes among the top security concerns: password breaches lead at 64%, followed by ransomware at 63%, with phishing/social engineering and deepfakes tied at 61%. "Trusting anything digital is increasingly difficult," Amlani said. "It's crucial that we question everything online and build robust defenses to verify identities."
Biometric Solutions for Deepfake Threats
Faster processing, easier information sharing, and generative AI have all made it simpler for threat actors to create sophisticated deepfakes. Although some basic countermeasures exist—such as content flags on video-sharing platforms—these are insufficient, according to Amlani. "Such attempts barely scratch the surface of an extensive issue," he remarked.
Traditional verification methods, such as CAPTCHAs, have become so convoluted that even genuine users struggle to prove their identity, particularly the elderly and those with cognitive impairments. Amlani argues that biometric authentication is a more effective alternative.
iProov's research shows that 75% of organizations are adopting facial biometrics as a primary defense against deepfakes, followed by multifactor authentication (67%) and education on deepfake risks (63%). Companies are also auditing security measures (57%) and updating systems (54%) to combat deepfake threats.
Respondents rated the effectiveness of biometric methods against deepfakes as follows:
- Fingerprint: 81%
- Iris: 68%
- Facial: 67%
- Advanced Behavioral: 65%
- Palm: 63%
- Basic Behavioral: 50%
- Voice: 48%
Not all biometric tools are equally effective, Amlani noted. Some methods require cumbersome movements, making them easier for deepfake creators to bypass. In contrast, iProov employs an AI-driven tool that utilizes light reflections from the device screen, analyzing unique facial features. If the results differ from expected patterns, it could indicate that a threat actor is attempting to exploit physical images or masks.
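The screen-reflection approach can be thought of as a challenge-response protocol: the device flashes an unpredictable light sequence, and the verifier checks that the face actually reflects it. The sketch below is purely illustrative and is not iProov's algorithm; the color palette, matching rule, threshold, and every function name are invented for this example.

```python
import random

# Toy challenge-response liveness check, loosely inspired by the
# screen-reflection idea described above. NOT a real implementation:
# palette, matching rule, and threshold are all invented.

PALETTE = ["red", "green", "blue", "yellow"]

def make_challenge(length=4, seed=None):
    """Pick an unpredictable sequence of screen colors to flash."""
    rng = random.Random(seed)
    return [rng.choice(PALETTE) for _ in range(length)]

def verify_liveness(challenge, observed, min_matches=3):
    """Compare the colors reflected off the face with the colors flashed.

    A pre-recorded video or printed photo cannot reflect a color
    sequence chosen only at verification time, so a run of mismatches
    suggests a presentation attack.
    """
    matches = sum(1 for c, o in zip(challenge, observed) if c == o)
    return matches >= min_matches

challenge = make_challenge(seed=7)

# A live face reflects the flashed sequence (sensor noise aside):
print(verify_liveness(challenge, list(challenge)))   # True

# A replayed recording reflects whatever was flashed at recording time,
# which cannot track a challenge generated just now:
wrong = {"red": "green", "green": "blue", "blue": "yellow", "yellow": "red"}
print(verify_liveness(challenge, [wrong[c] for c in challenge]))  # False
```

Because the challenge is generated per session, an attacker replaying static imagery fails the check even if the face itself looks convincing.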
iProov's technology is deployed across both commercial and government sectors, and Amlani claims it offers a "highly secure, quick solution" with a pass rate exceeding 98%.
“There is a widespread acknowledgment of the deepfake threat,” Amlani concluded. “A global effort is essential to combat this issue, as bad actors operate without borders. It’s time to equip ourselves for this challenge.”