Deepfake Risks on the Rise: An Urgent Threat to Enterprises
Deepfake technology is rapidly emerging as one of the most formidable forms of adversarial AI. According to Deloitte, projected losses are set to skyrocket from $12.3 billion in 2023 to $40 billion by 2027, a compound annual growth rate of 32%, with the proliferation of deepfakes particularly concerning in the banking and financial services sectors.
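As a quick sanity check of that growth figure, the implied compound annual growth rate can be computed directly from the two endpoints. The sketch below assumes a four-year compounding horizon (2023 to 2027); it yields roughly 34%, in the same ballpark as the 32% Deloitte cites, with the small gap likely down to rounding or a slightly different horizon in the original analysis.

```python
# Sanity check of the projected deepfake-loss growth rate.
# Assumed inputs: $12.3B in 2023 growing to $40B by 2027 (4 compounding years).
start_losses = 12.3   # billions USD, 2023
end_losses = 40.0     # billions USD, 2027 projection
years = 2027 - 2023

# CAGR formula: (end / start) ** (1 / years) - 1
cagr = (end_losses / start_losses) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # → Implied CAGR: 34.3%
```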
In the past year alone, deepfake incidents have increased by an astonishing 3,000%, and they are expected to rise by a further 50% to 60% in 2024, reaching 140,000 to 150,000 cases worldwide. The latest generation of generative AI applications equips attackers to create convincing deepfake videos, impersonated voices, and fraudulent documents quickly and cheaply. The Pindrop 2024 Voice Intelligence and Security Report estimates that deepfake fraud targeting contact centers is costing businesses about $5 billion annually, reinforcing the technology's threat to banking and finance.
Bloomberg reported a thriving underground market on the dark web, where scamming software is sold for prices ranging from $20 to several thousand dollars. An infographic based on Sumsub’s Identity Fraud Report 2023 illustrates the alarming increase of AI-powered fraud on a global scale.
Enterprise Preparedness: A Growing Concern
As adversarial AI evolves, it introduces unforeseen attack vectors that complicate security landscapes. Alarmingly, one in three enterprises lacks a strategy to combat the risks associated with deepfakes, particularly those targeting their key executives. The Ivanti 2024 State of Cybersecurity Report highlights that 30% of organizations have no plans to identify or defend against adversarial AI attacks.
The report reveals that 74% of surveyed enterprises are already witnessing AI-powered threats, with 89% believing these threats are just beginning. Furthermore, 60% of CISOs, CIOs, and IT leaders feel unprepared to defend against such attacks. Attackers increasingly deploy deepfakes alongside broader campaigns such as phishing and ransomware, signaling a shift toward more sophisticated threat scenarios fueled by generative AI.
Targeting Top Executives: A New Strategy for Attackers
Cybersecurity experts report a worrying trend: deepfakes are evolving from easily detectable fakes into realistic impersonations, particularly ones targeting CEOs. Industry executives note that aggressive nation-state actors and large cybercriminal organizations are investing resources in generative adversarial network (GAN) technologies. The sophistication of these attacks was highlighted by a notable incident targeting the CEO of the world's largest advertising firm.
In a recent conversation with the Wall Street Journal, CrowdStrike CEO George Kurtz discussed how advancements in AI both bolster cybersecurity defenses and enhance attacker capabilities. "The deepfake technology today is so good," he observed, that it is challenging to discern authenticity, and he emphasized the risks posed by nation-state disinformation campaigns.
CrowdStrike's Intelligence team actively studies what makes deepfakes convincing and tracks how the technology is evolving to maximize its impact on viewers. Kurtz likened the dissemination of information to a pebble creating ripples in a pond, where significant topics amplify and shape public perception.
The Call to Action for Enterprises
Enterprises are at risk of losing the battle against adversarial AI if they fail to keep pace with attackers' rapid evolution in deploying AI for deepfake attacks. With deepfakes becoming increasingly prevalent, the Department of Homeland Security has issued a guide titled “Increasing Threats of Deepfake Identities,” underscoring the urgency for enterprises to strengthen their defenses.
It is crucial for organizations to develop proactive strategies to mitigate the risks posed by deepfake technology and stay ahead in the ongoing AI arms race.