CISOs and CIOs are evaluating the benefits of leveraging generative AI as a continuous learning engine that captures behavioral data, telemetry, intrusion, and breach information against the associated risks. The objective is to establish a new “muscle memory” of threat intelligence to enhance breach prediction and improve SecOps workflows.
However, trust in generative AI remains divided. Recent conversations with CISOs from various manufacturing and service sectors reveal that, despite potential productivity gains across marketing, operations, and security, concerns regarding compromised intellectual property and data confidentiality are primary issues raised by board members.
Deep Instinct's recent survey, "Generative AI and Cybersecurity: Bright Future of Business Battleground?" quantifies insights gathered from CISO interviews. The research indicates that while 69% of organizations have adopted generative AI tools, 46% of cybersecurity professionals believe these tools increase the likelihood of attacks. A staggering 88% of CISOs and security leaders assert that weaponized AI attacks are inevitable.
Eighty-five percent of respondents suspect recent attacks have been powered by generative AI, especially with the emergence of WormGPT, a generative AI tool marketed on underground forums for phishing and business email compromise attacks. Tools like FraudGPT have quickly gained traction on the dark web, demonstrating the rapid commercialization of weaponized AI.
Sven Krasser, Chief Scientist and Senior Vice President at CrowdStrike, noted that attackers are accelerating the weaponization of large language models (LLMs) and generative AI. While this increases the pace and volume of attacks, Krasser emphasizes that it does not fundamentally enhance the quality of those attacks. He suggests that cloud-based security leveraging AI to correlate global signals can effectively counter these evolving threats.
Max Heinemeyer, Director of Threat Hunting at Darktrace, warns that businesses must implement cyber AI for defense before offensive AI becomes widespread. He asserts that when the conflict devolves into algorithms battling algorithms, only autonomous responses at machine speeds will effectively counteract AI-enhanced attacks.
The market for generative AI applications is rapidly expanding. The ability of generative AI to learn continuously is particularly advantageous for analyzing vast data generated by endpoints. Ongoing updates to threat assessment and risk prioritization algorithms are anticipated to yield new use cases that CISOs and CIOs hope will enhance threat prediction. For example, Ivanti's partnership with Securin aims to deliver real-time risk prioritization and improve security postures.
Ivanti and Securin’s collaboration combines Securin’s Vulnerability Intelligence (VI) with Ivanti Neurons for a near-real-time threat intelligence ecosystem. Dr. Srinivas Mukkamala, Chief Product Officer at Ivanti, emphasizes the benefits of AI-augmented human intelligence in providing thorough threat intelligence and risk prioritization.
The generative AI cybersecurity market is projected to soar from $1.6 billion in 2022 to $11.2 billion by 2032, reflecting a compound annual growth rate (CAGR) of 22%. Canalys anticipates that generative AI will underpin over 70% of cybersecurity operations within the next five years.
Forrester categorizes generative AI use cases into three primary areas: content creation, behavior prediction, and knowledge articulation. Allie Mellen, Forrester Principal Analyst, highlights that while the integration of AI and machine learning in security tools is not new, it is evolving to more effectively utilize historical data and risk-scoring methodologies.
Gartner predicts that 80% of applications will feature generative AI capabilities by 2026, indicating a rapid adoption across organizations. CISOs emphasize that the adaptability of a platform is crucial for maximizing the value of generative AI applications, particularly in enhancing broader zero-trust security frameworks.
To effectively integrate generative AI within a zero-trust framework, CISOs recommend securing all applications, platforms, tools, and endpoints through continuous monitoring, dynamic access controls, and ongoing verification. The incorporation of generative AI could introduce new attack vectors, prompting CISOs to prioritize defenses against query attacks, prompt injections, model manipulation, and data poisoning.
The management of knowledge using generative AI is emerging as a key use case, replacing lengthy and costly system integration projects. At RSAC 2023, several vendors launched ChatGPT-based solutions, including Google Security AI Workbench, Microsoft Security Copilot, and others, demonstrating the trend toward AI-enhanced security.
CrowdStrike’s introduction of Charlotte AI aims to amplify productivity for security analysts by automating repetitive tasks and improving threat detection processes using conversational AI. Set to roll out over the next year, Charlotte AI captures real-time interactions to track threats effectively.
Cloud configuration errors remain a significant target for attackers, with exploitation incidents up 95% year-over-year. In response to this growing threat landscape, experts predict increased M&A activity aimed at enhancing multi-cloud and hybrid cloud security. CrowdStrike’s acquisition of Bionic exemplifies this trend, reinforcing the importance of generative AI in strengthening overall cybersecurity efforts.