Zscaler Reports 600% Surge in Enterprise AI Adoption in Under a Year, Highlighting Data Security Risks

Enterprises are rapidly adopting artificial intelligence (AI) and machine learning (ML) tools: AI/ML transactions grew nearly 600%, from 521 million in April 2023 to 3.1 billion by January 2024. In response to heightened security concerns, organizations now block 18.5% of all AI/ML transactions, a 577% increase in blocked transactions over nine months.

CISOs and the organizations they safeguard have valid reasons for their cautious approach, leading to unprecedented levels of blocked AI/ML transactions. Attackers have adapted by weaponizing large language models (LLMs) to infiltrate organizations undetected. This rise of adversarial AI represents a burgeoning threat that often goes unnoticed.

According to Zscaler’s ThreatLabz 2024 AI Security Report, enterprises must adopt scalable cybersecurity strategies to protect their expanding AI/ML tools. Key issues highlighted in the report include data protection, AI data quality management, and privacy concerns. Analyzing over 18 billion transactions from April 2023 to January 2024, ThreatLabz examined current AI and ML tool usage in various sectors.

Industries such as healthcare, finance, insurance, services, technology, and manufacturing show both heavy adoption of AI/ML tools and heightened exposure to cyberattacks. Manufacturing generates the highest volume of AI traffic, accounting for 20.9% of transactions, closely followed by finance and insurance at 19.9%, with services at 16.8%.

Blocking transactions represents a temporary but swift response

In their effort to shield against potential cyberattacks, CISOs and their teams are blocking record numbers of AI/ML transactions. This proactive measure aims to protect high-risk sectors from a wave of cyber threats.

Currently, ChatGPT is both the most used and the most blocked AI tool, followed by OpenAI, Fraud.net, Forethought, and Hugging Face. The most frequently blocked domains include Bing.com, Divo.ai, Drift.com, and Quillbot.com. Between April 2023 and January 2024, enterprises blocked more than 2.6 billion transactions.
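
The mechanics of this blocking differ by vendor, but at its core it is a categorized domain policy applied to each outbound transaction. The sketch below is a minimal illustration under assumptions: the `Transaction` record and hard-coded category sets are hypothetical stand-ins for the curated, continuously updated AI/ML domain feeds a real secure web gateway would use.

```python
from dataclasses import dataclass

# Hypothetical category sets; real gateways use curated, continuously
# updated AI/ML domain feeds rather than hard-coded lists.
BLOCKED_AI_DOMAINS = {"bing.com", "divo.ai", "drift.com", "quillbot.com"}
MONITORED_AI_DOMAINS = {"chat.openai.com", "huggingface.co"}

@dataclass
class Transaction:
    user: str
    domain: str

def matches(domain: str, ruleset: set[str]) -> bool:
    """True if domain equals, or is a subdomain of, any rule entry."""
    return any(domain == d or domain.endswith("." + d) for d in ruleset)

def policy_decision(tx: Transaction) -> str:
    """Return a coarse verdict for one outbound AI/ML transaction."""
    domain = tx.domain.lower()
    if matches(domain, BLOCKED_AI_DOMAINS):
        return "block"    # deny and log for security review
    if matches(domain, MONITORED_AI_DOMAINS):
        return "monitor"  # allow, but inspect for sensitive data
    return "allow"

print(policy_decision(Transaction("alice", "app.quillbot.com")))  # block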

Manufacturing blocks just 15.65% of AI transactions, a concerningly low figure considering its susceptibility to cyberattacks, particularly ransomware. Conversely, the finance and insurance sector blocks 37.16% of transactions, reflecting heightened data security and privacy concerns. Alarmingly, the healthcare industry blocks only 17.23% of AI transactions, raising questions about its commitment to safeguarding sensitive data.

Disruptions in critical sectors like healthcare and manufacturing can lead to significant ransomware payouts. The recent ransomware attack on Change Healthcare, a UnitedHealth Group subsidiary, illustrates how a coordinated assault can incapacitate an entire supply chain.

Blocking is a short-term solution to a much larger challenge

To go beyond merely blocking, organizations should leverage the telemetry capabilities of advanced cybersecurity platforms. CrowdStrike, Palo Alto Networks, and Zscaler are among the vendors promoting insights derived from telemetry data.

CrowdStrike CEO George Kurtz has emphasized the importance of linking weak signals from multiple endpoints to enhance detection. The approach extends to third-party telemetry, enabling deeper insights and better detection of novel threats.
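
Kurtz's weak-signal idea can be illustrated with a toy correlation rule: events that are individually too low-confidence to act on are grouped by user and time window, and an alert fires only when their combined score crosses a threshold. The event schema, weights, and threshold below are assumptions for illustration, not CrowdStrike's actual detection logic.

```python
from collections import defaultdict

# Illustrative severity weights for individually weak signals.
WEIGHTS = {"anomalous_login": 0.3, "new_process": 0.2, "dns_beacon": 0.4}
ALERT_THRESHOLD = 0.7

def correlate(events: list[dict], window_s: int = 3600) -> list[str]:
    """Group weak signals by (user, time bucket); flag combined scores."""
    buckets = defaultdict(float)
    for e in events:
        key = (e["user"], e["ts"] // window_s)
        buckets[key] += WEIGHTS.get(e["type"], 0.0)
    return [f"alert: {user}" for (user, _), score in buckets.items()
            if score >= ALERT_THRESHOLD]

events = [
    {"user": "bob", "ts": 1000, "type": "anomalous_login"},  # weak alone
    {"user": "bob", "ts": 1500, "type": "dns_beacon"},       # weak alone
]
print(correlate(events))  # combined score 0.7 -> ["alert: bob"]
```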

Key cybersecurity vendors with deep AI expertise and decades of experience in ML include BlackBerry Persona, Broadcom, Cisco Security, CrowdStrike, CyberArk, Cybereason, Ivanti, SentinelOne, Microsoft, McAfee, Sophos, and VMware Carbon Black. These companies are likely to train their LLMs on AI-driven attack data to keep pace with attackers' increasingly sophisticated methods.

A new, more lethal AI threat landscape has emerged

According to Zscaler’s report, AI-driven risks can be categorized into two main areas: the data protection and security risks associated with enterprise AI tools, and the new cyber threat landscape fueled by generative AI and automation.

CISOs face formidable challenges in defending against various AI attack techniques outlined in the report. Addressing employee negligence when utilizing ChatGPT and ensuring confidential data is not inadvertently shared should be critical board-level discussions. Prioritizing risk management is essential to any robust cybersecurity strategy.

Safeguarding intellectual property from leaks via ChatGPT, controlling shadow AI, and achieving data privacy and security are integral to a successful AI/ML strategy.
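
As a minimal sketch of one such control, the snippet below scans outbound generative AI prompts for secret-like material and redacts matches before a request leaves the network. The regex patterns are illustrative assumptions; production DLP engines use far richer classifiers and policy actions.

```python
import re

# Illustrative patterns only; real DLP uses far richer classifiers.
SENSITIVE_PATTERNS = [
    re.compile(r"\b(?:\d[ -]*?){13,16}\b"),                   # card-like digits
    re.compile(r"(?i)\b(api[_-]?key|secret|password)\s*[:=]"),
    re.compile(r"(?i)\bconfidential\b|\binternal only\b"),
]

def redact_prompt(prompt: str) -> tuple[str, bool]:
    """Return (possibly redacted prompt, whether anything was flagged)."""
    flagged = False
    for pattern in SENSITIVE_PATTERNS:
        prompt, n = pattern.subn("[REDACTED]", prompt)
        flagged = flagged or n > 0
    return prompt, flagged

text, hit = redact_prompt("Summarize this. api_key = sk-123, internal only.")
print(hit, text)  # True, with both matches replaced
```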

Last year, Alex Philips, CIO at National Oilwell Varco (NOV), briefed his board on generative AI, stressing the need to understand both the advantages and the risks of ChatGPT. Philips regularly updates the board on developments in generative AI, setting informed expectations about the security measures needed to prevent a major breach.

Striking a balance between productivity and security is vital in the new AI threat landscape. Zscaler CEO Jay Chaudhry was himself the target of a vishing and smishing attempt in which attackers impersonated him in WhatsApp messages to try to trick an employee into revealing sensitive information. Zscaler's systems thwarted the attack, which typifies a growing trend of schemes targeting top executives and technology leaders.

Attackers are increasingly using AI to orchestrate fast-moving ransomware attacks. Zscaler reports that AI-driven ransomware is now part of nation-state attackers' arsenals and is growing in frequency. Using generative AI, attackers compile comprehensive tables of the vulnerabilities in an organization's firewalls and VPNs, then use that intelligence to optimize exploit code and tailor payloads to specific environments.

Moreover, Zscaler highlights how generative AI can identify weaknesses within enterprise supply chains, revealing optimal connection pathways to the core network. While strong security measures may be in place, downstream vulnerabilities often present the greatest risks. Attackers continuously refine their tactics using generative AI, leading to sophisticated and targeted assaults that are increasingly challenging to detect.

Ultimately, adversaries aim to incorporate generative AI throughout the ransomware attack chain, automating reconnaissance and code exploitation to generate advanced polymorphic malware and ransomware. By streamlining critical components of the attack process, threat actors can execute faster, more targeted, and sophisticated attacks against enterprises.
