New research from Menlo Security highlights the urgent cybersecurity challenges posed by the rapid adoption of generative AI in enterprises. As tools like ChatGPT become integral to daily workflows, businesses must reevaluate their security strategies.
“Employees are adopting AI in their daily tasks. Our controls must not only block misuse but also prevent unregulated access,” stated Andrew Harding, VP of Product Marketing at Menlo Security, in an exclusive interview. “While there's been significant growth in generative AI usage within enterprises, security and IT teams face persistent challenges. We need effective tools that apply robust controls to AI applications, enabling CISOs to manage risks while leveraging the productivity benefits of GenAI.”
A Surge in AI Usage Amid Risks
Menlo Security's report reveals alarming trends: enterprise visits to generative AI sites have surged more than 100% in the last six months, and the number of frequent users has grown by 64%. This rapid integration has exposed new vulnerabilities.
Although many organizations are strengthening their security measures for generative AI, the researchers found that most still apply policies on a domain-by-domain basis, an approach that proves ineffective. “Organizations are strengthening security, but they’re missing the mark. Policies applied on a domain basis are insufficient,” Harding emphasized.
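To see why, consider a minimal sketch of a domain-by-domain policy; the domain names and policy logic below are hypothetical illustrations, not Menlo Security's implementation. A static blocklist covers only the sites known when the policy was written, so any newly launched generative AI service is allowed by default until an administrator updates the list.

```python
# Hypothetical illustration: a static, domain-by-domain blocklist lags behind new sites.
# Domain names and policy logic are illustrative only, not Menlo Security's product.

BLOCKED_GENAI_DOMAINS = {"chat.openai.com", "bard.google.com"}  # snapshot taken months ago

def domain_policy_allows(host: str) -> bool:
    """Domain-by-domain policy: block listed sites, allow everything else."""
    return host not in BLOCKED_GENAI_DOMAINS

# A GenAI platform launched last week is not on the list, so traffic to it passes unchecked.
print(domain_policy_allows("brand-new-genai-tool.example"))  # True
```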
This fragmented approach struggles to keep up with the steady emergence of new generative AI platforms. The report indicates that attempted file uploads to generative AI sites have risen by 80% in six months, driven by the addition of file-upload capabilities to these platforms. And the risks extend beyond potential data loss.
Experts warn that generative AI can significantly amplify phishing threats. Harding remarked, “AI-powered phishing is simply smarter phishing. Enterprises need real-time protection to prevent AI-driven scams from becoming a significant issue.”
From Novelty to Necessity
Generative AI, particularly since the launch of ChatGPT, has gone from technological novelty to business necessity. OpenAI introduced its first generative pre-trained model, GPT-1, in June 2018. That was followed by advances such as Google's PaLM, a 540-billion-parameter model announced in April 2022.
The public fascination surged with the introduction of DALL-E in early 2021, but it was ChatGPT's debut in November 2022 that sparked widespread engagement. Users quickly began incorporating these tools into various tasks, from crafting emails to debugging code, showcasing the versatility of AI.
However, this rapid integration brings significant risks that are often overlooked. Generative AI systems are only as secure, ethical, and accurate as the data they are trained on, and they can inadvertently expose biases, spread misinformation, or disclose sensitive information.
Moreover, these models pull data from vast sections of the internet, leading to challenges in controlling what content is ingested. If proprietary information is publicly posted, these AI systems can inadvertently absorb and later reveal it.
The Balancing Act of Security and Innovation
To effectively balance security and innovation, experts advocate a multi-layered approach. Harding suggests “implementing copy and paste limits, establishing security policies, monitoring sessions, and enforcing group-level controls across generative AI platforms.”
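As a rough illustration of what such layered, group-level controls might look like in practice, here is a minimal sketch; the field names and policy values are assumptions made for illustration, not Menlo Security's product or Harding's specification.

```python
# Hypothetical sketch of layered controls applied to generative AI apps as a category,
# scoped by user group rather than by individual domain. Field names are illustrative.
from dataclasses import dataclass

@dataclass
class GenAIControls:
    max_paste_chars: int          # copy/paste limit for text sent to a prompt
    allow_file_upload: bool       # whether file uploads to GenAI sites are permitted
    log_sessions: bool            # record sessions for later security review
    user_groups: tuple            # groups the policy applies to

# Tighter limits for higher-risk groups than a blanket allow/deny rule would provide.
engineering_policy = GenAIControls(
    max_paste_chars=1_000,
    allow_file_upload=False,
    log_sessions=True,
    user_groups=("engineering", "contractors"),
)

def paste_allowed(policy: GenAIControls, clipboard_text: str) -> bool:
    """Enforce the copy/paste limit before text reaches a generative AI prompt."""
    return len(clipboard_text) <= policy.max_paste_chars

print(paste_allowed(engineering_policy, "short snippet"))  # True
```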
Organizations must learn from past technological shifts. Cloud computing and mobile devices each introduced new risks of their own, forcing ongoing adaptation of security strategies. The same proactive approach is essential for generative AI.
The time to act is now. “Generative AI site visits and active users are on the rise, but security and IT teams continue to face challenges,” Harding warned.
To safeguard their operations, businesses must evolve their security strategies as quickly as they adopt generative AI. Striking a balance between security and innovation is crucial, lest the risks associated with generative AI spiral out of control.