Just as cloud platforms rapidly evolved to provide enterprise computing infrastructure, Menlo Ventures envisions the modern AI stack following a similar growth path, with immense value creation potential akin to public cloud platforms.
The venture capital firm highlights that the foundational AI models currently in use mirror the early days of public cloud services. Achieving the right balance between AI and security is crucial for the evolving market to fulfill its potential.
Are you ready for AI agents? Menlo Ventures’ latest blog post, “Part 1: Security for AI: The New Wave of Startups Racing to Secure the AI Stack,” elaborates on how the intersection of AI and security can drive new market growth.
“One analogy I’ve been drawing is that these foundational models are very much like the public clouds we know today, such as AWS and Azure. However, 12 to 15 years ago, as that infrastructure-as-a-service layer began, we witnessed massive value creation once the new foundation was established,” explained Rama Sekhar, Menlo Venture’s partner focusing on cybersecurity, AI, and cloud infrastructure.
Sekhar added, “We believe something similar is on the horizon; the foundation model providers are at the base of the infrastructure stack.”
Addressing Security Challenges to Accelerate Generative AI Growth
In an interview, Sekhar and Feyza Haskaraman, Menlo Ventures Principal specializing in Cybersecurity, SaaS, Supply Chain, and Automation, emphasized that AI models are central to a new modern AI stack. This stack relies on a continuous flow of sensitive enterprise data for self-learning. They noted that the rise of AI results in an exponential increase in threat surfaces, with large language models (LLMs) becoming prime targets.
Securing LLMs using current tools proves challenging, creating a trust gap in enterprises and hindering generative AI adoption. This gap stems from the disconnect between the hype surrounding generative AI and its actual implementation. Meanwhile, attackers are increasingly using AI-based techniques, which intensifies corporate concerns about losing the AI competition.
To unlock generative AI's true market potential, Sekhar and Haskaraman believe it is essential to address security concerns. Menlo Ventures’ survey identified three primary barriers to generative AI adoption: unproven ROI, data privacy issues, and the misconception that enterprise data is difficult to utilize with AI.
Improving security for AI can help alleviate data privacy concerns while also addressing the other two barriers mentioned. They highlighted that OpenAI’s models have recently been subjected to cyberattacks, including a DoS attack last November that affected their API and ChatGPT services, leading to numerous outages.
Governance, Observability, and Security: Essential Foundations
Menlo Ventures asserts that governance, observability, and security are foundational elements necessary for scaling AI security. These components form the bedrock of their market map.
Governance tools are experiencing rapid growth. Media reports indicate a surge in AI-based governance and compliance startups, particularly cloud-based solutions that offer time-to-market and global scalability advantages. Tools like Credo and Cranium assist businesses in tracking their AI services, assessing safety and security risks, and ensuring comprehensive awareness of AI usage within the organization—all crucial for protecting and monitoring LLMs.
Observability tools are vital for monitoring models and aggregating logs on access, inputs, and outputs, aiding in the detection of misuse and enabling full auditability. Menlo Ventures points to startups like Helicone and CalypsoAI as key players addressing these needs in the solution stack.
Security solutions focus on establishing trust boundaries. Sekhar and Haskaraman argue for stringent controls around model usage, both internally and externally. Menlo Ventures is particularly interested in AI firewall providers, such as Robust Intelligence and Prompt Security, which validate input and output, safeguard against prompt injections, and detect personally identifiable information (PII). Companies like Private AI and Nightfall specialize in identifying and redacting sensitive data, while firms like Lakera and Adversa aim to automate red teaming activities to test the robustness of security measures. Threat detection solutions like Hiddenlayer and Lasso Security are also crucial in monitoring LLMs for suspicious behavior. Additionally, solutions like DynamoFL and FedML for federated learning, Tonic and Gretel for creating synthetic data, and Private AI or Kobalt Labs for identifying sensitive information are integral to the Security for AI Market Map below.
Prioritizing Security for AI in DevOps
With a significant portion of enterprise applications utilizing open-source solutions, securing software supply chains is another area where Menlo Ventures aims to mitigate the trust gap.
Sekhar and Haskaraman contend that AI security must be inherently integrated into the DevOps process, ensuring it is foundational to enterprise application architecture. They stressed that embedding security for AI should be so pervasive that its protective value helps bridge the current trust gap, thereby facilitating broader generative AI adoption.