Lakera Secures $20M to Safeguard Enterprises from LLM Vulnerabilities

Lakera Secures $20 Million in Series A Funding to Fortify Generative AI Security

Lakera, a Swiss startup dedicated to safeguarding generative AI applications from malicious prompts and other threats, has successfully raised $20 million in a Series A funding round led by the European venture capital firm Atomico. As generative AI continues to gain prominence, notably through popular tools like ChatGPT, concerns around security and data privacy have emerged in enterprise environments.

To provide context, large language models (LLMs) serve as the backbone of generative AI, enabling machines to interpret and produce human-like text. Whether generating poetry or summarizing legal documents, LLMs require specific “prompts” to guide their outputs. Unfortunately, these prompts can be crafted to deceive the system into revealing confidential information or granting unauthorized access to private networks. This issue of “prompt injection” poses a significant and escalating threat, which Lakera is working to combat.

Established in Zurich in 2021, Lakera officially launched its services last October with an initial investment of $10 million, focusing on protecting organizations from the inherent vulnerabilities of LLMs, such as data leakage and prompt injections. The technology works seamlessly with any LLM, including those developed by OpenAI, Google, Meta, and Anthropic.

At its core, Lakera presents itself as a "low-latency AI application firewall," designed to secure data flows to and from generative AI applications. Its flagship product, Lakera Guard, utilizes a robust database that gathers insights from diverse sources, including publicly available open-source datasets from Hugging Face, proprietary machine learning research, and an interactive game called Gandalf. In this game, users attempt to outsmart the system to access a hidden password, thus enhancing its security models.

Gandalf becomes increasingly complex with each level, enabling Lakera to formulate a "prompt injection taxonomy" that categorizes various attack types. “We are AI-first, developing our own models to detect prompt injections and other malicious activities in real time,” explained David Haber, Lakera’s co-founder and CEO. “Our systems learn continuously from extensive generative AI interactions, enabling our detection models to adapt and evolve alongside the changing threat landscape.”

Lakera asserts that by integrating its technology with the Lakera Guard API, companies can significantly enhance their defenses against harmful prompts. Additionally, Lakera has developed advanced models to identify toxic content, covering areas like hate speech, sexual content, violence, and profanity—tools particularly beneficial for public-facing applications such as chatbots.

Like its prompt defense solutions, companies can easily implement Lakera’s content moderation capabilities with a single line of code, allowing for a centralized policy control dashboard to adjust content thresholds as needed.

With its recent $20 million funding boost, Lakera is positioned for expansion, especially in the U.S. The company already boasts a growing number of notable North American clients, including AI startup Respell and Canadian unicorn Cohere.

“Large enterprises, SaaS providers, and AI model developers are rapidly advancing to deploy secure AI applications,” Haber noted. “Financial institutions are recognizing the risks related to security and compliance, becoming early adopters of our solutions, but interest is surging across various sectors. Companies realize the necessity of incorporating generative AI into their core operations to maintain a competitive edge.”

In addition to lead investor Atomico, the Series A funding round saw participation from Dropbox’s venture capital arm, Citi Ventures, and Redalpine.

Most people like

Find AI tools in YBX