Enkrypt AI Secures $2.35 Million Seed Funding for Generative AI Security
Boston-based startup Enkrypt AI, which provides a control layer for the secure use of generative AI, has raised $2.35 million in a seed funding round led by Boldcap.
Although the investment is modest by AI-funding standards, Enkrypt’s solution addresses a critical need: the private, secure, and compliant deployment of generative AI models. Founded by Yale PhD graduates Sahil Agarwal and Prashanth Harshangi, Enkrypt claims its technology can significantly accelerate enterprise adoption of generative AI, speeding up deployment by as much as ten times.
Harshangi remarked, “We’re championing a paradigm where trust and innovation coalesce, enabling the deployment of AI technologies with the confidence that they are as secure and reliable as they are revolutionary.”
In addition to Boldcap, participants in the seed round include Berkeley SkyDeck, Kubera VC, Arka VC, Veredas Partners, Builders Fund, and several angel investors from the AI, healthcare, and enterprise sectors.
What Does Enkrypt AI Offer?
As the demand for generative AI grows, many companies are eager to leverage this technology to enhance workflows and boost efficiency. However, deploying foundation models comes with safety challenges, including the need to maintain data privacy, protect against security threats, and ensure regulatory compliance both before and after deployment.
Currently, most organizations manage these issues manually through internal teams or third-party consultants. This traditional approach, while functional, is time-consuming and can delay AI projects by up to two years, resulting in missed business opportunities.
Founded in 2023, Enkrypt aims to bridge this gap with Sentry, an all-in-one solution that offers visibility and oversight of large language model (LLM) usage and performance across business functions. Sentry safeguards sensitive information, protects against security threats, and ensures compliance through automated monitoring and robust access controls.
“Sentry serves as a secure enterprise gateway, enabling model access controls alongside data privacy and model security,” CEO Sahil Agarwal explained. “It routes all LLM interactions through our proprietary guardrails to prevent data breaches and maintain regulatory compliance.”
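The gateway pattern Agarwal describes, routing every model call through input and output checks, can be illustrated with a minimal sketch. The code below is not Enkrypt’s API; all names (GuardrailGateway, check_prompt, and the toy heuristics) are hypothetical stand-ins for the general architecture.

```python
# Illustrative sketch of an LLM guardrail gateway. All names and heuristics
# are hypothetical; this is not Enkrypt's implementation, only the general
# "route every LLM call through guardrails" pattern described above.
from dataclasses import dataclass


@dataclass
class Verdict:
    allowed: bool
    reason: str = ""


def check_prompt(prompt: str) -> Verdict:
    """Input guardrail: block obvious injection phrases (toy heuristic)."""
    banned = ("ignore previous instructions", "reveal your system prompt")
    for phrase in banned:
        if phrase in prompt.lower():
            return Verdict(False, f"possible prompt injection: {phrase!r}")
    return Verdict(True)


def check_response(text: str) -> Verdict:
    """Output guardrail: placeholder for toxicity/PII scanning."""
    return Verdict(True)


class GuardrailGateway:
    """Routes every LLM interaction through input and output checks."""

    def __init__(self, llm_call):
        self.llm_call = llm_call  # any callable: prompt -> completion

    def complete(self, user: str, prompt: str) -> str:
        verdict = check_prompt(prompt)
        if not verdict.allowed:
            raise PermissionError(f"blocked for {user}: {verdict.reason}")
        response = self.llm_call(prompt)
        if not check_response(response).allowed:
            return "[response withheld by policy]"
        return response


# Usage: wrap any model endpoint behind the gateway.
gateway = GuardrailGateway(lambda p: f"echo: {p}")
print(gateway.complete("alice", "Summarize our Q3 results."))
```

Because the gateway sits between users and every model, access control, logging, and compliance checks live in one place rather than being re-implemented per application.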
Enhancing Security and Compliance for Generative AI
Sentry’s guardrails, powered by Enkrypt’s proprietary technology, proactively block prompt injection attacks and data leaks. The solution can sanitize model data and anonymize sensitive personal information, while continuously moderating AI-generated content for toxicity and relevance.
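As a rough illustration of the sanitization step, the sketch below redacts common PII patterns before text reaches a model. Production systems typically rely on trained entity-recognition models rather than regexes; the patterns and placeholder labels here are illustrative assumptions, not Enkrypt’s method.

```python
# Toy sketch of PII anonymization: replace common patterns with typed
# placeholders before a prompt ever reaches the model. Regexes are
# illustrative only; real systems use trained NER models.
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}


def anonymize(text: str) -> str:
    """Substitute each PII match with its label, e.g. [EMAIL]."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text


print(anonymize("Contact jane.doe@corp.com or 555-867-5309, SSN 123-45-6789."))
# -> Contact [EMAIL] or [PHONE], SSN [SSN].
```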
CISOs and product leaders gain full visibility into all generative AI initiatives, enabling them to apply Enkrypt’s guardrails where needed and reduce regulatory, financial, and reputational risk.
Testing Sentry's Impact on Generative AI Vulnerabilities
While Enkrypt is still pre-revenue and does not yet share specific growth metrics, its Sentry technology is under evaluation by midsize and large companies in regulated sectors such as finance and life sciences.
For instance, one Fortune 500 company using Meta’s Llama2-7B model saw jailbreak vulnerabilities fall from 6% to 0.6%, a tenfold reduction, after implementing Sentry. This allowed the enterprise to adopt LLMs more quickly, shrinking rollout timelines from years to weeks for various applications across departments.
Agarwal highlighted that companies want a single comprehensive solution rather than multiple point products for handling sensitive data leaks, prompt injection attacks, and compliance management, underscoring the all-in-one nature of Enkrypt’s offering.
Looking ahead, Enkrypt plans to expand its solution to a broader range of enterprises, demonstrating its capabilities across diverse models and environments. This step is vital: safety has become a prerequisite for any organization developing or deploying generative AI.
“We are currently collaborating with design partners to refine the product. Our main competitor is Protect AI, which recently acquired Laiyer AI to enhance their security and compliance offerings,” Agarwal noted.
In addition, the U.S. National Institute of Standards and Technology (NIST) has formed an AI safety consortium of more than 200 firms to establish foundational standards for AI safety measurement.