Protect AI Strengthens LLM Security Through Open Source Acquisition Initiatives

Securing AI and ML Workflows: Protect AI's Innovative Approach

Securing artificial intelligence (AI) and machine learning (ML) workflows poses a multifaceted challenge involving numerous components.

Seattle-based startup Protect AI is addressing this challenge with its latest acquisition of Laiyer AI, the leading firm behind the LLM Guard open-source project. The financial details of the transaction remain undisclosed. This acquisition will enhance Protect AI's platform to better safeguard organizations against the potential risks associated with large language models (LLMs).

Radar: A Comprehensive AI Security Solution

Protect AI's core commercial platform, Radar, offers visibility, detection, and management capabilities for AI/ML models. Following a successful $35 million Series A funding round in July 2023, the company aims to expand its AI security initiatives. Daryan Dehghanpisheh, the president and founder of Protect AI, stated, “We want to drive the industry to adopt MLSecOps, which fundamentally helps organizations see, understand, and manage AI-related risks and security vulnerabilities.”

Enhancements from LLM Guard and Laiyer AI

The LLM Guard project, managed by Laiyer AI, provides governance for LLM operations. It features input controls that protect against prompt injection attacks—an increasing concern in AI utilization. Additionally, LLM Guard mitigates the risk of personally identifiable information (PII) leaks and toxic language. On the output side, it protects users from various threats, including malicious URLs.
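The input-and-output scanning pattern described above can be sketched as a small pipeline: each scanner either sanitizes the text or rejects it, and scanners run in sequence. The class names, `ScanResult` shape, and `scan_prompt` helper below are illustrative inventions, not the actual LLM Guard API, and the toy regex and phrase rules stand in for the trained detectors a real guard would use.

```python
import re
from dataclasses import dataclass

@dataclass
class ScanResult:
    sanitized: str
    valid: bool
    reason: str = ""

class PIIScanner:
    """Redacts email addresses (a stand-in for full PII detection)."""
    EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

    def scan(self, text: str) -> ScanResult:
        redacted = self.EMAIL.sub("[REDACTED_EMAIL]", text)
        return ScanResult(redacted, True)

class PromptInjectionScanner:
    """Flags a few known injection phrases (real detectors use ML classifiers)."""
    PATTERNS = ("ignore previous instructions", "disregard the system prompt")

    def scan(self, text: str) -> ScanResult:
        lowered = text.lower()
        for phrase in self.PATTERNS:
            if phrase in lowered:
                return ScanResult(text, False, f"possible prompt injection: {phrase!r}")
        return ScanResult(text, True)

def scan_prompt(scanners, prompt: str) -> ScanResult:
    """Run each scanner in order; stop at the first failure,
    otherwise pass the sanitized text along the chain."""
    for scanner in scanners:
        result = scanner.scan(prompt)
        if not result.valid:
            return result
        prompt = result.sanitized
    return ScanResult(prompt, True)
```

Output scanning follows the same shape, with scanners that check model responses for malicious URLs or toxic language before they reach the user.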

Dehghanpisheh reiterated Protect AI’s commitment to keeping LLM Guard open source, while a commercial offering under the Laiyer AI name will add enhanced performance and enterprise functionality.

Moreover, Protect AI will integrate LLM Guard technology into its broader platform, ensuring protection across all stages of AI model development and deployment.

Building on Open Source Experience

Protect AI's evolution from open-source initiatives to commercial products is evidenced by its leadership of the ModelScan open-source project, which identifies security risks in ML models. This technology underpins Protect AI's recently announced Guardian technology, aimed at scanning for vulnerabilities in ML models.

Scanning models for vulnerabilities is more complex than traditional virus scanning: unlike conventional software, ML models have no catalog of known vulnerability signatures to match against. “A model is a self-executing piece of code,” Dehghanpisheh explained. “Malicious executable calls can easily be embedded, resulting in significant risks.” Protect AI’s Guardian technology is designed to detect such threats.

Towards a Comprehensive AI Security Platform

Dehghanpisheh envisions Protect AI evolving beyond individual security products. In the current landscape, organizations often utilize various technologies from multiple vendors. Protect AI aims to unify its tools within the Radar platform, integrating seamlessly with existing security tools such as SIEM (Security Information and Event Management) systems.

Radar provides a comprehensive overview of the components within an AI model, essential for governance and security. With Guardian technology, Protect AI can identify potential risks before deployment, while LLM Guard mitigates usage risks. Ultimately, Protect AI strives to deliver a holistic enterprise AI security solution, allowing organizations to manage all AI security policies from a unified platform.

“You’ll have one policy for enterprise-wide AI security,” Dehghanpisheh concluded.
