Can Generative AI Be Safe? WitnessAI's Approach to Risk Management
Generative AI can produce content at remarkable speed, but it also carries real risks, from biased to outright toxic output. So, can generative AI ever be "safe"? Rick Caccia, CEO of WitnessAI, believes it can.
“Securing AI models is a significant challenge, particularly for AI researchers; however, it differs from securing usage,” Caccia, former SVP of Marketing at Palo Alto Networks, explained in an interview. “I liken it to a sports car: having a powerful engine, or model, is irrelevant without effective brakes and steering. The controls are essential for safe and efficient operation.”
The enterprise sector is eager for these controls. Companies are cautiously optimistic about generative AI's productivity potential but remain wary of its limitations. A recent IBM poll found that 51% of CEOs are hiring for generative AI roles that did not exist until this year. Yet only 9% of companies say they are prepared to manage the technology's risks, including threats to privacy and intellectual property, according to a Riskonnect survey.
WitnessAI's platform intercepts activity between employees and the custom generative AI models their employer uses (not models gated behind an API, such as OpenAI's GPT-4, but openly available models run in-house, along the lines of Meta's Llama 3) and applies risk-mitigating policies to protect sensitive data.
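WitnessAI has not published implementation details, but the general pattern is a policy gateway in the request path. The following is a minimal sketch of that idea, assuming hypothetical team names and rules; it is an illustration of the technique, not WitnessAI's actual design:

```python
from dataclasses import dataclass

@dataclass
class PolicyDecision:
    allowed: bool
    reason: str = ""

# Hypothetical rules: phrases each team may not send to a model.
BLOCKED_TOPICS = {
    "finance": ["unreleased earnings"],
    "engineering": ["internal code"],
}

def check_policies(team: str, prompt: str) -> PolicyDecision:
    """Identity-based policy check applied before any model call."""
    for phrase in BLOCKED_TOPICS.get(team, []):
        if phrase in prompt.lower():
            return PolicyDecision(False, f"'{phrase}' is off-limits for {team}")
    return PolicyDecision(True)

def gateway(team: str, prompt: str, model_call) -> str:
    """Intercept the prompt; forward it to the self-hosted model only if allowed."""
    decision = check_policies(team, prompt)
    if not decision.allowed:
        return f"Blocked by policy: {decision.reason}"
    return model_call(prompt)

# Usage with a stand-in for a locally hosted model such as Llama 3:
print(gateway("finance", "Summarize our unreleased earnings report", lambda p: "..."))
# -> Blocked by policy: 'unreleased earnings' is off-limits for finance
```

Because the gateway sees every prompt before the model does, policy enforcement, logging, and redaction can all hang off the same choke point.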
“One of the key benefits of enterprise AI is its ability to unlock and democratize access to essential data, enabling employees to perform their jobs more effectively. However, inadvertently overexposing sensitive data or allowing data breaches poses significant risks,” Caccia stated.
WitnessAI offers several modular solutions designed to mitigate various generative AI-related risks. For instance, one module sets rules to prevent certain teams from misusing generative AI tools, such as inquiring about unreleased earnings reports or sharing internal code. Another module automatically redacts proprietary information from prompts and implements protections against prompt injection attacks designed to force models off-script.
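A redaction module of the kind described above could work along these lines. This is a deliberately simple sketch using regular expressions, with all patterns assumed for illustration; a production system would rely on far more robust detectors:

```python
import re

# Illustrative patterns for data a company might deem sensitive.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "API_KEY": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def redact(prompt: str) -> str:
    """Replace sensitive substrings with placeholder tags before the
    prompt is forwarded to a generative model."""
    for label, pattern in REDACTION_PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

print(redact("Contact jane@corp.com, key sk-abcdefghij0123456789"))
# -> Contact [EMAIL REDACTED], key [API_KEY REDACTED]
```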
“We aim to define challenges like the safe adoption of AI and then provide solutions to address these issues,” Caccia explained. “Chief Information Security Officers (CISOs) prioritize the protection of the business, and WitnessAI empowers them to ensure data security, prevent prompt injection, and enforce identity-based policies. Meanwhile, Chief Privacy Officers (CPOs) need to comply with existing and upcoming regulations, and we offer them the visibility and tools to monitor activities and assess risks.”
However, there is a privacy wrinkle in WitnessAI's approach: all data passes through its platform before reaching a generative model. The company is upfront about this design and even provides tools to track which models employees query, along with the prompts they send and the responses they receive. But that visibility could create privacy risks of its own.
In response to inquiries about privacy, Caccia emphasized that WitnessAI's platform is structured to prevent confidential information from leaking. “We’ve engineered a millisecond-latency platform with built-in regulatory separation, uniquely isolating enterprise AI activities,” he clarified. “We create individual instances for each client, with encryption keys that remain exclusive to them. Their AI activity data is kept entirely private—we cannot access it.”
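To make the isolation model concrete, here is a minimal sketch of tenant-scoped encryption using Python's `cryptography` library. The key handling here is an assumption for illustration (in practice keys would live in the customer's own KMS or HSM), not a description of WitnessAI's actual architecture:

```python
from cryptography.fernet import Fernet, InvalidToken  # pip install cryptography

class TenantVault:
    """Each customer gets its own instance with its own key, so one tenant's
    AI-activity records can never be decrypted with another tenant's key."""

    def __init__(self) -> None:
        # Illustration only: a real deployment would fetch the key from the
        # customer's own key-management service rather than generate it here.
        self._fernet = Fernet(Fernet.generate_key())

    def store(self, record: str) -> bytes:
        return self._fernet.encrypt(record.encode())

    def load(self, token: bytes) -> str:
        return self._fernet.decrypt(token).decode()

acme = TenantVault()
token = acme.store("prompt: summarize Q2 roadmap")
assert acme.load(token) == "prompt: summarize Q2 roadmap"

# A different tenant's vault cannot read it:
try:
    TenantVault().load(token)
except InvalidToken:
    print("decryption fails outside the owning tenant")
```

The point of the pattern is that even the platform operator cannot read a tenant's activity data without that tenant's key, which matches Caccia's claim that customer AI activity remains private from WitnessAI itself.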
This design may help assuage customer privacy concerns. However, employee apprehensions about monitoring remain a difficult issue to navigate. Surveys indicate that many individuals disapprove of workplace monitoring, believing it adversely affects morale. Nearly a third of respondents in a Forbes survey indicated they might consider resigning if their employer monitored their online activities and communications.
Nevertheless, Caccia reported strong interest in WitnessAI's platform: 25 early corporate adopters are in a proof-of-concept phase ahead of general availability in Q3. The company has raised $27.5 million from Ballistic Ventures—its incubator—and GV, Google's venture arm.
This funding will help expand WitnessAI's team from 18 to 40 by year-end as it seeks to gain a competitive edge in the emerging market for AI compliance and governance solutions, a landscape crowded with established players like AWS, Google, and Salesforce as well as startups such as CalypsoAI.
“We’ve crafted a sustainable plan that could carry us through 2026 even without sales, yet we’re already approaching nearly 20 times the pipeline necessary to achieve our sales targets this year,” Caccia noted. “Although this marks our initial funding round and public launch, secure AI enablement is an evolving sector, and all our features are tailored to meet the demands of this burgeoning market.”