Thinking Twice Before Sharing: Discover How Metomic Prevents Data Leaks with ChatGPT

OpenAI's ChatGPT is poised to transform the way we work, but its integration into enterprises presents a dilemma. While businesses recognize generative AI as a competitive advantage, the risk of exposing sensitive information remains high.

The Challenge of Uncontrolled Usage

Many employees are already using ChatGPT without their employers’ knowledge, inadvertently sharing sensitive data. Companies require a solution, and Metomic aims to fill that gap. The data security software firm has launched a browser plugin, Metomic for ChatGPT, designed to track user activity within OpenAI’s large language model (LLM).

“There’s no perimeter to these apps; it’s a wild west of data sharing activities,” stated Rich Vibert, CEO of Metomic. “No one has visibility at all.”

Extent of Data Leaks

Research indicates that 15% of employees regularly input company data into ChatGPT, most commonly internal business information (43%), source code (31%), and personally identifiable information (PII) (12%). The departments most engaged in this activity include R&D, finance, and sales/marketing.

“This is a brand new problem,” Vibert explained, highlighting the “massive fear” companies have about employee use of these tools. “There are no barriers; you just need a browser.”

Metomic's findings show that employees are leaking critical data, from balance sheets and financial figures to source code snippets. Particularly concerning is the exposure of customer chat transcripts, which can accumulate extensive sensitive information such as names, email addresses, and credit card numbers.

“Complete customer profiles are being entered into these tools,” Vibert noted, with potential access for competitors and hackers, leading to breach of contract risks.

The Risk of Malicious Insider Threats

Beyond unintentional leaks, there are risks from departing employees who may use generative AI tools to take proprietary data with them, and from malicious insiders who deliberately leak sensitive information.

While some companies opt to ban ChatGPT and similar platforms, Vibert warns this is not a sustainable strategy.

“These tools are here to stay,” he asserts, emphasizing ChatGPT's value in enhancing productivity and efficiency.

Data Security Focused on Employee Engagement

The Metomic ChatGPT integration operates within the browser, recognizing when employees log into the platform and scanning uploaded data in real time. If sensitive data, such as PII or security credentials, is detected, users are notified to redact or categorize that data appropriately.

Security teams also receive alerts about potential data breaches, ensuring proactive measures.
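The flow described above can be illustrated with a minimal sketch: scan text before it leaves the browser, redact anything that matches a sensitive-data pattern, and surface a finding for the security team. The patterns and function names here are hypothetical; Metomic's actual detection logic is not public, and a real scanner would use far more classifiers and context-aware matching than two regexes.

```python
import re
from dataclasses import dataclass

# Hypothetical patterns for two common PII types; illustrative only.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

@dataclass
class Finding:
    kind: str
    match: str

def scan_prompt(text: str) -> list[Finding]:
    """Scan text about to be submitted and return any sensitive matches."""
    findings = []
    for kind, pattern in PATTERNS.items():
        for m in pattern.finditer(text):
            findings.append(Finding(kind, m.group()))
    return findings

def handle_submission(text: str) -> str:
    """Redact detected PII before the prompt leaves the browser.

    In a real deployment each Finding would also be forwarded to the
    security team as an alert, rather than silently dropped.
    """
    redacted = text
    for f in scan_prompt(text):
        redacted = redacted.replace(f.match, f"[REDACTED {f.kind.upper()}]")
    return redacted

print(handle_submission("Contact jane@example.com, card 4111 1111 1111 1111"))
```

The key design point is that detection happens client-side, before submission, so the user can be prompted to redact or categorize the data instead of the tool simply blocking the request.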

“This is data security through the lens of employees,” Vibert explained, emphasizing visibility and control rather than outright restrictions.

“It transforms noise into actionable insights for security and analytics teams,” he added, combating alert fatigue faced by IT departments.

Navigating the SaaS Landscape

Today, the typical enterprise leverages a multitude of SaaS applications, an estimated 991, yet only 25% of them are interconnected.

Vibert highlights the unprecedented rise in SaaS tool usage. Metomic connects with a wide range of applications and ships with 150 data classifiers that recognize critical data risks based on contextual factors such as industry regulations.
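The idea of contextual classifiers can be sketched as follows: each classifier is tagged with the regulations it serves, and only those relevant to the organization's regulatory context are evaluated. The registry, classifier names, and keyword matching below are all invented for illustration; the article does not describe how Metomic's 150 classifiers actually work.

```python
from dataclasses import dataclass

@dataclass
class Classifier:
    name: str
    regulations: set[str]   # e.g. {"GDPR", "PCI-DSS", "HIPAA"}
    keywords: set[str]      # naive keyword match as a stand-in for real detection

    def matches(self, text: str) -> bool:
        lowered = text.lower()
        return any(k in lowered for k in self.keywords)

# Hypothetical registry; a real product would hold many more classifiers.
REGISTRY = [
    Classifier("payment-card", {"PCI-DSS"}, {"card number", "cvv"}),
    Classifier("health-record", {"HIPAA"}, {"diagnosis", "patient id"}),
    Classifier("personal-data", {"GDPR"}, {"email", "date of birth"}),
]

def applicable_risks(text: str, org_regulations: set[str]) -> list[str]:
    """Return names of classifiers relevant to this organization that fire."""
    return [
        c.name for c in REGISTRY
        if c.regulations & org_regulations and c.matches(text)
    ]

# A fintech subject to PCI-DSS and GDPR flags card data but not health data.
print(applicable_risks("customer cvv and email below", {"PCI-DSS", "GDPR"}))
# → ['payment-card', 'personal-data']
```

Filtering by regulatory context keeps alert volume down, which speaks directly to the alert-fatigue point raised earlier in the piece.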

“Understanding where data is placed across tools is essential,” he said, noting the importance of identifying “data hot spots” within specific departments or among individual employees.

As the SaaS ecosystem expands, so too will the use of generative AI tools like ChatGPT.

As Vibert aptly states, “It’s not even day zero of a long journey ahead of us.”
