Exclusive: How Can We Safeguard Generative AI? Insights from IBM's Approach

As organizations increasingly harness the potential of generative AI, security challenges are also on the rise.

Today, IBM is addressing these risks by launching a new security framework specifically designed for generative AI. The IBM Framework for Securing Generative AI aims to protect workflows throughout their entire lifecycle, from data collection to production deployment. This framework offers insights into common security threats that organizations may face while working with generative AI, along with actionable recommendations for effective defense strategies. Over the past year, IBM has enhanced its generative AI capabilities with its watsonX portfolio, which includes advanced models and governance tools.

IBM's program director for emerging security technology, Ryan Dougherty, emphasized the importance of this framework: “We distilled our expertise to outline the most likely attacks and the top defenses organizations should implement to secure their generative AI initiatives.”

What Makes Generative AI Security Unique?

While IBM possesses extensive experience in security, the risks associated with generative AI workloads are both familiar and unique. The three core tenets of IBM’s approach are securing the data, the model, and the usage. These pillars underscore the need for secure infrastructure and AI governance throughout the entire process.

Sridhar Muppidi, IBM Fellow and CTO at IBM Security, highlighted that fundamental data security practices, such as access control and infrastructure security, remain vital in generative AI, just as they are in traditional IT environments.

However, certain risks are distinctly associated with generative AI. For instance, data poisoning involves introducing false data into a dataset, leading to inaccurate outcomes. Muppidi also pointed out that issues related to bias and data diversity, along with data drift and privacy concerns, require particular attention in this domain. Additionally, prompt injection—where users maliciously alter model output through prompts—presents a new area of risk that necessitates novel controls.

Key Security Concepts: MLSecOps, MLDR, and AISPM

The IBM Framework for Securing Generative AI is not merely a singular tool but rather a collection of guidelines and recommendations to safeguard generative AI workflows.

As the landscape of generative AI evolves, new security categories are emerging, including Machine Learning Detection and Response (MLDR), AI Security Posture Management (AISPM), and Machine Learning Security Operations (MLSecOps). MLDR focuses on identifying potential risks within models, while AISPM has similarities to Cloud Security Posture Management (CSPM), emphasizing the importance of proper configuration and best practices for secure deployments.

Muppidi explained, “Just as we have DevOps, which evolved into DevSecOps with added security, MLSecOps represents a comprehensive lifecycle approach from design to usage, integrating security throughout.”

This framework positions organizations to better navigate the complexities of generative AI security, ensuring robust defenses against emerging threats.

Most people like

Find AI tools in YBX

Related Articles
Refresh Articles