"Gen AI Innovation Race Unveils Security Gaps, Reports IBM and AWS"

What Will It Take to Secure Generative AI?

A recent study from IBM and Amazon Web Services (AWS) finds that there is no straightforward path to securing generative AI (gen AI). Based on an IBM Institute for Business Value survey of leading U.S. executives, the report underscores how critical security is to AI initiatives: a notable 82% of C-suite leaders agree that secure and trustworthy AI is vital to business success.

However, there's a stark contrast between intentions and actions: organizations currently secure only 24% of their generative AI projects. This concern over security is echoed in a PwC report, which notes that 77% of CEOs are worried about AI cybersecurity risks.

To address these challenges, IBM is collaborating with AWS to enhance gen AI security. Today, the two companies announced the IBM X-Force Red Testing Service for AI, a new offering for testing the security of generative AI deployments.

Navigating the Innovation-Security Dilemma

While securing technology may seem fundamental, many organizations prioritize innovation over security. The report highlights that 69% of organizations put innovation first, often overlooking comprehensive security measures. Dimple Ahluwalia, Global Senior Partner for Cybersecurity Services at IBM Consulting, observes that leaders are under pressure to innovate with gen AI, leading to security becoming an afterthought.

"As with the early days of cloud computing," Ahluwalia notes, "many are rushing into generative AI without adequate security planning, which ultimately compromises security."

Establishing Guardrails for Effective AI Security

To foster trust in generative AI, organizations must implement robust governance: policies, processes, and controls aligned with business goals. In fact, 81% of executives believe new governance models are essential for securing generative AI.

Once governance frameworks are established, organizations can develop strategies to secure the entire AI pipeline. This requires collaboration among security, technology, and business teams, as well as leveraging the expertise of technology partners for strategy development, training, cost justification, and compliance navigation.

IBM X-Force Red Testing Service for AI

Alongside governance, validation and testing are critical for ensuring security. The IBM X-Force Red Testing Service for AI marks IBM's first testing service designed specifically for AI. This service assembles a diverse team of experts in penetration testing, AI systems, and data science, utilizing insights from IBM Research and the Adversarial Robustness Toolbox (ART).
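
To make the adversarial-testing idea concrete, below is a minimal sketch using IBM's open-source Adversarial Robustness Toolbox (ART), which the article names as an input to the service. The toy model and data are illustrative stand-ins, not part of the X-Force Red offering:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from art.estimators.classification import SklearnClassifier
from art.attacks.evasion import FastGradientMethod

# Toy data standing in for a real training set.
rng = np.random.default_rng(0)
x_train = rng.normal(size=(200, 4)).astype(np.float32)
y_train = (x_train.sum(axis=1) > 0).astype(int)

# Wrap a fitted scikit-learn model so ART can attack it.
model = LogisticRegression().fit(x_train, y_train)
classifier = SklearnClassifier(model=model)

# Fast Gradient Method: perturb inputs within an eps budget to flip predictions.
attack = FastGradientMethod(estimator=classifier, eps=0.3)
x_adv = attack.generate(x=x_train)

# Compare clean vs. adversarial accuracy to quantify robustness.
clean_acc = (model.predict(x_train) == y_train).mean()
adv_acc = (model.predict(x_adv) == y_train).mean()
print(f"clean accuracy: {clean_acc:.2f}, adversarial accuracy: {adv_acc:.2f}")
```

A sharp drop between the two accuracy figures is the kind of vulnerability evidence a testing engagement would surface and prioritize.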

The term “red teaming” refers to a proactive security practice in which testers use adversarial tactics to uncover vulnerabilities. Chris Thompson, Global Head of X-Force Red at IBM, explains that industry attention has shifted toward testing AI models for safety and security, while traditional red-team tactics built on stealth and evasion have so far been underexplored against AI systems.

"The nature of attacks on gen AI applications resembles traditional application security threats, but with unique twists and expanded attack surfaces," Thompson notes.

As 2024 progresses, IBM is seeing these strategies converge into true red teaming for AI. Its approach covers four key areas: AI platforms, the ML pipeline used to tune and train models, the production environment for gen AI applications, and the applications themselves.

"Aligned with our traditional red teaming efforts, we aim to uncover missed detection opportunities and enhance the speed of detecting potential threats targeting these innovative AI solutions," Thompson concludes.
