This article is part of a VB Special Issue titled “Fit for Purpose: Tailoring AI Infrastructure.”
Unlocking AI's potential for greater efficiency, cost savings, and richer customer insights requires balancing cybersecurity with governance.
AI infrastructure must be adaptable to the evolving needs of a business. Cybersecurity should safeguard revenue, while governance must align with internal compliance and broader organizational requirements.
To scale AI safely, businesses need to continually enhance their core infrastructure components. It is equally crucial for cybersecurity, governance, and compliance to share a unified data platform that enables real-time insights.
“AI governance establishes a structured framework for managing, monitoring, and controlling the human-centric use and development of AI systems,” said Venky Yerrapotu, founder and CEO of 4CRisk. “Integrated AI tools can introduce risks such as biases, data privacy concerns, and potential misuse.”
A strong AI infrastructure simplifies audits, assists AI teams in identifying obstacles, and highlights critical gaps in cybersecurity, governance, and compliance.
“With minimal industry-approved governance or compliance frameworks, organizations must create effective safeguards for safe AI innovation,” noted Anand Oswal, SVP and GM of network security at Palo Alto Networks. “Failing to do so can be costly, as adversaries constantly seek to exploit the vulnerabilities in AI.”
Defending Against AI Infrastructure Threats
Malicious actors, from cybercrime gangs to nation-state groups, are continually refining their methods, threatening both the financial and physical layers of AI infrastructure. “Regulations and AI evolve at different paces,” said Etay Maor, chief security strategist at Cato Networks. “Regulators often lag behind technology, especially in AI, while threat actors exploit the system without the constraint of regulations.”
Groups based in China, North Korea, and Russia actively target AI infrastructures, using AI-generated malware to exploit vulnerabilities in ways undetectable by traditional defenses.
Security teams face significant challenges as well-funded cybercriminal organizations and nation-states increasingly target AI systems. Effective security measures include model watermarking, which incorporates unique identifiers into models to monitor unauthorized use, and AI-driven anomaly detection tools for real-time threat surveillance.
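To make the second of those concrete, a minimal sketch of AI-driven anomaly detection over model-serving telemetry might look like the following, using an Isolation Forest from scikit-learn. The feature set and traffic numbers are illustrative assumptions, not any vendor's implementation.

```python
# Minimal sketch: flagging anomalous model-API traffic with an Isolation Forest.
# The features (requests/min, mean payload bytes, error rate) are assumptions;
# a real deployment would derive them from serving telemetry.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated baseline: most clients behave normally.
normal = np.column_stack([
    rng.normal(60, 10, 500),       # requests per minute
    rng.normal(2_000, 300, 500),   # mean payload size (bytes)
    rng.normal(0.01, 0.005, 500),  # error rate
])

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# A burst of high-volume, high-error queries, the shape that probing or
# extraction attempts often take.
suspect = np.array([[900, 2_100, 0.25]])
print(detector.predict(suspect))  # -1 means flagged as anomalous
```

The appeal of this class of tooling is that it learns a baseline from observed traffic rather than matching fixed signatures, which is what gives it a chance against AI-generated attacks that evade traditional defenses.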
Several companies report using red teaming techniques, with Anthropic demonstrating the effectiveness of human-in-the-middle design for surfacing security weaknesses during model testing. “Human-in-the-middle design will remain essential to provide contextual intelligence, refine large language models, and minimize hallucinations,” stated Itamar Sher, CEO of Seal Security.
Models as High-Risk Threat Surfaces
Every AI model released into production represents a new threat surface that needs protection. According to Gartner, 73% of enterprises have deployed hundreds or thousands of models. Malicious actors exploit these models through a range of techniques; the NIST Artificial Intelligence Risk Management Framework, an essential resource for organizations building AI infrastructure, catalogs the most common, including data poisoning, evasion, and model theft.
AI Security highlights, “AI models are often targeted through API queries to reverse-engineer their functionality.”
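Defenses against that kind of probing often begin with simple per-client telemetry heuristics. The sketch below is hypothetical, with assumed thresholds and field names: it flags clients whose query volume and input diversity look more like systematic model extraction than normal use.

```python
# Illustrative heuristic for extraction-style querying: flag clients issuing
# unusually many, unusually diverse requests against a model endpoint.
# QUERY_LIMIT and DIVERSITY_LIMIT are assumed values, not a standard.
from collections import defaultdict

QUERY_LIMIT = 10_000    # queries per client per day
DIVERSITY_LIMIT = 0.95  # unique-input ratio suggesting systematic probing

def flag_extraction_suspects(query_log):
    """query_log: iterable of (client_id, input_hash) pairs."""
    counts = defaultdict(int)
    uniques = defaultdict(set)
    for client_id, input_hash in query_log:
        counts[client_id] += 1
        uniques[client_id].add(input_hash)

    return [
        client_id
        for client_id, total in counts.items()
        if total > QUERY_LIMIT
        and len(uniques[client_id]) / total > DIVERSITY_LIMIT
    ]
```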
CISOs warn that establishing robust AI infrastructure is an ongoing challenge. “Even if AI isn't explicitly security-centric, it significantly impacts your ability to secure your environment,” remarked Merritt Baer, CISO at Reco.
Placing Design-for-Trust at the Core of AI Infrastructure
Just as AI systems must aim for accountability, explainability, fairness, robustness, and transparency, so must the infrastructure that supports them. The NIST framework advocates for a design-for-trust roadmap, emphasizing validity and reliability as key design goals to ensure trustworthy AI performance.
The Essential Role of Governance in AI Infrastructure
AI systems must be developed, deployed, and maintained ethically and securely. Effective governance creates workflows, provides visibility, and offers real-time updates regarding algorithmic transparency, fairness, accountability, and privacy. The foundation of strong governance begins with continuous monitoring and alignment of models with societal values.
"Governance by design" incorporates these principles early in the development process, ensuring ethical oversight. “Implementing an ethical AI framework requires attention to security, bias, and data privacy not only during design but also throughout testing and validation,” explained WinWire CTO Vineet Arora.
Designing AI Infrastructures to Mitigate Bias
Minimizing bias in AI models is essential for delivering accurate and ethical outcomes. Organizations must take responsibility for monitoring and improving their AI systems to reduce biases. Techniques like adversarial debiasing can help diminish correlations between protected attributes and outcomes, lessening discrimination risks. Additionally, resampling training data ensures balanced representation across different demographics.
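As a simple illustration of the resampling idea, the sketch below upsamples under-represented groups until each appears as often as the largest. The column names are hypothetical placeholders, and a real pipeline would pair this with evaluation on held-out data.

```python
# Sketch: upsampling under-represented groups for balanced representation.
# The "group" and "label" columns are hypothetical placeholders.
import pandas as pd
from sklearn.utils import resample

df = pd.DataFrame({
    "group": ["a"] * 900 + ["b"] * 100,  # skewed 9:1 toward group "a"
    "label": [0, 1] * 450 + [0, 1] * 50,
})

target = df["group"].value_counts().max()

# Resample each group with replacement up to the size of the largest group.
balanced = pd.concat([
    resample(part, replace=True, n_samples=target, random_state=0)
    for _, part in df.groupby("group")
])
print(balanced["group"].value_counts())  # both groups now appear 900 times
```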
“Embedding transparency and explainability into AI design allows organizations to understand decision-making processes better, facilitating the detection and rectification of biased outputs,” notes NIST. Providing insights into AI decision-making enables organizations to address and learn from biases effectively.
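Explainability tooling can be as lightweight as permutation importance, which estimates how much each feature drives a model's predictions by shuffling it and measuring the drop in accuracy. This sketch uses scikit-learn on synthetic data and illustrates the technique itself, not any particular organization's setup.

```python
# Sketch: permutation importance as a basic explainability check.
# The dataset and model are stand-ins for a production system.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1_000, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

# Features whose shuffling hurts accuracy most matter most to the model.
for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: importance {score:.3f}")
```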
How IBM Manages AI Governance
IBM’s AI Ethics Board oversees its AI infrastructure and projects to ensure compliance with ethical standards. They employ “focal points”—mid-level executives with AI expertise—who review ongoing projects to uphold IBM's Principles of Trust and Transparency.
Christina Montgomery, IBM’s chief privacy and trust officer, asserts that “our AI ethics board plays a critical role in defining internal processes and guardrails to ensure we introduce technology responsibly.”
Governance frameworks should be integrated from the design phase, ensuring transparency, fairness, and accountability throughout AI development and deployment.
Ensuring Explainable AI
Bridging the gaps between cybersecurity, compliance, and governance is increasingly vital in AI infrastructure. AI systems must provide clear explanations of their decision-making processes to foster trust and accountability. “As with business decisions, AI systems should clarify how they reach their conclusions,” emphasized Joe Burton, CEO of Reputation. “Focusing on data rights, regulatory compliance, access control, and transparency enables organizations to leverage AI for innovation while maintaining integrity and responsibility.”