AI governance company Credo AI has launched a new platform designed to enhance visibility and compliance around responsible AI policies by integrating with third-party AI operations and business tools.
The newly available Credo AI Integrations Hub allows enterprise clients to connect generative AI development platforms, such as Amazon SageMaker, MLflow, and Microsoft Dynamics 365, to a centralized governance platform. Platforms commonly used for deploying these applications, like Asana, ServiceNow, and Jira, can also be connected to the Hub.
The idea behind the Integrations Hub is to streamline the connection between AI applications and Credo AI's governance platform. Enterprises no longer need to manually upload documentation to prove their applications meet safety and security standards; instead, the Hub automatically gathers the metadata that records those metrics.
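Credo AI has not published the Hub's internal API, but the kind of metadata involved is easy to picture. As an illustration, the sketch below uses MLflow's public Python client (MLflow being one of the supported platforms) to pull logged metrics, parameters, and tags from a tracked model run; the `submit_to_governance_platform` function is hypothetical and merely stands in for whatever the Hub does with collected evidence.

```python
# Illustrative sketch only: shows the kind of run metadata a governance
# integration might collect from MLflow. The governance-side call is
# hypothetical; Credo AI's actual Hub API is not public.
from mlflow.tracking import MlflowClient

client = MlflowClient(tracking_uri="http://localhost:5000")  # assumed server

def collect_run_metadata(run_id: str) -> dict:
    """Gather the metrics, params, and tags logged for one model run."""
    run = client.get_run(run_id)
    return {
        "run_id": run_id,
        "metrics": run.data.metrics,  # e.g. accuracy or fairness scores
        "params": run.data.params,    # e.g. training hyperparameters
        "tags": run.data.tags,        # e.g. model owner, intended use
    }

def submit_to_governance_platform(metadata: dict) -> None:
    """Hypothetical stand-in for pushing evidence to a governance hub."""
    print(f"Submitting governance evidence for run {metadata['run_id']}")

if __name__ == "__main__":
    metadata = collect_run_metadata(run_id="abc123")  # made-up run ID
    submit_to_governance_platform(metadata)
```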
According to Navrina Singh, founder and CEO of Credo AI, the Integrations Hub was carefully crafted to embed AI governance—covering data disclosure rules and internal AI usage policies—into the initial stages of the development process.
“As global organizations adapt to the rapid adoption of AI tools, our goal is to help them maximize their AI investments by simplifying governance, eliminating excuses about its complexity,” stated Singh.
The Integrations Hub features pre-configured connections with popular platforms such as Jira, ServiceNow, Amazon's SageMaker and Bedrock, Salesforce, MLflow, Asana, Databricks, Microsoft Dynamics 365, Azure Machine Learning, Weights & Biases, Hugging Face, and Collibra. Custom integrations can also be developed for an additional fee.
Governance from the Start
Surveys indicate that responsible AI and governance—assessing compliance with regulations, ethical considerations, and privacy—are becoming increasingly critical for companies. However, few organizations currently evaluate these risks effectively.
As businesses navigate responsible practices in generative AI, tools that simplify risk assessment and compliance are gaining traction. Credo AI is one of several companies striving to make responsible AI practices more accessible.
IBM’s Watsonx product suite includes a governance platform for evaluating models for accuracy, bias, and compliance, while Collibra offers AI governance tools that create workflows for monitoring AI programs.
While Credo AI checks applications for potential brand risks, it primarily focuses on ensuring organizations comply with existing and upcoming regulations regarding automated systems.
Generative AI regulation remains sparse for now, though enterprises have long followed data privacy and retention policies established under earlier machine learning and data rules.
Singh noted that some jurisdictions already require reports on AI governance, pointing to New York City Local Law 144, which prohibits the use of automated tools in employment decisions unless they have been audited for bias.
“Specific metrics like the demographic parity ratio are required for compliance. Credo AI translates this law into actionable insights for your AI operations, allowing us to collect necessary metadata to fulfill legal obligations,” Singh explained.
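For readers unfamiliar with the metric: the demographic parity ratio compares selection rates across demographic groups, and Local Law 144's bias-audit rules require a closely related "impact ratio." Below is a minimal, self-contained sketch of the calculation; the group labels and hiring outcomes are invented for illustration.

```python
# Minimal sketch of the demographic parity ratio on made-up hiring data.
# Selection rate = share of candidates in a group who received a positive
# outcome; the ratio divides the lowest group rate by the highest.
from collections import defaultdict

def demographic_parity_ratio(groups, outcomes):
    """Return min/max of per-group selection rates (1.0 = perfect parity)."""
    selected = defaultdict(int)
    totals = defaultdict(int)
    for group, outcome in zip(groups, outcomes):
        totals[group] += 1
        selected[group] += outcome
    rates = {g: selected[g] / totals[g] for g in totals}
    return min(rates.values()) / max(rates.values()), rates

# Invented example: 1 = candidate advanced, 0 = candidate rejected.
groups   = ["A", "A", "A", "A", "B", "B", "B", "B"]
outcomes = [ 1,   1,   1,   0,   1,   1,   0,   0 ]

ratio, rates = demographic_parity_ratio(groups, outcomes)
print(rates)            # {'A': 0.75, 'B': 0.5}
print(round(ratio, 3))  # 0.667
```

A ratio of 1.0 means all groups advance at the same rate; as a common benchmark, the long-standing "four-fifths rule" in US employment practice treats ratios below 0.8 as a signal of potential adverse impact.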