Navigating AI Audits: Addressing Bias, Performance, and Ethics
Auditing AI models for bias, performance, and ethical compliance is a pressing challenge for organizations. At the recent VB AI Impact Tour in New York City, presented by UiPath, industry leaders discussed effective methodologies, best practices, and real-world case studies. Notable speakers included Michael Raj, VP of AI and Data Network Enablement at Verizon Communications; Rebecca Qian, co-founder and CTO of Patronus AI; and Matt Turck, managing director at FirstMark. To conclude the event, VB CEO Matt Marshall spoke with Justin Greenberger, SVP of Client Success at UiPath, about what makes an AI audit successful and how to begin the process.
Greenberger emphasized the need for a proactive approach to risk assessment: “The risk landscape used to be evaluated annually. Now, it should be reassessed almost monthly. Do you understand your risks and the controls in place to mitigate them? The Institute of Internal Auditors (IIA) has updated its AI framework, but it primarily covers fundamental aspects. Important questions include: What are your monitoring KPIs? Is there transparency in your data sources? Are there protocols for accountability? The evaluation cycle must be more frequent.”
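The audit questions Greenberger raises can be thought of as a recurring checklist run on a monthly cadence. The sketch below is a hypothetical illustration of that idea; the control names and the 30-day review threshold are assumptions for demonstration, not part of UiPath's practice or the IIA's actual framework.

```python
from dataclasses import dataclass

@dataclass
class Control:
    name: str                # e.g. "monitoring KPIs defined"
    in_place: bool           # is the mitigating control implemented?
    last_reviewed_days: int  # days since the control was last reviewed

def audit_gaps(controls, max_review_age_days=30):
    """Flag controls that are missing, or stale under a monthly review cadence."""
    return [
        c.name for c in controls
        if not c.in_place or c.last_reviewed_days > max_review_age_days
    ]

# Illustrative controls drawn from the questions above
controls = [
    Control("monitoring KPIs defined", True, 10),
    Control("data source transparency", False, 90),
    Control("accountability protocols", True, 45),
]
print(audit_gaps(controls))  # the missing and stale controls
```

Running such a check monthly, rather than annually, operationalizes the shorter evaluation cycle Greenberger describes: any control that is absent or has not been reviewed within the window surfaces immediately.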
He cited the General Data Protection Regulation (GDPR) as an example of regulation that initially seemed like overreach but ultimately established a solid foundation for data security across businesses. Interestingly, global markets are adapting to generative AI at similar rates, creating a more level competitive landscape as organizations weigh their risk tolerance for the technology.
Overcoming Pilot Challenges and Engaging Employees
While enterprise-wide AI adoption is still maturing, many companies are running pilot projects to explore its capabilities. Persistent challenges include identifying subject-matter experts with the contextual knowledge and critical-thinking skills to define use cases effectively. Employee understanding and engagement are equally crucial: Greenberger noted that what employees should learn about AI technologies, particularly around ethical use and the risks of deepfakes, is still being worked out.
Organizations are primarily incorporating generative AI into existing workflows rather than completely overhauling processes. Consequently, audits must evolve to monitor how private data is utilized within various applications, including sensitive medical cases.
Evolving Roles in the Age of AI
As AI technology advances, the human role in the auditing process remains critical. Greenberger explained that users initiate queries while AI systems process information and deliver the data needed for decision-making; an employee at a logistics company, for example, might use AI-generated quotes in customer interactions. Over time, however, parts of these traditional human roles are likely to be automated.
“Currently, humans retain decision-making responsibilities,” Greenberger stated. “Over time, as we become more comfortable with audit controls and routine spot checks, this will likely change. Ultimately, humans may need to focus on creative and emotional aspects of their roles, as the decision-making authority could shift away from them. This evolution is inevitable as technology progresses.”
In summary, organizations must prioritize continual evaluation of AI systems to mitigate risks and ensure ethical practices as they navigate an ever-changing technological landscape.