"Gen AI Safety Challenges Prompt Enterprises to Enhance AI Audit Protocols"

The increasing frequency of significant errors made by customer support AI agents, including high-profile incidents involving a Chevrolet dealership, Air Canada, and New York City's municipal chatbot, has intensified the demand for improved reliability in artificial intelligence systems.

If you are an enterprise decision-maker involved in developing generative AI applications and strategies, and you’re finding it challenging to keep pace with the latest chatbot technologies and accountability measures, consider attending our exclusive AI event in New York on June 5. This event will focus on the “AI Audit” and is tailored for technical leaders in the enterprise sector.

Join us for this networking event hosted by a leading media outlet, where we will feature insights from three prominent figures in the AI ecosystem, discussing the best practices for AI auditing.

We will hear from Michael Raj, VP of AI and Data at Verizon, who will share how implementing thorough AI audits and employee training has established a framework for responsible generative AI use in customer interactions. The session will be moderated by Matt Turck, a prominent investor at FirstMark and the publisher of the annual data and AI landscape.

Additionally, Rebecca Qian, co-founder and CTO of Patronus AI, will discuss the latest strategies and technologies in AI auditing that help identify and address safety vulnerabilities. Qian brings her experience from Meta, where she led AI evaluation initiatives at Meta AI Research (FAIR).

I will be hosting discussions alongside my colleague Carl Franzen, executive editor at a leading media outlet. We are excited to have UiPath as a sponsor, with Justin Greenberger, SVP at UiPath, providing insights into how evolving auditing and compliance guidelines are shaping AI protocols within organizations. This event is part of our AI Impact Tour series, aimed at promoting dialogue and networking among enterprise leaders looking to implement generative AI effectively.

So, what exactly is an AI audit, and how does it differ from AI governance? After establishing governance rules for AI, it’s crucial to audit generative AI applications to ensure compliance with those rules. This need is especially pressing amid rapid technological advancements. Major LLM providers have recently introduced multimodal models, such as OpenAI’s GPT-4o (which powers ChatGPT) and Google’s Gemini, that can see, hear, and speak with expressive, emotive voices. Coupled with innovations from other companies, including Meta (Llama 3), Anthropic (Claude), and Inflection (with its empathy-driven AI), keeping up with accuracy and privacy auditing requirements has become increasingly complex.

Several new companies, such as Patronus AI, are emerging to fill this gap by creating benchmarks, datasets, and diagnostics that help identify sensitive personally identifiable information (PII) within AI interactions. Common mitigation techniques, such as retrieval-augmented generation (RAG) and system prompts, often fail to prevent errors on their own, because problems can stem from the underlying LLM training datasets, which frequently lack transparency. This reality underscores the necessity of robust auditing practices.
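To make the idea of PII screening concrete, here is a minimal sketch of scanning a chatbot reply for sensitive patterns before it reaches a user. The function name, regex patterns, and example reply are illustrative assumptions, not code from any vendor; production audit tooling (such as the diagnostics Patronus AI builds) is far more sophisticated than simple regular expressions.

```python
import re

# Hypothetical patterns for a minimal PII scan; real audit tools go
# well beyond regexes (context, named entities, model-based detectors).
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def scan_for_pii(text: str) -> dict:
    """Return each PII category found in `text` with its matches."""
    findings = {}
    for label, pattern in PII_PATTERNS.items():
        matches = pattern.findall(text)
        if matches:
            findings[label] = matches
    return findings

# Example: screen a model reply before showing it to the user.
reply = "You can reach our agent at jane.doe@example.com or 555-867-5309."
print(scan_for_pii(reply))
# → {'email': ['jane.doe@example.com'], 'phone': ['555-867-5309']}
```

An auditing pipeline would typically log such findings, redact the flagged spans, or block the response entirely, depending on the organization's governance rules.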

Don't miss this vital opportunity for enterprise AI decision-makers committed to ethical leadership in the digital landscape. Apply now to secure your place at the AI Impact Tour, and position yourself at the forefront of AI innovation and governance.
