Verizon Executive Unveils Responsible AI Strategy in an Evolving 'Wild West' Landscape

Verizon is leveraging generative AI applications to enhance customer support and experience for its more than 100 million phone customers, while expanding its responsible AI team to mitigate risks.

Michael Raj, Vice President of AI for Verizon’s network enablement, discussed several measures the company is implementing as part of this initiative. These include requiring data scientists to register AI models with a central team for security reviews and increasing scrutiny of large language models (LLMs) used in Verizon's applications to reduce bias and prevent toxic language.
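The article does not spell out what such a registration looks like in practice. As one illustration only, a central registry record and approval check might resemble the minimal Python sketch below; the `ModelRegistration` class, its field names, and the review-flag workflow are assumptions for illustration, not Verizon's actual system.

```python
# Hypothetical sketch of a central model-registration record. The article only
# says models must be registered with a central team before security review;
# the fields and approval flow here are illustrative assumptions.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ModelRegistration:
    model_name: str
    owner: str                                   # data scientist or team responsible
    model_type: str                              # e.g. "LLM", "classifier"
    training_data_sources: list[str] = field(default_factory=list)
    intended_use: str = ""
    registered_on: date = field(default_factory=date.today)
    security_review_passed: bool = False         # set by the central review team
    bias_review_passed: bool = False

CENTRAL_REGISTRY: dict[str, ModelRegistration] = {}

def register_model(entry: ModelRegistration) -> None:
    """Add a model to the central registry; it stays blocked until reviewed."""
    CENTRAL_REGISTRY[entry.model_name] = entry

def is_approved_for_production(model_name: str) -> bool:
    entry = CENTRAL_REGISTRY.get(model_name)
    return bool(entry and entry.security_review_passed and entry.bias_review_passed)
```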

AI Auditing: The Wild West

Speaking at an AI Impact media event in New York City, Raj focused on the challenges of auditing generative AI applications, since LLM behavior can be unpredictable. He noted that the field of AI auditing is still in its infancy, and that companies must accelerate their efforts, particularly since regulators have yet to provide specific guidelines.

Recent high-profile errors from customer support AI across various industries—including mishaps involving Chevy, Air Canada, and leading LLM providers like Google—have highlighted the urgent need for increased reliability in AI systems.

Government regulators are only releasing broad guidelines, leaving private companies to define the specifics, according to Justin Greenberger, Senior Vice President at UiPath, which assists large companies with automation. “It feels like the Wild West,” added Rebecca Qian, co-founder of Patronus AI, a firm focused on auditing LLM projects.

Most companies are currently working on the first step of AI governance—developing rules for the use of generative AI. The next step involves audits to ensure compliance with these policies, but few have the necessary resources, experts agree.

A recent Accenture report found that while 96% of organizations support some form of government regulation surrounding AI, only 2% have fully implemented responsible AI practices.

Empowering Agents with AI

Verizon aims to lead in applied AI by equipping frontline employees with intelligent conversational assistants to better manage customer interactions. These agents often face overwhelming amounts of information; generative AI can alleviate this by instantly surfacing personalized customer details and handling the roughly 80% of tasks that are repetitive, freeing agents to focus on the 20% of issues that require human judgment and to make personalized recommendations.
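As a rough illustration of that 80/20 split, the triage sketch below routes high-confidence routine requests to an automated draft and escalates everything else to a human agent. The intent labels, the 0.9 confidence threshold, and helpers such as `draft_reply` and `escalate_to_agent` are hypothetical stand-ins, not Verizon's implementation.

```python
# Hypothetical agent-assist triage sketch illustrating the 80/20 split the
# article describes. All names, data, and thresholds are illustrative.

ROUTINE_INTENTS = {"check_balance", "data_usage", "payment_due_date"}

# Toy stand-ins for real systems (CRM lookup, LLM drafting, agent handoff).
PROFILES = {"c-123": {"plan": "Unlimited", "balance_due": "$45.00"}}

def fetch_customer_profile(customer_id: str) -> dict:
    return PROFILES.get(customer_id, {})

def draft_reply(intent: str, profile: dict) -> str:
    return f"[auto] {intent}: {profile}"             # a real system would call an LLM here

def escalate_to_agent(customer_id: str, profile: dict) -> str:
    return f"[human agent] review {customer_id} with context {profile}"

def handle_request(customer_id: str, intent: str, confidence: float) -> str:
    profile = fetch_customer_profile(customer_id)
    if intent in ROUTINE_INTENTS and confidence >= 0.9:
        return draft_reply(intent, profile)           # repetitive bulk of the volume
    return escalate_to_agent(customer_id, profile)    # harder cases go to a person

print(handle_request("c-123", "check_balance", 0.97))
```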

Additionally, Verizon is utilizing generative AI and deep learning technologies to enhance customer experiences on its network and website, while also improving its products and services. Raj mentioned that the company has developed models to predict customer churn among its vast user base.
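The article does not say how those churn models are built. For orientation only, here is a minimal, self-contained sketch that trains a simple classifier on synthetic subscriber features with scikit-learn; the features, label construction, and model choice are illustrative assumptions rather than Verizon's approach.

```python
# Minimal churn-prediction sketch on synthetic data, using scikit-learn.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5_000
X = np.column_stack([
    rng.integers(1, 120, n),          # tenure in months
    rng.normal(70, 25, n),            # monthly bill
    rng.poisson(1.5, n),              # support calls in the last 90 days
])
# Synthetic label: churn is more likely with short tenure and many support calls.
logit = -1.5 - 0.02 * X[:, 0] + 0.01 * X[:, 1] + 0.6 * X[:, 2]
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("holdout AUC:", round(roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]), 3))
```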

Centralized AI Governance for Safety

Verizon is investing heavily in AI governance, focusing on tracking model drift and bias. This initiative has led to the consolidation of all governance functions into one “AI and Data” organization, which includes the “Responsible AI” unit. Raj emphasized that this unit is essential for AI safety, working closely with both the CISO's office and procurement executives. Earlier this year, Verizon published its responsible AI roadmap in partnership with Northeastern University.
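One widely used way to track the kind of model drift Raj describes is the population stability index (PSI), which compares a feature's live distribution against its training-time baseline. The sketch below is a generic example of that technique, not a description of Verizon's internal tooling; the threshold and synthetic data are illustrative.

```python
# Generic drift check: population stability index (PSI) between a feature's
# training-time baseline and its current production distribution.
import numpy as np

def population_stability_index(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """PSI between two samples of the same feature; values above ~0.2 are often treated as drift."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)   # avoid log(0) for empty bins
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(1)
baseline = rng.normal(0, 1, 10_000)      # training-time distribution
live = rng.normal(0.4, 1.2, 10_000)      # shifted production traffic
print("PSI:", round(population_stability_index(baseline, live), 3))
```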

To manage AI models effectively, Verizon has made data sets accessible to developers and engineers so they can work with the models directly, while ensuring that only approved tools are used.

The trend of registering AI models is expected to gain momentum across other B2C companies, according to UiPath's Greenberger, who suggested models need to be "version controlled and audited," much as pharmaceuticals are regulated. He also advised businesses to reassess their risk profiles frequently given the pace of technological change, and noted that legislative measures for model registration are under consideration in the U.S. and other countries.

Emerging AI Governance Units

According to Greenberger, many sophisticated companies are establishing centralized AI teams like Verizon's, and dedicated "AI Governance" units are becoming increasingly common. Working with third-party LLM suppliers is also prompting enterprises to rethink their strategies, as each provider offers multiple models with varying capabilities.

Given the unpredictable nature of generative AI applications, legislating the auditing process poses unique challenges. As Patronus AI’s Qian pointed out, the potential for failures related to safety, bias, and misinformation necessitates industry-specific regulations, particularly in high-stakes sectors like transportation or healthcare.

Transparency in AI auditing remains a significant hurdle: traditional AI systems are far easier to interpret than generative models. Currently, only about 5% of companies have completed pilot projects centered on bias and responsible AI, according to Greenberger.

As the AI landscape evolves rapidly, Verizon's commitment to responsible AI sets a benchmark for the industry, signaling the urgent need for better governance, transparency, and ethical standards in deploying these technologies.
