A recent PwC survey of 1,001 U.S.-based executives in business and technology roles found that 73% of respondents are currently using or planning to implement generative AI in their organizations.
Despite this growing interest, only 58% have begun assessing AI-related risks. PwC emphasizes that responsible AI should center on value, safety, and trust, and that these elements should be integrated into a company's risk management processes.
Jenn Kosar, PwC's U.S. AI Assurance Leader, said that while it was once acceptable for companies to launch AI projects without robust responsible AI strategies, that time has passed. “We’re further along now in the cycle, so the time to build on responsible AI is now,” Kosar stated. She noted that earlier AI projects were typically limited to small teams, whereas generative AI is now being adopted across entire organizations.
Gen AI pilot projects play a crucial role in shaping responsible AI strategies, as they show how teams integrate with AI systems and how those systems are actually used in practice.
Emerging Concerns
The importance of responsible AI and risk assessment has gained significant attention following the deployment of Elon Musk’s xAI Grok-2 model on the social platform X (formerly Twitter). Early feedback indicates that this image generation service lacks adequate restrictions, enabling the creation of controversial and misleading content, including deepfakes of public figures in sensitive scenarios.
Key Priorities for Responsible AI
In the survey, respondents were asked to prioritize 11 capabilities identified by PwC:
- Upskilling
- Embedding AI risk specialists
- Periodic training
- Data privacy
- Data governance
- Cybersecurity
- Model testing
- Model management
- Third-party risk management
- Specialized software for AI risk management
- Monitoring and auditing
Over 80% of participants reported progress in these areas, though only 11% claimed to have fully implemented all 11 capabilities. PwC cautioned that many organizations may be overstating their advancements, noting that the complexities of managing responsible AI can impede full implementation.
For instance, effective data governance is essential for defining which internal data AI models can access and for putting protective measures around that data. Additionally, traditional cybersecurity approaches may not sufficiently safeguard against AI-specific attacks such as model poisoning, in which an attacker corrupts a model's training data to manipulate its behavior.
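To make the data-governance point concrete, here is a minimal sketch of what such an access check might look like, assuming a simple allowlist policy enforced in application code. The class, model ID, and data source names are hypothetical illustrations, not drawn from PwC's guidance or any particular product.

```python
# Minimal sketch of a deny-by-default governance check for AI data access.
# All names (DataGovernancePolicy, "support-chatbot", the source labels)
# are hypothetical.
from dataclasses import dataclass, field


@dataclass
class DataGovernancePolicy:
    # Maps each model ID to the internal data sources it may read.
    allowed_sources: dict = field(default_factory=dict)
    # Sources holding sensitive data, blocked pending privacy review.
    restricted_sources: set = field(default_factory=set)

    def can_access(self, model_id: str, source: str) -> bool:
        """Allow access only if the source is explicitly granted to this
        model and is not under a privacy restriction."""
        granted = self.allowed_sources.get(model_id, set())
        return source in granted and source not in self.restricted_sources


policy = DataGovernancePolicy(
    allowed_sources={"support-chatbot": {"product_docs", "public_faq"}},
    restricted_sources={"hr_records"},
)

# Deny by default: anything not explicitly granted is blocked.
assert policy.can_access("support-chatbot", "product_docs")
assert not policy.can_access("support-chatbot", "hr_records")
assert not policy.can_access("support-chatbot", "finance_ledger")
```

The key design choice in this sketch is deny by default: a model reads nothing unless it has been explicitly granted access, which mirrors the protective posture the data governance capability describes.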
Accountability in Responsible AI
To assist companies in navigating their AI transformation, PwC advises prioritizing a comprehensive responsible AI strategy. A key recommendation is to establish clear ownership and accountability for responsible AI practices, ideally under a single executive. Kosar emphasized the importance of viewing AI safety as an organizational priority, suggesting the appointment of a chief AI officer or a dedicated responsible AI leader to collaborate with various stakeholders.
“Maybe AI will be the catalyst to unify technology and operational risk,” Kosar remarked.
PwC further suggests that organizations consider the entire lifecycle of AI systems. This involves moving beyond theoretical considerations to actively implement safety and trust policies throughout the organization. Preparing for future regulations requires a commitment to responsible AI practices, alongside plans for stakeholder transparency.
Kosar expressed surprise at the survey responses indicating that many executives view responsible AI as a commercial advantage. “Responsible AI is not just about managing risks; it should also create value. Organizations see it as a competitive edge grounded in trust,” she concluded.