A recent survey reveals that nearly 90% of companies are actively engaging with responsible AI strategies, whether through implementation, assessment, or development. This proactive approach emerges at a crucial moment as governments worldwide are tackling the complexities of AI regulation. Currently, 37% of organizations are implementing responsible AI strategies, while 34.8% are in the process of formulating their plans. Remarkably, 15.4% have established a “well-defined” strategy integrated across all AI development efforts.
Globally, governments are crafting regulations to address the potential harms posed by AI, with the European Union’s AI Act leading the charge as the most comprehensive legislation to date, expected to be enacted by early to mid-2024. In the United States, President Biden has issued an executive order that opens major tech companies’ AI models to government scrutiny, while U.K. Prime Minister Rishi Sunak recently hosted the inaugural global AI Safety Summit at the historic Bletchley Park.
Even ahead of these regulations, the private sector is taking the initiative. The speed of the business response is noteworthy: generative AI captured widespread attention only a year ago with the launch of OpenAI’s ChatGPT, even though the underlying technology had been around for years.
The astonishing capabilities of large language models and associated chatbots have intensified awareness of the risks they pose—hallucinations, biases, copyright issues, privacy invasions, and cybersecurity threats—and highlighted the pressing need for responsible AI practices.
### The Imperative for Ethical AI
The advantages of adopting ethical AI practices are clear, from increased customer trust to reduced regulatory and legal exposure. Companies recognize the societal implications of AI and are making strides to ensure their systems are transparent, fair, and accountable.
Looking more closely at their practices, 72% of respondents indicated that they either have a program in place or are planning one. Notably, only about a quarter reported that their organizations lack data management and governance systems for AI, though a significant number were unsure of their company’s status.
When asked to characterize their approach, 45.8% described it as “measured and thoughtful,” asserting that they are progressing at the right pace. In contrast, 26% admitted to moving faster than they can ensure responsible implementation. Alarmingly, 19% of companies have no responsible AI programs in place at all.
The survey included 227 participants, predominantly from North America (43.2%), with representatives from Western Europe (25.1%) and Africa (9%). Slightly over half were employed by organizations with fewer than 100 staff, while 12.8% hailed from firms with over 10,000 employees. A significant portion of the respondents worked in corporate management or R&D and technical strategy roles.
### Navigating Challenges
While enthusiasm for responsible AI is evident, challenges abound, particularly the absence of standardized definitions and benchmarks. The rapidly evolving AI landscape necessitates regular updates to corporate policies and regulations to remain effective.
Monitoring ethical AI practices is crucial, yet results indicate room for improvement. While 60% of respondents claimed to measure the correctness of their responsible AI implementation, this is notably lower than the nearly 90% applying ethical AI frameworks. Only 18.1% have these measures firmly established, with 41.9% still progressing toward this goal.
The process of measuring ethical AI is structured for 33.9% of respondents, with clear steps, guidelines, and ongoing monitoring, while 21.1% are still formalizing their processes. Nearly one-fifth either lack a measurement strategy altogether or are unaware of their company’s approach.
When asked about the timing of incorporating responsible AI considerations, 40% stated they do so prior to the design or implementation of technology, while 41.4% integrate ethical considerations during the design phase. Only 10.6% factor in these issues post-implementation.
Among the identified critical applications for responsible AI, predictive analytics garnered 30.4% of the votes, followed by customer experience at 27.3%, and chatbots and virtual assistants at 20.3%. The least prioritized areas, as indicated by responses, were voice/speech recognition, text analysis, and visual analytics.
### A New Era of AI Development
The survey highlights a pivotal moment in the AI landscape. As companies align their practices with responsible AI strategies and governments advance regulatory measures, we are likely entering an era where ethical considerations are integral to AI innovation—signifying a noteworthy shift in the development and application of artificial intelligence.