To spotlight the remarkable contributions of women in the AI sector, we are excited to announce a series of interviews that celebrate their invaluable work in shaping the AI revolution. Throughout the year, we will consistently highlight these influential voices, shedding light on achievements that often go unnoticed. Explore more profiles here.
Karine Perset serves as the head of the AI unit at the Organization for Economic Co-operation and Development (OECD), where she oversees the OECD.AI Policy Observatory and the OECD.AI Network of Experts.
With expertise in AI and public policy, Perset has an extensive background, having previously advised the Governmental Advisory Committee of the Internet Corporation for Assigned Names and Numbers (ICANN) and served as Counsellor in the OECD's Science, Technology, and Industry Directorate.
What work are you most proud of in the AI field?
I take immense pride in the initiatives we drive at OECD.AI. In recent years, there has been a significant rise in the demand for policy resources focused on trustworthy AI from both OECD member countries and various stakeholders in the AI ecosystem.
When we began this journey in 2016, only a few countries had national AI strategies. Today, however, the OECD.AI Policy Observatory has become a comprehensive resource, documenting over 1,000 AI initiatives across nearly 70 jurisdictions.
Governments worldwide are grappling with key questions regarding AI governance. There is a pressing need to balance the innovative opportunities AI offers with its associated risks. The emergence of generative AI in late 2022 has underscored this critical challenge.
The 10 OECD AI Principles introduced in 2019 were remarkably prescient, anticipating many key issues still relevant today, five years later. These principles guide governments toward developing AI policies that prioritize trust and benefit humanity and the planet. They emphasize the centrality of people in the development and deployment of AI, a perspective that must remain front and center, regardless of how impressive advancements in AI capabilities become.
To track the implementation of the OECD AI Principles, we established the OECD.AI Policy Observatory, a vital hub for near real-time AI data, analyses, and reports. These resources have become authoritative references for policymakers worldwide. However, the OECD cannot address these challenges alone; multi-stakeholder collaboration remains our approach. The OECD.AI Network of Experts, comprising over 350 leading AI experts globally, harnesses collective intelligence to enhance policy analysis. This network is organized into six expert groups focused on key issues such as AI risk, accountability, and incidents.
How do you navigate the challenges of the male-dominated tech and AI industries?
Data indicates that a gender gap still exists in access to AI skills and resources. Many women face barriers to obtaining training, which contributes to their underrepresentation in AI research and development. In OECD countries, more than twice as many young men as young women aged 16 to 24 are able to program, an essential skill for advancing in AI. Clearly, we need to work harder to attract women into the AI field.
While the private sector in AI tends to be male-dominated, the AI policy arena is becoming more balanced. For instance, my team at the OECD reflects near gender parity. We work alongside inspiring female experts such as Elham Tabassi from NIST, Francesca Rossi from IBM, and many others who enrich the AI conversation with diverse perspectives.
It's essential to have women and diverse groups in technology, academia, and civil society to ensure a range of viewpoints. Unfortunately, in 2022, only one in four researchers publishing on AI globally was a woman. Although the number of AI papers co-authored by at least one woman is rising, women contribute to only about half as many AI publications as men, and the gap widens as publication counts grow. We clearly need better representation of women and diverse voices in the AI domain.
So, how do I navigate the male-dominated technology landscape? I engage actively. I'm grateful for my position, which allows me to connect with experts, government officials, and corporate representatives, and to discuss AI governance on international platforms. This engagement lets me share insights, challenge assumptions, and amplify data-driven discussions.
What advice would you give to women seeking to enter the AI field?
From my experience in AI policy, I encourage women not to hesitate in voicing their insights and perspectives. Diverse voices are crucial when developing AI policies and models, as everyone brings a unique viewpoint to the conversation.
To create safer, more inclusive, and trustworthy AI, we must examine AI models and data from various angles, asking ourselves, "What perspectives are we missing?" Silence can result in significant insights being overlooked. A unique viewpoint can illuminate issues others may not see, and collectively, we can achieve far more.
It's also vital to recognize that many different roles exist within AI; a computer science degree is not required for every position. We already see professionals from fields such as law, economics, and social science bringing their perspectives to AI. Future innovations will increasingly stem from merging domain expertise with AI knowledge and technical skills. Universities are expanding AI education beyond computer science, and I believe interdisciplinary approaches are key to fostering AI careers. I encourage women from all backgrounds to explore how they can engage with AI without fearing that they are any less capable than their male counterparts.
What are some of the most pressing issues facing AI as it evolves?
The most pressing challenges in AI can be grouped into three main areas:
1. Bridging the gap between policymakers and technologists: The advent of generative AI in late 2022 caught many off guard, despite prior indications from some researchers. Each discipline views AI challenges from its unique perspective, yet the complexity of these issues requires collaboration across sectors. Bridging these gaps is essential to comprehensively understand AI and keep up with its rapid advancements.
2. International interoperability in AI regulations: Major economies are regulating AI to varying degrees. The European Union, for example, is moving forward with its AI Act, while the U.S. has issued an executive order on safe AI development, and Brazil and Canada are proposing new AI legislation. Striking a balance between protecting the public and fostering business innovation is critical, because the borderless nature of AI means that divergent regulatory approaches can hinder effective governance.
3. Tracking AI incidents: The surge in AI incidents, particularly those involving generative AI, poses a significant challenge; failing to address these risks could further erode public trust. Analyzing past incidents can help prevent similar ones in the future. To this end, we launched the AI Incidents Monitor, a tool that tracks AI incidents worldwide using news sources, giving policymakers near real-time insight into the risks associated with AI technologies.
What are some issues AI users should be aware of?
Policymakers globally are increasingly focused on protecting citizens from AI-generated misinformation, including synthetic media like deepfakes. While misinformation is not novel, the scale, quality, and affordability of AI-generated content present unique challenges.
Governments are exploring strategies to help the public identify AI-generated content and evaluate its reliability, but this remains a developing area with no unified response yet.
Our AI Incidents Monitor provides updates on global trends in misinformation and deepfakes, but ultimately individuals must develop their own media literacy skills. Cultivating the ability to critically evaluate information and its sources is essential for navigating the growing volume of AI-generated content.
What is the best way to responsibly build AI?
Many in the AI policy community are dedicated to promoting responsible AI development. However, achieving this often depends on the specific context of the AI application. Responsible AI necessitates careful attention to ethical, social, and safety considerations throughout the AI lifecycle.
One of the OECD AI Principles emphasizes accountability for the proper functioning of AI systems. AI developers must implement measures to ensure their systems are trustworthy, meaning they should respect human rights, be fair and transparent, and maintain appropriate levels of security and safety. To achieve this, risks must be managed across every stage of the AI lifecycle—from planning and design through to data processing, model validation, and deployment.
Last year, we released a report on “Advancing Accountability in AI,” outlining how risk management frameworks can be integrated into the AI lifecycle to foster trustworthy AI. The report identifies processes and tools to better define, assess, manage, and govern risks at every stage of AI development.
How can investors better advocate for responsible AI?
Investors play a pivotal role in encouraging responsible practices within the companies they support. Through the financial resources they provide, they have significant leverage to shape how those companies operate.
For instance, the private sector can contribute to developing responsible AI guidelines through initiatives like the OECD's Responsible Business Conduct (RBC) guidelines, which are now being tailored for AI. These guidelines facilitate international compliance for AI companies while ensuring transparency throughout the AI value chain, from suppliers to deployers to end users.
By championing the adoption of standards and guidelines for AI—such as the RBC guidelines—investors can be instrumental in fostering trustworthy AI development, ultimately shaping the future of AI technologies to benefit society as a whole.