Microsoft and OpenAI Introduce $2M Fund to Combat Election Deepfakes

Microsoft and OpenAI have unveiled a $2 million initiative to address the escalating threats posed by AI and deepfakes, which could "deceive voters and undermine democracy." With more than 2 billion people slated to vote in elections across roughly 50 countries this year, concerns are growing about AI's impact on voter behavior, particularly within "vulnerable communities" that may be more likely to believe misleading content at first glance.

The emergence of generative AI technologies such as the widely used ChatGPT has created new risks around AI-generated "deepfakes," which are increasingly used to spread disinformation. Because these tools are widely available, virtually anyone can produce convincing fake videos, images, or audio clips of prominent political figures.

On Monday, India’s Election Commission urged political parties to refrain from utilizing deepfakes and other forms of disinformation in their online campaigning efforts. In response to these pressing concerns, major tech companies—including Microsoft and OpenAI—have collectively committed to voluntary measures aimed at mitigating these risks. They are also developing a unified framework to combat deepfakes that are specifically designed to mislead voters.

In an effort to safeguard elections, leading AI enterprises are beginning to implement restrictions in their technologies. For instance, Google has opted to prevent its Gemini AI chatbot from engaging in discussions related to elections, while Meta, the parent company of Facebook, is also curbing election-related interactions via its AI chatbot.

Today, OpenAI introduced a new deepfake detection tool for disinformation researchers, intended to help identify false content generated by its DALL-E image generator. Additionally, OpenAI has joined the steering committee of the Coalition for Content Provenance and Authenticity (C2PA), which includes well-known members such as Adobe, Microsoft, Google, and Intel.

The newly established “Societal Resilience Fund” is part of a broader initiative focused on “responsible” AI development. According to a blog post from the companies, Microsoft and OpenAI are committed to advancing "AI education and literacy among voters and vulnerable communities." The initiative will provide grants to several organizations, including Older Adults Technology Services (OATS), C2PA, the International Institute for Democracy and Electoral Assistance (International IDEA), and the Partnership on AI (PAI).

According to Microsoft, the grants are intended to improve public understanding of AI and its implications. For example, OATS plans to use its funding to create training programs for people aged 50 and older in the U.S., focusing on the "foundational aspects of AI."

“The launch of the Societal Resilience Fund is a significant step that underscores Microsoft and OpenAI’s commitment to addressing challenges in AI literacy and education,” noted Teresa Hutson, Microsoft’s corporate VP for technology and corporate responsibility, in the blog post. “We will remain dedicated to this mission and will continue collaborating with organizations that align with our objectives and values.”
