White House Secures Voluntary Pledges from AI Firms to Combat Deepfake Pornography

White House Statement on Combating AI-Related Image-Based Abuse

Today, the White House announced voluntary commitments from key AI companies aimed at reducing the creation and distribution of image-based sexual abuse. Participating firms, including Adobe, Anthropic, Cohere, Common Crawl, Microsoft, and OpenAI, outlined strategies to prevent their platforms from generating non-consensual intimate images (NCII) of adults and to combat child sexual abuse material (CSAM).

These companies pledged to:

- "Responsibly source their datasets and protect them from image-based sexual abuse."

Additionally, all of the participating companies except Common Crawl pledged to:

- "Incorporate feedback loops and iterative stress-testing in their development processes to prevent AI models from producing image-based sexual abuse."

- "Remove nude images from AI training datasets" where appropriate.

While these commitments are voluntary, carrying no enforceable requirements or penalties for non-compliance, they represent a positive step toward addressing a significant problem.

Notably absent from the White House announcement were major tech players such as Apple, Amazon, Google, and Meta. Even so, many technology and AI companies are independently expanding support for victims of NCII in an effort to curb the spread of deepfake images and videos. Initiatives like StopNCII are collaborating with several firms on a unified approach to removing such harmful content, while many businesses are developing their own tools for reporting AI-generated image-based sexual abuse on their platforms.

If you believe you have been a victim of non-consensual intimate image sharing, you can file a case with StopNCII [here]. If you are under 18, you can file a report with NCMEC [here].
