Oversight Board Urges Meta to Enhance Policies for AI-Generated Explicit Content

In response to investigations into Meta's management of AI-generated explicit images, the company’s semi-independent oversight body, the Oversight Board, is urging a comprehensive refinement of its policies related to such content. The Board recommends that Meta replace the term “derogatory” with “nonconsensual” and relocate its policies on explicit images from the “Bullying and Harassment” section to the “Sexual Exploitation Community Standards” section.

Currently, Meta’s regulations on explicit AI-generated images stem from a rule concerning “derogatory sexualized Photoshop” within the Bullying and Harassment section. The Board also advocates for replacing the term “Photoshop” with a more generalized term that encompasses all forms of manipulated media.

Meta's current policy prohibits nonconsensual imagery only if it is "non-commercial or produced in a private setting." The Board contends that this condition should not be a requirement for removing AI-generated or manipulated images created without consent.

These proposals follow two notable incidents where explicit AI-generated images of public figures were posted on Instagram and Facebook, resulting in significant backlash for Meta.

One incident involved an AI-generated nude image of an Indian public figure shared on Instagram. Despite multiple user reports, Meta did not remove the image, and the complaint was automatically closed after 48 hours without review. When users appealed, the ticket was closed once more. Meta acted only after the Oversight Board took up the case, removing the content and banning the offending account.

A separate case concerned an AI-generated image of a U.S. public figure posted on Facebook. Meta's Media Matching Service (MMS), a repository of images that violate its policies, had already flagged this specific image because of media reports, allowing its prompt removal when another user uploaded it.

Notably, Meta added the image of the Indian public figure to the MMS repository only after being prompted by the Oversight Board. The company indicated that the image had not previously been in the repository because there had been no media reports on the incident.

“This raises concerns, as many victims of deepfake intimate images are not public figures and may feel compelled to endure the dissemination of their non-consensual depictions or must navigate the arduous process of reporting each instance,” the Board emphasized in its statement.

Breakthrough Trust, an Indian organization dedicated to combating online gender-based violence, highlighted the cultural implications of these policies in comments submitted to the Oversight Board. They argued that nonconsensual imagery is often minimized as an identity theft issue rather than being recognized as gender-based violence.

“Victims often suffer secondary victimization when reporting cases in police stations or courts, facing questions like ‘why did you put your picture out there?’ even when it’s not their image but a deepfake,” Barsha Chakraborty, the head of media at Breakthrough Trust, noted in her correspondence. “Once on the internet, images can rapidly spread beyond the source platform, making mere removal insufficient.”

During a phone interview, Chakraborty said many users are unaware that their reports are automatically marked as "resolved" within 48 hours, and urged Meta not to apply a uniform timeline to all cases. She also suggested the platform do more to make users aware of these issues.

Devika Malik, a platform policy expert and former member of Meta’s South Asia policy team, commented earlier this year that platforms largely depend on user reports to remove nonconsensual imagery, which can be an unreliable method for addressing AI-generated media.

“This unfairly shifts the onus onto the affected user to establish their identity and the lack of consent, especially in cases involving synthetic media. The time required to verify these reports can allow harmful content to proliferate,” Malik explained.

Aparajita Bharti, founding partner of Delhi-based think tank The Quantum Hub (TQH), emphasized that Meta should allow users to provide more context when reporting content, since many users may not fully grasp the various categories of rule violations in Meta's policies.

“We hope that Meta exceeds the final recommendations set forth by the Oversight Board and develops flexible, user-friendly channels for reporting such content,” she stated. “We recognize that users cannot be expected to perfectly understand the nuanced distinctions between different types of violations, advocating for systems that prevent legitimate issues from being overlooked due to technicalities in Meta's moderation policies.”

In light of the Oversight Board's feedback, Meta has pledged to review these recommendations.
