On Thursday, Google announced new guidelines aimed at developers creating AI applications for distribution on Google Play. This initiative seeks to eliminate inappropriate and prohibited content within these apps. According to Google, applications featuring AI capabilities must implement measures to prevent the generation of restricted content, such as sexual material and violence. Furthermore, developers are required to provide a mechanism for users to report offensive content they encounter. Additionally, Google emphasizes the need for developers to rigorously test their AI tools and models to ensure they prioritize user safety and privacy.
The company is intensifying scrutiny on apps whose marketing materials promote inappropriate functionalities, including those claiming the ability to undress individuals or generate nonconsensual nude images. If any promotional copy suggests such capabilities, the app risks being removed from Google Play, regardless of its actual functionality.
This move comes in response to the increasing prevalence of AI undressing apps that have gained attention on social media over recent months. An April report by 404 Media highlighted troubling findings, including ads on Instagram for apps that purport to generate deepfake nude images. One particular app used a photo of Kim Kardashian with the tagline, “Undress any girl for free.” While Apple and Google have since removed these applications from their platforms, the issue persists.
Schools throughout the United States are experiencing challenges with students circulating AI-generated deepfake nudes of peers, and occasionally teachers, as tools for bullying and harassment. Notably, a recent incident in Baltimore involved a racist AI deepfake of a school principal, resulting in an arrest. Alarmingly, even middle school students are finding themselves caught up in this trend.
Google says its updated policies will help keep harmful or inappropriate AI-generated content off Google Play. The company points to its existing AI-Generated Content Policy as a resource for understanding the requirements for app approval on Google Play. The guidelines specify that AI apps must not allow the generation of restricted content and must give users a way to report offensive material, with developers expected to monitor and act on that feedback. This is particularly important for apps where user interactions shape the content and experience, such as apps that rank popular AI models more prominently.
Additionally, developers cannot promote their apps in ways that violate Google Play's policies. Marketing an inappropriate use case can get an app disqualified from the store.
Developers are also responsible for safeguarding their apps against prompts that could manipulate their AI features into producing harmful or offensive content. Google recommends its closed testing feature, which lets developers share early versions of their apps with users to gather feedback. The company suggests that developers not only test before launch but also document those tests, as Google may ask to review them in the future.
To further assist developers, Google is releasing additional resources and best practices, including the People + AI Guidebook, designed to support the development of responsible AI applications.