Google is set to introduce a new policy targeting potentially harmful generative AI applications, with enforcement beginning early next year. This policy mandates that developers of Android apps on the Play Store incorporate a feature that allows users to report or flag offensive AI-generated content directly within the app. Google emphasizes that this reporting mechanism should aid developers in refining their content filtering and moderation strategies.
This policy shift comes in response to the rapid proliferation of AI-generated apps, some of which users have exploited to create inappropriate content. Last year's incident with Lensa, which was manipulated into generating NSFW imagery, is a notable example. Apps like Remini, which gained popularity for producing AI headshots, have also been criticized for creating unrealistic body images, raising concerns about the portrayal of women in particular. More recently, Microsoft's and Meta's AI tools faced scrutiny after users circumvented their safeguards to generate controversial and misleading images.
The implications extend beyond mere aesthetics; serious issues arise from the misuse of AI image generators. Reports have surfaced of individuals using open-source AI tools to produce child sexual abuse material (CSAM) on a large scale. With impending elections, there are increasing fears regarding the potential for deepfakes to mislead voters.
According to the new policy, AI-generated content includes “text-to-text conversational generative AI chatbots,” akin to applications like ChatGPT, as well as image generation based on various prompts. Google reiterated that all applications, including those that generate AI content, must adhere to existing regulations that prohibit restricted content, including CSAM and deceptive practices.
In addition to the policy change, Google Play will scrutinize certain app permissions more closely, particularly requests for broad access to photos and videos. Under the updated guidelines, apps may only access media directly related to their core functionality. Apps with only occasional needs, such as asking users to upload a selfie, must instead use the new Android photo picker.
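For developers, the photo picker requirement maps onto an existing Android API: the system photo picker exposed through `ActivityResultContracts.PickVisualMedia`, which returns a URI for a user-selected image without the app holding any broad media permission. A minimal sketch (the activity and `uploadSelfie` helper are hypothetical, for illustration only):

```kotlin
import android.net.Uri
import androidx.activity.result.PickVisualMediaRequest
import androidx.activity.result.contract.ActivityResultContracts
import androidx.appcompat.app.AppCompatActivity

class SelfieActivity : AppCompatActivity() {
    // The system-provided picker returns a single URI; no READ_MEDIA_IMAGES
    // or other broad media permission is requested from the user.
    private val pickSelfie =
        registerForActivityResult(ActivityResultContracts.PickVisualMedia()) { uri: Uri? ->
            if (uri != null) {
                uploadSelfie(uri) // hand the selected image to the app's own flow
            }
        }

    private fun launchPicker() {
        // Restrict the picker to images only for the selfie use case.
        pickSelfie.launch(
            PickVisualMediaRequest(ActivityResultContracts.PickVisualMedia.ImageOnly)
        )
    }

    private fun uploadSelfie(uri: Uri) { /* app-specific, hypothetical */ }
}
```

Because the picker runs in a separate system process and hands back only the chosen item, it satisfies the "occasional access" case without the app ever seeing the rest of the user's library.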
Furthermore, the new policy restricts disruptive full-screen notifications to cases of genuine urgency. Many apps have abused full-screen prompts to upsell subscriptions or other services; going forward, Google will limit the feature through a special “Full Screen Intent permission” that will only be granted to apps targeting Android 14 and above that demonstrate a genuine need for full-screen functionality.
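The permission in question corresponds to Android's existing `USE_FULL_SCREEN_INTENT` manifest permission, which on Android 14 is no longer granted automatically to apps that cannot justify it (such apps must instead ask the user to enable it in settings). A sketch of the manifest declaration, assuming a notification-driven app:

```xml
<!-- AndroidManifest.xml: declaring the full-screen intent permission.
     On Android 14+, apps without a qualifying use case (e.g. calls, alarms)
     will not receive this permission by default. -->
<manifest xmlns:android="http://schemas.android.com/apk/res/android">
    <uses-permission android:name="android.permission.USE_FULL_SCREEN_INTENT" />
</manifest>
```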
In an unexpected move, Google is taking the lead in regulating AI applications, a space where Apple has historically been more proactive. While Apple has not yet formalized an AI or chatbot policy in its App Store Guidelines, it has implemented stricter regulations on data collection methods known as “fingerprinting” and on apps that imitate others. Google Play’s revised guidelines are being rolled out now, granting AI app developers until early 2024 to incorporate the new flagging and reporting features into their applications.