Google is introducing new online safety features that make it easier to remove explicit deepfakes from search results and keep similar images from resurfacing. When users successfully request the removal of explicit deepfake content depicting them, Google's systems will also filter explicit results out of related searches and eliminate duplicates of the image.
“These protections have already proven successful in addressing other types of non-consensual imagery, and we’ve now built the same capabilities for explicit fake images,” said Google product manager Emma Higham. “These efforts are designed to provide individuals with peace of mind, especially if they’re concerned about similar content surfacing in the future.”
To further safeguard user experiences, Google is refining search rankings for queries that are more likely to yield explicit fake content. Searches intentionally seeking deepfake images of real individuals should now return “high-quality, non-explicit content,” like relevant news articles. Additionally, sites that frequently receive removal requests for explicit deepfakes will see a demotion in their search rankings.
Previous updates reportedly reduced exposure to explicit image results for specific deepfake queries by over 70 percent this year. Google is also exploring methods to differentiate between legitimate explicit content, such as consensual nude scenes, and explicit deepfakes, ensuring that authentic images can still be displayed while lowering the visibility of fakes.
These enhancements build on Google's ongoing efforts to combat dangerous and explicit content online. Earlier actions included expanding the types of doxxing information eligible for removal in 2022, beginning to blur sexually explicit imagery by default in August 2023, and banning advertisers from promoting deepfake pornography this past May.