Bumble is enhancing its platform to allow users to report AI-generated profiles more easily. The dating and social connection app has added an option labeled "Using AI-generated photos or videos" within its Fake Profile reporting menu.
"Removing misleading or dangerous elements is essential for fostering meaningful connections," said Risa Stein, Vice President of Product at Bumble. "We are committed to improving our technology to ensure Bumble remains a safe and trusted dating environment. This new reporting feature helps us understand how bad actors misuse AI, enabling our community to make confident connections."
A recent Bumble survey found that 71% of Gen Z and Millennial users want restrictions on AI-generated content in dating apps, and the same share consider AI-generated images of people in places they have never been or doing activities they have never done to be a form of catfishing.
Fake profiles not only deceive users but can also cause serious financial harm. In 2022, the Federal Trade Commission logged nearly 70,000 reports of romance scams, with losses totaling $1.3 billion. To counter these threats, dating apps including Bumble have been strengthening their safety measures. Earlier this year, Bumble introduced Deception Detector, an AI-powered tool for identifying fake profiles, and also launched a feature that shields users from unwanted explicit content. Tinder, meanwhile, has rolled out its own profile verification system in the US and UK.