Stolen Faces: How AI-Generated Fake Photos Fuel Bullying and Revenge—And the AI Solutions Needed to Combat This New Threat

Miriam Adib, a gynecologist in Spain, recently learned that her 14-year-old daughter was being bullied. The girl showed her a graphic photo featuring her own face on a body that was not hers. Adib noted that had she not known her daughter's physique, she might have believed the image was real. Disturbingly, the doctored image had been circulating widely in local social media groups, a stark example of how such fabrications can cause serious harm.

This issue isn't confined to Spain; similar incidents have been reported in the UK and South Korea. A South Korean girl, known by the pseudonym Heejin, discovered dedicated chat rooms online full of explicit images onto which her face, lifted from photos taken during her student days, had been grafted, a disturbing illustration of this trend in digital manipulation.

The term "deepfake" is a recent coinage combining "deep learning" and "fake." The technology uses artificial intelligence to create realistic fake images, videos, and audio, and as AI advances, generating convincing fake photos is becoming ever easier. Many people have encountered manipulated images on social media, such as celebrities depicted in compromising situations they never actually experienced. In 2017, creative users began swapping actor Nicolas Cage's face into famous movie scenes, paving the way for a broader range of deepfake creations that require minimal technical know-how.

The proliferation of this technology has fueled a range of harmful activities, including fake news, financial fraud, and a growing number of incidents involving explicit imagery targeting teenagers, children, and women. This has created a pressing need for accessible counter-technology. Following a wave of reported deepfake sexual crimes, a research team from the Chinese Academy of Sciences announced the open-source release of VisionRush, an AI tool designed to detect such fabrications.

This AI detection technology learns to recognize the telltale characteristics of AI-generated images. During identification, it scrutinizes pixel-level details and extracts features that manipulation tends to distort, such as texture, lighting, facial expressions, and edge detail. The tool was entered in a recent global deepfake-defense competition, whose organizers encouraged participants to make their models publicly available.
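To make the idea concrete, here is a minimal, illustrative sketch of the kind of pixel-level feature extraction such detectors build on. This is not VisionRush's actual model; the feature names, the toy threshold, and the decision rule are all assumptions chosen for demonstration. Real detectors learn these boundaries from large labeled datasets rather than using a fixed cutoff.

```python
import numpy as np

def edge_texture_features(img: np.ndarray) -> dict:
    """Compute simple pixel statistics of the kind a detector might
    inspect: gradient (edge) strength and overall texture variance."""
    gy, gx = np.gradient(img.astype(float))
    edge_mag = np.sqrt(gx ** 2 + gy ** 2)
    return {
        "mean_edge_strength": float(edge_mag.mean()),
        "edge_variance": float(edge_mag.var()),
        "texture_variance": float(img.astype(float).var()),
    }

def looks_suspicious(features: dict, threshold: float = 5.0) -> bool:
    """Toy decision rule (illustrative threshold only): unnaturally
    smooth regions can hint at generative blending artifacts."""
    return features["mean_edge_strength"] < threshold

# Synthetic demo: a noisy "natural" patch vs. a heavily smoothed one.
rng = np.random.default_rng(0)
natural = rng.integers(0, 256, size=(64, 64))
smoothed = np.full((64, 64), 128)

print(looks_suspicious(edge_texture_features(natural)))   # strong edges
print(looks_suspicious(edge_texture_features(smoothed)))  # suspiciously flat
```

A production system would replace the hand-set threshold with a trained classifier (typically a deep network) operating on many such learned features at once.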

Additionally, major tech companies are taking action. In early September, Microsoft's Bing announced a partnership with StopNCII, a free platform aimed at protecting victims of non-consensual intimate imagery. Victims can generate a unique "digital fingerprint" of the images in question, which partnering platforms such as Reddit, TikTok, Instagram, and Snapchat then use to find and remove matching uploads.
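The "digital fingerprint" approach rests on perceptual hashing: a compact signature that stays nearly identical even when an image is resized, recompressed, or lightly edited, so platforms can match copies without ever storing the image itself. The sketch below is a simplified average-hash, not the specific algorithm StopNCII uses; the function names and the 8x8 hash size are illustrative assumptions.

```python
import numpy as np

def average_hash(img: np.ndarray, hash_size: int = 8) -> str:
    """Simplified perceptual fingerprint: block-average the image down
    to hash_size x hash_size, then threshold each cell at the mean.
    Assumes image dimensions are divisible by hash_size."""
    h, w = img.shape
    small = img.reshape(hash_size, h // hash_size,
                        hash_size, w // hash_size).mean(axis=(1, 3))
    bits = (small > small.mean()).flatten()
    return "".join("1" if b else "0" for b in bits)

def hamming_distance(h1: str, h2: str) -> int:
    """Number of differing bits; small distance => likely the same image."""
    return sum(a != b for a, b in zip(h1, h2))

# Demo: a mildly edited copy should hash almost identically.
rng = np.random.default_rng(1)
original = rng.integers(0, 256, size=(64, 64)).astype(float)
edited = original + rng.normal(0, 5, size=(64, 64))  # light noise

print(hamming_distance(average_hash(original), average_hash(edited)))
```

Because only the short bit string is shared, a victim never has to upload the image itself, which is the key privacy property of the StopNCII design.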

As the battle against digital manipulation evolves, users are urged to take proactive measures to protect their privacy online. This includes minimizing the sharing of videos featuring children and teenagers, remaining vigilant against unauthorized photography, and reviewing app permissions for personal information access. The journey to effective protection against deepfakes and privacy breaches requires collective action and advanced technological solutions.
