Meta Introduces Auto-Blurring for Nudity in Instagram DMs to Enhance Teen Safety

Meta announced on Thursday that it is testing new features on Instagram aimed at protecting young users from unwanted nudity and sextortion scams. One of the key features, "Nudity Protection in DMs," automatically blurs any images detected as containing nudity, creating a safer experience for adolescents.

In addition, Meta plans to alert teens with warnings about the risks of sharing intimate images, encouraging them to think carefully before doing so. The company hopes these initiatives will enhance protection against scammers who may send nude images to coerce young people into reciprocating with their own photographs.

Meta is also introducing changes designed to make it harder for potential scammers to interact with teens. By developing technology to identify accounts linked to sextortion scams, Meta will restrict the activities of these suspicious accounts, reducing their ability to reach teen users. Moreover, Meta is expanding its partnership with Lantern, the cross-platform child safety program, to include more data specific to sextortion signals.

Although Meta already has policies against sending unsolicited nudity and against coercive behavior, the company acknowledges that these issues continue to affect many young users, sometimes with severe emotional consequences. Below, we delve into the specifics of these changes.

Nudity Protection Features

The "Nudity Protection in DMs" feature aims to shield young Instagram users from cyberflashing by placing nude images behind a safety screen. Users will have the option to decide whether to view these images.

"We will also encourage teens not to feel pressured to respond, with options to block the sender and report the chat," said Meta. This safety screen will be active by default for users under 18, while older users will receive notifications prompting them to enable the feature.

"When nudity protection is activated, individuals sending nude images will see a reminder to be cautious about sharing sensitive photos, with the ability to retract any images sent in error," the company noted.

Anyone attempting to forward a nude image will receive the same cautionary message. Because the filter relies on on-device machine learning, with image analysis happening entirely on the user's device, it works even within end-to-end encrypted chats. The nudity filter has reportedly been under development for nearly two years.
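
Meta has not published implementation details, but the broad pattern is easy to sketch. The code below is a hypothetical illustration, not Meta's implementation: nudity_score stands in for whatever classifier would ship on-device, and the 0.8 threshold is an invented value. The key property is that both analysis and blurring happen locally, so nothing has to be decrypted or inspected server-side.

```python
from typing import Callable

from PIL import Image, ImageFilter

# Assumed confidence cutoff; Meta has not published its threshold.
BLUR_THRESHOLD = 0.8


def screen_incoming_image(
    image: Image.Image,
    nudity_score: Callable[[Image.Image], float],
) -> Image.Image:
    """Return the image to render: blurred behind a safety screen if
    the on-device classifier flags it, untouched otherwise.

    `nudity_score` is a stand-in for a local ML model. Because it runs
    on the recipient's device, the image never has to be decrypted or
    analyzed server-side, which is how the filter can operate inside
    end-to-end encrypted chats.
    """
    if nudity_score(image) >= BLUR_THRESHOLD:
        # A heavy Gaussian blur approximates the "safety screen";
        # the original is kept elsewhere so the user can still
        # choose to view it.
        return image.filter(ImageFilter.GaussianBlur(radius=40))
    return image
```

A client would run each received attachment through a check like this before rendering it, keeping the original bytes so the user can still tap through the safety screen.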

Safety Tips for Users

As an added precaution, Instagram users who send or receive nude images will be directed to important safety tips created with expert guidance. These tips highlight potential risks, such as the possibility of screenshots or forwards without the user's consent and the need to carefully scrutinize profiles to verify identities.

"These tips include links to resources such as Meta's Safety Center, support helplines, StopNCII.org for those over 18, and Take It Down for users under 18," the company stated.

Additionally, Meta is piloting pop-up messages for users who engage with accounts flagged for sextortion. These messages will point them to relevant resources.

"We're also integrating new child safety helplines into our in-app reporting options, which will connect teens to local helplines when they report issues like nudity or sexual solicitation," Meta added.

Advanced Technology to Detect Sextortionists

While Meta actively removes accounts identified as engaging in sextortion, the challenge lies in detecting these bad actors. The company is investing in technology to identify accounts that may be involved in sextortion based on various behavioral signals.

"While these indicators alone are not proof of a rules violation, we're employing precautionary measures to reduce the likelihood of these accounts finding and interacting with teen users," Meta explained. "This complements our existing efforts to monitor suspicious accounts."

Meta has not disclosed the specific technology behind this detection, nor the exact signals that flag an account as a potential sextortionist (we have requested further information). It's likely the company analyzes communication patterns to identify harmful actors.
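
To make that concrete, here is a purely illustrative sketch of how weak behavioral signals might be combined into a single score. Every feature name, threshold, and weight below is invented for the example; none comes from Meta's system.

```python
from dataclasses import dataclass


@dataclass
class AccountActivity:
    """Hypothetical per-account features; not Meta's actual signals."""
    account_age_days: int
    requests_to_teens: int   # message requests sent to minors' accounts
    accepted_requests: int   # how many of those requests were accepted
    reports_received: int    # user reports filed against the account


def suspicion_score(a: AccountActivity) -> float:
    """Combine weak behavioral signals into a score in [0, 1].

    As Meta notes, no single indicator proves a violation, so a score
    like this would only gate precautions (hiding message requests,
    removing the "Message" button), never outright removal.
    """
    score = 0.0
    if a.account_age_days < 30:
        score += 0.2                                # brand-new account
    if a.requests_to_teens > 50:
        score += 0.4                                # mass outreach to minors
    if a.accepted_requests / max(a.requests_to_teens, 1) < 0.05:
        score += 0.2                                # requests almost never accepted
    score += min(0.1 * a.reports_received, 0.2)     # capped report signal
    return min(score, 1.0)
```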

Accounts flagged as potential sextortionists will encounter restrictions on their ability to message or interact with other users. Any message requests sent by potential sextortion accounts will be redirected to the recipient's hidden requests folder, ensuring users are not notified of these messages.

Current chats with potential scam accounts will not be abruptly shut down, but users will receive Safety Notices advising them to report any threats and reminding them that they can decline any uncomfortable requests.

Teen users already have protections that limit direct messages from adults they aren't connected to and, in some cases, from other teens. Meta is now testing a feature that hides the "Message" button on teenagers' profiles when viewed by potential sextortion accounts, even if the accounts are already connected.

"We're also exploring options to hide teens from these accounts in follower, following, and likes lists, and to make it harder for them to find teen profiles in search results," the company noted.

Amid increasing scrutiny of child safety on Instagram in Europe, regulators have raised concerns about Meta's approach ever since the European Union's Digital Services Act (DSA) was introduced last summer.

A Step Towards Enhanced Safety

Meta previously announced efforts to combat sextortion, including its collaboration with Take It Down, a tool that lets individuals generate a hash of intimate images locally and submit it to a shared repository used to combat the distribution of non-consensual content.
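
The article doesn't spell out Take It Down's hashing scheme, but the privacy-preserving idea can be sketched simply: the fingerprint, not the photo, leaves the device. The example below uses a cryptographic SHA-256 hash for clarity; production matching systems generally prefer perceptual hashes (Meta has open-sourced one called PDQ) so that resized or re-encoded copies of an image still match.

```python
import hashlib


def local_image_hash(path: str) -> str:
    """Fingerprint an image file entirely on-device.

    Only the resulting hex digest would be submitted to the shared
    repository that participating platforms check content against;
    the image itself never leaves the user's device.
    """
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()
```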

Earlier attempts to tackle these challenges drew criticism for requiring young users to upload their images. In the absence of stringent laws mandating that social networks ensure child safety, Meta has largely self-regulated, with inconsistent results.

However, various legislative requirements have emerged in recent years, such as the U.K.'s Children's Code and the DSA, compelling platforms like Meta to pay closer attention to protecting minors.

For instance, in July 2021, Meta made Instagram accounts belonging to young users private by default ahead of the U.K. compliance deadline, followed by even stricter privacy measures in late 2022.

In January, Meta outlined plans for more stringent messaging protocols for teens just before meeting the DSA's compliance deadline in February.

This gradual approach raises questions about why it took the company so long to implement stronger protections. Critics suggest that Meta prioritized engagement over robust safety measures, as highlighted by former employee Frances Haugen.

When asked about extending the new protections to Facebook, a Meta spokesperson stated, "We aim to address the most significant needs, which, concerning unwanted nudity and educating teens about sharing sensitive images, we believe is primarily on Instagram DMs."

Meta continues to introduce stricter messaging regulations and parental controls to enhance the safety of its teenage user base.
