Snap’s AI Chatbot Under Fire in the U.K. Over Kids’ Privacy Risks

Snap’s AI chatbot has drawn the attention of the U.K.’s data protection authority, raising alarms about potential risks to children's privacy. The Information Commissioner’s Office (ICO) announced it has issued a preliminary enforcement notice to Snap, highlighting a "potential failure to appropriately assess the privacy risks associated with its generative AI chatbot, 'My AI.'"

This notice does not imply a confirmed breach but indicates that the U.K. regulator is concerned Snap may not have adequately ensured that its product complies with data protection regulations, particularly since the introduction of the Children’s Design Code in 2021. The ICO stated, “Our investigation provisionally found that Snap’s risk assessment prior to launching 'My AI' did not sufficiently evaluate the data protection risks linked to generative AI technology, especially regarding children. Assessing data protection risk is crucial given the innovative technologies involved and the handling of personal data of children aged 13 to 17.”

Snap now has an opportunity to address these concerns before the ICO makes its final ruling on potential rule violations. Information Commissioner John Edwards noted, “Our preliminary findings suggest a troubling oversight by Snap to adequately identify and assess privacy risks to children and other users before launching 'My AI.' We stress that organizations must consider both the risks and benefits associated with AI. Today’s preliminary notice demonstrates our commitment to safeguarding U.K. consumers’ privacy rights.”

Snap introduced the generative AI chatbot in February and rolled it out to the U.K. in April. The technology powers a virtual friend pinned to the top of users’ feeds, which they can ask for advice or send snaps to. Initially exclusive to Snapchat+ subscribers, access was quickly extended to free users, with the bot able to respond with AI-generated snaps of its own.

The company claims that 'My AI' was developed with enhanced moderation features, including age considerations intended to keep content suitable for users. The bot is designed to avoid generating responses that are violent, hateful, sexually explicit, or otherwise inappropriate. Additionally, Snap’s Family Center parental controls let parents see whether their child has interacted with the bot in the past week.

Despite these precautions, reports have emerged of the bot providing inappropriate suggestions. For instance, in March, The Washington Post revealed that it advised a 15-year-old on concealing the smell of alcohol. In another instance, the bot provided a 13-year-old with advice on making their first sexual experience "special" by suggesting candles and music.

While concerns have grown among parents, anecdotes have also surfaced of teenagers expressing discontent with the presence of AI in their Snapchat experience, with some even resorting to bullying the bot.

In response to the ICO notice, a Snap spokesperson said, “We are carefully reviewing the ICO’s provisional decision. Like the ICO, we value our users' privacy and ensured 'My AI' underwent a comprehensive legal and privacy review before its launch. We will continue to collaborate constructively with the ICO regarding our risk assessment procedures."

This isn't the first time an AI chatbot has attracted scrutiny from European privacy regulators. In February, Italy's Garante halted the "virtual friendship service" Replika from processing local users' data due to concerns about risks to minors. The Italian authority later imposed a similar processing ban on OpenAI’s ChatGPT, which was lifted in April after OpenAI enhanced privacy disclosures and user controls, allowing users to request the exclusion of their data from AI training.

The rollout of Google’s Bard chatbot faced delays following concerns from Ireland’s Data Protection Commission, prompting additional disclosures and controls before its EU launch in July. Similarly, Poland’s data protection authority confirmed last month that it is investigating a complaint regarding ChatGPT.

Dr. Gabriela Zanfir-Fortuna, VP for global privacy at the Future of Privacy Forum, highlighted a statement from G7 data protection authorities about the need for embedding privacy in the design of generative AI technologies. "Developers should implement 'Privacy by Design' principles and document their analyses in a privacy impact assessment," she explained.

Earlier this year, the ICO released guidelines for developers of generative AI, proposing eight critical questions to consider during product development. Speaking at a G7 symposium, Edwards emphasized the importance of vigilance among developers to avoid repeating mistakes made during the rise of social media and online advertising. "We are observing and ready to take action," he warned.

Zanfir-Fortuna noted that while the ICO's preliminary enforcement action against Snap is not unprecedented, there appears to be heightened awareness and public warning from regulators regarding generative AI. “Regulators are encouraging companies to prioritize data protection, as they frequently issue preliminary decisions, compliance deadlines, and warning letters rather than immediate enforcement actions,” she said.

This report has been updated with additional commentary.