Snap's AI Chatbot Faces U.K. Scrutiny Over Children's Privacy Concerns

The U.K.'s data protection authority has concluded a nearly year-long investigation into Snap's AI chatbot, My AI, finding that the social media platform has adequately addressed concerns about children's privacy. At the same time, the Information Commissioner’s Office (ICO) issued a warning to the wider industry, urging businesses to proactively assess risks to individual rights before launching generative AI tools.
Generative AI (GenAI) refers to artificial intelligence technology that creates content. In Snap’s case, the technology powers a chatbot that converses with users, responding via text messages and snaps to provide automated engagement.
Snap’s AI functionality is powered by OpenAI’s ChatGPT, and the company asserts that it has implemented several safeguards within the application. These include default programming to consider age-appropriateness and robust guidelines aimed at preventing children from accessing inappropriate content. Parental controls are also integrated into the system.
“Our investigation into ‘My AI’ serves as a crucial reminder for the industry,” stated Stephen Almond, the ICO's executive director of regulatory risk, in a Tuesday announcement. “Organizations developing generative AI must prioritize data protection from the outset, rigorously assessing and mitigating risks to people's rights and freedoms before launching products.”
“We will maintain oversight of organizations’ risk assessments and utilize our full enforcement capabilities—including fines—to protect the public from potential harm,” he emphasized.
In October, the ICO issued Snap a preliminary enforcement notice, citing what it viewed as a “failure to properly assess the privacy risks” associated with its generative AI chatbot, My AI. The notice remains the only public reprimand for Snap, which could face fines of up to 4% of its annual revenue for confirmed breaches of data protection law.
Announcing the investigation's conclusion on Tuesday, the ICO said Snap had taken "significant steps" to carry out a more comprehensive review of the risks posed by My AI following its intervention. The regulator confirmed that Snap has demonstrated "appropriate mitigations" in response to the concerns raised, although it did not specify what additional measures were taken (we have asked for more information).
Further details may emerge when the regulator publishes its final decision in the coming weeks. The ICO expressed satisfaction that Snap has conducted a risk assessment for My AI that complies with data protection legislation, affirming its commitment to monitor the deployment of My AI and the management of emerging risks.
In response to the investigation's conclusion, a Snap spokesperson shared: “We are pleased that the ICO recognizes the steps we've taken to ensure community safety while using My AI. Although we believe we thoroughly examined the risks posed by My AI, we acknowledge that our documentation could have been clearer, and we have adjusted our global procedures in line with the ICO's feedback. We appreciate the ICO’s acknowledgment of our compliance with UK data protection laws and look forward to an ongoing collaborative relationship.”
Snap did not divulge specific mitigations adopted in light of the ICO’s intervention. The U.K. regulator has emphasized that generative AI enforcement is a priority. It has directed developers to its guidance on AI and data protection legislation and is currently conducting a consultation on privacy law’s application to generative AI models.
While the U.K. has yet to pass formal AI legislation, relying instead on existing regulators such as the ICO, the European Union has approved a risk-based AI framework that will soon impose transparency obligations on AI chatbots.