Elon Musk’s X Faces Nine Privacy Complaints Over EU User Data Collection for Grok Training

X, the social media platform owned by Elon Musk, is facing a wave of privacy complaints for allegedly using data from European Union users to train its AI models without obtaining prior consent.

Recently, a vigilant social media user spotted a setting indicating that X had quietly begun processing users' posts to train its Grok AI chatbot. The revelation caught the Irish Data Protection Commission (DPC), the regulator that oversees X's compliance with the EU's General Data Protection Regulation (GDPR), by surprise.

Under the GDPR, organizations can be fined up to 4% of their global annual turnover for confirmed violations. The nine complaints, lodged with data protection authorities in Austria, Belgium, France, Greece, Ireland, Italy, the Netherlands, Poland, and Spain, allege that X lacks a valid legal basis for using European users' posts for AI training without their consent.

Max Schrems, chairman of the privacy rights nonprofit noyb supporting these complaints, commented: “We have seen numerous instances of ineffective enforcement by the DPC in recent years. Our goal is to ensure that Twitter complies with EU law, which at the very least requires user consent for this purpose.”

The DPC has initiated legal action in the Irish High Court, seeking an injunction to compel X to cease using the data for AI model training. However, noyb argues that the DPC’s efforts have not been adequate, highlighting there is no mechanism for X users to request the deletion of their data that has already been utilized. In response, noyb has submitted GDPR complaints in Ireland and seven additional countries.

The complaints assert that X lacks a legitimate basis to process the data of around 60 million EU users for AI training without their consent. Although X appears to be leaning on a “legitimate interest” legal basis for this processing, privacy experts maintain that explicit user consent is necessary.

“Companies that engage directly with users should provide a simple yes/no prompt before processing their data. They routinely do this for numerous other purposes, so implementing it for AI training is entirely feasible,” stated Schrems.

In June, Meta paused its own plans to use user data for AI training after noyb backed a series of GDPR complaints and regulators intervened.

X's quiet use of user data for AI training, by contrast, went largely unnoticed for several weeks. According to the DPC, the processing of Europeans' data for AI model training took place between May 7 and August 1.

X users were eventually given the ability to opt out of the processing via a setting added to the web version of the platform in late July. Before that, there was no way to refuse, and users could hardly opt out of processing they did not know was happening.

That matters because the GDPR is intended to protect Europeans from unexpected uses of their personal information that could infringe on their rights and freedoms.

In challenging X's chosen legal basis, noyb points to a ruling by the Court of Justice of the European Union, Europe's highest court, last summer in a case stemming from a competition complaint against Meta over its use of user data for ad targeting. The court found that legitimate interest was not an appropriate legal basis in that context and that user consent was required.

Additionally, noyb highlights that providers of generative AI systems often claim they cannot meet other core GDPR requirements, such as the right to be forgotten or the right to request a copy of personal data. These issues are similarly raised in other pending GDPR complaints against OpenAI’s ChatGPT.
