Italian DPA Accuses OpenAI’s ChatGPT of Breaching Europe’s Privacy Laws

OpenAI is currently facing allegations of violating European Union privacy laws, following a comprehensive investigation by Italy’s data protection authority into its AI chatbot, ChatGPT.

While specific details of the authority's draft findings remain undisclosed, the Italian data protection agency, Garante, has formally notified OpenAI, granting the company 30 days to provide a defense against the suspected violations.

Confirmed breaches of EU rules can lead to significant fines of up to €20 million or 4% of OpenAI’s global annual turnover, whichever is higher. More critically, data protection authorities have the power to order changes to data processing practices to remedy confirmed violations. As a result, OpenAI could be forced to change how it operates or even withdraw its services from EU Member States where it cannot comply.

OpenAI has been approached for comment regarding the Garante’s notification, and this report will be updated upon receiving its official response.

In a recent statement, OpenAI asserted:

"We are confident that our practices are aligned with GDPR and other privacy legislation. We implement additional measures to safeguard user data and privacy, aiming for our AI to understand the world rather than private individuals. Our commitment to minimizing personal data in training systems like ChatGPT remains strong, and we proactively refuse requests for private or sensitive information. We intend to continue collaborating constructively with the Garante."

AI Model Training Legality Under Scrutiny

Last year, the Italian authority raised concerns about OpenAI’s compliance with the General Data Protection Regulation (GDPR) and imposed a temporary ban on local processing of ChatGPT data, which led to a brief suspension of the chatbot on the Italian market.

In its provision issued on March 30, the Garante found there was no suitable legal basis for collecting and processing personal data to train the algorithms underlying ChatGPT. It also highlighted the AI tool’s tendency to “hallucinate” (that is, produce potentially inaccurate information about individuals), alongside concerns regarding child safety. Ultimately, the authority indicated that ChatGPT may be infringing Articles 5, 6, 8, 13, and 25 of the GDPR.

Despite these concerns, OpenAI quickly reinstated ChatGPT in Italy after addressing some of the issues raised by the Garante, although the investigation continued. That investigation has now led the Garante to the preliminary view that the tool may indeed be violating EU law.

The Garante has not specified which of the suspected breaches it is now pursuing, but the legal basis OpenAI relies on to process personal data for AI model training remains a central concern. ChatGPT was trained on large amounts of data scraped from the public internet, including personal information, which creates significant GDPR compliance challenges for OpenAI.

The GDPR provides six possible legal bases for processing personal data, most of which are not applicable here. Following an order from the Garante last April, OpenAI can no longer cite “performance of a contract” for model training, leaving it with two options: obtaining consent or claiming legitimate interests.

Given that OpenAI never sought consent from the millions (or possibly billions) of people whose data it has processed, obtaining permission from EU citizens after the fact appears impractical. After the Garante’s intervention, OpenAI instead sought to rely on a claim of legitimate interests. However, this basis requires organizations to allow individuals to object to the processing of their data, which poses practical problems for an AI model like ChatGPT’s.

In theory, honoring such objections might require dismantling certain models and retraining them entirely without the data of the people who object, an expensive and challenging prospect. It also remains an open question whether the Garante will ultimately accept that legitimate interests is a valid basis in this context.

Based on previous rulings, it seems unlikely that legitimate interests can justify such extensive data processing: the processing must be necessary, and less intrusive alternatives must be considered. The EU’s highest court has already ruled against Meta’s reliance on legitimate interests to profile and track individuals for its advertising business. This raises significant doubts about whether a similar justification can hold for OpenAI’s processing of vast quantities of personal data to build generative AI technology, especially given the potential risks of misuse that could harm individuals.

A representative of the Garante confirmed that the legal basis for model training remains part of the ongoing investigation, but the authority has not disclosed which articles it suspects OpenAI of violating at this stage.

Today’s announcement from the Garante is not a final decision; the authority will await OpenAI’s response before reaching a conclusion.

The Garante’s statement reads:

"The Italian Data Protection Authority has officially notified OpenAI, the operator of the ChatGPT AI platform, regarding its objection to violations of data protection laws. Following a provisional order to restrict data processing on March 30, and due to findings from the preliminary investigation, the Authority suspects one or more unlawful acts concerning the regulations set forth by the EU. OpenAI has 30 days to submit defense documents in response to the alleged violations. The authority will also consider ongoing efforts by the special task force comprised of EU Data Protection Authorities."

OpenAI is also facing scrutiny in Poland, stemming from a complaint filed last summer involving inaccuracies generated by ChatGPT about a specific individual. That separate GDPR investigation is still ongoing.

To address growing regulatory pressure within the EU, OpenAI is working to establish a physical presence in Ireland. In January, the company announced its intent to use an Irish entity as the data service provider for EU users. The move is aimed at achieving “main establishment” status in Ireland and shifting GDPR oversight to Ireland’s Data Protection Commission via the one-stop-shop mechanism. However, OpenAI has yet to secure this status, so ChatGPT remains open to investigation by DPAs across the EU. Even if the status is granted, the Italian investigation will continue, since it concerns data processing that predates any such change.

The European Data Protection Board has initiated a task force to coordinate how GDPR applies to ChatGPT, aimed at fostering consistent outcomes across various national inquiries, including those in Italy and Poland. However, national authorities maintain their independence and regulatory authority, meaning there are no guarantees that concurrent investigations will yield uniform conclusions.

ChatGPT resumed service in Italy after OpenAI enhanced its privacy disclosures and controls.
