EU’s ChatGPT Taskforce Unveils Initial Strategies for Navigating AI Chatbot Privacy Compliance

A European Union data protection taskforce, which has been reviewing how the EU’s data protection regulations apply to OpenAI’s widely-used chatbot, ChatGPT, shared preliminary findings on Friday. The main conclusion is that privacy enforcers remain uncertain about key legal issues, particularly regarding the legality and fairness of OpenAI’s data processing methods.

This uncertainty is significant as penalties for confirmed violations of the EU's privacy laws can reach up to 4% of a company’s global annual revenue. Regulatory bodies also have the authority to halt non-compliant data processing activities. Consequently, OpenAI faces substantial regulatory risks, especially given the sparse framework of dedicated AI laws, which are still years away from full implementation.

However, the lack of clarity from EU data protection enforcers regarding existing laws' applicability to ChatGPT means OpenAI may feel emboldened to operate as usual, despite numerous complaints alleging its technology contravenes the General Data Protection Regulation (GDPR).

One notable example involves Poland’s data protection authority (DPA) launching an investigation after a complaint about the chatbot generating false information about an individual without correcting the inaccuracies. A similar complaint has recently surfaced in Austria.

Heightened Complaints, Limited Enforcement

While GDPR theoretically applies whenever personal data is collected and processed, large language models like OpenAI's GPT—foundational to ChatGPT—are trained on vast quantities of data scraped from public internet platforms, including social media. This creates potential conflicts with EU legislation, since data protection authorities (DPAs) have the power to halt non-compliant processing.

Last year, Italy's privacy watchdog temporarily banned OpenAI from processing data of local ChatGPT users, invoking emergency provisions within GDPR. This decision forced OpenAI to temporarily suspend its service in Italy until it complied with demands from the DPA concerning user information and control mechanisms. Despite these adjustments, the Italian investigation into ChatGPT’s lawfulness continues, leaving OpenAI under legal scrutiny in the EU.

Under the GDPR, entities must establish a legal basis for processing personal data. The regulation outlines six potential bases; however, many are not applicable to OpenAI. The Italian DPA has already advised OpenAI against claiming contractual necessity as a basis for data processing, narrowing its options to either obtaining user consent or relying on a broad category known as legitimate interests (LI), which mandates a balancing test and the option for users to object to processing.

Since Italy's intervention, OpenAI seems to have shifted to claiming legitimate interests for processing personal data used in model training. Nevertheless, in January, the DPA’s draft findings indicated that OpenAI breached GDPR stipulations. The complete details of these findings remain unavailable, and a final decision on the complaint is still pending.

Addressing the Lawfulness of ChatGPT

The taskforce's report addresses the intricate legal landscape, emphasizing the necessity for ChatGPT to have a valid legal basis throughout all stages of personal data processing, including data collection, filtering, and output generation.

The first three processing stages involve what the taskforce describes as “peculiar risks” to individuals' fundamental rights. The report highlights the extensive and automated nature of web scraping, which can lead to the acquisition of large volumes of sensitive personal data, such as information about health, sexuality, and political beliefs. The GDPR mandates stricter processing standards for this type of data.

Moreover, the taskforce stresses that public data does not automatically qualify as “manifestly” public in a way that would exempt it from requiring explicit consent for processing sensitive data. To depend on legitimate interests, OpenAI must justify its data processing necessity, restrict processing to what is essential for this necessity, and perform a balancing test that weighs its interests against the data subjects' rights and freedoms.

The taskforce proposes that implementing “adequate safeguards,” such as “technical measures” and precise data collection criteria, could tilt the balancing test in OpenAI’s favor by minimizing potential privacy impacts. Additionally, it suggests measures to delete or anonymize personal data prior to the training phase.

OpenAI is also aiming to utilize legitimate interests for processing users’ prompt data for model training. The report highlights the critical need for users to be “clearly informed” that their data may be utilized for training purposes, which is a factor in the balancing test for legitimate interests.

Ultimately, it is up to the individual DPAs addressing complaints to determine if OpenAI fulfills the criteria to legitimately rely on these interests. If not, the only legal path available for OpenAI in the EU would be to request consent from users—an impractical option given the vast amount of personal data likely included in training datasets.

Fairness and Transparency are Essential

On the principle of fairness in GDPR, the taskforce emphasizes that privacy risks should not be shifted to users, such as by including disclaimers in terms and conditions suggesting “data subjects are responsible for their chat inputs.”

“OpenAI remains accountable for adhering to GDPR and should not claim that certain personal data inputs were prohibited from the outset,” the report notes.

Regarding transparency obligations, the taskforce acknowledges that OpenAI might be able to invoke an exemption from notifying individuals about the data collected due to the extensive web scraping required for training datasets. However, it reiterates the vital importance of informing users about the potential for their inputs to be used for training.

The report also addresses the issue of ChatGPT generating inaccurate information—an occurrence known as "hallucinating"—stating that the principle of data accuracy within GDPR must be respected. OpenAI is urged to provide users with clear information regarding the “probabilistic output” of the chatbot and its limited reliability. The taskforce encourages the inclusion of explicit warnings that generated text might be biased or fabricated.

In terms of user rights, such as the right to rectify personal data—central to many GDPR complaints about ChatGPT—the report underscores the importance of enabling users to easily exercise their rights. It criticizes OpenAI's current system, noting that while users can request the blocking of inaccurate data generation, they are not given the option to correct false information. Nevertheless, the taskforce does not offer detailed guidance on improving how OpenAI allows users to exercise their data rights, suggesting instead that the company implement “appropriate measures” to uphold data protection principles.

Enforcement of GDPR Regulations Remains Uncertain

Established in April 2023 following Italy's significant intervention, the ChatGPT taskforce aims to coordinate the enforcement of the EU's privacy laws in relation to emerging technologies. Operating within the European Data Protection Board (EDPB), this taskforce works alongside independent DPAs that enforce the laws in their jurisdictions.

Despite the DPAs’ independence, there appears to be hesitance among regulators regarding how to handle emerging technologies like ChatGPT. When Italy's DPA announced its draft decision earlier this year, it highlighted the relevance of the taskforce’s work in its proceedings, indicating a tendency for regulators to wait for the taskforce’s comprehensive report—potentially a year away—before initiating their enforcement actions.

In a recent interview, Poland’s DPA suggested that its investigation into OpenAI would pause until the taskforce concludes its efforts. After inquiries, the DPA did not specify whether its enforcement was delayed due to the ongoing work of the ChatGPT taskforce. According to a spokesperson from the EDPB, the taskforce's work does not influence the individual analysis conducted by DPAs in their ongoing investigations, but they acknowledged the EDPB’s role in promoting cooperation among DPAs regarding enforcement actions.

Currently, there seems to be a varied array of opinions among DPAs concerning the urgency of responding to issues surrounding ChatGPT. While Italy’s DPA acted quickly, former Irish data protection commissioner Helen Dixon cautioned against rushing to impose a ban, emphasizing the need to appropriately strategize regulation.

It is notable that OpenAI established an operational base in the EU through Ireland last fall. This was followed by adjustments to its terms and conditions, designating OpenAI Ireland Limited as the regional provider for services like ChatGPT. This allowed the AI company to seek the Irish Data Protection Commission (DPC) as its primary supervisor for GDPR compliance.

Such regulatory maneuvers appear to have yielded positive outcomes for OpenAI. According to the EDPB’s report, as of February 15, the company achieved main establishment status under the GDPR, enabling it to utilize the One-Stop Shop (OSS) mechanism for complaints. This means cross-border complaints will now be channeled through Ireland, simplifying matters for OpenAI and reducing the risks associated with decentralized GDPR enforcement faced in countries like Italy and Poland.

Ultimately, this shift means that future decisions regarding complaints will rest with Ireland’s DPC, which has developed a reputation for a more business-friendly enforcement approach to GDPR among major tech firms. This environment could position OpenAI to benefit from a favorable interpretation of EU data protection regulations.

OpenAI was contacted for a response regarding the EDPB taskforce's preliminary report, but did not respond by the time of publication.

In response to the EDPB's assertions about OpenAI's establishment in the EU, Maciej Gawronski of GP Partners, who represents a complainant in the Polish investigation, remarked that there is insufficient evidence that OpenAI's EU office holds genuine decision-making authority over data processing, as the GDPR's main-establishment rules require. He argued that it is implausible for personal data processing to be directed from headquarters in both the U.S. and the EU simultaneously, and he expressed skepticism about the EDPB report, suggesting it reads like an attempt to portray OpenAI as compliant. Gawronski asserted that the Polish DPA retains both the competence and the obligation to investigate his client's complaint.
