Google's GenAI Undergoing Privacy Risk Assessment Scrutiny in Europe

Google's primary privacy regulator in the European Union has launched an investigation to determine if the company has adhered to the bloc’s data protection laws regarding the use of individuals’ information in training generative AI.

The investigation focuses on whether Google was required to conduct a Data Protection Impact Assessment (DPIA) to proactively evaluate the potential risks that its AI technologies might pose to the rights and freedoms of individuals whose data was used in training these models.

Generative AI tools are notorious for producing convincing yet inaccurate information. That tendency, combined with their ability to surface personal information on request, raises legal risk for their developers. Ireland’s Data Protection Commission (DPC), which oversees Google’s compliance with the General Data Protection Regulation (GDPR), has the authority to impose fines of up to 4% of the global annual revenue of Alphabet, Google's parent company, for confirmed violations.

Google has produced several generative AI tools, including a suite of general-purpose large language models (LLMs) branded as Gemini (previously known as Bard). These power its AI chatbots and enhance its web search features. Underpinning these consumer-facing tools is Google’s PaLM 2 LLM, which debuted at last year’s I/O developer conference.

The Irish DPC is investigating how Google developed this foundational AI model under Section 110 of Ireland’s Data Protection Act 2018, which incorporates GDPR into national law.

Training generative AI models typically requires vast amounts of data, and the sources and types of information acquired by LLM developers are under increasing scrutiny over a range of legal issues, including copyright and privacy. Notably, any personal data of EU individuals used for AI training falls under the bloc’s data protection rules, regardless of whether it was scraped from the public internet or collected directly from users. This scrutiny has already led to GDPR compliance questions and enforcement actions against several LLM makers, including OpenAI (developer of GPT and ChatGPT) and Meta, which develops the Llama AI model.

Additionally, X, owned by Elon Musk, has faced GDPR complaints and scrutiny from the DPC over its use of individuals' data for AI training. That led to court proceedings and an agreement by X to restrict its data processing, although no sanction has been imposed. X could still face GDPR penalties if the DPC concludes that its handling of user data for training its AI tool Grok was non-compliant.

The DPC's investigation into Google’s generative AI is part of a broader regulatory effort.

“The statutory inquiry evaluates whether Google fulfilled any requirements to conduct an assessment in accordance with Article 35 of the GDPR (Data Protection Impact Assessment) prior to processing the personal data of EU/EEA individuals associated with the development of its foundational AI model, Pathways Language Model 2 (PaLM 2),” the DPC stated in a press release.

The DPC emphasized that a DPIA is vital for ensuring that individuals’ fundamental rights and freedoms are adequately protected when processing personal data is likely to result in high risks.

“This inquiry is part of the DPC's collaborative efforts with other EU/EEA regulators to oversee the processing of personal data in the development of AI models and systems,” the DPC added, highlighting the ongoing initiatives among GDPR enforcers within the bloc to establish a consensus on applying privacy laws to generative AI tools.

Google has not responded to inquiries regarding the sources of data for its generative AI tools, but a spokesperson stated, “We take our obligations under the GDPR seriously and will work constructively with the DPC to address their questions.”