California's Privacy Watchdog Considers New AI Rules for Opt-Out and Access Rights

The California Privacy Protection Agency (CPPA) Sets the Stage for AI Regulation

The California Privacy Protection Agency (CPPA) is taking significant steps to regulate artificial intelligence (AI) through draft regulations designed to protect consumer data. As a key player in shaping digital privacy laws in a state that hosts many tech giants and AI innovators, the CPPA has introduced guidelines for what it terms automated decision-making technology (ADMT)—essentially, AI.

According to Ashkan Soltani, the CPPA’s executive director, this draft represents “the most comprehensive and detailed regulations in the AI field.” Drawing inspiration from the European Union's General Data Protection Regulation (GDPR), which has empowered individuals regarding automated decisions since May 2018, the CPPA aims to build upon this framework with stringent provisions that challenge tech companies while providing clarity and security for consumers.

At the heart of the proposed regulations are opt-out rights, pre-use notifications, and access rights that will empower California residents to understand how their data is utilized in AI and automation. Importantly, the draft suggests that AI profiling may also be included, which could have major implications for US advertising technology (adtech) firms like Meta, whose business models rely on user tracking and profiling for targeted advertising.

Under these proposed regulations, adtech companies might need to offer California residents the option to decline commercial data surveillance. The draft states that businesses must allow consumers to opt out of having their data used for behavioral advertising. It also restricts the conditions under which firms can bypass this opt-out right, particularly in cases involving security or fraud prevention.

Soltani describes the CPPA’s strategy for regulating ADMT as risk-based, paralleling the EU's AI Act, which has faced contentious discussions over how to regulate AI effectively. With ongoing issues regarding the EU's legislation and the lack of a unified federal privacy law in the US, California has the potential to emerge as a leading global authority on AI regulation.

While the implications of California’s AI regulation may be profound at the local level, its specific reach remains confined to state residents. Companies may opt to extend similar privacy protections to individuals in other states, but compliance would not be legally mandated beyond California.

California's journey into AI regulation follows its earlier implementation of GDPR-inspired data protection laws, specifically the California Consumer Privacy Act (CCPA), which came into effect in early 2020. The momentum continued with the California Privacy Rights Act (CPRA), a 2020 voter-approved initiative that expanded those privacy protections and established the CPPA itself. The current draft regulations concerning ADMT are part of this broader effort.

In a press release, the CPPA stated, “The proposed regulations would ensure consumers have the right to opt-out of and access information regarding businesses’ use of ADMT as specified by the CCPA.” The Board of the Agency will review these proposals in a meeting scheduled for December 8, 2023, before formal rulemaking begins in the following year.

Moreover, the CPPA is considering additional draft requirements for risk assessments that will work in tandem with the ADMT regulations. These combined frameworks aim to provide consumers with greater control over their personal information, ensuring that AI technologies are designed with privacy in mind.

Vinhcent Le, a member of the CPPA Board and a key figure in drafting these regulations, affirmed, “California is leading the way in fostering innovative privacy protections for emerging technologies, including AI. These proposed rules ensure that automated decision-making is employed responsibly, safeguarding individual privacy, including that of children and employees.”

Key Aspects of the Proposed Regulations

The draft regulations focus on access and opt-out rights concerning business use of ADMT. The proposed regime intends to allow residents to request the exclusion of their data from automated decision-making, with limited exemptions. These exemptions apply primarily to data necessary for:

- Security purposes: Preventing, detecting, and investigating incidents.

- Fraud prevention.

- Safety: Protecting the consumer's physical well-being.

- Services requested by the consumer, subject to strict conditions requiring businesses to justify why the data processing is necessary.

Under these measures, a business's claim that a process is too automated to accommodate an opt-out will likely face scrutiny. Organizations using ADMT will bear the burden, and the compliance cost, of justifying any denial of a consumer's opt-out request.

To ensure transparency, businesses intending to employ ADMT must provide “pre-use notices” to consumers. These notices will explain how individuals' information will be used, allowing them to opt out or to request more detailed information about the intended use of AI or automation.

The proposed framework also includes access rights, enabling residents to request information about:

- How their data is utilized in ADMT.

- The outputs and decisions generated by these technologies, including details on any human oversight.

- The logic behind the technology and the parameters influencing its outcomes.

While similar in intent to the GDPR’s requirements, the proposed CCPA framework goes further in enhancing consumer access to information, specifying what companies must disclose in response to requests. The only proposed exemptions from these access rights are for security, fraud prevention, and safety.

Not every application of ADMT will be encompassed by the CCPA's proposed regulations. The draft establishes specific thresholds for when the rules apply, focusing on decisions that significantly impact consumers, such as employment opportunities or profiling in public spaces.

Once finalized, the regulations may also extend to advertising profiling and the use of data from consumers under the age of 16, indicating a conscious effort to limit the operations of big data players and enhance consumer protections.

The draft proposal marks the beginning of the CPPA’s rulemaking process, with a public consultation expected imminently. While a final version remains forthcoming, should the Agency expedite its efforts, a regulation could be formalized by late next year, with compliance expected as early as 2025. As AI technology continues to evolve, these regulations will need to adapt alongside it, aiming to protect consumer rights in a rapidly changing landscape.
