The U.K.'s Competition and Markets Authority (CMA) has concluded an initial review of generative AI, publishing a report that proposes seven principles to embed “consumer protection and healthy competition” at the core of the responsible development and use of foundation models (FMs). The review, announced back in May, aims to engage stakeholders on AI's market impacts and guide future regulatory action.
The proposed principles, which the CMA is presenting for stakeholder consideration, include:
1. Accountability: FM developers and deployers are accountable for the outputs provided to consumers.
2. Access: Ensuring ongoing, unrestricted access to key inputs.
3. Diversity: Encouraging a variety of business models, both open and closed.
4. Choice: Providing businesses with enough options to utilize FMs in a manner that suits them.
5. Flexibility: Enabling businesses to switch between, or use, multiple FMs as needed.
6. Fair Dealing: Preventing anti-competitive behavior, including self-preferencing and bundling.
7. Transparency: Informing consumers and businesses about the risks and limitations of FM-generated content for informed decision-making.
Drawing from its experience in market regulation and feedback from AI stakeholders, the CMA has crafted this draft of pro-innovation principles, reflecting instructions from the U.K. government to evaluate AI's implications within existing regulatory frameworks. Over time, the CMA may promote these principles as best practices to mitigate potential competition complaints in the rapidly evolving AI landscape.
While the CMA is laying the groundwork, a further update regarding these principles is anticipated in early 2024.
Assessing AI Impacts
Will Hayter, senior director of the CMA’s Digital Markets Unit (DMU), emphasized the significance of this review for competition and consumer welfare: “If the market functions effectively, the best products will prevail, benefiting consumers and businesses alike. Conversely, a malfunctioning market could hinder innovative enterprises and harm consumers.” He stressed the importance of proactively understanding ongoing developments rather than retroactively analyzing their implications.
Hayter acknowledged the dual potential of foundation models: while they can drive innovation and competition, they also pose challenges. “The benefits and risks could manifest rapidly, so we are intent on identifying both positive and negative outcomes as we progress,” he pointed out.
The CMA defines foundation models as large-scale AI systems that can be fine-tuned for various customer applications, playing a critical role in the AI supply chain by facilitating the development of customer-facing applications and services.
When asked about categorizing foundation model developers for specific regulation under the U.K.’s forthcoming “pro-competition” reforms, Hayter stated it is premature to make definitive predictions about how these emerging technologies will influence markets. However, the regulator is keen to stay ahead in evaluating technologies with significant potential impacts.
“We believe it’s vital to encourage the market toward favorable outcomes. These proposed principles are intended to facilitate that,” Hayter explained. “At this stage, they are genuinely open for discussion, allowing us to collaborate with various organizations to refine them and explore how best they can be realized.”
The CMA received over 70 submissions to its call for input, encompassing a diverse range of organizations—from AI labs to civil society groups. Hayter expressed eagerness to embrace these varied perspectives as the principles and conversations evolve.
The U.K. government released its own principles for AI development in March, establishing guidelines that overlap somewhat with the CMA's proposals. However, the CMA's principles specifically target risks related to competition and consumer protection. Notably, issues surrounding security and data protection, which fall under the purview of the Information Commissioner’s Office, were not covered in this review, as the CMA aims to maintain focus on its regulatory domain.
Adopting a Targeted Approach
A broader issue regulators must navigate is how to support the growth of innovative AI technologies while swiftly addressing emerging challenges. The U.K. plans to refresh its competition regime by introducing specific rules for powerful platforms, as highlighted by Hayter’s role at the DMU.
In contrast, the European Union has implemented the Digital Markets Act, aimed at major tech companies like Alphabet and Amazon, while Germany has revamped its competition regulations targeting Big Tech. Consequently, the U.K. finds itself lagging behind its counterparts in addressing concerns related to tech giants’ market control.
Addressing potential negative market impacts, Hayter noted that foundation models might prompt similar issues to those that currently plague dominant tech firms. Yet, he underscored the unpredictability of AI market dynamics: “We could see scenarios where these models empower new challengers to disrupt existing incumbents, which would be beneficial. Conversely, they could entrench the positions of already powerful companies if access to key inputs becomes restricted.”
In Hayter's view, strategic regulation should be carefully considered and not rushed. Future regulations should be “very closely targeted,” aiming to maximize technology's potential while remaining vigilant about risks.
He reiterated the CMA's commitment to understanding specific market dynamics without presuming the outcome. As new developments unfold, the CMA will pay particular attention to access to critical inputs—data and computing power—while remaining alert to the evolving landscape.
With the goal of facilitating positive advancements from AI technology, the CMA continues its proactive approach, aiming for a well-calibrated response to emerging issues in this complex and evolving field.