Regulating AI Responsibly: No Need to Reinvent the Wheel

We are currently experiencing one of the most profound technological revolutions in the past century. For the first time since the tech boom of the 2000s and perhaps even since the Industrial Revolution, essential societal functions are undergoing significant changes through tools that some see as groundbreaking and others as concerning. While the advantages of artificial intelligence (AI) continue to generate public debate, there is widespread consensus regarding its extensive influence on the future of work and communication.

Institutional investors are increasingly aware of this trend. In the past three years, venture capital investment in generative AI has surged by 425%, reaching $4.5 billion in 2022, as reported by PitchBook. This rapid influx of funding is largely fueled by adoption across a wide range of industries. Consulting giants like KPMG and Accenture are investing billions in generative AI to enhance their client offerings. Airlines are implementing new AI technologies to refine route optimization, and even biotech companies are harnessing generative AI to advance therapies for life-threatening illnesses.

As a result, this transformative technology has quickly drawn regulatory attention. Figures like Federal Trade Commission Chair Lina Khan have highlighted the societal risks associated with AI, including increased fraud, automated discrimination, and collusive pricing if not appropriately managed.

One prominent example of the regulatory concern surrounding AI is Sam Altman’s recent testimony before Congress, where he stated that “governmental regulatory intervention will be crucial to mitigate the risks posed by increasingly powerful models.” As the CEO of one of the largest AI startups globally, Altman is actively engaging with lawmakers to ensure that regulatory discussions involve both public and private sector stakeholders. He has collaborated with other industry leaders to release an open letter asserting that “[m]itigating the risk of extinction from A.I. should be prioritized globally alongside other significant societal risks such as pandemics and nuclear threats.”

Technologists like Altman and regulators like Khan agree that effective regulation is necessary to ensure safe technology practices, but they often differ in the scope of such regulations. Generally, founders and entrepreneurs advocate for minimal restrictions to foster an innovative economic climate, while government officials tend toward more comprehensive regulations to protect consumers.

However, both parties often overlook areas where regulation has been effectively implemented for years. The rise of the internet, search engines, and social media prompted governmental oversight through laws like the Telecommunications Act, the Children’s Online Privacy Protection Act (COPPA), and the California Consumer Privacy Act (CCPA). Instead of imposing a sweeping set of restrictive regulations that may stifle innovation, the U.S. employs a patchwork of policies grounded in existing fundamental laws, including intellectual property, privacy, contract law, cybersecurity, and data protection.

These frameworks draw from established technological standards and encourage the adoption of best practices in evolving technologies. They also ensure the presence of trusted organizations that apply these standards in practice.

For instance, Secure Sockets Layer (SSL) and its successor, Transport Layer Security (TLS), are protocols that secure data transferred between browsers and servers, helping organizations meet encryption requirements outlined in laws like the CCPA and the EU’s GDPR. These protocols protect sensitive customer information, credit card details, and personal data from interception. SSL/TLS certificates, issued by trusted certificate authorities (CAs), verify the identity of the server on the other end of the connection, so a browser can confirm it is actually talking to the site it intended to reach.
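To make the CA model concrete, here is a minimal sketch using Python's standard `ssl` module. The `fetch_server_certificate` helper is illustrative and is not called here; the point is that an ordinary client context, out of the box, refuses to talk to a server that cannot present a CA-signed certificate matching its hostname.

```python
import socket
import ssl

def fetch_server_certificate(hostname: str, port: int = 443) -> dict:
    """Perform a TLS handshake and return the server's certificate.

    ssl.create_default_context() loads the system's trusted CA bundle,
    requires the server to present a CA-signed certificate, and checks
    that the certificate matches `hostname` -- the handshake fails if
    any of those checks do not pass.
    """
    context = ssl.create_default_context()
    with socket.create_connection((hostname, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            return tls.getpeercert()

# Even before any connection is made, the default client context
# insists on CA validation and hostname checking:
context = ssl.create_default_context()
print(context.verify_mode == ssl.CERT_REQUIRED)  # True: CA signature required
print(context.check_hostname)                    # True: name must match cert
```

The design point carries over to the article's argument: the trust decision is delegated to independent certificate authorities via a shared, lightweight standard, rather than to a government licensing office.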

A similar mutually beneficial relationship can and should be developed for AI. Implementing overly stringent licensing frameworks from government bodies may stall innovation and favor large players like OpenAI, Google, and Meta, fostering an anti-competitive atmosphere. A streamlined, user-friendly certification standard akin to SSL, governed by independent CAs, would safeguard consumer interests while fostering innovation.

Such standards could enhance transparency in AI usage, clarifying operational models, foundational frameworks, and the credibility of their sources. In this model, the government would collaborate to establish and promote these protocols, making them widely recognized and accepted benchmarks.

At its core, regulation aims to protect essential rights such as consumer privacy, data security, and intellectual property, not to limit technology that users actively choose to engage with daily. Similar protective structures have already proven effective for the internet and can be adapted for AI.

Since the birth of the internet, regulation has successfully balanced consumer protection with innovation incentives. Policymakers should continue this approach despite the rapid pace of technological advancement, as AI regulation need not reinvent the wheel amidst polarized political discussions.
