Why Global AI Regulations May Fail to Mitigate Risks of Dangerous Technology

Governments around the world are diligently exploring measures to mitigate the risks associated with artificial intelligence (AI), but many experts remain skeptical that effective control is feasible. The recent AI Safety Summit in the U.K. convened leaders from the U.S., China, the European Union, Britain, and 25 other nations, resulting in a united approach to safeguarding against issues such as misinformation and both intentional and accidental harm.

However, the ambitious goals outlined at the summit may not be entirely realistic. "AI encompasses a broad spectrum of technologies, from expert systems to traditional machine learning, and most recently, generative AI," stated Kjell Carlsson, head of data science strategy at Domino Data Lab. "This diversity complicates the creation of regulations that can adequately govern all these technologies and their numerous applications."

**The Drive for Safety**

During the summit, policymakers emphasized the critical need for ongoing AI research, focusing on safety through the Bletchley Declaration. This collaborative stance comes amid dire warnings from prominent figures regarding AI's potential threats, ranging from existential risks to significant job displacement. However, some experts argue that such doomsday predictions are exaggerated or sensationalized. "AI doomism is quickly becoming indistinguishable from an apocalyptic religion," remarked Yann LeCun, Meta's chief AI scientist, on X (formerly Twitter).

The Bletchley Declaration proclaimed that “Artificial Intelligence (AI) presents enormous global opportunities: it has the potential to transform and enhance human well-being, peace, and prosperity.” The declaration stresses the importance of a human-centric, trustworthy, and responsible approach to the design and deployment of AI technologies, urging international collaboration to identify and understand AI risks based on scientific evidence.

"Many risks arising from AI are inherently international in nature, and so are best addressed through international cooperation," the declaration noted, highlighting the shared responsibility of nations.

**Individual Actions and Skepticism**

In response to these global efforts, individual governments are also taking steps to regulate AI. Prior to the summit, President Biden issued an executive order aimed at fostering collaboration between government agencies, businesses, and academia to guide the responsible development of AI while safeguarding jobs and consumer privacy. "My administration cannot — and will not — tolerate the use of AI to disadvantage those who are already too often denied equal opportunity and justice," Biden asserted.

However, experts like Milan Kordestani, an AI regulatory specialist, express concerns that current proposals do not sufficiently address the risks involved. He criticized the vague nature of Biden's executive order, which directs federal agencies to implement AI safeguards without detailing specific measures. "The proposed regulations fail to constrain the private sector's actions and overlook how individuals will interact with AI technologies," he argued. He further emphasized the need for regulations that engage the academic community in discussions about AI risks.

Kordestani believes that effective AI regulation should go beyond merely addressing the technology itself. It should encompass the broader implications for society, including shifts in workforce dynamics, distribution networks, and cognitive processes. "Legislators from the late 1980s could not have foreseen the necessity for regulating misinformation on social media. Similarly, AI regulation must evolve dynamically to continuously address emerging risks," he stated.

**Global Challenges and Ethical Concerns**

The challenge of regulating AI is underscored by its global reach. Richard Gardner, CEO of Modulus, compared the regulation of AI to controlling nuclear weapons. "Regulating AI within national borders is complex, as adversarial nations may continue development despite international agreements," he noted. The possibility of rogue developers creating unregulated AI products adds another layer of difficulty.

For issues concerning copyrighted material, Gardner advocates using robots.txt files, the standard Robots Exclusion Protocol files that tell web crawlers which parts of a site they may access, to govern AI training and data usage, ensuring that protected materials are excluded. "We must allow innovators the freedom to pursue research and development," he stressed, cautioning against overreaching regulatory measures that could hinder innovation.
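As a minimal sketch of how such an opt-out works in practice: a site owner can list AI crawlers by user-agent in robots.txt and disallow them. The snippet below uses Python's standard-library `urllib.robotparser` to check a hypothetical policy; `GPTBot` is a real, documented OpenAI crawler user-agent, while the `example.com` URLs and `SomeOtherBot` name are placeholders. Note that compliance is voluntary, which is precisely the enforcement gap Gardner alludes to.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical site policy: block the GPTBot AI-training crawler everywhere,
# while leaving the site open to all other crawlers.
robots_txt = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# A training crawler that honors the protocol would skip this page...
print(parser.can_fetch("GPTBot", "https://example.com/article"))        # False
# ...while other compliant bots may still fetch it.
print(parser.can_fetch("SomeOtherBot", "https://example.com/article"))  # True
```

The design choice here mirrors Gardner's point: robots.txt expresses the publisher's intent in a machine-readable way, but nothing in the protocol prevents a non-compliant or rogue crawler from ignoring it.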

Defining ethical standards for AI is no simple task. Current human rights laws can form the foundation for challenging AI initiatives, but there is an urgent need for precise regulations that translate those rights into actionable guidelines. Rob van der Veer, an expert in AI and application security, pointed out, "Existing privacy regulations set limits on the purposes of AI systems using personal data, but security regulations often overlook the new vulnerabilities that AI engineering introduces."

**Adapting to an Evolving Landscape**

Given the rapid progression of AI technology, regulation must be a continuous process, with ongoing assessments of emerging advancements. Kordestani suggests that this might entail new legislation or international treaties that adapt to innovative training methods and applications. Governments must work collaboratively with businesses to balance the imperative of innovation with the need for safety in AI development.

Furthermore, involving academics in this discourse is crucial for ensuring public safety from moral and ethical perspectives, fostering conversations about equitable access, data ethics, and social bias. The ultimate goal is to protect citizens from potential threats posed by AI in various aspects of daily life.

Kordestani warns that bad faith actors could exploit AI for malicious purposes, underscoring the necessity for robust international regulations. "Establishing a multi-stakeholder approach involving public, private, and academic sectors is essential for addressing current and emerging threats," he asserted.

Carlsson advocates a targeted regulatory approach that focuses on specific use cases rather than blanket rules for all AI technologies. For instance, penalizing the use of deepfakes for fraud can prove more effective than broad mandates requiring watermarks on generative AI outputs.

While regulation will inevitably lag behind the latest AI developments, it is vital for creating a responsive framework that evolves alongside technology, ultimately ensuring that human considerations remain at the forefront of AI innovation and its integration into society.
