US, UK, and EU Endorse Council of Europe’s High-Level AI Safety Treaty for Enhanced Global Standards

We're still in the early stages of understanding how AI regulations will be implemented, but a significant step forward occurred today when several countries, including the U.S., the U.K., and the European Union, agreed to a treaty on AI safety established by the Council of Europe (COE), an organization dedicated to international standards and human rights.

The treaty, known formally as the Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy, and the Rule of Law, has been heralded by the COE as “the first-ever international legally binding treaty designed to ensure that the use of AI systems aligns with human rights, democracy, and the rule of law.”

At a meeting in Vilnius, Lithuania, the treaty was officially opened for signature. In addition to the key markets mentioned, other signatories include Andorra, Georgia, Iceland, Norway, Moldova, San Marino, and Israel. This diverse list encompasses several nations where some of the world’s largest AI companies operate. Notably absent, however, are countries in Asia and the Middle East, as well as Russia, none of which have signed so far.

The treaty focuses on three critical areas: upholding human rights, particularly protecting against data misuse and discrimination while ensuring privacy; safeguarding democracy; and maintaining the rule of law. The rule-of-law commitment requires signatory countries to establish regulatory bodies to protect against “AI risks”; the treaty does not spell out what those risks are, instead tying them back to the other two focus areas.

The COE outlines the treaty's ambitious goals: “The treaty provides a legal framework covering the entire lifecycle of AI systems. It promotes AI progress and innovation, while managing the risks it may pose to human rights, democracy, and the rule of law. To stand the test of time, it is technology-neutral.”

To provide context, the COE is not a lawmaking body; it was established after World War II to uphold human rights, democracy, and legal systems throughout Europe. It drafts treaties that legally bind their signatories and enforces compliance; the European Court of Human Rights is one of its best-known arms.

Regulating artificial intelligence has become a hot topic in the tech world, involving a complex web of stakeholders. Various agencies responsible for antitrust, data protection, finance, and communications are proactively attempting to establish frameworks to navigate AI effectively. The urgency stems from a recognition that while AI is transforming global operations, unregulated change could yield unforeseen consequences, necessitating careful oversight.

However, there is also apprehension among regulators about stifling innovation by acting too hastily or broadly. Early on, AI companies expressed interest in participating in discussions around AI Safety, illustrating a mix of motives—some view it as regulatory capture, while others believe collaboration can lead to more informed policies.

Politicians play a significant role in this landscape, sometimes backing regulatory efforts, while at other times prioritizing business interests to bolster their economies. (The previous U.K. government, for example, leaned towards AI-driven economic growth.)

This complex mix has led to various frameworks and statements, such as those emerging from the U.K.’s AI Safety Summit in 2023, the G7’s Hiroshima AI Process, and a recent UN resolution. Additionally, individual countries are establishing AI safety institutes and regional regulations like California's SB 1047 and the E.U.’s AI Act.

The COE’s treaty aims to unify these diverse efforts. The U.K. Ministry of Justice highlighted the treaty's potential to ensure that countries monitor AI’s development within strict boundaries. “Once ratified in the U.K., existing laws will be enhanced,” they noted.

COE Secretary General Marija Pejčinović Burić emphasized, “We must ensure that the rise of AI upholds our standards, rather than undermining them. The Framework Convention is designed to guarantee this. It is a robust and balanced document, benefiting from a collaborative drafting process that included various expert perspectives.”

"The Framework Convention represents an open treaty with potential global implications. I hope this marks the start of many signatures, leading to rapid ratifications so the treaty can take effect as soon as possible," she added.

The framework was originally negotiated in May 2024, but it will formally come into effect only three months after ratification by at least five signatories, including at least three Council of Europe member states.

In essence, countries that signed today still need to ratify the treaty, after which there will be an additional three-month wait before it takes effect. How long that process will take remains uncertain. The U.K., for instance, has said it intends to pursue AI legislation, but has not yet committed to details of a draft bill, promising only to provide updates on implementing the COE framework “in due course.”
