The U.K.’s AI Safety Summit convened for its second day, drawing heads of state and prominent figures in artificial intelligence. Running in parallel, the People’s AI Summit—a grassroots counter-event streamed live on YouTube—launched to challenge what its organizers see as the U.K. government’s inadequate response to immediate and pressing AI risks. The event was organized by The Citizens, a nonprofit dedicated to upholding democracy, advocating for data rights, and combating misinformation.
The Citizens claim that Prime Minister Rishi Sunak and his administration are too cozy with major tech companies while ignoring the harmful impacts of AI that are already evident. The organization has actively raised concerns about issues such as algorithmic bias and election disinformation.
Notable figures appeared at the People’s AI Summit, including Alex Winter, the "Bill & Ted" actor and an advocate in the Hollywood writers’ strike; algorithmic bias activist Deb Raji; and Gale Anne Hurd, producer of the "Terminator" films. Speakers focused not on hypothetical doomsday scenarios but on tangible harms already being caused by AI technologies.
To amplify their message, The Citizens ran geotargeted, AI-generated advertisements aimed at Bletchley Park, the venue hosting the AI Safety Summit. The ads take aim at Prime Minister Sunak’s leadership, criticizing his reliance on tech giants such as Twitter (now X) and Meta and asserting that those platforms are shaping the narrative around AI regulation to their own advantage.
Clara Maguire, executive director of The Citizens, emphasized that the risks associated with AI are not futuristic but are already manifest. She remarked, “AI risks aren’t frontier, they’re here, but Prime Minister Sunak is more interested in chatting with Elon Musk and letting (Meta's) Nick Clegg offer future fixes to AI harm when their platforms are rife with them now.” Following the conclusion of the two-day AI Safety Summit, Sunak is expected to engage in a live discussion with Musk on X.
On the question of doomsday scenarios, the skeptics have an unlikely ally in Yann LeCun, Meta’s chief AI scientist, who has been openly critical of the existential framing on display at the U.K. summit. On X, he has argued that the fears articulated by renowned figures such as Yoshua Bengio and Geoffrey Hinton are exaggerated and serve the interests of large tech corporations aiming to establish a monopoly in the AI sector. LeCun advocates open-sourcing AI models and applications as a means to foster innovation and competition.
Andrew Ng, co-founder of Google Brain, has echoed that sentiment, asserting that big tech companies are leveraging fears of AI-driven human extinction to stifle competition from open-source initiatives, and describing those fears as a tool for lobbyists to push legislation that would damage the open-source community.
The dueling summits illustrate the central tension in the AI debate: whether to prioritize speculative existential threats or the harms already playing out today. Whichever view prevails, the pressure for transparent, ethically grounded regulation that addresses AI’s real-world impacts is only growing.