Post-Davos 2024: Transforming AI Hype into Tangible Reality

AI emerged as a prominent topic at Davos 2024, with over two dozen sessions focused on its implications, ranging from education to regulation, as highlighted by Fortune.

Attendees included a notable lineup of AI leaders, such as OpenAI CEO Sam Altman, Inflection AI CEO Mustafa Suleyman, AI pioneer Andrew Ng, Meta’s chief AI scientist Yann LeCun, and Cohere CEO Aidan Gomez.

The discussion at Davos underwent a significant shift from the enthusiastic speculation seen in 2023. As Chris Padilla, IBM’s VP of government and regulatory affairs, noted to The Washington Post, “Last year, the atmosphere was one of wonder; now it’s about assessing risks and ensuring AI's trustworthiness.”

Key concerns at this year's event included the potential for rampant misinformation, job displacement, and a growing economic divide between affluent and impoverished nations. The most pressing issue appeared to be the rise of misinformation facilitated by deepfake technology—manipulated photos, videos, and audio that can distort reality and undermine public trust. A recent incident before the New Hampshire primary, involving robocalls mimicking President Joe Biden’s voice, exemplified this threat.

As Carnegie Mellon University professor Kathleen Carley stated, “This is just the tip of the iceberg in terms of potential voter suppression or attacks on election integrity.” Enterprise AI consultant Reuven Cohen also warned that advancements in AI could lead to an influx of deepfake content during the 2024 elections.

Despite ongoing research, a reliable method for detecting deepfakes remains elusive. As reported by Jeremy Kahn in Fortune, “We need a solution quickly; distrust is detrimental to democracy.”

This change in focus from optimism to caution prompted Suleyman to advocate for a “cold war strategy” to address AI-related threats. He emphasized that as AI technologies become more accessible, they could be misused by hostile entities, leading to chaos that outpaces verification efforts.

Concerns about AI have persisted for decades, famously dramatized in the 1968 film “2001: A Space Odyssey.” Even consumer technologies have raised alarms: the NSA banned the Furby toy over fears it could function as a surveillance device.

The conversation has intensified with predictions that Artificial General Intelligence (AGI) could be achieved soon. AGI, broadly defined as AI that matches or surpasses human intelligence across a wide range of tasks, remains a contentious topic: figures like Altman and Gomez are optimistic about its imminent arrival, while others, including LeCun, urge caution, arguing that significant breakthroughs are still required.

Public sentiment toward AI remains mixed. The 2024 Edelman Trust Barometer found that 35% of respondents reject AI while 30% accept it. Awareness of AI's potential benefits is tempered by concern over its risks, and people are more inclined to embrace AI advances when guided by experts and assured of control over the technology's implications.

Ultimately, the path forward involves not just rapid responses to AI developments but also a balanced view of long-term consequences. As Roy Amara famously stated, “We tend to overestimate the effect of a technology in the short run and underestimate it in the long run.”

Despite ongoing exploration and trials, widespread success in AI is not guaranteed. Rumman Chowdhury, CEO of the AI-testing nonprofit Humane Intelligence, predicted a looming “trough of disillusionment” in 2024, where expectations may not align with reality.

This year may prove critical in assessing AI's transformative potential. As organizations explore generative AI for personal and professional use, it is crucial to connect technological excitement with tangible value. Accenture CEO Julie Sweet pointed to the workshops now being offered to C-suite leaders as essential steps toward realizing that potential.

As we navigate the complex landscape of AI, prudent decision-making and innovative thinking will be key to ensuring that AI amplifies human potential while upholding our shared values and integrity. The responsibility lies with us to shape a future where AI enhances human experience rather than dictates it.

Gary Grossman is EVP of Technology Practice at Edelman and global lead of the Edelman AI Center of Excellence.
