The buzz around ChatGPT remains strong, recently reignited by the Zhiyuan Conference, which brought together prominent figures in the AI industry. Experts including Zhang Bo and Zhang Hongjiang, Turing Award winners Geoffrey Hinton, Yann LeCun, Yao Qizhi, and Joseph Sifakis, and OpenAI founder Sam Altman gathered to discuss the future of AI. While the conference is respected in academic circles for its professional discourse, it can appear elitist to the general public.
During his address, Sam Altman emphasized that OpenAI must lead transformative research that charts the path toward general-purpose AI. This assertion faced pushback from notable AI figures. Stuart Russell, a professor at UC Berkeley, criticized ChatGPT and GPT-4, arguing that they do not genuinely "answer" questions because they lack real understanding. Similarly, Yann LeCun pointed out that current autoregressive models like GPT lack planning and reasoning abilities, raising doubts about their long-term relevance.
Beyond spirited discussions, the conference also tackled the regulation of AI and its future trajectory. In 2023, the rapid advancement of generative AI has heightened societal concerns, particularly in China, where "AI scams" have drawn significant public attention. One case reported by police involved Mr. Guo from Fuzhou, who was defrauded of 4.3 million yuan in ten minutes through AI deepfake technology. In another incident in Changzhou, Jiangsu, a victim was tricked into borrowing 6,000 yuan after being convinced they were speaking to a college classmate on a video call.
The rise of AI scams underscores how quickly the barriers to synthetic media are falling as the technology advances. If this trend continues, fraud may evolve from simple face synthesis to fully realistic 3D impersonations.
In the United States, concerns about AI's impact on elections are escalating. Advanced generative AI tools can clone voices and likenesses, enabling disinformation at scale. Coupled with social media's powerful recommendation algorithms, this technology poses unprecedented risks to electoral integrity. With the 2024 presidential election approaching, both political parties are expected to leverage AI for campaigning and fundraising, potentially using ChatGPT to draft speeches.
In light of these concerns, over 350 AI executives and researchers, including Hinton and Anthropic's CEO Dario Amodei, have signed a joint statement advocating that mitigating existential risks from AI be treated as a global priority alongside pandemics and nuclear threats.
Amid discussions on AI regulation, Altman revealed that OpenAI is exploring strategies for responsible AI governance. A May 2023 initiative offering $1 million in grants for effective governance proposals exemplifies this approach. Altman noted the difficulty of identifying malicious models and emphasized investment in complementary areas to foster breakthroughs. He highlighted the need for scalable oversight, in which AI systems are used to help identify flaws in other AI systems. While advances in AI explainability remain crucial, OpenAI is optimistic about improving this aspect of the technology.
Altman stressed that future developments must focus on creating more intelligent models that provide significant benefits while advancing general AI goals and minimizing risks. Although OpenAI does not plan to release GPT-5 imminently, it anticipates that more powerful AI systems will emerge globally over the next decade, necessitating proactive measures. Training large models remains central to OpenAI's mission, alongside plans for a global database that reflects shared values and preferences in AI.
Furthermore, Altman called for global cooperation on AI regulation, recognizing China's exceptional AI talent and the need for collaborative efforts to tackle alignment challenges. Max Tegmark supported this view, observing that China currently leads in AI regulation, with Europe close behind, while the United States lags.
Altman acknowledged the complexities of fostering international collaboration in AI oversight, viewing it as a unique opportunity to establish systematic frameworks and safety standards. However, rising geopolitical tensions and contrasting government attitudes toward generative AI could hamper international regulatory cooperation, impacting the market dynamics for AI companies.
Europe has taken proactive measures in AI regulation, with the EU nearing the passage of comprehensive legislation that could set a benchmark for AI governance in advanced economies. EU Commission President Ursula von der Leyen has asserted a commitment to ensuring AI systems are accurate, reliable, safe, and non-discriminatory, irrespective of their origin. Such legislation could significantly affect OpenAI’s future operations within the European market.
As the industry navigates regulatory and ethical challenges, continuous adaptation of AI models to align with evolving global policies will be essential for OpenAI and the broader sector.