At the recent 12th Internet Security Conference, Wu Shizhong, an academician of the Chinese Academy of Engineering, delivered a summary of the current state of security in the era of artificial intelligence (AI). The rapid development of the technology has brought significant security challenges that continue to draw attention across sectors. During the conference, several academicians, leaders from the China Internet Association, the China Academy of Information and Communications Technology, and the World Internet Conference, along with representatives of Chinese and foreign enterprises, gathered to discuss the theme of "Building Secure Large Models," sharing perspectives from government, industry, academia, research, and application sectors.
In the past two years, the surge of AI large models following the global success of ChatGPT has fueled growing enthusiasm in China, prompting major internet companies to accelerate their investments in large model applications. Currently, more than 180 generative AI large models have been registered to provide services, with registered users surpassing 564 million. Experts at the conference noted that AI, through disruptive technological innovation, is driving a new wave of technological revolution and empowering many aspects of life, leading to profound changes in technology, the economy, and society. The emergence of large models marks a new phase in AI development, characterized by powerful language generation, natural human-computer interaction, and advanced adaptive learning.
"We have established a relatively comprehensive industrial system for AI technology, with more than 2,500 digital workshops and smart factories. AI has reduced the average research and development cycle by 20% and boosted production efficiency by 35%," shared Wei Liang, deputy director of the China Academy of Information and Communications Technology. He emphasized the growing demand for high-quality computing infrastructure and data sets due to the advancement of large model technology, urging the need to accelerate the foundational development of large models to facilitate industry applications and broader access for small and medium-sized enterprises.
The shift from "model wars" to empowering individual industries has been notable, with an emphasis on exploring concrete application scenarios. "2024 should be the year of practical applications; businesses need to identify 'star scenarios' and design functionalities accordingly for targeted training of large models," suggested Zhou Hongyi, founder of 360 Group. The importance of collaboration was also underscored: foundational large models are essential for enterprises undergoing digital transformation, yet building them demands robust computing power, abundant data, and advanced algorithms. This necessitates close cooperation between large model providers and specialized industries.
On the dual-use nature and potential risks of large models, Wu Shizhong noted that since the advent of technologies like ChatGPT, various warnings have emerged regarding risks, as these technologies possess both beneficial and harmful aspects. He observed that international AI companies are intensifying their investments in safety and regulatory measures, indicating a consensus that increases in large model capabilities must be complemented by advancements in safety research.
Experts also pointed out shortcomings in current AI development. "Generative AI is prone to 'hallucination': it may produce inaccurate information without recognizing its mistakes. Unlike conventional machine errors, which are usually controllable, these inherent errors pose significant challenges that must be addressed in application development," stated Zhang Bo, an academician of the Chinese Academy of Sciences and a professor at Tsinghua University.
Wu Jiangxing, an academician and director of the National Digital Exchange System Engineering Technology Research Center, warned of an imbalance in the responsibilities and risks of AI application systems, citing inherent vulnerabilities in software and hardware design. Addressing the complexities of security research, he noted that many challenges remain, including insufficient transparency of large models, diverse security technologies, ethical dilemmas, and complexities in safety testing and risk assessment.
Participants agreed that collaborative governance and the convergence of efforts across sectors are vital to tackling the challenges of the digital landscape. As internet technology faces continuous offensive and defensive confrontation, new protective strategies suited to complex digital environments are paramount. Traditional security measures must transition toward cloud- and service-based frameworks, integrating technologies such as big data, AI, and the Internet of Things to strengthen security capabilities.
Moreover, Huang Chengqing, vice president of the China Internet Association, emphasized that stagnation in development leads to insecurity, advocating for regulatory frameworks that facilitate sustainable development. He encouraged internet companies to leverage AI to enhance security measures and to deepen cooperation between industry, academia, and research.
Since the EU's Artificial Intelligence Act took effect on August 1 this year, more than a dozen countries have released national strategies and regulatory documents concerning AI. Core concerns include transparency, accountability, privacy, and ethical integrity. Effective governance will require incorporating these principles across all stages of a model's lifecycle, from development, computation, and data handling through deployment and application.
In terms of international collaboration, Ren Xianliang, secretary-general of the World Internet Conference, highlighted the importance of synergy in governance, calling for a collective effort to promote digital security and innovative integration of next-generation information technology. This collaboration aims to foster a productive ecosystem for theoretical exchange, technical cooperation, and talent development, ultimately ensuring the safe and beneficial advancement of AI in society.