On February 11, 2025, an official side event of the Paris AI Action Summit was held. Hosted by the China AI Development and Safety Association and organized by the Shanghai Qi Zhi Institute together with the Institute for AI International Governance at Tsinghua University, the event received strong support from multiple network member institutions. Centered on the theme "Advancements in AI Technology and Its Applications," the conference brought together leading experts and scholars from the global AI community. It aimed to foster discussion on leveraging international cooperation platforms to bridge the AI divide, ensure that all countries can benefit from the rapid development of AI, and advance an open, inclusive, and mutually beneficial global governance framework for AI. Together, participants sought to embrace the opportunities and challenges of future development.
The one-day side event on "Advancements in AI Technology and Its Applications" brought together more than 60 AI experts and scholars from around the world. On-site registrations surpassed 150 attendees, and media outlets including CGTN, Nouvelles d'Europe, IFENG.COM, weibo.com, and SOHO.com provided live broadcasts that drew more than 140,000 online viewers. Senior experts and scholars from institutions such as Tsinghua University, Peking University, UC Berkeley, the University of Oxford, the University of Cambridge, and the Massachusetts Institute of Technology, as well as representatives from research organizations and think tanks such as the Chinese Academy of Sciences, the China Academy of Information and Communications Technology, the China Electronics and Information Industry Development Research Institute, the Beijing Academy of Artificial Intelligence, the Shanghai Artificial Intelligence Laboratory, the Shanghai Qi Zhi Institute, the Carnegie Endowment for International Peace, the Brookings Institution, the Future of Life Institute, the Centre for International Governance Innovation, and the Frontier Model Forum, engaged in in-depth discussions on advancing international collaboration in AI governance.
Distinguished attendees included Professor Andrew Chi-Chih Yao, Turing Award laureate and Dean of both the Institute for Interdisciplinary Information Sciences and the Institute for AI at Tsinghua University; Professor Xue Lan, Distinguished Professor and Dean of the Institute for AI International Governance at Tsinghua University; Professor Shen Weixing, Professor of Law and Director of the Institute of AI and Law at Tsinghua University; Professor Tang Jie of the Department of Computer Science at Tsinghua University; Hu Guodong, Director of the Key Laboratory of AI Scenario Application and AI System Evaluation at the Ministry of Industry and Information Technology of China; Wei Liang, Vice President of the China Academy of Information and Communications Technology and Vice Chair of ITU-T SG17; Professor Max Tegmark of the Massachusetts Institute of Technology, President of the Future of Life Institute; and Duncan Cass-Beggs, Executive Director of the Global AI Risks Initiative at the Centre for International Governance Innovation.
Xue Lan chaired the meeting. In his introduction, he explained that, to implement the Global Artificial Intelligence Governance Initiative released by China in 2023, the "China AI Development and Safety Association" brings together China's strengths and intellectual resources in the field of AI development and safety. The organization is China's counterpart to the Artificial Intelligence Safety Institute (AISI), representing the Chinese side in dialogues and collaboration with AI safety institutes around the world. Against the backdrop of the 2024 France-China Joint Statement on Artificial Intelligence and Global Governance, the conference focused on enhancing understanding of China's approaches to AI development and safety governance while seeking insights into other countries' progress in this field, thereby contributing to global cooperation in AI development and governance.
Andrew Chi-Chih Yao observed that AI technology is developing at an astonishing pace; DeepSeek, for example, has recently led innovation in the low-cost development and use of large models. AI not only opens up new possibilities for innovation but also brings many risks and challenges, and in an interconnected era, strengthening international cooperation is of crucial importance. A sound foundation for global cooperation on AI safety already exists, and significant progress has been achieved: in China especially, related research has been flourishing, research teams have been steadily expanding, and a number of international cooperation infrastructures and network platforms are taking shape. The international scientific community has also reached a consensus. Finally, Yao emphasized that scientific innovation is created by humans and should serve a better future for humanity, and that we should work together to create a favorable environment for AI development so that AI can enhance human well-being.
Shen Weixing outlined China's systematic development of a multi-layered AI legislative framework, emphasizing three strategic dimensions. From a macro perspective, China's AI legislation seeks to balance innovation and regulation, ensuring legal clarity, fostering industry growth, and aligning with existing laws. Substantively, China's AI legislation needs to clarify the legal definition of AI, encompass the fundamental principles of AI law, and establish self-regulation mechanisms, risk management systems, transparency measures, and AI damage relief mechanisms aligned with AI risk classification. Guided by the Global Artificial Intelligence Governance Initiative, China will strengthen international cooperation to jointly advance the formulation of fair, inclusive, and innovation-friendly global AI governance rules. This collaborative effort aims to enhance the security and reliability of AI systems while promoting technological innovation and inclusive development through coordinated global governance frameworks.
Tang Jie proposed that the ultimate goal of large model research is to teach machines to think like humans (i.e., AGI), a goal that may be very long-term or could be achieved relatively soon. On the definition of AGI, he offered a five-tier taxonomy, from low to high: pretrained large language models, alignment and reasoning, self-learning, self-perception, and consciousness. He believes the field currently sits at the intersection of the second and third levels. On AGI trends in 2025, he suggested that autonomous, agent-capable large language models will become central to daily life and work; collaboration between humans and agents will be a transformative force, with the most groundbreaking changes likely to occur in scientific research. On AGI safety, he emphasized the need for comprehensive safety strategies at the model, individual, and national levels. Finally, he concluded that the path to AGI is still long and requires joint efforts to achieve steady development that is safe, controllable, and free of hallucinations.
Hu Guodong argued that, as AI transitions from large language models to large world models and evolves from AI to agentic intelligence, it should be developed for the benefit of humanity and for good. Introducing applications of AI systems in China's agriculture, business and industry, consumer services, and healthcare, he noted that AI has brought hope to the fields, carried business and industry into a digital and intelligent world, created new scenarios for consumers, and enabled doctors to cure more patients. On the premise that all people are born equal, AI should serve the people of all nations, by the people and for the people.
Wei Liang pointed out that China has fully leveraged its industrial strengths to promote innovation in AI technologies and their deep integration with the real economy, making useful explorations in AI safety governance standardization and industrial practice. China has released the "New Generation Artificial Intelligence Development Plan" (2017), the "National Artificial Intelligence Industry Comprehensive Standardization System Construction Guide" (2024), and the "Artificial Intelligence Security Standard System 1.0" (Draft for Comments), and has actively contributed to international standardization organizations such as ISO, ITU, and IEEE. The industrial sector has also responded proactively, taking the initiative in AI safety governance, including the release of the "Artificial Intelligence Security Commitment" and the "Artificial Intelligence Risk Governance Report" and the launch of the AI Safety Benchmark for large models.
Duncan Cass-Beggs emphasized the crucial role the United Nations can play in advancing AI development and leveraging AI to promote sustainable development. However, because the UN's capacity to address such challenges remains limited, adopting distinct international governance mechanisms tailored to specific scenarios and issues would be a more rational approach. He noted that extreme risks, such as the potential loss of control over frontier AI, depend more on coordination among the major countries that possess cutting-edge technologies, whereas more general AI-related risks, such as deepfakes and misinformation, rely predominantly on domestic governance. Governments should use diplomacy to jointly design international cooperation mechanisms, as intergovernmental collaboration can have a greater impact on citizens' well-being. Establishing a global network of AI safety institutes will be key to ensuring the safe governance of AI, serving as a foundation for a broader global AI governance framework.
Max Tegmark raised concerns about the safety and control mechanisms of advanced AI technologies. He argued that while global powers and major tech corporations are actively pursuing Artificial General Intelligence (AGI), insufficient attention has been paid to research on control mechanisms; establishing robust safeguards for advanced AI is imperative to prevent extreme, existential risks from uncontrolled systems. To address this, Tegmark advocated stronger collaboration between enterprises and governments. Drawing on experience from pharmaceutical development and nuclear facility management, he proposed that powerful AI systems undergo rigorous safety evaluations and implement control mechanisms prior to deployment to ensure their safe operation. He also called for global collaborative discussions to align AI development with human interests and ethical values, ensuring that technological progress remains a force for collective benefit rather than unintended harm.
In the Q&A session, the audience raised questions on how to strengthen global coordination and cooperation in AI governance, how to ensure the participation of developing countries, and how to address differences in development and perception among countries. In response, Andrew Chi-Chih Yao proposed establishing a dialogue mechanism between developing countries and developed countries as well as countries with leading AI capabilities, and sending students and scholars from developing countries to technologically advanced regions to learn through academic exchange. He also called on the media to convey more of the voices of developing countries. Duncan Cass-Beggs suggested that AI development should shift from competition to cooperation and achieve a balance between safety and innovation through international collaboration. Shen Weixing noted that China has consistently attended to the key issue of balancing security and development in drafting relevant laws; in his view, a focus on security creates an environment conducive to innovation and development, rather than serving as an excuse to sacrifice it. Hu Guodong recalled that "Together" was added to the Olympic motto in 2021 and argued that global AI governance should follow the same spirit: while pursuing faster and stronger computing power, safety governance goals should be achieved through more united cooperation. Max Tegmark likewise urged people to recapture the Olympic spirit and rethink the future relationship between humans and machines: AI development should not be measured by "wins and losses" between countries, and technology should transcend national borders.
With the support of the Chinese government, major Chinese AI research institutions have pooled the country's technological capabilities and intellectual capital in AI safety and development research to establish the "China AI Safety and Development Association". The organization is China's counterpart to the Artificial Intelligence Safety Institute (AISI), representing the Chinese side in dialogues and collaboration with AI safety institutes around the world. Members of the association include Tsinghua University, Peking University, the Chinese Academy of Sciences, the China Academy of Information and Communications Technology, the China Electronics and Information Industry Development Research Institute, the Beijing Academy of Artificial Intelligence, the Shanghai Artificial Intelligence Laboratory, and the Shanghai Qi Zhi Institute, among others.
