On April 30, a group of leading scholars from the United States and China issued a joint warning about the potential existential risks of AI at a public meeting on Capitol Hill in Washington, D.C., calling for stronger international cooperation to address what they described as a shared global challenge. The event, titled “Existential Risks of Artificial Intelligence and International Cooperation,” was convened and chaired by Senator Bernie Sanders. Speakers included Xue Lan, Dean of Tsinghua University’s Institute for AI International Governance (I-AIIG); Max Tegmark, a Professor at the Massachusetts Institute of Technology and Founder of the Future of Life Institute; Zeng Yi, President of the Beijing Institute for AI Safety and Governance; and David Krueger, an Assistant Professor at the University of Montreal.
The following text is a reconstructed speech draft based on Professor Xue Lan’s remarks at the event and is not a verbatim transcript.
Strengthening AI Governance and International Cooperation to Address Global Challenges
Distinguished Senator Sanders, fellow panelists, ladies and gentlemen,
First of all, let me thank you, Senator Sanders, for organizing this important discussion. As someone who has studied science and technology policy for most of my life, I see AI development as a transformative change that humanity must learn how to manage responsibly. I am grateful for this opportunity to learn from all of you and to share some thoughts on governance and international cooperation.
Getting back to your question, I think the international community has certainly been trying, but so far, its efforts have not been sufficient or very effective.
Today, there are various multilateral mechanisms addressing AI governance and safety. We have seen the AI Safety Summit process, which began in the UK in 2023 and continued in Seoul, Paris, and this year in Delhi, with future discussions possibly taking place in Geneva. There are also various United Nations mechanisms, including scientific advisory bodies and multilateral dialogues scheduled for later this year. In addition, many regional and bilateral initiatives are emerging worldwide.
These are all important efforts. However, overall, the current landscape remains fragmented and has not yet achieved the level of coordination or effectiveness that the world needs.
There are several reasons for this.
First, there remains great uncertainty about AI risks. People may not yet grasp the full range of risks ahead, and therefore may not always understand the implications of their own actions and decisions.
Second, there is the so-called “pacing problem.” Technological change in AI is moving much faster than governments can respond. This gap between innovation and governance is a common challenge across countries.
Third, the broader geopolitical environment makes it difficult for major AI powers to collaborate and to design effective mechanisms and guardrails against AI-related risks.
Against this backdrop, China also recognizes the risks associated with AI development and seeks to balance innovation and safety through an agile, adaptive governance approach.
First, on agile governance: because regulations and policies almost always lag behind technological change, governments may need to give up the assumption that regulation can be perfectly accurate and comprehensive from the start. Instead, governance needs to act quickly, even if some gaps remain initially, and then continue to update and improve over time.
Another important point is that governments and companies should move beyond a purely adversarial “cat-and-mouse” relationship and instead work together to identify and address risks.
Governance should also avoid overreliance on punishment; except in cases of clear violations of stakeholder interests, guidance and incentives may yield better outcomes.
On adaptive governance, China did not seek to build a comprehensive governance framework in a single step. Instead, China has adopted a “learning by doing” approach.
The process began with the development of governance principles and general guidance. Later, China gradually established foundational legal infrastructure for AI governance, including laws on personal information protection, data security, and cybersecurity. These laws created the broader framework within which AI systems operate.
As AI technologies have continued to evolve, China has also introduced more targeted regulations in response to specific technological developments. For example, following the emergence of large language models, China introduced the Interim Measures for the Management of Generative AI Services. These regulations continue to be updated as technologies evolve.
At the same time, Chinese companies have also developed voluntary commitments to AI safety practices. Last year, during the World Artificial Intelligence Conference in Shanghai, several Chinese AI companies signed an updated version of those commitments.
By combining these elements, China has gradually developed a multi-layered governance system to address AI-related risks. The system still has weaknesses and areas that need improvement, but it has helped support China’s AI development while maintaining a balance between innovation and safety.
Looking ahead, I believe the real competition in AI is not simply a race between China and the United States. Rather, it is a global effort to determine who can develop AI systems and services that are more capable, safer, more reliable, and more beneficial to society.
In this area, countries share many common interests. Even amid broader geopolitical competition, I believe there is still room to establish “safe zones” for AI safety cooperation. Scientists and researchers from China, the United States, and other countries can work together on issues such as AI safety technologies, mutual recognition of standards, interoperability protocols, and early-warning mechanisms for risk.
At the same time, major AI countries should also work together to support developing countries in strengthening AI capacity-building and narrowing the global AI divide.
Artificial intelligence is a global technology. Its opportunities and risks transcend national borders. Only through sustained dialogue, mutual learning, and international cooperation can we effectively balance innovation and safety for the benefit of humanity.
Thank you very much!
