The International AI Cooperation and Governance Forum 2025 opened at the University of Melbourne on Thursday, November 27, bringing together leading scholars, policymakers, industry executives, and representatives from international organizations to discuss the role of global collaboration in shaping a safer, more inclusive AI future.
Co-hosted by Tsinghua University, the University of Melbourne, the National University of Singapore (NUS), and the Hong Kong University of Science and Technology (HKUST), this year’s forum took as its theme “Inclusive AI: Who Builds, Who Benefits?”

[Photo: Group photo of forum participants]
In her opening remarks, Professor Jeannie Paterson, Co-Founding Director of the Centre for Artificial Intelligence and Digital Ethics (CAIDE), said the forum aimed to “move beyond narratives of competition toward a model of shared responsibility,” noting that meaningful AI governance requires cooperation across borders and sectors.
Speaking at the opening ceremony, Jean Todt, UN Secretary-General’s Special Envoy for Road Safety, cautioned that road accidents remain the leading cause of death among young people worldwide.
“AI can be a game changer for road safety—from real-time detection of dangerous driving to smarter traffic systems,” he said. “But the real test is whether these life-saving technologies become accessible to all. Innovation must go hand in hand with strong public–private partnerships and global regulatory frameworks.”
Professor Yang Bin, Vice Chairperson of the Tsinghua University Council, highlighted China’s commitment to responsible AI development. He noted that Tsinghua considers AI a strategic priority and is advancing research, education, and capacity-building in governance. “AI should not simply be a multiplier—it belongs in the exponent,” Yang remarked. “We are ready to work with our global partners to ensure AI empowers our collective future and contributes to a shared community for humanity.”
Professor Tshilidzi Marwala, Rector of the United Nations University (UNU) and UN Under-Secretary-General, emphasized that inclusive AI “will not emerge automatically.” “It requires humility and collective action,” he noted. “AI is not merely a technological question but one of ethics and society. That is why cross-disciplinary and cross-national dialogue—exemplified by this forum—is so critical.”
The plenary session, chaired by Professor Xue Lan, Dean of Schwarzman College at Tsinghua University, featured leading voices, including Stuart Russell (UC Berkeley), Gong Ke (Haihe Laboratory of Information Technology Application Innovation), Jeannie Paterson (University of Melbourne), Simon Chesterman (NUS), and Celine Yunya Song (HKUST).
Stuart Russell warned that current AI models are trained to operate as opaque black boxes and are “increasingly unpredictable and difficult to verify.” He outlined three policy pathways: product-based regulation, oversight of high-risk systems akin to that applied in pharmaceuticals and nuclear power, and a fundamental redesign of AI architectures to enable verifiable alignment. “If we continue deploying systems we don’t fully understand, we will face serious accidents—perhaps on the scale of Chernobyl,” Russell said.
Gong Ke provided an overview of China’s August 2025 national strategy, the “AI+ Action Plan,” calling it a “milestone document positioning AI as a core driver of economic and social development.” The plan establishes targets for the widespread adoption of intelligent applications by 2027 and the full integration of AI into high-quality development by 2030.
Paterson, speaking from a social sciences perspective, urged attention to the “real and present risks” of AI misuse. “AI is now incredibly easy to deploy. We must understand its social impacts—from youth mental health to labour disruption. Regulation does not hinder innovation; it makes societies safer.”
Simon Chesterman addressed the paradoxes of global AI governance: “Developers warn of existential risks while continuing to push the boundaries. At the same time, companies publicly welcome regulation yet often resist concrete proposals.” Effective governance, he argued, must act like seat-belt laws—“essential safety infrastructure that enables high-speed progress.”
Celine Yunya Song presented research on cross-jurisdictional “value audits” and an integrated governance prototype for AI-generated content (AIGC). “The challenge is not that AI is too intelligent for us to deploy,” she said, “but that it becomes deeply embedded in our institutions and cultural systems. Governance must therefore evolve as a dynamic, adaptive capability.”
During the forum’s roundtable discussions, experts examined emerging global cooperation models, the risk of an “AI plateau,” and how to ensure AI benefits are distributed fairly.
Participants raised concerns about inequality in digital access, governance gaps across jurisdictions, and the cultural and psychological dimensions of AI use. Speakers also emphasized the need to strengthen regulatory capacity, expand global participation, and close emerging “intelligence divides.”
The two-day forum featured one plenary session and seven thematic panels. Nearly 100 experts from China, Australia, Singapore, the United States, the United Kingdom, and New Zealand participated in the event, alongside representatives from the UN, leading universities, and major technology companies, including Microsoft, Google, and China Telecom.
The forum’s remaining sessions will continue the discussion on frameworks for responsible AI deployment, cross-border governance cooperation, and the foundations of an internationally trusted AI safety baseline.