
Newsletter January 2021

January 7, 2021

AI International Governance Newsletter

The Institute for AI International Governance of Tsinghua University (I-AIIG)


Frontier View


Hosted by the Institute for AI International Governance of Tsinghua University (I-AIIG), with the United Nations Development Programme (UNDP) as international supporting organization, the inaugural Tsinghua University International AI Cooperation and Governance Forum 2020 was successfully held from December 18th to 19th in Beijing, China. Themed “International AI Cooperation and Governance in the Post-COVID World”, the forum focused on both the opportunities and challenges posed by AI technologies. The representatives shared their understanding of AI cooperation and governance, exchanged wisdom on international governance, and proposed academic agendas for future AI governance research.


Wang Qinmin, Vice Chairman of the 12th National Committee of the Chinese People’s Political Consultative Conference (CPPCC) and former Chairperson of the All-China Federation of Industry and Commerce (ACFIC), gave the opening remarks at the forum. He noted that, in the development and application of AI technologies, beyond addressing the challenges surrounding data quality, algorithms and computing power, we need to pay attention to prominent AI-associated problems that call for prompt solutions, such as the impacts of AI applications on public security, ethics, employment, market competition and privacy protection, as well as governance issues in the development of digital governments and smart cities. As a world-renowned, high-level research university, Tsinghua University should leverage its global academic and policy clout in the field of AI governance, play its deserved role in exploring the academic frontier and serving the major needs of China, and constantly enhance its capacity to innovate, thus actively offering advice on national development and contributing wisdom to human civilization.


Qiu Yong, President of Tsinghua University, held that universities, where AI ideas originate, where new trails are blazed for social progress, and where the future of mankind is nurtured, need to open up new prospects and make new accomplishments in AI cooperation and governance: they should lead the development of AI, constantly innovate AI theories and technologies, and vigorously advance global AI cooperation. In June 2020, Tsinghua University set up the I-AIIG. Dedicated to research on major theoretical issues and policy demands concerning international AI governance, the I-AIIG strives to construct a fair and reasonable international AI governance system and to provide intellectual support for coping with major global challenges.


Beate Trankmann, Resident Representative at UNDP China, started her welcoming remarks by emphasizing the importance of international cooperation and multilateralism in the pandemic response; similarly, the discussions on AI need to be global. She pointed out that the effectiveness of AI systems relies on data, which can lead to data abuse and infringement of individual privacy, a problem that has only been exacerbated by the exponential proliferation of data in the digital age. We must be cautious to ensure that AI supports human development rather than widening inequalities and creating new challenges, which would undermine global progress towards achieving the Sustainable Development Goals. She urged investments in both digital infrastructure and education to ensure that people can benefit from technological innovations and have the enhanced capabilities necessary to engage in the new world of work being shaped by AI and automation.


Fabrizio Hochschild, UN Under-Secretary-General and Special Adviser to the UN Secretary-General on digital cooperation, believes that the onset of the COVID-19 pandemic has only accelerated and accentuated the use of AI, with the most pronounced proliferation likely in the healthcare sector. Despite this massive potential, he laid out several challenges, including our increasing reliance on AI’s predictive capabilities, which makes us vulnerable to systemic errors, cyber-attacks, lethal autonomous weapons, and the widening digital divide. Of all the emerging technologies, artificial intelligence stands alone as the one with the greatest potential to empower, but also to disrupt. He urged international collaboration to ensure that AI is used in a transparent and trustworthy manner that upholds human rights and human dignity, promotes our safety and security, and fosters inclusive peace.


Zhao Houlin, Secretary-General of the International Telecommunication Union (ITU), speaking via video link, said that as the international governance environment and framework building are critical to the making of AI market rules and social and ethical norms, all parties concerned should be mobilized to actively engage in and push forward this cause. He pledged that the ITU will work together with Tsinghua University to advance technological innovation and collaborative governance of AI globally, with the aim of delivering greater benefits from AI for mankind.


Xu Jing, Director-General of the Department of Strategy and Planning at the Ministry of Science and Technology (MOST), gave a keynote speech on the occasion. He noted that the Chinese government attaches great importance to AI governance issues. In handling AI governance, we need to stay committed to scientific and technological advances, work towards the goal of improving people’s wellbeing, hold the bottom line of ensuring security and controllability, and uphold the philosophy of global opening-up and co-governance. He hoped that Tsinghua University will, on the basis of the I-AIIG, conduct thorough research and advance exchanges and dialogues, so as to pool global wisdom and build a consensus on cooperation.


Wang Xiaolong, Director-General of the Department of International Economic Affairs at the Ministry of Foreign Affairs (MOFA), said in his keynote speech that effective AI governance entails the involvement and concerted efforts of governments, the scientific research community, industries, think tanks and other stakeholders. Since its founding, the I-AIIG has built bridges for international cooperation and thus played an important role in consolidating AI cooperation between China and other countries. In the future, when working on AI governance, we need to set inclusive and fair rules, uphold the principle of extensive consultation, joint contribution and shared benefits, respect the governance sovereignty and legislation of different countries, and create space for the iteration of AI technologies.


Andrew Chi-Chih Yao, Turing Award winner and Dean of the Tsinghua Institute for Interdisciplinary Information Sciences (IIIS), used Secure Multi-Party Computation (MPC) as an example to show that encryption can play a bigger part in data governance, and recommended combining a series of credible algorithms with relevant real-world institutions to jointly lay the foundation of credible data governance.
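The core idea behind MPC can be illustrated with a minimal additive secret-sharing sketch (this example is not drawn from Yao's talk; the function names and the toy salary figures are illustrative assumptions). Each party splits its private value into random shares so that no single share reveals the value, yet the shares of all parties can be combined to compute an aggregate, here a sum, without any party disclosing its input:

```python
import random

PRIME = 2**61 - 1  # large prime modulus for the share arithmetic


def share(secret, n_parties):
    """Split a secret into n additive shares that sum to it mod PRIME."""
    shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares


def secure_sum(secrets):
    """Compute the sum of private values via additive secret sharing."""
    n = len(secrets)
    all_shares = [share(s, n) for s in secrets]  # one row of shares per party
    # Party j collects the j-th share from every party and sums them locally;
    # each partial sum looks random on its own.
    partial_sums = [sum(row[j] for row in all_shares) % PRIME for j in range(n)]
    # Publishing only the partial sums reveals nothing beyond the total.
    return sum(partial_sums) % PRIME


salaries = [120, 250, 90]  # toy private inputs
print(secure_sum(salaries))  # equals sum(salaries) = 460
```

Real MPC protocols add secure channels between parties and protections against malicious participants; this sketch only shows the honest-but-curious intuition of how credible algorithms can enforce data governance by design.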


Zhang Bo, Academician of the Chinese Academy of Sciences (CAS) and Honorary Director of the Tsinghua Institute for Artificial Intelligence (IAI), started by talking about the current vulnerabilities of AI. Besides introducing the latest work of the IAI on third-generation AI, he argued that AI development and governance should grow in tandem and reinforce each other, and called on countries to unite in developing secure, reliable, credible and extensible third-generation AI technologies.


Fu Ying, Honorary Dean of I-AIIG and former Vice Foreign Minister of China, called for establishing an inclusive international governance committee for AI, making concerted efforts to study, discuss and follow the sound recommendations and advice offered by the different parties concerned, and putting in place a common set of international norms. She said, “China has taken practical steps in the governance and legislation of artificial intelligence. Of course, from a broader perspective, this is a challenge for all of humanity, not an issue that one or two countries can solve alone. It is vital that international agencies and countries work together on this topic, and we believe that artificial intelligence should ultimately benefit all of humanity.”


Xue Lan, Dean of I-AIIG, believed that the biggest challenge for AI development at present is that AI technologies are growing too fast, while governance principles, as a social system, develop in a slow and gradual manner. Therefore, we need to resort to “agile governance”. “In handling AI, we can learn from the existing international regimes, and constantly intensify international cooperation on AI in the future. What’s more, we need to identify commonalities among all stakeholders and engage in continuous discussion and exchanges to find universally applicable principles and rules for AI, thus ensuring that AI develops soundly and benefits mankind,” he said.

Zhang Yaqin, Dean of the Institute for AI Industry Research at Tsinghua University (AIR), highlighted three dimensions of AI governance. The first is to develop responsible AI technologies through technological innovation, including data analysis models, prediction-based epidemic prevention, and applications that help improve health. The second is to enhance international cooperation and carry out in-depth dialogues and exchanges at the international level, in a bid to cope with the most urgent issues. The third is to promote the use of AI at a large scale while ensuring the security of the relevant technologies.


Five parallel sessions were held concurrently during the forum, themed “Role of AI in combating Covid-19: what are the lessons?”, “AI governance for Sustainable Development Goals”, “AI and International Security: Challenges and Opportunities”, “International cooperation on AI governance” and “Security and Safety of Data and AI”.


The thematic session 1 - “Role of AI in combating Covid-19: what are the lessons?”, organized by I-AIIG, was successfully held on 18th December. Yu Yang, Director for International Academic Exchange at I-AIIG and Assistant Professor of the Institute for Interdisciplinary Information Sciences at Tsinghua University, believes that the information and data technologies used for epidemic governance in China, as represented by AI, feature two characteristics: comprehensiveness and rapidity. China’s epidemic governance is achieved through an integrated system of co-governance by an AI-adaptive government and proactive, capable enterprises. The comprehensive and rapid penetration of AI technologies into China’s epidemic governance has relied on three key factors: proactive corporate engagement, an algorithmically-thinking government, and a significant presence of pivotal government and corporate units.


Effy Vayena, co-chair of the WHO’s expert advisory group on artificial intelligence health ethics and governance and a Professor of Big Data and Artificial Intelligence Ethics at the Swiss Federal Institute of Technology (ETH Zurich), noted the urgency of digital transformation and governance, and called for clear collaboration and coordination across countries, given the different security systems and varying levels of digitization in individual states. As AI technologies advance, however, AI ethics also requires our attention. For example, European governments have sought to protect people from the pandemic without violating the rule of law, democracy and fundamental rights.


Shi Jun, President of the Asia Pacific Business Group and VP for Strategic Planning at SenseTime, believes that AI will continue to emerge as the center of gravity in post-pandemic global competition. Given the accelerated development of AI technologies and data governance seen across countries, there is a need to further refine regulation and governance rules.


The thematic session 2 - “AI governance for Sustainable Development Goals”, organized by UNDP, was successfully held on 18th December. Thomas Davin, the Director of Innovation at UNICEF, believes that there is much AI can do, if directed in the right way. He laid out two main challenges that AI brings. The first is the capacity gap: there is not yet enough expertise in the world to unleash AI’s potential, and AI talent tends to concentrate in developed countries. The second is the technology gap: half of the world is not digitally connected today. To tackle these challenges, Thomas brought up two potential solutions. The first is to democratize the use of AI and invest in building expertise and AI tools in low- and middle-income countries. The second is to establish a governance system that allows external third parties to understand how the algorithms function.


Max Tegmark, MIT Professor and co-founder of the Future of Life Institute, elaborated on two necessary prerequisites for successful international AI cooperation: a shared positive vision and clear red lines. He believes that AI is not a zero-sum game. Just as a shared positive vision allowed us to eradicate smallpox even during the geopolitical tensions of the 1980s, the SDGs are another shared positive vision we need to strive for. A study by Prof. Tegmark and collaborators found that AI may help achieve 79.3%, 92.6%, and 63.3% of the societal, environmental, and economic SDG targets respectively, while it may hinder 37.8%, 29.5%, and 31.7% of them. AI scientists can therefore learn from biologists and chemists to set clear red lines for AI applications and steer towards globally beneficial AI.


Zeng Yi, co-director of the China-UK Research Center for AI Ethics and Governance at the Institute of Automation, Chinese Academy of Sciences, started by sharing that, according to an analysis of the research cases collected by the AI4SDGs Think Tank, only 0.1% of all AI-related publications contribute directly to the SDGs. Of those, most cases are associated with health and education, while few focus on environment-related SDGs. He shared three cases where AI can be beneficial or harmful to social good: 1) emerging AI monitoring in classrooms to regulate students’ behavior is concerning but could prevent campus bullying; 2) AI can be used to monitor and analyze biodiversity data to better protect wild animals and improve farm animal welfare; 3) facial recognition has helped contact tracing to combat COVID-19 but has also raised privacy concerns. Yet he believes that, in the future, a sustainable symbiotic society will see humans, animals, other living beings, and the environment in harmony.


The thematic session 3 - “AI and International Security: Challenges and Opportunities”, organized by the Center for International Security and Strategy of Tsinghua University (CISS), was successfully held on 18th December. John Allen, President of the Brookings Institution, noted in his keynote speech that US-China strategic competition in AI and other novel technologies has become increasingly visible, and narratives of an “AI arms race” have not been in short supply. To avoid a runaway arms race, he hopes that the security concerns discussed in the AI and International Security project will be further explored at both the governmental level and in non-governmental fora, particularly with regard to the challenges and risks posed by AI in four areas: the limits of AI technologies, the risks of AI-inflicted conflict escalation, the risks created by AI technology proliferation, and humanitarian risks, in order to reduce the likelihood that those risks come to pass. As we are at a critical juncture in regulating the development and application of these technologies, he called on both governments and expert communities in the US and China to seize the opportunity that now exists to develop agreements on practical steps to reduce the national security risks posed by AI.


Fu Ying, Chair of CISS and Honorary Dean of I-AIIG, focused her keynote speech on China’s position on international cooperation in AI governance, recalling Chinese President Xi Jinping’s message at the G20 Summit in November 2020, which emphasized China’s support for further dialogues and meetings on AI to push for the implementation of the G20 AI Principles and set the course for the constructive development of AI globally. In September 2020, Chinese State Councilor and Foreign Minister Wang Yi proposed a “global initiative on data security” in the hope that the international community would reach an agreement on AI security on the basis of universal participation, and expressed his support for affirming the commitments in the initiative through bilateral or multilateral agreements. He laid out three principles for effectively addressing data security risks: upholding multilateralism, balancing security and development, and ensuring fairness and justice. Fu Ying pointed out that the joint China-US research project on AI security governance is a success story from which both sides have benefited greatly, and that its findings are particularly relevant today. As we come to terms with the inevitable weaponization of AI technologies, the right pathways to AI governance must be identified.


Li Bin, the Chinese team lead for the joint project, noted during his presentation on the progress and findings of his team’s research that the basic norms of international law should be embedded into the design and deployment of AI-enabled weapons to ensure compliance with international legal norms when such weapons are used during armed attacks. Speaking of the application of the Law on the Use of Force and the Law of Armed Conflict, he advocated using the basic principles of international law to steer the development of AI-enabled military technologies. On top of that, both developers and operators of AI-enabled weapon systems need to be trained to ensure a better understanding of, and full compliance with, the principle of proportionality in the Law on the Use of Force and the Law of Armed Conflict.


The thematic session 4 - “International cooperation on AI governance”, organized by UNDP, was successfully held on 19th December. Zia Khan, the Senior Vice President of the Rockefeller Foundation, started by stating that while many AI ethics principles have been published, few concrete governmental initiatives have been implemented. Furthermore, ethics protocols alone are insufficient to ensure the development of ethical AI. We need to develop rules tailored for AI, or otherwise it will be governed by outdated principles. He shared a few takeaways from his work: 1) each stakeholder speaks a different AI language, so we need to be mindful of how these laws may be perceived in different socio-political environments; 2) AI technologies are being developed by multinational companies, yet most regulatory structures are at the national level; 3) we need urgent consensus on AI governance in life-threatening situations like pandemics and war; 4) independent agencies need to assess the safety, efficacy, and fairness of AI applications.


Yeong Zee Kin, the Assistant Chief Executive of the Infocomm Media Development Authority of Singapore (IMDA), believes that the international AI governance framework should be forward-thinking, open and interoperable, and at the same time commercially sensible, practical, and realistic. He elaborated on Singapore’s efforts in AI governance along two pillars: laws that emphasize personal data protection, and tools to encourage trust in AI development, such as a “trusted data-sharing framework” and a “data regulatory sandbox”. A suite of AI governance initiatives has been implemented to support industry adoption of responsible AI, including self-assessment guidelines and case studies. In addition, Singapore has started training and certification programs for professionals in AI governance. He believes none of these efforts can be done in silos, only in collaboration. Moving forward, these frameworks will be actively reviewed to ensure relevance, with “human-centricity” as the guiding North Star.


The thematic session 5 - “Security and Safety of Data and AI”, organized by the Beijing Academy of Artificial Intelligence (BAAI), was successfully held on 19th December. Danit Gal, Associate Fellow at the Leverhulme Centre for the Future of Intelligence at the University of Cambridge, drew a distinction between two important AI concepts, safety and security: AI safety is more internally oriented, referring to preventive measures against unintentional harm that protect the environment from the system, whereas AI security deals more with defense against external attacks intended to harm individuals, organizations and property, protecting the system from the environment. She believes that technology affects us all, and so do its safety and security, yet our ability to address these impacts is utterly outpaced by the advances and applications of the technologies. She suggested reducing vulnerabilities and promoting and galvanizing global governance of data security and safety through enhanced communication and collaboration.


Vincent Müller, Professor of Philosophy at Eindhoven University of Technology, gave a keynote presentation on the long-term risks of superintelligence and general AI in two moves: from superintelligence to technological singularity, and from there to existential risks. Superintelligence is defined as artificial intelligence that greatly exceeds the best cognitive performance of humans in virtually all domains of interest. According to Professor Müller, the shift from superintelligence to technological singularity rests on three presuppositions: accelerated speed and more data, no need for cognitive science, and imperative exigency; the shift from technological singularity to existential risks depends on two presuppositions: the applicability of rational choice theory to AI, and the orthogonality of intelligence and final goals.


Roman V. Yampolskiy, professor at the University of Louisville, commented on the future of AI, security and defense in his keynote speech, noting that superintelligence is coming and with it comes the SuperSmart, SuperComplex, SuperFast, SuperControlling, and SuperViruses. Touching upon concerns with AI, he highlighted recent research including “Taxonomy of Pathways to Dangerous Artificial Intelligence,” “Safe AI—is this possible?”, “Mitigating Negative Impact,” and “Limitations of AI.” He argued that the timeline of AI failures shows an exponential trend, and that AI failures will increase in frequency and severity in proportion to AI’s capability.


Expert Perspectives


1. Fu Ying: Discussion on Paths for AI and International Security Governance


The rapid development of AI technology in recent years has brought huge opportunities; however, technological revolutions are often accompanied by unpredictable security challenges. We therefore need to pay particular attention to the ethical and technical risks brought by the weaponization of AI technology. Experts and scholars in many countries have called for a prohibition on developing intelligent weapons that can independently identify and kill human targets; nor should such intelligent weapons be allowed to control decisions of human life and death. However, it is difficult to reach a global consensus on imposing a comprehensive ban on AI weapons, and even if relevant discussions and negotiations could be initiated, they would be long drawn out.


In light of current trends, the weaponization of AI is inevitable. A more feasible approach may be to require all AI-enabled weapons to be developed in accordance with the existing norms of international law. Countries therefore need to seek consensus on how to prevent risks and work together to build a governance mechanism. During the track-two discussion with the US, our focus was on how to set the “forbidden zones” of attack for AI-enabled weapons, how to supervise AI weapons in accordance with international laws and regulations, and how to encourage restraint so as to limit the militarized abuse of AI data.


· Military security challenges of AI


There are many potential challenges in AI-enabled weapon systems. First, the inherent technical defects of AI make it difficult for the attacker to control the damage range of an attack, so the victim is likely to suffer excessive collateral damage, which might escalate the conflict. AI-enabled weapons should not only be able to distinguish between military and civilian targets when carrying out attacks, but also be able to prevent and avoid excessive collateral or indirect damage to civilian targets. It is uncertain, however, whether existing AI technology can meet these conditions during the use of force.


Second, the current development of AI technology driven by machine learning requires large amounts of data. Training algorithms and training data sets based on big data will inevitably carry biases into real application systems. The possibility of AI providing wrong suggestions to decision-makers therefore cannot be ruled out. Furthermore, when a training data set is tainted by other countries, causing the system to provide wrong detection information, military decision-makers may make misjudgments and correspondingly inappropriate military deployments.


Third, man-machine coordination is the ultimate challenge of AI militarization. Machine learning and big data processing mechanisms have limitations. Neither the reinforcement learning of behaviorism, the deep learning of connectionism, nor the expert systems of symbolism can faithfully and accurately reflect human cognitive abilities such as intuition, emotion, responsibility and values. The military application of AI is a comprehensive, coordinated process of man, machine and environment, and machines’ lack of interpretability, learnability and common sense will amplify the risks of conflict on the battlefield and may even cause an international crisis to spiral.


· Discussion on AI security governance path


Both sides of the dialogue agreed that all countries need to exercise military restraint to prevent the weaponization of AI from causing major damage to mankind. Countries should ban auxiliary decision-making systems that have no awareness of responsibility and risk. When using AI-enabled weapons, it is necessary to limit the damage range of attacks to prevent collateral damage and avoid the escalation of conflicts. In addition, military restraint should also be reflected in public education. Since AI technology diffuses easily, it may end up in the hands of hackers, who might then use it in actions harmful to public security.


A central focus of security governance research is how to ensure that the use of AI-enabled weapons accords with the basic principles of international law. The UN Charter stipulates that, unless authorized by the UN Security Council, member states shall not use force except for the purpose of self-defense. Therefore, when a country uses force in self-defense, the intensity and scale of the force used must be commensurate with the severity of the attack or threat received. In the discussion, Chinese experts specifically pointed out that countries must assume legal responsibility and actively promote the establishment of international norms in military operations involving AI. At the same time, it is necessary to determine the threshold of human participation to ensure that the use of intelligent force will not cause excessive harm. Since it is difficult for AI-enabled weapon platforms to judge what constitutes a necessary, appropriate, and proportionate attack, the subjective initiative of human commanders should be respected.


In addition, the security of AI data must be guaranteed. The processes of data mining and collection, data labelling and classification, and data use and supervision should be regulated and restricted. The process and methods of collecting intelligent-weapon training data should comply with international law, and the amount of data collected should reach a sufficient scale. It is necessary to ensure the quality and accuracy of data labelling and classification to avoid forming wrong models that cause decision-makers to make misjudgments. In the process of data use, attention should be paid to the targets and to tainted data. Some Chinese scholars suggest grading the autonomy level of AI weapons. For example, AI weapons can be divided into five levels: semi-autonomous, partially autonomous, conditionally autonomous, highly autonomous and fully autonomous. Grading the autonomy level helps confirm and guarantee the role of human beings, so as to effectively manage and control AI and autonomous weapon systems.


· China-US cooperation on AI international governance


The current stage is a critical window for formulating international safety regulations for AI. China and the United States are at present the fastest-growing countries in the research and application of AI technology, and the two countries need to strengthen coordination and cooperation in this field. Other countries have also expressed concerns about the security of AI applications, indicating that AI governance is a common problem for mankind that cannot be solved by one or two countries alone. Dialogue and cooperation between China and the United States are therefore crucial and will contribute to international cooperation on AI governance. The two countries should conduct formal discussions on promoting the formulation of international norms and systems, explore areas of cooperation based on their respective interests and concerns, exchange and translate relevant documents, and, through policy communication and academic exchanges, reduce the potential risks in this field that might affect bilateral relations and international security.


In recent years, China has actively sent signals of cooperation. On November 21, 2020, President Xi Jinping emphasized at the 15th G20 Leaders’ Summit that China supports increased dialogue on AI and proposes a meeting on this topic in due course to advance the G20 AI Principles and set the course for the healthy development of AI globally. On September 8, 2020, State Councilor and Foreign Minister Wang Yi proposed the Global Initiative on Data Security, which includes three principles to be followed in effectively responding to data security risks and challenges. The initiative expresses the hope that the international community will reach international agreements on AI security on the basis of universal participation, and calls on all states to support and affirm the relevant commitments through bilateral or multilateral agreements. While developing AI technology, China also attaches great importance to, and actively promotes, the development of relevant domestic governance. In 2018, China issued the White Paper on Artificial Intelligence Standardization, which lists four ethical principles: the principle of human interests, the principle of liability, the principle of transparency, and the principle of consistency of rights and responsibilities. China is well prepared to cooperate with the United States and other countries and regions on AI governance. We believe that AI should not become a zero-sum game, and technological breakthroughs should ultimately benefit all mankind.


2. Xue Lan: On Challenges and Prospects of AI Governance


· The benefits and risks of AI to human society


At present, simple robots are already widely used, for example in hotel services and, in particular, in anti-epidemic activities. AI’s image recognition capability can also be used to identify certain cancers. In addition, there are more complex AI application scenarios. For example, Chongqing is promoting a project called the Cloud Valley, which uses AI and the Internet of Things to form intelligent communities. People also apply AI to scientific discovery; recently there have been specific applications in the life sciences, such as AlphaFold’s breakthrough in predicting protein structure, an even more complex application scenario. From all of these we can see the continuous evolution of AI applications, a process that moves from the simple to the complex.


While bringing many social benefits, AI also generates risks of great concern. The first is algorithmic discrimination. Recent studies have found that some hospitals in the United States use AI to screen cases and decide whether to admit patients; due to algorithmic problems, this process shows relatively obvious racial discrimination. Under the same conditions, Black patients are diagnosed as having milder conditions, so they are more likely to be refused admission. The second risk concerns security. In the field of automatic driving, if data quality is not high, the automatic driving system may make mistakes and fail to recognize obstacles ahead, causing accidents. After such accidents, the distribution of responsibility is also an issue worth discussing. The third risk concerns privacy. For example, facial recognition may infringe on privacy and may also bring about discrimination; recently there has been much discussion of this issue in China. The fourth concerns social influence. One issue is the replacement of employment by AI. The latest research shows that a considerable number of jobs will be replaced by AI-related technology from 2020 to 2025. An MIT professor published an article last year comparing two indexes, employment creation and employment elimination by AI and IT, and found that the employment elimination index is actually higher. In the long run, the impact of AI on employment indeed deserves our close attention. Another issue is information cocoons. Information push brings convenience: there is no longer any need to actively search for all kinds of information, since algorithms can push everything we need based on our preferences. The potential consequence, however, is that we receive ever more information confirming what we already believe, reinforcing those beliefs. Will this process polarize our views and social perceptions? These issues deserve further attention and consideration.


· Basic rules, policies and systems required for AI governance


The goal of AI governance is to ensure that the development of AI benefits mankind and that technology is used for social good. First, AI governance is still developing and evolving; it remains an immature field. AI itself has different definitions: when compiling ethical guidelines for AI, we found that the basic definitions used by different organizations were not exactly the same. AI also has a black-box character, as parts of its algorithmic mechanisms are unexplainable and its social impact and possible risks are uncertain. Second, AI governance includes elements such as values, mechanisms, participants, objects, and effects. Values are a very important element: it should be clear which basic values are to be adhered to when developing AI technology. Mechanisms refer to the institutional systems implemented to govern effectively. Participants are those needed to implement AI governance, which may include individuals, organizations, and countries. The objects of governance may involve the behavior of enterprises and users. The last element is effect, that is, how to evaluate the results of governance. Third, AI governance involves a complex of mechanisms: in global governance, different governance mechanisms address the same type of problem. These mechanisms overlap, their views and values sometimes conflict, and there are no supervisor-subordinate relations among them, so coordinating them is a major challenge. AI governance shares this problem, which requires further discussion. Fourth, AI governance involves the dimensions of technology, society, and results. As Mr. Yao Qizhi and Mr. Zhang Bo mentioned in their speeches, AI governance faces many challenges. In fact, many of these challenges can be solved through technology, such as issues of security, standards, and infrastructure. Ethical issues belong mainly to the social dimension, where rules are set to influence the development and application of AI. In terms of results, the focus is on coping with negative effects, including privacy, security, discrimination, and the gap between the rich and the poor.


· Challenges of AI governance


First, there are difficulties in regulating AI. One of the biggest challenges of AI governance is that AI develops very fast, while the governance system, as part of the social system, develops and evolves slowly, resulting in an inconsistent pace. We are concerned that this inconsistency will further intensify. Last year, the Artificial Intelligence Governance Committee issued eight governance principles. The last one is agile governance, which is intended to change the traditional governance model. The traditional model must go through many policy formulation procedures and can no longer keep up with technological change. It is therefore necessary to introduce policies in a timely manner to guide technological development, even if the policy formulation process is not yet complete. Second, the public sector plays a contradictory role. AI technology can be widely used in the public sector, but it might also be abused, and the public sector faces various risks and challenges in applying AI. Third, we need to promote responsible innovation by enterprises. Enterprises are the most active subjects, and they interact strategically with the government in the governance process. We therefore need self-disciplined enterprises that emphasize better communication with the public and society.


· Why there is a need for international AI governance


Artificial intelligence has a long-term impact on the future development of mankind, and the development of AI to date has been a history of international cooperation. We analyzed cooperation between Chinese and US scholars in the field of AI and found that research results achieved through China-US cooperation account for a relatively large proportion. Close international cooperation was a very important factor in the past development of AI, and it is necessary to continue it in the future. On the other hand, AI may also bring major risks. Ambassador Fu Ying mentioned in her speech that AI may bring devastating effects, such as the production of large-scale autonomous weapons of destruction. No single country can solve such problems alone, and it is impossible to estimate whether certain organizations could develop such technology and pose a threat to sovereign countries. Therefore, it is necessary for each country to seek common ground while reserving differences, and to jointly deal with the problems and find solutions.


We need to explore and establish a global platform or mechanism, on the basis of previous research and discussions, to coordinate on the pressing issues facing the future development of AI. First, we can learn from existing mechanisms, including nuclear technology governance mechanisms, space law, and climate change governance mechanisms. Second, we need to strengthen international cooperation in AI research, covering not only scientific research and technological development but also social and ethical aspects. As Ambassador Fu mentioned, the cooperation between Tsinghua University and the Brookings Institution is a very good example: the two sides found common ground on values and governance principles, along with some differences. Based on these similarities and differences, and through constant discussion and communication, we can ultimately find common principles to guide the healthy development of AI for the benefit of all mankind.


3. Zeng Yi: How does AI promote global sustainable development?


According to current data analysis, there are more than 8 million scientific research papers on AI, but only 0.1% are linked to the sustainable development goals. It is therefore the responsibility of scientists to consider how to integrate AI with the sustainable development goals. This is why we released the Artificial Intelligence for Children: Beijing Principles this year, China’s first AI development principles for children. An artificial intelligence and biodiversity workgroup was also established to promote the use of AI to protect animals.


Although AI is a useful tool that we invented, it is currently only capable of information processing rather than truly understanding the world. Turing raised the ultimate scientific and philosophical question, “Can machines think?” Modern AI is obviously unable to think, and because of this, current AI cannot be the main body of responsibility. The people and organizations involved in the design, research and development, application, and deployment of AI will remain the main body of responsibility for a long time. Only when AI has acquired a certain degree of advanced cognitive functions, such as self-perception and cognitive empathy, will there be a basis and possibility for discussing whether AI can shoulder part of the responsibility.


Japan advocates that AI be a partner of mankind. The ethical guidelines of the Japanese Society for Artificial Intelligence state: “If artificial intelligence wants to become a quasi-member or member of society, it must abide by the principles formulated by mankind.”


Western society generally regards AI as a tool, and in science fiction movies AI most often acts as the enemy of mankind. What role AI will play in society in the future depends on how, and in what form, we implement general intelligence and super intelligence.


I hope that our future society will be a sustainable and symbiotic one, in which humans, animals, AI, other forms of life, and the natural environment form a harmonious symbiosis, and in which humans and AI collaborate deeply to achieve long-term sustainable development in the true sense.


4. Yu Yang: China's experience of AI governance during the epidemic period---integrative coordination governance by the government and the society


In China’s governance practice during the epidemic, the involvement of the information and data technology represented by AI showed two features. One is comprehensiveness: during the epidemic, AI penetrated all aspects and levels of social governance, public governance, and economic governance. The other is speed: the IT-based epidemic monitoring system for people flow was built very quickly. The reason AI technology could fully support China’s fight against COVID-19 is that China has formed a structure of integrative coordination governance by the government and society. The speech focused on the concept, mechanism, necessity, and advantages of this integrative coordination governance, and introduced China’s important experience in applying AI in the fight against the epidemic.


The reason China’s AI could be involved in epidemic governance is that such governance was carried out by an integrative coordination governance body formed by an AI-adaptive government and active, capable enterprises. This integrative governance model is why AI could fully participate in the prevention and control of the epidemic in China. Yu Yang particularly pointed out that the model could not function without three basic components: proactive and technologically capable enterprises and NGOs, AI-adaptive governments, and hub departments widely distributed among governments and enterprises.


Compared with traditional governance, integrative governance has many advantages in fighting the epidemic. The first is that it can adopt algorithmic thinking in governance. The second is agile governance: because enterprises have a governance mindset, they can transform their project development capabilities into the ability to discover public governance obligations. The third is the ability to deal with complex issues: the public governance of the epidemic involves a large number of issues that require complex calculations, and only when such calculations are handled can these issues be managed.


Based on this conclusion, Yu Yang finally put forward three policy recommendations. First, every country and region should recognize that AI enterprises and NGOs are important reserve forces for national governance, and systems should be built to accommodate and encourage AI enterprises and NGOs to participate in governance voluntarily. Second, we should cultivate algorithmic governance thinking in government and build an AI-adaptive government in terms of mindset, system, and structure. Third, we need to strengthen and promote the hub departments in government, enterprises, and NGOs that have the ability to coordinate and integrate AI.


Major Events


1. Cybersecurity Standard Practice Guide---Guidelines on Artificial Intelligence Ethical Security Risk Prevention officially released


According to news released on the official website of the National Information Security Standardization Technical Committee on 5th January, the Cybersecurity Standard Practice Guide---Guidelines on Artificial Intelligence Ethical Security Risk Prevention (hereinafter referred to as the Guidelines), compiled by the Secretariat of the National Information Security Standardization Technical Committee, was officially released. The Institute for AI International Governance of Tsinghua University organized the preliminary study and took part in compiling the Guidelines.


Regarding the ethical security risks AI may generate, the Guidelines provide standard guidance for organizations and individuals carrying out AI research and development, design and manufacturing, deployment and application, and other related activities. Under its provisions, application deployers should provide users with clear and easy-to-operate mechanisms to reject, intervene in, and stop using AI, and should provide non-AI alternative solutions to the best of their ability.


2. Seminar held on the Experience and Governance of AI in Anti-epidemic Applications


On 18th November, 2020, a seminar on the Current Hot Topics in AI International Governance---Experience and Governance of AI in Fighting the Epidemic was held at Tsinghua University. The event was presided over by Professor Liang Zheng, vice president of I-AIIG. Professor Xue Lan, dean of I-AIIG, introduced the background of the conference. Yu Yang, director of the international academic exchange program, reported on the key messages of the special report, China’s Experience in AI Governance in Epidemic Prevention and Control and Resumption of Work and Production. Representatives from companies such as Ali, Tencent, China Unicom, Meituan, Bytedance, Megvii, SenseTime, JD.com and Create, from government sectors such as the Ganzhou Municipal Health Commission and the Hangzhou Xiaoshan District Data Resources Bureau, and experts and scholars from the New Generation Artificial Intelligence Development Research Center of the Ministry of Science and Technology, Tsinghua University, and Peking University conducted in-depth discussions on related topics.


3. Seminar held on the Current Hot Issues in AI International Governance


On 24th September, 2020, a seminar on the Current Hot Topics in AI International Governance was held at Tsinghua University. The seminar was presided over by Professor Liang Zheng, vice president of the Institute for AI International Governance, Tsinghua University. Professor Xue Lan, dean of the Institute for AI International Governance and of Schwarzman College, Tsinghua University, delivered a speech. Officials from the Ministry of Science and Technology, the Ministry of Foreign Affairs, and the Office of the Central Cyberspace Affairs Commission raised questions on the directions that deserve attention in AI international governance, and expressed the hope that the Institute for AI International Governance could combine its research with the actual work of their departments so as to serve major national strategies and policies. Representatives from companies such as Meituan, Megvii, SenseTime, ZTE, Alibaba Research Institute, and Tencent Research Institute shared their practical experience and reflections on AI governance, and discussed possible future cooperation with the Institute for AI International Governance. Experts from the New Generation Artificial Intelligence Development Research Center of the Ministry of Science and Technology and Tsinghua University held in-depth discussions with participants on the theme of the seminar.
