Highlights
•Examining public expectations and knowledge of AI through the theoretical lens of technological frames.
•There are similarities and differences in the public’s technological frames before and after the launch of ChatGPT.
•The public's technological frames are contentious and preliminary, complicating the formulation of guidance and solutions.
•AI governance should focus on the effectiveness of public participation in addition to the avenues for involvement.
•A human-machine collaborative approach combining LDA and content coding is applied, with the two methods complementing each other.
Abstract
Public participation is crucial for the governance of artificial intelligence (AI). However, the public is typically portrayed as a passive recipient in practice, and little is known about their expectations, assumptions and knowledge about AI. Based on the theoretical lens of technological frames, and using Latent Dirichlet Allocation (LDA) and content coding on 114,393 relevant comments, the article explores public perceptions of the nature of AI, AI strategy and AI in use, and further compares the discourse before and after the launch of ChatGPT. The findings show that ChatGPT amplifies public enthusiasm and expectations of AI, as well as concerns and fears, and draws attention to national competition. However, public discourses are generally conflicting and preliminary, characterized by abstract and grandiose narratives, posing challenges in reaching consensus and generating practical insights. Therefore, this analysis raises serious concerns over public participation in AI governance, as the public not only faces limited channels for substantive involvement but also struggles to articulate effective discourses in the public sphere.
Introduction
The concept of artificial intelligence (AI) was first articulated at the Dartmouth Conference in 1956. Half a century later, after much disillusionment and the ensuing “AI winter”, AI has made a comeback on the technical scene on the strength of machine learning (ML) algorithms and rapidly innovative applications (Kerr, Barry, & Kelleher, 2020). The recent launch of ChatGPT has brought AI even further into the spotlight for society at large, as ChatGPT and its derivatives appear to rival human intelligence, if not surpass it (Taecharungroj, 2023). However, the hype around AI also heralds the technology's obvious double-edged nature. While AI holds great potential for economic development and social progress, it also poses serious risks to ethical and moral systems (Dignam, 2020; Taeihagh, 2021; Zhang, Wu, Tian, Zhang, & Lu, 2021). A prominent response was the open letter from the Future of Life Institute calling on “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4”,1 signed by tens of thousands of prominent technology leaders and researchers; yet even this does not seem to have slowed the rapid evolution of AI. As a result, how to balance the potential benefits and harms of AI remains contested.
One increasingly prominent heuristic in this regard is the notion of AI governance. Inheriting the concept of governance as the interaction of multiple actors with and through networks (Rhodes, 2007), AI governance refers to the discursive process in which diverse social actors, including the public sector, the private sector and the public, participate in dialogue and negotiation to generate broader social values throughout the AI technology lifecycle (Roberts et al., 2021; Ulnicane, Knight, Leach, Stahl, & Wanjiku, 2021; C. Wilson, 2022). AI governance is therefore considered a promising approach to dealing with the double-edged sword of AI and has received significant support in both national strategies and academic research (Djeffal, Siewert, & Wurster, 2022; Fatima, Desouza, & Dawson, 2020). For example, AI strategies have identified an active and collaborative role for the state and a multi-stakeholder approach for non-state actors (Cath, Wachter, Mittelstadt, Taddeo, & Floridi, 2018). In particular, much research has reached consensus on public participation as an important foundation of AI governance (Kerr et al., 2020; C. Wilson, 2022), as it is a key means for citizens to collectively promote the common good and protect their rights from political elites (Scaff, 1975). These efforts, while worthwhile, are not sufficient. In fact, formulations of public participation in national strategies are ambiguous and secondary, lacking any clear framework, and the public is treated more as users of services and sources of labor than as participants in a democratic society (C. Wilson, 2022). As a result, public participation in AI governance amounts to a performative or rhetorical gesture rather than an explicit commitment in AI strategies (Kerr et al., 2020; Schuelke-Leech, Jordan, & Barry, 2019). In a similar vein, many studies treat the public as a passive recipient of AI and discuss public perceptions and views on AI applications under that premise.
This leads to public participation becoming an afterthought to check the legitimacy of the technology, rather than being an integral part of AI governance throughout the technology lifecycle (Gesk & Leyer, 2022; Kodapanakkal, Brandt, Kogler, & van Beest, 2020; Willems, Schmid, Vanderelst, Vogel, & Ebinger, 2022; Willems, Schmidthuber, Vogel, Ebinger, & Vanderelst, 2022).
Accordingly, despite the widespread consensus on the significance of public participation in AI governance, there remains a lack of effective mechanisms for public participation, as well as insufficient investigation of public perceptions in this context. This gap raises a serious concern, as the public not only plays a crucial role in AI governance but is also the primary and ultimate recipient of AI.2 In particular, the recent launch of ChatGPT represents an important milestone in the advancement of AI, signifying its shift from a specialized field to the broader social sphere. ChatGPT has therefore dramatically changed the technological context and is expected to influence public perceptions of AI. Against this backdrop, this article poses the following research questions: (1) What are the public's assumptions, expectations, and knowledge about AI? (2) What are the similarities and differences between the technological contexts before and after the launch of ChatGPT?
This article applies the theoretical lens of technological frames, specifically the three analytical dimensions of the nature of technology, technology strategy and technology in use (Criado & O. De Zarate-Alcarazo, 2022; Orlikowski & Gash, 1994), to reveal the assumptions, expectations, and knowledge of the Chinese public regarding AI in different technological contexts.3 Specifically, the article collected 114,393 public comments about AI via a web crawler on Chinese social platforms, comprising 40,618 posted before and 73,775 after the release of ChatGPT, and used a human-machine collaborative approach, combining Latent Dirichlet Allocation (LDA) and content coding, to cluster the comments into topics along the above analytical dimensions. LDA is an unsupervised ML method for discovering latent topics in large amounts of unstructured text data, while content coding is the manual process of systematically classifying text under theoretical guidance. The insights from LDA and content coding were integrated to provide a comprehensive understanding of public perceptions of AI.
The article consists of seven sections. Following this introduction, the second section reviews the literature on AI governance and public participation, and the third section presents the theoretical lens of technological frames. The fourth section explains the methodology, including the LDA and content coding methods, data collection and data analysis. The fifth section presents the public's technological frames of AI in different contexts and compares them. The sixth section discusses the results. Finally, the seventh section concludes with contributions and limitations.
Read more: https://www.sciencedirect.com/science/article/abs/pii/S0740624X24000315