Abstract

This study investigates the association between social media use and attitudes toward AI technologies. A nationally representative two-wave longitudinal survey (N = 5110) examined the mediating roles of perceived AI fairness and threat concerning three AI technologies: algorithms, facial recognition technology, and driverless passenger vehicles. Hypotheses were derived from media effect theories and the heuristic-systematic model of human-AI adoption. The results showed that social media use predicted more positive attitudes toward the three AI technologies indirectly through increased perceived AI fairness and reduced perceived AI threat. The findings contribute to our understanding of social media effects on attitudes toward AI and the underlying psychological mechanisms, providing valuable theoretical insights and practical implications.

1. Introduction

Artificial intelligence (AI) has been applied and programmed in a wide range of technologies (AI technologies hereafter) such as voice assistants, social media algorithms, chatbots, facial recognition technology, and driverless passenger vehicles [1]. At the heart of the “fourth industrial revolution” [2], AI is reshaping our lives and society. Approximately 77% of daily-use devices now integrate some form of AI, and a quarter of companies leverage AI to address labor shortages [3]. Moreover, it is estimated that by 2030, one in ten cars will be self-driving [4]. These trends make it imperative to examine lay users’ attitudes toward AI technologies, which will significantly impact the development, implementation, and acceptance of new solutions utilizing AI in society [5]. A growing body of research has investigated the antecedents of attitudes toward AI (e.g., [6–8]). There are still gaps in the literature, however.

The first gap pertains to the role of social media in shaping attitudes toward AI. Social media have become a primary channel through which people access and engage with information about AI. AI-related news, research articles, tutorials, and industry updates are increasingly disseminated and discussed on social media platforms [9–11]. This raises the following question: How does social media use influence people’s perceptions about and attitudes toward AI? While media effect theories and evidence suggest that social media can directly influence people’s perceptions of reality [12, 13], it is more interesting and valuable to study the indirect effect of social media and its mediating mechanisms [14]. Yet, existing research has rarely investigated the impact of social media on attitudes toward AI and the underlying psychological mechanisms [15].

Drawing upon the heuristic-systematic model of human-AI adoption (HSM; [16, 17]), the current study focuses on two potential mediators underlying the association between social media use and attitudes toward AI technologies: perceived AI fairness and threat. AI has been widely involved in many high-stakes decision-making scenarios (e.g., hiring, healthcare, and criminal justice). Evidence suggests a divergence of views regarding the fairness of AI [18–20]. Individuals tend to respond favorably to AI when they believe it can make fair decisions, but negatively when they perceive it as incapable of doing so [16]. The study argues that social media use can affect attitudes toward AI by first influencing perceived AI fairness. This process may further be mediated by perceived AI threat. AI’s agency has raised concerns about its potential threats, ranging from its potential to replace human jobs and compromise user privacy and data security to its capability to affect social connections [21–23]. Perceiving AI as unfair can lead people to regard it as threatening; heightened AI threat perception can lead to negative responses to and thereby negative attitudes toward AI technologies (e.g., [24, 25]). Taken together, social media’s association with attitudes toward AI may be serially mediated by perceived AI fairness and perceived AI threat.

Another gap in the literature concerns the role of social media across different AI technologies. The linkage between social media use and attitudes toward AI may vary across different AI technologies. AI has been applied across various domains such as education, healthcare, and transportation, in various forms [26, 27]. People may differ in how they perceive and evaluate a specific AI technology, depending on how they perceive and evaluate its threats and fairness in different scenarios [28, 29]. As a result, the relationships among social media use, perceived AI fairness and threat, and attitudes may vary depending on the specific AI technology being considered. However, existing research tends to focus on a single AI technology in a single domain without comparing multiple technologies (e.g., [30–33]), which has limited our understanding of media effects on AI technologies.

This study has three objectives. First, it investigates the association between social media use and perceptions and attitudes toward AI technologies. Second, it seeks to understand the psychological mechanism underlying the linkage between social media use and attitudes toward AI technologies by testing the mediating roles of perceived AI fairness and threat. Third, it explores whether and how the above relationships vary across three AI technologies including algorithms, facial recognition technology, and driverless passenger vehicles.

1.1. Conceptual Basis

This study argues that social media use can influence individuals’ attitudes toward AI technologies by fostering both heuristic and systematic processing of AI. This argument draws upon media effect theory, particularly the cultivation theory [12], as well as recent theoretical advancements in human-AI adoption, such as the heuristic-systematic model of human-AI adoption (HSM; [16, 17]).

Social media use can influence individuals’ perceptions about AI, according to the cultivation theory [12]. As a classic media effect theory, the cultivation theory posits that repeated exposure to media content shapes and cultivates individuals’ perceptions of reality. The more media people consume, the more likely they are to perceive the real world as aligning with portrayals presented in the media. Applying this logic to the current study, exposure to AI-related content on social media can influence individuals’ perceptions and attitudes toward AI.

The HSM provides a useful conceptual framework for understanding the antecedents and mechanisms of attitudes toward AI. It posits that AI acceptance and adoption involve several factors from both heuristic and systematic processes [17]. While users often consider limited informational cues and derive preliminary evaluations based on heuristic processing of AI, systematic processing requires users to deliberately process all relevant information to form rational evaluations about AI [16]. This study proposes that perceived AI fairness is a factor deriving from heuristic processing and perceived AI threat is a factor relying on systematic processing. Since it is often infeasible for lay users to judge the fairness of AI, perceived AI fairness relies mostly on heuristic processing [17, 34]. However, assessing threats related to AI demands intentional and thoughtful contemplation of ethical, social, and personal concerns, as well as the associated benefits and risks linked with AI [1, 27]. The HSM further proposes that heuristic processing of AI predicts systematic processing of AI, which, in turn, predicts AI acceptance and adoption, forming a sequential process.

Building upon the cultivation theory and the HSM, the current study is aimed at explaining social media effects on attitudes toward AI by considering social media use as an antecedent of both heuristic and systematic processing of AI. Specifically, it predicts that social media use may predict, first, heuristic processing of AI fairness and then systematic processing of AI threat, which, in turn, predicts attitudes toward AI. Below, we discuss rationales for this prediction.

1.2. Social Media Use and Perceived AI Fairness

Perceived AI fairness refers to the extent to which AI is capable of making unbiased, nondiscriminatory, or fair decisions [35]. The ability of AI to make efficient and data-driven decisions is a main driver of AI adoption in various industries and scenarios such as hiring [36], lending [37], and criminal justice [38]. According to the HSM, perceived fairness plays a pivotal role in shaping people’s attitudes toward AI [16]. AI fairness can afford users a sense of trust, which fosters positive attitudes [16]. When AI is perceived as capable of making fair decisions, people evaluate AI more positively; when AI is perceived as incapable of making fair decisions, people develop negative attitudes toward AI [16].

Social media use can foster fairness perceptions about AI for two reasons. First, social media use can cultivate positive perceptions and evaluations of AI. Cultivation theory posits that exposure to media content can shape people’s perception of reality [12]. Recent research has shown that social media content about AI is mostly characterized by positive sentiment and evaluations. For example, Qi et al.’s [11] analysis of Reddit data found that, overall, the public perceived AI as a beneficial force that can contribute to societal improvement; Zeng et al.’s [39] study of AI discourse on WeChat revealed strongly positive evaluations of the economic potential of AI with little critical content. Considering this, if social media consistently portray AI or technology-related topics in a positive manner, people can develop corresponding positive perceptions and attitudes.

Second, social media exposes individuals to AI-related information from various sources, enabling them to explore new ideas, trends, and resources about AI [40, 41]. For example, social media offer access to educational resources and insights shared directly by experts, as well as user-generated content that includes personal experiences with AI [10, 11]. Exposure to diverse content about science issues, including AI, contributes to a more nuanced understanding of relevant issues [42]. This understanding can allow individuals to weigh different viewpoints and develop more balanced perspectives about AI’s ability to make fair decisions [43].

Together, exposure to a diverse range of AI information on social media fosters positive evaluations of AI and a more comprehensive understanding of AI, both of which may facilitate fairness perceptions about AI. Hence, this study hypothesizes the following:

H1. Social media use is associated with increased perceived AI fairness.

1.3. Perceived AI Fairness and Threat

Perceived AI threat refers to subjective perceptions about AI’s precarious impact on individuals and society [27]. While optimism abounds regarding AI’s potential to improve our lives [1, 44], significant ethical and social concerns have emerged concerning how AI developments will unfold and the potential threats they may bring to individuals and society [45]. These concerns stem from various sources, such as the fear of job displacement due to automation, privacy and security concerns related to AI’s data handling, ethical implications of AI decision-making, and potential threats associated with AI’s autonomy [21, 27, 46–48].

According to the HSM, perceived AI fairness not only serves as a crucial heuristic for establishing trust but also plays a foundational role in guiding users’ systematic evaluations of AI [17]. This study contends that perceived AI threat is a product of systematic processing that is critical to users’ evaluation of AI. Accumulating research has demonstrated the critical role of perceived AI threat in the acceptance and adoption of AI (e.g., [1, 21, 48]). While perceived fairness can result from heuristic processing, as users typically lack the expertise to evaluate specialized AI features, AI threat perceptions require more effortful and deliberate consideration of the ethical, social, and personal concerns associated with AI (as discussed above). The HSM proposes that heuristic and systematic processing sequentially predict users’ evaluations of and attitudes toward AI [17]. In light of this, the more users perceive AI as fair, the more likely they are to believe it can make responsible and ethical decisions. Consequently, they experience reduced fear and perceive less threat from AI.

H2. (Higher) perceived AI fairness is associated with lower perceived AI threat.

1.4. AI Perceptions and Attitudes toward AI Technologies

Perceived AI threat can influence individuals’ attitudes toward AI for two reasons. First, perceived AI threat tends to trigger negative responses, such as resistance, skepticism, and distrust of AI technologies [49]. These negative responses increase dissatisfaction with the technologies, as per the HSM [17], thereby worsening attitudes toward AI [50]. Second, perceived AI threat dampens attitudes toward AI and its applications, primarily due to negativity bias [1]. Negativity bias denotes the tendency for negative information to hold greater salience and influence in information processing and decision-making [51]. Humans typically exhibit strong responses and attentiveness to unpleasant, especially threat-related, information [52]. When individuals perceive AI as a threat, such as in the context of job displacement, they are expected to react strongly to this perception, consequently developing more negative attitudes toward AI technologies.

Accumulating evidence supports the prediction that increased perceived AI threat leads to more negative attitudes toward AI technologies, while decreased perceived AI threat leads to more positive attitudes. For example, Vu and Lim [25] found that a higher perceived threat of job loss due to AI adoption in the workplace was associated with more negative attitudes toward AI and robots. Similarly, Granulo et al. [53] found that people had negative emotional reactions when they imagined people’s jobs being replaced by robots. Hasan et al. [24] showed that privacy and data security concerns and risks associated with the voice assistant Siri had a significantly negative influence on the intention to purchase Apple devices.

Together, theory and evidence suggest that reduced perceived AI threat would be associated with more positive attitudes toward AI. Hence, this study hypothesizes the following:

H3. (Lower) perceived AI threat is associated with more positive attitudes toward AI.

In the above, we hypothesized that increased social media use would be associated with higher perceived AI fairness, which in turn predicts lower perceived AI threat, leading to more favorable attitudes toward AI. We consider the above hypotheses together and further examine the mediating roles of perceived AI fairness and threat. Of interest is whether social media use predicts attitudes toward AI indirectly through perceived AI fairness and threat. This process is in line with the postulation of the HSM that heuristic and systematic processing of AI serially predict AI acceptance and adoption [17]. Shin’s [16] experiment found that heightened perceived AI fairness, reflecting heuristic processing, led to increased perceived usefulness and convenience of AI, reflecting systematic processing. As a result, these factors serially predicted higher user satisfaction with AI. Taken together, it is reasonable to expect social media use to first lead to increased AI fairness perceptions and then to reduced perceptions of AI threat, which, in turn, predict more positive attitudes toward AI technologies.

Investigating the underlying mechanism through perceived AI fairness and threat is theoretically and practically important because intervening variables provide valuable explanations for how and why the association between social media use and AI attitudes occurs, and hence, they can be helpful when designing intervention programs [54].

H4. The association between social media use and attitudes toward AI technologies is serially mediated by perceived AI fairness and threat.
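In standard two-mediator serial mediation notation (the structure estimated as PROCESS model 6 in the Method section), H4 can be formalized as follows, where X is social media use, M1 is perceived AI fairness, M2 is perceived AI threat, Y is the attitude toward a given AI technology, and covariates are omitted for brevity. This is a conventional formalization added for clarity, not notation from the original materials:

$$
\begin{aligned}
M_1 &= i_1 + a_1 X + e_1 \\
M_2 &= i_2 + a_2 X + d_{21} M_1 + e_2 \\
Y &= i_3 + c' X + b_1 M_1 + b_2 M_2 + e_3
\end{aligned}
$$

The serial indirect effect tested under H4 is the product $a_1 d_{21} b_2$. Given H1 ($a_1 > 0$), H2 ($d_{21} < 0$), and H3 ($b_2 < 0$), this product is expected to be positive, implying that more social media use is indirectly associated with more favorable attitudes toward AI.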

AI has found applications in a wide range of scenarios [27], spanning from transportation and home services to medicine and healthcare, education, and more, and leverages various technologies such as machine learning, robotics, computer vision, natural language processing, and the Internet of Things (IoT) [55]. However, existing research tends to focus on a single AI technology, such as facial recognition (e.g., [30]), driverless passenger vehicles (e.g., [32]), voice assistants (e.g., [33]), and service robots (e.g., [31]), or on a single industry, such as higher education (e.g., [56]), consumer service (e.g., [57]), and healthcare (e.g., [58]). Rarely have multiple AI technologies been compared in a single study, which has limited our understanding of how people perceive and evaluate different AI technologies.

The differences in the representation, capabilities, and application scenarios of AI pose interesting questions regarding the lay public’s reactions to different AI technologies. Research has identified different forms of AI technologies, including robotic, virtual, and embedded AI, and argued that the form of AI and the level of machine intelligence (i.e., capabilities) affect people’s trust in AI differently [26]. An AI that pushes tailored advertisements on social media might be perceived vastly differently from an AI that drives people around for errands. Considering this, scholars have called for systematic comparisons across different application areas and technologies [59]. To this end, this study explores whether and how the linkage between social media use and attitudes toward AI technologies may differ across different applications of AI in different industries.

RQ. Are there differences among different AI technologies in the mediating process proposed in H4?

2. Method

2.1. Overview

A secondary analysis of a two-wave nationally representative survey was conducted. Data were from the American Trends Panel (ATP), a nationally representative panel of randomly selected US adults created by the Pew Research Center [60]. Panelists participated via self-administered web surveys, conducted in both English and Spanish. The panel is managed by Ipsos.

Given that panelists are surveyed repeatedly at various time points, two limitations regarding the sample and data may arise. The Pew Research Center has implemented measures to mitigate these limitations and potential biases. First, attrition, where respondents drop out over time, can render the sample less representative of the target population. To counteract this, the ATP conducts annual recruitments to enroll new panelists from diverse regions across the country. Second, repetitive interviewing may induce panel conditioning, wherein panelists alter their beliefs or behaviors merely by continually responding to a variety of questions over time. However, research has not identified any significant conditioning effects within the panel [60]. Previous studies have employed ATP data to investigate various topics, including public perceptions about driverless passenger vehicles (e.g., [61]) and public health issues (e.g., [62, 63]).

This study selected variables from two waves of surveys that were three months apart: wave 93 (entitled “Social Media Update”; field dates: July 26-August 8, 2021) and wave 99 (entitled “Artificial Intelligence (AI) and Human Enhancement”; field dates: November 1-7, 2021). Social media use variables were extracted from wave 93, and artificial intelligence-related psychosocial variables were extracted from wave 99. The two waves were merged using QKEY, a unique identifier assigned to each respondent. A total of 5110 respondents who completed all relevant questions across both waves were included in the analysis. Hence, the data allowed a longitudinal examination of social media use’s association with attitudes toward AI technologies. Because responses were merged across two waves, the analysis did not use the respondent weights provided in the datasets [61].

The final sample included respondents from various age groups: 8.75% were aged 18-29, 32.55% aged 30-49, 28.88% aged 50-64, and 29.81% aged 65 or above. The sample was 54.98% female and 45.02% male; 69.81% identified as White, 7.88% Black, 14.21% Hispanic, 3.47% Asian, and 3.22% other races. The sample had a median educational attainment of “college graduate or above” and a median political ideology of “moderate” (1 = very conservative, 5 = very liberal).

2.2. Measures
2.2.1. Social Media Use

Respondents were asked “Do you use any of the following social media sites?” and indicated whether they used each of eight social media platforms (Twitter, Instagram, Facebook, Snapchat, YouTube, LinkedIn, Reddit, and TikTok) on a binary scale (1 = yes, use this; 0 = no, do not use this). Responses were summed across the eight platforms to create an additive index for social media use, so that higher scores indicated greater use of social media.
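To make the scoring concrete, the following minimal Python sketch builds the additive index from the eight binary indicators. The column names are hypothetical placeholders; the actual ATP wave 93 variable names differ.

```python
import pandas as pd

# Hypothetical column names for the eight binary platform items
# (1 = yes, use this; 0 = no, do not use this).
PLATFORMS = ["twitter", "instagram", "facebook", "snapchat",
             "youtube", "linkedin", "reddit", "tiktok"]

def social_media_use_index(df: pd.DataFrame) -> pd.Series:
    """Additive index: number of platforms used, ranging from 0 to 8."""
    return df[PLATFORMS].sum(axis=1)

# Example: a respondent who uses only Facebook, YouTube, and Reddit scores 3.
```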

2.2.2. Attitudes toward AI Technologies

This study focuses on three types of AI technologies: algorithms, facial recognition technology, and driverless passenger vehicles. These three AI technologies, covered in Pew’s American Trends Panel, ranged from embodied (e.g., driverless passenger vehicles) to embedded (e.g., algorithms and facial recognition technology) [26]. This enables a comparison of different types of AI technologies for a more comprehensive, systematic understanding of social media’s influence on perceptions about and attitudes toward AI technologies. The questionnaire first described each AI technology and then asked respondents about their attitudes toward each application in daily life settings.

Attitude toward algorithms was measured using four items. The questionnaire first stated that “computer programs can be trained to review large amounts of information and learn to identify patterns. These programs, called algorithms, are widely used by social media companies to find false information about important topics that appear on their sites.” Respondents were then asked “Computer programs like the ones used by social media companies to find false information could be used for several purposes. Would you favor or oppose the use of computer programs to make final decisions about each of the following?” and rated four items, “which people should be approved for mortgages,” “which patients should get medical treatment,” “which job applicants should move on to a next round of interviews,” and “which people should be good candidates for parole,” on a three-point scale (1 = favor, 2 = oppose, and 3 = not sure). Principal component analysis showed one component with an eigenvalue larger than one, explaining 68.8% of the variance. Responses were recoded so that higher scores indicated more positive attitudes toward algorithms.

Attitude toward facial recognition technology was measured using four items. The questionnaire first stated that “Facial recognition technology can identify someone by scanning their face in photos, videos or in real-time. This technology could be used by police to look for people who may have committed a crime or monitor crowds in public spaces.” Respondents were asked “Facial recognition technology could be used for a number of purposes. Would you favor or oppose the use of facial recognition technology for each of the following purposes?” and rated four items, “companies automatically tracking the attendance of their employees,” “social media sites automatically identifying people in photos,” “retail stores enhancing credit card payment security by confirming account holders at checkout,” and “apartment buildings tracking who enters or leaves their buildings,” on a three-point scale (1 = favor, 2 = oppose, and 3 = not sure). Principal component analysis showed one component with an eigenvalue greater than one, explaining 52.7% of the variance. Responses were recoded so that higher scores indicated more positive attitudes toward facial recognition technology.

Attitude toward driverless passenger vehicles was measured using four items. The questionnaire first stated that “Driverless passenger vehicles, sometimes called self-driving cars, are equipped with software allowing them to operate with computer assistance. In the future, driverless passenger vehicles are expected to be able to operate entirely on their own without a human driver.” Respondents were asked “The technology used to operate driverless passenger vehicles could be used for a number of purposes. Would you favor or oppose the use of this technology in each of the following purposes?” and rated four items, “taxis and ride-sharing vehicles,” “18-wheeler trucks,” “buses for public transportation,” and “delivery vehicles,” on a three-point scale (1 = favor, 2 = oppose, and 3 = not sure). Principal component analysis showed one component with an eigenvalue larger than one, explaining 62.3% of the variance. Responses were recoded so that higher scores indicated more positive attitudes toward driverless passenger vehicles.
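All three attitude scales follow the same scoring pattern: recode the three response options so that higher values are more favorable, check unidimensionality with a principal component analysis, and combine the four items into an index. The sketch below illustrates this pattern for the algorithm items; the item names and the exact recode (oppose = 1, not sure = 2, favor = 3) are assumptions, as the paper does not spell out the recoding scheme.

```python
import numpy as np
import pandas as pd

# Hypothetical item names; the same function serves the facial recognition
# and driverless passenger vehicle scales with their own four items.
ALG_ITEMS = ["alg_mortgage", "alg_treatment", "alg_interview", "alg_parole"]

# Assumed recode of 1 = favor, 2 = oppose, 3 = not sure so that
# higher scores indicate more positive attitudes.
RECODE = {2: 1, 3: 2, 1: 3}

def attitude_index(df: pd.DataFrame, items: list[str]) -> pd.Series:
    coded = df[items].replace(RECODE)
    # Unidimensionality check: eigenvalues of the item correlation matrix;
    # a single eigenvalue > 1 supports a one-component solution.
    eigenvalues = np.sort(np.linalg.eigvalsh(coded.corr().to_numpy()))[::-1]
    print("eigenvalues:", np.round(eigenvalues, 2))
    return coded.mean(axis=1)  # averaging assumed; a sum differs only in scale
```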

2.2.3. Perceived AI Fairness

One item was used to assess perceived AI fairness. Respondents were asked “Do you think it is possible or not possible for people to design artificial intelligence computer programs that can consistently make fair decisions in complex situations?” on a three-point scale (1 = possible, 2 = not possible, and 3 = not sure). The responses were recoded so that higher scores indicated greater perceived AI fairness.

2.2.4. Perceived AI Threat

Six items were used to measure perceived AI threat. Respondents were asked “How excited or concerned would you be if artificial intelligence could do each of the following” for six items: “know people’s thoughts and behaviors,” “perform household chores,” “make important life decisions for people,” “diagnose medical problems,” “perform repetitive workplace tasks,” and “handle customer service calls.” Responses were recorded on a five-point Likert scale (1 = very excited, 5 = very concerned). Principal component analysis showed one component with an eigenvalue larger than one, explaining 54.5% of the variance. Responses were averaged to create an index for perceived AI threat, such that higher scores indicated greater perceived AI threat. A follow-up open-ended question asked about the main reason respondents were more concerned than excited about the increased use of artificial intelligence in daily life. The top five perceived threats were loss of human jobs (19%); surveillance, hacking, and digital privacy (16%); lack of human connection and qualities (12%); AI getting too powerful and outsmarting people (8%); and people misusing AI (8%).
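Under the same hypothetical naming scheme, the two perception measures reduce to a single-item recode and a six-item mean, as sketched below. The recode direction (possible = 3, not sure = 2, not possible = 1) is an assumption consistent with “higher scores indicated greater perceived AI fairness.”

```python
import pandas as pd

# Assumed recode of 1 = possible, 2 = not possible, 3 = not sure.
FAIRNESS_RECODE = {1: 3, 3: 2, 2: 1}

# Hypothetical names for the six excitement/concern items
# (1 = very excited, 5 = very concerned); higher already means more threat.
THREAT_ITEMS = ["ai_thoughts", "ai_chores", "ai_life_decisions",
                "ai_diagnose", "ai_repetitive_tasks", "ai_customer_calls"]

def perceived_fairness(df: pd.DataFrame) -> pd.Series:
    """Single-item measure; higher = greater perceived AI fairness."""
    return df["ai_fair_possible"].replace(FAIRNESS_RECODE)

def perceived_threat(df: pd.DataFrame) -> pd.Series:
    """Mean of the six items; higher = greater perceived AI threat."""
    return df[THREAT_ITEMS].mean(axis=1)
```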

2.2.5. Covariates

Sociodemographic variables including age, gender, race, education, and political ideology from the first wave (wave 93) were included as covariates in the analysis. To control for the potential confounding influence of mass media use, three items assessing the frequency of mass media use for news, also from wave 93, were included as additional covariates. Respondents indicated how often they obtain news from television, radio, and print publications, respectively, on a four-point scale (1 = never, 4 = often).

2.3. Analysis Strategy

H1 through H4 were tested using the PROCESS macro for R, model 6 (i.e., serial mediation; see Figure 1). PROCESS is a computational tool designed to facilitate the implementation of mediation analysis with manifest variables [64]. Given the current study’s focus on examining the mediating roles of perceived AI fairness and threat, PROCESS offers a useful tool for performing the analysis. Research indicates that PROCESS yields results similar to those obtained from structural equation models (SEM) [65]. In the mediation models, 5000 bootstrap samples and 95% bias-corrected confidence intervals were used. The covariates noted above were entered in all models. Analyses were performed for each AI technology. To test the RQ, we compared social media use’s direct and indirect associations with attitudes toward each AI technology using the formula $z = (b_1 - b_2) / \sqrt{SE_{b_1}^2 + SE_{b_2}^2}$ [66]. The strength of the relationships was considered significantly different if the absolute value of the z-score exceeded 1.96.
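PROCESS is a closed macro, but its model 6 logic amounts to three OLS equations plus a bootstrap of the product of paths. The Python sketch below approximates that logic with a percentile bootstrap (the paper reports bias-corrected intervals) and includes the coefficient comparison test used for the RQ. All column names are placeholders, and this illustrates the estimation logic rather than the authors’ actual script.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def serial_indirect(df, x, m1, m2, y, covs):
    """a1 * d21 * b2: the serial indirect effect from the three OLS
    equations of a two-mediator serial model (PROCESS model 6 logic)."""
    a1 = sm.OLS(df[m1], sm.add_constant(df[[x] + covs])).fit().params[x]
    d21 = sm.OLS(df[m2], sm.add_constant(df[[x, m1] + covs])).fit().params[m1]
    b2 = sm.OLS(df[y], sm.add_constant(df[[x, m1, m2] + covs])).fit().params[m2]
    return a1 * d21 * b2

def bootstrap_ci(df, x, m1, m2, y, covs, n_boot=5000, seed=2021):
    """Percentile bootstrap CI for the serial indirect effect."""
    rng = np.random.default_rng(seed)
    estimates = [
        serial_indirect(df.sample(len(df), replace=True, random_state=rng),
                        x, m1, m2, y, covs)
        for _ in range(n_boot)
    ]
    return np.percentile(estimates, [2.5, 97.5])

def z_difference(b1, se1, b2, se2):
    """Compare two coefficients across models [66]; |z| > 1.96
    indicates a significant difference at the .05 level."""
    return (b1 - b2) / np.sqrt(se1**2 + se2**2)
```

In practice, the published PROCESS macro for R (or an SEM package) produces the same point estimates; the sketch simply makes the estimation steps explicit.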

3. Results

Bivariate correlations between variables are summarized in Table 1.

H1 predicted that social media use would be positively associated with perceived AI fairness. The results showed that social media use was associated with higher perceived AI fairness. In terms of covariates, gender (female) and education were each associated with lower perceived AI fairness, while political ideology (liberal) was associated with higher perceived AI fairness. Neither race (White) nor age was associated with perceived AI fairness. Television news use and print media use were each associated with higher perceived AI fairness, while radio news use was not. H1 was supported.

H2 predicted that perceived AI fairness would be negatively associated with perceived AI threat. The results showed that perceived AI fairness was indeed negatively associated with perceived AI threat. H2 was supported.

H3 predicted that perceived AI threat would be negatively associated with attitudes toward AI technologies. The results showed that perceived AI threat was associated with more negative attitudes toward algorithms, facial recognition technology, and driverless passenger vehicles. H3 was supported.

H4 predicted that perceived AI fairness and threat would serially mediate the association between social media use and attitudes toward AI technologies. The results showed that perceived AI fairness and threat serially mediated the association between social media use and attitudes toward algorithms (Figure 1(a)), facial recognition technology (Figure 1(b)), and driverless passenger vehicles (Figure 1(c)); the bias-corrected bootstrap confidence intervals for the indirect effects excluded zero in all three models. The direct associations between social media use and attitudes toward algorithms and driverless passenger vehicles were statistically significant, but the direct association with attitudes toward facial recognition technology was not. The results are summarized in Table 2. H4 was supported.

The RQ asked whether there are differences among the AI technologies in the mediating process proposed in H4. As indicated above, perceived AI fairness and threat mediated social media effects on attitudes toward all three AI technologies. The indirect effect was notably greater for driverless passenger vehicles than for algorithms (.014 vs. .011) and facial recognition technology (.014 vs. .009); moreover, it was significantly larger for algorithms than for facial recognition technology (.011 vs. .009). Overall, the indirect effect was most pronounced for driverless passenger vehicles and least pronounced for facial recognition technology. Additionally, the direct relationship between social media use and attitudes was stronger for driverless passenger vehicles than for algorithms and facial recognition technology (.01 vs. -.009 vs. .008).

4. Discussion

This study is aimed at investigating the effects of social media on people’s attitudes toward three AI technologies and the mediating roles of perceived AI fairness and threat. A secondary analysis of a two-wave nationally representative longitudinal survey was performed, controlling for demographic factors and consumption of mass media content. The results showed that social media use predicted more positive attitudes toward AI indirectly through increased perceived AI fairness and reduced perceived AI threat for all three AI technologies including algorithms, facial recognition technology, and driverless passenger vehicles. This study is among the first to investigate social media’s effects on AI attitudes and to identify the psychological mechanism through perceived AI fairness and threat. The findings provide valuable theoretical and practical implications.

4.1. Interpretation of Findings

As H1 predicted, social media use predicted increased perceived AI fairness, presumably because exposure to a diverse range of content about AI facilitates a more comprehensive and balanced understanding of AI. In line with the HSM, perceived AI fairness was negatively associated with perceived AI threat (H2). Consistent with previous research (e.g., [24, 25]), perceived AI threat was negatively associated with attitudes toward AI technologies (H3). Together, perceived AI fairness and threat served as psychological mechanisms through which social media use influenced attitudes toward AI technologies (H4). In other words, social media promoted positive attitudes toward AI technologies by increasing AI fairness perceptions and reducing AI threat perceptions. This mediating process was statistically significant for all three AI technologies, lending robust support to the hypothesized social media effect and its underlying mechanism, developed based on media effect theories and the HSM.

This finding underscores the importance of considering distinct AI-related perceptions, such as perceived threat and fairness, in understanding how social media influence people’s attitudes toward AI technologies. It also highlights the need to explore the interplay between these factors to gain a more nuanced understanding of the complexities involved in social media’s effects on AI perceptions and attitudes.

The results suggest that the mediating process between social media use and attitudes toward AI through perceived AI fairness and threat varies in effect size across the three technologies. The differences may be due to several factors, such as differences in how the public perceives and interacts with each AI technology [28, 33, 67], variations in the level of transparency and explainability of the AI algorithms used in each domain [29], or the different ethical concerns associated with each technology [68]. Overall, this finding highlights the importance of considering the specific characteristics and contexts of different AI technologies when examining their relationships with social media use, perceived AI fairness and threat, and attitudes. More research is needed to further investigate the reasons behind these variations. Such findings can help inform targeted communication strategies and policies to foster critical attitudes toward AI technologies across various domains.

4.2. Theoretical and Practical Implications

This study makes valuable theoretical contributions by proposing a conceptual framework that integrates media effect theory and the HSM to understand the relationships between social media use, perceived AI fairness, perceived AI threat, and attitudes toward AI technologies. First, this study identified the psychological mechanisms through which social media promote positive attitudes toward AI, drawing upon media effect theories and the heuristic-systematic model of human-AI adoption [16]. This contribution is important, as social media’s effect on attitudes toward AI and its underlying mechanisms have been underexplored in existing research. By identifying this psychological mechanism through perceived fairness and threat, the study contributes to a deeper understanding of how social media shape perceptions and attitudes toward AI technologies.

Second, this study contributes to the heuristic-systematic model of human-AI adoption (HSM; [16, 17]) by recognizing perceived AI threat as an additional crucial element in the systematic processing of AI. While the original HSM emphasizes perceived usefulness and perceived convenience as the primary components of systematic processing, these factors might not fully capture the negative perceptions people hold about AI, such as fear and concerns. As a growing body of research underscores the important role of perceived threat in influencing AI acceptance and adoption, it becomes essential to incorporate this aspect into the HSM framework. This inclusion offers a more comprehensive understanding of the cognitive processes involved in AI acceptance and adoption, providing insights into the nuanced interplay between perceptions and attitudinal and behavioral outcomes.

Third, this study goes beyond previous research by testing the underlying processes across different AI technologies. As AI technologies range from embedded to embodied and elicit distinct perceptions and reactions among the public [26], understanding how perceived fairness and threat link social media use to attitudes toward each technology is critical. This contribution adds nuance and context-specific insights to the understanding of social media effects on attitudes toward AI.

The findings provide important practical implications. First, the findings underscore the importance of social media in fostering attitudes toward AI technologies. Companies developing AI technologies and policymakers can leverage social media to disseminate accurate and reliable information about AI to promote positive attitudes toward beneficial AI technologies (e.g., AI in healthcare). Second, the findings showed that perceived AI fairness and threat acted as mediators between social media use and attitudes toward AI. Efforts to foster attitudes toward certain AI technologies should involve stakeholders such as AI researchers and educational institutions, who can work toward promoting perceived AI fairness and mitigating perceived AI threat through initiatives focused on education, transparency, and critical thinking about these AI technologies. Additionally, companies developing AI technologies should prioritize fairness to enhance public trust and support. Third, the indirect effects of varying sizes for the three AI technologies suggest that AI should not be treated as a monolithic entity; rather, AI technologies should be differentiated in terms of their attributes, underlying technologies, and application domains. Consumer advocacy groups and communication practitioners should carefully consider these nuances when crafting messages and strategies to communicate effectively about AI technologies.

4.3. Limitations

This study has some limitations. First, social media use was not measured again at wave 2, which prevented the inclusion of a lagged term for a stronger longitudinal causal analysis. As a result, causal claims regarding social media’s influence on AI attitudes remain tentative. Future research should employ more rigorous designs, such as multi-wave longitudinal studies or experiments. Second, social media use was measured as the number of different social media platforms people use. Although scholars have argued that the diversity of media content, rather than the amount of exposure time, matters more for media effects [42], this measure did not directly capture the frequency of social media use or the amount of relevant information consumed (cf. Brewer et al., 2024; [69]). Future research may examine whether the frequency of social media use or the time spent on each platform predicts attitudes toward AI technologies in a similar fashion. Third, this study relies on a secondary dataset. While the measures assessing perceived AI threat, perceived AI fairness, and attitudes toward AI technologies exhibit good face and construct validity, they have not been validated by prior studies. More research is needed to develop and validate measures for AI-related perceptions, including perceived threat and fairness. Fourth, while this study focused on three specific AI technologies, it is important to recognize that the landscape of AI is continuously evolving and expanding. The three technologies examined here are not exhaustive, and AI’s integration into various aspects of daily life will continue to grow. As a result, future research should consider comparing a broader range of AI technologies in diverse contexts.

Data Availability

The data used to support the findings of this study are publicly available at https://www.pewresearch.org/american-trends-panel-datasets/.

Conflicts of Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.