Abstract

Artificial intelligence (AI) is a rapidly developing technology with the potential to create previously unimaginable opportunities for our societies. Still, public opinion of AI remains mixed. Since AI has been integrated into many facets of daily life, it is critical to understand how people perceive these systems. The present work investigated the perceived social risk and social value of AI. In a preliminary study, AI's social risk and social value were first operationalized and explored through a correlational approach. Results highlighted that perceived social value and social risk represent two significant and antagonistic dimensions driving the perception of AI: the higher the perceived risk, the lower the social value attributed to AI. The main study considered pretested AI applications in different domains to develop a classification of AI applications based on perceived social risk and social value. A cluster analysis revealed that, in the two-dimensional social risk × social value space, the considered AI technologies grouped into six clusters, with the AI applications related to medical care (e.g., assisted surgery) unexpectedly perceived as the riskiest. Understanding people's perceptions of AI can guide researchers, developers, and policymakers in adopting an anthropocentric approach when designing future AI technologies, prioritizing human well-being and ensuring AI's responsible and ethical development in the years to come.

1. Introduction

Artificial intelligence (AI) is emerging as one of the fundamental forces behind the fourth industrial revolution, promising to redefine societies and create unprecedented opportunities for progress [1]. Especially in the last five years, growing computing power has further expanded AI technologies and their capabilities, which are becoming increasingly relevant in our daily lives, whether we are reading our emails, browsing social media, receiving driving directions, listening to music, or getting product and movie suggestions [1]. AI systems work by ingesting large amounts of labeled training data and searching them for correlations and patterns, which are then used to predict future states. Thanks to these features, AI algorithms can automate repetitive and time-consuming tasks, allowing individuals to focus on more critical activities, whereas generative AI techniques can also create realistic text, images, music, and other media from a simple prompt [2, 3]. Accordingly, scholars have already recognized advancements brought by AI in various areas [4], such as finance [5], retail [6], healthcare [7], and education [8, 9].

Given the pervasiveness and rapid development of AI, in April 2021, the European Commission proposed a set of regulations called the “Artificial Intelligence Act” (AIA; [10]). This proposed legislation is aimed at regulating the deployment of AI systems within the European Union (EU). The main objectives of the AIA are to ensure the ethical development and usage of AI technologies while fostering innovation and competitiveness in the EU market. On June 14, 2023, the European Parliament adopted its negotiating position on the AIA, making the EU the first political entity to move toward officially regulating AI development. The AIA focuses primarily on strengthening rules around data quality, transparency, human oversight, and accountability by proposing a risk-based approach that categorizes AI systems according to their potential societal risks. In particular, the AIA defines four levels of risk: unacceptable, high, limited, and minimal. The higher the risk of an AI system, the more stringent the regulations and requirements, which, for instance, imply extensive documentation (including risk assessments, the datasets used, and technical documentation) to ensure transparency for citizens. The AIA also seeks to prohibit specific AI practices that are harmful or undermine fundamental rights, such as social scoring and real-time remote biometric identification systems, and is aimed at addressing ethical questions in various sectors, ranging from healthcare and education to finance and energy.

The expansion of AI technologies has garnered mainstream attention, making it essential to understand how people perceive and interact with these systems. Social psychology is now starting to explore how people perceive AI technologies [11]. Such knowledge is crucial to facilitate their societal introduction and promote a human-centered design of such applications. However, systematic empirical investigations of people's perceptions of the various AI algorithm-based technologies are still few. Accordingly, the overall goal of the present work is to provide empirical evidence about these perceptions by considering the crucial dimensions of social risk and social value.

2. People’s Perception of AI Technologies: Social Risk and Social Value

The expression “artificial intelligence” is a generic term describing a heterogeneous variety of technologies and applications, which may differ widely from one another. Consequently, to understand how people relate to AI, it is crucial to consider it not as a single entity but as a heterogeneous set of tools that may be perceived differently. Indeed, a recent study suggested that the relationship between people and AI technologies can differ at the individual level and depends significantly on the rapid evolution and variety of such technology [11]. Today, most AI systems interact with humans primarily as support tools. In the future, however, people may interact with AI systems as social actors [11]. Thus, the acceptance of AI technologies will depend on the type of AI system and the kind of interaction people can have with it. For instance, the perception of an AI operating as a teacher might differ from that of an autonomous agent facilitating quality control in industrial production. In this regard, McKee et al. [11] adopted the stereotype content model (SCM) [12] to analyze how people evaluate AI in terms of competence and warmth, the two fundamental dimensions of social perception. The authors found that warmth and competence guide impression formation of AI systems, showing that individuals perceive AI tools that optimize interests aligned with human interests as warmer and AI agents that operate independently of human direction as more competent [11].

Similarly, Gray et al. [13] suggested that AIs are perceived to have a moderate degree of moral agency. However, as AI agents become increasingly expressive and advanced, people may increasingly perceive them as experiential and, in turn, attribute to them capacities for moral evaluation [14]. In this regard, Shank and DeSanti [15] investigated the impact of real-world moral violations committed by AI on human attributions of mind and morality to these systems. They found that when AI systems are involved in real-world moral violations (such as racist and discriminatory judgments), people attribute moral responsibility to the AI entities but not to their programmers. These moral violations led to a significant increase in the perceived culpability of the AI systems, as if they were accountable agents capable of moral judgment and ethical decision-making. Depending on the experimental conditions, the authors also observed a tendency among participants to attribute mental capacities and human-like traits to the AI systems, such as intentionality, consciousness, and emotional states, as if the AI possessed cognitive and emotional aspects akin to human minds [15].

Besides the research mentioned above, some recent opinion polls [16, 17] have suggested that people’s evaluations of AI technologies rely on two fundamental dimensions: perceived social risk and social value. In this regard, one public report [16] investigated the opinions of a sample of U.S. citizens about AI, revealing that 37% of them were particularly concerned about it, whereas 45% of the respondents reported being equally concerned and excited. Concerns included the potential loss of jobs, human and social connections, privacy considerations, and the prospect that AI’s capabilities might surpass human skills. Additionally, 18% of the sample reported being excited about AI development, believing that this technology will bring societal improvements to daily life, such as increasing efficiency in various areas, saving time in repetitive tasks, and increasing workplace safety.

Similarly, a European survey [17] highlighted people's generally positive outlook toward machine learning and AI while raising concerns about their potential societal risks. On the one hand, participants expressed concerns about the potential risks and negative consequences of machine learning technologies, relating to data privacy, security, and the potential for algorithmic biases. The lack of transparency in how machine learning models make decisions also contributed to risk perceptions. On the other hand, the report highlighted the potential benefits of AI. Many respondents recognized the potential for machine learning to improve various aspects of society, such as healthcare, transportation, education, environmental protection, and inequality reduction. The ability of AI to automate tasks and enhance efficiency was seen as a positive factor contributing to its social value [17].

These surveys are consistent with previous research on technology acceptance ([18]; see [19] for a review), which extensively demonstrated that the willingness to adopt a new technology depends on its perceived usefulness and value, but also on the perceived risks associated with adopting it, which can negatively influence the willingness to use it. Perceived usefulness and ease of use are critical factors influencing technology adoption [18]. However, people carefully assess the potential benefits, at both the individual and societal levels [20], that a new technology offers in everyday life before actually adopting it.

Perceived value is generally defined as an individual assessment comparing the benefits (economic, social, and interpersonal) derived from a product, service, or relationship with the perceived costs (i.e., price, time, effort, risk, and convenience) [21]. However, risk perception also influences technology adoption [22]. Individuals might be hesitant to adopt a new technology if they perceive it as risky or uncertain, particularly regarding its potential negative consequences or its compatibility with their existing habits and lifestyle. The fear of losing privacy, financial security, or control over one's information can lead to reluctance to embrace novel technologies (e.g., Ratten [20]). The type of risk can have varying impacts on the overall perceived risk or the intention to use. For example, in information technology, people identify perceived risk mainly in financial, social, and privacy terms, with privacy risk having the strongest impact on technology acceptance [23, 24].

Considering this evidence and the empirical findings from public reports [16, 17], as well as the guidance proposed in the EU AIA, we speculate that the social risk and social value attributed to AI agents may represent two primary evaluative dimensions of AI acceptance that deserve further investigation. Indeed, by adopting a qualitative approach, previous reports [16, 17] focused on the perception of AI in general. However, AI describes a variety of possible applications, differing in context of use and functionality. Therefore, the present research is aimed at providing, for the first time, a comprehensive, systematic, and quantitative analysis of people's perceptions of different types of AI-based technologies, yielding a new classification of AI applications based on perceived social risk and value.

3. Overview

Since the literature offers no specific measures for assessing risk and value perceptions toward AI, the goal of the preliminary study was first to operationalize these two dimensions by developing reliable items. In doing so, we also investigated the relationship between the social risk and social value attributed to artificial intelligence in general, adopting a cross-sectional approach. In the main study, 95 different AI-based software applications adopted in several contexts of daily life were pretested. Perceived social risk and value were then investigated for the applications of which people reported greater awareness in the pretest. Participants' evaluations were cluster analyzed, and a two-dimensional space based on social risk and social value scores was created.

Both studies were conducted after receiving approval from the local Commission of the Department of Psychology for minimal risk studies. All procedures performed in the studies were in accordance with the APA ethical guidelines, the ethical principles of the Declaration of Helsinki, and the Oviedo Convention on human rights and biomedicine. Full informed consent was obtained before participants started the studies.

4. Preliminary Study

4.1. Method
4.1.1. Participants and Procedures

Data were collected through the Prolific web platform using the Qualtrics survey web system. The survey was administered in English, and participants received £0.60 for their participation. For the sake of reliability, we aimed to collect a large sample, which guarantees high power for small and medium correlations and stability of the correlation estimates [25]. At the beginning of the survey, participants were asked for their consent and were informed that the study would last about 5 minutes.

We included one attention check item to obtain a reliable sample of respondents and identify participants who failed to pay close attention (i.e., “Please answer 6 to this question”; see Oppenheimer et al. [26]). None of the participants failed this check, and the final sample was composed of 291 participants (120 females, 170 males, and one other; age range 18–65), mainly from Poland (25.5%), Portugal (13.8%), Italy (11.7%), and the UK (12.8%), with the remaining 36.2% from other countries all over the world (see Table A in the Supplementary Materials for the complete list).

In terms of educational level, 4.8% of the participants reported having less than a high school degree, 25.8% were high school graduates, 19.2% had some years of college but no degree, 5.8% had an associate degree, 27.8% had a bachelor’s degree, 13.1% a master’s degree, 1.4% a doctoral degree, and 2.1% a professional degree.

4.1.2. Measures

(1) Overall Knowledge of Artificial Intelligence Technologies. Although the term “artificial intelligence” was coined in 1956 (see Moore [27]), AI has become popular only recently, and not all people may be aware that such technologies are already present in many of the applications and devices they use daily. Therefore, as a control variable, we created three ad hoc items to assess participants' overall knowledge of AI. The first item was “How much do you think you know about Artificial Intelligence?” (1 = no information at all, 7 = know very well). The other two items were “Do you think you can distinguish between different Artificial Intelligence technologies?” and “Can you distinguish when a computer system uses Artificial Intelligence technology?” (1 = not at all, 7 = very much).

(2) Perceived Social Risk. Three items were created to assess participants’ perceptions of social risk regarding AI. Participants evaluated to what extent they thought AI was (1) risky, (2) a risk for privacy and/or personal data violation, and (3) a risk for our society (e.g., for the labor market, public health, and education).

(3) Perceived Social Value. We created three items for assessing participants’ perception of AI as socially valuable. The items were “To what extent do you think artificial intelligence (1) is valuable for our society (e.g., for the labor market, public health, and education), (2) is helpful for the progress of our society, and (3) increases the living standard.”

The six items were anchored with a 5-point scale (1 = not at all, 5 = very much).

5. Results

Before conducting the analyses, data were inspected for normality and outliers. Standardized residuals, skewness, and kurtosis values were all <1.0, indicating a normal distribution of the residuals [28].

Exploratory factor analyses (EFAs) were conducted with Jamovi (version 1.6) on the six items adopted for measuring the perceived social value and social risk of AI technologies. The maximum likelihood extraction method was combined with an oblimin rotation, and the number of factors was extracted based on parallel analysis. The Kaiser–Meyer–Olkin values for individual items were all > 0.70, above the recommended threshold of 0.6 [29] and well above the acceptable limit of 0.5 [30], and Bartlett's test of sphericity indicated that correlations between items were sufficiently large for EFA. Consistent with our hypothesis, the EFA revealed a bifactorial structure with two distinct factors. Confirmatory factor analyses were also performed; all fit indices were in line with the cut-off criteria commonly used in psychological research (see Hu and Bentler [31]).
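Although the analyses reported here were run in Jamovi, the same EFA pipeline can be sketched in Python with the open-source factor_analyzer package; the data file and item names (r1–r3 for risk, v1–v3 for value) are hypothetical placeholders, not the study's actual variables.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import (
    calculate_bartlett_sphericity,
    calculate_kmo,
)

df = pd.read_csv("preliminary_survey.csv")  # hypothetical data file
items = df[["r1", "r2", "r3", "v1", "v2", "v3"]]  # risk and value items

# Sampling adequacy: Bartlett's test of sphericity and per-item KMO
chi2, p = calculate_bartlett_sphericity(items)
kmo_per_item, kmo_total = calculate_kmo(items)

# Maximum likelihood extraction with oblimin rotation; the number of factors
# is fixed at 2 here, whereas the study derived it via parallel analysis
efa = FactorAnalyzer(n_factors=2, method="ml", rotation="oblimin")
efa.fit(items)
print(efa.loadings_)  # expected: risk items on one factor, value items on the other
```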

Cronbach’s alphas were ≥ 0.78 for all scales. Given the adequate internal consistency, we calculated composite scores for each scale, and correlational analysis was performed on all our variables (Table 1).
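For readers who wish to reproduce the reliability step, a minimal sketch of Cronbach's alpha and of the composite scores follows, using the same hypothetical file and item names as above.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for an (n_respondents x n_items) frame."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the sum score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

df = pd.read_csv("preliminary_survey.csv")  # hypothetical data file
print(cronbach_alpha(df[["r1", "r2", "r3"]]))  # social risk scale
print(cronbach_alpha(df[["v1", "v2", "v3"]]))  # social value scale

# Composite scores: mean of the three items on each scale
risk_score = df[["r1", "r2", "r3"]].mean(axis=1)
value_score = df[["v1", "v2", "v3"]].mean(axis=1)
```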

Overall, the correlation analysis suggested that individuals reported a medium knowledge of how AI algorithms work, with a mean not statistically different from the scale midpoint (i.e., 4). Prior knowledge of how AI works was associated with a more positive perception of AI, with higher levels of attributed social value but not of social risk. People also seemed to perceive AI as more socially valuable than risky. This aligns with other evidence [17] that people perceive AI differently depending on their familiarity with it. Moreover, confirming previous literature on technology acceptance [32], participants' gender was associated with negative evaluations of AI, with female participants reporting lower knowledge of AI functioning and attributing lower social value to AI.
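The participant-level tests described above (the comparison against the scale midpoint, the correlations, and the value-versus-risk comparison) could be reproduced along the following lines; column names are again hypothetical.

```python
import pandas as pd
from scipy import stats

df = pd.read_csv("preliminary_survey.csv")  # hypothetical data file
risk_score = df[["r1", "r2", "r3"]].mean(axis=1)
value_score = df[["v1", "v2", "v3"]].mean(axis=1)

# Self-reported AI knowledge (7-point scale) against the midpoint of 4
t_mid, p_mid = stats.ttest_1samp(df["ai_knowledge"], popmean=4)

# Knowledge vs. attributed value, and the risk-value association
r_kv, p_kv = stats.pearsonr(df["ai_knowledge"], value_score)
r_rv, p_rv = stats.pearsonr(risk_score, value_score)

# Is AI rated more socially valuable than risky overall?
t_vr, p_vr = stats.ttest_rel(value_score, risk_score)
```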

6. Discussion

Overall, these results suggest that the measures we developed for assessing the social risk and value attributed to AI successfully capture these dimensions. Analyses suggest that perceived social value and social risk represent two significant dimensions driving the perception of AI. Furthermore, social risk and social value appear to be two antagonistic evaluative dimensions, being negatively correlated: the higher the perceived risk, the lower the social value attributed to AI, and vice versa. However, AI represents a very heterogeneous set of applications, often very different from each other and employed in widely varying contexts of use. Consequently, to gain a more detailed understanding of how individuals perceive AI, it is crucial to investigate whether the relationship between social risk and social value varies across specific applications. The main study addresses this question, aiming to identify which AI applications are considered the riskiest and the most valuable to society.

7. Main Study

Recent literature suggests that people may have different perceptions of AI technologies depending on the context of their adoption [11]. Indeed, people seem to have different levels of trust in AI systems depending on the type of application [17]. This suggests that perceptions of AI are shaped not only by the technology itself but also by the context in which it is used. Therefore, the present study is aimed at developing a classification of AI-based applications already used in various daily life contexts.

8. Pretest

Before conducting the main study, we ran a pretest to identify the AI-based applications most familiar to people, thus narrowing the pool of objects to be evaluated in the main study.

Participants were recruited through a snowball sampling technique among Italian nonpsychology students. All participants gave their full consent to participate. An online survey presented them with 95 different AI-based software applications adopted in different domains (e.g., entertainment, medical, educational, and productivity). Given the large number of applications considered, and to avoid participant fatigue, the exemplars of AI-based applications were divided into three lists, and each participant randomly evaluated one-third of the considered applications. For each application, participants were asked to indicate whether they were aware of the existence of the specific application (yes/no) and whether they believed it was based on AI (yes/no). At the end of the task, participants were thanked and debriefed.

To determine whether an application was familiar to respondents, we considered only those that fell in the upper tertile (i.e., >66%) both in terms of familiarity and in terms of awareness that the application is based on AI algorithms. Twenty-five applications emerged as the most familiar and the most widely known by participants to be based on artificial intelligence systems (Table 2; see Table B in the Supplementary Materials for the complete list of applications).
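As an illustration, this selection rule amounts to a simple filter over the pretest summary; the application names and percentages below are made up for demonstration only.

```python
import pandas as pd

# Illustrative pretest summary: one row per application (made-up values)
pretest = pd.DataFrame({
    "app": ["voice assistant", "plant recognition", "smart thermostat"],
    "pct_aware": [0.91, 0.48, 0.72],      # share aware the app exists
    "pct_known_ai": [0.83, 0.40, 0.55],   # share aware it is AI-based
})

# Keep only apps above the 66% cut-off on both criteria
familiar = pretest[(pretest["pct_aware"] > 0.66) & (pretest["pct_known_ai"] > 0.66)]
print(familiar["app"].tolist())  # -> ['voice assistant']
```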

Two independent raters then evaluated the selected applications to assess similarities. In particular, the restaurant picker app, the online store picker app for smartphones, and applications able to recommend commercial products or music playlists based on personal preferences were considered similar in terms of functionality. Thus, for the main study, they were grouped and labeled “Products and services recommendations (restaurants, music, app, etc.).” A similar approach was adopted for smart e-mail assistants and smart e-mail clients, considered two exemplars of the same technology, and for smartphone face recognition unlock and face recognition software, deemed the same application of AI-based face recognition systems. Smart plant recognition systems were considered a particular case of smart photo album managers. Finally, applications for human language understanding and speech-to-text applications were evaluated as the same application. These applications were therefore grouped in the main study.

Following this procedure, the remaining applications totaled eighteen. Additionally, we decided to include in the list seven other AI-based applications that we judged to be highly relevant to modern society and to our aims: (1) smart traffic management systems, (2) students’ performance evaluation, (3) assisted surgery, (4) medical diagnostics, (5) drug combination, (6) employees’ recruitment and selection, and (7) thief and suspicious person recognition. Thus, 25 AI applications were used in the main study and evaluated in terms of perceived social risk and social value.

9. Method

9.1. Participants

Data were collected through the Prolific web platform using the Qualtrics survey web system. The questionnaire was administered in English. Overall, 399 participants completed the survey (232 females, 159 males, seven nonbinary, and one who preferred not to answer). Regarding nationality, 49.4% of participants were from the US, whereas 50.6% were from the UK (see Table C in the Supplementary Materials). In terms of educational level, 3% of the participants reported having less than a high school degree, 12.3% were high school graduates, 23.4% had some years of college but no degree, 9% had an associate degree, 38.7% had a bachelor's degree, 11.1% a master's degree, 2.3% a doctoral degree, and 0.3% a professional degree. All participants consented to data processing and participation in the study and received €0.92 for their participation.

9.2. Procedure and Materials

Given the large number of applications to be evaluated and to avoid a high dropout rate, following the procedure adopted in the preliminary study, the 25 AI-based applications were divided into two lists so that each participant randomly evaluated approximately half of the considered applications. After providing their demographic data, participants were asked to report their general knowledge of artificial intelligence technologies through the following item: “How much do you think you know about artificial intelligence?” (1 = no knowledge to 7 = extensive knowledge). Given that participants in the preliminary study appeared to lack in-depth knowledge about the technologies on which AI is based, a general description of AI functioning was presented after this item. Then, a brief description of the functionality and typical usage context was provided for each application (see p. 10 in the Supplementary Materials for the complete list of descriptions). All descriptions were approximately the same length, ranging from 82 to 105 words, and were based on descriptions provided by the official websites of commercial AI-based applications.

To avoid presentation bias, the selected applications were presented in a random order, and participants were asked to evaluate them in terms of perceived social risk and perceived social value using the scales tested in the preliminary study.
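A minimal sketch of how this split-list design with randomized presentation order might be implemented, if the survey platform's logic were reproduced in code (list membership and app labels are purely illustrative):

```python
import random

# The 25 pretested applications; names are placeholders
apps = [f"app_{i:02d}" for i in range(1, 26)]
random.shuffle(apps)

# Two fixed lists so that each participant rates about half of the apps
list_a, list_b = apps[:13], apps[13:]

def apps_for_participant(pid: int) -> list[str]:
    """Assign a participant to one list and randomize presentation order."""
    chosen = list(list_a if pid % 2 == 0 else list_b)
    random.shuffle(chosen)  # random order avoids presentation bias
    return chosen

print(apps_for_participant(7))
```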

10. Results

10.1. Preliminary Analyses

As in the preliminary study, exploratory and confirmatory factor analyses (reported in Table D in the Supplementary Materials) were carried out, considering the six evaluation items (social risk and value) for each application.

We followed the same approach described above. The Kaiser–Meyer–Olkin values for individual items were all ≥ 0.62 for all the apps. Additionally, Bartlett's tests of sphericity indicated that correlations between items were sufficiently large for EFA for all the tested applications. CFAs were also performed, and all fit indices were in line with the cut-off criteria commonly used in psychological research [31].
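As with the EFA, the CFA step can be sketched outside Jamovi/SPSS, for instance with the third-party semopy package for Python; the model syntax, file, and item names below are hypothetical, and the exact set of fit indices reported depends on the package version.

```python
import pandas as pd
import semopy

df = pd.read_csv("main_study_app1.csv")  # hypothetical per-app item data

# Two-factor model mirroring the EFA structure (lavaan-style syntax)
model_desc = """
risk  =~ r1 + r2 + r3
value =~ v1 + v2 + v3
"""
model = semopy.Model(model_desc)
model.fit(df)

# Fit statistics; semopy reports chi-square, CFI, TLI, RMSEA, among others
print(semopy.calc_stats(model).T)
```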

Both EFA and CFA confirmed the bifactorial structure of the scale, with the six items loading on their respective factors, further supporting the validity of the scales developed. Therefore, the three items of each scale were averaged to form a reliable index of social risk and social value for each considered application.

10.2. Participant-Level Correlation Analyses

To test the associations among the considered variables, the item scores of perceived social risk and perceived social value for each application were aggregated into two composite scores per app, which were then averaged to obtain two overall indices of perceived social risk and value. Correlation coefficients, computed using these means at the participant level, are reported in Table 3 (see also Table E in the Supplementary Materials for correlations between risk and value for each application).
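This two-step aggregation can be sketched as follows, assuming a hypothetical long-format data set with one row per participant × application rating and composite columns already averaged over items.

```python
import pandas as pd

# Hypothetical long-format data: columns "pid", "app", "risk", "value",
# where "risk" and "value" are each the mean of the three scale items
long_df = pd.read_csv("main_study_long.csv")

# Composite scores per participant and application...
per_app = long_df.groupby(["pid", "app"])[["risk", "value"]].mean()

# ...then averaged into one overall risk index and one overall value index
# per participant, as used for the participant-level correlations (Table 3)
overall = per_app.groupby("pid")[["risk", "value"]].mean()
```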

Crucially, the correlation analysis confirmed what emerged in the preliminary study: perceived social value and social risk related to AI technology were negatively associated. Moreover, people with higher knowledge of AI functioning reported a higher perception of social value, whereas a higher level of education was associated with a higher perception of risk. This aligns with other evidence [17] that people perceive AI differently depending on their familiarity and education. Perception of social value was also associated with participants' age: the older the participants, the lower the social value they attributed to AI.

Finally, confirming previous studies [33], participants’ gender was associated with negative evaluations of AI, with female participants reporting lower knowledge of AI functioning.

10.3. App-Level Cluster Analyses

Social risk and social value scores for each app were averaged across participants to test how the perceptions of the apps fell along the dimensions considered in the current study. According to these means, the 25 apps were arrayed in a two-dimensional social value × social risk space that was examined using cluster analyses, with the apps as the unit of analysis. Again, perceptions of social risk and social value emerged as negatively correlated.

Two types of cluster analysis were performed using SPSS (v.26). Following Hair et al. [34], we first conducted hierarchical cluster analyses using Ward's method [35], which minimizes within-cluster variance, to determine the most appropriate number of clusters. Following Blashfield and Aldenderfer's [36] guidance, we examined the agglomeration statistics to decide the number of clusters that best reflected the data, and a six-cluster solution emerged. We then conducted a k-means cluster analysis (see Table F in the Supplementary Materials) to determine which apps fell into which clusters. Results are displayed in Figure 1 and Table 4.
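Although these analyses were run in SPSS, the two-step procedure (Ward's hierarchical clustering to choose the number of clusters, then k-means on the retained solution) can be sketched in Python with SciPy and scikit-learn; the long-format frame is again a hypothetical stand-in for the study's data.

```python
import pandas as pd
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.cluster import KMeans

# Hypothetical long-format frame as above; one mean risk/value point per app
long_df = pd.read_csv("main_study_long.csv")
app_means = long_df.groupby("app")[["risk", "value"]].mean()

# Step 1: Ward's hierarchical clustering; the merge distances in Z (the
# agglomeration schedule) are inspected to choose the number of clusters
Z = linkage(app_means.to_numpy(), method="ward")
hier_labels = fcluster(Z, t=6, criterion="maxclust")

# Step 2: k-means with the retained six-cluster solution
km = KMeans(n_clusters=6, n_init=25, random_state=0).fit(app_means)
app_means["cluster"] = km.labels_
print(app_means.sort_values("cluster"))
```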

One-way ANOVAs were used to compare the cluster centroids on social risk and social value; additionally, paired t-tests were conducted to determine whether each cluster was evaluated significantly higher on one dimension or the other. Two clusters (4 and 6) were excluded from these analyses because they included only one app each. Cluster 1 scored the highest on social value, whereas clusters 2 and 3 scored the highest on social risk. Interestingly, of the four clusters considered in the analyses, two were higher on social value than social risk (i.e., 1 and 5), and the remaining two scored higher on social risk than social value (i.e., 2 and 3). Thus, as expected, perceptions of AI applications can be described in terms of social risk and social value, capturing the heterogeneity of AI technology-based applications and the different nuances that characterize their social perception.
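A sketch of these follow-up tests, continuing the clustering example above (the exclusion of single-app clusters mirrors the procedure in the text; all labels are illustrative):

```python
from scipy import stats

# app_means holds "risk", "value", and "cluster" per app (see previous sketch);
# single-app clusters are excluded, as in the analyses reported here
multi_app = [c for c in app_means["cluster"].unique()
             if (app_means["cluster"] == c).sum() > 1]

# One-way ANOVA comparing clusters on perceived social risk
f_risk, p_risk = stats.f_oneway(
    *[app_means.loc[app_means["cluster"] == c, "risk"] for c in multi_app]
)

# Paired t-test within each cluster: value vs. risk ratings of its apps
for c in multi_app:
    sub = app_means[app_means["cluster"] == c]
    t, p = stats.ttest_rel(sub["value"], sub["risk"])
    print(f"cluster {c}: t = {t:.2f}, p = {p:.3f}")
```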

11. Discussion

The goal of the present study was to create a classification of the best-known applications based on AI technologies already employed in many aspects of our daily life and to identify how people perceive them in terms of risk and value to society.

To this end, in the preliminary study, we developed two scales that captured social risk and social value well. Statistical analyses showed that the developed items were reliable and that these evaluative dimensions are negatively correlated. Overall, the preliminary study offers some insights into how people assess AI apps.

In the main study, we focused on the applications most familiar to people, which were then classified in a two-dimensional social risk × social value space. Preliminary analyses highlighted significant correlations between gender and AI knowledge and between educational level and the perception of social risk. In other words, women reported having lower knowledge of AI technology than men. This is in line with the findings of a recent meta-analysis on the gender gap in technology literacy [33], whose authors concluded that gender differences in ICT literacy, although small, are significant.

As for the association between educational level and perceptions of social risk, we speculate that higher education can promote greater awareness of how algorithms work, leading to increased risk perception, probably because of increased awareness of how personal data are used in model training. This result is also in line with the association between age and perceived value: younger participants attributed higher levels of social value to AI-based technologies. We believe this could be because younger generations are more familiar with digital technologies (see [37]) and thus better able to grasp not only their risks but also their potential benefits to society.

Considering the main results, the AI-based applications perceived as the riskiest were those related to health and medical care (i.e., cluster 2: assisted surgery, drug combination, and medical diagnostics). People attribute some social value to these applications while at the same time attributing to them the highest levels of risk to society. This is surprising, given that numerous studies show how medical artificial intelligence can outperform human doctors. Indeed, it has been observed that AI exhibits a remarkable capacity for expert-level performance, rendering cost-effective and scalable healthcare solutions. For instance, AI systems have demonstrated superior diagnostic capabilities in detecting heart disease compared to cardiologists [38, 39]. In this regard, an AI system developed by IBM was compared with human experts on 1,000 cancer diagnoses and found treatment options that doctors had missed in 30% of the cases [40, 41]. Today, AI-based mobile apps can accurately detect skin cancer [42], and algorithms have shown promising results in identifying eye diseases [43]. Longoni et al. [41] proposed that individuals may be more reluctant to rely on medical care delivered by AI because they perceive that being cared for by an algorithm, instead of a human doctor, can neglect one's unique characteristics, circumstances, and symptoms. We speculate that these technologies are also perceived as risky because the human contact that governs the doctor-patient relationship, which goes beyond mere diagnosis, is missing. This is likely because people are not sufficiently aware that these applications are used to assist, not replace, doctors. Another interpretation could be that when people perceive a technology as risky, its value is discounted, suggesting that social risk and social value may be two antagonistic perceptual dimensions, as indicated by the correlational analyses. Subsequent studies should, therefore, explore this possibility.

Another relevant result concerns the perception of chatbots for political propaganda. This technology is perceived as particularly valuable to society and of negligible risk. Indeed, one of the significant benefits of AI chatbots is that they can help voters make informed decisions by providing them with accurate and unbiased information about political candidates and policies. Nevertheless, several studies have shown that they are often used to spread fake news and misinformation, manipulate public opinion, and create echo chambers, where people are only exposed to views that align with their beliefs. The spreading of fake news and the creation of echo chambers represent a critical risk for society [44]. Considering the results obtained in both the preliminary and the main studies, we speculate that people still have a middle-to-basic understanding of AI technology and might not be aware of the potential impact that AI can have on political propaganda. People might not fully grasp how these algorithms can contribute to echo chamber creation by reinforcing existing beliefs and opinions. Moreover, individuals are susceptible to cognitive biases that can reinforce their pre-existing beliefs and values [45, 46]. By presenting information that aligns with these biases, AI chatbots can be perceived as a helpful tool for creating a more informed electorate; people thus appreciate the role these systems play in informing individuals while underestimating their potential risks.

However, political chatbots are not the only applications considered beneficial to society. Clusters 1 and 5 encompass the most common AI-based applications that people use daily in several activities (e.g., intelligent photo management, product and service recommendations, and smart e-mail assistants). Most of these features were already present in digital platforms that people used before the massive advent of AI, such as social media and search engines. This may be why people view these AI-based tools as more socially valuable than risky: they are simply more familiar with them.

Statistical analyses also revealed that applications for public security (e.g., automatic facial recognition and thief recognition) are perceived as more socially valuable than risky. Indeed, by quickly identifying suspicious activities, these tools can act as a proactive defense mechanism.

We speculate that people perceive this specific family of AI applications as a deterrent to potential thieves, who may be less likely to attempt theft or fraud. Moreover, in the digital age, protecting personal information is crucial. AI theft detection tools can help safeguard sensitive data, such as credit card details or social security numbers. As a result, we speculate that the social value ascribed by participants to such applications may be driven by the idea that AI can make their personal data, considered one of the most valuable assets in modern society [47], more secure. However, this result is somewhat paradoxical: the same data that AI systems are supposed to protect are also used to train the algorithms on which they are based. In this regard, Doberstein et al. [48] found that concerns over transparency, data safety, and authoritarianism were the most frequent themes associated with AI tools. We speculate that these perceptions may again depend on people's incomplete understanding of how AI works; the complexity of its algorithms may lead to ambivalent perceptions.

12. Limits and Future Research

The present work offers a first empirical exploration of the psychological dimensions shaping people's perceptions of modern AI technology and its eventual acceptance. However, it has some limitations that open avenues for future research. First, we adopted a correlational design, which does not allow us to establish the causality of the relationships among the considered variables. Therefore, future studies should adopt an experimental approach, manipulating levels of risk and value to examine their effect on AI technology acceptance. It would also be relevant to adopt a longitudinal approach to verify whether greater knowledge and diffusion of these systems could, over time, change the perception of their social risk and value.

Furthermore, the participants in the present studies were mostly from Western cultures. It is plausible that individuals from other cultures where AI is widespread and already adopted in different social contexts (e.g., China) may have different perceptions. Therefore, future research should include cross-cultural studies to offer a more complete overview of the investigated dimensions.

Finally, in the present study, we considered only perceived social value and social risk as dimensions influencing the perception of AI-based technology. However, there may be other proximal factors (e.g., emotions, social influence, and personality traits) to consider for understanding the overall acceptance of AI apps. Future studies could also manipulate different levels of social risk and value and verify their impact on various psychological consequences, such as the perception of threat elicited by AI or other cognitive processes.

13. Conclusion

Artificial intelligence is a rapidly advancing technology that is changing how we live and work. Despite its many potential benefits [49], the public perception of AI is often mixed, with some people viewing it as a threat to employment, privacy, and even human existence [50, 51]. These findings align with the results of the present work, which suggest that social value and social risk perceptions are two relevant evaluative dimensions for understanding how people perceive today's rising AI technologies. One reason for these mixed perceptions may be the media's portrayal of AI in popular culture, which often depicts AI as a single technology, either helpful or dangerous. This can lead to unrealistic expectations of AI capabilities and fears of a technology uprising. In this regard, the results of the present work suggest that the perception of AI is highly dependent on the context of use and the specific characteristics of each AI-based application, which are thus likely to influence whether it is perceived as a risky or socially valuable tool.

Previous research has also shown that people's attitudes toward AI are influenced by their knowledge of the technology [52]. Our findings showed that familiarity with AI functioning is associated with a higher perception of social value. In this regard, one study [52] found that participants with a higher understanding of AI were more likely to view it positively and to believe it could positively impact society. Moreover, our results suggest that gender was associated with knowledge of AI systems, with women reporting lower technological knowledge and a lower perception of value. These associations align with previous literature showing that, compared to women, men typically express higher levels of confidence and technological proficiency [33, 53]. This discrepancy may stem from differences in early exposure to technology and from societal expectations. Closing the gender gap in technical skills is imperative for gender equality and for ensuring that both men and women can fully participate in the AI era. Therefore, policymakers must invest in the digital literacy skills of the new and current generations to foster a greater understanding of how AI tools work and their informed adoption in support of everyday activities. The development of AI, like that of any other new technology, depends on its public acceptance. We propose that, to foster the perception of AI's benefits to society, we should start referring to AI as augmented intelligence instead of artificial intelligence. In this way, intelligent tools may be more widely accepted because the human dimension is not lost: AI-based tools will be perceived not as replacing people in their jobs or activities but as additional support tools.

It is also important to consider people’s concerns about the ethical implications of AI, such as bias and discrimination in decision-making algorithms. Studies have shown that people are more likely to view AI as fair and unbiased when they believe that it has been designed transparently and with the consideration of ethical issues [54]. Therefore, technology developers and marketers must address these sociopsychological dimensions effectively and transparently to encourage widespread adoption. They should emphasize the tangible benefits and ease of use while also working to alleviate perceived risks and fears through transparent communication, data security measures, and support services.

In this regard, the present work provides a first classification of AI applications that future studies can manipulate to deepen the role of perceived social risk and social value in shaping the acceptance of AI technologies and to understand why people perceive some AI applications as risky and others as not.

Our work also provides a valuable starting point for developers, designers, researchers, and institutions in defining an anthropocentric development of AI technologies. As technology advances and AI becomes more pervasive, future research should keep integrating sociopsychological perspectives into machine learning development to foster positive interactions between people and AI tools. Indeed, we firmly believe that understanding the psychological perception of AI can help researchers and developers in adopting an anthropocentric approach when designing the technologies that will be developed in the near future.

Data Availability

The data that support the findings of this study are available from the corresponding author, upon reasonable request.

Conflicts of Interest

There are no relevant financial or nonfinancial competing interests to report.

Acknowledgments

The present work was supported by the European Association of Social Psychology (EASP) Seedcorn Grant.

Supplementary Materials

Materials used in the main study, confirmatory factor analyses, and supplementary analyses for both the preliminary and the main studies are reported in the Supplementary Materials.