Abstract

Fake news has been linked to the rise of psychological disorders, the increased disbelief in science, and the erosion of democracy and freedom of speech. Online social networks are arguably the main vehicle of fake news spread. Educating online users with explanations is one way of preventing this spread. Understanding how online belief is formed and changed may offer a roadmap for such education. The literature includes surveys addressing online opinion formation and polarization; however, they usually address a single domain, such as politics, online marketing, health, or education, and do not make online belief change their primary focus. Unlike other studies, this work is the first to present a cross-domain systematic literature review of user studies, methodologies, and opinion model dimensions, together with the orthogonal polarization dimension, focusing on online belief change. We include peer-reviewed works published in 2020 and later found in four relevant scientific databases, excluding theoretical publications that do not offer validation through dataset experimentation or simulation. Bibliometric networks were constructed for better visualization, leading to the organization of the papers that passed the review criteria into a comprehensive taxonomy. Our findings show that a person’s individuality is the most influential force in online belief change. We show that online arguments that balance facts with emotionally evocative content are more efficient in changing users’ beliefs. Polarization was shown to be cross-correlated among multiple subjects, with politics being the central polarization pole. Polarized online networks start as networks with high opinion segregation, evolve into subnetworks of consensus, and settle into polarization around social network influencers. Trust in the information source was demonstrated to be the chief psychological construct driving online users to polarization. This suggests that changing the beliefs of influencers may create a positive snowball effect, changing the beliefs of polarized online social network users. These findings lay the groundwork for further research on using personalized explanations to reduce the harmful effects of online fake news on social networks.

Keywords: fake news; influencers; online belief change; online opinion formation; polarization; social network sites

1. Introduction

Fake news has been linked to the rise of psychological disorders [1], the increased disbelief in science [2], and the erosion of democracy and freedom of speech [3]. Americans believe fake news to be a more severe problem than most of the other critical issues in the country [4]. Lately, deepfake technology [5] has been used to produce videos of artificial intelligence (AI)–created avatars impersonating news anchors reporting fake news [6]. The advancement of deepfake technology is adding to the realism of fake news artifacts, increasing the difficulty for news consumers to discern disinformation from real news. Generative AI is increasingly being used with malicious intent. There is evidence of scammers sending AI-generated fake voice messages to victims’ family members in which an impersonated relative claims to be in immediate danger and requests monetary help [7].

For these reasons, fake news is a hot research subject at the time of this writing [8]. The state of the art mainly targets fake news detection. Initial approaches attempted to use machine learning classifiers over news content and to analyze the news source. However, these methods fell out of favor because they require a level of manual annotation that renders them unusable in practical settings. Moreover, as fake news sometimes spreads on online platforms with viral velocity [9], these classifiers are not well suited to the task. The community identified this problem and has started adopting deep learning techniques to automatically extract features from online news posts. However, these black-box models lack the transparency needed to earn the news consumer’s trust in their generated outputs. Detection explainability and visualization research started to gain momentum [10–12], leveraging explainable AI (XAI) [13]. Even though research on XAI has evolved significantly, it still produces explanations that are too technical and suited to machine learning experts rather than the general audience [14].

More recently, the community seems to be developing an understanding that educating online consumers of fake news may be as crucial as detecting it [15]. In this context, education may require changing consumers’ opinions and preconceived beliefs. Cold factual explanations defending an opposite position can sometimes backfire and further entrench news consumers’ preconceived beliefs, especially in online social network site (SNS) groups where polarization is prevalent [16]. In response, the research community has started to explore approaches for changing consumers’ nonfactual beliefs [12] that could be more efficient alternatives to the usual human-generated fact-checked explanation articles [17]. In [18], the authors argue that explanations that nudge readers into a reflective state are more efficient than purely factual ones in changing user beliefs about fake photographs. Analyzing emotions in speeches and consequent explanations using contrastive elements has also been investigated [19]. This motivated researchers to explore novel methods for creating more nuanced and emotionally resonant explanations that encourage people to reflect on their beliefs. Some studies have proposed generating artistic or emotional explanations as an alternative approach to changing user beliefs rather than relying solely on facts [20, 21]. These explanations are aimed at evoking an emotional response from the newsreader and gently nudging them into a reflective state.

Another work proposed a roadmap to personalizing fake news explanation systems [22]. Our general hypothesis is that fake news explanations that are personalized to some degree and evoke an emotional response carry better odds of changing users’ preconceived beliefs than purely factual explanations.

1.1. Computational Creativity (CC)

CC is a subfield of AI research that focuses on computational systems that exhibit behaviors that unbiased observers could deem creative [23]. One of the most popular CC theories offered by the literature [24] has its foundations in what is known as the four Ps [25, 26].

The four Ps theory categorizes the study of creativity from four vantage points: Person, or what about the agent makes them creative; Process, or what sort of actions are performed in the manufacturing of creative work; Product, or what about the output artifact is worthy of being called creative; and Press, or what about the cultural tendencies of the environment drives a given work to be deemed creative.

The product vantage point is of interest when the goal is to produce something useful to humans [27], which applies to the fake news explanation use case. The literature offers several examples of computer-generated creative artifacts, such as music parodies [28], memes [29], anecdotes, poetry [30], and jokes [31]. More recently, the introduction of large language models (LLMs) [32] opened up a new realm of possibilities around AI-generated creative artifacts. LLM fine-tuning is being researched as an approach to specialize LLMs in specific tasks so that their outputs better align with human-generated ones [33]. Emotion-based personalized explanations can leverage this research to present explanations in a manner that best aligns with the user’s preferences and maximizes the value of the experience through emotion evocation. It has been shown that computers can generate artifacts that create an emotional impact [34]. Our longer-term research objective is to use CC-generated fake news explanations to verify our general hypothesis. Figure 1 illustrates the concept. Understanding the current approaches used in explanations to educate fake news readers, and how online belief evolves in general, will provide insight into the research opportunities for validating this idea.

The remainder of the paper is organized as follows. Section 2 highlights the intersections and differences in the current literature review works to our proposed scope. Section 3 presents and explains the research questions that motivate the review methodology. The methodology itself is included in Section 4. This section presents separate discussions on the rationale behind the inclusion and exclusion criteria, the search methodology applied, and a taxonomy organizing the included works in clusters to facilitate analysis. Section 5 presents the findings for each review domain. Section 6 discusses the reviewed papers, highlighting how the findings address the proposed research questions and present other identified patterns. This section also presents the identified grand challenges and related future work. Conclusions are summarized in Section 7.

2. Related Work

To the best of our knowledge, this is the first cross-domain systematic literature review that covers models, methodologies, and user studies with a focus on how they influence belief formation and how preconceived beliefs are changed in online and social network platforms. Furthermore, we pay special attention to the role that polarization, an orthogonal dimension to the three chief ones, plays in online belief change. This section reviews related work on each domain and highlights the current gaps our work fills.

2.1. Models

From a model’s perspective, the modeling of opinion dynamics has been an active object of study [35], driving the community to generate surveys of online opinion propagation [36] and trust propagation [37]. The author in [36] includes topics of interest to our work, such as stubborn agents, biased agents, and opinion manipulation; however, that work reviewed papers published before 2019. We offer the community a continuation of the work by Noorazar through the review of works published in 2020 and later. Urena et al. [37] focus on how trust propagates in networks from a vantage point of opinions and recommendations.

Trust propagation in our context addresses how trust in specific opinion formation agents, such as influencers (INFLs), affects belief formation, which is a markedly different approach.

Opinion formation models are another branch of opinion dynamics. Mastroeni et al. [38] specifically focus on agent-based models, which are centered on their mathematical formulation. Similarly, Abid et al. [39] also focus on the mathematical formulation of agent-based models. In contrast, we purposely exclude these works and only include the ones that have some practical validation through either simulation or dataset experimentation.

2.2. User Studies

From a user study perspective, the literature offers a few review works centered on specific online information domains, such as health [40, 41], politics [42], and online marketing [43, 44]. In the health domain, Wang et al. [40] executed a systematic literature review addressing the spread of misinformation in online health information. This study was performed before the COVID-19 pandemic, which makes it interesting as a snapshot of the state of the art before the event that has dominated health misinformation studies since 2020. While the methodology applied in the work was thorough and the findings around misinformation in online health were insightful, the work does not address the vantage point of online belief change. The literature also offers works on COVID-19-related disinformation. Conspiracy theories are directly related to belief formation and opinion spread. Different conspiracy theories were born during the first year of the COVID-19 pandemic. Tsamakis et al. [41] performed a systematic literature review on COVID-19-related conspiracy theories. It focused on their prevalence, determinants, and public health consequences. An interesting result presented by the work, albeit somewhat predictable, was the higher prevalence of politically motivated COVID-19 conspiracy theories compared to other determinants. The work presented studies in the dimensions of demographics, level of income, psychological factors, religion, political orientation, and trust in science. However, it only marginally addressed online beliefs related to the acceptance of this class of conspiracy theories, a topic covered by our work.

Politics is another online belief-related research hotbed. One particular vantage point is online political participation (OPP). The level of OPP has been linked to disinformation and conspiracy theories [45]. Therefore, the topic is relevant from an online belief standpoint. The literature offers a systematic literature review of definitions and measurements of OPP [46]. The finding most relevant to online belief is that OPP is not an online equivalent of traditional offline political participation. It is instead shaped by and contingent upon the online platform on which the participation is conducted. The work does not elaborate further on the specific characteristics of platforms that reinforce the political beliefs influencing OPP. We hope our survey will provide further insight into this topic.

Opinion formation is also important in the domain of online marketing. In this domain, product and service reviews and ratings are driving forces of online opinion formation. A systematic literature review and comparative study on how reviews and ratings influence opinions on buying and using products was presented in [43]. The work concludes that regular consumer reviews are more influential in opinion formation than recommendations by professionals and paid experts. It may be seen as a use-case example of a social influence–based opinion model for the online marketing domain. Our work will attempt to find approaches that can be applied across domains.

2.3. Methodologies

From the methodology dimension perspective, the authors of [47] review belief dynamics processes from psychology, sociology, economics, philosophy, biology, computer science, and statistical physics perspectives. The work proposes a framework to enable comparisons of different belief-capturing methodologies. Even though individual belief is included as a structural component of the framework and is briefly discussed, the framework limits its modeling to a typical statistical physics approach. Our work is aimed at a more holistic view of the existing methodologies and studies the ones best suited to capture belief change. In our work, we felt that it is appropriate to combine models and methodologies into a single section named opinion dynamics, presented in Section 5.3.

2.4. Polarization

Lastly, for the polarization dimension, a notable systematic review links social media to polarization, synthesizing the contingent factors and underlying processes [48]. The work provides three aspects of polarization conceptualization: the ideological or opinion-based concept, the affective concept of disliking people from outgroups, and the social concept of avoiding the company or linkage with outgroups. This leads to presenting a conceptual framework of social media and polarization. Another study reviews political polarization from a psychology vantage point [49]. The work provides a functional conceptualization of polarization in an attempt to explain how polarization may occur across partisan fault lines. It provides arguments that polarization is most likely to occur in scenarios of belief conflicts in society, such as in politics. Situations of belief conflicts tend to drive the formation of opposed belief groups, which are prone to polarization. Even though these works provide a rich link between online beliefs and polarization, neither studies the effect that polarization may have on constraining belief change. Our findings from that vantage point are presented in Section 5.2.

3. Research Questions

Figure 1 illustrates the conceptual idea for the future research setup to validate our main hypothesis. However, several questions remain unanswered regarding the detailed implementation of this concept. The research questions presented in this section were designed to help answer some of these questions.

3.1. RQ1—What Are the Main Drivers of Online Belief Change?

Understanding the primary motivators that lead online users to change their preconceived beliefs is pivotal for designing personalized explanations that efficiently educate users who believe fake news. This understanding is also central to the design of experiments to validate the main hypothesis of this work.

3.2. RQ2—How Are Current Opinion Models Being Used to Capture Belief Changes?

Opinion dynamics covers a wide range of social science phenomena, such as the appearance of fads, consensus building, collective decision-making, rumor spreading, extremist expansion, and even cult propagation [50]. This RQ constrains the analysis of the models to focus on online belief change. RQ2 intends to understand if there are specific models offered by the literature that can be applied to collected experimental data to facilitate the identification of belief change by the participants.

3.3. RQ3—What Role Does Polarization Play in Changing Online Users’ Preconceived Beliefs?

This RQ explores the direct effect of information bias and polarization on changing online users’ beliefs. This is important since the chief objective of fake news explanation systems is to align news consumers with factual news. For that to happen, people with preconceived beliefs in fake news must be shown that their beliefs are not based on facts and ought to change. Understanding not only whether polarization is an important force potentially preventing belief change but also whether there are documented approaches to best deal with this driver may provide insight into the explanation content and presentation that carry the best odds of success. Furthermore, it may add a dimension to the experiment, as we can compare the polarization effect on purely factual and personalized explanations.

3.4. RQ4—What Alternative Approaches to Offering Fact-Checked Explanations Have Been Pursued by the Literature in an Attempt to Change Preconceived Online Beliefs?

This RQ is aimed at identifying whether the literature offers alternative explanation methods beyond the usual fact-checked textual ones. The discovery of alternate explanations may modify and expand the high-level experiment design in Figure 1 into other dimensions of comparison between creative explanations and other existing types.

3.5. RQ5—How Can the Identified Belief-Changing Approaches Be Generalized to Multiple Belief Domains?

This RQ will seek insights into whether any of the existing belief-changing approaches have the potential to be generalized into a framework that can cover multiple belief domains. We will highlight their strengths and weaknesses from a generalization potential standpoint. We will conclude the analysis by providing recommendations toward the generalization goal.

4. Methodology

This section presents the methodology used to create the final corpus of work reviewed. It explains the inclusion/exclusion criteria, the methodology applied, and the taxonomy used to classify the reviewed works.

4.1. Inclusion and Exclusion Criteria

This work includes peer-reviewed journal articles and conference papers of more than four pages in length, published in 2020 and later, found in the following databases: Scopus, ACM, IEEE Xplore, and Web of Science. Non-peer-reviewed articles, book chapters, and papers not centered on online belief and opinion change are excluded from the review. Opinion dynamics theory works are included when validated via dataset or simulation-based experiments. Purely theoretical papers are excluded. These papers are deemed too far removed from the goal of hypothesis verification, as they would still need to be validated through experimentation. One of this work’s goals is to understand potential psychological and social forces that may constrain acceptance of fact-based explanations. Therefore, polarization papers focusing solely on the effects of algorithmic bias on polarization are excluded. Table 1 summarizes the inclusion and exclusion criteria.

An initial search using the keywords (online OR “social network”) AND (“opinion change” OR “belief change” OR “change in belief” OR “opinion formation”), filtered for peer-reviewed papers published in 2020 or later, returned 372 hits. A visualization was created with the VOSviewer software application [51] to identify potential clusters. Figure 2 presents the initial visualization results. Four high-level clusters were identified: user studies (green), opinion dynamics related to opinion formation (red), belief related to user intervention (purple), and public opinion (blue). Furthermore, the user studies cluster revealed the specific domains of COVID-19, climate change, education, and politics.
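
For reproducibility, the screening logic can also be expressed programmatically. The Python sketch below is illustrative only: the record fields ("title," "abstract," "year," "peer_reviewed") and the plain substring matching are our assumptions, not the export schema or query semantics of any of the four databases.

```python
# Illustrative screening filter; field names and matching are simplifications.

CONTEXT_TERMS = ("online", "social network")
TOPIC_TERMS = ("opinion change", "belief change",
               "change in belief", "opinion formation")

def matches_query(record: dict) -> bool:
    """Approximates (online OR "social network") AND (topic phrases)."""
    text = f"{record['title']} {record['abstract']}".lower()
    return (any(t in text for t in CONTEXT_TERMS)
            and any(t in text for t in TOPIC_TERMS))

def passes_criteria(record: dict) -> bool:
    """Peer-reviewed, published in 2020 or later, and matching the query."""
    return (record.get("peer_reviewed", False)
            and record.get("year", 0) >= 2020
            and matches_query(record))

records = [
    {"title": "Opinion formation in online networks", "abstract": "...",
     "year": 2021, "peer_reviewed": True},
    {"title": "Offline voting behavior", "abstract": "...",
     "year": 2018, "peer_reviewed": True},
]
print(sum(passes_criteria(r) for r in records))  # -> 1
```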

There are some noticeable correlations between the visualization of Figure 2 and the taxonomy presented in Section 4.2. We originally named the red cluster “opinion formation” due to the highest prevalence of the word “formation” in the visualization. However, the reading of the works revealed that the highest prevalence of the word was not necessarily due to the cluster being about opinion formation but because of its relationship with the “opinion dynamics” subject. This is why the word “relationship” is also highly prevalent in the obtained visualization.

Terms with lower prevalence than opinion dynamics but with significance, such as “opinion formation process” and “opinion formation model,” are, in fact, opinion dynamics implementations either through simulation or dataset experimentation. This is the reason that the opinion dynamics dimension is subdivided into simulation and dataset experiment clusters. Other terms such as “structure,” “opinion evolution,” and “network topology” represent applications of opinion dynamics techniques. The techniques can either focus on network structure (NETS) or the dynamics of opinion evolution in different contexts. These contexts are, namely, situations of crisis (CRIS) or traumatic events, the influence of stubborn or strong opinioned neighbors and INFLs, analysis of confirmation bias or homophily (HOMY), analysis of sociological or psychological forces in opinion dynamics, and the study of group or public opinion formation. All of these contexts became orthogonal clusters of the taxonomy as they are relevant to all dimensions of the review.

The blue cluster shows the term “public opinion” as highly prevalent because of its orthogonality with all three taxonomy dimensions. In this context, “public opinion,” “public opinion formation,” and “network public opinion” were combined into a single cluster: group opinion (GRPO). The highly prevalent terms “impact,” “topic,” and “factor” are somewhat synonymous in our context. The green cluster groups the user study papers. The user study domains identified in the visualization (COVID-19, student, citizen, and climate change) became taxonomy clusters. The terms “support” and “exposure” were related to the sex and homosexuality theme, which was turned into a taxonomy cluster.

The visualization motivated adding other keyword searches centered on the following topics: belief and opinion changing user studies, belief formation models and methodologies, opinion formation models and methodologies, and polarization. The set of keywords for each search is illustrated in Figure 3.

Applying these criteria to the corpus yielded a total of 91 papers that were reviewed. These papers had received a combined 782 citations at the time of this writing. Sixty-six of these works were published in journals, and the remainder in conferences. Fifty-one percent were published in Q1 journals and 10% in Q2 journals. Seven percent were published in A conferences and 9% in B conferences.

4.2. Taxonomy

The application of the methodology presented in Section 4.1 drove the organization of this review into three dimensions: domain specific, opinion models, and polarization.

Table 2 shows the taxonomy classification and corresponding works assigned to clusters. Some papers appear in more than one cluster. The description of each cluster is presented below.

4.2.1. TRMA (Trauma)

It includes papers addressing belief change as a result of traumatic events. TRMA has been defined as “the experience of a vital discrepancy between threatening factors in a situation and individual coping abilities” [138]. TRMA can be objective or subjective [139]. Objective traumatic events directly lead to post-traumatic stress disorder (PTSD); subjective traumatic events may or may not.

4.2.2. INFL

It includes the effect of INFL [140] agents on belief change.

4.2.3. HOMY

HOMY is attributed to people’s natural tendency to associate with people similar to themselves. Studies have documented that even infants as young as 6 months of age already show HOMY [141].

4.2.4. BCHB (Biaschamber)

We dubbed BCHB as a combination of echo chamber and confirmation bias. An echo chamber is defined as a formation of like-minded online users reinforcing a narrative [142]. Confirmation bias is defined as the seeking or interpreting of evidence in ways that are partial to existing beliefs [143].

4.2.5. PSOC (Psychosocial)

Even though HOMY and BCHB are PSOC phenomena, this cluster includes other sociological and psychological constructs.

4.2.6. GRPO

It focuses on belief change of groups and public opinion.

4.2.7. SNSBs (SNS Biases)

It focuses on SNSBs, which include filter bubbles [144] and other bias-inducing algorithms used by social networking sites.

4.2.8. NETS

It focuses on the influence that neighbor agents may have on belief change for groups within the same network.

4.2.9. CRIS

It includes CRIS events that did not lead to TRMA.

4.2.10. STROs (Strong Opinions)

It includes papers that address agents with STROs about a subject, including stubborn and zealot agents.

5. Findings

This section presents the survey findings in the context of its taxonomy.

5.1. Domain-Specific Dimension

As shown in Table 2, the domain-specific dimension of the review was split into five classes: COVID-19, climate change, education, politics and policy, and other. This section presents the findings for this dimension. Trust can be defined as “a psychological state comprising the intention to accept vulnerability based upon positive expectations of the intentions or behavior of another” [145]. Trust is, therefore, a psychological construct, and it was one of the central drivers of opinion formation [79, 80] and belief change for the works reviewed in this dimension [52]. Trust in celebrities, namely parasocial relationships [65], and in social network INFLs were explored. It was seen that INFLs can significantly affect people’s opinions on different issues [52, 54] and that trust in the information source is an essential driver of belief change. Trust in government and officials was correlated with the consistency of public messaging, affecting online belief change [70].

Other psychological constructs were addressed in this dimension. Normative influence [68] is popularly known as herd mentality. Normative influence originates with the basic human desire to not stand apart from a group, i.e., the desire for social acceptance, which varies according to one’s perceived risk of social rejection [146]. Informational influence is defined as the use of group knowledge as a determinant of correct beliefs [147]. In [68], the authors showed that factual information was not the primary driver of voting behavior. Normative influence may be powerful enough to influence belief change toward conformity to group decisions, even in secret voting. HOMY is another such construct. In [67], the authors showed that it is possible to build a profile of former US President Donald Trump’s supporters using HOMY as one of the most predictive signals for the model. Political scientist Elisabeth Noelle-Neumann proposed a theory dubbed the spiral of silence [148], highlighting individuals’ unwillingness to publicly express an opinion when they feel that it may be against the majority opinion. This was verified through the opinions of social media users on LGBT acceptance in Nigeria [72, 79]. Bandwagon effects refer to individuals’ tendencies to conform to predecessors’ decisions [149]. In an online scenario, users are likely to rely on and gravitate toward more popular opinions as a form of mental shortcut. The authors of [74] showed strong evidence of further polarization of preconceived beliefs away from the expert’s presented opinion. TRMA is another psychological event that can drive core belief change [150], with results showing that people who underwent intense TRMA feel that they changed their beliefs about humanity. Post-traumatic growth (PTG) is defined as how individuals can experience positive psychological change after a traumatic event [151]. The authors of [73] showed deliberate rumination [152] to mediate the relationship between core belief challenge and PTG. Still in psychology, the authors of [76] attempted to understand whether specific personalities are more prone to being persuaded into changing their beliefs, but their results were inconclusive.

It was observed that the polarization of subjects seems to be cross-correlated. Political affiliation was shown to be the polarization topic at the center of cross-correlations with topics such as environmentalism and climate ideology, COVID-19 response, policy preferences, immigration and patriotism, and even beliefs in the biological attribution of homosexuality [71], which is directly related to support for homosexual rights. The results obtained in [54] revealed a connection between political viewpoints and misinformation regarding hydroxychloroquine (HCQ) as a treatment for COVID-19, despite it not being supported by scientific evidence. The author in [69] showed how political orientation is critical in shaping how public crises are interpreted and how beliefs about them change. The authors of [58] show an association between left/right political ideology and environmentalist/skeptic climate ideology, respectively.

The connection between traumatic public events and the arousal of emotional processes was demonstrated. Examples are the historical and institutional racism added to historical TRMAs, such as the Tuskegee syphilis study [153] and the unethical and nonconsensual use of cancer cells from Henrietta Lacks [154], providing context for understanding vaccine hesitancy among Black individuals and their distrust of healthcare professionals and researchers [52]. A CRIS also evokes emotional responses that lead to polarization. This was demonstrated in [59] for the climate change topic and in [64] for public opinion about police funding around the time of the murder of George Floyd at the hands of law enforcement in the United States [155]. The level of a person’s stubbornness was also shown to be inversely correlated with the probability of belief change [55]. The hypothesized correlation between emotional arousal and polarization was confirmed.

Furthermore, a strong and direct stance stating that content is fake invariably leads to conflict, aligning with the finding that presenting factual explanations defending an opposite position can sometimes backfire and further entrench polarized people in their preconceived beliefs [16]. Stubbornness can lead to the entrenchment of beliefs. This can be seen even in less polarized topics, such as primary school teachers’ beliefs about teaching computer science [63]. The results of this work revealed that younger, less experienced teachers showed no signs of belief perseverance. Conversely, older, more experienced teachers demonstrated higher levels of belief perseverance, even when they indicated positive reactions toward the received computer science training.

Multiple studies provided evidence of the efficacy of explanations that nudge people into a state of reflection about their preconceived beliefs [18]. Ruffin et al. argue that attempting to explain how fake photographs were manipulated offers better results if done cautiously [18]. The authors of [62] showed this to also hold in the context of belief change related to the nature of intelligence. Their results showed that rather than convincing people that intelligence is malleable, gentle mindset interventions may be the most important activity for helping them reflect on intelligence’s malleability. This nudging may happen with the help of an emotion-evoking explanation, for example. Emotional responses were correlated with low knowledge in the process of layperson acceptance and resultant opinion formation related to climate engineering approaches [57] and in the driving of belief change of teachers under online and blended delivery methods [61]. The results showed that increasing knowledge about the topic drove belief change in both cases. This validates the concept that if knowledge is low regarding a given topic, emotional responses are used as indicators of attitudes toward or against a stimulus [156]. It also reinforces the need to balance an emotion-evoking explanation with facts to drive an increase in subject knowledge.

Explanation personalization was also addressed in this dimension. The authors of [77] showed that a personalized online algorithm–based intervention can change beliefs that may lead to inappropriate antibiotic demand by patients. Conversely, results obtained in [75] show that personalization enhances user experience, but the so-called “filter bubbles” favor the emergence of opinion polarization and radicalization through confirmation bias. One final noteworthy observation concerns an approach using sentiment analysis (SENTANL) before and after an event to capture belief change [56]. This methodology is promising and should be investigated further as a potential approach to validate this work’s central hypothesis.

5.2. Polarization Dimension

As shown in Table 2, the review’s polarization dimension was split into three classes: theoretical studies, models, and user studies. This section presents the findings for this dimension.

PSOC polarization driving forces were identified, namely, normative influence [116, 157], spiral of silence [148], confirmation bias, backfire effect, parasocial relationships, and HOMY. Confirmation bias influences polarization as the intensity of preconceived beliefs is sometimes the controlling aspect of belief change [130]. Arguably, people seek communities with higher chances to confirm their beliefs [136]. The results obtained in [16] showed confirmation bias in combination with the backfire effect to be strong drivers of polarization. The authors in [133] showed evidence of polarization development in another combination of PSOC constructs: parasocial relationships and HOMY. It was shown that people became further entrenched in their preconceived beliefs in the case of a contradicting opinion from a subject matter expert celebrity. In [131], the authors showed that feelings of resentment were the most significant predictor of the Black Lives Matter movement’s support. Low-resentment individuals who expressed themselves on social media more frequently were less supportive.

Some papers demonstrated how fragmented networks self-organize into multiple echo chambers of consensus and that consensus is a precondition for the emergence of polarization [109, 113, 114, 126, 128]. The authors of [115] looked even further into the correlation between echo chambers and polarization. The authors argued that their results validated the idea that echo chambers create a stable environment of confirmation bias and can even actively alienate some group members from outside contradicting information sources [158]. Similar results were obtained in [117, 129]. Another relevant finding was that if the same argument is presented by two people, one from the user’s own community and the other from another network, the likelihood of acceptance of the former is notably higher. This suggests that one possible way to reduce polarization may be to change beliefs from within. Focusing on changing the beliefs of key members, such as INFLs, of a polarized group may trigger a snowball effect in the beliefs of all members of the given community. The results in [123] suggest that this may be the case, as they showed that, over time, most individuals in a network switch to the opposite sentiment about the preconceived belief. The results obtained in [111] suggest that another possible way to revert polarization is to shield members from their corresponding echo chambers, allowing them to freely access the ideas of members outside these chambers.

Evidence also indicates that user adherence to misinformation may sometimes stem from attention being shifted away from accuracy and toward other goals. In [102], the authors concluded that providing subtle accuracy nudges is a promising approach to improving the quality of shared news. The correlation between SNSB and polarization was analyzed and verified [119]. Arguably, there is also a correlation between SNSB and individual PSOC constructs. Correlations between polarization and the spiral of silence [121] and filter bubbles [122, 159] were demonstrated when people are influenced by strong SNSB. Another study looked at SNSB through the lens of how people change their opinions when exposed to viral content [120]. The results showed that polarization barely increased after a regular marketing campaign and significantly increased upon the spread of polarized content.

The cross-correlation between polarized subjects also becomes evident after the review of this domain. It seems that political ideology is the central topic of polarization, and it can become cross-correlated with other polarization-prone subjects such as minority equality [137], patriotism, welfare policy [135], and the response to health crises [134]. The authors of [132] demonstrated that this cross-correlation directly correlates with emotion. They concluded that a psychological factor that impedes climate change beliefs is not related to climate but is mainly motivated by the feelings of dislike one political group feels toward the opposing group. CRIS events were also connected to the emergence of polarization [127, 137].

5.3. Opinion Dynamics Dimension

This section presents the findings for the opinion dynamics dimension. It includes the research works split into two classes, simulation and dataset experiment, as defined in Table 2.

The effects of PSOC constructs were also studied in this dimension. HOMY is an important one. The results in [104] showed the effects of HOMY on the formation of echo chambers. The work also demonstrated a moderate to high resemblance of the echo-chamber phenomenon across network topologies for abortion, capitalism, and feminism. This aligns with trends from other dimensions, suggesting cross-correlation between polarization topics. In [106], the authors show a context of evolving HOMY in political social network interactions. The results in [83] showed how HOMY and the spiral of silence drive people to form online social groups. The bandwagon effect, or herd mentality, influences consensus formation, as verified in [149]. The author in [101] showed a tendency for moderate online users to move toward the average opinion of their online friends, as sketched below. The authors in [98] showed that the bandwagon effect has a stronger driving force than INFLs and that the reach of consensus is magnified in a bandwagon scenario. However, this does not happen in highly segregated opinion networks [93]. This is an important finding, as it suggests that polarization can be avoided if education on fake news posts happens at the initial stages of a social network, before its consequent evolution to consensus. It was demonstrated in [81] that it is more difficult for someone to reach a consensus with a person who belongs to a group with a higher proportion of low-educated people than with a higher proportion of high-educated people, another data point showing the importance of educating online users about fake news posts. PSOC constructs are part of what forms a person’s individuality. Individuality is also important regarding how personal experiences help shape GRPOs. The authors in [103] showed how GRPO results from the community’s combined individual experiences. The authors argue that the so-called expert agents, or agents that bring strong individual experiences aligned with subjects of interest to the group, are highly influential on group beliefs.
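
As a minimal sketch of the averaging tendency reported in [101], the toy simulation below lets every agent repeatedly drift toward the mean opinion of its friends. The small-world graph, initial opinions, and update rate are illustrative assumptions, not a reimplementation of any reviewed model.

```python
# Toy DeGroot-style averaging: moderate agents drift toward the mean
# opinion of their neighbors, shrinking disagreement toward consensus.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
G = nx.watts_strogatz_graph(n=50, k=6, p=0.1, seed=0)  # toy friend network
opinion = rng.uniform(-1.0, 1.0, size=50)              # opinions in [-1, 1]

for _ in range(100):
    new = opinion.copy()
    for i in G.nodes:
        avg = opinion[list(G.neighbors(i))].mean()
        new[i] += 0.3 * (avg - opinion[i])  # partial move toward the average
    opinion = new

print(round(opinion.std(), 4))  # spread collapses toward 0: consensus
```

Under this rule, the opinion spread on a connected graph decays toward zero, matching the reviewed observation that unobstructed averaging ends in consensus.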

Some studies highlighted the importance of a solid factual foundation to balance the emotional arousal that nudges people into reflective states for a higher probability of changing polarized beliefs. Emotion was confirmed to be an important component in this nudging, especially when balanced with other cognitive functions. Emotion was shown to be correlated with the higher interest people showed in resharing audio messages rather than purely textual messages on social networks [102]. We hypothesize that audio messages have the potential to carry more emotional content than textual messages, driving people to have more interest in resharing them. The results in [85] showed that an online post combining affective and cognitive content increases people’s willingness to share the message. Conversely, affectively weak and mostly cognitive content was shared the least. The nudging also needs to be founded on facts. The authors in [86] showed how removing facts from a post alienates people, and this alienation drives the emergence of nonfactual subtopics. The newly formed subgroups can lead to a phenomenon known as information gerrymandering [160], where STRO individuals can keep negatively held opinions alive, even if nonfactual, as demonstrated in [96].

Information alienation was highly correlated with the emergence of polarized subnetworks. Therefore, it is important to share information about a given topic of interest to public opinion as early as possible, especially during a CRIS [94]. However, this needs to be done carefully to avoid a scenario of inconsistent messaging in case the results need to be revised later. The information revision may cause a backfire effect, as [70] has provided evidence that inconsistent messaging reduces the effectiveness of explanations targeted at changing group beliefs. The constant changing of messaging was shown to generate a breach of public trust in the source of the message.

Research from this dimension found evidence that large networks with a diversity of opinions evolve into several smaller networks where consensus is reached and then polarization develops [87, 89]. However, Mansouri and Taghiyareh [82] show that when influential leaders exist in a social network, segregation has less impact on opinion formation than the effect created by INFLs. This shows how INFLs are key drivers of belief change in opinion networks [97], including public opinion formation [92]. This effect was also verified when mass media played the role of INFLs [84]. It was shown that even a small percentage of INFL-type agents motivated to manipulate opinion toward a specific goal could shape the majority opinion [100]. Similar results were shown in [99].

Being the intermediate step between opinion segregation and polarization, consensus needs to be understood. The results in [95] reveal that consensus in a multitopic network can be achieved if the number of stubborn agents around the subjects is small. Lastly, natural language processing (NLP) SENTANL on social network posts to identify belief change was also present in this domain [108]. This seems to be the preferred technique for identifying belief change by online users and is used across application domains.
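
To illustrate the stubborn-agent effect reported in [95], the earlier toy averaging setup can be extended, under the same illustrative assumptions, with a few agents that never update; pinning two of them at opposite extremes blocks consensus.

```python
# Toy extension: stubborn agents never update, so consensus is blocked.
import numpy as np
import networkx as nx

rng = np.random.default_rng(1)
G = nx.watts_strogatz_graph(n=50, k=6, p=0.1, seed=1)
opinion = rng.uniform(-1.0, 1.0, size=50)
stubborn = {0: -1.0, 25: 1.0}  # two zealots pinned at opposite poles
for node, value in stubborn.items():
    opinion[node] = value

for _ in range(200):
    new = opinion.copy()
    for i in G.nodes:
        if i in stubborn:      # stubborn agents ignore their neighbors
            continue
        avg = opinion[list(G.neighbors(i))].mean()
        new[i] += 0.3 * (avg - opinion[i])
    opinion = new

print(round(opinion.std(), 4))  # spread no longer vanishes: no consensus
```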

6. Discussion

Section 5 presented the findings for the three dimensions included in this review. It presented the trends found in each dimension separately. As the chief focus of this work is a holistic review of online belief change, we consider the trends that appeared in more than one of the investigated dimensions to be the most relevant to belief change. This section summarizes these trends, and Table 3 shows the number of reviewed papers that addressed each of them in their corresponding dimensions. A given reviewed paper may have included more than a single trend. Conversely, some reviewed papers may have addressed a trend that did not appear in multiple dimensions. The following acronyms define each cross-dimensional trend.

SUBCNS (subconsensus): Several reviewed works showed that an initially segregated network naturally self-organizes into multiple echo chamber subnetworks of consensus and then further evolves into corresponding polarization groups.

FACBAL (factual balance): Multiple works showed that combining an increase in knowledge with positive emotion gently nudges online users into a reflective state, making clear the importance of explanations that balance facts and emotional evocation for increased odds of success in belief change.

INFL: Influencers in SNSs were confirmed to be a strong driving force of belief change.

SENTANL: NLP sentiment analysis before and after a specific event seems to be the preferred technique for identifying belief changes by online users.

PSOC: This trend includes the PSOC constructs linked to belief change. Psychosociological tendencies such as filter bubbles, the spiral of silence, confirmation bias, the backfire effect, parasocial relationships, and HOMY facilitate the evolution of social network groups into a scenario of polarization. Trust was also shown to be a psychological construct, arguably one of the strongest drivers of belief change in this group.

CRSPOL (cross-polarization): All three dimensions showed that polarization is cross-correlated across different subjects, with political ideals being the central polarization topic.

ERLPUB (early publication): Several works showed the importance of publishing results debunking false claims as early as possible to counteract public disinformation. However, this needs to be done carefully to avoid inconsistent messaging in case the results need to be revised later.

EMLNDG (emotional nudging): EMLNDG refers herein to the alternative to a purely factual explanation: one that balances facts with emotional evocation to nudge the online user into a reflective state.

Table 3 shows that four cross-dimensional trends were addressed by reviewed works from all three dimensions: PSOC, INFL, CRSPOL, and EMLNDG. Of the four, PSOC had the most representation, with 32 papers. Several PSOC constructs were present in these 32 works. Confirmation bias was shown to have a positive correlation with the openness personality trait and a negative correlation with neuroticism [161]. These two personality traits are part of the big five personality model [162]. The entrenchment of beliefs is a complex construct that may have several root drivers; however, the personality or character of the believer has been identified as an important factor [163]. Attitudinal HOMY refers to personality and attitude similarities between individuals [164]. In the context of celebrities, the more a person identifies similarities between a celebrity’s attitudes and overall personality and their own, the more this individual will research that celebrity [165], leading to parasocial relationships. Previous studies have demonstrated HOMY to be one of the predictors of parasocial interactions [166]. There is currently no consensus on the meaning of HOMY beyond the broad definition stating that like-minded people tend to form communities. However, some psychology researchers argue that one’s personality defines one’s HOMY nature [167]. Previous research correlated the spiral of silence with the cultural behaviors of individualism and collectivism [168]. Emotion research theory considers culture to be one of the three driving influences on how people perceive and act on emotions [169]. Therefore, it is plausible, albeit still not confirmed, that cultural differences may offer a correlation between the spiral of silence and emotion. Research is evolving toward an irrefutable connection between these psychological constructs and individual personalities; while that connection is not yet confirmed, the observed correlations should not be ignored.

The second ubiquitous trend with the most representation in the review is INFL, with a total of 19 papers. Trust in the information source is the foundation of the INFL’s driving force in SNSs, with its development ranging from parasocial relationships, in the case of celebrity INFLs, to confirmation biases and other intrinsic individual tendencies. A possible conclusion is that trust in INFLs has its roots in psychological constructs. Therefore, the INFL trend may be considered a corollary of the PSOC trend in our context. Psychology research has also shown a direct relationship between psychology, personality, and emotions [170–172]. We argue that this close relationship between personality and emotion may be why EMLNDG also appears in all three dimensions of this work, addressed by 14 papers.

CRSPOL is the last cross-dimensional trend that covers all three domains, albeit with a total number of papers much lower than the other three trends. INFLs have a strong effect in driving polarization, which emerges in good part due to psychological or emotional reasons or a combination of both. It is expected that a topic chiefly influenced by the three main drivers of belief change would also be present in all three review domains.

There is a strong relationship between emotion and sentiment, as sentiment can be construed as a thought, an opinion held by the person based on a feeling. In general terms, sentiment is the effect of emotion [173]. Since emotion plays such a pivotal role in belief change, it becomes natural that SENTANL emerged as a cross-domain trend and the preferred method for evaluating online belief change.

Perhaps contrary to intuition, purely factual explanations are not the most efficient in changing online forged beliefs. EMLNDG’s importance to belief change drives the corollary cross-dimensional trend of FACBAL. FACBAL focuses on balancing facts and emotional arousal in explanations. The ERLPUB cross-dimensional trend is a natural consequence of the potential breach of trust between public opinion and officials who publish erroneous early communications and are forced to review the message later. The evolution of belief change from a fragmented network through the formation of subnetworks of consensus that eventually lead to polarization is the central topic of the SUBCNS cross-dimensional trend.

In summary, we showed that the cross-dimensional trends present in all three dimensions of our work are driven either by PSOC constructs, emotion, or a combination of the two. We showed that the other cross-dimensional trends presented have roots in these two drivers. Therefore, we argue that PSOC constructs and emotions are the two main drivers of online belief change. The following section will present answers to each of the proposed research questions.

6.1. Research Questions Answered

This section provides answers to research questions that emerged from the reviewed works.

6.1.1. RQ1—What Are the Main Drivers of Online Belief Change?

The discussion in Section 6 presented the cross-dimensional trends, and Table 3 shows the breakdown of the number of papers that addressed each one of the trends. A numerical analysis of Table 3 indicates PSOC constructs to be the top ubiquitous trend, addressed by 34.4% of all papers. INFLs were the second most addressed trend, covered by 20.4% of all papers. We did argue, however, that trust is a psychological construct, and it is at the center of the INFL drive for belief change. This argument suggests that both trends can be combined, leading to over half, or 54.4%, of all reviewed papers having focused on PSOC constructs for belief change. Within the context of each dimension, combining the two trends resulted in 55.8% of the domain-specific works, 55.5% of the polarization works, and 52.1% of the opinion dynamics works. This shows an equivalent balance of relevance within each of the domains. EMLNDG accounts for a total of 15.1% of all reviewed works. It is a distant third, yet much more prevalent than CRSPOL, the last ubiquitous trend, which appears in just 9.6% of all papers. We argued, however, that CRSPOL, as well as the other identified cross-dimensional trends, is corollary to the two main ones. This numerical analysis indicates that PSOC constructs and emotional arousal are arguably the two main drivers of online belief change.

Psychology research has also shown a strong correlation between individuality, personality, and emotions. Tellegen [174] has proposed that even though environmental changes may influence affective responses, a full appreciation of individual differences in emotional response could only be performed if personalities and how they influence affect are considered. The authors of [175] performed a user study and concluded that personality is an essential determinant of an individual’s emotional response. Moreover, in [176], a user study shows that individuals who present with high negative affectivity are generally more introspective, a personality trait, and are more likely to experience discomfort at all times, even in the absence of stress. This shows that individuals perceive emotions differently.

Personality and individuality have been treated as synonyms by various English-language dictionaries. Personality has been defined as “the incarnation of individuality” [177]. The strong correlation between personality and emotion suggests them to be individual characteristics. Therefore, we argue that individuality is the most critical driver of online belief change, materialized through psychological traits and emotions. This result partially validates this work’s central hypothesis that personalized explanations are more efficient in reducing fake news spread.

6.1.2. RQ2—How Are Current Opinion Models Being Used to Capture Belief Changes?

The current opinion models used in the reviewed works, which either performed simulations or used real datasets to perform experiments, yielded important conclusions in capturing belief change. Interestingly, the opinion dynamics dimension works contributed to all eight cross-dimensional trends in Table 3. It is important to note how these works help to model the evolution of opinion dynamics, starting from regular social networks into multiple subnetworks of consensus and ultimately into polarization.

The most popular approach for capturing belief change is using NLP SENTANL models in social media posts [178]. The overarching concept is to perform a sentiment temporal analysis [179] of posts before and after an event with the potential to drive belief change to verify sentiment change over a specific subject. In the context of our research, a given fake claim is the subject, and the provided explanation is the event of interest. A secondary approach that has been gaining momentum is the temporal analysis of patterns of emotions associated with social media posts [180]. This approach focuses on performing a lexicon-based analysis measuring valence, arousal, and dominance of social media posts using the VAD Lexicon [181]. The works reviewed also showed how INFLs help drive the evolution of opinion in social networks. It was demonstrated that INFLs are very important in shaping the beliefs of the subnetworks of consensus. Moreover, it was also shown how INFLs who manipulate information for some personal gain seem to have an even greater driving force in the creation of polarized networks. This is critical information for creating explanations that can efficiently change online users’ preconceived beliefs. Since INFLs are key to forging opinions, they can potentially be critical agents to start a snowball effect of belief change toward a well-balanced factual explanation debunking fake news. Therefore, the models that identify social network INFLs are also important to belief change. More specifically, the models that find context-based INFLs that polarize these subnetworks [182] can be used to help focus the application of the explanation on these INFLs, followed by the application of temporal SENTANL models to verify whether the explanation was effective in changing their beliefs.
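
A hedged sketch of the pre/post-event sentiment comparison follows, using NLTK’s VADER analyzer as one off-the-shelf sentiment model. The posts are invented; a real study would pair posts by author, use timestamps, and test the shift statistically, and the VAD-lexicon variant mentioned above would replace the scorer with per-word valence, arousal, and dominance lookups.

```python
# Toy pre/post-event sentiment comparison for a single fake claim.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
sia = SentimentIntensityAnalyzer()

posts_before = [  # posts about the claim before the explanation is shown
    "This miracle cure works, the media is hiding it!",
    "Everyone should try it, doctors are lying to us.",
]
posts_after = [   # posts by the same (hypothetical) users afterward
    "Maybe the evidence for this cure is weaker than I thought.",
    "I am not so sure anymore, the trial data looks bad.",
]

def mean_compound(posts):
    # VADER's compound score summarizes sentiment in [-1, 1]
    return sum(sia.polarity_scores(p)["compound"] for p in posts) / len(posts)

shift = mean_compound(posts_after) - mean_compound(posts_before)
print(f"sentiment shift toward the claim: {shift:+.3f}")
# In this design, a negative shift would be read as weakened belief.
```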

6.1.3. RQ3—What Role Does Polarization Play in Changing Online Users’ Preconceived Beliefs?

The strongest driving forces of belief change were also central to creating or expanding polarized online scenarios. The psychological traits and INFL trends account for 55.5% of all reviewed works in this dimension. These papers confirmed that HOMY, confirmation bias, and trust in the information source are especially influential in belief change within polarized networks. Moreover, the SUBCNS trend, covered by 18.5% of the reviewed works in this dimension, revealed an important characteristic of belief change toward polarization: a condition of polarization can be created very easily in SNSs. The evolutionary process of polarized network formation starts in opinion-segregated networks, advances to multiple subnetworks of consensus, and settles into multiple polarized networks attracted by and formed around INFL agents. Once polarization is established, entrenchment and backfire effects are typical psychoemotional responses of polarized individuals to factual explanations that contradict their preconceived beliefs. These individuals become somewhat immune to fact-based corrective information and, therefore, much more resistant to belief change.

Two important conclusions can be drawn. First, factual explanations must be balanced with enough emotional content to gently nudge these individuals into reflective states and work around entrenchment. Second, changing an INFL’s belief from within a polarized community may trigger a belief change snowball effect. This motivates the hypothesis that a belief-changing approach could combine the two concepts: adding emotional content to the explanation to reduce entrenchment, and focusing and personalizing these explanations on the INFLs of a polarized network.

6.1.4. RQ4—What Alternative Approaches to Offering Fact-Checked Explanations Have Been Pursued by the Literature in an Attempt to Change Preconceived Online Beliefs?

The literature does not seem to offer many alternatives to fact-checked explanations. Some works evaluate whether fact-based explanations are efficient in changing beliefs; however, they do not attempt alternative methods. Even though we could not find a direct answer to this RQ, the search led to an interesting conclusion. The efficiency of purely factual explanations in changing online users’ beliefs was evaluated across different presentation domains: images, audio, and text messages. The studies conclude that factual explanations are inefficient in changing preconceived beliefs in all presentation domains and that a gentler approach should be investigated.

6.1.5. RQ5—How Can the Identified Belief-Changing Approaches Be Generalized to Multiple Belief Domains?

The answer to RQ4 showed the inefficiency of purely factual explanations across all presentation domains, and the same conclusion was reached by works examining different subject domains. This aligns with our hypothesis that explanation personalization can be the nudge that drives users toward a deliberate thinking and reasoning process. The original goal of this RQ was to determine whether the identified belief-changing approaches can be applied generally across multiple fake news domains. As stated, the studies did not reveal alternative methodologies; however, the fact that studies in many different domains recommended nudging people into reflective states indicates that the balanced-explanation approach outlined above may be efficient across domains.

6.2. Grand Challenges and Future Work

This section presents the grand challenges that emerged from the identification of the cross-dimensional trends.

6.2.1. Psychology Research Intersection

This work argues that individuality is the chief driver of online belief change through its exteriorization as personality traits and individual emotional responses. Several psychological constructs were presented as influential in opinion formation. Advancing online belief change and fake news explanation research would require a more mature foundation of psychology research establishing solid relationships between given personality types, emotional responses, and the psychological constructs identified as important for online belief change. Even though the field shows meaningful correlations that should not be dismissed, research in this area is still evolving.

As an example, we have shown that explanations balanced between factual and emotional content carry a higher potential to avoid the entrenchment of preconceived beliefs. However, the same content may elicit different emotions in different explanation recipients. An explanation expected to elicit a positive emotion and nudge the recipient into a reflective state may backfire, generating a counterproductive emotion that drives entrenchment. A deeper understanding of how people of different personalities react emotionally could inform a deeper personalization of the content to maximize positive emotions. Without it, results may be negatively biased if the personalities involved in a method’s evaluation are imbalanced. With these limitations in mind, future research investigating this hypothesis should collect extensive demographic and cognitive preference data to characterize study participants as thoroughly as possible. Such data offer opportunities to cross-correlate emotional responses with individual attributes, which may shed light on unexpected or contrary results.

6.2.2. INFL Detection

This work suggested that targeting well-balanced explanations at INFL agents may cause a positive snowball effect that breaks polarization. This requires identifying these INFLs in what may be a highly segregated network. Research on INFL identification is in its very early stages. Some works focus on this task; however, they currently propose approaches for specific domains, such as marketing [140] and health [183]. Therefore, identifying INFLs in segregated online community networks remains an open research area; a simple baseline is sketched below.
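As a starting point, a generic graph-based baseline can combine community detection with a centrality ranking to surface candidate INFLs per community. The sketch below uses networkx and its bundled karate club graph as a stand-in for a real SNS network; it is an illustrative assumption, not a method proposed by the reviewed works.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Illustrative baseline: detect communities in a segregated network,
# then rank the members of each community by PageRank to surface
# candidate context-local INFLs.
G = nx.karate_club_graph()  # bundled toy graph, standing in for a real SNS
pagerank = nx.pagerank(G)   # global influence proxy

for i, community in enumerate(greedy_modularity_communities(G)):
    # The top-ranked node inside each community is the candidate INFL
    # to target with a well-balanced explanation.
    influencer = max(community, key=pagerank.get)
    print(
        f"community {i}: {len(community)} members, "
        f"candidate INFL = node {influencer} (PageRank {pagerank[influencer]:.3f})"
    )
```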

6.2.3. SENTANL Versus Emotion Recognition

SENTANL was shown to be the preferred method for belief change detection. Even though SENTANL and emotion recognition are sometimes used interchangeably, they are, in fact, very different tasks. SENTANL identifies the polarity of a person’s attitude toward a given subject as positive, neutral, or negative. Emotion recognition, on the other hand, classifies feelings using an emotion model grounded in the psychology of emotions; this is a much more challenging goal and constitutes an entire subfield of affective computing [184]. Emotion recognition could be applied to automatically detect the emotions a given explanation elicits in a given user, providing feedback for improving the explanation generation process. SENTANL can be applied to verify whether a given explanation changed the belief and then cross-referenced with the emotion evoked by the explanation for deeper insight into the belief-changing process.
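The contrast between the two tasks is easy to see side by side. In the sketch below, the same post receives a single polarity score from VADER and a distribution over discrete emotion categories from an emotion classifier; the specific Hugging Face checkpoint named here is only one publicly shared emotion model, chosen as an illustrative assumption.

```python
# pip install vaderSentiment transformers torch
from transformers import pipeline
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

text = "I can't believe they lied to us about this for years."

# SENTANL: a single polarity score in [-1, 1] (negative to positive).
polarity = SentimentIntensityAnalyzer().polarity_scores(text)["compound"]

# Emotion recognition: a distribution over discrete emotion categories.
# The checkpoint below is one publicly shared emotion model; any model
# trained on an emotion taxonomy would serve the same purpose.
emotion_clf = pipeline(
    "text-classification",
    model="j-hartmann/emotion-english-distilroberta-base",
    top_k=None,  # return a score for every emotion class
)
emotions = emotion_clf(text)[0]
dominant = max(emotions, key=lambda e: e["score"])

print(f"polarity (compound): {polarity:+.2f}")  # e.g., strongly negative
print(f"dominant emotion: {dominant['label']} ({dominant['score']:.2f})")
```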

6.2.4. Polarization Prevention

It was shown in this work that initially fragmented networks evolve into subnetworks of consensus and then into polarized networks. Changing individuals’ beliefs in a fragmented network may be easier than in polarized networks. For this to happen, a belief-changing system should attempt to prevent the network from becoming polarized by preempting the network’s natural evolution. Such a preemptive system would need to incorporate real-time fake news detection. Real-time methods are receiving the community’s attention [185]; they aim to identify potentially fake news at a speed compatible with the typical velocity of fake news spread. However, this research field is also in its early stages, offering several opportunities.

6.2.5. Well-Balanced Explanations

This work verified that gentle nudging is more efficient than cold factual counterarguments in changing online beliefs. The field seems primed for explanations other than fact-heavy text. Figure 1 proposes creative explanations as a potential alternative to fact-checked text. Creative explanations can be defined as explanations whose generation involves some level of creative process and whose output can itself be considered creative. Creativity is immediately connected to art, and art arguably sparks experiences that simultaneously engage many aspects of an individual’s mental life, including emotions [186]. Art in this context extends to its multiple domains, such as poetry, music, and painting. Humor has been shown to exhibit patterns of intercorrelation with several measures of creativity [187] and is a vehicle for emotional arousal [188]. In high-level terms, as long as the explanation is anchored in facts to avoid the risk of misinforming the reader, any explanation that evokes some emotional reaction is a valid candidate for investigation.

The hypothesis that well-balanced explanations are more effective than purely factual ones needs to be verified across both nonpolarized and polarized subjects. We expect beliefs surrounding nonpolarized subjects to be less challenging to change. Furthermore, polarization was shown to be correlated across multiple subjects, with politics acting as a centralizing polarization topic. Therefore, it is expected to be much more challenging for a polarized individual to change their political beliefs than their opinion about another polarized subject with less centralizing power.

7. Conclusion

This work presented a systematic literature review of online belief change from the perspective of three dimensions: domain-specific user studies, polarization, and opinion dynamics. We showed evidence that PSOC constructs and emotional arousal are the two main drivers of online belief change. This finding is in line with psychology research, and, given the close relationship of individuality with psychological constructs and emotion, individuality is arguably the single most influential force in online belief change. This finding validates the main hypothesis of this work, which states that personalization of fake news explanations is a needed improvement for fake news systems. It was also shown that all the identified cross-domain trends are rooted in individuality, demonstrating the importance of personalization in changing preconceived beliefs in fake news. Chiefly, the conclusion was that explanations well balanced between facts and emotionally evoking content, capable of nudging people into a reflective state, are the best candidates for the task. We also presented reasons why these types of explanations may be successful across multiple fake news domains.

Polarization was confirmed to be a strong adverse driver of belief change, and we have shown alignment between polarization tendencies and individuality. The entrenchment and backfire effects are two constructs that work against belief change and become especially strong in reaction to purely factual explanations that contradict preconceived beliefs. Polarization was shown to be cross-correlated, with politics arguably being the central polarization pole: polarized individuals with a specific political ideology also tend to be polarized on other subjects such as climate change, immigration policy, COVID-19 response, minority rights, and other sensitive topics. Trust is one of the strongest psychological drivers of polarization. For this reason, INFLs become critical agents of polarization, especially those who purposely manipulate information for some form of personal gain. Furthermore, it was concluded that opinion-segregated social networks evolve through the formation of subnetworks of consensus and ultimately into polarized online social groups. These two findings can potentially be used in favor of fake news debunking systems by delivering well-balanced explanations to the INFLs of polarized networks. Since they are driving forces of opinion formation, changing the preconceived beliefs of just a few of these agents may create a favorable snowball effect toward a network-wide consensus against fake news.

Data Availability Statement

The findings supporting this systematic review are from previously reported studies and datasets, which have been cited. The processed data are available from the corresponding author upon request.

Conflicts of Interest

The authors declare no conflicts of interest.

Funding

This research was supported by Fundação para a Ciência e Tecnologia (FCT) through (1) INESC-ID Multiannual Funding with reference UIDB/50021/2020 (doi:10.54499/UIDB/50021/2020); (2) Grant 2022.09212.PTDC (XAVIER Project) and project UIDB/50021/2020 (doi:10.54499/UIDB/50021/2020), under the auspices of the UNESCO Chair on AI&VR of the University of Lisbon; and (3) CHIST-ERA within the CIMPLE project (CHIST-ERA-19-XAI-003) which corresponds to the FCT reference CHIST-ERA/0001/2019.