Journalism Studies, ISSN: 1461-670X (Print), 1469-9699 (Online)

To cite this article: Michael Hameleers (2024) Why Do Social Media Users Accept, Doubt or Resist Corrective Information? A Qualitative Analysis of Comments in Response to Corrective Information on Social Media, Journalism Studies, 25:7, 776-793, DOI: 10.1080/1461670X.2024.2340591

To link to this article: https://doi.org/10.1080/1461670X.2024.2340591

© 2024 The Author(s). Published by Informa UK Limited, trading as Taylor & Francis Group. Published online: 15 Apr 2024.

Why Do Social Media Users Accept, Doubt or Resist Corrective Information? A Qualitative Analysis of Comments in Response to Corrective Information on Social Media

Michael Hameleers
Amsterdam School of Communication Research, University of Amsterdam, Amsterdam, the Netherlands

ABSTRACT
Although the widespread application of corrective information has been found to lower the credibility of misinformation, there may be important sources of resistance among social media users that potentially limit the effectiveness of fact-checking, warning messages, and community-based verifications. Yet, to date, we lack an inductive and context-bound understanding of users' responses to these different applications, and the reasons why users distrust or avoid corrections online.
Against this backdrop, this paper relies on an in-depth qualitative content analysis of responses to different forms of corrective information on Facebook, Twitter, and TikTok. The study's main findings inform a typology of resistance consisting of (1) expressing doubts about the selection biases of corrective information; (2) challenging the evidence and conclusions of corrective information; (3) blaming the correction for being biased and/or partisan; and (4) labeling the correction or intervention as disinformation itself. The implications for journalism practice and content moderation are discussed.

ARTICLE HISTORY: Received 9 September 2023; Accepted 3 April 2024

KEYWORDS: Corrective information; content moderation; disinformation; fact-checking; misinformation; social media

CONTACT: Michael Hameleers, m.hameleers@uva.nl

This is an Open Access article distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives License (http://creativecommons.org/licenses/by-nc-nd/4.0/), which permits non-commercial re-use, distribution, and reproduction in any medium, provided the original work is properly cited, and is not altered, transformed, or built upon in any way.

The inadvertent (misinformation) or deliberate (disinformation) dissemination of falsehoods has been regarded as an important threat to democracies across the globe (e.g., Clayton et al. 2020; Lewandowsky et al. 2012). To mitigate the harms of mis- and disinformation, different journalistic platforms and organizations have increased their efforts to counter or prevent the effects of false information, for example, by engaging in fact-checking (e.g., Amazeen 2015). As misinformation is often disseminated via social media (Bridgman et al. 2020), social media platforms such as Facebook and TikTok are also held accountable for the correction of misinformation on their platforms (e.g., Clayton et al. 2020), as also explicated in the 2022 Code of Practice. Platforms can, for example, respond to misinformation by promoting external fact-checks, adding false flags to suspicious content themselves, or allowing users to add context to claims or mark content as suspicious. Despite the widespread application of these different types of corrective information, we lack an understanding of users' responses to these different applications (but see Brandtzaeg and Følstad 2017). Building further on the findings of Brandtzaeg and Følstad (2017), this paper looks beyond fact-checking as corrective information, also analyzing non-journalistic formats of corrective information. Such practices can be referred to as the embedding or re-purposing of corrective information by platforms themselves, which includes users' own verification interactions online (e.g., Bélair-Gagnon et al. 2023). This paper thus acknowledges that not all misinformation interventions are designed or supported by journalists, and that it is not the sole responsibility of journalists and experts to moderate or verify content (also see Gillespie 2023). To comprehensively map responses to corrective information by both journalistic and non-journalistic sources, I raise the following central research question: How do social media users respond to different formats of corrective information? Based on the inductive findings, I offer recommendations for journalists, fact-checkers, social media platforms, and other stakeholders involved in pre- or debunking misinformation on social media. Although corrective information has been found to be effective in the controlled environment of several experimental studies (e.g., Walter et al. 2020), it may be avoided or resisted by the groups in society that need corrections most (Thorson 2016).
Especially on social media, attitude-confirming fact-checks are much more likely to be selected and shared than fact-checks that oppose people's existing views (Shin and Thorson 2017; Walter et al. 2020). In light of this, other formats of corrective information can offer a viable alternative. Such interventions may include pre-bunking interventions that warn people about the threats of mis- and disinformation (see e.g., Roozenbeek and Van der Linden 2019), but may also comprise content moderation practices initiated by social media platforms themselves. Based on this variety of corrective information encountered online, I distinguish between two major categories: non-journalistic formats of corrective information and fact-checking by journalistic platforms operating independently of social media platforms. Yet, to date, we know markedly little about why social media users accept or reject corrective information, and whether the different formats of corrective information available to citizens online are accepted or resisted for the same reasons. In order to better understand the nature of people's acceptance, avoidance, or resistance to corrective information, this study relies on a qualitative content analysis of responses to corrective information on social media. Hence, although Brandtzaeg and Følstad (2017) have arrived at an important inventory of themes and sentiments related to various fact-checking responses on social media, we lack an understanding of the specific reasons for accepting or resisting different approaches to corrective information. By offering an in-depth understanding of the types and motives of acceptance of and resistance to corrective information in users' comments, this paper aims to introduce a typology of responses to corrective information.
Beyond offering a theoretical overview of the user side of corrective information, the typology aims to be useful for journalism practice and social media interventions that need to overcome different sources of resistance to corrective information among audience segments vulnerable to misinformation.

Theoretical Framework

Corrective Information on Social Media

Corrective information can generally be understood as all forms of information used to debunk misinformation or warn people about the presence of misinformation. Corrective information can be presented in various formats across social media platforms (e.g., Clayton et al. 2020). In this paper, I first of all focus on social media users' responses to fact-checking (e.g., Amazeen 2015). Fact-checks can generally be defined as corrective messages that offer an evidence-based falsification or verification of dubious or suspicious claims circulating on different platforms (e.g., Amazeen 2015). Such information may be embedded in traditional journalistic coverage, or presented on separate independent platforms, such as Politifact.com. Fact-checking information is mostly presented in the form of a short journalistic article in which a clear verdict or rating of (mis)information is presented, alongside an elaboration on the reasons why information was found to be (un)true (e.g., Lewandowsky et al. 2012). In recent literature, there seems to be a growing consensus that exposing people to fact-checking information is effective in reducing misperceptions (e.g., Walter et al. 2020). Such information can be effective in correcting factual beliefs by offering a concrete and evidence-driven verdict on the degree to which claims are (un)trustworthy (Lewandowsky et al. 2012). In this way, fact-checking information may be successful in raising people's suspicion toward information, motivating a critical assessment of message arguments.
In that sense, a fact-check message may act as a prime of suspicious content, motivating people to systematically process messages and assess their veracity critically. Despite its effectiveness, fact-checking information has also been subject to critique (e.g., Uscinski and Butler 2013). More specifically, the selection process of dubious or check-worthy claims is not always transparent and may show a (partisan) bias (Uscinski and Butler 2013). In addition, by offering an attack on people's beliefs or identities, fact-checking information may be avoided or resisted by users whose beliefs and identities resonate with the statements voiced in the corrected misinformation (e.g., Thorson 2016). Primig (2022) further illustrates that cynical attitudes toward established information sources can spill over to corrective information. This is also reflected in the findings of Lyons et al. (2020): fact-checkers are more likely to be viewed negatively among people with anti-elite attitudes. Situated in the polarized two-party context of the U.S., Robertson, Mourão, and Thorson (2020) further show that liberal participants were more likely to trust and rely on corrective information than conservatives. Based on a survey in Argentina, Aruguete et al. (2023) demonstrate that confirming fact-checks verifying true content are more likely to be shared than disputed tags. Taken together, the real-life effectiveness of corrective information may be subject to various sources of resistance, for example, related to people's partisan beliefs or overall cynical attitudes toward established information. Against this backdrop, (social) media users may display various reasons for accepting or rejecting corrective information. However, to date, we lack a systematic overview of how people respond to different forms of corrective information, and the specific reasons they mention for (not) trusting corrective information (but see e.g., Brandtzaeg and Følstad 2017; Walter, Edgerly, and Saucier 2021).
The findings of the content analysis of online public spaces by Brandtzaeg and Følstad (2017) suggest a distinction between supportive and unsupportive responses to fact-checks. Specifically, in response to fact-checks by Snopes, FactCheck.org, and StopFake, they distinguished between usefulness, ability, benevolence, and integrity as key themes, which had both positive and negative indicators. The content analysis revealed that responses to FactCheck.org and Snopes were mostly negative, as online users emphasized their partisan (left-wing) bias and dishonesty. Moving beyond such pre-defined categories of support or opposition, this study will inductively and qualitatively map responses to different formats of corrective information on platforms varying in affordances and audiences (TikTok, Twitter, Facebook). Thus, responding to calls in the literature to offer a more fine-grained distinction in fact-checking motivations beyond directional and accuracy goals (Walter, Edgerly, and Saucier 2021), I aim to map audience responses and motivations related to corrective information as comprehensively as possible. This is especially relevant in a communication context of high distrust in established information sources (Newman et al. 2023) and relativism toward factual information (Van Aelst et al. 2017). Hence, the themes found by Brandtzaeg and Følstad (2017) may not comprehensively map support for or resistance to corrective information when applied to a context of growing uncertainty and declining trust related to factual information. I therefore raise the following research question: How do media users respond to fact-checking information shared on social media, and to what extent do their responses reflect acceptance or resistance to corrective information? (RQ1).
Beyond Fact-Checking: Journalistic Versus Non-Journalistic Corrections

In this paper, I also consider responses to mis- and disinformation or pre-bunking interventions that are independent of journalistic interventions (e.g., Gillespie 2023). These interventions may be captured by the category of non-journalistic corrective information, which comprises community-oriented approaches involving users' own peer-to-peer interactions (Margolin, Hannak, and Weber 2018) and third-party fact-checking interventions or community notes that are part of the content moderation activities of platforms themselves (Bélair-Gagnon et al. 2023). As such initiatives become increasingly salient under new Codes of Practice, it is important to consider how users perceive and respond to interventions that are used by platforms themselves, mostly to add context or pre-bunk misinformation by warning about its potential spread. This paper regards pre-bunking initiatives as a wider category of warning messages or labels that pre-warn about the occurrence of misinformation surrounding certain topics, such as Covid-19 or the Russian invasion of Ukraine. Such warning labels, which offer suggestions on how people may recognize false information, are generally found to be effective in helping people to resist misinformation and detect falsehoods (e.g., Hameleers 2022). Next to pre-bunking warning flags or media literacy interventions, platforms such as Twitter have also allocated responsibility and control to users themselves. Specifically, Twitter has developed a community-based form of fact-checking that allows ordinary users to add context to claims they find dubious or suspicious, which is then also revealed to other social media users (Cotter, DeCook, and Kanthawala 2022).
Although such forms of fact-checking and flagging suspicious content are suited to the nature of content moderation by platforms (e.g., Gillespie 2020), they are not directly in line with the principles of independent, objective, and verified fact-checking (e.g., Amazeen 2015). Specifically, community members are not required or expected to adhere to the norms of objectivity, balance, and a thorough evidence-driven investigation into the facticity of flagged content. Driven by their position as commercial platforms and disseminators of a diverse set of voices (e.g., Gillespie 2020), platforms such as Twitter, Facebook, or TikTok may rely on different non-journalistic tools that are intended to warn people about false information, for example, by adding context, pre-warning about misinformation, or relying on community-based verification messages or moderation (Cotter, DeCook, and Kanthawala 2022). Yet, to date, we know little about how the embedding of different formats of corrective information endorsed by social media platforms is received by users. It can be argued that the less explicit recommendations and refutations offered may prime less resistance compared to fact-checks, which are more likely to be distrusted among users who tend to have negative attitudes toward the news media (e.g., Primig 2022). In addition, based on extant literature that has identified a clear ideological bias in responses to traditional fact-checking (e.g., Robertson, Mourão, and Thorson 2020), it can be argued that non-journalistic interventions that warn people about false information or flag suspicious content are less likely to be rejected or perceived as attacks on people's beliefs. After all, they do not directly debunk attitude-consistent claims, but rather underline the importance of critically verifying information.
Especially as the European 2022 Code of Practice on Disinformation has allocated more responsibility to social media platforms to regulate and monitor disinformation spread via their platforms, it is important to consider how such non-journalistic initiatives are received by users. I therefore raise the following research question on responses to different formats of corrective information on social media platforms: What are the differences in responses to non-journalistic formats of corrective information on social media beyond traditional fact-checking? (RQ2).

Method

Data Collection and Sample

To answer these research questions, this study relies on a theoretically motivated and diverse sample of comments on different formats of corrective information published on Twitter, Facebook, and TikTok. These platforms were selected as they cater to different audiences and apply different formats of corrective information. To analyze responses to journalistic fact-checking information, I rely on a diverse sample of disinformation narratives that were debunked by Politifact.com and Snopes.com. A disinformation narrative consists of an inaccurate, false, or deceptive storyline that was flagged as incorrect or misleading. A narrative can consist of different statements that were together represented in the corrective information (i.e., fact-checkers often refer to "various" online messages or social media posts with similar statements that are part of a wider disinformation narrative). In the inclusion criteria, I aimed to incorporate a sample of disinformation narratives balanced regarding ideology (see Robertson, Mourão, and Thorson 2020)—although most corrections responded to right-wing or conservative issue positions. I additionally aimed to include corrections of various topics and positions, including COVID-19, immigration, climate change, and armed conflicts.
Most fact-check articles dealt with health mis- and disinformation surrounding COVID-19, followed by disinformation (i.e., decontextualized images) on armed conflicts. Climate change mis- and disinformation were also often debunked. In this case, the corrective information mainly contrasted climate-skeptic interpretations with expert consensus. The two fact-check platforms were chosen as they were available for all three social media platforms, which enhanced comparability. In addition, their fact-checking is seen as acting independently from bigger (partisan) media brands, and both platforms comply with the standards of the International Fact-Checking Network (IFCN). Although the impartiality and neutrality of fact-checking platforms has been a point of discussion in the literature (e.g., Uscinski and Butler 2013), I regard both platforms as relevant to consider in light of official efforts to respond to misinformation online. As they focus on similar (partisan) forms of mis- or disinformation with a political slant, I regard them as likely cases of corrective information in the political setting of the US. For case selection, I used the official accounts of these fact-checkers across all three platforms. To collect users' responses, I initially sampled 25 fact-checks across a wide diversity of issues, spanning the last six months of 2022 and the first six months of 2023. These fact-checks responded to different forms of disinformation and suspicious content that used different types of deception (i.e., decontextualized images, fabricated claims, hyper-partisan attacks on opposed political parties).
Diversity and a variety of disinformation and corrective information were regarded as the central inclusion criteria for corrective information messages: although most fact-checks and corrections refuted right-wing and conservative disinformation, I also included responses to corrections debunking liberal or left-wing issue positions (about 25% of the sample). Given the timing of the sample frame, fact-checks on the Russian invasion of Ukraine and Covid-19 were most prominent (65%). However, I also included corrective information relating to climate change disinformation, immigration, crime, and political events/elections. For each corrective message that was selected, if available, the ten most popular responses were considered and coded selectively (i.e., only comments that were relevant for the research questions were analyzed in depth). This means that the responses that were placed first and received the most engagement (in terms of likes, comments, and/or shares) were considered, as these are also the most visible responses for other users. It is relevant to note that levels of engagement with corrective information vary widely between the selected platforms. On Twitter, most fact-check messages only received between five and ten comments (which also means that all comments were regarded in the analyses in most cases). On TikTok, however, some fact-checks on partisan disinformation received as many as 6,000 comments. In selecting the ten most popular responses, I thus only included comments with high levels of engagement. Across the board, responses to fact-checking information occurred most on TikTok, and least on Twitter. Although the purpose of the sampling procedure was not representativeness, I checked for theoretical saturation by comparing the analyses of the responses to 25 fact-checks per social medium (3) and fact-check platform (2) (N = 150) to an additional sample of five fact-checks per platform.
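As an illustration of the selection rule described above (the ten most-engaged responses per corrective message, or all responses when fewer exist), a minimal sketch is given below. The comment structure and field names ("likes", "replies", "shares") are hypothetical and not taken from the study's actual workflow:

```python
# Illustrative sketch only: ranking the comments on a corrective message
# by total engagement. Field names are invented, not from the study.

def top_responses(comments, n=10):
    """Return the n most-engaged comments; when fewer than n exist
    (as on Twitter, where many fact-checks drew only five to ten
    replies), all comments are returned."""
    def engagement(comment):
        return (comment.get("likes", 0)
                + comment.get("replies", 0)
                + comment.get("shares", 0))
    return sorted(comments, key=engagement, reverse=True)[:n]

comments = [
    {"text": "agrees with the fact-check", "likes": 5, "shares": 1},
    {"text": "accuses the fact-checker of bias", "likes": 120, "replies": 14},
    {"text": "asks for the original source", "likes": 30},
]
top = top_responses(comments, n=2)
```

Ties and missing engagement counts would need platform-specific handling in practice; the sketch only conveys the ranking logic.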
The comparison of the emerging themes with the additional sample of data did not reveal new insights into fact-checking responses. On Facebook and Twitter, I additionally sampled responses to community-driven verification (Twitter) and warning messages about false content (Facebook). These corrective information messages covered the same time period as the journalistic fact-checks, and the same inclusion and exclusion criteria were used. Thus, there was a strong overlap in topics (Covid-19, armed conflicts, and climate change were prominent). I additionally kept a similar balance in the ideological bias of the corrective information, although some warning messages were formulated in more general terms. For Facebook, for example, I included warning labels that were integrated with independent fact-checks as well as warning messages with more general tips and tricks on how to recognize disinformation. As non-journalistic corrective information was less systematically covered in the selected time period on TikTok, I only focused on responses to traditional fact-checking on this platform. On Twitter, I specifically analyzed responses to 25 instances of user-driven verification, also known as "Birdwatch." To trace Birdwatch-rated information on Twitter, the phrase "Readers added context" was used to find relevant items of user-driven verification. I used the same data collection approach as for responses to traditional fact-checks: all responses were included and selectively coded when they were deemed relevant for the research questions (i.e., they reflected relevant meanings related to acceptance, resistance, or another relevant response to the corrective information).
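The Birdwatch retrieval step described above, searching for the phrase "Readers added context" to locate user-driven verification, can be sketched as follows. This is a simplified illustration under assumed inputs; the item structure is invented and no Twitter API specifics are implied:

```python
# Hypothetical sketch of the Birdwatch retrieval step: keeping only
# items whose visible text contains the study's search phrase.

SEARCH_PHRASE = "readers added context"

def is_birdwatch_item(item_text):
    """Case-insensitive check for the community-note banner text."""
    return SEARCH_PHRASE in item_text.lower()

items = [
    "Readers added context they thought people might want to know",
    "An ordinary reply without a community note",
]
birdwatch_items = [t for t in items if is_birdwatch_item(t)]
```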
I did not consider responses that did not clearly relate to the content of the mis- or disinformation or the fact-check message (i.e., they reflected partisan positions not related to the content, they contained hostile sentiments that were unrelated to the message, they referred to a different context, or they promoted products or ideas that were unrelated to the message).

Analysis

The responses to corrective information were analyzed according to the three steps of data analysis central to the Grounded Theory approach (e.g., Braun and Clarke 2014; Charmaz 2006). I deviate from a full Grounded Theory approach in at least three respects (also see Braun and Clarke 2014 on Grounded Theory "lite"). First of all, the comments to social media posts were coded selectively instead of line-by-line. Selective coding was guided by the research questions aiming to understand people's support, resistance, or negotiated meanings of the corrective information's verdict. Although these sensitizing concepts are relatively open and broad, and therefore do not directly limit the inductive analytical vision, they do restrict the analysis to responses that in some way or form refer to the corrective message, the fact-checker, or the misinformation. Second, I did not engage in theory building in the strictest sense. Third, I did not follow the principles of a constant comparative analysis. Although I checked for theoretical saturation by comparing emerging themes from an initial round of data collection to another sub-sample of responses to corrective information, I did not constantly move back and forth between new raw data and emerging themes; instead, I opted for a more structured, traceable, and replicable approach to coding. The analyses started with open coding. During this step, the principal investigator labeled relevant segments of comments to social media posts.
These labels were unstructured and kept as open as possible but were guided by the research question and sensitizing concepts aiming to map variety in responses to corrective information and its acceptance and rejection. An example of an open code is "Rejecting correction: User refers to inconsistencies in the evidence forwarded by the fact-checker." After open coding, the long list of descriptive labels (more than 250 unique codes) was inspected and relabeled for clarity and parsimony (similar codes were merged and raised to higher levels of abstraction). For this step, the principal investigator and author of this paper consulted a second researcher, and full agreement between researchers was reached on the final list of codes resulting from this step. The focused coding step further entailed the grouping of codes into clusters that captured variety regarding emerging themes. Codes related to the dimension of confirming the correction were grouped, as were codes related to resistance and distrust. Although the data did not reflect a sub-structuring of themes related to approving or confirming corrective information, there was still large theoretical variation within the category of resistance and distrust. Based on the variety in the data, I therefore sub-divided this major theme into different dimensions (i.e., disinformation accusations, fact-based disagreement, partisan resistance). These dimensions formed the building blocks of the main themes discussed in the results and reflect the variety in the extent to which corrective information is (dis)trusted and rejected. The final axial coding step was used to specify the relationships between themes, considering that the various dimensions of resistance related to each other in their emphasis on distrust and cynicism.

Validity and Reliability

I believe that it is important to reflect on the potential biases of the analyses, and the measures taken to enhance the quality of the findings reported in this paper.
First of all, a structured computer-assisted approach to data analysis was followed. By documenting the entire selective coding procedure and by keeping track of emerging patterns and themes through field notes, I believe that the findings offer a close representation of reality, which is further enhanced by the thick descriptions and in-vivo quotes reported in the results. More formally, although most of the initial coding was done by one researcher (the author of the paper), a second researcher was consulted for all three steps of coding and analysis, and actively contributed to the focused coding steps in which descriptive labels were interpreted and raised to a higher order of analytical abstraction. At the start of the analyses, a small sub-sample of all comments (5%) was coded together by both researchers, which ensured consistency in the labeling of segments of text and the interpretation of comments. Both researchers also independently coded ten comments from different fact-checks that were not included in the analyses. The coding was compared, and there were no substantial differences (apart from the specific labels used). Then, during focused coding, the merging of open codes, the grouping of codes, and the labeling of themes were done collaboratively, which ensured that the two coders agreed on the classification of codes and findings into the major themes discussed in the results section.

Results

The Reinforcement or Confirmation of Corrective Information

One prominent response to corrective information was to voice agreement with the fact-check message or confirm the intervention's recommendation. In response to the "readers added context" feature on Twitter, for example, users endorsed the corrective information by quoting and re-tweeting the warning message, also motivating other users to interpret misleading information with a critical perspective.
In these responses, users emphasized the agency and responsibility of social media users to conduct their own investigation: "Read the readers added context and do your own research." In response to fact-checking information, many users responded by confirming the importance of facts and reliable sources of information and were thankful for the correction offered. Other confirmations were more politicized and informed by partisan beliefs. This can be exemplified by a response to a Tweet containing a fact-check by Snopes: "Not surprising. Trump lies as much as he talks. It's an amazing skill." Similar responses were found in response to TikTok fact-checks that falsified claims by the former president Donald Trump: "Of course it's false. Trump said it. Alternative facts President." Although these comments signify the reinforcement of corrective information, they may mostly be informed by the need to reassure and confirm (partisan) identities rather than the motivation to arrive at an accurate assessment of the validity of political claims—which corresponds to the partisan and ideologically colored interpretation of fact-checking information found in other studies in the US (e.g., Shin and Thorson 2017). Expressing acceptance of corrective information did not always mean that users found it helpful or effective in the fight against misinformation. Many users pointed out that other people who are less resilient to misinformation may not see or believe the correction, which makes the refutation of especially partisan misinformation a complicated task. This corresponds to extant research stressing the crucial role of selective exposure and the partisan sharing of fact-checks in response to misinformation (e.g., Shin and Thorson 2017).
As one user replied in response to a Snopes fact-check on Twitter refuting a manipulated image of Donald Trump allegedly rescuing kittens from a flood: "Unfortunately the people stupid enough to believe this lazy sack of shit would lift even a pinky finger to help someone else aren't going to believe Snopes." This pessimistic outlook underscores and recognizes the increasing levels of distrust related to established media formats and authoritative sources of knowledge in the US (e.g., Newman et al. 2023): in a setting where most people do not trust the media or other established information sources, it is highly unlikely that the verdicts of platforms that attack people's partisan beliefs reach people who are vulnerable to deceptive messages spread via alternative or hyper-partisan media. The findings suggest that social media users mostly accepted corrective information in the context of messages posted by fact-checking platforms. However, acceptance or reinforcement was often driven by motivated reasoning: people mainly voiced support for corrections that confirmed their partisan stance, for example, when they refuted the political statements of a politician they strongly opposed and characterized as a peddler of disinformation. Content moderation by the platforms themselves was found useful at times, but also criticized for lacking clear recommendations and verdicts. Here, acceptance was constructed less in terms of congruence between the correction and people's own viewpoints, and more in terms of the usefulness of interventions to help people navigate online information.

Corrective Information Flagged as Disinformation

Across all three platforms and forms of correction, one central and common theme indicating resistance to corrective information is the categorization of the correction as dishonest and deliberately false.
This mostly occurred in response to fact-checks and corrective messages that refuted partisan disinformation with a clear political slant, for example, related to Covid-19 or immigration. Social media users frequently labeled fact-checks as a form of disinformation, or a false label allegedly used to legitimize an attack on opposed political views. Explicitly labeling corrective information as “fake news” or using other terms to indicate disinformation was most prevalent in the context of political fact-checkers that debunked partisan news (i.e., on Biden or Trump). These labels were most prominent on Facebook. Disinformation accusations were also frequently voiced in the context of Covid-19. As one user replied to a fact-check debunking Covid-19 misinformation tweeted by Snopes: “I think your entire organization is baseless. Produce some real journalism and leave the opinions to the people.” This social media user accused Snopes of spreading opinions instead of facts, and explicitly contrasted the efforts of this platform with “real” journalism. Even more explicitly, also in response to a fact-check on Covid-19 misinformation, another user responded by equating Snopes with a Soros-funded disinformation channel: “If you say it, it must be true. Soros funded = disinformation.” Together, these findings show that the weaponization of “fake news” and disinformation often associated with mainstream media (e.g., Egelhofer and Lecheler 2019) also applies to corrective information from platforms. Especially when corrective information challenges partisan issue positions, it may be vulnerable to delegitimizing labels expressed by social media users. Such accusations were often uncivil and hostile in style. This can be illustrated by the following response to a post by PolitiFact: “No one believes politifake anymore. You liars fact checked Russia, Ukraine and covid lies for years. You are a pedophile telling us you don’t like kids.
No one believes you.” By referring to the platform as “fake” and by stressing that they are “liars” in the context of various salient issues, many users delegitimized corrective information. Similar responses were voiced by people distrusting Snopes: “Because you can trust Snopes like you can trust a wet fart.” Social media users often referred to fact-checks or the initiatives of Facebook and Twitter as “bullshit” and expressed their anger and frustration with the platforms they perceived to be peddlers of fake news themselves.

Correction is Seen as Unclear, Inaccurate, or Insufficient

Next to more extreme accusations of disinformation and deliberate deception, many users blamed platforms and fact-checkers for being unclear, inaccurate, or incomplete. This accusation reflected a form of criticism that was based on the content of the correction, or the arguments used in the verdict. In light of this theme, many users stressed that the “added context” feature of Twitter may be insufficient in correcting misinformation, as it does not forward a clear verdict that would label content as false or deceptive information. Also in response to other non-journalistic formats of pre-bunking or content moderation with warning flags (i.e., on Facebook), many users stressed that the verdicts were not clear enough, and that the warnings should be more explicit in labeling disinformation as inaccurate, deceptive, and false. As an example, in response to a Tweet stating that Trump should not be prosecuted, users added context that the felony charges were actually not about his conduct when he was president. According to one user, the correction should not just add context but explicitly correct lies: “That shouldn’t say readers added context; should say readers have corrected his lies.” These findings highlight that platforms’ distance from explicit correction and content moderation (e.g., Gillespie 2010) may not be unequivocally supported by social media users.
Users critically engaged with corrections and asked for a more active and interventionist stance on filtering out biased, dishonest, or inaccurate representations of reality. This source of resistance was much less prevalent in response to journalistic fact-checking. In response to other forms of more traditional fact-checking information, some users explained that fact-based corrections and offering statistics as evidence also come with limitations, as statistics on their own may not contain an objective or neutral representation of reality. This can be exemplified by a response from a user responding to a Tweet by Snopes that fact-checked the claim that guns kill young people: “Statistics can be manipulated to say just about anything … but yeah, kids suffer from firearm deaths pretty harshly.” Although the correction was not rated as false, the user responded critically by highlighting that offering statistics does not equal a representation of truth. Thus, just reporting on statistics without contextualizing or explaining their validity was seen as insufficient evidence for the verdict that a claim was totally false. Other users more directly challenged the accuracy of the verification. Although many users did not label the corrective information as disinformation, they tended to respond critically to the refutation by challenging the extremity of verdicts. This can be exemplified by a response to a TikTok fact-check by PolitiFact debunking Trump’s claim that 99% of all youth recover fully from Covid-19: “99% among youth. 96% among elderly. So, isn’t he technically right, Trump just worded it erroneously?” Another example from the same platform further illustrates how social media users challenge the verdict of fact-checkers by contradicting specific claims of the verification: “Which party ran on “defund the police”? 
Budget cuts doesn’t equal defund the police—Biden’s statement is false.” In this case, PolitiFact stated that “Joe Biden was mostly right in saying that President Trump wants to cut local police aid.” Some users explicitly pointed out that they did not reject the verdict of the fact-checker, but rather called for more evidence to back up verdicts. As one user responded to a fact-check by Snopes related to Covid-19 denying the claim that there are lethal side-effects of the vaccine: “Really? I’m not saying Snopes is wrong, and yes, I am vaccinated (including booster), but has an autopsy been released? Please post the results to back up your claim.” Other users were more critical, and although they did not label the correction as disinformation, they doubted the verdict’s accuracy because “something was off” without knowing the true facts themselves: “Something was off with her neck & just because you say there it isn’t, doesn’t mean anything. We can clearly see it’s not normal. I have never seen a person’s neck look like this ever, not even fat people. Sorry, you’re wrong. I’m not saying I know what it is but it’s not normal.” The findings in general indicate an important difference between the rejection of corrective information (i.e., through a “fake news” label) and voicing critique on the accuracy, impartiality, or completeness of the verdict. Most users did not explicitly label corrective information as a lie or disinformation, but rather challenged the seemingly neutral, objective, and complete line of argumentation presented to them. The verdicts of traditional fact-checks were often questioned on the level of the evidence offered for debunking causal claims, whereas community notes or other formats of non-journalistic content moderation initiated by platforms were regarded as too vague or unspecific in the recommendation forwarded.
To answer RQ2, the main difference between formats is thus the source of criticism: Fact-checkers were criticized for the (inaccurate or imprecise) match between claims and evidence, whereas other non-journalistic formats such as community notes were mostly criticized for the recommendation, and not the evidence.

Questioning the Selection of Dubious Claims

Across platforms, users were not just critical about the actual verdict and its factual basis, but also expressed doubts related to the selection of claims that were responded to. This form of criticism was not often associated with community notes or other non-journalistic forms of content moderation and pre-bunking. Generally, there were two different ways in which doubt was expressed: (1) the claim was seen as so clearly false or useless that it should not even be fact-checked and (2) the platform was accused of being biased toward conservative misinformation. The following response to a Snopes fact-check shared on Twitter about the image of Christ being revealed by salt on a table exemplifies the first critique, on the selection of irrelevant claims: “Lmao!!!! Does this really have to be fact checked? Hahahahaha.” Yet, users did not always blame the fact-check itself; some also blamed the alleged lack of critical thinking of other people: “That this needed a fact-check is extremely telling of how gullible people can be.” In line with this, irrespective of their agreement with the correction, many users stressed that fact-checkers focused on useless claims to fact-check and voiced the critique that they should devote their resources to more relevant claims that can help society to move forward. Fact-checkers were also accused of selectively fact-checking conservative misinformation or claims that favored Democrats and delegitimized people with different viewpoints.
As one user replied to a PolitiFact fact-check on Twitter in response to a correction of an anti-vaccination claim: “The things you guys choose to fact check (and those you ignore) are incredibly one-sided.” This theme was very prominent across all platforms and indicates that many users distrust fact-checkers as they may have a partisan bias in the claims they decide to refute, which is not in line with journalistic principles of impartiality, balance, and objectivity. Such partisan biases and accusations were less clearly voiced in response to community notes or general warning messages.

Correction Primes Partisan Responses and Accusations of a Hostile Fact-checker Bias

Many users not only expressed distrust in the selection of claims to check, but also felt that the platform, the quoted evidence, and the verdicts communicated to users were informed by a partisan bias. This hostile fact-checker bias was prominent in the responses across all platforms, and mostly occurred in response to critical fact-checks with a clear refutation of political statements. As one Twitter user mentioned in response to a Snopes fact-check: “I don’t trust Snopes. Lost me a long time ago. Biased and politicized.” Other users also accused this platform of demonstrating a left-wing bias, which also meant that they could not be considered independent fact-checkers: “Says the leftist, Soros funded fact checkers. LOL. Sure. She had 2 injections and 2 boosters. There is EVERY need to ask the question.” Similar responses were found in response to fact-checks by PolitiFact. This platform was often blamed for showing a bias against former president Donald Trump: “Politifact can’t say Trump’s claim is false yet writes many words to tell you why Politifact doesn’t like Trump.”

Table 1. Overview of main themes and their context.
Theme: The Reinforcement or Confirmation of Corrective Information
Example: “Thank you, PolitiFact. I still believe that truth and facts matter”
Formats: Journalistic: regular fact-checks, nonpartisan refutations, congruent fact-checks. Non-journalistic: community-based verification.

Theme: Corrective Information Flagged as Disinformation
Example: “You liars fact checked Russia, Ukraine and covid lies for years. You are a pedophile telling us you don’t like kids. No one believes you.”
Formats: Journalistic: strong refutations of partisan disinformation across all platforms; more hostile and uncivil comments.

Theme: Correction is Seen as Unclear, Inaccurate, or Insufficient
Example: “Statistics can be manipulated to say just about anything … but yeah, kids suffer from firearm deaths pretty harshly.”
Formats: Journalistic: responses to regular fact-checks with unclear verdicts; more critical rather than cynical responses. Non-journalistic: community notes that did not offer a clear verdict or recommendation on veracity.

Theme: Questioning the Selection of Dubious Claims
Example: “The things you guys choose to fact check (and those you ignore) are incredibly one-sided.”
Formats: Journalistic: mostly in response to fact-checkers; related to usefulness and alleged partisan bias of claim selection.

Theme: Partisan Responses and Accusations of a Hostile Fact-checker Bias
Example: “I very much respect this account, but it seems to be biased. It seems to have a more democrat ideology.”
Formats: Journalistic: responses to critical verdicts of partisan disinformation.

The response to PolitiFact quoted above illustrates how challenging the verdict of a fact-check was often informed by partisan or ideological motives: Users expressed doubts on the conclusions and evidence forwarded by corrective information when their own positions and partisan leanings did not align with the fact-checking message.
As another example that reveals how “bias” is explicitly mentioned in response to corrective information, one user responded to a TikTok fact-check by PolitiFact that blamed Trump for lying by emphasizing the biased and partisan nature of claims made by the platform: “I very much respect this account, but it seems to be biased. It seems to have a more democrat ideology.” In Table 1, the different forms of resistance and support identified in response to different formats of corrective information are summarized.

Discussion

Based on a qualitative content analysis of responses to corrective information across Twitter, Facebook, and TikTok, this paper has offered novel insights into the types of resistance that are voiced in response to corrective information, including journalistic responses and non-journalistic interventions used by platforms engaging in content moderation. Based on the inductive findings, I suggest the following typology of resistance to corrective information: (1) casting doubt on the selection biases of fact-checkers; (2) challenging the evidence and conclusions of corrective information; (3) blaming the correction for being biased and/or partisan and (4) labeling the fact-check as a disinformation source. Importantly, and answering RQ2, although all forms of critique were found in the context of traditional fact-checks, non-journalistic corrections were mostly associated with criticism on the clarity of the verdict and the lack of explicitness in calling false content disinformation or fake news. These non-journalistic formats were less likely to be the target of partisan responses, however, although their usefulness was often questioned. As such, although non-journalistic practices of content moderation do not trigger the same partisan resistance, the lack of perceived usefulness in informing people’s verdict on the veracity of information may cast doubt on their potential.
At least in the format of current applications, their guidelines may not be concrete enough to offer guidance to media users who are already uncertain about how to discern true from false information. In line with literature critiquing the neutrality and impartial selection routines of fact-checkers (Uscinski and Butler 2013), many users stressed that corrective information may disproportionally check conservative claims, leading to a distorted image of mis- and disinformation. As a second type of resistance, skeptical and critical social media users doubted the factual claims and validity of fact-checkers’ verdicts or the conclusions and suggestions forwarded by community notes and pre-warning messages—stressing that their evidence or verdict was incorrect, incomplete or unsubstantiated. Importantly, this type of resistance is in line with perceptions of misinformation, and not disinformation. More specifically, users did not question the intentions of the fact-checkers or platforms to arrive at accurate conclusions but were suspicious regarding the ways in which evidence was used to arrive at a verdict on the level of untruthfulness. Crucially, holding a critical perspective on fact-checkers’ or platforms’ efforts is not necessarily problematic. Fact-checkers such as PolitiFact even emphasize the need for people to think critically and use their own frame of reference to interpret claims (Amazeen 2015). Given the complexity of political claims and their resonance with specific socio-political contexts (Vinhas and Bastos 2022), it can also be noted that different perspectives on seemingly factual statements can exist (i.e., something may be true under one condition, but false in another context). Therefore, critical perspectives on fact-checks may be part of a healthy democratic debate, where engaged citizens critically interpret the information presented to them.
The two other types of resistance highlight more cynical forms of rejecting corrective information, which also question the intentions of fact-checkers to arrive at accurate conclusions. In this category, fact-checkers were accused of deliberately distorting reality, and blamed for deceiving the public with false fact-checks driven by financial or political interests. This aligns with literature on using “fake news” or disinformation as a weaponized term to delegitimize established or conventional sources (e.g., Egelhofer and Lecheler 2019). Different from misinformation accusations, the fact-checkers and platforms were accused of deliberately misrepresenting reality and were seen as part of a dishonest elite that opposed the people’s truth. Finally, and related to alleged intentional deception, fact-checkers were accused of showing a hostile bias against people’s views. Irrespective of the veracity of the verdict, fact-checkers and platforms were accused of favoring liberal or left-wing viewpoints disproportionally (i.e., by selectively quoting evidence, or by including partisan preferences in the verdict). This category is indicative of a variant of the hostile media bias (e.g., Vallone, Ross, and Lepper 1985): Not only are (mass) media accused of showing a bias against people’s views, independent fact-checking organizations may also be subjected to the same critique. I argue that these more cynical responses to corrective information are more worrisome, as they may go beyond a critical and thoughtful consideration of claims. Hence, they may correspond to the rejection of established information sources, and a turn toward alternative or hyper-partisan media that are more likely to contain disinformation. It should also be noted that the typology of responses reflecting support and resistance on different dimensions may be at odds with the actual checks and balances used by fact-checking platforms.
Both Snopes and PolitiFact offer context on how they select claims, and how they ensure a balanced and objective process of verifying suspicious information (also see Amazeen 2015). Thus, some of the criticism that fact-checkers are biased and not transparent about how they work may resonate with a cynical view among the audience that is not reflected in the supply side of fact-checking. In addition, social media environments may not offer a fruitful context for deliberation, truth-seeking, and the verification intentions of the audience (e.g., Ross Arguedas et al. 2022). Social media platforms may delegitimize journalistic truth-seeking and question the objectivity of the mainstream press, whilst fostering echo chambers and spaces where the exchange of cross-cutting views is discouraged (Ross Arguedas et al. 2022). As such, the credibility and usefulness of corrective information may be undermined by social media platforms that do not offer a constructive context for the interpretation of corrections. The proposed typology may have important implications for stakeholders involved in corrective information, both in the context of content moderation and the embedding of fact-checking information on social media. First, to counter the critique that the selection of claims to check is informed by biases and ideological preferences, journalists, platforms, and fact-checking organizations can make their inclusion criteria more transparent. Although platforms such as Snopes and PolitiFact offer transparency on their selection and verification routines on their own platforms, such information may not be consulted by more cynical users who come across their verdicts on social media. Thus, it is suggested that they make their routines and selection processes more easily accessible, for example, by embedding a short disclaimer or link to their procedures in the fact-check messages presented on social media.
For non-journalistic platforms using community notes, warning messages, or content moderation, it is also important to explicate why certain comments were flagged, or why pre-bunking messages focus on certain issues or political actors. Responding to the critique of incomplete verdicts or unsubstantiated labels, a second recommendation for online corrective information on social media is to explicate the evidence and expert sources that support the verdict of the fact-check or corrective information. Although the fact-checks clearly refer to expert knowledge and evidence in their verdicts, such context may be missing on social media. Although the format of social media and the average reading time of the online audience do not allow for the inclusion of many details, a clearer connection between claims and evidence may circumvent an important source of critique. Explicitly linking to sources or additional information that presents the evidence may be an efficient way to enhance trust. Social media platforms that use community notes or pre-warnings should explicate why certain labels and warnings are used, and what the reasons are to not offer a clear refutation of false claims. For example, responding to the critique that community notes should label disinformation as fake news, platforms should provide more concrete information to users that offers an indication of trustworthiness, which should motivate them to look for additional information themselves. Beyond offering more transparency on the procedures and motivations, it may be worthwhile to forward a clearer recommendation about the lack of facticity or trustworthiness of claims and sources. Although platforms may not be in the position to remove content or to become a “ministry of truth,” verdicts that simply state that context was added to disputed claims may fuel cynicism and frustration among social media users.
It may therefore be more useful to experiment with clearer flags and terms, such as “readers flagged the content as likely to be false or untrustworthy.” Considering that social media users did not always perceive the verdicts of fellow social media users to be trustworthy and neutral, it is also important to supplement community-based verification with more independent fact-checking information and expert-based verdicts. This paper comes with a number of limitations. First, the qualitative sample does not allow for the generalization of findings across all platforms. Although I selected different platforms that have all been associated with misinformation, I also excluded other platforms, such as YouTube. Future research can rely on a broader selection of platforms. Second, I focused on the US only in this paper. Although the US has been regarded as a country less resilient to the threats of misinformation due to its high level of polarization and overall low levels of trust in journalism (Humprecht, Esser, and Van Aelst 2020), the findings of this paper may not be directly transferable to other contexts. The finding that people respond to fact-checks in partisan ways and accuse fact-checkers of having a strong ideological color may, for example, be less pronounced in multiparty settings. Here, it should also be emphasized that the findings do not tell us anything about the ideological profile of media users: Future research needs to establish whether different forms of resistance resonate with people’s ideological bias and the potential discrepancy between the corrective information and their own ideology (also see, e.g., Shin and Thorson 2017). Another limitation is that the study does not systematically differentiate user responses by the strength of the fact-checkers’ verdicts.
Critical corrections may be more effective than confirming checks (Fridkin, Kenney, and Wintersieck 2015), and stronger ratings may be most effective (Jarman 2016), although they may also cause more resistance or avoidance. Future research needs to more systematically investigate the affinity between the strength of the verdict and the responses triggered.

Disclosure Statement

No potential conflict of interest was reported by the author(s).

References

Amazeen, M. A. 2015. “Revisiting the Epistemology of Fact-Checking.” Critical Review 27 (1): 1–22. https://doi.org/10.1080/08913811.2014.993890.
Aruguete, N., I. Bachmann, E. Calvo, S. Valenzuela, and T. Ventura. 2023. “Truth be Told: How “True” and “False” Labels Influence User Engagement with Fact-Checks.” New Media & Society. https://doi.org/10.1177/14614448231193709.
Bélair-Gagnon, V., R. Larsen, L. Graves, and O. Westlund. 2023. “Knowledge Work in Platform Fact-Checking Partnerships.” International Journal of Communication 17: 1169–1189.
Brandtzaeg, P. B., and A. Følstad. 2017. “Trust and Distrust in Online Fact-Checking Services.” Communications of the ACM 60 (9): 65–71. https://doi.org/10.1145/3122803.
Braun, V., and V. Clarke. 2014. Successful Qualitative Research: A Practical Guide for Beginners. London, UK: Sage.
Bridgman, A., E. Merkley, P. J. Loewen, T. Owen, D. Ruths, L. Teichmann, and O. Zhilin. 2020. “The Causes and Consequences of COVID-19 Misperceptions: Understanding the Role of News and Social Media.” Harvard Kennedy School Misinformation Review 1 (3): 1–18.
Charmaz, K. 2006. Constructing Grounded Theory: A Practical Guide Through Qualitative Analysis. London, UK: Sage.
Clayton, K., S. Blair, J. A. Busam, S. Forstner, J. Glance, G. Green, A. Kawata, et al. 2020. “Real Solutions for Fake News? Measuring the Effectiveness of General Warnings and Fact-Check Tags in Reducing Belief in False Stories on Social Media.” Political Behavior 42 (4): 1073–1095. https://doi.org/10.1007/s11109-019-09533-0.
Cotter, K., J. R. DeCook, and S. Kanthawala. 2022. “Fact-Checking the Crisis: COVID-19, Infodemics, and the Platformization of Truth.” Social Media + Society 8 (1). https://doi.org/10.1177/20563051211069048.
Egelhofer, J. L., and S. Lecheler. 2019. “Fake News as a Two-Dimensional Phenomenon: A Framework and Research Agenda.” Annals of the International Communication Association 43 (2): 97–116. https://doi.org/10.1080/23808985.2019.1602782.
Fridkin, K., P. J. Kenney, and A. Wintersieck. 2015. “Liar, Liar, Pants on Fire: How Fact-Checking Influences Citizens’ Reactions to Negative Advertising.” Political Communication 32 (1): 127–151. https://doi.org/10.1080/10584609.2014.914613.
Gillespie, T. 2010. “The Politics of ‘Platforms’.” New Media & Society 12 (3): 347–364. https://doi.org/10.1177/1461444809342738.
Gillespie, T. 2020. “Content Moderation, AI, and the Question of Scale.” Big Data & Society 7 (2). https://doi.org/10.1177/2053951720943234.
Gillespie, T. 2023. “The Fact of Content Moderation; or, Let’s Not Solve the Platforms’ Problems for Them.” Media and Communication 11 (2). https://doi.org/10.17645/mac.v11i2.6610.
Hameleers, M. 2022. “Separating Truth from Lies: Comparing the Effects of News Media Literacy Interventions and Fact-Checkers in Response to Political Misinformation in the US and Netherlands.” Information, Communication & Society 25 (1): 110–126. https://doi.org/10.1080/1369118X.2020.1764603.
Humprecht, E., F. Esser, and P. Van Aelst. 2020. “Resilience to Online Disinformation: A Framework for Cross-National Comparative Research.” The International Journal of Press/Politics 25 (3): 493–516. https://doi.org/10.1177/1940161219900126.
Jarman, J. W. 2016. “Influence of Political Affiliation and Criticism on the Effectiveness of Political Fact-Checking.” Communication Research Reports 33 (1): 9–15. https://doi.org/10.1080/08824096.2015.1117436.
Lewandowsky, S., U. K. Ecker, C. M. Seifert, N. Schwarz, and J. Cook. 2012. “Misinformation and its Correction: Continued Influence and Successful Debiasing.” Psychological Science in the Public Interest 13 (3): 106–131. https://doi.org/10.1177/1529100612451018.
Lyons, B., V. Mérola, J. Reifler, and F. Stoeckel. 2020. “How Politics Shape Views Toward Fact-Checking: Evidence from Six European Countries.” The International Journal of Press/Politics 25 (3): 469–492. https://doi.org/10.1177/1940161220921732.
Margolin, D. B., A. Hannak, and I. Weber. 2018. “Political Fact-Checking on Twitter: When do Corrections Have an Effect?” Political Communication 35 (2): 196–219. https://doi.org/10.1080/10584609.2017.1334018.
Newman, N., R. Fletcher, A. Schulz, S. Andi, and R. K. Nielsen. 2023. “Reuters Institute Digital News Report 2023.” Reuters Institute for the Study of Journalism. https://reutersinstitute.politics.ox.ac.uk/digital-news-report/2023.
Primig, F. 2022. “The Influence of Media Trust and Normative Role Expectations on the Credibility of Fact Checkers.” Journalism Practice: 1–21. https://doi.org/10.1080/17512786.2022.2080102.
Robertson, C. T., R. R. Mourão, and E. Thorson. 2020. “Who Uses Fact-Checking Sites? The Impact of Demographics, Political Antecedents, and Media Use on Fact-Checking Site Awareness, Attitudes, and Behavior.” The International Journal of Press/Politics 25 (2): 217–237. https://doi.org/10.1177/1940161219898055.
Roozenbeek, J., and S. Van der Linden. 2019. “Fake News Game Confers Psychological Resistance Against Online Misinformation.” Palgrave Communications 5 (1): 1–10. https://doi.org/10.1057/s41599-019-0279-9.
Ross Arguedas, A. A., S. Badrinathan, C. Mont’Alverne, B. Toff, R. Fletcher, and R. K. Nielsen. 2022. ““It’s a Battle you are Never Going to Win”: Perspectives from Journalists in Four Countries on How Digital Media Platforms Undermine Trust in News.” Journalism Studies 23 (14): 1821–1840. https://doi.org/10.1080/1461670X.2022.2112908.
Shin, J., and K. Thorson. 2017. “Partisan Selective Sharing: The Biased Diffusion of Fact-Checking Messages on Social Media.” Journal of Communication 67 (2): 233–255. https://doi.org/10.1111/jcom.12284.
Thorson, E. 2016. “Belief Echoes: The Persistent Effects of Corrected Misinformation.” Political Communication 33 (3): 460–480. https://doi.org/10.1080/10584609.2015.1102187.
Uscinski, J. E., and R. W. Butler. 2013. “The Epistemology of Fact Checking.” Critical Review 25 (2): 162–180. https://doi.org/10.1080/08913811.2013.843872.
Vallone, R. P., L. Ross, and M. R. Lepper. 1985. “The Hostile Media Phenomenon: Biased Perception and Perceptions of Media Bias in Coverage of the Beirut Massacre.” Journal of Personality and Social Psychology 49 (3): 577. https://doi.org/10.1037/0022-3514.49.3.577.
Van Aelst, P., J. Strömbäck, T. Aalberg, F. Esser, C. De Vreese, J. Matthes, J. Stanyer, et al. 2017. “Political Communication in a High-Choice Media Environment: A Challenge for Democracy?” Annals of the International Communication Association 41 (1): 3–27. https://doi.org/10.1080/23808985.2017.1288551.
Vinhas, O., and M. Bastos. 2022. “Fact-checking Misinformation: Eight Notes on Consensus Reality.” Journalism Studies 23 (4): 448–468. https://doi.org/10.1080/1461670X.2022.2031259.
Walter, N., J. Cohen, R. L. Holbert, and Y. Morag. 2020. “Fact-checking: A Meta-Analysis of What Works and for Whom.” Political Communication 37 (3): 350–375. https://doi.org/10.1080/10584609.2019.1668894.
Walter, N., S. Edgerly, and C. J. Saucier. 2021. ““Trust, Then Verify”: When and Why People Fact-Check Partisan Information.” International Journal of Communication 15: 21–25. https://doi.org/10.46300/9107.2021.15.50.