Original Paper
Abstract
Background: Rejection of science is a common phenomenon on social media, but little is known about the effects of videos by social media influencers who reject basic principles of science and how other users could effectively counter these false claims.
Objective: This study aimed to explore the effects of an online video on social media that openly rejects facts of medical science on users’ attitudes toward science and scientists, as well as to examine whether different response strategies by other users in the commentary section are effective in altering the impact of the video.
Methods: For this experiment, 470 adults were randomized to 1 of 5 groups. Each group watched either a video rejecting science related to medical research or footage unrelated to science. The science rejection video groups were additionally exposed to critical comments that focused on factual information, personal attacks on the influencer, or a mix of these responses, or they received no comments. We collected data on trust in scientists, attitudes toward scientific research, science support, interest in scientific research, belief in science, conspiracy beliefs, and evaluation of the video, influencer, and user comments presented to the participants.
Results: Participants rated the video and influencer less favorably in the science rejection conditions than in the control group, and opposing user comments with factual information tended to be perceived as most informative and trustworthy, but there were no other differences between the 5 experimental conditions. However, in an additional exploratory analysis comparing all 4 science rejection groups with the control group, we found a decrease in interest in scientific research and an increase in trust in scientists. This can potentially be attributed to reactance to the science rejection content due to the sample consisting predominantly of individuals with a high affinity toward science.
Conclusions: Portrayals of science rejection in a video on social media had minimal impact on the attitudes toward science and scientists among users with a high affinity for scientific research. There was limited exploratory evidence suggesting that such videos might slightly increase trust in scientists while decreasing interest in research, but this finding requires further study for confirmation. Furthermore, whereas fact-based user comments tended to be considered more informative and trustworthy than other comments, none of the tested response strategies significantly changed the video’s impact, highlighting the need for more research, particularly in individuals with little or no affinity toward scientific research who might be particularly vulnerable to the examined video content.
Trial Registration: German Clinical Trial Registry DRKS00033829; https://drks.de/search/en/trial/DRKS00033829/details
doi:10.2196/79917
Introduction
Background
Science is a fundamental component of modern society that influences many aspects of life, including technology, communication, and health care. Understanding scientific research empowers individuals to make better decisions about their health, environment, and daily lives, and helps prevent the development and strengthening of common misconceptions and myths in society. A basic understanding of medical science is a key component of health literacy, which can even become essential for survival in the face of health crises and is consequently crucial for social safety []. Thus, the systematic and unwarranted rejection or denial of the value of science is a major public health problem that can have serious consequences for both individuals and society [].
The rejection of science is differentiated from scientific skepticism in the literature on the basis of its more extreme claims or accusations, such as labeling scientists as “corrupt,” “liars,” or members of a “cartel” []. The rejection of science can concern specific topics, such as COVID-19, vaccinations, climate change, or evolutionary theory [-], but can also extend to science in its entirety as a reliable source of knowledge about the world []. Science rejecters or deniers publicly oppose the robust results of scientific inquiry and spread misinformation, resulting in potentially biased public opinions that can affect important societal decisions []. We use the term “misinformation” as opposed to “disinformation” because we argue that science rejecters, for the most part, are convinced of the validity of the (incorrect) information they share publicly, aiming to counteract the dissemination of information by scientists, but do not intend to deliberately mislead users (or, at least, we cannot be sure about any intentions to purposely cause harm) [,].
Science rejection develops particularly in those areas where science has led to discoveries that threaten individual lifestyles or worldviews or affect the interests of companies []. Particularly high religiosity, conservative political attitudes, and low science literacy have been identified as predictors of science skepticism in previous studies []. However, the use of social media may also play a role in developing tendencies for science rejection. Recent research suggests that social media has become one of the most widely used sources of health-related information, particularly among young people [,]. This is because social media bypasses traditional “gatekeeping” roles typically filled by professional journalists and editors [] and may be a particularly interesting source for those who already mistrust established media organizations and governments []. Without these checks, unverified or misleading information can circulate widely, potentially eroding public trust in scientific consensus and established knowledge [].
Several content analyses have shown that for some specific scientific topics, particularly those related to public health, such as COVID-19 or vaccinations, a considerable amount of information found on social media is incorrect or inaccurate [-]. Accordingly, social media use was found to be associated with belief in COVID-19 misinformation [], hesitancy to vaccinate against COVID-19 [], and low trust in government [] and other institutions []. Furthermore, the use of YouTube (Google) in particular was found to be associated with distrust in science and research []. Concerns have been raised that the combination of inconsistent quality of online health information and distrust of evidence-based sources of information may lead to negative public health outcomes, such as delays in seeking essential medical care or pursuing potentially harmful alternative therapies [].
Among the primary sources of information that social media users encounter on these platforms are so-called social media influencers [,,]. They are considered role models, particularly among young people and those with high mistrust in established media organizations and the government [,]. While there is some evidence from cross-sectional studies showing that distrust in science and research is associated with the use of YouTube [], and experimental designs that assessed the effects of specific misinformation topics [], there are, so far, no experimental studies available that explored the impact of a video by a social media influencer rejecting core principles of science on users’ attitudes toward science or trust in scientists.
Social media influencers are typically studied for their persuasive potential in marketing efforts []. Recently, however, there has been a growing interest in influencers’ content on political and social issues []. People are increasingly turning to social media for information, especially those who are skeptical of mainstream media []. Social media influencers, in particular, can be seen as modern-day opinion leaders. In the traditional understanding of opinion leadership, opinion leaders are those who are better informed about certain issues and therefore give advice on these issues to others []. Social media influencers can now act as parasocial (as there is no actual face-to-face contact between influencers and their followers) opinion leaders for their audiences []. Thus, they use their platform to share their opinions and views, often in a very relatable and approachable way, which makes the audience more likely to adopt the shared views []. Based on the role these social media influencers may take in their audience’s perception, it is plausible to assume that exposure to a video by a social media influencer rejecting core principles of science might have negative effects on users’ attitudes toward science or trust in scientists.
Strategies for Effective Responses to Science Rejection
The prevalence of inaccurate information on science and scientific research on social media and the potential detrimental impact of such portrayals on society raise the question of how to react to videos that feature false or negative portrayals of science or discredit and reject science on social media platforms (eg, YouTube). Besides top-down approaches such as technical or legal countermeasures (ie, content moderation by platform providers, deplatforming of influencers, or integration of content disclosures based on keywords), there are bottom-up approaches that rely on the intervention of other users in the comments section, also known as bystander intervention []. The general rule of thumb in research on crisis communication is that any response to criticism or negative publicity is better than no response at all [,], but, so far, there is only limited evidence on what user comments should look like in order to effectively counter postings with false claims about science on social media.
One source of information that may inform strategies on how to effectively react to science rejection stems from literature on debunking myths and countering misinformation related to specific topics of scientific research. Most of these studies are related to myths and misinformation about COVID-19. A meta-analysis that explored factors underlying effective messages to counter attitudes and beliefs based on misinformation revealed that the effectiveness of messages was mainly determined by the number of details included in the message. Debunking was more successful when the counter message contained elaborate information on the respective issue as opposed to just labeling misinformation as incorrect []. It is, however, important not to repeat the respective myth when informing audiences about misinformation but to emphasize the correct information []. Combining information and social norm modeling with myth-busting has been found to be effective in combating misinformation and distrust related to COVID-19 vaccines []. Similarly, briefly viewing an infographic about science (ie, pictures that highlighted that scientific recommendations change along with newly available evidence) increased trust in science, which in turn reduced incorrect beliefs in COVID-19 misinformation []. In a study on the effectiveness of countering conspiracy theories related to COVID-19 [], a video featuring a lecture on nature and features of conspiracy theories, as well as the debunking of some widely spread conspiracy theories, reduced political radicalization. In contrast, highlighting the lack of credibility of the source of the conspiracy theory prevented the intensification of radicalization but did not reduce previous radicalization [].
Several authors have put together lists of recommendations for communication practitioners and members of the scientific community on how to approach public debates with individuals who are skeptical of research findings or respond to misinformation based on current literature about debunking misinformation [,,]. However, there is little evidence on how to react to individuals who are not just skeptical toward specific research findings or provide misinformation on particular topics, but completely deny or reject core principles of science. Schmid and Betsch [] differentiated two general types or goals of rebuttal messages to science rejection: (1) responding to misinformation by supporting the scientific standpoint with scientific facts (ie, topic rebuttal) and (2) refuting the opposing position by attacking its plausibility and unmasking its illegitimate techniques (ie, technique rebuttal). The authors found that both approaches had positive effects on audiences’ beliefs in scientific evidence, but neither one of the two approaches nor a specific combination of them was overall superior in terms of effectiveness []. However, more research on the effectiveness of specific strategies or approaches to respond adequately to science rejection is needed.
Furthermore, all available studies that may inform effective responses to science rejection have focused on one of the following two approaches: (1) they have either explored what scientists, researchers, or practitioners can do to ideally state their case or persuade the audience in a public debate [,,,] or (2) they have exposed participants in experimental settings to news articles, videos, or infographics that addressed well-known or preinduced myths or misinformation [-]. However, as mentioned above, the majority of incorrect information about scientific topics, or materials that support science rejection, is not disseminated via traditional media or public debates, but via social media platforms such as YouTube [,]. Social media platforms usually have a commenting system allowing users to share comments. These comments represent opinions, queries, appreciations, or displeasure regarding the provided or shared content []. Critical user comments regarding content that rejects science may play an important role in mitigating the detrimental effects of such postings, but there are no studies available testing whether different types of user comments are effective in countering content of social media influencers who openly reject science.
In this study, we conducted a randomized controlled online trial in order to pursue the following two objectives: (1) to explore the effects of an online video on social media where an influencer openly rejects facts of (medical) science on users’ attitudes toward science and scientists and (2) to assess to what extent different response strategies by other users in the commentary section change these effects.
Based on the aforementioned concept of opinion leadership [], it appears plausible to assume that users’ attitudes toward science and scientists might be influenced by a video of a social media influencer openly rejecting science as compared to a video that is unrelated to science. Furthermore, according to the negativity bias, news with negative overtones, such as criticism or accusations from social media users that challenge fundamental components of modern society (eg, science and research), become more salient, are subsequently processed with greater attention, and therefore have a stronger impact on audiences’ arousal, perceptions, attention, and learning than positive contents [,]. Based on these theoretical concepts, we assumed that users’ attitudes toward science and scientists will be impacted by a science rejection video as compared to a video unrelated to science. However, since evidence on the impact of misinformation on social media is mixed [] and there are no previous studies that specifically addressed this particular research question, we assumed that any direction of the impact of the science rejection video might be possible in principle. For example, it is possible that the influencer’s rejection of science would legitimize skepticism toward scientific research and thereby negatively affect attitudes toward science and scientists and decrease support, trust, and interest related to scientific research. At the same time, however, it may also be the case that openly rejecting science in its entirety is viewed as controversial by audiences, triggering the opposite effect, especially among more educated viewers who value science. If this is the case, the messages might increase support for scientists and interest in scientific research. In fact, such a message might evoke psychological reactance among audiences, which is defined as a motivational state to regain personal freedom when individuals experience a threat to their freedom []. 
Dogmatic or extreme messages that challenge personal belief systems can function as such a threat, potentially resulting in a boomerang effect []. Thus, since both positive and negative effects are theoretically plausible for most outcomes, we formulated the following open, nondirectional hypothesis:
- Hypothesis 1: The outcomes (1.1) trust in scientists, (1.2) attitudes toward scientific research, (1.3) science support, (1.4) interest in scientific research, (1.5) belief in science, (1.6) conspiracy beliefs, and (1.7) evaluation of video and influencer will differ between the groups based on the differences in the content of the videos (ie, science rejection content compared to no science content).
The majority of available literature on countering hate speech, debunking of misinformation, and mitigating reputational damage seems to suggest that educating the public about an issue by focusing on providing factual information is the most effective response to science rejection [,,,,]. This is consistent with the misinformation receptivity framework, which highlights the importance of perceived credibility of the source and perceived plausibility of information as characteristics of messages effectively refuting misinformation [,]. Based on this model, we thus propose that factual counterarguments that plausibly refute the arguments presented in a video could be a relevant strategy for addressing science rejection. In addition, rejecting the credibility of the source through personal attacks, such as describing the source as unintelligent or misinformed, could dampen the persuasiveness of what that person has shared. Also, a combination of both strategies should help to refute science rejection content. However, evidence on the effectiveness of refutations is mixed [], and there is no “golden standard” or clear guidelines in the development of rebuttals to science rejection []. Thus, we formulated the following nondirectional hypothesis:
- Hypothesis 2: The outcomes (2.1) trust in scientists, (2.2) attitudes toward scientific research, (2.3) science support, (2.4) interest in scientific research, (2.5) belief in science, (2.6) conspiracy beliefs, and (2.7) evaluation of video, influencer, and comments will differ between the groups based on the differences in the user comments provided as response to the videos (ie, personal attacks on the influencer vs responses providing primarily factual information vs a mix of both strategies vs no response).
Methods
Participants
Between May 21, 2024, and June 27, 2024, we conducted a double-blind randomized controlled trial that followed strict intent-to-treat principles []. Participants were recruited via emails with invitations to participate in an online study on the impact of videos by influencers on social media sent to registered adult members of the German noncommercial online access panel SoSci Survey. The SoSci Survey panel comprises approximately 100,000 active individuals from German-speaking countries who registered for the panel pool via a double opt-in procedure []. The invitation emails were sent to 4000 panelists by the administrators of SoSci Survey and included basic information on the topic and (expected) duration of the study, but did not reveal any details. Researchers and participants remained blinded to the assignment of the experimental conditions until the end of data collection.
Of the 470 participants who responded to items on sociodemographics, 261 (55.5%) identified themselves as women and 199 (42.3%) as men, 7 (1.5%) indicated other genders, and 3 (0.6%) opted not to reveal their gender. The participants’ ages ranged from 18 to 88 years, with a mean age of 49.1 (SD 15.8) years. In terms of the highest completed education for these 470 participants, 92 (19.6%) indicated educational attainment below high school graduation, 136 (28.9%) indicated they had a high school diploma, and 242 (51.5%) had completed college or university. The table below provides an overview of participants’ sociodemographic characteristics in each experimental condition. There were no differences between the 5 experimental conditions in terms of sociodemographics.
| Variables | Control (n=94) | No response (n=92) | Attack response (n=94) | Factual response (n=93) | Mixed response (n=97) | P value |
| --- | --- | --- | --- | --- | --- | --- |
| Age (years), mean (SD) | 51.1 (15) | 49.9 (15) | 47.9 (17) | 47.3 (17.3) | 49.4 (14.8) | .48a |
| Sex, n (%) |  |  |  |  |  |  |
| Female | 48 (51.1) | 53 (57.6) | 48 (51.1) | 53 (57) | 59 (60.8) | .58b |
| Male | 43 (45.7) | 38 (41.3) | 45 (47.9) | 38 (40.9) | 35 (36.1) | .51b |
| Other | 3 (3.2) | 1 (1.1) | 0 (0) | 2 (2.2) | 1 (1) | .45b |
| Undisclosed | 0 (0) | 0 (0) | 1 (1.1) | 0 (0) | 2 (2.1) | .52b |
| Education, n (%) |  |  |  |  |  |  |
| College or university | 54 (57.4) | 56 (60.9) | 43 (45.7) | 44 (47.3) | 45 (46.4) | .11b |
| High school | 25 (26.6) | 19 (20.7) | 34 (36.2) | 27 (29) | 31 (32) | .19b |
| Below high school | 15 (16) | 17 (18.5) | 17 (18.1) | 22 (23.7) | 21 (21.6) | .70b |
aP values calculated by ANOVA assessing group differences (F4,465=0.88).
bP values calculated by Fisher exact test assessing group differences.
Power Analysis
According to a sample size calculation with G*Power (version 3.1.9.7; Franz Faul, Universität Kiel) [], an ANOVA in a design with 5 experimental conditions along with an error probability of 0.05 and a power of 0.80 requires a minimum of 395 participants in order to detect a small to medium treatment effect of f=0.175, which is slightly more conservative than, but comparable to, common benchmarks recommended and used for exploratory psychological research [-].
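The reported G*Power calculation can be approximated with a short Monte Carlo sketch (a hypothetical illustration, not the authors' actual procedure): simulate 5 groups whose means are spaced to yield Cohen f=0.175, run a one-way ANOVA on a total of n=395, and count how often the F statistic exceeds the critical value. The critical value of 2.39 for F(4, ~390) at α=.05 is an approximation assumed here.

```python
import random
import statistics

def anova_f(groups):
    """One-way ANOVA F statistic for a list of groups."""
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    grand = sum(map(sum, groups)) / n_total
    means = [statistics.fmean(g) for g in groups]
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g) for g, m in zip(groups, means))
    return (ss_between / (k - 1)) / (ss_within / (n_total - k))

# Spacing the 5 group means as [-a, -a/2, 0, a/2, a] with within-group SD 1
# yields Cohen f = SD(means)/SD(within) = a * sqrt(0.5), so a = f / sqrt(0.5).
f_target = 0.175
a = f_target / 0.5 ** 0.5
mus = [-a, -a / 2, 0.0, a / 2, a]

random.seed(42)
n_per_group, reps = 79, 2000   # 5 x 79 = 395 participants in total
f_crit = 2.39                  # approximate critical value of F(4, ~390) at alpha = .05
hits = sum(
    anova_f([[random.gauss(mu, 1.0) for _ in range(n_per_group)] for mu in mus]) > f_crit
    for _ in range(reps)
)
power = hits / reps
print(f"simulated power: {power:.2f}")  # should land near the targeted 0.80
```

With more replications, the simulated power converges toward the analytic value underlying the G*Power result.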
Materials
Participants of the intervention groups watched selected footage (length 4 minutes and 9 seconds) of a video entitled “Why science is nothing but a lie – you will change your way of thinking” by a German YouTube influencer []. At the beginning of the footage presented to the participants, the influencer claims that scientific research is based mostly on “false facts” and lies, using medical science, infectious diseases in particular, as an example. He explains that conventional medicine is based on the work of Robert Koch and Louis Pasteur, who proposed far-fetched ideas that were never proven to be true or accurate but were accepted as facts by authorities for the purpose of instilling fear in the population in order to make money and maintain power. The influencer then claims that there is no progress in science because its foundation is based on lies, and that most people, including scientists, do not care about this issue or are too narrow-minded to notice it. The footage concludes with the influencer claiming that everyone who dares to speak up is silenced or suppressed by authorities. At no point in this video does the influencer present any evidence that would support his claims.
After viewing the footage, the participants were directed to a webpage that provided comments on the video by other YouTube users. All comments were fictitious but loosely based on real comments on the actual video, and contained between 45 and 376 characters (including emoticons). Based on response strategies to science denialism in public discussions tested in a previous study [], factors stated in the misinformation receptivity framework [], highlighting the importance of perceived credibility of the source and perceived plausibility of information, and observations of different types of actual responses to the science rejection video on YouTube, we developed 4 different types of responses to the video in this study: responses providing primarily personal attacks on the influencer, responses providing primarily factual information countering the content in the video, a mix of both strategies, and no response. Since the reactions to these types of videos by users on YouTube are usually very diverse (ie, there is never a unanimous approval or rejection of such videos among users), we provided a balanced mixture of supporting and opposing comments to the participants in the intervention groups (except for the “No response” group).
Intervention group 1 (Attack response) read 8 comments posted as a response to the video. In total, 4 of them were supportive and contained statements such as “What a great, honest talk!” “I confirm this as an awakened doctor!” and “Thank you for your valuable work!” In addition, 4 comments criticized the content of the video and attacked the influencer on a personal level. These comments contained statements such as “You have no idea what you’re talking about, but you feel great because you finally have an excuse for your lack of education,” “On what insights do you practice medicine? Probably only on your own experiences!” or “My morals won't allow a small sports influencer who thinks he's got it all figured out to spread his dose of misinformation to the ignorant.”
Intervention group 2 (Factual response) read the same 4 supportive comments as Intervention group 1, but 4 different opposing comments. These comments focused on criticizing and refuting the video content by providing primarily factual information. They contained statements such as “Thanks to Robert Koch, we know the causes of tuberculosis and cholera and can treat them,” “In science, it is always made transparent which methodology was used, what was investigated, etc,” or “Without the progress that humanity has made thanks to science, we would...still die early from infections of small wounds.”
Intervention group 3 (Mixed response) read the same 4 supportive comments and a mix of opposing comments (ie, 2 comments that attacked the influencer and 2 comments with factual information) from intervention groups 1 and 2.
Intervention group 4 (No response) read a statement informing the participants that this video did not have any user comments.

Participants of the control group watched a video (length 4 minutes and 6 seconds) with a style (ie, both videos feature a young male influencer lecturing his audience about a specific topic by talking directly into the camera) and length (ie, approximately 4 minutes) similar to the intervention video, but unrelated to science and research. The video contained footage of an influencer discussing how to properly shine shoes [].
Procedure
Once informed consent was obtained on the first page of the online survey, we collected participants’ sociodemographic data. Afterwards, all participants were randomly assigned to 1 of the 5 experimental conditions via automated urn randomization with an even allocation ratio as outlined by Till et al []. After watching the respective footage and reading the respective response, we used questionnaires to collect data on all outcomes (ie, trust in scientists, attitudes toward scientific research, science support, interest in scientific research, belief in science, conspiracy beliefs, and evaluation of video, influencer, and user comments). Subsequently, all participants completed 1 item for the manipulation check and blinding success, respectively. The online survey concluded with an extensive debriefing and clarification of the misinformation provided in the intervention video.
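The balanced allocation can be illustrated with a deliberately simplified urn sketch (one ball per condition, drawn without replacement, and the urn refilled once empty); the actual automated procedure of Till et al may differ in detail, and the condition labels below simply follow the paper.

```python
import random
from collections import Counter

CONDITIONS = ["Control", "No response", "Attack response", "Factual response", "Mixed response"]

def make_urn(conditions):
    """Fill the urn with one ball per condition and shuffle it."""
    urn = list(conditions)
    random.shuffle(urn)
    return urn

def draw(urn, conditions):
    """Draw without replacement; refill the urn once it is empty."""
    if not urn:
        urn.extend(make_urn(conditions))
    return urn.pop()

random.seed(1)
urn = make_urn(CONDITIONS)
assignments = [draw(urn, CONDITIONS) for _ in range(470)]
counts = Counter(assignments)
print(counts)  # 470 draws = 94 complete urns, so exactly 94 assignments per condition
```

In practice, dropouts after randomization explain why the final group sizes in the paper (92-97) are not perfectly even.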
Measures
Trust in Scientists
We used 5 items taken from the Special Eurobarometer 516 survey [] to assess trust in scientists. Since these items do not comprise a validated scale, we used Cronbach α to assess the internal consistency of the items and excluded items with low coefficients. As a result, we assessed trust in scientists with the following 3 items: “We can no longer trust scientists to tell the truth about controversial scientific and technological issues because they depend more and more on money from industry,” “Scientists only look at very specific issues and do not consider problems from a wider perspective,” and “Nowadays, the problems we are facing are so complex that scientists are no longer able to understand them.” These items were rated on a scale from 1 (strongly disagree) to 5 (strongly agree). We calculated mean scores across all 3 items and reversed the coding in order to have higher scores indicating greater trust in scientists (mean 3.76, SD 0.87; score range 1-5; Cronbach α=0.75).
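The reported reliability coefficient follows the standard Cronbach α formula, α = k/(k−1) × (1 − Σσᵢ²/σ²_total). A minimal sketch with hypothetical item scores (the data below are illustrative, not the study data):

```python
import statistics

def cronbach_alpha(items):
    """Cronbach alpha for `items`, a list of per-item score lists
    (one score per respondent, all lists equally long)."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]
    item_var_sum = sum(statistics.variance(scores) for scores in items)
    return k / (k - 1) * (1 - item_var_sum / statistics.variance(totals))

# Hypothetical ratings of 6 respondents on the 3 trust items (1-5 scale).
item_1 = [2, 4, 3, 5, 1, 4]
item_2 = [1, 5, 3, 4, 2, 4]
item_3 = [2, 4, 2, 5, 1, 5]

alpha = cronbach_alpha([item_1, item_2, item_3])
# As in the paper, items can be reverse coded (6 - x on a 1-5 scale) before
# averaging so that higher scores indicate greater trust.
reversed_item_1 = [6 - x for x in item_1]
print(f"alpha = {alpha:.2f}")
```

Items whose removal raises α (ie, items that correlate poorly with the rest) are candidates for exclusion, which is the logic behind dropping 2 of the original 5 Eurobarometer items.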
Attitudes Toward Scientific Research
We used 9 items taken from the Special Eurobarometer 516 survey [] to assess respondents’ attitudes toward scientific research. Exclusion of 2 items due to low internal consistency resulted in 7 items used for the final statistical analysis (eg, “Science can sort out any problem” or “Young people’s interest in science is essential for our future prosperity”), which were rated on a scale from 1 (strongly disagree) to 5 (strongly agree). We calculated mean scores across all 7 items, with higher scores indicating greater approval of science (mean 3.33, SD 0.53; score range 1-5; Cronbach α=0.53). Note that, because of the scale’s low reliability, which remained low even after omitting individual items with low coefficients, we excluded this measure from the statistical data analysis.
Science Support
Participants’ science support was assessed with the following item, which was loosely based on a measure developed by Rutjens et al []: “In Germany, Austria, and Switzerland, an average of approximately 3% of the respective gross domestic product (GDP) is spent on research. Please indicate to what extent you believe the government should spend more or less than 3% of GDP on scientific research in the future.” Participants provided their ratings on a visual analog scale with a 2-sided scroll bar ranging from 0 (much less) to 100 (much more). Higher scores indicate greater science support (mean 76.34, SD 17.03; score range 0-100).
Interest in Scientific Research
Participants’ interest in scientific research was assessed with the following single item: “In everyday life, we have to deal with many different problems that we are more or less interested in. Please indicate to what extent you are interested in science.” Participants provided their ratings on a visual analog scale with a 2-sided scroll bar ranging from 0 (not at all interested) to 100 (very interested). Higher scores indicate greater interest in scientific research (mean 80.43, SD 17.53; score range 0-100).
Belief in Science
We used a single item taken from the Special Eurobarometer 516 survey [] to assess respondents’ belief in science: “What do you think is the overall influence of science and technology on society?” which was rated on a scale from 1 (very negative) to 4 (very positive). Higher scores indicate greater belief in science (mean 3.19, SD 0.53; score range 1-4).
Conspiracy Beliefs
We used a scale developed by Nocun and Lamberty [] consisting of 5 items (eg, “There are secret organizations that have a major influence on political decisions”) to assess conspiracy beliefs. These items were rated on a scale from 1 (strongly disagree) to 7 (strongly agree). We calculated mean scores across all 5 items, with low scores indicating a lack of critical world view, average scores indicating a healthy amount of criticism, and higher scores indicating greater conspiracy beliefs (mean 2.86, SD 1.33; score range 1-7; Cronbach α=0.80).
Evaluation of Video and Influencer
We asked participants to assess the video they had watched as well as the influencer with the following 4 single items: “How much did you like the video?” “How informative was the video?” “How much did you like the influencer?” and “How trustworthy was the influencer?” All items were rated on a scale from 1 (not at all) to 5 (very much). Higher scores indicated greater approval of the video or influencer (item 1: mean 1.78, SD 1.10; item 2: mean 1.84, SD 1.21; item 3: mean 2.30, SD 1.24; and item 4: mean 1.97, SD 1.32; score range 1-5).
Evaluation of Comments
We asked participants of intervention groups 1-3 (the “No response” group and the control group were not exposed to any comments) to assess the comments that criticized the video with the following 3 single items: “How much did you like the comments?” “How informative were the comments?” and “How trustworthy were the comments?” All items were rated on a scale from 1 (not at all) to 5 (very much). Higher scores indicated greater approval of the comments (item 1: mean 3.13, SD 1.15; item 2: mean 2.68, SD 1.21; and item 3: mean 3.12, SD 1.26; score range 1-5).
Masking
In order to mask the aim of this study and the group assignment, we included 8 items that asked participants to rate the importance of science as well as fashion for their personal life (eg, “I often talk about science/fashion with other people”). All items were rated on a scale from 1 (strongly disagree) to 5 (strongly agree).
Blinding Success
In order to assess the success of blinding, we asked participants to indicate which group they thought they had been allocated to (intervention group, control group, or “don’t know”), as outlined by Kolahi et al [].
Manipulation Check
In order to assess whether the experimental manipulation had been successful, we asked participants to indicate how personally offensive or factual the comments on the video were on a visual analog scale with a 2-sided scroll bar ranging from 0 (personally offensive) to 100 (factual), with higher scores indicating greater perception of the comments being factual (mean 46.09, SD 25.57; score range 0-100).
Data Analysis
For the primary data analysis, the effects of the intervention on all outcomes were compared between the 5 experimental conditions (ie, Attack response, Factual response, Mixed response, No response, and control group) with univariate ANOVAs, followed by Bonferroni-corrected contrast tests to examine individual group differences, testing both hypothesis 1 and hypothesis 2. In an additional exploratory analysis, we compared the outcomes of all intervention groups combined with those of the control group using unpaired 2-sample t tests in order to isolate the impact of the science rejection video. Differences in sociodemographics between the experimental conditions, as well as between survey completers and dropouts, were analyzed with Fisher exact tests, unpaired 2-sample t tests, and univariate ANOVAs.
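The analysis pipeline described above can be sketched in Python with SciPy. The simulated scores, group names, and number of contrasts below are illustrative assumptions, not the study’s actual data or analysis code:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Simulated trust-in-scientists scores for ~90 participants per condition
groups = {name: rng.normal(3.8, 0.8, 90)
          for name in ["Attack", "Factual", "Mixed", "NoResponse", "Control"]}

# Primary analysis: univariate one-way ANOVA across the 5 conditions
f_stat, p_anova = stats.f_oneway(*groups.values())

# Bonferroni-corrected contrasts of each intervention group vs the control
# group: multiply each pairwise P value by the number of comparisons (4)
contrasts = {}
for name in ["Attack", "Factual", "Mixed", "NoResponse"]:
    t, p = stats.ttest_ind(groups[name], groups["Control"])
    contrasts[name] = min(p * 4, 1.0)  # Bonferroni correction

# Exploratory analysis: pooled intervention groups vs control, unpaired t test
pooled = np.concatenate([groups[n] for n in
                         ["Attack", "Factual", "Mixed", "NoResponse"]])
t_exp, p_exp = stats.ttest_ind(pooled, groups["Control"])
```

The Bonferroni step simply rescales each contrast’s P value by the number of planned comparisons; dedicated routines (eg, in statsmodels) implement the same correction.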
Ethical Considerations
This study was reviewed and approved by the Research Ethics Board of the Medical University of Vienna (study protocol 1206/2024). All statistical analyses were conducted within the scope of this ethics approval. All participants provided informed consent electronically before the start of the study. No personally identifiable information was collected, and no compensation was offered for participation. The study was preregistered with the German Clinical Trial Registry as DRKS00033829.
Results
Overview
shows the study flowchart as per the CONSORT (Consolidated Standards of Reporting Trials) guidelines (CONSORT checklist in ). Of the 494 individuals who accessed the survey, 3 (0.6%) did not provide informed consent, and 21 (4.3%) discontinued their participation before randomization was completed. The remaining 470 (95.1%) individuals were randomly allocated to 1 of the 5 experimental conditions (“Attack response” group: n=94; “Factual response” group: n=93; “Mixed response” group: n=97; “No response” group: n=92; and control group: n=94) and included in the statistical analysis of the study. A total of 430 of 470 (91.5%) randomized participants completed the entire survey.
Only 3 respondents were already familiar with the influencer in the science rejection video before the intervention (1 each in the “Attack response,” “Mixed response,” and “No response” groups), and none of the respondents in the control group were familiar with the influencer in the control video. Furthermore, 1 respondent (in the “Attack response” group) had already seen footage from the science rejection video before the study. The overwhelming majority of participants thus had no preexisting relationship with the influencer.

Comparisons With Dropouts
There were no differences between participants who completed the entire study and those who were randomized but dropped out before completion with regard to gender (P=.26), age (t468=–0.67; P=.50), education (P=.17), or group allocation (P=.83). Similarly, there were no differences between randomized participants and dropouts before randomization in terms of gender (P=.88), age (t487=–0.49; P=.62), or education (P=.17).
Blinding Success
A total of 430 participants responded to the item assessing blinding success. Of these, 117 (27.2%) correctly guessed their group allocation, 94 (21.9%) guessed incorrectly, and 219 (50.9%) responded with “don’t know.” With the proportion of “don’t know” responses being relatively high and correct and incorrect guesses being balanced, blinding was considered successful [].
Manipulation Check
There was a difference in the perception of the provided user comments as factual or personally offensive between intervention groups 1-3 (F2,261=7.14, ηp2=.05; P<.001). Participants of the “Factual response” group (mean 48.53, SD 22.68) perceived the comments as significantly more factual than participants of the “Attack response” group (mean 34.89, SD 22.18; Cohen d=0.23, Cohen d score range 0.08-0.38; P<.001). The score of the “Mixed response” group (mean 43.07, SD 26.10) was lower than that of the “Factual response” group (Cohen d=–0.11, Cohen d score range –0.03 to 0.04; P=.22) and higher than that of the “Attack response” group (Cohen d=0.12, Cohen d score range –0.02 to 0.27; P=.13), although these differences did not reach statistical significance. Based on these scores, the experimental manipulation was considered successful. There are no results for the “No response” group and the control group, as these groups did not read any comments.
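For reference, Cohen d for 2 independent groups is the mean difference divided by the pooled SD. A small sketch with hypothetical summary statistics follows; the group sizes and values are made up for illustration, and the paper’s reported effect sizes may derive from a different estimator:

```python
import math

def cohens_d(mean1: float, sd1: float, n1: int,
             mean2: float, sd2: float, n2: int) -> float:
    """Cohen's d using the pooled standard deviation of two independent groups."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2)
                          / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

# Hypothetical summary statistics (not the study's values):
# two groups of 90 with means 50 and 40 and a common SD of 20
d = cohens_d(50.0, 20.0, 90, 40.0, 20.0, 90)
```

With a common SD, the pooled SD equals that SD, so d here is simply the 10-point mean difference divided by 20, ie, a medium effect of 0.5.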
Primary Analysis: Differences Between Experimental Conditions
We tested both hypothesis 1 and hypothesis 2 by comparing all outcomes between the 5 experimental conditions. shows an overview of comparisons of trust in scientists, science support, interest in scientific research, belief in science, conspiracy beliefs, and evaluation of video and influencer between the 5 experimental conditions. An overview of differences in the evaluation of the user comments between groups 1-3 (ie, “Attack response” group, “Factual response” group, and “Mixed response” group) is provided in .
The experimental conditions had no significant effect on trust in scientists (F4,427=1.33, ηp2=.01; P=.26), science support (F4,426=0.15, ηp2=.001; P=.96), interest in scientific research (F4,426=1.78, ηp2=.02; P=.13), belief in science (F4,426=0.71, ηp2=.007; P=.58), and conspiracy beliefs (F4,426=0.30, ηp2=.003; P=.88). Based on these results, we rejected hypothesis 1.1 to hypothesis 1.6 as well as hypothesis 2.1 to hypothesis 2.6.
| Outcomes | Control, mean score (95% CI) | No response, mean score (95% CI) | No response, mean difference (95% CI)a | Attack response, mean score (95% CI) | Attack response, mean difference (95% CI)a | Factual response, mean score (95% CI) | Factual response, mean difference (95% CI)a | Mixed response, mean score (95% CI) | Mixed response, mean difference (95% CI)a |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Trust in scientists (Cronbach α=0.75; n=432) | 3.57 (3.26 to 3.77) | 3.78 (3.59 to 3.97) | 0.21 (–0.17 to 0.59) | 3.78 (3.60 to 3.95) | 0.21 (–0.17 to 0.58) | 3.85 (3.68 to 4.01) | 0.28 (–0.10 to 0.65) | 3.81 (3.62 to 4.01) | 0.25 (–0.13 to 0.62) |
| Science support (n=431) | 76.96 (72.93 to 81.00) | 75.66 (71.62 to 79.71) | –1.30 (–8.77 to 6.17) | 77.18 (74.05 to 80.31) | 0.22 (–7.14 to 7.58) | 76.21 (72.78 to 79.64) | –0.75 (–8.18 to 6.67) | 75.70 (72.06 to 79.34) | –1.26 (–8.56 to 6.04) |
| Interest in scientific research (n=431) | 84.00 (80.56 to 87.44) | 82.30 (79.12 to 85.48) | –1.70 (–9.33 to 5.93) | 79.05 (75.10 to 82.99) | –4.96 (–12.47 to 2.56) | 78.59 (74.65 to 82.53) | –5.41 (–12.99 to 2.17) | 78.47 (74.45 to 82.49) | –5.53 (–12.99 to 1.93) |
| Belief in science (n=431) | 3.14 (3.03 to 3.26) | 3.24 (3.14 to 3.35) | 0.10 (–0.13 to 0.33) | 3.17 (3.06 to 3.28) | 0.03 (–0.20 to 0.26) | 3.24 (3.12 to 3.35) | 0.09 (–0.14 to 0.32) | 3.14 (3.02 to 3.26) | 0.00 (–0.23 to 0.23) |
| Conspiracy beliefs (Cronbach α=0.80; n=431) | 2.98 (2.72 to 3.24) | 2.90 (2.59 to 3.21) | –0.08 (–0.66 to 0.51) | 2.84 (2.59 to 3.10) | –0.13 (–0.71 to 0.44) | 2.76 (2.49 to 3.04) | –0.21 (–0.79 to 0.37) | 2.83 (2.52 to 3.15) | –0.14 (–0.71 to 0.43) |
| Evaluation of video and influencer (n=432) | | | | | | | | | |
| How much did you like the video? | 3.02 (2.78 to 3.27) | 1.61 (1.41 to 1.82) | –1.41 (–1.81 to –1.01)b | 1.47 (1.31 to 1.62) | –1.56 (–1.95 to –1.17)b | 1.42 (1.23 to 1.61) | –1.61 (–2.00 to –1.21)b | 1.42 (1.24 to 1.59) | –1.61 (–2.00 to –1.22)b |
| How informative was the video? | 3.49 (3.26 to 3.72) | 1.54 (1.31 to 1.78) | –1.95 (–2.34 to –1.56)b | 1.40 (1.25 to 1.55) | –2.09 (–2.48 to –1.71)b | 1.51 (1.33 to 1.69) | –1.98 (–2.36 to –1.59)b | 1.35 (1.19 to 1.51) | –2.14 (–2.52 to –1.75)b |
| How much did you like the influencer? | 3.69 (3.46 to 3.92) | 2.14 (1.90 to 2.39) | –1.55 (–2.00 to –1.09)b | 1.98 (1.77 to 2.19) | –1.71 (–2.16 to –1.27)b | 1.97 (1.74 to 2.19) | –1.73 (–2.17 to –1.28)b | 1.79 (1.59 to 2.00) | –1.90 (–2.34 to –1.46)b |
| How trustworthy was the influencer? | 3.98 (3.76 to 4.19) | 1.64 (1.44 to 1.84) | –2.34 (–2.72 to –1.96)b | 1.45 (1.29 to 1.61) | –2.52 (–2.90 to –2.15)b | 1.47 (1.27 to 1.66) | –2.51 (–2.89 to –2.13)b | 1.37 (1.22 to 1.53) | –2.60 (–2.98 to –2.23)b |
aComparison of means with the control group using Bonferroni-corrected contrast tests.
bSignificant mean differences (P<.05).
| Commentsa | Attack response, mean score (95% CI) | Factual response, mean score (95% CI) | Factual response, mean difference (95% CI)b | Mixed response, mean score (95% CI) | Mixed response, mean difference (95% CI)b | Mixed response, mean difference (95% CI)c |
| --- | --- | --- | --- | --- | --- | --- |
| How much did you like the comments? (n=265) | 2.90 (2.65 to 3.15) | 3.21 (2.98 to 3.44) | 0.31 (–0.11 to 0.73) | 3.27 (3.03 to 3.52) | 0.38 (–0.03 to 0.79) | 0.07 (–0.35 to 0.48) |
| How informative were the comments? (n=265) | 2.40 (2.16 to 2.64) | 2.85 (2.59 to 3.10) | 0.45 (0.01 to 0.89)d | 2.80 (2.54 to 3.06) | 0.40 (–0.03 to 0.84) | –0.05 (–0.48 to 0.39) |
| How trustworthy were the comments? (n=265) | 2.78 (2.51 to 3.06) | 3.28 (3.02 to 3.54) | 0.50 (0.04 to 0.95)d | 3.30 (3.04 to 3.55) | 0.51 (0.07 to 0.96)d | 0.02 (–0.43 to 0.47) |
aThere are no results for the “No response” group and the control group, as these groups did not read any comments.
bComparison of means with the “Attack response” group using Bonferroni-corrected contrast tests.
cComparison of means with the “Factual response” group using Bonferroni-corrected contrast tests.
dSignificant mean differences (P<.05).
In contrast, the experimental conditions had a significant effect on how much the participants liked the video (F4,427=49.40, ηp2=.32; P<.001), how informative they found the video (F4,427=88.32, ηp2=.45; P<.001), how much they liked the influencer (F4,427=48.19, ηp2=.31; P<.001), and how trustworthy they found the influencer (F4,427=138.43, ηp2=.57; P<.001). Bonferroni-corrected contrast tests indicated that scores for liking the video (Cohen d=0.46-0.54), finding the video informative (Cohen d=0.65-0.72), liking the influencer (Cohen d=0.45-0.56), and finding the influencer trustworthy (Cohen d=0.79-0.90) were higher in the control group than in all intervention groups, whereas there were no differences between the individual intervention groups (ie, groups 1-4). These results supported hypothesis 1.7, as the video and influencer were rated more favorably across all domains in the control group than in all intervention groups.
Furthermore, in terms of the assessment of the comments, there were differences between intervention groups 1-3 in participants’ ratings of how informative (F2,262=3.78, ηp2=.03; P=.02) and how trustworthy (F2,262=4.86, ηp2=.04; P=.01) the comments were. Bonferroni-corrected contrast tests indicated that participants of the “Factual response” group rated the comments as more informative than participants of the “Attack response” group (Cohen d=0.15, Cohen d score range 0.004-0.30; P=.04). Furthermore, participants of the “Factual response” group (Cohen d=0.16, Cohen d score range 0.01-0.31; P=.03) and the “Mixed response” group (Cohen d=0.17, Cohen d score range 0.02-0.32; P=.02) rated the comments as more trustworthy than those of the “Attack response” group. There were no group differences in liking the comments (F2,262=2.77, ηp2=.02; P=.07). These results partly supported hypothesis 2.7, as we found some differences in the evaluation of the user comments between individual intervention groups.
Exploratory Analysis: The Impact of the Science Rejection Video Compared to the Control Group
Table S1 in lists the randomization checks, comparing the pooled intervention groups with the control group. There were no differences between these 2 conditions in terms of sociodemographics. Table S2 in provides an overview of binary comparisons of participants’ outcomes in all groups that watched the science rejection video (ie, groups 1-4 combined) versus the control group. Trust in scientists was higher overall in the intervention groups than in the control group, whereas interest in scientific research was lower in the intervention groups than in the control group. There were, however, no differences in terms of science support, belief in science, or conspiracy beliefs. Finally, consistent with the results of the primary analysis, the assessment of the video and the influencer was less favorable in the intervention groups than in the control group for all items.
Discussion
Principal Findings
This study assessed the impact of a YouTube video by a social media influencer who openly rejects science on users’ attitudes toward science and scientists, as well as the extent to which different response strategies applied in user comments changed these effects. When we compared all 5 experimental conditions in the primary analysis, we found that participants rated the video and the influencer less favorably in the science rejection conditions than in the control group. However, the analysis revealed no differences between the experimental conditions in any of the other outcomes, suggesting that participants in this study were largely not negatively affected by the science rejection video. This offers a nuanced view of the opinion leadership of influencers and aligns with a recent experimental study on COVID-19–related misinformation by an influencer, which found that such misinformation did not consistently lead to misunderstandings about SARS-CoV-2 among audiences [].
Furthermore, the different responses to the science rejection video (ie, factual, attack, or mixed) did not seem to impact participants’ trust in scientists, science support, interest in research, belief in science, conspiracy beliefs, or evaluations of the video and the influencer. A potential explanation for the lack of differential effects of the experimental conditions might be that the views expressed in the science rejection video were so extreme that participants considered the influencer’s specific message entirely irrelevant to their own views. Similarly, in light of this strong divergence from their own opinions, it might have made little difference what others wrote about the video content in the comments, as strongly entrenched beliefs can prevail even when confronted with nonsupportive views []. Consistent with cognitive dissonance theory [], it might also be the case that the content of the science rejection video was perceived as attitude-challenging information by the majority of participants in this sample with relatively high education levels, inducing dissonance and thus limiting the impact of the video []. Furthermore, a single exposure to a short video might not have been sufficient to change opinions and attitudes toward fundamental beliefs such as the belief in the validity of science. It has been noted in the literature that individual core beliefs, particularly those related to politics, health, and social issues, are often too robust to change after a brief media intervention [,].
Despite this, factual responses tended to be perceived as more informative and trustworthy than personal attacks on the influencer in this study, suggesting that supporting the scientific standpoint with factual information [] that refutes the plausibility of the information the influencer has shared [] may have been perceived more favorably by the respondents. This aligns with research emphasizing the importance of educating the public with factual responses about science rejection [,,,,]. While none of the response strategies significantly altered how the video was perceived by participants in this study, previous work advocates for constructive, transparent, and persistent engagement in public debates on science [,].
When we compared the outcomes of all intervention groups combined with the control group in an additional exploratory analysis, we found that, contrary to our expectations, participants who were exposed to the science-critical video reported greater trust in scientists than participants of the control group. In accordance with the principle of cognitive consistency, individuals are generally motivated to maintain coherence in their beliefs and attitudes within a stable psychological framework []. It is therefore possible that exposure to content rejecting science resulted in a boomerang effect [] in the current sample, which consisted mainly of individuals with a high affinity for scientific research (ie, high education and high interest in scientific research), and increased participants’ trust in scientists. The high affinity toward scientific research in the current sample is consistent with the recruitment method via the noncommercial online panel SoSci Survey [], which primarily attracts individuals who regularly participate in online studies without monetary incentives. This high affinity could also have led to reactance as a reaction to the science-critical video. Previous research has shown that individuals tend to defend personally held beliefs when confronted with opposing views, sometimes resulting in a reinforcement of those beliefs []. We can only speculate that this may, at least partially, explain the observed pattern of increased trust in scientists following exposure to the science rejection video. It is, however, unlikely that individuals with a low or average affinity toward scientific research would show a similarly positive reaction as the participants in our sample.
Instead, a general population sample, with a more diverse range of attitudes and affinities, would likely exhibit more varied responses, including indifference or even agreement with the critique, rather than the observed increase in trust toward scientists. Whereas individuals who openly reject science might be difficult to reach for participation in scientific research, the general population likely includes a sufficient number of individuals with moderate or low levels of science skepticism who may be willing to participate in follow-up studies.
Of note, participants of this study who were exposed to the science-critical video also reported lower interest in scientific research than those in the control group, despite their increased trust in scientists. A possible explanation for this finding might be that individuals with a high affinity for scientific research react with exhaustion or fatigue when confronted (again) with doubts and debates about the validity of scientific research. This may have resulted in participants distancing themselves from the topic debated in the video and thus reporting lower interest in scientific research.
Limitations
In addition to the lack of representativeness of the sample discussed above, this study has some additional limitations. First, several measures were single items or had low internal consistency. For trust in scientists, internal consistency improved to an acceptable level (ie, α≥.70) when items with low coefficients were excluded. However, the measure of attitudes toward scientific research was omitted from the statistical analysis due to its lack of internal consistency. Additionally, our design tested only a limited number of response strategies; other strategies may have been more effective in mitigating the effects of the science-critical video. Furthermore, we tested the impact of a single exposure to a science rejection video by an influencer who was unknown to most of our participants. Social media influencers and their audience are often engaged in a more long-term mediated relationship, involving exposure to different messages over a prolonged period []. Thus, any inferences about the impact of science-critical influencers in different contexts should be made with caution. A multiwave examination or an investigation of a well-known influencer was beyond the scope of this study but should be addressed in future work. Another limitation of this study is the use of exploratory, nondirectional hypotheses. While this approach allowed for an open exploration of potential effects and was consistent with the preregistration of the study, it reflects the current lack of specific theories and theoretical concepts in this particular area of research. Future research could benefit from more theoretically driven, directional hypotheses to increase specificity and strengthen the interpretability of the findings. There may also be several factors that could help to explain or identify individuals for whom science rejection videos are particularly influential but that were not assessed in this study.
These potentially relevant factors should be included in future research. Another limitation of the study was that no control group was included that received no stimulus (ie, no video) at all. Furthermore, the 2 videos were not fully comparable: they differed not only in content (ie, rejection of science vs unrelated to science) but also in influencer identity, filming environment, and production quality (eg, lighting and sound). These differences may have introduced unintended confounding, limiting the internal validity of our findings. Nevertheless, a relevant aspect of our design is that we provided participants with a control stimulus specifically chosen to have no connection to science rejection in the influencer content and to offer as neutral a setting as possible. Still, future research is needed to address these limitations by using videos from the same influencer, filmed under similar conditions, and by including another control group that receives no stimulus at all, in order to isolate the specific effect of the video content. Finally, the experimental conditions with user comments contained not only opposing comments about the video but also an equal number of supporting comments. This approach may have increased the authenticity of the response strategies, but it may also have mitigated or even eliminated the impact of the negative user comments. Future research should investigate the effect of the balance of comments in order to gain insights into whether a perceived majority opinion plays a role in the effectiveness of user comments.
Conclusions
Portrayals of science rejection by an influencer in YouTube videos had relatively little effect on the attitudes toward science and scientists among users with a high affinity for scientific research. There was, however, limited evidence that science rejection videos could simultaneously increase trust in scientists and reduce interest in scientific research in this population; further research is needed to replicate and verify this finding. Furthermore, user comments that predominantly used factual information to critically address and challenge the content of the video tended to be viewed as more informative and trustworthy than other comments; however, the effects of the science rejection video did not vary with different types of user comments. The ability to alter the impact of the science rejection video was limited for all response strategies tested in the current study, highlighting the need for further research, particularly among individuals with little or no affinity toward scientific research. Future research needs to focus on the development of effective approaches for science communication and for social media users to counter false claims by science rejecters who spread misinformation.
Acknowledgments
No external financial support or grants were received from any public, commercial, or not-for-profit entities for the research, authorship, or publication of this article.
Data Availability
Data are available from the corresponding author upon reasonable request.
Conflicts of Interest
None declared.
CONSORT 2025 checklist.
PDF File (Adobe PDF File), 136 KB

Overview of the results of the exploratory analysis.
DOCX File, 89 KB

References
- Romanova A, Rubinelli S, Diviani N. Improving health and scientific literacy in disadvantaged groups: A scoping review of interventions. Patient Educ Couns. 2024;122:108168. [FREE Full text] [CrossRef] [Medline]
- Rutjens BT, Sengupta N, der Lee RV, et al. Science skepticism across 24 countries. Social Psychological and Personality Science. 2021;13(1):102-117. [CrossRef]
- Lewandowsky S, Mann ME, Brown NJL, Friedman H. Science and the public: debate, denial, and skepticism. J Soc Polit Psych. 2016;4(2):537-553. [CrossRef]
- Eberl J, Lebernegg N. The pandemic through the social media lens. Medien J. 2022;45(3):5-15. [CrossRef]
- Harff D, Bollen C, Schmuck D. Responses to social media influencers’ misinformation about COVID-19: a pre-registered multiple-exposure experiment. Media Psychol. 2022;25(6):831-850. [CrossRef]
- Rutjens BT, Sutton RM, van der Lee R. Not all skepticism is equal: exploring the ideological antecedents of science acceptance and rejection. Pers Soc Psychol Bull. 2018;44(3):384-405. [FREE Full text] [CrossRef] [Medline]
- Scheitle CP, Corcoran KE. COVID-19 skepticism in relation to other forms of science skepticism. Socius. 2021;7. [CrossRef]
- Schmuck D, Harff D. Popular among distrustful youth? Social media influencers' communication about COVID-19 and young people's risk perceptions and vaccination intentions. Health Commun. 2024;39(12):2730-2743. [FREE Full text] [CrossRef] [Medline]
- Farias M, Newheiser A, Kahane G, de Toledo Z. Scientific faith: belief in science increases in the face of stress and existential anxiety. J Exp Soc Psychol. 2013;49(6):1210-1213. [FREE Full text] [CrossRef] [Medline]
- Schmid P, Betsch C. Effective strategies for rebutting science denialism in public discussions. Nat Hum Behav. 2019;3(9):931-939. [CrossRef] [Medline]
- Guess A, Lyons B. Misinformation, disinformation, and online propaganda. In: Persily N, Tucker JA, editors. Social Media and Democracy. SSRC Anxieties of Democracy. Cambridge, United Kingdom. Cambridge University Press; 2020:10-33.
- Humprecht E, Esser F, Van Aelst P. Resilience to online disinformation: a framework for cross-national comparative research. The International Journal of Press/Politics. 2020;25(3):493-516. [CrossRef]
- Chen J, Wang Y. Social media use for health purposes: systematic review. J Med Internet Res. 2021;23(5):e17917. [FREE Full text] [CrossRef] [Medline]
- Chou WS, Gaysynsky A, Trivedi N, Vanderpool RC. Using social media for health: national data from HINTS 2019. J Health Commun. 2021;26(3):184-193. [CrossRef] [Medline]
- Lewandowsky S, Ecker UKH, Seifert CM, Schwarz N, Cook J. Misinformation and its correction: continued influence and successful debiasing. Psychol Sci Public Interest. 2012;13(3):106-131. [CrossRef] [Medline]
- Carrion-Alvarez D, Tijerina-Salina PX. Fake news in COVID-19: a perspective. Health Promot Perspect. 2020;10(4):290-291. [FREE Full text] [CrossRef] [Medline]
- Lahouati M, De Coucy A, Sarlangue J, Cazanave C. Spread of vaccine hesitancy in France: what about YouTube™? Vaccine. 2020;38(36):5779-5782. [CrossRef] [Medline]
- Schuetz SW, Sykes TA, Venkatesh V. Combating COVID-19 fake news on social media through fact checking: antecedents and consequences. Eur J Info Syst. 2021;30(4):376-388. [CrossRef]
- Ahmed S, Rasul ME. Social media news use and COVID-19 misinformation engagement: survey study. J Med Internet Res. 2022;24(9):e38944. [FREE Full text] [CrossRef] [Medline]
- Till B, Niederkrotenthaler T. Predictors of vaccine hesitancy during the COVID-19 pandemic in Austria: a population-based cross-sectional study. Wien Klin Wochenschr. 2022;134(23-24):822-827. [FREE Full text] [CrossRef] [Medline]
- Klein E, Robison J. Like, post, and distrust? How social media use affects trust in government. Political Communication. 2019;37(1):46-64. [CrossRef]
- Yigzaw KY, Wynn R, Marco-Ruiz L, et al. The association between health information seeking on the internet and physician visits (the seventh Tromsø study - part 4): population-based questionnaire study. J Med Internet Res. 2020;22(3):e13120. [FREE Full text] [CrossRef] [Medline]
- Newman N, Fletcher R, Eddy K, Robertson C, Nielsen R. Reuters institute digital news report 2023 (12th ed.). Reuters Institute for the Study of Journalism. 2024. URL: https://reutersinstitute.politics.ox.ac.uk/sites/default/files/2023-06/Digital_News_Report_2023.pdf [accessed 2024-11-26]
- Zimmermann D, Noll C, Gräßer L, et al. Influencers on YouTube: a quantitative study on young people’s use and perception of videos about political and societal topics. Curr Psychol. 2020;41(10):6808-6824. [CrossRef]
- Hudders L, De JS, De VM. The commercialization of social media stars: a literature review and conceptual framework on the strategic use of social media influencers. Int J Advert. 2021;40(3):24-67. [CrossRef]
- Riedl MJ, Lukito J, Woolley SC. Political influencers on social media: an introduction. Social Media + Society. 2023;9(2):20563051231177938. [CrossRef]
- Katz E, Lazarsfeld P. Personal Influence: The Part Played by People in the Flow of Mass Communications. New York, NY. Free Press; 1955.
- Stehr P, Rössler P, Leissner L, Schönhardt F. Parasocial opinion leadership media personalities? influence within parasocial relations: theoretical conceptualization and preliminary results. Int J Commun. 2015;9:982-1001.
- Harff D, Schmuck D. Influencers as empowering agents? following political influencers, internal political efficacy and participation among youth. Political Communication. 2023;40(2):147-172. [CrossRef]
- Jia Y, Schumann S. Tackling hate speech online: the effect of counter-speech on subsequent bystander behavioral intentions. Cyberpsychology. 2025;19(1):1-4. [CrossRef]
- Henard D. Negative publicity: what companies need to know about public reactions. Public Relations Q. 2002;47(4):8-12.
- Chan MS, Jones CR, Hall Jamieson K, Albarracín D. Debunking: a meta-analysis of the psychological efficacy of messages countering misinformation. Psychol Sci. 2017;28(11):1531-1546. [FREE Full text] [CrossRef] [Medline]
- Peter C, Koch T. When debunking scientific myths fails (and when it does not): the backfire effect in the context of journalistic coverage and immediate judgments as prevention strategy. Sci Commun. 2015;38(1):3-25. [CrossRef]
- Yousuf H, van der Linden S, Bredius L, et al. A media intervention applying debunking versus non-debunking content to combat vaccine misinformation in elderly in the Netherlands: a digital randomised trial. EClinicalMedicine. 2021;35:100881. [FREE Full text] [CrossRef] [Medline]
- Agley J, Xiao Y, Thompson EE, Chen X, Golzarri-Arroyo L. Intervening on trust in science to reduce belief in COVID-19 misinformation and increase COVID-19 preventive behavioral intentions: randomized controlled trial. J Med Internet Res. 2021;23(10):e32425. [FREE Full text] [CrossRef] [Medline]
- Liu T, Guan T, Yuan R. Can debunked conspiracy theories change radicalized views? Evidence from racial prejudice and anti-China sentiment amid the COVID-19 pandemic. J Chin Polit Sci. 2022:1-33. [FREE Full text] [CrossRef] [Medline]
- Van Rensburg W, Head BW. Climate change scepticism: reconsidering how to respond to core criticisms of climate science and policy. Sage Open. 2017;7(4). [CrossRef]
- Kavitha K, Shetty A, Abreo B, D’Souza A, Kondana A. Analysis and classification of user comments on YouTube videos. Procedia Computer Science. 2020;177:593-598. [CrossRef]
- Van der Meer TGLA, Hameleers M, Kroon AC. Crafting our own biased media diets: the effects of confirmation, source, and negativity bias on selective attendance to online news. Mass Communication and Society. 2020;23(6):937-967. [CrossRef]
- van der Meer TGLA, Hameleers M. I knew it, the world is falling apart! Combatting a confirmatory negativity bias in audiences’ news selection through news media literacy interventions. Digital Journalism. 2022;10(3):473-492. [CrossRef]
- Brehm J. A Theory of Psychological Reactance. New York, NY. Academic Press; 1966.
- Bensley LS, Wu R. The role of psychological reactance in drinking following alcohol prevention messages. J Appl Soc Psychol. 2006;21(13):1111-1124. [CrossRef]
- Kim Y, Lim H. Debunking misinformation in times of crisis: exploring misinformation correction strategies for effective internal crisis communication. J Contingencies Crisis Manag. 2022;31(3):406-420. [CrossRef]
- Amazeen M. The misinformation recognition and response model: an emerging theoretical framework for investigating antecedents to and consequences of misinformation recognition. Hum Commun Res. 2024;50(2):29. [CrossRef]
- Amazeen MA, Krishna A. Refuting misinformation: examining theoretical underpinnings of refutational interventions. Curr Opin Psychol. 2024;56:101774. [CrossRef] [Medline]
- Gupta S. Intention-to-treat concept: a review. Perspect Clin Res. 2011;2(3):109-112. [FREE Full text] [CrossRef] [Medline]
- Leiner DJ. Our research’s breadth lives on convenience samples. A case study of the online respondent pool “SoSci Panel”. Stud Commun Media. 2016;5(4):367-396.
- Faul F, Erdfelder E, Lang A, Buchner A. G*Power 3: a flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behav Res Methods. 2007;39(2):175-191. [CrossRef] [Medline]
- Collins E, Watt R. Using and understanding power in psychological research: a survey study. Collabra: Psychol. 2021;7(1):28250. [CrossRef]
- Brysbaert M. How many participants do we have to include in properly powered experiments? A tutorial of power analysis with reference tables. J Cogn. 2019;2(1):16. [FREE Full text] [CrossRef] [Medline]
- Brysbaert M, Stevens M. Power analysis and effect size in mixed effects models: a tutorial. J Cogn. 2018;1(1):9. [FREE Full text] [CrossRef] [Medline]
- Richard FD, Bond CF, Stokes-Zoota JJ. One hundred years of social psychology quantitatively described. Rev Gen Psychol. 2003;7(4):331-363. [CrossRef]
- Lakens D. Sample size justification. Collabra Psychol. 2022;8(1):33267. [CrossRef]
- Why science is nothing but a lie - you will change your thinking [Video in German]. YouTube. 2021. URL: https://www.youtube.com/watch?v=OHt_53djNeE [accessed 2024-01-22]
- ShoepassionTV. Smooth leather shoe care made easy [Video in German]. YouTube. 2016. URL: https://www.youtube.com/watch?v=pLJBtmorOQc [accessed 2024-01-22]
- Till B, Tran US, Niederkrotenthaler T. The impact of educative news articles about suicide prevention: a randomized controlled trial. Health Commun. 2021;36(14):2022-2029. [CrossRef] [Medline]
- European citizens’ knowledge and attitudes towards science and technology. European Union. 2021. URL: https://europa.eu/eurobarometer/surveys/detail/2237 [accessed 2024-11-26]
- Nocun K, Lamberty P. Fake Facts: How Conspiracy Theories Shape Our Thinking [Book in German]. Berlin, Germany. Quadriga; 2020.
- Kolahi J, Bang H, Park J. Towards a proposal for assessment of blinding success in clinical trials: up-to-date review. Community Dent Oral Epidemiol. 2009;37(6):477-484. [FREE Full text] [CrossRef] [Medline]
- Lord CG, Ross L, Lepper MR. Biased assimilation and attitude polarization: the effects of prior theories on subsequently considered evidence. Journal of Personality and Social Psychology. 1979;37(11):2098-2109. [CrossRef]
- Festinger L. Cognitive dissonance. Sci Am. 1962;207:93-102. [CrossRef] [Medline]
- Metzger MJ, Hartsell EH, Flanagin AJ. Cognitive dissonance or credibility? A comparison of two theoretical explanations for selective exposure to partisan news. Communication Research. 2015;47(1):3-28. [CrossRef]
- Erber M, Hodges S, Wilson T. Attitude strength, attitude stability, and the effects of analyzing reasons. In: Attitude Strength: Antecedents and Consequences. Mahwah, NJ. Erlbaum; 1995:433-454.
- Till B, Niederkrotenthaler T, Herberth A, Vitouch P, Sonneck G. Suicide in films: the impact of suicide portrayals on nonsuicidal viewers' well-being and the effectiveness of censorship. Suicide Life Threat Behav. 2010;40(4):319-327. [CrossRef] [Medline]
- Zhao X, Fink EL. Proattitudinal versus counterattitudinal messages: message discrepancy, reactance, and the boomerang effect. Communication Monographs. 2020;88(3):286-305. [CrossRef]
- Rosenberg BD, Siegel JT. A 50-year review of psychological reactance theory: do not read this article. Motivation Science. 2018;4(4):281-300. [CrossRef]
- Balaban DC, Szambolics J, Chirică M. Parasocial relations and social media influencers' persuasive power. Exploring the moderating role of product involvement. Acta Psychol (Amst). 2022;230:103731. [FREE Full text] [CrossRef] [Medline]
Abbreviations
CONSORT: Consolidated Standards of Reporting Trials
GDP: gross domestic product
Edited by A Mavragani; submitted 01.Jul.2025; peer-reviewed by K Kishore, A King, D Harff; comments to author 16.Jul.2025; revised version received 10.Sep.2025; accepted 02.Oct.2025; published 17.Nov.2025.
Copyright©Benedikt Till, Thomas Niederkrotenthaler, Brigitte Naderer. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 17.Nov.2025.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research (ISSN 1438-8871), is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.