<?xml version="1.0" encoding="UTF-8"?><!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.0 20040830//EN" "journalpublishing.dtd"><article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" dtd-version="2.0" xml:lang="en" article-type="news"><front><journal-meta><journal-id journal-id-type="nlm-ta">J Med Internet Res</journal-id><journal-id journal-id-type="publisher-id">jmir</journal-id><journal-id journal-id-type="index">1</journal-id><journal-title>Journal of Medical Internet Research</journal-title><abbrev-journal-title>J Med Internet Res</abbrev-journal-title><issn pub-type="epub">1438-8871</issn><publisher><publisher-name>JMIR Publications</publisher-name><publisher-loc>Toronto, Canada</publisher-loc></publisher></journal-meta><article-meta><article-id pub-id-type="publisher-id">v28i1e95730</article-id><article-id pub-id-type="doi">10.2196/95730</article-id><article-categories><subj-group subj-group-type="heading"><subject>News and Perspectives</subject></subj-group></article-categories><title-group><article-title>As Social Media Scales Back Fact-Checking, Can Technologies Fill the Gap?</article-title></title-group><contrib-group><contrib contrib-type="author"><name name-style="western"><surname>Glauser</surname><given-names>Wendy</given-names></name><role>JMIR Correspondent</role></contrib></contrib-group><contrib-group><contrib contrib-type="editor"><name name-style="western"><surname>Clegg</surname><given-names>Kayleigh-Ann</given-names></name></contrib></contrib-group><pub-date pub-type="collection"><year>2026</year></pub-date><pub-date pub-type="epub"><day>6</day><month>4</month><year>2026</year></pub-date><volume>28</volume><elocation-id>e95730</elocation-id><history><date date-type="received"><day>19</day><month>03</month><year>2026</year></date><date date-type="accepted"><day>19</day><month>03</month><year>2026</year></date></history><copyright-statement>&#x00A9; JMIR Publications. 
Originally published in the Journal of Medical Internet Research (<ext-link ext-link-type="uri" xlink:href="https://www.jmir.org">https://www.jmir.org</ext-link>), 6.4.2026. </copyright-statement><copyright-year>2026</copyright-year><license license-type="open-access" xlink:href="https://creativecommons.org/licenses/by/4.0/"><p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (<ext-link ext-link-type="uri" xlink:href="https://creativecommons.org/licenses/by/4.0/">https://creativecommons.org/licenses/by/4.0/</ext-link>), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research (ISSN 1438-8871), is properly cited. The complete bibliographic information, a link to the original publication on <ext-link ext-link-type="uri" xlink:href="https://www.jmir.org/">https://www.jmir.org/</ext-link>, as well as this copyright and license information must be included.</p></license><self-uri xlink:type="simple" xlink:href="https://www.jmir.org/2026/1/e95730"/><kwd-group><kwd>misinformation</kwd><kwd>social media</kwd><kwd>artificial intelligence</kwd><kwd>fact-checking</kwd><kwd>algorithms</kwd><kwd>public health</kwd></kwd-group></article-meta></front><body><boxed-text id="box1"><p><bold>Key Takeaways</bold></p><list list-type="bullet"><list-item><p>Algorithms, artificial intelligence fact-checking, and social media ads can reduce the spread of misinformation at scale.</p></list-item><list-item><p>Low-cost &#x201C;content-neutral&#x201D; interventions reminding users to think before sharing can help prevent misinformation spread.</p></list-item></list></boxed-text><p><italic>Part one of this series</italic> [<xref ref-type="bibr" rid="ref1">1</xref>] <italic>showed how researchers are working with social media influencers to boost accurate health information online. 
In part two, we explore technological solutions for detecting and combating misinformation</italic>.</p><p>Misinformation is increasingly spread with single clicks, bots, and artificial intelligence (AI) deepfakes. AI-generated images and videos promote fake treatments, with even deepfakes of renowned doctors&#x2019; likenesses used to lend credibility [<xref ref-type="bibr" rid="ref2">2</xref>]. In an age where generative AI is increasing the volume and speed of health misinformation [<xref ref-type="bibr" rid="ref3">3</xref>] and agencies like the World Health Organization are raising alarms about the impact on vaccine trust and public health [<xref ref-type="bibr" rid="ref4">4</xref>], are AI and algorithm-based technologies for combating that misinformation keeping up?</p><p>While evidence suggests technological solutions to misinformation on social media are effective, researchers worry that social media companies&#x2019; interest in employing, evaluating, and improving these tools has waned in recent years.</p><p>Common technologies for combating misinformation range from algorithmic labeling of posts that contain misinformation, to downregulation of posts that AI deems inaccurate, to mass awareness campaigns that encourage critical thinking [<xref ref-type="bibr" rid="ref5">5</xref>-<xref ref-type="bibr" rid="ref7">7</xref>].</p><p>Cameron Martel, PhD&#x2014;assistant professor of marketing at the Johns Hopkins Carey Business School&#x2014;explains that in the late 2010s and early 2020s, major social media companies, including Facebook and Twitter, employed algorithms to identify potentially false articles and engaged third-party fact-checkers to verify posts.</p><p>In 2023, he led a large study of warning labels, in which over 14,000 participants in the United States were exposed to true and false headlines and asked about their belief in the headlines or interest in sharing them [<xref ref-type="bibr" rid="ref8">8</xref>].
Half of the participants were exposed to warning labels when presented with false information, while half were not.</p><p>Fact-checking labels reduced belief in false information by nearly 28% and reduced misinformation sharing by roughly 25% relative to the control group. The study also showed that among those with low trust in fact-checkers, warning labels nonetheless reduced misinformation sharing by more than 16%.</p><p>In January 2025, however, Meta announced it would end its partnership with third-party fact-checkers and instead adopt community notes, whereby everyday users comment on the accuracy of information [<xref ref-type="bibr" rid="ref9">9</xref>]. If comments are upvoted by people from across the political spectrum, then they&#x2019;ll appear prominently.</p><p>Such community notes are likely to be trusted if the process behind community note generation is transparent and reasonable, Martel says. In a study published last year, Martel and colleagues [<xref ref-type="bibr" rid="ref10">10</xref>] found that while both Democrat- and Republican-leaning participants preferred expert fact-checkers over laypeople, laypeople &#x201C;juries&#x201D; could be deemed equally trustworthy as or more trustworthy than experts if their size was large enough, they had consulted with each other, and they had equal representation across political groups.</p><sec id="s1"><title>The Rise of AI Fact-Checking</title><p>There is far less information about how the public views AI fact-checking tools and their accuracy. A study (available as a preprint [<xref ref-type="bibr" rid="ref11">11</xref>]) suggests that the large language models (LLMs) Perplexity and Grok largely align with community note decisions about posts that are misleading. 
However, 21% to 28% of posts that community notes deemed misleading were rated true by the AI bots.</p><p>Concerningly, the authors observe that the launch of the Grok bot on X in early March 2025 coincided with a substantial reduction in community note submissions, suggesting that social media users may see AI as an alternative, rather than as a complement, to democratized fact-checks.</p><p>While Martel points out that AI can be very helpful for identifying and responding to &#x201C;well debunked conspiracy theories or often repeated myths,&#x201D; the limits of AI fact-checking have become glaring during breaking news events. Al Jazeera reported, for example, that Grok struggled to recognize AI-generated media in conflict situations and incorrectly said that a trans pilot was responsible for a helicopter crash, among many other breaking news fact-checking errors [<xref ref-type="bibr" rid="ref12">12</xref>].</p><p>&#x201C;Large language models don&#x2019;t have any existing corpus of information about what&#x2019;s happening currently,&#x201D; explains Martel. Yet, &#x201C;there&#x2019;s at least anecdotal evidence that people are still trying to use LLMs to find out information about unfolding events, and that is troubling.&#x201D;</p><p>Martel says that ultimately, democratized fact-checking through community notes, AI fact-checking, and professional fact-checkers &#x201C;have great promise&#x201D; when used in tandem. For example, AI systems could refer breaking news or claims that they can&#x2019;t easily verify to human fact-checkers, social media users could rate the accuracy of information fact-checked by AI, and AI and algorithm-based systems could respond to real-time feedback from democratized fact-checks.</p><p>But fact-checking systems should be transparent, continually audited, assessed for effectiveness, and improved. And that&#x2019;s not happening.
&#x201C;Right now, it seems like there is no corporate will to invest heavily in these types of content moderation practices, so while I&#x2019;m theoretically hopeful about these technologies, in practice, I&#x2019;m less hopeful,&#x201D; says Martel.</p></sec><sec id="s2"><title>&#x201C;Content-Neutral&#x201D; Interventions Can Promote Critical Thinking on Social Media</title><p>Interventions that are &#x201C;content neutral&#x201D; are another scalable solution to reducing misinformation, says Hause Lin, PhD&#x2014;a researcher at the Massachusetts Institute of Technology and Cornell University, and a data scientist at the World Bank. &#x201C;People are going to be producing all kinds of weird content that you just will not be able to anticipate,&#x201D; he explains, but interventions that encourage critical thinking and help people spot common propaganda tactics can blunt the influence of misinformation.</p><p>In 2023, Lin and colleagues [<xref ref-type="bibr" rid="ref7">7</xref>] assessed the effectiveness of Facebook and Twitter ads that encouraged people to consider the accuracy of information before they shared it. The Facebook study, which involved 33 million users, found that these accuracy prompts led to a 2.6% reduction in misinformation sharing among users who had previously shared misinformation (as flagged by third-party fact-checkers or Facebook&#x2019;s internal system). The Twitter study, which relied on data from over 157,000 users, showed that accuracy prompts resulted in an up to 6.3% reduction in misinformation sharing among users who saw at least one ad and had previously shared misinformation.</p><p>The magnitude of the effect could be much higher with different types of accuracy prompts that are designed to reach more people over longer periods of time, Lin says. (The Facebook study only assessed user behavior for an hour after the ad was shared, while the Twitter study evaluated user behavior over days to weeks, Lin explains.) 
Regardless, a reduction of up to 6% among millions of users is a significant impact for a relatively &#x201C;low-cost&#x201D; intervention.</p><p>The goal of the project was to jolt social media users from an emotional state to a reflective state, Lin says. &#x201C;When people are scrolling, they are often not thinking reflectively but intuitively. They&#x2019;re thinking &#x2018;This gets me worked up so I&#x2019;m going to share it with the world,&#x2019;&#x201D; he explains. &#x201C;If you slow them down just a little bit, and say, &#x2018;Do you want to think more about whether this is true?&#x2019; that actually reduces misinformation.&#x201D;</p><p>Still, Lin acknowledges that large-scale content moderation may not align with the profit motive. For example, he recently studied the effect of &#x201C;prosocial&#x201D; celebrity messages aimed at countering ethnic hate&#x2013;driven rhetoric on social media in Nigeria. A preprint of the study [<xref ref-type="bibr" rid="ref13">13</xref>] suggests that people who saw the videos were less likely to share hate content but also more likely to reduce the time they spent on Twitter overall. &#x201C;The side effects of interventions like this can be unpredictable,&#x201D; Lin says.</p><p>There is growing evidence that multipronged efforts can help counter health and other misinformation, and even small efforts can make an impact.
Whether social media companies are willing to invest in these initiatives for the broader social good remains to be seen.</p></sec></body><back><fn-group><fn fn-type="conflict"><p>None declared.</p></fn></fn-group><ref-list><title>References</title><ref id="ref1"><label>1</label><nlm-citation citation-type="journal"><person-group person-group-type="author"><name name-style="western"><surname>Glauser</surname><given-names>W</given-names> </name></person-group><article-title>Influencing the influencers: how health experts are partnering with content creators to fight misinformation online</article-title><source>J Med Internet Res</source><year>2026</year><month>02</month><day>27</day><volume>28</volume><fpage>e93450</fpage><pub-id pub-id-type="doi">10.2196/93450</pub-id><pub-id pub-id-type="medline">41773685</pub-id></nlm-citation></ref><ref id="ref2"><label>2</label><nlm-citation citation-type="web"><article-title>What Dr. Sanjay Gupta learned from being the target of a deepfake health ad - Terms of Service with Clare Duffy</article-title><source>CNN Podcasts</source><year>2025</year><month>09</month><day>23</day><access-date>2026-03-26</access-date><comment><ext-link ext-link-type="uri" xlink:href="https://edition.cnn.com/audio/podcasts/terms-of-service-with-clare-duffy/episodes/56d4d6b8-25e7-11f0-a31f-a7da5a03d2d1">https://edition.cnn.com/audio/podcasts/terms-of-service-with-clare-duffy/episodes/56d4d6b8-25e7-11f0-a31f-a7da5a03d2d1</ext-link></comment></nlm-citation></ref><ref id="ref3"><label>3</label><nlm-citation citation-type="journal"><person-group person-group-type="author"><name name-style="western"><surname>Saeidnia</surname><given-names>HR</given-names> </name><name name-style="western"><surname>Jahani</surname><given-names>S</given-names> </name><name name-style="western"><surname>Ghiasi</surname><given-names>N</given-names> </name><name name-style="western"><surname>Keshavarz</surname><given-names>H</given-names> 
</name></person-group><article-title>Generative AI and health misinformation: production, propagation, and mitigation-a systematic review</article-title><source>BMC Public Health</source><year>2026</year><month>01</month><day>29</day><volume>26</volume><issue>1</issue><fpage>693</fpage><pub-id pub-id-type="doi">10.1186/s12889-025-26148-9</pub-id><pub-id pub-id-type="medline">41606555</pub-id></nlm-citation></ref><ref id="ref4"><label>4</label><nlm-citation citation-type="web"><person-group person-group-type="author"><collab>AFP</collab></person-group><article-title>Vaccines facing misinformation spike: WHO experts</article-title><source>CTV News</source><year>2026</year><month>03</month><day>18</day><access-date>2026-03-26</access-date><comment><ext-link ext-link-type="uri" xlink:href="https://www.ctvnews.ca/health/article/vaccines-facing-misinformation-spike-who-experts/">https://www.ctvnews.ca/health/article/vaccines-facing-misinformation-spike-who-experts/</ext-link></comment></nlm-citation></ref><ref id="ref5"><label>5</label><nlm-citation citation-type="journal"><person-group person-group-type="author"><name name-style="western"><surname>Cianciulli</surname><given-names>A</given-names> </name><name name-style="western"><surname>Santoro</surname><given-names>E</given-names> </name><name name-style="western"><surname>Manente</surname><given-names>R</given-names> </name><etal/></person-group><article-title>Artificial intelligence and digital technologies against health misinformation: a scoping review of public health responses</article-title><source>Healthcare (Basel)</source><year>2025</year><month>10</month><day>18</day><volume>13</volume><issue>20</issue><fpage>2623</fpage><pub-id pub-id-type="doi">10.3390/healthcare13202623</pub-id><pub-id pub-id-type="medline">41154301</pub-id></nlm-citation></ref><ref id="ref6"><label>6</label><nlm-citation citation-type="journal"><person-group person-group-type="author"><name 
name-style="western"><surname>Grover</surname><given-names>H</given-names> </name><name name-style="western"><surname>Nour</surname><given-names>R</given-names> </name><name name-style="western"><surname>Zary</surname><given-names>N</given-names> </name><name name-style="western"><surname>Powell</surname><given-names>L</given-names> </name></person-group><article-title>Online interventions addressing health misinformation: scoping review</article-title><source>J Med Internet Res</source><year>2025</year><month>09</month><day>4</day><volume>27</volume><fpage>e69618</fpage><pub-id pub-id-type="doi">10.2196/69618</pub-id><pub-id pub-id-type="medline">40906516</pub-id></nlm-citation></ref><ref id="ref7"><label>7</label><nlm-citation citation-type="other"><person-group person-group-type="author"><name name-style="western"><surname>Lin</surname><given-names>H</given-names> </name><name name-style="western"><surname>Garro</surname><given-names>H</given-names> </name><name name-style="western"><surname>Wernerfelt</surname><given-names>N</given-names> </name><etal/></person-group><article-title>Reducing misinformation sharing at scale using digital accuracy prompt ads</article-title><source>PsyArXiv</source><comment>Preprint posted online on  Feb 7, 2024</comment><pub-id pub-id-type="doi">10.31234/osf.io/u8anb</pub-id></nlm-citation></ref><ref id="ref8"><label>8</label><nlm-citation citation-type="journal"><person-group person-group-type="author"><name name-style="western"><surname>Martel</surname><given-names>C</given-names> </name><name name-style="western"><surname>Rand</surname><given-names>DG</given-names> </name></person-group><article-title>Fact-checker warning labels are effective even for those who distrust fact-checkers</article-title><source>Nat Hum Behav</source><year>2024</year><month>10</month><volume>8</volume><issue>10</issue><fpage>1957</fpage><lpage>1967</lpage><pub-id pub-id-type="doi">10.1038/s41562-024-01973-x</pub-id><pub-id 
pub-id-type="medline">39223352</pub-id></nlm-citation></ref><ref id="ref9"><label>9</label><nlm-citation citation-type="web"><person-group person-group-type="author"><name name-style="western"><surname>Kaplan</surname><given-names>J</given-names> </name></person-group><article-title>More speech and fewer mistakes</article-title><source>Meta</source><year>2025</year><month>01</month><day>7</day><access-date>2026-03-26</access-date><comment><ext-link ext-link-type="uri" xlink:href="https://about.fb.com/news/2025/01/meta-more-speech-fewer-mistakes/">https://about.fb.com/news/2025/01/meta-more-speech-fewer-mistakes/</ext-link></comment></nlm-citation></ref><ref id="ref10"><label>10</label><nlm-citation citation-type="journal"><person-group person-group-type="author"><name name-style="western"><surname>Martel</surname><given-names>C</given-names> </name><name name-style="western"><surname>Berinsky</surname><given-names>AJ</given-names> </name><name name-style="western"><surname>Rand</surname><given-names>DG</given-names> </name><name name-style="western"><surname>Zhang</surname><given-names>AX</given-names> </name><name name-style="western"><surname>Resnick</surname><given-names>P</given-names> </name></person-group><article-title>Perceived legitimacy of layperson and expert content moderators</article-title><source>PNAS Nexus</source><year>2025</year><month>05</month><day>20</day><volume>4</volume><issue>5</issue><fpage>pgaf111</fpage><pub-id pub-id-type="doi">10.1093/pnasnexus/pgaf111</pub-id><pub-id pub-id-type="medline">40395435</pub-id></nlm-citation></ref><ref id="ref11"><label>11</label><nlm-citation citation-type="other"><person-group person-group-type="author"><name name-style="western"><surname>Renault</surname><given-names>T</given-names> </name><name name-style="western"><surname>Mosleh</surname><given-names>M</given-names> </name><name name-style="western"><surname>Rand</surname><given-names>D</given-names> </name></person-group><article-title>@Grok is this 
true? LLM-powered fact-checking on social media</article-title><source>PsyArXiv</source><comment>Preprint posted online on  Dec 2, 2025</comment><pub-id pub-id-type="doi">10.31234/osf.io/85quw_v1</pub-id></nlm-citation></ref><ref id="ref12"><label>12</label><nlm-citation citation-type="web"><person-group person-group-type="author"><name name-style="western"><surname>Christopher</surname><given-names>N</given-names> </name><name name-style="western"><surname>Pepe</surname><given-names>V</given-names> </name></person-group><article-title>As millions adopt Grok to fact-check, misinformation abounds</article-title><source>Al Jazeera</source><year>2025</year><month>07</month><day>11</day><access-date>2026-03-26</access-date><comment><ext-link ext-link-type="uri" xlink:href="https://www.aljazeera.com/economy/2025/7/11/as-millions-adopt-grok-to-fact-check-misinformation-abounds">https://www.aljazeera.com/economy/2025/7/11/as-millions-adopt-grok-to-fact-check-misinformation-abounds</ext-link></comment></nlm-citation></ref><ref id="ref13"><label>13</label><nlm-citation citation-type="other"><person-group person-group-type="author"><name name-style="western"><surname>Jahani</surname><given-names>E</given-names> </name><name name-style="western"><surname>Kolic</surname><given-names>B</given-names> </name><name name-style="western"><surname>Tonneau</surname><given-names>M</given-names> </name><name name-style="western"><surname>Lin</surname><given-names>H</given-names> </name><name name-style="western"><surname>Barkoczi</surname><given-names>D</given-names> </name><name name-style="western"><surname>Fraiberger</surname><given-names>SP</given-names> </name></person-group><article-title>Celebrity messages reduce online hate and limit its spread</article-title><source>SocArXiv</source><comment>Preprint posted online on  Jan 12, 2026</comment><pub-id pub-id-type="doi">10.31235/osf.io/qmvuh_v1</pub-id></nlm-citation></ref></ref-list></back></article>