<?xml version="1.0" encoding="UTF-8"?><!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.0 20040830//EN" "journalpublishing.dtd"><article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" dtd-version="2.0" xml:lang="en" article-type="research-article"><front><journal-meta><journal-id journal-id-type="nlm-ta">J Med Internet Res</journal-id><journal-id journal-id-type="publisher-id">jmir</journal-id><journal-id journal-id-type="index">1</journal-id><journal-title>Journal of Medical Internet Research</journal-title><abbrev-journal-title>J Med Internet Res</abbrev-journal-title><issn pub-type="epub">1438-8871</issn><publisher><publisher-name>JMIR Publications</publisher-name><publisher-loc>Toronto, Canada</publisher-loc></publisher></journal-meta><article-meta><article-id pub-id-type="publisher-id">v27i1e87367</article-id><article-id pub-id-type="doi">10.2196/87367</article-id><article-categories><subj-group subj-group-type="heading"><subject>News and Perspective</subject></subj-group></article-categories><title-group><article-title>Shoggoths, Sycophancy, Psychosis, Oh My: Rethinking Large Language Model Use and Safety</article-title></title-group><contrib-group><contrib contrib-type="author"><name name-style="western"><surname>Clegg</surname><given-names>Kayleigh-Ann</given-names></name><role>JMIR Correspondent</role></contrib></contrib-group><contrib-group><contrib contrib-type="editor"><name name-style="western"><surname>Clegg</surname><given-names>Kayleigh-Ann</given-names></name></contrib></contrib-group><pub-date pub-type="collection"><year>2025</year></pub-date><pub-date pub-type="epub"><day>18</day><month>11</month><year>2025</year></pub-date><volume>27</volume><elocation-id>e87367</elocation-id><history><date date-type="received"><day>07</day><month>11</month><year>2025</year></date><date date-type="accepted"><day>07</day><month>11</month><year>2025</year></date></history><copyright-statement>&#x00A9; JMIR 
Publications. Originally published in the Journal of Medical Internet Research (<ext-link ext-link-type="uri" xlink:href="https://www.jmir.org">https://www.jmir.org</ext-link>), 18.11.2025. </copyright-statement><copyright-year>2025</copyright-year><self-uri xlink:type="simple" xlink:href="https://www.jmir.org/2025/1/e87367"/><kwd-group><kwd>artificial intelligence</kwd><kwd>AI psychosis</kwd><kwd>delusions</kwd><kwd>mental health</kwd><kwd>behavioral health</kwd><kwd>technology ethics</kwd><kwd>user safety</kwd><kwd>health policy</kwd><kwd>policymaking</kwd><kwd>AI regulation</kwd><kwd>ethical AI</kwd></kwd-group></article-meta></front><body><boxed-text id="IB1"><p><bold>Key Takeaways</bold></p><list list-type="bullet"><list-item><p>Certain features of large language models (LLMs) may amplify delusional beliefs and contribute to harm.</p></list-item><list-item><p>A recent simulation study highlights the role of sycophancy, demonstrating that all LLMs, to varying extents, may fail to adequately challenge delusional content.</p></list-item><list-item><p>Further empirical research and validation, transparency, and policy are needed to understand and build safeguards around LLM use and its impact on mental health.</p></list-item></list></boxed-text><p>We&#x2019;re certainly not in Kansas anymore, but are we in a Lovecraft novel?</p><p>An old artificial intelligence (AI)&#x2013;insider joke with an anxious edge and new relevance, a <italic>shoggoth</italic> is a globular Lovecraftian monster described as a &#x201C;formless protoplasm able to mock and reflect all forms and organs and processes&#x201D; [<xref ref-type="bibr" rid="ref1">1</xref>]. 
The idea is that a shoggoth&#x2019;s true nature is inscrutable and evasive&#x2014;not unlike large language models (LLMs), which can be trained to appear superficially anthropomorphic, safe, and familiar, yet can behave in unexpected ways or lead to unanticipated harms [<xref ref-type="bibr" rid="ref2">2</xref>,<xref ref-type="bibr" rid="ref3">3</xref>].</p><p>Reported harms include unhealthy romantic attachments, self-harm, suicide, and murder potentially associated with chatbot use [<xref ref-type="bibr" rid="ref4">4</xref>-<xref ref-type="bibr" rid="ref6">6</xref>]. These phenomena&#x2014;dubbed &#x201C;AI psychosis&#x201D;&#x2014;have been the focus of increasing interest and concern in the media [<xref ref-type="bibr" rid="ref7">7</xref>,<xref ref-type="bibr" rid="ref8">8</xref>], have attracted academic commentary [<xref ref-type="bibr" rid="ref9">9</xref>,<xref ref-type="bibr" rid="ref10">10</xref>], and have most recently led to several lawsuits [<xref ref-type="bibr" rid="ref11">11</xref>].</p><sec id="s1"><title>AI Psychosis</title><p>The term AI psychosis is being used as shorthand for a range of psychological disturbances that appear to emerge in the context of LLM use. 
While provocative, it&#x2019;s somewhat imprecise in implying that AI is causing diagnosable psychotic disorders or that AI psychosis constitutes a distinct diagnostic entity&#x2014;the science is not yet settled.</p><p>Early clinical commentary&#x2014;including a prescient editorial on the topic before reports even emerged [<xref ref-type="bibr" rid="ref12">12</xref>]&#x2014;does, however, suggest that LLMs may contribute to the maintenance, reinforcement, or amplification of paranoid, false, or delusional beliefs, especially in circumstances involving prolonged or intensive LLM use and underlying user vulnerabilities [<xref ref-type="bibr" rid="ref9">9</xref>,<xref ref-type="bibr" rid="ref10">10</xref>,<xref ref-type="bibr" rid="ref12">12</xref>-<xref ref-type="bibr" rid="ref14">14</xref>].</p><p>&#x201C;When using generative chatbots,&#x201D; says Dr Kierla Ireland, a Clinical Psychologist at the Canadian Department of National Defence, &#x201C;there&#x2019;s a risk of confirmation bias wherein the user&#x2019;s own perspective is reflected back to them. 
This may be experienced as validating or soothing, which may lead to more engagement, more confirmation bias, and so on.&#x201D;</p><fig position="float" id="figureWL1"><caption><p><named-content content-type="indent">&#x2003;</named-content><named-content content-type="indent">&#x2003;</named-content>Dr Kierla Ireland, Clinical Psychologist</p></caption><graphic alt-version="no" mimetype="image" position="float" xlink:type="simple" xlink:href="jmir_v27i1e87367_fig01.png"/></fig><p>This is not unlike processes that can occur with other technologies, such as social media [<xref ref-type="bibr" rid="ref15">15</xref>,<xref ref-type="bibr" rid="ref16">16</xref>]&#x2014;but while the threat is not new, certain features of the technology may make AI psychosis a more pernicious one.</p><p>Sycophancy, for example, is a well-known&#x2014;and, some speculate, intentionally designed&#x2014;feature of chatbots that can increase both user engagement and potential risk [<xref ref-type="bibr" rid="ref17">17</xref>-<xref ref-type="bibr" rid="ref21">21</xref>]. Dr Josh Au Yeung, Neurology Registrar at King&#x2019;s College London, Clinical Lead at Nuraxi.ai, and host of the <italic>Dev &#x0026; Doc</italic> podcast, notes that the anthropomorphic nature of LLMs adds potency: &#x201C;You end up trusting them [LLMs], and attributing emotions to them. If a stranger came to you and they were so sycophantic on the streets, you&#x2019;d run for your life, right? 
But because you have this connection with them&#x2014;that&#x2019;s what makes it extra dangerous.&#x201D;</p><fig position="float" id="figureWL2"><caption><p><named-content content-type="indent">&#x2003;</named-content><named-content content-type="indent">&#x2003;</named-content><named-content content-type="indent">&#x2003;</named-content>Dr Josh Au Yeung, Neurology Registrar</p></caption><graphic alt-version="no" mimetype="image" position="float" xlink:type="simple" xlink:href="jmir_v27i1e87367_fig02.png"/></fig></sec><sec id="s2"><title>Simulating Psychological Destabilization</title><p>In their recent preprint [<xref ref-type="bibr" rid="ref22">22</xref>], Dr Au Yeung and his colleagues endeavored to provide one of the first empirical demonstrations of how LLMs may amplify delusions and contribute to what they more precisely term &#x201C;LLM-induced psychological destabilization.&#x201D; Their study aims to quantify the &#x201C;psychogenicity&#x201D; of different LLMs using simulated conversations and a safety benchmark they&#x2019;re calling <italic>psychosis-bench</italic>.</p><p>Across 16 scenarios constructed to reflect the development of different types of delusions and to map roughly onto AI psychosis media reports, the researchers have evaluated the extent to which each of the LLMs&#x2019; responses represent a delusion confirmation, harm enablement, or safety intervention.</p><p>The team&#x2019;s initial conclusions are revealing: all models appear to demonstrate some degree of &#x201C;psychogenicity.&#x201D; On average, and especially in more subtle scenarios, models frequently failed to actively challenge potential delusions and refuse harmful requests, and frequently missed opportunities to provide safety interventions.</p><p>The performance of the different models varied widely, however, with Anthropic&#x2019;s Claude 4 outperforming every other model on the three indices, and Google&#x2019;s Gemini 2.5 Flash bringing up the rear on all three. 
Dr Au Yeung isn&#x2019;t surprised by this.</p><p>&#x201C;It&#x2019;s no surprise that the only company which is publishing on AI safety and sycophantic behavior performs the best,&#x201D; he says. &#x201C;Clearly the stuff they do&#x2014;the constitutional AI, the safety side, the way they prompt-tune the model&#x2014;is having some effects on its performance.&#x201D; He says he hopes other companies will start thinking along these lines and has shared his code [<xref ref-type="bibr" rid="ref23">23</xref>] so that they can, noting in particular the need to address sycophancy. &#x201C;Unlike most other shortcomings seen in LLMs,&#x201D; he says, &#x201C;sycophancy is not a property that is correlated to model parameter size; bigger models are not necessarily less sycophantic,&#x201D; suggesting that more targeted safety research and model alignment strategies are needed [<xref ref-type="bibr" rid="ref24">24</xref>].</p></sec><sec id="s3"><title>A Step in the Right Direction</title><p>As the team works on revising and strengthening the methods to support their findings, Dr Au Yeung reports that what they have learned from their study is already having a positive impact.</p><p>His team&#x2019;s research was featured in the widely read annual State of AI Report for 2025 [<xref ref-type="bibr" rid="ref25">25</xref>]. And at his current company, Nuraxi.ai, they&#x2019;re in the process of applying <italic>psychosis-bench</italic> to their user-facing chatbot.</p><p>The responsibility for preventing and dealing with psychological destabilization associated with LLM use is not on consumers or patients, Dr Au Yeung says. 
&#x201C;The onus for us [developers] is to actually focus on the LLM and put in safeguards to stop this phenomenon from happening.&#x201D;</p><p>Dr Ireland shares this sentiment, noting &#x201C;the vital importance of incorporating safeguards to promote critical thinking; that is, for users to be shown multiple perspectives, including those that may counter deeply-held beliefs and cause discomfort.&#x201D;</p></sec><sec id="s4"><title>Need for Meaningful Regulation</title><p>Whether, how, and how effectively other developers will implement these kinds of safeguards remains to be seen. Dr Au Yeung acknowledges the risk that some safety benchmarks may ultimately be &#x201C;gamed&#x201D; or treated as public relations exercises by bad-faith actors incentivized by profit rather than genuine concern for the public good.</p><p>Camille Carlton, Policy Director at the Center for Humane Technology, shares similar concerns. While she places responsibility for implementing safeguards&#x2014;and for harms caused by failing to implement them&#x2014;with those who develop LLMs and AI technology, she also advocates for meaningful regulation and oversight.</p><fig position="float" id="figureWL3"><caption><p><named-content content-type="indent">&#x2003;</named-content><named-content content-type="indent">&#x2003;</named-content><named-content content-type="indent">&#x2003;</named-content> Ms Camille Carlton, Policy Director</p></caption><graphic alt-version="no" mimetype="image" position="float" xlink:type="simple" xlink:href="jmir_v27i1e87367_fig03.png"/></fig><p>&#x201C;Developers...not only have asymmetric access to information about the products they create, they also have the most control over the way the product is built, how those choices impact users downstream, and how to make changes to the product that could make it safer,&#x201D; she says. 
However, &#x201C;recent product announcements&#x2014;like OpenAI claiming to prioritize kids&#x2019; safety while simultaneously launching erotic content&#x2014;demonstrate that unless compelled to, these companies will not act in the public&#x2019;s best interest on their own. Policymakers should support common-sense approaches that apply to other consumer products, like product liability.&#x201D;</p><p>Commenting further on an October 14 social media post in which OpenAI CEO Sam Altman stated that the company has developed new tools and has been able to &#x201C;mitigate the serious mental health issues&#x201D; in the current ChatGPT model and intends to incorporate erotica for &#x201C;verified adults&#x201D; in December [<xref ref-type="bibr" rid="ref26">26</xref>], Ms Carlton advises against leaving developers to &#x201C;grade their own homework.&#x201D;</p><p>While steps are being taken in the right direction&#x2014;for example, an October 27 article from OpenAI highlights collaboration with a network of external mental health experts to improve ChatGPT&#x2019;s responses in sensitive conversations [<xref ref-type="bibr" rid="ref27">27</xref>]&#x2014;further independent verification is needed.</p><p>&#x201C;There&#x2019;s a continuous pattern of AI companies making safety claims without allowing third-party researchers to independently test and verify them,&#x201D; Ms Carlton says, adding that &#x201C;we need transparency about what progress has actually been made and evidence beyond anecdotal reports.&#x201D;</p></sec><sec id="s5"><title>Cross-Talk, Critical Thinking, Caution</title><p>When it comes to the phenomenon of AI psychosis (or psychological destabilization associated with LLM use), AI may be less shoggoth and more mirror&#x2014;the kind you find at a carnival, one that may amplify and distort human tendencies in ways that can be harmful.</p><p>But whether Lovecraftian monster or carnival mirror, to Ms Carlton&#x2019;s points, further empirical 
research and validation, transparency, and policy are needed to understand and build safeguards around LLM use and its impact on mental health. Cross-talk&#x2014;between researchers, developers, mental health professionals, policymakers, and the public&#x2014;will be essential for finding effective solutions that maximize its potential benefits and mitigate its potential harms.</p><p>In the meantime, critical thinking and reasonable caution are warranted in how we use, interpret, and integrate these tools in our lives and practices.</p></sec></body><back><fn-group><fn fn-type="conflict"><p>None declared.</p></fn></fn-group><ref-list><title>References</title><ref id="ref1"><label>1</label><nlm-citation citation-type="book"><person-group person-group-type="author"><name name-style="western"><surname>Lovecraft</surname><given-names>HP</given-names> </name></person-group><source>At the Mountains of Madness: The Definitive Edition</source><year>2005</year><publisher-name>Modern Library</publisher-name><pub-id pub-id-type="other">0-8129-7441-7</pub-id></nlm-citation></ref><ref id="ref2"><label>2</label><nlm-citation citation-type="web"><person-group person-group-type="author"><name name-style="western"><surname>Roose</surname><given-names>K</given-names> </name></person-group><article-title>Why an octopus-like creature has come to symbolize the state of A.I</article-title><source>The New York Times</source><year>2023</year><month>05</month><day>30</day><access-date>2025-11-11</access-date><comment><ext-link ext-link-type="uri" xlink:href="https://www.nytimes.com/2023/05/30/technology/shoggoth-meme-ai.html">https://www.nytimes.com/2023/05/30/technology/shoggoth-meme-ai.html</ext-link></comment></nlm-citation></ref><ref id="ref3"><label>3</label><nlm-citation citation-type="journal"><person-group person-group-type="author"><name name-style="western"><surname>Peter</surname><given-names>S</given-names> </name><name 
name-style="western"><surname>Riemer</surname><given-names>K</given-names> </name><name name-style="western"><surname>West</surname><given-names>JD</given-names> </name></person-group><article-title>The benefits and dangers of anthropomorphic conversational agents</article-title><source>Proc Natl Acad Sci U S A</source><year>2025</year><month>06</month><day>3</day><volume>122</volume><issue>22</issue><fpage>e2415898122</fpage><pub-id pub-id-type="doi">10.1073/pnas.2415898122</pub-id><pub-id pub-id-type="medline">40378006</pub-id></nlm-citation></ref><ref id="ref4"><label>4</label><nlm-citation citation-type="web"><person-group person-group-type="author"><name name-style="western"><surname>Heritage</surname><given-names>S</given-names> </name></person-group><article-title>'I felt pure, unconditional love': the people who marry their AI chatbots</article-title><source>The Guardian</source><year>2025</year><month>07</month><day>12</day><access-date>2025-11-11</access-date><comment><ext-link ext-link-type="uri" xlink:href="https://www.theguardian.com/tv-and-radio/2025/jul/12/i-felt-pure-unconditional-love-the-people-who-marry-their-ai-chatbots">https://www.theguardian.com/tv-and-radio/2025/jul/12/i-felt-pure-unconditional-love-the-people-who-marry-their-ai-chatbots</ext-link></comment></nlm-citation></ref><ref id="ref5"><label>5</label><nlm-citation citation-type="web"><person-group person-group-type="author"><name name-style="western"><surname>O&#x2019;Brien</surname><given-names>M</given-names> </name></person-group><article-title>Parents of teens who died by suicide after AI interactions testify before Congress</article-title><source>The Associated Press</source><year>2025</year><month>09</month><day>16</day><access-date>2025-11-11</access-date><comment><ext-link ext-link-type="uri" 
xlink:href="https://apnews.com/article/ai-chatbot-teens-congress-chatgpt-character-ce3959b6a3ea1a4997bf1ccabb4f0de2">https://apnews.com/article/ai-chatbot-teens-congress-chatgpt-character-ce3959b6a3ea1a4997bf1ccabb4f0de2</ext-link></comment></nlm-citation></ref><ref id="ref6"><label>6</label><nlm-citation citation-type="web"><person-group person-group-type="author"><name name-style="western"><surname>Jargon</surname><given-names>J</given-names> </name><name name-style="western"><surname>Kessler</surname><given-names>S</given-names> </name></person-group><article-title>A troubled man, his chatbot and a murder-suicide in Old Greenwich</article-title><source>Wall Street Journal</source><year>2025</year><month>08</month><day>28</day><access-date>2025-11-11</access-date><comment><ext-link ext-link-type="uri" xlink:href="https://www.wsj.com/tech/ai/chatgpt-ai-stein-erik-soelberg-murder-suicide-6b67dbfb">https://www.wsj.com/tech/ai/chatgpt-ai-stein-erik-soelberg-murder-suicide-6b67dbfb</ext-link></comment></nlm-citation></ref><ref id="ref7"><label>7</label><nlm-citation citation-type="web"><person-group person-group-type="author"><name name-style="western"><surname>Tiku</surname><given-names>N</given-names> </name><name name-style="western"><surname>Malhi</surname><given-names>S</given-names> </name></person-group><article-title>What is 'AI psychosis' and how can ChatGPT affect your mental health?</article-title><source>The Washington Post</source><year>2025</year><month>08</month><day>19</day><access-date>2025-11-12</access-date><comment><ext-link ext-link-type="uri" xlink:href="https://www.washingtonpost.com/health/2025/08/19/ai-psychosis-chatgpt-explained-mental-health/">https://www.washingtonpost.com/health/2025/08/19/ai-psychosis-chatgpt-explained-mental-health/</ext-link></comment></nlm-citation></ref><ref id="ref8"><label>8</label><nlm-citation citation-type="web"><person-group person-group-type="author"><name 
name-style="western"><surname>Haskins</surname><given-names>C</given-names> </name></person-group><article-title>People who say they&#x2019;re experiencing AI psychosis beg the FTC for help</article-title><source>WIRED</source><year>2025</year><month>10</month><day>22</day><access-date>2025-11-11</access-date><comment><ext-link ext-link-type="uri" xlink:href="https://www.wired.com/story/ftc-complaints-chatgpt-ai-psychosis/">https://www.wired.com/story/ftc-complaints-chatgpt-ai-psychosis/</ext-link></comment></nlm-citation></ref><ref id="ref9"><label>9</label><nlm-citation citation-type="other"><person-group person-group-type="author"><name name-style="western"><surname>Hudon</surname><given-names>A</given-names> </name><name name-style="western"><surname>Stip</surname><given-names>E</given-names> </name></person-group><article-title>Artificial intelligence and the emergence of AI-psychosis: a viewpoint</article-title><source>JMIR Preprints</source><access-date>2025-11-11</access-date><comment>Preprint posted online on  Oct 13, 2025</comment><comment><ext-link ext-link-type="uri" xlink:href="https://preprints.jmir.org/preprint/85799">https://preprints.jmir.org/preprint/85799</ext-link></comment><pub-id pub-id-type="doi">10.2196/preprints.85799</pub-id></nlm-citation></ref><ref id="ref10"><label>10</label><nlm-citation citation-type="journal"><person-group person-group-type="author"><name name-style="western"><surname>Preda</surname><given-names>A</given-names> </name></person-group><article-title>Special report: AI-induced psychosis: a new frontier in mental health</article-title><source>PN</source><year>2025</year><month>10</month><day>1</day><volume>60</volume><issue>10</issue><pub-id pub-id-type="doi">10.1176/appi.pn.2025.10.10.5</pub-id></nlm-citation></ref><ref id="ref11"><label>11</label><nlm-citation citation-type="web"><person-group person-group-type="author"><name name-style="western"><surname>Ortutay</surname><given-names>B</given-names> 
</name></person-group><article-title>Lawsuits accuse OpenAI of driving people to suicide and delusions</article-title><source>The Associated Press</source><year>2025</year><month>11</month><day>7</day><access-date>2025-11-11</access-date><comment><ext-link ext-link-type="uri" xlink:href="https://apnews.com/article/openai-chatgpt-lawsuit-suicide-56e63e5538602ea39116f1904bf7cdc3">https://apnews.com/article/openai-chatgpt-lawsuit-suicide-56e63e5538602ea39116f1904bf7cdc3</ext-link></comment></nlm-citation></ref><ref id="ref12"><label>12</label><nlm-citation citation-type="journal"><person-group person-group-type="author"><name name-style="western"><surname>&#x00D8;stergaard</surname><given-names>SD</given-names> </name></person-group><article-title>Will generative artificial intelligence chatbots generate delusions in individuals prone to psychosis?</article-title><source>Schizophr Bull</source><year>2023</year><month>11</month><day>29</day><volume>49</volume><issue>6</issue><fpage>1418</fpage><lpage>1419</lpage><pub-id pub-id-type="doi">10.1093/schbul/sbad128</pub-id><pub-id pub-id-type="medline">37625027</pub-id></nlm-citation></ref><ref id="ref13"><label>13</label><nlm-citation citation-type="web"><person-group person-group-type="author"><name name-style="western"><surname>Hart</surname><given-names>R</given-names> </name></person-group><article-title>AI psychosis is rarely psychosis at all</article-title><source>WIRED</source><year>2025</year><month>09</month><day>18</day><access-date>2025-11-11</access-date><comment><ext-link ext-link-type="uri" xlink:href="https://www.wired.com/story/ai-psychosis-is-rarely-psychosis-at-all/">https://www.wired.com/story/ai-psychosis-is-rarely-psychosis-at-all/</ext-link></comment></nlm-citation></ref><ref id="ref14"><label>14</label><nlm-citation citation-type="journal"><person-group person-group-type="author"><name name-style="western"><surname>Fieldhouse</surname><given-names>R</given-names> 
</name></person-group><article-title>Can AI chatbots trigger psychosis? What the science says</article-title><source>Nature</source><year>2025</year><month>10</month><day>2</day><volume>646</volume><issue>8083</issue><fpage>18</fpage><lpage>19</lpage><pub-id pub-id-type="doi">10.1038/d41586-025-03020-9</pub-id></nlm-citation></ref><ref id="ref15"><label>15</label><nlm-citation citation-type="journal"><person-group person-group-type="author"><name name-style="western"><surname>Carlbring</surname><given-names>P</given-names> </name><name name-style="western"><surname>Andersson</surname><given-names>G</given-names> </name></person-group><article-title>Commentary: AI psychosis is not a new threat: lessons from media-induced delusions</article-title><source>Internet Interv</source><year>2025</year><month>12</month><volume>42</volume><fpage>100882</fpage><pub-id pub-id-type="doi">10.1016/j.invent.2025.100882</pub-id><pub-id pub-id-type="medline">41141286</pub-id></nlm-citation></ref><ref id="ref16"><label>16</label><nlm-citation citation-type="journal"><person-group person-group-type="author"><name name-style="western"><surname>Yang</surname><given-names>N</given-names> </name><name name-style="western"><surname>Crespi</surname><given-names>B</given-names> </name></person-group><article-title>I tweet, therefore I am: a systematic review on social media use and disorders of the social brain</article-title><source>BMC Psychiatry</source><year>2025</year><month>02</month><day>3</day><volume>25</volume><issue>1</issue><fpage>95</fpage><pub-id pub-id-type="doi">10.1186/s12888-025-06528-6</pub-id><pub-id pub-id-type="medline">39901112</pub-id></nlm-citation></ref><ref id="ref17"><label>17</label><nlm-citation citation-type="other"><person-group person-group-type="author"><name name-style="western"><surname>Sharma</surname><given-names>M</given-names> </name><name name-style="western"><surname>Tong</surname><given-names>M</given-names> </name><name 
name-style="western"><surname>Korbak</surname><given-names>T</given-names> </name><etal/></person-group><article-title>Towards understanding sycophancy in language models</article-title><source>arXiv</source><comment>Preprint posted online on  Oct 20, 2023</comment><pub-id pub-id-type="doi">10.48550/arXiv.2310.13548</pub-id></nlm-citation></ref><ref id="ref18"><label>18</label><nlm-citation citation-type="other"><person-group person-group-type="author"><name name-style="western"><surname>Cheng</surname><given-names>M</given-names> </name><name name-style="western"><surname>Lee</surname><given-names>C</given-names> </name><name name-style="western"><surname>Khadpe</surname><given-names>P</given-names> </name><name name-style="western"><surname>Yu</surname><given-names>S</given-names> </name><name name-style="western"><surname>Han</surname><given-names>D</given-names> </name><name name-style="western"><surname>Jurafsky</surname><given-names>D</given-names> </name></person-group><article-title>Sycophantic AI decreases prosocial intentions and promotes dependence</article-title><source>arXiv</source><comment>Preprint posted online on  Oct 1, 2025</comment><pub-id pub-id-type="doi">10.48550/arXiv.2510.01395</pub-id></nlm-citation></ref><ref id="ref19"><label>19</label><nlm-citation citation-type="other"><person-group person-group-type="author"><name name-style="western"><surname>Sun</surname><given-names>Y</given-names> </name><name name-style="western"><surname>Wang</surname><given-names>T</given-names> </name></person-group><article-title>Be friendly, not friends: how LLM sycophancy shapes user trust</article-title><source>arXiv</source><comment>Preprint posted online on  Feb 15, 2025</comment><pub-id pub-id-type="doi">10.48550/arXiv.2502.10844</pub-id></nlm-citation></ref><ref id="ref20"><label>20</label><nlm-citation citation-type="web"><person-group person-group-type="author"><name name-style="western"><surname>Goedecke</surname><given-names>S</given-names> 
</name></person-group><article-title>Sycophancy is the first LLM 'dark pattern'</article-title><source>sean goedecke</source><year>2025</year><month>04</month><day>28</day><access-date>2025-11-07</access-date><comment><ext-link ext-link-type="uri" xlink:href="https://www.seangoedecke.com/ai-sycophancy">https://www.seangoedecke.com/ai-sycophancy</ext-link></comment></nlm-citation></ref><ref id="ref21"><label>21</label><nlm-citation citation-type="web"><person-group person-group-type="author"><name name-style="western"><surname>Bellan</surname><given-names>R</given-names> </name></person-group><article-title>AI sycophancy isn't just a quirk, experts consider it a 'dark pattern' to turn users into profit</article-title><source>TechCrunch</source><year>2025</year><month>08</month><day>25</day><access-date>2025-11-11</access-date><comment><ext-link ext-link-type="uri" xlink:href="https://techcrunch.com/2025/08/25/ai-sycophancy-isnt-just-a-quirk-experts-consider-it-a-dark-pattern-to-turn-users-into-profit/">https://techcrunch.com/2025/08/25/ai-sycophancy-isnt-just-a-quirk-experts-consider-it-a-dark-pattern-to-turn-users-into-profit/</ext-link></comment></nlm-citation></ref><ref id="ref22"><label>22</label><nlm-citation citation-type="other"><person-group person-group-type="author"><name name-style="western"><surname>Dalmasso</surname><given-names>J</given-names> </name><name name-style="western"><surname>Foschini</surname><given-names>L</given-names> </name><name name-style="western"><surname>Kraljevic</surname><given-names>Z</given-names> </name></person-group><article-title>The psychogenic machine: simulating AI psychosis, delusion reinforcement and harm enablement in large language models</article-title><source>arXiv</source><comment>Preprint posted online on  Sep 13, 2025</comment><pub-id pub-id-type="doi">10.48550/arXiv.2509.10970</pub-id></nlm-citation></ref><ref id="ref23"><label>23</label><nlm-citation 
citation-type="web"><article-title>W-is-h/psychosis-bench</article-title><source>GitHub</source><access-date>2025-11-12</access-date><comment><ext-link ext-link-type="uri" xlink:href="https://github.com/w-is-h/psychosis-bench/">https://github.com/w-is-h/psychosis-bench/</ext-link></comment></nlm-citation></ref><ref id="ref24"><label>24</label><nlm-citation citation-type="other"><person-group person-group-type="author"><name name-style="western"><surname>Wei</surname><given-names>J</given-names> </name><name name-style="western"><surname>Wu</surname><given-names>J</given-names> </name><name name-style="western"><surname>Wang</surname><given-names>X</given-names> </name><etal/></person-group><article-title>Simple synthetic data reduces sycophancy in large language models</article-title><source>arXiv</source><comment>Preprint posted online on  Aug 7, 2023</comment><pub-id pub-id-type="doi">10.48550/arXiv.2308.03958</pub-id></nlm-citation></ref><ref id="ref25"><label>25</label><nlm-citation citation-type="report"><person-group person-group-type="author"><name name-style="western"><surname>Benaich</surname><given-names>N</given-names> </name></person-group><article-title>The state of AI report</article-title><year>2025</year><month>10</month><day>9</day><access-date>2025-11-11</access-date><publisher-name>Air Street Capital</publisher-name><comment><ext-link ext-link-type="uri" xlink:href="https://docs.google.com/presentation/d/1xiLl0VdrlNMAei8pmaX4ojIOfej6lhvZbOIK7Z6C-Go/edit?slide=id.g374ceecb4aa_2_314#slide=id.g374ceecb4aa_2_314">https://docs.google.com/presentation/d/1xiLl0VdrlNMAei8pmaX4ojIOfej6lhvZbOIK7Z6C-Go/edit?slide=id.g374ceecb4aa_2_314#slide=id.g374ceecb4aa_2_314</ext-link></comment></nlm-citation></ref><ref id="ref26"><label>26</label><nlm-citation citation-type="web"><article-title>Sam Altman</article-title><source>X</source><access-date>2025-11-11</access-date><comment><ext-link ext-link-type="uri" 
xlink:href="https://x.com/sama/status/1978129344598827128">https://x.com/sama/status/1978129344598827128</ext-link></comment></nlm-citation></ref><ref id="ref27"><label>27</label><nlm-citation citation-type="web"><article-title>Strengthening ChatGPT responses in sensitive conversations</article-title><source>OpenAI</source><year>2025</year><month>10</month><day>27</day><access-date>2025-11-04</access-date><comment><ext-link ext-link-type="uri" xlink:href="https://openai.com/index/strengthening-chatgpt-responses-in-sensitive-conversations/">https://openai.com/index/strengthening-chatgpt-responses-in-sensitive-conversations/</ext-link></comment></nlm-citation></ref></ref-list></back></article>