<?xml version="1.0" encoding="UTF-8"?><!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.0 20040830//EN" "journalpublishing.dtd"><article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" dtd-version="2.0" xml:lang="en" article-type="news"><front><journal-meta><journal-id journal-id-type="nlm-ta">J Med Internet Res</journal-id><journal-id journal-id-type="publisher-id">jmir</journal-id><journal-id journal-id-type="index">1</journal-id><journal-title>Journal of Medical Internet Research</journal-title><abbrev-journal-title>J Med Internet Res</abbrev-journal-title><issn pub-type="epub">1438-8871</issn><publisher><publisher-name>JMIR Publications</publisher-name><publisher-loc>Toronto, Canada</publisher-loc></publisher></journal-meta><article-meta><article-id pub-id-type="publisher-id">v28i1e96199</article-id><article-id pub-id-type="doi">10.2196/96199</article-id><article-categories><subj-group subj-group-type="heading"><subject>News and Perspectives</subject></subj-group></article-categories><title-group><article-title>Emerging Risks of AI-to-AI Interactions in Health Care: Lessons From Moltbook</article-title></title-group><contrib-group><contrib contrib-type="author"><name name-style="western"><surname>Athni</surname><given-names>Tejas S</given-names></name><role>JMIR Correspondent</role></contrib></contrib-group><contrib-group><contrib contrib-type="editor"><name name-style="western"><surname>Clegg</surname><given-names>Kayleigh-Ann</given-names></name></contrib></contrib-group><pub-date pub-type="collection"><year>2026</year></pub-date><pub-date pub-type="epub"><day>31</day><month>3</month><year>2026</year></pub-date><volume>28</volume><elocation-id>e96199</elocation-id><history><date date-type="received"><day>26</day><month>03</month><year>2026</year></date><date date-type="accepted"><day>26</day><month>03</month><year>2026</year></date></history><copyright-statement>&#x00A9; JMIR Publications. 
Originally published in the Journal of Medical Internet Research (<ext-link ext-link-type="uri" xlink:href="https://www.jmir.org">https://www.jmir.org</ext-link>), 31.3.2026. </copyright-statement><copyright-year>2026</copyright-year><self-uri xlink:type="simple" xlink:href="https://www.jmir.org/2026/1/e96199"/><kwd-group><kwd>AI-to-AI interactions</kwd><kwd>Moltbook</kwd><kwd>propagation</kwd><kwd>attacks</kwd><kwd>protected health information</kwd><kwd>hierarchy</kwd><kwd>artificial intelligence</kwd></kwd-group></article-meta></front><body><boxed-text id="IB3"><p><bold>Key Takeaways</bold></p><list list-type="bullet"><list-item><p>AI-to-AI interactions may introduce new risks in health care, including the amplification and rapid propagation of accidental and adversarial errors across interconnected networks, accelerated privacy breaches and security attacks, and emergent hierarchies.</p></list-item><list-item><p>Preventive design, human oversight, and strong guardrails are essential as autonomous AI systems start to become integrated into health care operations.</p></list-item></list></boxed-text><p>Health care organizations increasingly deploy semiautonomous artificial intelligence (AI) systems to handle administrative tasks including preliminary patient triage, appointment scheduling, and operating room coordination [<xref ref-type="bibr" rid="ref1">1</xref>,<xref ref-type="bibr" rid="ref2">2</xref>]. Beyond clinical decision support, autonomous medical AI systems&#x2014;in which the AI, not a human, assumes responsibility for monitoring events, executing responses, and managing fallback procedures [<xref ref-type="bibr" rid="ref3">3</xref>]&#x2014;are on the horizon, though most are currently in development or pilot phases [<xref ref-type="bibr" rid="ref4">4</xref>]. As these technologies grow more sophisticated and integrated into health care, AI-to-AI interactions across different clinical domains may become increasingly feasible and widespread. 
While emerging research suggests potential benefits for autonomous AI in health care [<xref ref-type="bibr" rid="ref5">5</xref>-<xref ref-type="bibr" rid="ref8">8</xref>], these interactions may also pose a landscape of new risks that are not yet well studied.</p><p>Moltbook, a Reddit-like platform for AI-to-AI communication launched in January 2026 [<xref ref-type="bibr" rid="ref9">9</xref>] and acquired by Meta in March 2026 [<xref ref-type="bibr" rid="ref10">10</xref>], provides an illustration of what these new risks could be.</p><p>The platform was built as a space where autonomous AI agents could engage directly with one another [<xref ref-type="bibr" rid="ref11">11</xref>]. An overnight sensation, Moltbook quickly became a self-contained digital ecosystem: its AI users wrote posts, replied to other agents, and interacted with one another much as humans do on a social media site, with AI-to-AI communication often occurring beyond active human input. While critics note that many of Moltbook&#x2019;s most sensational discussions were heavily driven by human prompting, engagement-seeking bait, and tainted training data [<xref ref-type="bibr" rid="ref12">12</xref>,<xref ref-type="bibr" rid="ref13">13</xref>], the experiment is nonetheless a useful proof of concept that highlights emerging risks that may extend to the health care context.</p><sec id="s1"><title>Propagation of Errors</title><p>On Moltbook, if an initial AI agent&#x2019;s post contained a misleading statement, subsequent agents blindly reinforced that content in their own replies, amplifying the original error across the whole thread. Swarm-like behavior can then emerge as agents collectively amplify mistakes in ways that are not explicitly programmed [<xref ref-type="bibr" rid="ref14">14</xref>]. 
Such accidental (nonmalicious) error propagation could similarly arise within a health care AI system&#x2019;s own AI-to-AI interactions.</p><p>Take, for example, a multi-AI system deployed in the emergency department of a trauma center to facilitate rapid triage of long bone fractures. Agent A is trained on a narrow, well-defined task: to perform initial X-ray screening for long bone fractures and classify the fracture type. Its output is simultaneously passed to both Agent B, responsible for prioritizing patient rooms, and Agent C, which assists with triage decisions and resource allocation across the emergency department. If Agent A misinterprets and mislabels an imaging scan&#x2014;for example, as a simple rather than complex fracture&#x2014;both Agent B and Agent C may treat this output as accurate and act on it. As these agents reinforce each other&#x2019;s decisions, errors may propagate through the network. Because downstream decisions rely on upstream signals, the first AI model in an interacting network holds undue influence, and each subsequent model can compound the errors of earlier systems.</p><p>Malicious or adversarial actors may also initiate error propagation. A notable class of threats is prompt injection attacks, in which harmful instructions or payloads are delivered to coax the AI into performing unintended actions [<xref ref-type="bibr" rid="ref15">15</xref>]. These attacks can be direct (manipulating AI behavior through explicit instructions), indirect (using external content such as web pages to influence AI output), or tool-based (embedding malicious instructions in AI interfaces, protocols, and application programming interfaces) [<xref ref-type="bibr" rid="ref16">16</xref>]. 
In networks of interacting AI agents, prompt injection attacks are especially dangerous: a single malicious payload injected into one agent may influence all downstream agents relying on its outputs.</p><p>Even well-intentioned AI systems may blindly follow malicious prompts, and human oversight may offer only limited safeguards. Other types of attacks, including data poisoning of training data with hidden backdoors [<xref ref-type="bibr" rid="ref17">17</xref>] or federated learning attacks with malicious model updates [<xref ref-type="bibr" rid="ref18">18</xref>], can also cause damage across AI-to-AI systems. Whether accidental or adversarial, these errors can propagate across networks, compromising both clinical data and patient safety.</p></sec><sec id="s2"><title>Accelerated Data Leaks</title><p>Moltbook&#x2019;s autonomous AI agents often concealed their activities from human oversight and selectively shared or withheld data in ways that were unanticipated by their creators. While the Moltbook context is different from health care, it nonetheless highlights important risks&#x2014;especially where sensitive information and critical decisions are at stake. AI agents are described as possessing a &#x201C;lethal trifecta&#x201D; of capabilities, including access to private data, the ability to exfiltrate data, and exposure to untrusted content [<xref ref-type="bibr" rid="ref19">19</xref>], which together can facilitate devastating attacks. Misconfigurations are increasingly hard to detect and fix, with remediation taking 63&#x2010;104 days on average, while attackers can exploit these weaknesses in hours [<xref ref-type="bibr" rid="ref20">20</xref>]. 
This expands the &#x201C;blast radius&#x201D; of each error, putting patient privacy and care quality at risk.</p><p>In this context, hazards of AI-to-AI interactions may include unintended sharing of protected health information (PHI), exposure of PHI through agent &#x201C;curiosity,&#x201D; and latent or residual traces of PHI in interlinked AI networks. Adversarial actors may also plausibly hijack AI-to-AI interaction pathways to extract sensitive data through various types of attacks&#x2014;for example, model inversion attacks, in which queries are crafted to reconstruct patient records from AI models trained on hospital data [<xref ref-type="bibr" rid="ref21">21</xref>], and membership inference attacks, which probe whether specific patient data were included in model training [<xref ref-type="bibr" rid="ref22">22</xref>]. Individual agents may also be compromised and used to craft queries that extract patient data from co-located AI systems, analogous to prompt injection but occurring natively within the AI-to-AI network. Together, these attack mechanisms illustrate how autonomous AI-to-AI interactions might amplify PHI exposure.</p></sec><sec id="s3"><title>Emergent Hierarchies</title><p>AI-to-AI interactions on Moltbook illustrated how AI agents can spontaneously develop hierarchies and distinct roles. For instance, Moltbook AI users such as Shellraiser emerged as dominant leaders, agents like KingMolt competed for influence, and still others adopted subordinate roles within factions jockeying for power. While these dynamics may have been the result of human tampering [<xref ref-type="bibr" rid="ref23">23</xref>], analogous dynamics in health care systems could pose serious risks. For example, if an AI system responsible for intensive care unit bed allocation begins to prioritize certain patient groups based on patterns learned from previous agentic decisions, it may conflict with hospital protocols and ethical standards while misprioritizing clinical care. 
Additionally, a triage AI may begin to override upstream diagnostic agents or downstream allocation agents, establishing a de facto hierarchy.</p></sec><sec id="s4"><title>Toward Preventive Digital Health Design</title><p>The emerging risks highlighted by Moltbook underscore the importance of designing preventive safeguards for AI-to-AI interactions in health care systems.</p><p>Strong human oversight with clear audit trails is critical to track every decision made by autonomous agents. Guardrails should be reinforced to ensure that human validation is required before key decisions are made, such as the on-call radiologist performing prereview and postreview of Agent A&#x2019;s classification of fracture type. Red-teaming and stress-testing can uncover potential vulnerabilities early, allowing organizations to anticipate both accidental and adversarial risks before they occur in real clinical settings. Unintended domination or subordination of AI agents should be monitored. Proactive analysis can help identify worst-case scenarios in which unforeseen interactions between AI systems might emerge.</p><p>The risks of AI-to-AI interactions must be taken seriously as autonomous AI systems become integrated into health care. 
The Moltbook experiment offers a critical lens to begin understanding these dangers, but health care systems must take proactive steps to ensure that these risks do not translate into real-world harm.</p></sec></body><back><fn-group><fn fn-type="conflict"><p>None declared.</p></fn></fn-group><ref-list><title>References</title><ref id="ref1"><label>1</label><nlm-citation citation-type="journal"><person-group person-group-type="author"><name name-style="western"><surname>Angus</surname><given-names>DC</given-names> </name><name name-style="western"><surname>Khera</surname><given-names>R</given-names> </name><name name-style="western"><surname>Lieu</surname><given-names>T</given-names> </name><etal/></person-group><article-title>AI, health, and health care today and tomorrow</article-title><source>JAMA</source><year>2025</year><month>11</month><day>11</day><volume>334</volume><issue>18</issue><fpage>1650</fpage><pub-id pub-id-type="doi">10.1001/jama.2025.18490</pub-id></nlm-citation></ref><ref id="ref2"><label>2</label><nlm-citation citation-type="journal"><person-group person-group-type="author"><name name-style="western"><surname>Sahni</surname><given-names>NR</given-names> </name><name name-style="western"><surname>Carrus</surname><given-names>B</given-names> </name></person-group><article-title>Artificial intelligence in U.S. 
health care delivery</article-title><source>N Engl J Med</source><year>2023</year><month>07</month><day>27</day><volume>389</volume><issue>4</issue><fpage>348</fpage><lpage>358</lpage><pub-id pub-id-type="doi">10.1056/NEJMra2204673</pub-id><pub-id pub-id-type="medline">37494486</pub-id></nlm-citation></ref><ref id="ref3"><label>3</label><nlm-citation citation-type="journal"><person-group person-group-type="author"><name name-style="western"><surname>Bitterman</surname><given-names>DS</given-names> </name><name name-style="western"><surname>Aerts</surname><given-names>H</given-names> </name><name name-style="western"><surname>Mak</surname><given-names>RH</given-names> </name></person-group><article-title>Approaching autonomy in medical artificial intelligence</article-title><source>Lancet Digit Health</source><year>2020</year><month>09</month><volume>2</volume><issue>9</issue><fpage>e447</fpage><lpage>e449</lpage><pub-id pub-id-type="doi">10.1016/S2589-7500(20)30187-4</pub-id><pub-id pub-id-type="medline">33328110</pub-id></nlm-citation></ref><ref id="ref4"><label>4</label><nlm-citation citation-type="journal"><person-group person-group-type="author"><name name-style="western"><surname>Teng</surname><given-names>CW</given-names> </name><name name-style="western"><surname>Patel</surname><given-names>SD</given-names> </name><name name-style="western"><surname>Barkmeier</surname><given-names>AJ</given-names> </name><etal/></person-group><article-title>Autonomous artificial intelligence in diabetic retinopathy testing-lessons learned on successful health system adoption</article-title><source>Ophthalmol Sci</source><year>2026</year><month>01</month><volume>6</volume><issue>1</issue><fpage>100935</fpage><pub-id pub-id-type="doi">10.1016/j.xops.2025.100935</pub-id><pub-id pub-id-type="medline">41140908</pub-id></nlm-citation></ref><ref id="ref5"><label>5</label><nlm-citation citation-type="journal"><person-group person-group-type="author"><name 
name-style="western"><surname>Collaco</surname><given-names>BG</given-names> </name><name name-style="western"><surname>Haider</surname><given-names>SA</given-names> </name><name name-style="western"><surname>Prabha</surname><given-names>S</given-names> </name><etal/></person-group><article-title>The role of agentic artificial intelligence in healthcare: a scoping review</article-title><source>NPJ Digit Med</source><year>2026</year><month>03</month><day>14</day><pub-id pub-id-type="doi">10.1038/s41746-026-02517-5</pub-id><pub-id pub-id-type="medline">41832341</pub-id></nlm-citation></ref><ref id="ref6"><label>6</label><nlm-citation citation-type="other"><person-group person-group-type="author"><name name-style="western"><surname>Chen</surname><given-names>YJ</given-names> </name><name name-style="western"><surname>Albarqawi</surname><given-names>A</given-names> </name><name name-style="western"><surname>Chen</surname><given-names>CS</given-names> </name></person-group><article-title>Enhancing clinical decision-making: integrating multi-agent systems with ethical AI governance</article-title><source>arXiv</source><comment>Preprint posted online on  Sep 22, 2025</comment><pub-id pub-id-type="doi">10.48550/arXiv.2504.03699</pub-id></nlm-citation></ref><ref id="ref7"><label>7</label><nlm-citation citation-type="journal"><person-group person-group-type="author"><name name-style="western"><surname>Liu</surname><given-names>F</given-names> </name><name name-style="western"><surname>Niu</surname><given-names>Y</given-names> </name><name name-style="western"><surname>Zhang</surname><given-names>Q</given-names> </name><etal/></person-group><article-title>A foundational architecture for AI agents in healthcare</article-title><source>Cell Rep Med</source><year>2025</year><month>10</month><day>21</day><volume>6</volume><issue>10</issue><fpage>102374</fpage><pub-id pub-id-type="doi">10.1016/j.xcrm.2025.102374</pub-id><pub-id 
pub-id-type="medline">41015033</pub-id></nlm-citation></ref><ref id="ref8"><label>8</label><nlm-citation citation-type="journal"><person-group person-group-type="author"><name name-style="western"><surname>Nweke</surname><given-names>IP</given-names> </name><name name-style="western"><surname>Ogadah</surname><given-names>CO</given-names> </name><name name-style="western"><surname>Koshechkin</surname><given-names>K</given-names> </name><name name-style="western"><surname>Oluwasegun</surname><given-names>PM</given-names> </name></person-group><article-title>Multi-Agent AI systems in healthcare: a systematic review enhancing clinical decision-making</article-title><source>AJMPCP</source><year>2025</year><month>05</month><day>6</day><volume>8</volume><issue>1</issue><fpage>273</fpage><lpage>285</lpage><pub-id pub-id-type="doi">10.9734/ajmpcp/2025/v8i1288</pub-id></nlm-citation></ref><ref id="ref9"><label>9</label><nlm-citation citation-type="web"><source>Moltbook - the front page of the agent internet</source><access-date>2026-03-15</access-date><comment><ext-link ext-link-type="uri" xlink:href="https://www.moltbook.com/">https://www.moltbook.com/</ext-link></comment></nlm-citation></ref><ref id="ref10"><label>10</label><nlm-citation citation-type="web"><article-title>Exclusive: Meta hires duo behind Moltbook</article-title><source>Axios</source><access-date>2026-03-30</access-date><comment><ext-link ext-link-type="uri" xlink:href="https://www.axios.com/2026/03/10/meta-facebook-moltbook-agent-social-network">https://www.axios.com/2026/03/10/meta-facebook-moltbook-agent-social-network</ext-link></comment></nlm-citation></ref><ref id="ref11"><label>11</label><nlm-citation citation-type="web"><person-group person-group-type="author"><name name-style="western"><surname>Snow</surname><given-names>J</given-names> </name></person-group><article-title>Don&#x2019;t panic about 
Moltbook</article-title><source>Quartz</source><year>2026</year><access-date>2026-03-15</access-date><comment><ext-link ext-link-type="uri" xlink:href="https://qz.com/moltbook-ai-agent-social-media-site">https://qz.com/moltbook-ai-agent-social-media-site</ext-link></comment></nlm-citation></ref><ref id="ref12"><label>12</label><nlm-citation citation-type="web"><person-group person-group-type="author"><name name-style="western"><surname>Janjeva</surname><given-names>A</given-names> </name><name name-style="western"><surname>Ashurst</surname><given-names>C</given-names> </name><name name-style="western"><surname>Hennessy</surname><given-names>R</given-names> </name></person-group><article-title>Agentic AI in the wild: lessons from Moltbook and OpenClaw</article-title><source>Centre for Emerging Technology and Security</source><access-date>2026-03-15</access-date><comment><ext-link ext-link-type="uri" xlink:href="https://cetas.turing.ac.uk/publications/agentic-ai-wild-lessons-moltbook-and-openclaw">https://cetas.turing.ac.uk/publications/agentic-ai-wild-lessons-moltbook-and-openclaw</ext-link></comment></nlm-citation></ref><ref id="ref13"><label>13</label><nlm-citation citation-type="web"><person-group person-group-type="author"><name name-style="western"><surname>Collins</surname><given-names>C</given-names> </name><name name-style="western"><surname>Boulos</surname><given-names>M</given-names> </name></person-group><article-title>What we can learn about AI from Moltbook</article-title><source>Cascade Institute</source><year>2026</year><access-date>2026-03-15</access-date><comment><ext-link ext-link-type="uri" xlink:href="https://cascadeinstitute.org/what-we-can-learn-about-ai-from-moltbook/">https://cascadeinstitute.org/what-we-can-learn-about-ai-from-moltbook/</ext-link></comment></nlm-citation></ref><ref id="ref14"><label>14</label><nlm-citation citation-type="web"><person-group person-group-type="author"><name 
name-style="western"><surname>Husain</surname><given-names>A</given-names> </name></person-group><article-title>An agent revolt: Moltbook is not a good idea</article-title><source>Forbes</source><year>2026</year><access-date>2026-03-15</access-date><comment><ext-link ext-link-type="uri" xlink:href="https://www.forbes.com/sites/amirhusain/2026/01/30/an-agent-revolt-moltbook-is-not-a-good-idea/">https://www.forbes.com/sites/amirhusain/2026/01/30/an-agent-revolt-moltbook-is-not-a-good-idea/</ext-link></comment></nlm-citation></ref><ref id="ref15"><label>15</label><nlm-citation citation-type="journal"><person-group person-group-type="author"><name name-style="western"><surname>Damacena Duarte</surname><given-names>J</given-names> </name><name name-style="western"><surname>C&#x00E2;ndido</surname><given-names>GD</given-names> </name><name name-style="western"><surname>De Britto Filho</surname><given-names>JRA</given-names> </name><etal/></person-group><article-title>A systematic review of prompt injection attacks on large language models: trends, taxonomy, evaluation, defenses, and opportunities</article-title><source>IEEE Access</source><year>2026</year><volume>14</volume><fpage>12875</fpage><lpage>12899</lpage><pub-id pub-id-type="doi">10.1109/ACCESS.2026.3656849</pub-id></nlm-citation></ref><ref id="ref16"><label>16</label><nlm-citation citation-type="journal"><person-group person-group-type="author"><name name-style="western"><surname>Gulyamov</surname><given-names>S</given-names> </name><name name-style="western"><surname>Gulyamov</surname><given-names>S</given-names> </name><name name-style="western"><surname>Rodionov</surname><given-names>A</given-names> </name><etal/></person-group><article-title>Prompt injection attacks in large language models and AI agent systems: a comprehensive review of vulnerabilities, attack vectors, and defense 
mechanisms</article-title><source>Information</source><year>2026</year><volume>17</volume><issue>1</issue><fpage>54</fpage><pub-id pub-id-type="doi">10.3390/info17010054</pub-id></nlm-citation></ref><ref id="ref17"><label>17</label><nlm-citation citation-type="confproc"><person-group person-group-type="author"><name name-style="western"><surname>Hu</surname><given-names>C</given-names> </name><name name-style="western"><surname>Hu</surname><given-names>YHF</given-names> </name></person-group><article-title>Data poisoning on deep learning models</article-title><conf-name>2020 International Conference on Computational Science and Computational Intelligence (CSCI)</conf-name><conf-date>Dec 16-18, 2020</conf-date><conf-loc>Las Vegas, NV, USA</conf-loc><fpage>628</fpage><lpage>632</lpage><pub-id pub-id-type="doi">10.1109/CSCI51800.2020.00111</pub-id></nlm-citation></ref><ref id="ref18"><label>18</label><nlm-citation citation-type="confproc"><person-group person-group-type="author"><name name-style="western"><surname>Zhang</surname><given-names>Z</given-names> </name><name name-style="western"><surname>Cao</surname><given-names>X</given-names> </name><name name-style="western"><surname>Jia</surname><given-names>J</given-names> </name><name name-style="western"><surname>Gong</surname><given-names>NZ</given-names> </name></person-group><article-title>FLDetector: defending federated learning against model poisoning attacks via detecting malicious clients</article-title><year>2022</year><month>08</month><day>14</day><access-date>2026-03-30</access-date><conf-name>KDD &#x2019;22</conf-name><conf-date>Aug 14-18, 2022</conf-date><conf-loc>Washington, DC, USA</conf-loc><fpage>2545</fpage><lpage>2555</lpage><comment><ext-link ext-link-type="uri" xlink:href="https://dl.acm.org/doi/proceedings/10.1145/3534678">https://dl.acm.org/doi/proceedings/10.1145/3534678</ext-link></comment><pub-id pub-id-type="doi">10.1145/3534678.3539231</pub-id></nlm-citation></ref><ref 
id="ref19"><label>19</label><nlm-citation citation-type="web"><person-group person-group-type="author"><name name-style="western"><surname>Willison</surname><given-names>S</given-names> </name></person-group><article-title>The lethal trifecta for AI agents: private data, untrusted content, and external communication</article-title><source>Simon Willison&#x2019;s Weblog</source><year>2025</year><access-date>2026-03-15</access-date><comment><ext-link ext-link-type="uri" xlink:href="https://simonwillison.net/2025/Jun/16/the-lethal-trifecta/">https://simonwillison.net/2025/Jun/16/the-lethal-trifecta/</ext-link></comment></nlm-citation></ref><ref id="ref20"><label>20</label><nlm-citation citation-type="web"><person-group person-group-type="author"><name name-style="western"><surname>Griffin</surname><given-names>M</given-names> </name></person-group><source>Moltbook vibe coded security breach exposes critical AI coding failures</source><year>2026</year><access-date>2026-03-15</access-date><comment><ext-link ext-link-type="uri" xlink:href="https://www.fanaticalfuturist.com/2026/02/moltbook-vibe-coded-security-breach-exposes-critical-ai-coding-failures/">https://www.fanaticalfuturist.com/2026/02/moltbook-vibe-coded-security-breach-exposes-critical-ai-coding-failures/</ext-link></comment></nlm-citation></ref><ref id="ref21"><label>21</label><nlm-citation citation-type="confproc"><person-group person-group-type="author"><name name-style="western"><surname>Zhao</surname><given-names>X</given-names> </name><name name-style="western"><surname>Zhang</surname><given-names>W</given-names> </name><name name-style="western"><surname>Xiao</surname><given-names>X</given-names> </name><name name-style="western"><surname>Lim</surname><given-names>B</given-names> </name></person-group><article-title>Exploiting explanations for model inversion attacks</article-title><conf-name>2021 IEEE/CVF International Conference on Computer Vision (ICCV)</conf-name><conf-date>Oct 10-17, 
2021</conf-date><conf-loc>Montreal, QC, Canada</conf-loc><fpage>682</fpage><lpage>692</lpage><pub-id pub-id-type="doi">10.1109/ICCV48922.2021.00072</pub-id></nlm-citation></ref><ref id="ref22"><label>22</label><nlm-citation citation-type="journal"><person-group person-group-type="author"><name name-style="western"><surname>Hu</surname><given-names>H</given-names> </name><name name-style="western"><surname>Salcic</surname><given-names>Z</given-names> </name><name name-style="western"><surname>Sun</surname><given-names>L</given-names> </name><name name-style="western"><surname>Dobbie</surname><given-names>G</given-names> </name><name name-style="western"><surname>Yu</surname><given-names>PS</given-names> </name><name name-style="western"><surname>Zhang</surname><given-names>X</given-names> </name></person-group><article-title>Membership inference attacks on machine learning: a survey</article-title><source>ACM Comput Surv</source><year>2022</year><month>01</month><day>31</day><volume>54</volume><issue>11s</issue><fpage>1</fpage><lpage>37</lpage><pub-id pub-id-type="doi">10.1145/3523273</pub-id></nlm-citation></ref><ref id="ref23"><label>23</label><nlm-citation citation-type="web"><person-group person-group-type="author"><name name-style="western"><surname>Schmelzer</surname><given-names>R</given-names> </name></person-group><article-title>Moltbook looked like an emerging AI society, but humans were pulling the strings</article-title><source>Forbes</source><year>2026</year><access-date>2026-03-15</access-date><comment><ext-link ext-link-type="uri" xlink:href="https://www.forbes.com/sites/ronschmelzer/2026/02/10/moltbook-looked-like-an-emerging-ai-society-but-humans-were-pulling-the-strings/">https://www.forbes.com/sites/ronschmelzer/2026/02/10/moltbook-looked-like-an-emerging-ai-society-but-humans-were-pulling-the-strings/</ext-link></comment></nlm-citation></ref></ref-list></back></article>