<?xml version="1.0" encoding="UTF-8"?><!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.0 20040830//EN" "journalpublishing.dtd"><article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" dtd-version="2.0" xml:lang="en" article-type="news"><front><journal-meta><journal-id journal-id-type="nlm-ta">J Med Internet Res</journal-id><journal-id journal-id-type="publisher-id">jmir</journal-id><journal-id journal-id-type="index">1</journal-id><journal-title>Journal of Medical Internet Research</journal-title><abbrev-journal-title>J Med Internet Res</abbrev-journal-title><issn pub-type="epub">1438-8871</issn><publisher><publisher-name>JMIR Publications</publisher-name><publisher-loc>Toronto, Canada</publisher-loc></publisher></journal-meta><article-meta><article-id pub-id-type="publisher-id">v28i1e95090</article-id><article-id pub-id-type="doi">10.2196/95090</article-id><article-categories><subj-group subj-group-type="heading"><subject>News and Perspectives</subject></subj-group></article-categories><title-group><article-title>The Right to Understand in Health Care AI</article-title></title-group><contrib-group><contrib contrib-type="author"><name name-style="western"><surname>Ankolekar</surname><given-names>Anshu</given-names></name><role>JMIR Correspondent</role></contrib></contrib-group><contrib-group><contrib contrib-type="editor"><name name-style="western"><surname>Clegg</surname><given-names>Kayleigh-Ann</given-names></name></contrib></contrib-group><pub-date pub-type="collection"><year>2026</year></pub-date><pub-date pub-type="epub"><day>19</day><month>3</month><year>2026</year></pub-date><volume>28</volume><elocation-id>e95090</elocation-id><history><date date-type="received"><day>10</day><month>03</month><year>2026</year></date><date date-type="accepted"><day>10</day><month>03</month><year>2026</year></date></history><copyright-statement>&#x00A9; JMIR Publications. 
Originally published in the Journal of Medical Internet Research (<ext-link ext-link-type="uri" xlink:href="https://www.jmir.org">https://www.jmir.org</ext-link>), 19.3.2026. </copyright-statement><copyright-year>2026</copyright-year><self-uri xlink:type="simple" xlink:href="https://www.jmir.org/2026/1/e95090"/><kwd-group><kwd>artificial intelligence</kwd><kwd>government regulation</kwd><kwd>European Union</kwd><kwd>clinical decision support systems</kwd><kwd>informed consent</kwd><kwd>ethics</kwd><kwd>patient participation</kwd><kwd>patient rights</kwd><kwd>health knowledge, attitudes, practice</kwd><kwd>health literacy</kwd></kwd-group></article-meta></front><body><boxed-text id="IB1"><p><bold>Key Takeaways</bold></p><list list-type="bullet"><list-item><p>The European Union Artificial Intelligence (AI) Act and General Data Protection Regulation (GDPR) give patients legal grounds to seek explanations of AI-driven medical recommendations, but neither framework specifies what a meaningful explanation requires in clinical practice.</p></list-item><list-item><p>Technical, clinical, and literacy barriers mean that even well-intentioned explanations may not reach patients in a form they can use to make informed decisions about their care.</p></list-item><list-item><p>Closing this gap requires shifting from compliance (&#x201C;Was an explanation provided?&#x201D;) to effectiveness (&#x201C;Can patients use it?&#x201D;).</p></list-item></list></boxed-text><p>An artificial intelligence (AI) system flags a nodule on a lung computed tomography scan and assigns it an 87% malignancy probability. The radiologist receives a confidence score and a heat map showing which parts of the image the algorithm focused on. 
When her patient asks, &#x201C;Why does the computer think it&#x2019;s cancer?&#x201D; she realizes the output tells her what the AI concluded but not why or how to explain it to her patient.</p><p>Explaining AI reasoning in clinical practice is exceptionally difficult. The European Union (EU) AI Act, which entered into force in August 2024 with obligations phasing in through 2027, requires that deployers of high-risk AI systems provide those affected with clear and meaningful explanations of decisions shaped by these systems [<xref ref-type="bibr" rid="ref1">1</xref>]. This creates a legal basis for patients to seek explanations of AI-driven medical recommendations. It also immediately raises a question the law alone cannot answer: What does a meaningful explanation of an AI medical decision look like in clinical practice, and how do we deliver it?</p><sec id="s1"><title>Legal Foundation and Open Questions</title><p>Many clinically deployed medical AI systems will fall into the AI Act&#x2019;s high-risk category, particularly where they are regulated as medical device software or serve as safety components of regulated products [<xref ref-type="bibr" rid="ref1">1</xref>,<xref ref-type="bibr" rid="ref2">2</xref>]. For cases where the AI Act does not directly apply, patients may also draw on the General Data Protection Regulation (GDPR). Under Article 22, the GDPR provides safeguards against decisions &#x201C;based solely on automated processing,&#x201D; including the right to meaningful information about the logic behind those decisions [<xref ref-type="bibr" rid="ref3">3</xref>].</p><p>However, most clinical AI does not meet Article 22&#x2019;s threshold, since a human clinician typically remains the formal decision maker.
This creates ambiguity and a paradox: the human oversight meant to protect patients may also reduce their legal claim to explanation, since the decision may no longer be considered purely automated.</p><p>Together, these frameworks establish that patients have legal grounds to seek explanations of AI-driven medical recommendations, but these grounds are stronger in principle than in practice, and neither framework specifies what a &#x201C;meaningful&#x201D; explanation requires.</p></sec><sec id="s2"><title>Technical Complexity</title><p>The difficulty the radiologist in the opening scenario faces is not incidental. The most accurate AI models generate outputs through millions of interacting parameters in ways that even their developers cannot fully trace [<xref ref-type="bibr" rid="ref4">4</xref>]. Requiring greater transparency can push developers toward simpler, more interpretable models that sacrifice diagnostic accuracy, a trade-off with direct consequences for patient care [<xref ref-type="bibr" rid="ref5">5</xref>,<xref ref-type="bibr" rid="ref6">6</xref>].</p><p>Current explainable AI methods only partially address this. Saliency maps show which regions of an image the algorithm weighted most heavily, but not what it understood about them [<xref ref-type="bibr" rid="ref7">7</xref>]. Feature importance rankings indicate which variables mattered, but not how they interacted or why particular thresholds were significant [<xref ref-type="bibr" rid="ref8">8</xref>]. These post hoc approximations attempt to reconstruct reasoning after the fact [<xref ref-type="bibr" rid="ref9">9</xref>,<xref ref-type="bibr" rid="ref10">10</xref>], and they can produce plausible-sounding explanations that do not accurately reflect the model&#x2019;s internal logic [<xref ref-type="bibr" rid="ref11">11</xref>].</p></sec><sec id="s3"><title>Clinical Implementation Challenges</title><p>Even where technical explanations exist, delivering them in clinical practice has its own barriers. 
Clinicians typically receive AI outputs as confidence scores and recommendations rather than reasoning, meaning they may be asked to explain a decision they do not fully understand themselves [<xref ref-type="bibr" rid="ref12">12</xref>,<xref ref-type="bibr" rid="ref13">13</xref>]. They already struggle to find time for thorough clinical conversations, and AI adds another layer of complexity to encounters that are stretched thin [<xref ref-type="bibr" rid="ref14">14</xref>-<xref ref-type="bibr" rid="ref16">16</xref>].</p><p>Automation bias&#x2014;deferring to algorithmic recommendations even when these conflict with clinical judgment&#x2014;is another implementation challenge clinicians face [<xref ref-type="bibr" rid="ref17">17</xref>]. A prospective study of radiologists reading mammograms found that incorrect AI suggestions pulled readers toward the wrong diagnosis regardless of experience level [<xref ref-type="bibr" rid="ref18">18</xref>]. An explanation delivered by a clinician who has already deferred to the algorithm may reflect the AI&#x2019;s conclusion rather than an independent clinical assessment, challenging the assumption in both legal frameworks that human oversight guarantees meaningful review.</p></sec><sec id="s4"><title>Patient Understanding</title><p>Even if clinicians provide explanations, understanding is not guaranteed. Between 22% and 58% of EU citizens report difficulty accessing, understanding, appraising, and applying the health information they need to navigate health care services, with pronounced gaps among older adults, lower socioeconomic groups, and rural communities [<xref ref-type="bibr" rid="ref19">19</xref>]. Interpreting AI outputs requires statistical and technical literacy that even a high level of general education does not guarantee. 
Many highly educated individuals struggle with medical statistics and probability statements, meaning technically accurate explanations may create barriers regardless of educational background [<xref ref-type="bibr" rid="ref20">20</xref>].</p><p>Simply providing more technical detail is unlikely to help. Research on medical decision-making suggests that excessive technical information can lead to cognitive overload, causing patients to defer to physician authority rather than engage with the explanation [<xref ref-type="bibr" rid="ref21">21</xref>]. To participate meaningfully in decisions, what patients typically need is not a description of how an algorithm works, but clarity on what is most relevant for their own situation [<xref ref-type="bibr" rid="ref22">22</xref>,<xref ref-type="bibr" rid="ref23">23</xref>].</p></sec><sec id="s5"><title>What Implementation Requires</title><p>Resolving these challenges requires multiple actors.</p><p>Developers could design explanation systems with patient input from the outset, testing comprehension with actual patients rather than demonstrating compliance with legal standards alone [<xref ref-type="bibr" rid="ref6">6</xref>]. Drawing on principles from shared decision-making [<xref ref-type="bibr" rid="ref24">24</xref>], risk communication [<xref ref-type="bibr" rid="ref25">25</xref>], and algorithmic fairness research [<xref ref-type="bibr" rid="ref26">26</xref>], a useful patient-facing explanation could include what the system is recommending and for what decision point; how confident it is and what that confidence means in practical terms; relevant key limitations, such as known performance gaps in specific populations; and viable alternative options. This reframes the goal from technical transparency to decision-relevant clarity, with effectiveness measurable by whether patients can answer these questions after an encounter.</p><p>Health care institutions can also bridge this gap. 
Allocating time for AI discussions, training staff to support patients in navigating AI-driven recommendations, and establishing clear protocols could shift explanation from a compliance exercise toward genuine patient understanding. Policy makers could support this by developing standards focused on comprehension and investing in digital health literacy programs [<xref ref-type="bibr" rid="ref2">2</xref>].</p><p>Involving patients in the design of explanation systems from the start would strengthen all of these efforts. Patient advocates have highlighted that explanation approaches tend to reflect what developers and regulators consider important, which may not always align with what patients need to know [<xref ref-type="bibr" rid="ref27">27</xref>]. Co-design partnerships between developers, institutions, and patient communities offer a route toward explanations that are not only legally sound but genuinely useful.</p></sec><sec id="s6"><title>Moving Forward</title><p>The EU AI Act gives patients something they did not previously have: a legal basis for demanding transparency about AI systems influencing their care. But the right to explanation and the capacity to deliver one that patients can genuinely use are shaped by forces the law alone cannot govern: the opacity of high-performing models, the pressures of clinical practice, and the diversity of patient needs and literacy levels.</p><p>What the AI Act&#x2019;s transparency requirements provide, beyond legal protection, is a shared standard to work toward. The right to explanation is an important starting point. 
What patients need now are answers they can use.</p></sec></body><back><fn-group><fn fn-type="conflict"><p>None declared.</p></fn></fn-group><ref-list><title>References</title><ref id="ref1"><label>1</label><nlm-citation citation-type="web"><article-title>Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act) (Text with EEA relevance)</article-title><source>EUR-Lex</source><year>2024</year><access-date>2026-03-17</access-date><comment><ext-link ext-link-type="uri" xlink:href="https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=OJ:L_202401689">https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=OJ:L_202401689</ext-link></comment></nlm-citation></ref><ref id="ref2"><label>2</label><nlm-citation citation-type="journal"><person-group person-group-type="author"><name name-style="western"><surname>van Kolfschooten</surname><given-names>H</given-names> </name><name name-style="western"><surname>van Oirschot</surname><given-names>J</given-names> </name></person-group><article-title>The EU Artificial Intelligence Act (2024): implications for healthcare</article-title><source>Health Policy</source><year>2024</year><month>11</month><volume>149</volume><fpage>105152</fpage><pub-id pub-id-type="doi">10.1016/j.healthpol.2024.105152</pub-id><pub-id pub-id-type="medline">39244818</pub-id></nlm-citation></ref><ref id="ref3"><label>3</label><nlm-citation citation-type="web"><article-title>Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) (Text with EEA 
relevance)</article-title><source>EUR-Lex</source><year>2016</year><access-date>2026-03-17</access-date><comment><ext-link ext-link-type="uri" xlink:href="https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32016R0679">https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32016R0679</ext-link></comment></nlm-citation></ref><ref id="ref4"><label>4</label><nlm-citation citation-type="journal"><person-group person-group-type="author"><name name-style="western"><surname>Rudin</surname><given-names>C</given-names> </name></person-group><article-title>Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead</article-title><source>Nat Mach Intell</source><year>2019</year><month>05</month><volume>1</volume><issue>5</issue><fpage>206</fpage><lpage>215</lpage><pub-id pub-id-type="doi">10.1038/s42256-019-0048-x</pub-id><pub-id pub-id-type="medline">35603010</pub-id></nlm-citation></ref><ref id="ref5"><label>5</label><nlm-citation citation-type="journal"><person-group person-group-type="author"><name name-style="western"><surname>Ennab</surname><given-names>M</given-names> </name><name name-style="western"><surname>Mcheick</surname><given-names>H</given-names> </name></person-group><article-title>Designing an interpretability-based model to explain the artificial intelligence algorithms in healthcare</article-title><source>Diagnostics (Basel)</source><year>2022</year><month>06</month><day>26</day><volume>12</volume><issue>7</issue><fpage>1557</fpage><pub-id pub-id-type="doi">10.3390/diagnostics12071557</pub-id><pub-id pub-id-type="medline">35885463</pub-id></nlm-citation></ref><ref id="ref6"><label>6</label><nlm-citation citation-type="journal"><person-group person-group-type="author"><name name-style="western"><surname>Ebers</surname><given-names>M</given-names> </name></person-group><article-title>AI robotics in healthcare between the EU Medical Device Regulation and the Artificial Intelligence Act: gaps 
and inconsistencies in the protection of patients and care recipients</article-title><source>Oslo Law Rev</source><year>2024</year><month>10</month><day>31</day><volume>11</volume><issue>1</issue><fpage>1</fpage><lpage>12</lpage><pub-id pub-id-type="doi">10.18261/olr.11.1.2</pub-id></nlm-citation></ref><ref id="ref7"><label>7</label><nlm-citation citation-type="journal"><person-group person-group-type="author"><name name-style="western"><surname>Borji</surname><given-names>A</given-names> </name></person-group><article-title>Saliency prediction in the deep learning era: successes and limitations</article-title><source>IEEE Trans Pattern Anal Mach Intell</source><year>2021</year><month>02</month><volume>43</volume><issue>2</issue><fpage>679</fpage><lpage>700</lpage><pub-id pub-id-type="doi">10.1109/TPAMI.2019.2935715</pub-id><pub-id pub-id-type="medline">31425064</pub-id></nlm-citation></ref><ref id="ref8"><label>8</label><nlm-citation citation-type="journal"><person-group person-group-type="author"><name name-style="western"><surname>Hossain</surname><given-names>MI</given-names> </name><name name-style="western"><surname>Zamzmi</surname><given-names>G</given-names> </name><name name-style="western"><surname>Mouton</surname><given-names>PR</given-names> </name><name name-style="western"><surname>Salekin</surname><given-names>MS</given-names> </name><name name-style="western"><surname>Sun</surname><given-names>Y</given-names> </name><name name-style="western"><surname>Goldgof</surname><given-names>D</given-names> </name></person-group><article-title>Explainable AI for medical data: current methods, limitations, and future directions</article-title><source>ACM Comput Surv</source><year>2025</year><month>06</month><day>30</day><volume>57</volume><issue>6</issue><fpage>1</fpage><lpage>46</lpage><pub-id pub-id-type="doi">10.1145/3637487</pub-id></nlm-citation></ref><ref id="ref9"><label>9</label><nlm-citation citation-type="confproc"><person-group 
person-group-type="author"><name name-style="western"><surname>Bordt</surname><given-names>S</given-names> </name><name name-style="western"><surname>Finck</surname><given-names>M</given-names> </name><name name-style="western"><surname>Raidl</surname><given-names>E</given-names> </name><name name-style="western"><surname>von Luxburg</surname><given-names>U</given-names> </name></person-group><article-title>Post-hoc explanations fail to achieve their purpose in adversarial contexts</article-title><conf-name>FAccT &#x2019;22: 2022 ACM Conference on Fairness, Accountability, and Transparency</conf-name><conf-date>Jun 21-24, 2022</conf-date><conf-loc>Seoul, Republic of Korea</conf-loc><fpage>891</fpage><lpage>905</lpage><pub-id pub-id-type="doi">10.1145/3531146.3533153</pub-id></nlm-citation></ref><ref id="ref10"><label>10</label><nlm-citation citation-type="confproc"><person-group person-group-type="author"><name name-style="western"><surname>Mhasawade</surname><given-names>V</given-names> </name><name name-style="western"><surname>Rahman</surname><given-names>S</given-names> </name><name name-style="western"><surname>Haskell-Craig</surname><given-names>Z</given-names> </name><name name-style="western"><surname>Chunara</surname><given-names>R</given-names> </name></person-group><article-title>Understanding disparities in post hoc machine learning explanation</article-title><conf-name>FAccT &#x2019;24: The 2024 ACM Conference on Fairness, Accountability, and Transparency</conf-name><conf-date>Jun 3-6, 2024</conf-date><conf-loc>Rio de Janeiro, Brazil</conf-loc><fpage>2374</fpage><lpage>2388</lpage><pub-id pub-id-type="doi">10.1145/3630106.3659043</pub-id></nlm-citation></ref><ref id="ref11"><label>11</label><nlm-citation citation-type="journal"><person-group person-group-type="author"><name name-style="western"><surname>Jin</surname><given-names>Q</given-names> </name><name name-style="western"><surname>Chen</surname><given-names>F</given-names> </name><name 
name-style="western"><surname>Zhou</surname><given-names>Y</given-names> </name><etal/></person-group><article-title>Hidden flaws behind expert-level accuracy of multimodal GPT-4 vision in medicine</article-title><source>NPJ Digit Med</source><year>2024</year><month>07</month><day>23</day><volume>7</volume><issue>1</issue><fpage>190</fpage><pub-id pub-id-type="doi">10.1038/s41746-024-01185-7</pub-id><pub-id pub-id-type="medline">39043988</pub-id></nlm-citation></ref><ref id="ref12"><label>12</label><nlm-citation citation-type="confproc"><person-group person-group-type="author"><name name-style="western"><surname>Sivaraman</surname><given-names>V</given-names> </name><name name-style="western"><surname>Bukowski</surname><given-names>LA</given-names> </name><name name-style="western"><surname>Levin</surname><given-names>J</given-names> </name><name name-style="western"><surname>Kahn</surname><given-names>JM</given-names> </name><name name-style="western"><surname>Perer</surname><given-names>A</given-names> </name></person-group><article-title>Ignore, trust, or negotiate: understanding clinician acceptance of AI-based treatment recommendations in health care</article-title><conf-name>CHI &#x2019;23: CHI Conference on Human Factors in Computing Systems</conf-name><conf-date>Apr 23-28, 2023</conf-date><conf-loc>Hamburg, Germany</conf-loc><fpage>1</fpage><lpage>18</lpage><pub-id pub-id-type="doi">10.1145/3544548.3581075</pub-id></nlm-citation></ref><ref id="ref13"><label>13</label><nlm-citation citation-type="journal"><person-group person-group-type="author"><name name-style="western"><surname>Scott</surname><given-names>IA</given-names> </name><name name-style="western"><surname>Carter</surname><given-names>SM</given-names> </name><name name-style="western"><surname>Coiera</surname><given-names>E</given-names> </name></person-group><article-title>Exploring stakeholder attitudes towards AI in clinical practice</article-title><source>BMJ Health Care 
Inform</source><year>2021</year><month>12</month><volume>28</volume><issue>1</issue><fpage>e100450</fpage><pub-id pub-id-type="doi">10.1136/bmjhci-2021-100450</pub-id><pub-id pub-id-type="medline">34887331</pub-id></nlm-citation></ref><ref id="ref14"><label>14</label><nlm-citation citation-type="journal"><person-group person-group-type="author"><name name-style="western"><surname>Braddock</surname><given-names>CH</given-names> </name><name name-style="western"><surname>Snyder</surname><given-names>L</given-names> </name></person-group><article-title>The doctor will see you shortly. The ethical significance of time for the patient-physician relationship</article-title><source>J Gen Intern Med</source><year>2005</year><month>11</month><volume>20</volume><issue>11</issue><fpage>1057</fpage><lpage>1062</lpage><pub-id pub-id-type="doi">10.1111/j.1525-1497.2005.00217.x</pub-id><pub-id pub-id-type="medline">16307634</pub-id></nlm-citation></ref><ref id="ref15"><label>15</label><nlm-citation citation-type="confproc"><person-group person-group-type="author"><name name-style="western"><surname>Jacobs</surname><given-names>M</given-names> </name><name name-style="western"><surname>He</surname><given-names>J</given-names> </name><name name-style="western"><surname>Pradier</surname><given-names>MF</given-names> </name><etal/></person-group><article-title>Designing AI for trust and collaboration in time-constrained medical decisions: a sociotechnical lens</article-title><conf-name>CHI &#x2019;21: CHI Conference on Human Factors in Computing Systems</conf-name><conf-date>May 8-13, 2021</conf-date><conf-loc>Yokohama, Japan</conf-loc><fpage>1</fpage><lpage>14</lpage><pub-id pub-id-type="doi">10.1145/3411764.3445385</pub-id></nlm-citation></ref><ref id="ref16"><label>16</label><nlm-citation citation-type="journal"><person-group person-group-type="author"><name name-style="western"><surname>Covvey</surname><given-names>JR</given-names> </name><name 
name-style="western"><surname>Kamal</surname><given-names>KM</given-names> </name><name name-style="western"><surname>Gorse</surname><given-names>EE</given-names> </name><etal/></person-group><article-title>Barriers and facilitators to shared decision-making in oncology: a systematic review of the literature</article-title><source>Support Care Cancer</source><year>2019</year><month>05</month><volume>27</volume><issue>5</issue><fpage>1613</fpage><lpage>1637</lpage><pub-id pub-id-type="doi">10.1007/s00520-019-04675-7</pub-id><pub-id pub-id-type="medline">30737578</pub-id></nlm-citation></ref><ref id="ref17"><label>17</label><nlm-citation citation-type="journal"><person-group person-group-type="author"><name name-style="western"><surname>Abdelwanis</surname><given-names>M</given-names> </name><name name-style="western"><surname>Alarafati</surname><given-names>HK</given-names> </name><name name-style="western"><surname>Tammam</surname><given-names>MMS</given-names> </name><name name-style="western"><surname>Simsekler</surname><given-names>MCE</given-names> </name></person-group><article-title>Exploring the risks of automation bias in healthcare artificial intelligence applications: a Bowtie analysis</article-title><source>J Safety Sci Resilience</source><year>2024</year><month>12</month><volume>5</volume><issue>4</issue><fpage>460</fpage><lpage>469</lpage><pub-id pub-id-type="doi">10.1016/j.jnlssr.2024.06.001</pub-id></nlm-citation></ref><ref id="ref18"><label>18</label><nlm-citation citation-type="journal"><person-group person-group-type="author"><name name-style="western"><surname>Dratsch</surname><given-names>T</given-names> </name><name name-style="western"><surname>Chen</surname><given-names>X</given-names> </name><name name-style="western"><surname>Rezazade Mehrizi</surname><given-names>M</given-names> </name><etal/></person-group><article-title>Automation bias in mammography: the impact of artificial intelligence BI-RADS suggestions on reader 
performance</article-title><source>Radiology</source><year>2023</year><month>05</month><volume>307</volume><issue>4</issue><fpage>e222176</fpage><pub-id pub-id-type="doi">10.1148/radiol.222176</pub-id><pub-id pub-id-type="medline">37129490</pub-id></nlm-citation></ref><ref id="ref19"><label>19</label><nlm-citation citation-type="web"><person-group person-group-type="author"><name name-style="western"><surname>Collado</surname><given-names>D</given-names> </name></person-group><article-title>Digital health literacy: a cornerstone of health equity in the EU: policy brief</article-title><source>Health Action International</source><year>2024</year><month>10</month><access-date>2026-02-18</access-date><comment><ext-link ext-link-type="uri" xlink:href="https://haiweb.org/wp-content/uploads/2024/10/Digital-Health-Literacy-in-the-EU.pdf">https://haiweb.org/wp-content/uploads/2024/10/Digital-Health-Literacy-in-the-EU.pdf</ext-link></comment></nlm-citation></ref><ref id="ref20"><label>20</label><nlm-citation citation-type="journal"><person-group person-group-type="author"><name name-style="western"><surname>Reyna</surname><given-names>VF</given-names> </name><name name-style="western"><surname>Nelson</surname><given-names>WL</given-names> </name><name name-style="western"><surname>Han</surname><given-names>PK</given-names> </name><name name-style="western"><surname>Dieckmann</surname><given-names>NF</given-names> </name></person-group><article-title>How numeracy influences risk comprehension and medical decision making</article-title><source>Psychol Bull</source><year>2009</year><month>11</month><volume>135</volume><issue>6</issue><fpage>943</fpage><lpage>973</lpage><pub-id pub-id-type="doi">10.1037/a0017327</pub-id><pub-id pub-id-type="medline">19883143</pub-id></nlm-citation></ref><ref id="ref21"><label>21</label><nlm-citation citation-type="journal"><person-group person-group-type="author"><name name-style="western"><surname>Peters</surname><given-names>E</given-names> 
</name></person-group><article-title>Beyond comprehension: the role of numeracy in judgments and decisions</article-title><source>Curr Dir Psychol Sci</source><year>2012</year><volume>21</volume><issue>1</issue><fpage>31</fpage><lpage>35</lpage><pub-id pub-id-type="doi">10.1177/0963721411429960</pub-id></nlm-citation></ref><ref id="ref22"><label>22</label><nlm-citation citation-type="journal"><person-group person-group-type="author"><name name-style="western"><surname>Hildt</surname><given-names>E</given-names> </name></person-group><article-title>What is the role of explainability in medical artificial intelligence? A case-based approach</article-title><source>Bioengineering (Basel)</source><year>2025</year><month>04</month><day>2</day><volume>12</volume><issue>4</issue><fpage>375</fpage><pub-id pub-id-type="doi">10.3390/bioengineering12040375</pub-id><pub-id pub-id-type="medline">40281735</pub-id></nlm-citation></ref><ref id="ref23"><label>23</label><nlm-citation citation-type="journal"><person-group person-group-type="author"><name name-style="western"><surname>Pierce</surname><given-names>RL</given-names> </name><name name-style="western"><surname>Van Biesen</surname><given-names>W</given-names> </name><name name-style="western"><surname>Van Cauwenberge</surname><given-names>D</given-names> </name><name name-style="western"><surname>Decruyenaere</surname><given-names>J</given-names> </name><name name-style="western"><surname>Sterckx</surname><given-names>S</given-names> </name></person-group><article-title>Explainability in medicine in an era of AI-based clinical decision support systems</article-title><source>Front Genet</source><year>2022</year><volume>13</volume><fpage>903600</fpage><pub-id pub-id-type="doi">10.3389/fgene.2022.903600</pub-id><pub-id pub-id-type="medline">36199569</pub-id></nlm-citation></ref><ref id="ref24"><label>24</label><nlm-citation citation-type="journal"><person-group person-group-type="author"><name 
name-style="western"><surname>Elwyn</surname><given-names>G</given-names> </name><name name-style="western"><surname>Frosch</surname><given-names>D</given-names> </name><name name-style="western"><surname>Thomson</surname><given-names>R</given-names> </name><etal/></person-group><article-title>Shared decision making: a model for clinical practice</article-title><source>J Gen Intern Med</source><year>2012</year><month>10</month><volume>27</volume><issue>10</issue><fpage>1361</fpage><lpage>1367</lpage><pub-id pub-id-type="doi">10.1007/s11606-012-2077-6</pub-id><pub-id pub-id-type="medline">22618581</pub-id></nlm-citation></ref><ref id="ref25"><label>25</label><nlm-citation citation-type="journal"><person-group person-group-type="author"><name name-style="western"><surname>Gigerenzer</surname><given-names>G</given-names> </name><name name-style="western"><surname>Edwards</surname><given-names>A</given-names> </name></person-group><article-title>Simple tools for understanding risks: from innumeracy to insight</article-title><source>BMJ</source><year>2003</year><month>09</month><day>27</day><volume>327</volume><issue>7417</issue><fpage>741</fpage><lpage>744</lpage><pub-id pub-id-type="doi">10.1136/bmj.327.7417.741</pub-id><pub-id pub-id-type="medline">14512488</pub-id></nlm-citation></ref><ref id="ref26"><label>26</label><nlm-citation citation-type="journal"><person-group person-group-type="author"><name name-style="western"><surname>Obermeyer</surname><given-names>Z</given-names> </name><name name-style="western"><surname>Powers</surname><given-names>B</given-names> </name><name name-style="western"><surname>Vogeli</surname><given-names>C</given-names> </name><name name-style="western"><surname>Mullainathan</surname><given-names>S</given-names> </name></person-group><article-title>Dissecting racial bias in an algorithm used to manage the health of 
populations</article-title><source>Science</source><year>2019</year><month>10</month><day>25</day><volume>366</volume><issue>6464</issue><fpage>447</fpage><lpage>453</lpage><pub-id pub-id-type="doi">10.1126/science.aax2342</pub-id><pub-id pub-id-type="medline">31649194</pub-id></nlm-citation></ref><ref id="ref27"><label>27</label><nlm-citation citation-type="journal"><person-group person-group-type="author"><name name-style="western"><surname>Kolfschooten</surname><given-names>HV</given-names> </name></person-group><article-title>EU regulation of artificial intelligence: challenges for patients&#x2019; rights</article-title><source>Common Market Law Rev</source><year>2022</year><month>02</month><day>1</day><volume>59</volume><issue>1</issue><fpage>81</fpage><lpage>112</lpage><pub-id pub-id-type="doi">10.54648/COLA2022005</pub-id></nlm-citation></ref></ref-list></back></article>