<?xml version="1.0" encoding="UTF-8"?><!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.0 20040830//EN" "journalpublishing.dtd"><article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" dtd-version="2.0" xml:lang="en" article-type="research-article"><front><journal-meta><journal-id journal-id-type="nlm-ta">J Med Internet Res</journal-id><journal-id journal-id-type="publisher-id">jmir</journal-id><journal-id journal-id-type="index">1</journal-id><journal-title>Journal of Medical Internet Research</journal-title><abbrev-journal-title>J Med Internet Res</abbrev-journal-title><issn pub-type="epub">1438-8871</issn><publisher><publisher-name>JMIR Publications</publisher-name><publisher-loc>Toronto, Canada</publisher-loc></publisher></journal-meta><article-meta><article-id pub-id-type="publisher-id">v28i1e91940</article-id><article-id pub-id-type="doi">10.2196/91940</article-id><article-categories><subj-group subj-group-type="heading"><subject>Viewpoint</subject></subj-group></article-categories><title-group><article-title>Governing Patient-Facing AI-Generated Video in Digital Health: A Risk-and-Ethics Matrix for Deployment, Monitoring, and Change Control</article-title></title-group><contrib-group><contrib contrib-type="author"><name name-style="western"><surname>Hu</surname><given-names>Yongzheng</given-names></name><degrees>MD</degrees><xref ref-type="aff" rid="aff1">1</xref><xref ref-type="aff" rid="aff2">2</xref></contrib><contrib contrib-type="author" corresp="yes"><name name-style="western"><surname>Jiang</surname><given-names>Wei</given-names></name><degrees>MD</degrees><xref ref-type="aff" rid="aff1">1</xref><xref ref-type="aff" rid="aff2">2</xref></contrib></contrib-group><aff id="aff1"><institution>Department of Nephrology, Affiliated Hospital of Qingdao University</institution><addr-line>16 Jiangsu Road, Shinan 
District</addr-line><addr-line>Qingdao</addr-line><addr-line>Shandong</addr-line><country>China</country></aff><aff id="aff2"><institution>Department of Nephrology, Qingdao University</institution><addr-line>Qingdao</addr-line><addr-line>Shandong</addr-line><country>China</country></aff><contrib-group><contrib contrib-type="editor"><name name-style="western"><surname>Mavragani</surname><given-names>Amaryllis</given-names></name></contrib></contrib-group><contrib-group><contrib contrib-type="reviewer"><name name-style="western"><surname>Shamsi</surname><given-names>Atefeh</given-names></name></contrib><contrib contrib-type="reviewer"><name name-style="western"><surname>Su</surname><given-names>Zhaohui</given-names></name></contrib></contrib-group><author-notes><corresp>Correspondence to Wei Jiang, MD, Department of Nephrology, Affiliated Hospital of Qingdao University, 16 Jiangsu Road, Shinan District, Qingdao, Shandong, 266000, China, 86 13044087725; <email>jiangwei866@qdu.edu.cn</email></corresp></author-notes><pub-date pub-type="collection"><year>2026</year></pub-date><pub-date pub-type="epub"><day>8</day><month>5</month><year>2026</year></pub-date><volume>28</volume><elocation-id>e91940</elocation-id><history><date date-type="received"><day>22</day><month>01</month><year>2026</year></date><date date-type="rev-recd"><day>14</day><month>04</month><year>2026</year></date><date date-type="accepted"><day>15</day><month>04</month><year>2026</year></date></history><copyright-statement>&#x00A9; Yongzheng Hu, Wei Jiang. Originally published in the Journal of Medical Internet Research (<ext-link ext-link-type="uri" xlink:href="https://www.jmir.org">https://www.jmir.org</ext-link>), 8.5.2026. 
</copyright-statement><copyright-year>2026</copyright-year><license license-type="open-access" xlink:href="https://creativecommons.org/licenses/by/4.0/"><p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (<ext-link ext-link-type="uri" xlink:href="https://creativecommons.org/licenses/by/4.0/">https://creativecommons.org/licenses/by/4.0/</ext-link>), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research (ISSN 1438-8871), is properly cited. The complete bibliographic information, a link to the original publication on <ext-link ext-link-type="uri" xlink:href="https://www.jmir.org/">https://www.jmir.org/</ext-link>, as well as this copyright and license information must be included.</p></license><self-uri xlink:type="simple" xlink:href="https://www.jmir.org/2026/1/e91940"/><abstract><p>In this Viewpoint, we argue that patient-facing high-fidelity artificial intelligence (AI)&#x2013;generated video requires governance that is operational, life cycle based, and embedded in existing institutional review pathways rather than limited to predeployment checks alone. Patient-facing high-fidelity AI-generated video&#x2014;synthetic or substantially AI-mediated video that presents realistic human likeness, voices, or clinical communication cues&#x2014;is rapidly entering patient education and clinical communication. We propose a risk-and-ethics matrix that combines residual clinical risk (likelihood &#x00D7; severity after mitigations) with an ethical alignment score that operationalizes autonomy, beneficence, nonmaleficence, and justice to yield actionable dispositions (encourage, permit with oversight, restrict or redesign, or prohibit). 
The framework links each disposition to dossier-based review, minimum controls, and postdeployment monitoring triggers&#x2014;focused on measurable outcomes (eg, comprehension, content-attributable follow-up burden, incidents and complaints, and equity gaps) as well as provenance and change control&#x2014;to support auditable, revisitable decisions over the system life cycle.</p></abstract><kwd-group><kwd>artificial intelligence</kwd><kwd>AI</kwd><kwd>digital health</kwd><kwd>deepfakes</kwd><kwd>risk management</kwd><kwd>postdeployment monitoring</kwd></kwd-group></article-meta></front><body><sec id="s1" sec-type="intro"><title>Introduction</title><p>High-fidelity artificial intelligence (AI)&#x2013;generated video&#x2014;from text-to-video patient explainers to deepfake-style clinician avatars&#x2014;is entering digital health via patient portals, telehealth workflows, and social platforms [<xref ref-type="bibr" rid="ref1">1</xref>,<xref ref-type="bibr" rid="ref2">2</xref>]. In this Viewpoint, we use this term to refer to synthetic or substantially AI-mediated video that presents realistic human likeness, voice, or other clinically salient communication cues in ways that may influence patient trust, comprehension, or decisions. We use &#x201C;operational governance&#x201D; to mean the institutional processes through which such systems are reviewed, approved, monitored, and re-evaluated over time. &#x201C;Real-world deployment&#x201D; refers to routine use outside controlled testing environments, including use through patient portals, telehealth workflows, apps, and social media channels where content may be redistributed or consumed without direct clinician mediation. &#x201C;Iterative system change&#x201D; refers to postdeployment modifications to models, prompts, templates, scripts, rendering pipelines, distribution channels, or disclosure and provenance controls that may materially alter system behavior. 
Early published examples suggest potential value for patient education and communication, including usability-tested patient digital twins for critical care education, avatar-based educational interventions associated with improved parental knowledge and care skills in hydrocephalus, and pilot or specialty use cases in radiology and postoperative patient communication [<xref ref-type="bibr" rid="ref3">3</xref>-<xref ref-type="bibr" rid="ref6">6</xref>]. However, governance often lags behind routine deployment, where content is redistributed across channels and iteratively updated (model, prompt, or template changes). These deployments are judged by outcomes beyond technical accuracy. Real-world clinical performance should be assessed using measurable end points that capture patient and system impact, including comprehension (eg, teach back checks), content-attributable follow-up burden, incident and complaint rates, and equity gaps across language and health literacy groups. This gap motivates a workflow-integrated approach that links upfront review to postdeployment monitoring, incident response, and change control across the life cycle [<xref ref-type="bibr" rid="ref7">7</xref>,<xref ref-type="bibr" rid="ref8">8</xref>].</p><p>The same properties that make AI-generated video attractive in digital health&#x2014;realism, personalization, and rapid iteration&#x2014;also create failure modes that are difficult to detect and manage once content is deployed across heterogeneous channels [<xref ref-type="bibr" rid="ref9">9</xref>,<xref ref-type="bibr" rid="ref10">10</xref>]. 
In routine settings, videos may be reposted or clipped outside institutional portals, updated as models and prompt templates change, and consumed without clinician context&#x2014;conditions that allow small errors to propagate into clinically consequential misinformation, unnecessary follow-up burden, or delayed care [<xref ref-type="bibr" rid="ref11">11</xref>-<xref ref-type="bibr" rid="ref13">13</xref>]. Identity cues embedded in video (eg, clinician likeness, institutional branding, or emotionally resonant avatars) can amplify perceived authority and trust, increasing the impact of misstatements, undisclosed synthetic identity, and privacy misuse. Before introducing residual clinical risk, it is important to distinguish it from clinical risk more broadly. In this paper, &#x201C;clinical risk&#x201D; refers to the possibility that patient-facing AI-generated video contributes to clinically relevant harm, including misinformation that changes care, delayed help seeking, unnecessary follow-up burden, privacy or identity misuse, psychological distress, or inequitable performance across patient groups. Risk becomes residual when these foreseeable failure modes are reassessed after proposed safeguards&#x2014;such as clinician script review, constrained generation, authenticated distribution, disclosure, provenance controls, and escalation pathways&#x2014;have been specified. Therefore, the matrix classifies the clinically relevant risk that remains after mitigation rather than the unmitigated theoretical hazard.</p><p>We propose a workflow-integrated governance approach: the risk-and-ethics matrix. It links residual clinical risk&#x2014;defined here as the remaining likelihood and severity of clinically relevant harm after mitigation&#x2014;to a principlism-based ethical alignment score (EAS) to support deployment decisions for patient-facing AI-generated video. 
Plotting these dimensions yields actionable dispositions (encourage, permit with oversight, restrict or redesign, and prohibit) and links each to minimum controls&#x2014;dossier-based documentation, disclosure requirements, and human oversight where appropriate&#x2014;as well as predefined postdeployment monitoring metrics and rereview triggers. We use representative scenarios to show how health systems can translate ethical commitments and probabilistic harms into auditable, revisitable decisions across the life cycle, particularly as content is updated and redistributed beyond its original workflow. We then summarize key risk mechanisms, present the scoring rubric and workflow, and map representative use cases of monitoring and change control actions.</p><p>The framework is intended for institutional decision-makers rather than for platform-wide moderation. In this paper, the relevant videos are those created, commissioned, adapted, or sponsored for patient-facing use. Typical producers include health systems; clinicians; patient education teams; digital health vendors; and researchers working in care delivery, education, or protocolized specialist settings. The primary governing bodies are local institutional actors such as institutional review boards (IRBs), digital health or clinical governance committees, patient education and communications leaders, and safety and IT oversight teams. Their role is to decide whether a proposed use case should be approved, under what minimum controls, and with what monitoring and rereview conditions. Existing laws, policies, and AI governance frameworks remain essential, but they often operate at a higher level of abstraction and do not specify how institutions should translate transparency, consent, safety, equity, provenance, and change control expectations into case-level deployment decisions for patient-facing AI-generated video use. 
What the matrix adds is not a replacement for law or formal regulation but an operational layer for institutional decision-making. It converts broad requirements such as transparency, human oversight, safety, equity, provenance, and accountability into case-level classifications, minimum controls, and life cycle management actions for specific patient-facing AI-generated video use cases [<xref ref-type="bibr" rid="ref14">14</xref>].</p><p>The aim of this Viewpoint is to argue that governance of patient-facing AI-generated video should connect residual risk assessment and ethical alignment to concrete institutional decisions, life cycle monitoring, and change control. To support this argument, we outline the main risk mechanisms; present the risk-and-ethics matrix and its workflow; and then discuss implementation, validation, and adaptation across institutional and regulatory contexts.</p></sec><sec id="s2"><title>Risk Mechanisms in Routine Digital Health Deployment of AI-Generated Video</title><p>When AI-generated video is deployed routinely across patient portals, telehealth workflows, and social platforms, failure modes emerge that are not well captured by predeployment validation and, therefore, require postdeployment monitoring, incident response, and change control. First, misinformation and content or performance drift (eg, model updates, guideline changes, prompt template changes, and channel shifts) pose direct hazards to patient decision-making [<xref ref-type="bibr" rid="ref15">15</xref>]. Hyperrealistic &#x201C;clinician&#x201D; avatars can convey inaccurate advice with a credibility premium that textual chatbots rarely command. Subtle script hallucinations and the lack of standardized clinical review workflows in many deployments amplify the chance that viewers will act on falsehoods before clinicians can intervene [<xref ref-type="bibr" rid="ref16">16</xref>,<xref ref-type="bibr" rid="ref17">17</xref>]. 
These risks are heightened in asynchronous, public-facing channels where corrections lag behind dissemination and platform ranking may preferentially surface engaging content.</p><p>Second, identity misuse and privacy infringements are uniquely salient when the video is the medium because identity cues drive trust calibration and downstream adherence. Unauthorized cloning of a clinician’s or patient’s likeness undermines autonomy, consent, and informational self-determination. Even ostensibly therapeutic recreations such as those involving deceased relatives raise unresolved questions about posthumous privacy and family interests [<xref ref-type="bibr" rid="ref18">18</xref>,<xref ref-type="bibr" rid="ref19">19</xref>]. Because mere visual plausibility confers trust, impersonation can catalyze fraud and degrade the informational environment far beyond the index case.</p><p>Third, psychological impact is bidirectional and context dependent. The immersive qualities of video can deepen engagement, although they may also retraumatize, induce overattachment to synthetic figures, or blur boundaries between memory and simulation in grief and trauma work [<xref ref-type="bibr" rid="ref20">20</xref>,<xref ref-type="bibr" rid="ref21">21</xref>]. Minimizing these harms requires careful screening, clear framing, and predefined discontinuation criteria with escalation pathways to human care&#x2014;not only technical guardrails.</p><p>Efforts to operationalize these concerns should prioritize minimal, measurable guardrails. For comprehension and misinformation, institutions should track user-reported confusion, unplanned follow-up contacts attributable to the content, and brief comprehension checks (eg, teach back&#x2013;style questions) in representative patient groups [<xref ref-type="bibr" rid="ref22">22</xref>]. 
For identity and autonomy, all patient-facing deployments should meet a baseline disclosure standard, including clear labeling of synthetic content, explicit affirmation that no real clinician is speaking, and an accessible opt out. Psychological risk warrants prescreening, short validated distress scales, and predefined stop rules. Equity should be audited through stratified analyses of comprehension, incident or complaint rates, and follow-up burden (eg, by language, age, and health literacy). These metrics make benefits and harms visible enough to guide iterative redesign and trigger rereviews when thresholds are crossed.</p><p>Fourth, authenticity and institutional trust are collective goods at stake. As synthetic media saturate telehealth, patients may begin to doubt legitimate communications (&#x201C;Is this my doctor or an AI?&#x201D;) [<xref ref-type="bibr" rid="ref23">23</xref>,<xref ref-type="bibr" rid="ref24">24</xref>]. The resulting frictions&#x2014;hesitation to follow instructions and demand for redundant confirmation&#x2014;impose hidden costs on clinicians and organizations. Therefore, provenance signals and disclosure norms matter not as mere formalities but as trust-preserving infrastructure.</p><p>Finally, justice and equity considerations cut across all preceding risks. Benefits may accrue first to well-resourced settings that can build multilingual, culturally attuned avatars, whereas harms&#x2014;deception, confusion, and exploitation&#x2014;disproportionately fall on groups with lower health literacy or access to verification tools [<xref ref-type="bibr" rid="ref25">25</xref>,<xref ref-type="bibr" rid="ref26">26</xref>]. 
Thus, equity-oriented design, performance disaggregation, and complaint path accessibility are ethical requirements, not optional enhancements.</p></sec><sec id="s3"><title>Operational Governance Framework: The Risk-and-Ethics Matrix</title><p>We present an operational governance framework&#x2014;the risk-and-ethics matrix&#x2014;that supports deployment decisions for patient-facing AI-generated video by linking residual clinical risk to a principlism-based EAS and to predefined monitoring and rereview actions. Existing laws, policies, and AI governance frameworks establish essential high-level expectations. They do not usually specify how institutions should adjudicate a concrete patient-facing AI-generated video use case, what minimum controls should accompany approval, or when iterative changes should trigger rereview. Purely technical risk scoring tends to underweight autonomy and justice [<xref ref-type="bibr" rid="ref27">27</xref>,<xref ref-type="bibr" rid="ref28">28</xref>], whereas principle-first approaches can ignore how likely and severe harms are in actual practice [<xref ref-type="bibr" rid="ref28">28</xref>-<xref ref-type="bibr" rid="ref31">31</xref>]. Our integration preserves the strengths of both approaches and translates them into decisions that IRBs, hospital digital health and patient education governance groups, communications leaders, and safety and IT oversight committees can defend, document, and audit.</p><p>Here, we distinguish inherent or unmitigated clinical hazard from residual clinical risk, which is the basis for governance classification. On the risk axis, we adapt a hospital-grade matrix consistent with common clinical risk management concepts in which risk reflects the combination of probability and severity. 
For each use case, we score (1) the likelihood that a specified harm scenario will occur on a 4-level ordinal scale (rare, unlikely, possible, and likely) and (2) the severity of plausible consequences on a 4-level scale (negligible, minor, major, and catastrophic). Cross-tabulation yields composite tiers of low, moderate, high, and extreme residual risk. Assessment proceeds by enumerating the failure modes specific to synthetic video&#x2014;misinformation that could change care, identity or privacy breaches, psychologically triggering content, or equity harms [<xref ref-type="bibr" rid="ref19">19</xref>,<xref ref-type="bibr" rid="ref32">32</xref>,<xref ref-type="bibr" rid="ref33">33</xref>]&#x2014;and then rating each mode and assigning the overall tier according to the highest credible residual risk after proposed mitigations. Mitigations&#x2014;such as clinician review of scripts, constrained generation, authenticated distribution, and provenance or disclosure controls&#x2014;are recorded in the dossier with versioning and change logs so that residual (not theoretical) risk is the basis of classification over time.</p><p>On the ethics side, we operationalize the 4 principles&#x2014;autonomy, beneficence, nonmaleficence, and justice&#x2014;into an EAS ranging from 0 to 8. Each principle receives a score of 0 when violated, a score of 1 when partially upheld, and a score of 2 when clearly upheld, guided by concrete criteria that map abstract duties to observable practices. 
Autonomy considers the transparency of AI use, the accuracy of identity representation, voluntariness, and the adequacy of consent in a video medium [<xref ref-type="bibr" rid="ref34">34</xref>]; beneficence requires a credible, evidence-informed benefit that is proportionate to the foreseeable burdens [<xref ref-type="bibr" rid="ref35">35</xref>]; nonmaleficence emphasizes minimizing physical, psychological, informational, and reputational harms and guarding against foreseeable misuse [<xref ref-type="bibr" rid="ref36">36</xref>]; and justice attends to equitable access and performance across groups, bias mitigation, nonexploitation of vulnerable populations, and preservation of public trust [<xref ref-type="bibr" rid="ref37">37</xref>]. We band the EAS scores as high (7-8), medium (4-6), or low (0-3); where evidence is limited, conservative scoring and explicit uncertainty statements are needed. <xref ref-type="table" rid="table1">Table 1</xref> shows the scales and rubric.</p><table-wrap id="t1" position="float"><label>Table 1.</label><caption><p>Scales and rubric for the risk-and-ethics matrix.</p></caption><table id="table1" frame="hsides" rules="groups"><thead><tr><td align="left" valign="bottom">Component and level or principle</td><td align="left" valign="bottom">Definition or criterion</td></tr></thead><tbody><tr><td align="left" valign="top" colspan="2">Likelihood</td></tr><tr><td align="left" valign="top"><named-content content-type="indent">&#x00A0;&#x00A0;&#x00A0;&#x00A0;</named-content>Rare</td><td align="left" valign="top">Rare under routine conditions; requires multiple safeguards to fail</td></tr><tr><td align="left" valign="top"><named-content content-type="indent">&#x00A0;&#x00A0;&#x00A0;&#x00A0;</named-content>Unlikely</td><td align="left" valign="top">Single lapse or unusual context</td></tr><tr><td align="left" valign="top"><named-content content-type="indent">&#x00A0;&#x00A0;&#x00A0;&#x00A0;</named-content>Possible</td><td align="left" 
valign="top">Common precursor conditions present</td></tr><tr><td align="left" valign="top"><named-content content-type="indent">&#x00A0;&#x00A0;&#x00A0;&#x00A0;</named-content>Likely</td><td align="left" valign="top">Reproducible under routine conditions</td></tr><tr><td align="left" valign="top" colspan="2">Severity</td></tr><tr><td align="left" valign="top"><named-content content-type="indent">&#x00A0;&#x00A0;&#x00A0;&#x00A0;</named-content>Negligible</td><td align="left" valign="top">No decision impact; self-correcting (eg, brief uncertainty without behavior change)</td></tr><tr><td align="left" valign="top"><named-content content-type="indent">&#x00A0;&#x00A0;&#x00A0;&#x00A0;</named-content>Minor</td><td align="left" valign="top">Transient confusion; extra contact (eg, 1 follow-up call or portal message for clarification)</td></tr><tr><td align="left" valign="top"><named-content content-type="indent">&#x00A0;&#x00A0;&#x00A0;&#x00A0;</named-content>Major</td><td align="left" valign="top">Clinically consequential misinformation or marked psychological harm (eg, delayed care, inappropriate self-management, or significant distress requiring clinician intervention)</td></tr><tr><td align="left" valign="top"><named-content content-type="indent">&#x00A0;&#x00A0;&#x00A0;&#x00A0;</named-content>Catastrophic</td><td align="left" valign="top">Severe harm or system-level misinformation (eg, serious injury, widespread harmful misinformation, or crisis-level psychological destabilization)</td></tr><tr><td align="left" valign="top" colspan="2">EAS<sup><xref ref-type="table-fn" rid="table1fn1">a</xref></sup>&#x2014;autonomy</td></tr><tr><td align="char" char="." valign="top"><named-content content-type="indent">&#x00A0;&#x00A0;&#x00A0;&#x00A0;</named-content>0</td><td align="left" valign="top">No or unclear disclosure; misleading identity; no opt out</td></tr><tr><td align="char" char="." 
valign="top"><named-content content-type="indent">&#x00A0;&#x00A0;&#x00A0;&#x00A0;</named-content>1</td><td align="left" valign="top">Disclosure present but incomplete or hard to understand</td></tr><tr><td align="char" char="." valign="top"><named-content content-type="indent">&#x00A0;&#x00A0;&#x00A0;&#x00A0;</named-content>2</td><td align="left" valign="top">Clear disclosure; accurate identity; voluntary, informed consent</td></tr><tr><td align="left" valign="top" colspan="2">EAS&#x2014;beneficence</td></tr><tr><td align="char" char="." valign="top"><named-content content-type="indent">&#x00A0;&#x00A0;&#x00A0;&#x00A0;</named-content>0</td><td align="left" valign="top">No credible benefit</td></tr><tr><td align="char" char="." valign="top"><named-content content-type="indent">&#x00A0;&#x00A0;&#x00A0;&#x00A0;</named-content>1</td><td align="left" valign="top">Plausible benefit; limited evidence</td></tr><tr><td align="char" char="." valign="top"><named-content content-type="indent">&#x00A0;&#x00A0;&#x00A0;&#x00A0;</named-content>2</td><td align="left" valign="top">Evidence-informed benefit; proportional to burdens</td></tr><tr><td align="left" valign="top" colspan="2">EAS&#x2014;nonmaleficence</td></tr><tr><td align="char" char="." valign="top"><named-content content-type="indent">&#x00A0;&#x00A0;&#x00A0;&#x00A0;</named-content>0</td><td align="left" valign="top">Foreseeable significant harms</td></tr><tr><td align="char" char="." valign="top"><named-content content-type="indent">&#x00A0;&#x00A0;&#x00A0;&#x00A0;</named-content>1</td><td align="left" valign="top">Harms possible with partial mitigation</td></tr><tr><td align="char" char="." valign="top"><named-content content-type="indent">&#x00A0;&#x00A0;&#x00A0;&#x00A0;</named-content>2</td><td align="left" valign="top">Robust mitigation (human in the loop, constrained generation, or crisis plan)</td></tr><tr><td align="left" valign="top" colspan="2">EAS&#x2014;justice</td></tr><tr><td align="char" char="." 
valign="top"><named-content content-type="indent">&#x00A0;&#x00A0;&#x00A0;&#x00A0;</named-content>0</td><td align="left" valign="top">Exacerbates inequity or exploitation</td></tr><tr><td align="char" char="." valign="top"><named-content content-type="indent">&#x00A0;&#x00A0;&#x00A0;&#x00A0;</named-content>1</td><td align="left" valign="top">Neutral or unclear</td></tr><tr><td align="char" char="." valign="top"><named-content content-type="indent">&#x00A0;&#x00A0;&#x00A0;&#x00A0;</named-content>2</td><td align="left" valign="top">Equitable access; bias mitigation; accessible complaint path</td></tr><tr><td align="left" valign="top" colspan="2">EAS banding</td></tr><tr><td align="left" valign="top"><named-content content-type="indent">&#x00A0;&#x00A0;&#x00A0;&#x00A0;</named-content>High</td><td align="char" char="." valign="top">7-8</td></tr><tr><td align="left" valign="top"><named-content content-type="indent">&#x00A0;&#x00A0;&#x00A0;&#x00A0;</named-content>Medium</td><td align="char" char="." valign="top">4-6</td></tr><tr><td align="left" valign="top"><named-content content-type="indent">&#x00A0;&#x00A0;&#x00A0;&#x00A0;</named-content>Low</td><td align="char" char="." valign="top">0-3</td></tr></tbody></table><table-wrap-foot><fn id="table1fn1"><p><sup>a</sup>EAS: ethical alignment score.</p></fn></table-wrap-foot></table-wrap><p>Residual clinical risk is scored on a 4-level likelihood scale (rare, unlikely, possible, and likely) and a 4-level severity scale (negligible, minor, major, and catastrophic) interpreted after proposed mitigations. Ethical alignment is scored using the EAS (0-8), operationalizing autonomy, beneficence, nonmaleficence, and justice on a scale from 0 to 2 per principle and then banded as high (7-8), medium (4-6), or low (0-3). 
Ratings should reflect residual (not theoretical) risk, with conservative defaults and explicitly recorded uncertainty when evidence is sparse, and should be versioned for rereview after material changes.</p><p>Plotting the residual risk tier against EAS bands yields 4 deployment dispositions with explicit entry rules (<xref ref-type="fig" rid="figure1">Figure 1</xref>): encourage, permit with oversight, restrict or redesign, and prohibit. &#x201C;Encourage&#x201D; applies when residual risk is low and ethical alignment is high, supporting routine deployment with disclosure and periodic quality assurance. &#x201C;Permit with oversight&#x201D; covers moderate risk with at least medium ethical alignment (or low risk with medium ethical alignment) and requires human-in-the-loop review where appropriate, audit trails, incident reporting, time-limited approvals, and a postdeployment monitoring plan with predefined triggers for rereview. &#x201C;Restrict or redesign&#x201D; is appropriate when residual risk is high or when ethical alignment is low in the absence of intrinsic deception (ie, the use case is not fundamentally based on impersonation, undisclosed synthetic clinicians, or manipulative identity cues); here, scope narrowing, stronger transparency and safety guardrails, and protocolized pilots are prerequisites for reconsideration. &#x201C;Prohibit&#x201D; is reserved for extreme risk or for low ethical alignment tied to intrinsic deception or manipulation that exploits vulnerabilities; in such cases, deployment is disallowed and takedown, reporting, and other platform- or legal-level remedies may be warranted. 
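As an illustration, the entry rules described above can be sketched in code. This is a minimal sketch, not a prescribed implementation: the level names, EAS banding, and disposition rules follow the rubric in the text, but the exact cross-tabulation of likelihood and severity scores into composite tiers shown here is an assumption for illustration.

```python
# Illustrative sketch of the risk-and-ethics matrix entry rules.
# ASSUMPTION: the additive cross-tabulation of likelihood x severity
# into tiers is a placeholder; institutions would define their own table.

LIKELIHOOD = ["rare", "unlikely", "possible", "likely"]
SEVERITY = ["negligible", "minor", "major", "catastrophic"]

def residual_risk_tier(likelihood: str, severity: str) -> str:
    """Cross-tabulate post-mitigation likelihood and severity into a tier."""
    score = LIKELIHOOD.index(likelihood) + SEVERITY.index(severity)  # 0..6
    return ["low", "low", "moderate", "moderate", "high", "high", "extreme"][score]

def eas_band(autonomy: int, beneficence: int, nonmaleficence: int, justice: int) -> str:
    """Sum per-principle scores (0-2 each) and band the 0-8 total."""
    total = autonomy + beneficence + nonmaleficence + justice
    if total >= 7:
        return "high"
    if total >= 4:
        return "medium"
    return "low"

def disposition(tier: str, band: str, intrinsic_deception: bool = False) -> str:
    """Map residual risk tier and EAS band to a deployment disposition."""
    if tier == "extreme" or (band == "low" and intrinsic_deception):
        return "prohibit"
    if tier == "high" or band == "low":
        return "restrict or redesign"
    if tier == "low" and band == "high":
        return "encourage"
    return "permit with oversight"
```

For example, a clinician-vetted, low-risk educational explainer with full disclosure (low tier, high EAS) maps to "encourage," whereas an undisclosed synthetic clinician avatar (low EAS with intrinsic deception) maps to "prohibit" regardless of residual risk tier.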
These thresholds aim less for numerical precision than for defensible consistency across cases and clarity about what concrete changes would move an application toward a safer, more ethically aligned quadrant.</p><fig position="float" id="figure1"><label>Figure 1.</label><caption><p>Risk-and-ethics matrix for patient-facing generative artificial intelligence video in health care. Residual clinical risk (likelihood &#x00D7; severity after mitigations) is plotted against ethical alignment (ethical alignment score; EAS), yielding 4 deployment dispositions: encourage, permit with oversight, restrict or redesign, and prohibit. The horizontal axis represents the residual risk tier (low, moderate, high, or extreme), and the vertical axis represents EAS band (high, medium, and low), where EAS operationalizes autonomy, beneficence, nonmaleficence, and justice on a scale from 0 to 8. Entry rules prioritize residual (not theoretical) risk and link each disposition to minimum controls and rereview triggers.</p></caption><graphic alt-version="no" mimetype="image" position="float" xlink:type="simple" xlink:href="jmir_v28i1e91940_fig01.png"/></fig><p>To support consistency, the rubric anchors abstract principles to concrete artifacts (eg, disclosure language, escalation pathways, evidence of benefit, and provenance controls) so that different panels can converge even when data are sparse [<xref ref-type="bibr" rid="ref38">38</xref>,<xref ref-type="bibr" rid="ref39">39</xref>]. Interrater reliability is promoted through independent prescoring, structured reconciliation, and written rationales for deviations from precedent. Because both risk and ethics are provisional in fast-moving sociotechnical contexts [<xref ref-type="bibr" rid="ref40">40</xref>], institutions should version scores with date-stamped assumptions and require rereview after model updates, distribution channel changes, or sentinel events. 
Therefore, classification is not a verdict but a living record of judgment under stated conditions.</p><p>Finally, we specify a lightweight workflow that fits existing governance rather than creating parallel structures. Because review burden should be proportional to risk and novelty, we do not assume a single fixed evaluation time for all use cases. Low-risk, template-based, clinician-vetted educational videos that closely follow prior approved formats may undergo an expedited review focused on dossier updates, disclosure, and any material changes, whereas novel, higher-risk, psychologically sensitive, identity-based, or publicly disseminated use cases warrant fuller panel deliberation and documentation. Proponents submit a use case dossier describing purpose and audience, generation pipeline, distribution channel, mitigation plan, anticipated failure modes, and versioning or change logs. A triad panel&#x2014;a clinician or health educator, bioethicist, and safety or IT lead&#x2014;scores risk and EAS independently, reconciles differences, and documents residual disagreements; panels may co-opt patient advocacy or health literacy expertise for patient-facing deployments when needed. Decisions link directly to the chosen disposition and to a postdeployment monitoring plan with predefined indicators (eg, misinformation incidents, user-reported confusion, complaint rates, follow-up burden, and equity gaps) and time-bound rereview triggers. Operational steps are summarized in <xref ref-type="fig" rid="figure2">Figure 2</xref>; a printable evaluator&#x2019;s checklist and monitoring triggers can be found in <xref ref-type="supplementary-material" rid="app1">Multimedia Appendix 1</xref>, and a structured use case dossier template can be found in <xref ref-type="supplementary-material" rid="app2">Multimedia Appendix 2</xref>. 
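The panel workflow just described (independent prescoring of the four principles on a 0 to 2 scale, summation into a 0 to 8 EAS, banding, and reconciliation of divergent scores) can be sketched as a minimal data model. Note that the article defines EAS bands qualitatively; the numeric cutoffs used here (high at 7 or above, medium at 4 to 6) are illustrative assumptions, not specified values.

```python
from dataclasses import dataclass, field

PRINCIPLES = ("autonomy", "beneficence", "nonmaleficence", "justice")

def eas_total(scores: dict) -> int:
    """Sum the four 0-2 principle ratings into an EAS of 0-8."""
    if set(scores) != set(PRINCIPLES) or any(v not in (0, 1, 2) for v in scores.values()):
        raise ValueError("each of the four principles must be rated 0, 1, or 2")
    return sum(scores.values())

def eas_band(total: int, high_cut: int = 7, medium_cut: int = 4) -> str:
    """Band an EAS total. The cutoffs are illustrative assumptions; the
    article defines the high/medium/low bands qualitatively."""
    return "high" if total >= high_cut else "medium" if total >= medium_cut else "low"

@dataclass
class PanelScore:
    reviewer_role: str          # clinician/health educator, bioethicist, safety/IT lead
    risk_tier: str              # low / moderate / high / extreme
    principle_scores: dict      # principle -> 0..2

@dataclass
class DossierReview:
    use_case: str
    prescores: list = field(default_factory=list)   # independent prescoring
    rationale: str = ""                             # written reconciliation record

    def reconcile(self) -> dict:
        """Flag principles where independent prescores diverge, so the panel
        can attach a written rationale to each disagreement."""
        diverging = {}
        for p in PRINCIPLES:
            vals = {s.principle_scores[p] for s in self.prescores}
            if len(vals) > 1:
                diverging[p] = sorted(vals)
        return diverging
```

With these assumed cutoffs, the worked cases later in this article score as described: all-2 ratings sum to 8 and band high, whereas ratings of 1, 2, 1, and 1 sum to 5 and band medium.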
For institutional use, matrix-guided evaluation should be embedded into release governance such that publication or distribution through official channels requires completed dossier documentation, named sign-off, and versioned approval records. Materials that bypass review or breach minimum controls should trigger withholding of institutional dissemination; pause or takedown; incident review; and, where applicable, corrective action under local policy. In this sense, the key incentive structure is not speed-based reward. It is the linkage of authorization, traceability, and consequences to the right to deploy patient-facing AI-generated video. Where feasible, dossiers should trace content provenance from prompt to rendered asset to distribution and specify abuse-resistant defaults (eg, prohibited impersonation classes and persistent labeling), with escalation procedures when out-of-distribution use or unequal performance emerges. <xref ref-type="table" rid="table2">Table 2</xref> shows governance categories and minimum controls.</p><fig position="float" id="figure2"><label>Figure 2.</label><caption><p>Workflow from use case dossier to classification and postdeployment monitoring. Proponents submit a versioned use case dossier; a triad panel (clinician or health educator, bioethicist, and safety or IT lead) independently prescores residual risk and ethical alignment score (EAS), reconciles differences, and records a written rationale. The resulting disposition maps to minimum controls (<xref ref-type="table" rid="table2">Table 2</xref>) and a monitoring plan with predefined indicators and rereview triggers (eg, incidents, user-reported confusion, equity gaps, and version changes). 
Operational checklist and monitoring triggers are provided in <xref ref-type="supplementary-material" rid="app1">Multimedia Appendix 1</xref>; the dossier template is provided in <xref ref-type="supplementary-material" rid="app2">Multimedia Appendix 2</xref>.</p></caption><graphic alt-version="no" mimetype="image" position="float" xlink:type="simple" xlink:href="jmir_v28i1e91940_fig02.png"/></fig><table-wrap id="t2" position="float"><label>Table 2.</label><caption><p>Deployment dispositions, entry rules, and minimum controls.</p></caption><table id="table2" frame="hsides" rules="groups"><thead><tr><td align="left" valign="bottom">Category</td><td align="left" valign="bottom">Residual risk tier &#x00D7; EAS<sup><xref ref-type="table-fn" rid="table2fn1">a</xref></sup> band</td><td align="left" valign="bottom">Minimum controls<sup><xref ref-type="table-fn" rid="table2fn2">b</xref></sup></td><td align="left" valign="bottom">Illustrative examples<sup><xref ref-type="table-fn" rid="table2fn3">c</xref></sup></td></tr></thead><tbody><tr><td align="left" valign="top">Encourage</td><td align="left" valign="top">Low risk and high EAS</td><td align="left" valign="top">Disclosure and routine QA<sup><xref ref-type="table-fn" rid="table2fn4">d</xref></sup></td><td align="left" valign="top">Transparent, clinician-vetted patient education avatar</td></tr><tr><td align="left" valign="top">Permit with oversight</td><td align="left" valign="top">Moderate risk and &#x2265;medium EAS or low risk and medium EAS</td><td align="left" valign="top">Human in the loop, audit trail, incident reporting, and time-limited approval</td><td align="left" valign="top">Protocolized &#x201C;deepfake therapy&#x201D; in specialist or IRB<sup><xref ref-type="table-fn" rid="table2fn5">e</xref></sup> settings</td></tr><tr><td align="left" valign="top">Restrict or redesign</td><td align="left" valign="top">High risk or low EAS (no intrinsic deception)</td><td align="left" valign="top">Scope narrowing, 
stronger transparency and safety, and piloting then rescoring</td><td align="left" valign="top">Free-text explainer without clear disclosure</td></tr><tr><td align="left" valign="top">Prohibit</td><td align="left" valign="top">Extreme risk or low EAS with intrinsic deception or manipulation</td><td align="left" valign="top">Takedown and platform- or legal-level remedies</td><td align="left" valign="top">Impersonated physician endorsements</td></tr></tbody></table><table-wrap-foot><fn id="table2fn1"><p><sup>a</sup>EAS: ethical alignment score.</p></fn><fn id="table2fn2"><p><sup>b</sup>Minimum controls specify documentation, human oversight, incident reporting, time-limited approval, and rereview triggers as applicable.</p></fn><fn id="table2fn3"><p><sup>c</sup>Examples illustrate typical placements.</p></fn><fn id="table2fn4"><p><sup>d</sup>QA: quality assurance.</p></fn><fn id="table2fn5"><p><sup>e</sup>IRB: institutional review board.</p></fn></table-wrap-foot></table-wrap></sec><sec id="s4"><title>Use Case Mapping: From Dossier Evidence to Monitoring Plans</title><p>The following use cases illustrate a repeatable mapping from dossier evidence to residual risk or EAS scoring, a deployment disposition, and a minimal monitoring plan with explicit triggers for rereview.</p><sec id="s4-1"><title>Case 1: Patient Education Avatar (Transparent, Vetted, and Multilingual)</title><p>A hospital deploys short, tailored videos explaining procedures and postoperative care through a clearly disclosed AI avatar delivered via an authenticated patient portal. Scripts are clinician vetted, linguistically and culturally adapted, and version controlled. The portal supports replay, adjustable playback speed, and an easy pathway to request human follow-up or report confusion. 
Residual risk is low: under constrained scripts plus clinical review and secure distribution, misinformation is rare or unlikely and typically minor (eg, transient confusion or extra contact), and privacy exposure is limited because the content is generic. Ethical alignment would likely be high (autonomy=2, beneficence=2, nonmaleficence=2, and justice=2): disclosure and an opt out support autonomy, measurable gains in comprehension and reduced avoidable follow-up burden support beneficence, constrained generation and review reduce foreseeable harms, and multilingual access advances justice. The resulting disposition is to encourage, with routine quality assurance and a time-bounded review cadence plus rereview triggers for guideline changes, prompt template updates, channel changes, or stratified performance gaps across language and health literacy groups [<xref ref-type="bibr" rid="ref41">41</xref>,<xref ref-type="bibr" rid="ref42">42</xref>].</p></sec><sec id="s4-2"><title>Case 2: &#x201C;Deepfake Therapy&#x201D; for Grief (Therapist Led, Protocolized, and Time Limited)</title><p>Under IRB-approved protocols, a psychotherapist offers a time-bounded intervention in which a synthetic likeness of a deceased relative delivers scripted messages to facilitate goodbye rituals. A research or specialist setting matters because it enables structured screening, standardized outcome capture, adverse event reporting, and enforceable stop rules. Residual risk is moderate to high: even with careful preparation, clinically meaningful psychological harms may occur (severity level: major), and the likelihood may be possible or likely in vulnerable subgroups. 
Ethical alignment would likely be medium (autonomy=1, beneficence=2, nonmaleficence=1, and justice=1): beneficence may be credible for selected patients; autonomy depends on robust consent that frames the video as a simulation and checks understanding; nonmaleficence relies on screening, therapist presence, predefined discontinuation criteria, and escalation pathways; and justice requires equitable access criteria and avoidance of coercive commercialization. The resulting disposition is to permit with oversight only in research or specialist settings, with predefined outcome measures (including follow-up windows), incident logging, and immediate cessation upon adverse reactions; approvals should be time limited, with rereview triggers tied to protocol deviations, adverse events, or material changes to the model or pipeline [<xref ref-type="bibr" rid="ref5">5</xref>,<xref ref-type="bibr" rid="ref43">43</xref>,<xref ref-type="bibr" rid="ref44">44</xref>].</p></sec><sec id="s4-3"><title>Case 3: Impersonated Physician Endorsements (No Consent and Public Dissemination)</title><p>A synthetic clone of a prominent clinician appears on social media platforms to promote unverified health products or claims. Residual risk is extreme: harm is likely because the video exploits identity-based trust and can divert patients from evidence-based care; severity level is major to catastrophic if it prompts medication changes, delays appropriate care, or amplifies misinformation at scale. Ethical alignment would score low across all 4 principles: deception negates autonomy; benefits are not patient centered; harms are foreseeable and unmitigated; and the practice exploits vulnerable audiences, undermining justice and public trust. The resulting disposition is to prohibit. 
Rapid takedown and reporting workflows, provenance checks, public clarification through verified institutional channels, and legal remedies are warranted; organizations should precommit to a zero-tolerance policy for unauthorized likeness use and log incidents to strengthen postmarket monitoring and prevention [<xref ref-type="bibr" rid="ref44">44</xref>-<xref ref-type="bibr" rid="ref46">46</xref>].</p></sec></sec><sec id="s5" sec-type="discussion"><title>Discussion</title><sec id="s5-1"><title>Implications for Digital Health Implementation</title><p>Our risk-and-ethics matrix translates life cycle AI governance expectations into a practical format for clinical decision-makers by pairing probabilistic risk appraisal with principled ethics in a way that is explicit, documentable, and revisitable. The distinctive governance value of the risk-and-ethics matrix is that it bridges the gap between high-level regulatory expectations and local operational decisions. Rather than offering another abstract set of principles, it enables institutions to classify a specific use case; document the rationale for approval or restriction; assign minimum controls; and connect deployment to monitoring, incident response, and change control over time. This orientation is consistent with major governance frameworks that emphasize postdeployment monitoring, mechanisms for capturing user input, incident response, and change management as core components of responsible AI use in real-world settings [<xref ref-type="bibr" rid="ref47">47</xref>]. We localize these expectations to patient-facing generative AI video by grounding &#x201C;risk tiering&#x201D; in concrete clinical harm scenarios (eg, misinformation that changes care, identity misuse, psychological triggering content, and equity harms) and by converting principlism&#x2014;autonomy, beneficence, nonmaleficence, and justice&#x2014;into action-guiding criteria through an EAS. 
Together, the 2 axes yield implementable dispositions&#x2014;encourage, permit with oversight, restrict or redesign, and prohibit&#x2014;whose entry rules can be recorded, audited, and defended as part of evaluating real-world clinical performance.</p><p>Beyond classification, the matrix provides a shared structure for interdisciplinary deliberation and practical redesign. The framework also functions as a design instrument: because dossiers trace why a proposal lands in a given disposition, developers are directed toward concrete modifications&#x2014;clear disclosure and comprehension-checked consent to strengthen autonomy; scope limitation, constrained generation, and human-in-the-loop review to reduce residual risk; and stratified monitoring to strengthen justice. In parallel, professional guidance on generative AI in medicine underscores the importance of preserving human oversight and aligning deployments with clinical workflows rather than displacing them&#x2014;an emphasis that is especially salient for persuasive patient-facing media. Conversely, the matrix clarifies &#x201C;red-line&#x201D; cases grounded in intrinsic deception (eg, impersonated clinician endorsements), supporting rapid takedown, institutional clarification through verified channels, and incident logging to strengthen future prevention.</p><p>National-level safeguards become especially important in cases in which institutional incentives favor speed, visibility, or monetization over careful review. In China, the relevant governance architecture is emerging but remains distributed across health sector, platform, and generative AI rules. Existing measures already provide building blocks, including synthetic content labeling and traceability, filing and disclosure for certain public-facing AI services, and health sector expectations for account registration and monitoring. 
The next step is to connect these elements through sector-specific requirements for disclosure, provenance, verified identity, monitoring, rapid correction or takedown, and enforceable accountability. Such national-level guardrails would not replace local review tools such as the risk-and-ethics matrix; rather, they would create the incentive environment in which institutional review is more likely to be performed seriously and consistently.</p><p>Feasibility in smaller or resource-limited settings will depend on tiered implementation rather than assuming the full model from the outset. The core minimum is not a large committee but a documented review pathway with clearly assigned accountability, use case documentation, explicit disclosure checks, and a mechanism for escalation when risk exceeds local expertise. For familiar low-risk educational videos, institutions with limited resources may use a simplified pathway involving a clinically accountable reviewer plus a second reviewer with operational or technical oversight supported by a standardized checklist and basic postdeployment signals such as complaints, follow-up contacts, and disclosure failures. The fuller triad panel model&#x2014;with dedicated bioethics input, richer analytics, formal incident reporting, and equity stratification&#x2014;should be viewed as an expanded configuration for higher-risk or more mature settings. Where dedicated bioethics expertise is unavailable, regional ethics consortia, shared review pools, tele-ethics consultation, or referral pathways to larger centers may provide a practical alternative, especially for first-in-class, psychologically sensitive, identity-based, or publicly disseminated use cases.</p></sec><sec id="s5-2"><title>Limitations and Validation Agenda</title><p>This approach has limitations. 
Early deployments will often rely on expert judgment because empirical evidence on the frequency and magnitude of novel harms remains sparse, and both residual risk estimates and EAS components can vary with local context [<xref ref-type="bibr" rid="ref48">48</xref>-<xref ref-type="bibr" rid="ref51">51</xref>]. To temper subjectivity, we emphasize independent prescoring, structured reconciliation, and written rationales anchored to observable artifacts (eg, disclosure language, evidence of benefit, escalation pathways, and provenance controls) [<xref ref-type="bibr" rid="ref52">52</xref>]. Because the EAS is intended as an operational rubric rather than a purely intuitive checklist, institutions should prospectively calibrate and evaluate its reliability before routine use. A practical approach would be to begin with a set of anchor case vignettes spanning low-, medium-, and high-alignment scenarios; require independent prescoring by panel members; conduct structured reconciliation with written reasons for disagreement; and repeat calibration periodically using shared cases across panels or sites [<xref ref-type="bibr" rid="ref53">53</xref>]. For the ratings of 0 to 2 assigned to each principle, agreement could be summarized using percentage agreement and weighted &#x03BA;, whereas the reliability of the summed EAS from 0 to 8 could be examined using an intraclass correlation coefficient. Institutions could additionally track agreement on EAS banding and on the final deployment disposition because these outputs are directly tied to governance decisions. Content validity could be strengthened through multidisciplinary expert review of whether the rubric adequately captures observable manifestations of autonomy, beneficence, nonmaleficence, and justice, with iterative refinement through pilot-testing or Delphi-style consensus procedures [<xref ref-type="bibr" rid="ref54">54</xref>]. 
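The per-principle agreement statistics suggested above (percentage agreement and weighted κ for paired 0 to 2 ratings) can be computed with the standard library alone. The sketch below uses quadratic weighting as one common choice (linear weighting, substituting an absolute difference, is equally defensible); the intraclass correlation coefficient for the summed 0 to 8 EAS is omitted because it requires a variance-components decomposition.

```python
from collections import Counter

def percent_agreement(r1, r2):
    """Share of items on which two raters give the identical rating."""
    if len(r1) != len(r2) or not r1:
        raise ValueError("ratings must be nonempty and paired")
    return sum(a == b for a, b in zip(r1, r2)) / len(r1)

def weighted_kappa(r1, r2, k=3):
    """Quadratic-weighted Cohen kappa for two raters on a k-category ordinal
    scale (categories 0..k-1; here the 0-2 principle ratings, so k=3)."""
    n = len(r1)
    if n != len(r2) or n == 0:
        raise ValueError("ratings must be nonempty and paired")
    pairs = Counter(zip(r1, r2))           # observed rating-pair counts
    m1, m2 = Counter(r1), Counter(r2)      # per-rater marginal counts

    def w(i, j):                           # quadratic disagreement weight
        return (i - j) ** 2 / (k - 1) ** 2

    observed = sum(w(i, j) * c for (i, j), c in pairs.items()) / n
    expected = sum(w(i, j) * m1[i] * m2[j] for i in range(k) for j in range(k)) / n ** 2
    # Degenerate case (both raters constant): no chance disagreement to correct for.
    return 1.0 if expected == 0 else 1.0 - observed / expected
```

Perfect agreement yields κ of exactly 1; discordant ratings concentrated at the scale extremes drive κ toward or below 0, which is why banding agreement and disposition agreement should be tracked alongside the per-principle statistic.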
Construct validity could then be explored by testing whether the EAS discriminates between use cases that are expected a priori to differ in ethical alignment (eg, transparent clinician-vetted education avatars vs impersonated clinician endorsements) [<xref ref-type="bibr" rid="ref55">55</xref>]. Importantly, classification should be treated as a living record&#x2014;versioned with date-stamped assumptions&#x2014;so that uncertainty becomes auditable and revisable rather than implicit [<xref ref-type="bibr" rid="ref56">56</xref>,<xref ref-type="bibr" rid="ref57">57</xref>]. The same use case may plausibly yield different profiles across clinical domains (eg, perioperative education vs mental health) or across resource settings; documenting contextual assumptions and applying prespecified domain modifiers can improve consistency without suppressing legitimate local variation. Finally, risk severity overlaps conceptually with nonmaleficence, and beneficence often embeds risk-benefit trade-offs [<xref ref-type="bibr" rid="ref58">58</xref>,<xref ref-type="bibr" rid="ref59">59</xref>]. Therefore, treating the axes as orthogonal is a usability heuristic&#x2014;intended to promote clarity and reproducibility&#x2014;rather than a claim of theoretical independence; cross-referencing during deliberation should be expected.</p></sec><sec id="s5-3"><title>Measurement, Monitoring, and Rereview Triggers</title><p>These caveats point to a concrete real-world evaluation agenda. Retrospective incident reviews and prospective pilots can stress test thresholds and calibrate rubrics using patient-centered and workflow-relevant end points (eg, comprehension or teach back performance, unplanned follow-up contacts attributable to content, complaint and incident rates, and stratified equity gaps). 
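One way to make the stratified equity end point operational is a subgroup comparison that declines to flag a gap when denominators are too sparse, deferring instead to descriptive tracking. In this sketch, the minimum denominator and gap threshold are placeholders for local prespecification, not recommended values.

```python
def equity_gap_flag(subgroup_counts, min_n=30, gap_threshold=0.10):
    """Compare an indicator rate (events / n) across subgroups.

    subgroup_counts: dict mapping subgroup label -> (events, n).
    Flags a gap only when every subgroup meets the prespecified minimum
    denominator `min_n`; otherwise returns a descriptive note so data
    collection continues without overinterpreting sparse strata.
    `min_n` and `gap_threshold` are placeholders for local calibration.
    """
    undersized = sorted(g for g, (_, n) in subgroup_counts.items() if n < min_n)
    if undersized:
        return False, {"undersized_subgroups": undersized}
    rates = {g: e / n for g, (e, n) in subgroup_counts.items()}
    spread = max(rates.values()) - min(rates.values())
    return spread > gap_threshold, {"rates": rates, "spread": round(spread, 3)}
```

For example, complaint rates of 3/100 versus 20/100 across two language groups would flag under these placeholder settings, whereas a 4/10 stratum would instead be reported as undersized pending further data.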
<xref ref-type="other" rid="box1">Textbox 1</xref> summarizes a core monitoring set, operational data sources, and example rereview triggers that can be embedded into routine digital health workflows. Governance-relevant measurement should be paired with life cycle mechanisms for capturing user input and adjudicating overrides and with explicit incident response and recovery pathways&#x2014;elements foregrounded in the National Institute of Standards and Technology&#x2019;s risk management guidance for deployed AI systems [<xref ref-type="bibr" rid="ref14">14</xref>]. Harmonization with provenance and disclosure standards can further improve auditability and reduce identity-related misuse, enabling versioned reassessments as models, prompts, distribution channels, and guardrails evolve. For systems that will undergo iterative change, &#x201C;change control&#x201D; should be planned rather than improvised; regulatory thinking around predetermined change control plans provides a useful template for specifying anticipated modifications and the evidence required to validate them over time. 
Comparative ethical analysis may also be valuable in edge cases where principlism and alternative lenses diverge; documenting such divergences can refine decision rules while preserving usability [<xref ref-type="bibr" rid="ref60">60</xref>,<xref ref-type="bibr" rid="ref61">61</xref>].</p><boxed-text id="box1"><title> Core monitoring metrics, operational data sources, and rereview triggers for patient-facing artificial intelligence&#x2013;generated video.</title><p><bold>Core metrics (minimum set)</bold></p><list list-type="bullet"><list-item><p>Comprehension: brief teach back&#x2013;style checks or short postview questions and user-reported confusion</p></list-item><list-item><p>Content-attributable follow-up burden: messages, calls, or telehealth follow-ups attributable to the video (eg, tagged reason codes or postview &#x201C;contact clinician&#x201D; clicks)</p></list-item><list-item><p>Incidents and complaints: safety reports and formal complaints linked to the content (misinformation, identity misuse, or privacy concerns)</p></list-item><list-item><p>Equity gaps: stratified differences in comprehension, follow-up burden, and incidents or complaints (eg, by language and health literacy proxies)</p></list-item><list-item><p>Provenance and trust: visibility of synthetic content disclosure, verification friction (eg, &#x201C;Is this my doctor?&#x201D; queries), and confirmed impersonation attempts</p></list-item></list><p><bold>Data sources (digital health operations)</bold></p><list list-type="bullet"><list-item><p>Patient portal and telehealth analytics (views, completion, and click-through to contact or obtain support)</p></list-item><list-item><p>Secure messaging and call center logs (tags and reason codes and clinician note templates for attribution)</p></list-item><list-item><p>Incident reporting and complaint systems (patient safety and privacy or security tickets)</p></list-item><list-item><p>Patient feedback channels (postview survey and &#x201C;report 
confusion/request human help&#x201D; buttons)</p></list-item><list-item><p>If distributed externally, verified channel monitoring (eg, takedown requests and platform reports)</p></list-item></list><p><bold>Rereview triggers (examples; adapt locally)</bold></p><list list-type="bullet"><list-item><p>Material change: model, prompt or template, script or guideline, channel, or language rollout updates</p></list-item><list-item><p>Signal excursion: sustained rise in follow-up burden or confusion reports vs baseline</p></list-item><list-item><p>Safety event: any major incident or clustered minor incidents attributable to the content</p></list-item><list-item><p>Equity flag: new or widening stratified gaps in end points</p></list-item><list-item><p>Provenance or identity event: confirmed impersonation, unauthorized likeness use, or disclosure failure</p></list-item></list></boxed-text><p>In early deployments, trigger thresholds should be treated as provisional rather than fixed regulatory cutoff points. A pragmatic starting approach is to establish a local baseline during an initial pilot or first complete rollout cycle and then update thresholds iteratively as more observations accumulate. During this early phase, institutions may use structured expert consensus or Delphi-style calibration among early adopters to define provisional trigger ranges, with wider tolerance bands and explicit uncertainty notes until local rates stabilize. Thresholds should ideally be interpreted against rolling local baselines rather than in isolation and should be re-estimated after material workflow, model, channel, or language changes. 
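The signal-excursion logic described above, in which a current indicator is compared against a rolling local baseline with a tolerance band, can be sketched as follows. The window sizes and tolerance multiplier are placeholders to be calibrated against local baselines, not recommended cutoffs.

```python
from statistics import mean, stdev

def excursion_flag(history, recent_window=4, baseline_window=12, tolerance=2.0):
    """Flag a sustained rise in a monitoring indicator.

    `history` is a chronological list of per-period counts or rates. The mean
    of the most recent `recent_window` periods is compared against the rolling
    baseline formed by the preceding `baseline_window` periods, with a
    tolerance band of `tolerance` baseline standard deviations. All three
    parameters are placeholders for local calibration.
    """
    if len(history) < recent_window + baseline_window:
        return False  # insufficient local baseline; keep collecting data
    baseline = history[-(recent_window + baseline_window):-recent_window]
    recent = history[-recent_window:]
    sd = stdev(baseline)
    if sd == 0:
        return mean(recent) > mean(baseline)  # flat baseline: any rise flags
    return mean(recent) > mean(baseline) + tolerance * sd
```

Because the comparison is always against the trailing window, re-baselining after a material workflow, model, channel, or language change reduces simply to truncating `history` at the change point.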
For equity analyses in particular, subgroup differences should not be overinterpreted when denominators are sparse; institutions should prespecify minimum subgroup counts before treating observed gaps as decision relevant and, where counts remain small, use descriptive flagging and continued data collection rather than strong statistical conclusions.</p><p>Here, &#x201C;rereview&#x201D; denotes the procedural reassessment triggered by monitoring signals, incidents, or material changes. &#x201C;Reclassification&#x201D; denotes a substantive change in risk tier, EAS band, or deployment disposition that may result from that reassessment. Because ethical priorities and risk tolerance vary across cultures and health systems, the framework is designed to be portable yet tunable [<xref ref-type="bibr" rid="ref62">62</xref>,<xref ref-type="bibr" rid="ref63">63</xref>]. To preserve comparability while allowing for local adaptation, the framework distinguishes core elements that should remain stable across sites from parameters that may be tuned to local context (<xref ref-type="table" rid="table3">Table 3</xref>).</p><table-wrap id="t3" position="float"><label>Table 3.</label><caption><p>Core standardized elements and locally tunable parameters of the risk-and-ethics matrix.</p></caption><table id="table3" frame="hsides" rules="groups"><thead><tr><td align="left" valign="bottom">Domain</td><td align="left" valign="bottom">Core elements (preserve across sites)<sup><xref ref-type="table-fn" rid="table3fn1">a</xref></sup></td><td align="left" valign="bottom">Tunable parameters (adapt locally)<sup><xref ref-type="table-fn" rid="table3fn2">b</xref></sup></td></tr></thead><tbody><tr><td align="left" valign="top">Ethical architecture</td><td align="left" valign="top">Four principles; 0-2 EAS<sup><xref ref-type="table-fn" rid="table3fn3">c</xref></sup> scoring logic; high, medium, and low EAS banding</td><td align="left" valign="top">Local case anchors, examples, and training 
materials</td></tr><tr><td align="left" valign="top">Residual risk assessment</td><td align="left" valign="top">Residual risk logic after mitigation; 4-level likelihood and severity structure</td><td align="left" valign="top">Default risk modifiers for high-vulnerability domains or populations</td></tr><tr><td align="left" valign="top">Deployment decisions</td><td align="left" valign="top">Four disposition categories (encourage, permit with oversight, restrict or redesign, and prohibit)</td><td align="left" valign="top">Local approving body, escalation route, and implementation authority</td></tr><tr><td align="left" valign="top">Documentation and oversight</td><td align="left" valign="top">Use case dossier, rationale, versioning, accountability, and human oversight</td><td align="left" valign="top">Dossier format; full triad panel vs simplified or shared review pathway</td></tr><tr><td align="left" valign="top">Disclosure and provenance</td><td align="left" valign="top">Minimum synthetic content disclosure, identity accuracy, and provenance expectations</td><td align="left" valign="top">Disclosure wording, language level, format, and content credential implementation</td></tr><tr><td align="left" valign="top">Monitoring and rereview</td><td align="left" valign="top">Monitoring, incident capture, equity review, and rereview trigger logic</td><td align="left" valign="top">Indicator thresholds, baseline methods, observation windows, subgroup definitions, and language coverage minimums</td></tr></tbody></table><table-wrap-foot><fn id="table3fn1"><p><sup>a</sup>Core elements are intended to preserve conceptual and operational comparability across sites.</p></fn><fn id="table3fn2"><p><sup>b</sup>Tunable parameters may be adapted to local workflow capacity, legal context, language needs, and risk tolerance provided that deviations are specified prospectively and documented consistently within the adopting institution.</p></fn><fn id="table3fn3"><p><sup>c</sup>EAS: 
ethical alignment score.</p></fn></table-wrap-foot></table-wrap><p>Crucially, classification must remain revisitable. Approvals in &#x201C;permit with oversight&#x201D; should be time limited and coupled with predefined rereview triggers (eg, model or prompt updates, distribution channel changes, sentinel incidents, or emerging inequities). Successful mitigation and accumulating evidence may move a use case toward &#x201C;encourage,&#x201D; whereas incidents or drift may push it toward &#x201C;restrict or redesign&#x201D; or &#x201C;prohibit.&#x201D; Embedding this cadence operationalizes continuous risk management and aligns oversight with the broader shift toward postdeployment monitoring systems for AI in real-world settings.</p><p>The risk-and-ethics matrix is intended to complement, not replace, formal regulatory review. Its role is to translate broad regulatory expectations into case-level institutional governance for patient-facing AI-generated video use. In cross-jurisdictional terms, the framework&#x2019;s disclosure, identity, and provenance elements align with transparency-oriented requirements; its human-in-the-loop review, escalation pathways, and authority to pause or withdraw deployments align with expectations around human oversight; its dossier, versioning, and documented rationale align with technical documentation and record-keeping expectations; and its monitoring indicators, incident triggers, and rereview cadence align with postmarket monitoring and iterative change control requirements. This means that, where a use case is already subject to sector-specific regulation&#x2014;such as the European Union AI Act&#x2019;s risk-based obligations for certain AI systems or medical device review pathways that incorporate predetermined change control planning&#x2014;the matrix is not a substitute for those legal processes. 
Rather, it provides a local operational layer that helps institutions implement, document, and monitor responsible use under real-world conditions.</p></sec><sec id="s5-4"><title>Conclusions and Next Steps for Implementation</title><p>AI-generated video is becoming a routine modality for patient-facing communication, making life cycle governance essential to safe real-world deployment [<xref ref-type="bibr" rid="ref1">1</xref>,<xref ref-type="bibr" rid="ref64">64</xref>,<xref ref-type="bibr" rid="ref65">65</xref>]. This Viewpoint presents an operational risk-and-ethics matrix that links residual clinical risk and ethical alignment to auditable, revisitable deployment decisions.</p><p>Future work should prioritize 3 deliverables. First, real-world evaluation should calibrate thresholds with patient-centered and workflow-relevant end points (eg, comprehension and teach back, content-attributable follow-up burden, incidents and complaints, and stratified equity gaps) and embed monitoring, incident response, and change management with time-limited approvals and rereview triggers. Second, provenance should be strengthened through clear disclosure and interoperable content credentials to reduce identity misuse and support verification. Third, change control should be planned rather than improvised: institutions should prespecify anticipated model, prompt, or pipeline updates and the evidence required to maintain assurance over time, drawing on the Food and Drug Administration&#x2019;s predetermined change control plan approach for AI-enabled systems.</p></sec></sec></body><back><ack><p>The authors used ChatGPT (OpenAI) for limited language translation and language editing during manuscript preparation. 
The authors reviewed and revised all such output and take full responsibility for the final manuscript.</p></ack><notes><sec><title>Funding</title><p>This research was supported by grants from the National Natural Science Foundation of China (82370724), the Qingdao Key Health Discipline Development Fund, and the Qingdao Key Clinical Specialty Elite Discipline project.</p></sec></notes><fn-group><fn fn-type="con"><p>WJ and YH conceived the presented idea and developed the theoretical framework for the risk-and-ethics evaluation scaffold for patient-facing generative artificial intelligence video. YH conducted the literature synthesis, drafted the manuscript, and designed the figures. WJ supervised the project, secured funding, and provided critical revision of the manuscript for important intellectual content. Both authors discussed the concepts and approved the final version of the manuscript.</p></fn><fn fn-type="conflict"><p>None declared.</p></fn></fn-group><glossary><title>Abbreviations</title><def-list><def-item><term id="abb1">AI</term><def><p>artificial intelligence</p></def></def-item><def-item><term id="abb2">EAS</term><def><p>ethical alignment score</p></def></def-item><def-item><term id="abb3">IRB</term><def><p>institutional review board</p></def></def-item></def-list></glossary><ref-list><title>References</title><ref id="ref1"><label>1</label><nlm-citation citation-type="journal"><person-group person-group-type="author"><name name-style="western"><surname>Netland</surname><given-names>T</given-names> </name><name name-style="western"><surname>von Dzengelevski</surname><given-names>O</given-names> </name><name name-style="western"><surname>Tesch</surname><given-names>K</given-names> </name><name name-style="western"><surname>Kwasnitschka</surname><given-names>D</given-names> </name></person-group><article-title>Comparing human-made and AI-generated teaching videos: an experimental study on learning effects</article-title><source>Comput 
Educ</source><year>2025</year><month>01</month><volume>224</volume><fpage>105164</fpage><pub-id pub-id-type="doi">10.1016/j.compedu.2024.105164</pub-id></nlm-citation></ref><ref id="ref2"><label>2</label><nlm-citation citation-type="journal"><person-group person-group-type="author"><name name-style="western"><surname>Lin</surname><given-names>J</given-names> </name><name name-style="western"><surname>Gu</surname><given-names>Y</given-names> </name><name name-style="western"><surname>Du</surname><given-names>G</given-names> </name><etal/></person-group><article-title>2D/3D image morphing technology from traditional to modern: a survey</article-title><source>Inf Fusion</source><year>2025</year><month>05</month><volume>117</volume><fpage>102913</fpage><pub-id pub-id-type="doi">10.1016/j.inffus.2024.102913</pub-id></nlm-citation></ref><ref id="ref3"><label>3</label><nlm-citation citation-type="journal"><person-group person-group-type="author"><name name-style="western"><surname>Rovati</surname><given-names>L</given-names> </name><name name-style="western"><surname>Gary</surname><given-names>PJ</given-names> </name><name name-style="western"><surname>Cubro</surname><given-names>E</given-names> </name><etal/></person-group><article-title>Development and usability testing of a patient digital twin for critical care education: a mixed methods study</article-title><source>Front Med (Lausanne)</source><year>2024</year><volume>10</volume><fpage>1336897</fpage><pub-id pub-id-type="doi">10.3389/fmed.2023.1336897</pub-id><pub-id pub-id-type="medline">38274456</pub-id></nlm-citation></ref><ref id="ref4"><label>4</label><nlm-citation citation-type="journal"><person-group person-group-type="author"><name name-style="western"><surname>Islam</surname><given-names>MZ</given-names> </name><name name-style="western"><surname>Wang</surname><given-names>G</given-names> </name></person-group><article-title>Avatars in the educational metaverse</article-title><source>Vis Comput Ind Biomed 
Art</source><year>2025</year><month>06</month><day>10</day><volume>8</volume><issue>1</issue><fpage>15</fpage><pub-id pub-id-type="doi">10.1186/s42492-025-00196-9</pub-id><pub-id pub-id-type="medline">40493326</pub-id></nlm-citation></ref><ref id="ref5"><label>5</label><nlm-citation citation-type="journal"><person-group person-group-type="author"><name name-style="western"><surname>Hoek</surname><given-names>S</given-names> </name><name name-style="western"><surname>Metselaar</surname><given-names>S</given-names> </name><name name-style="western"><surname>Ploem</surname><given-names>C</given-names> </name><name name-style="western"><surname>Bak</surname><given-names>M</given-names> </name></person-group><article-title>Promising for patients or deeply disturbing? The ethical and legal aspects of deepfake therapy</article-title><source>J Med Ethics</source><year>2025</year><month>07</month><day>7</day><volume>51</volume><issue>7</issue><fpage>481</fpage><lpage>486</lpage><pub-id pub-id-type="doi">10.1136/jme-2024-109985</pub-id><pub-id pub-id-type="medline">38981659</pub-id></nlm-citation></ref><ref id="ref6"><label>6</label><nlm-citation citation-type="journal"><person-group person-group-type="author"><name name-style="western"><surname>D&#x00FC;zg&#x00FC;n</surname><given-names>MV</given-names> </name><name name-style="western"><surname>&#x0130;&#x015F;ler</surname><given-names>A</given-names> </name><name name-style="western"><surname>Kazan</surname><given-names>MS</given-names> </name></person-group><article-title>Effect of avatar-based education program in hydrocephalus on ventriculoperitoneal shunt complications and parents&#x2019; knowledge and care skills: multicenter randomized controlled trials</article-title><source>Pediatr Neurol</source><year>2025</year><month>08</month><volume>169</volume><fpage>131</fpage><lpage>139</lpage><pub-id pub-id-type="doi">10.1016/j.pediatrneurol.2025.05.019</pub-id><pub-id 
pub-id-type="medline">40505427</pub-id></nlm-citation></ref><ref id="ref7"><label>7</label><nlm-citation citation-type="journal"><person-group person-group-type="author"><name name-style="western"><surname>Queiroz</surname><given-names>ABL</given-names> </name><name name-style="western"><surname>Sartori</surname><given-names>LRM</given-names> </name><name name-style="western"><surname>Lima</surname><given-names>G da S</given-names> </name><name name-style="western"><surname>Moraes</surname><given-names>RR</given-names> </name><name name-style="western"><surname>Lima</surname><given-names>GS</given-names> </name></person-group><article-title>Editorial policies for use and acknowledgment of artificial intelligence in dental journals</article-title><source>J Dent</source><year>2025</year><month>10</month><volume>161</volume><fpage>105923</fpage><pub-id pub-id-type="doi">10.1016/j.jdent.2025.105923</pub-id><pub-id pub-id-type="medline">40545230</pub-id></nlm-citation></ref><ref id="ref8"><label>8</label><nlm-citation citation-type="journal"><person-group person-group-type="author"><name name-style="western"><surname>Wekenborg</surname><given-names>MK</given-names> </name><name name-style="western"><surname>Gilbert</surname><given-names>S</given-names> </name><name name-style="western"><surname>Kather</surname><given-names>JN</given-names> </name></person-group><article-title>Examining human-AI interaction in real-world healthcare beyond the laboratory</article-title><source>NPJ Digit Med</source><year>2025</year><month>03</month><day>19</day><volume>8</volume><issue>1</issue><fpage>169</fpage><pub-id pub-id-type="doi">10.1038/s41746-025-01559-5</pub-id><pub-id pub-id-type="medline">40108434</pub-id></nlm-citation></ref><ref id="ref9"><label>9</label><nlm-citation citation-type="journal"><person-group person-group-type="author"><name name-style="western"><surname>Eutamene</surname><given-names>HB</given-names> </name><name 
name-style="western"><surname>Hamidouche</surname><given-names>W</given-names> </name><name name-style="western"><surname>Keita</surname><given-names>M</given-names> </name><name name-style="western"><surname>Taleb-Ahmed</surname><given-names>A</given-names> </name><name name-style="western"><surname>Hadid</surname><given-names>A</given-names> </name></person-group><article-title>Integrating perceptual quality analysis and caption-based features for robust deepfake video detection</article-title><source>Comput Electr Eng</source><year>2025</year><month>12</month><volume>128</volume><fpage>110699</fpage><pub-id pub-id-type="doi">10.1016/j.compeleceng.2025.110699</pub-id></nlm-citation></ref><ref id="ref10"><label>10</label><nlm-citation citation-type="journal"><person-group person-group-type="author"><name name-style="western"><surname>Benezeth</surname><given-names>Y</given-names> </name><name name-style="western"><surname>Krishnamoorthy</surname><given-names>D</given-names> </name><name name-style="western"><surname>Botina Monsalve</surname><given-names>DJ</given-names> </name><name name-style="western"><surname>Nakamura</surname><given-names>K</given-names> </name><name name-style="western"><surname>Gomez</surname><given-names>R</given-names> </name><name name-style="western"><surname>Mit&#x00E9;ran</surname><given-names>J</given-names> </name></person-group><article-title>Video-based heart rate estimation from challenging scenarios using synthetic video generation</article-title><source>Biomed Signal Process Control</source><year>2024</year><month>10</month><volume>96</volume><fpage>106598</fpage><pub-id pub-id-type="doi">10.1016/j.bspc.2024.106598</pub-id></nlm-citation></ref><ref id="ref11"><label>11</label><nlm-citation citation-type="journal"><person-group person-group-type="author"><name name-style="western"><surname>Zahedi</surname><given-names>FM</given-names> </name><name name-style="western"><surname>Zhao</surname><given-names>H</given-names> 
</name><name name-style="western"><surname>Sanvanson</surname><given-names>P</given-names> </name><name name-style="western"><surname>Walia</surname><given-names>N</given-names> </name><name name-style="western"><surname>Jain</surname><given-names>H</given-names> </name><name name-style="western"><surname>Shaker</surname><given-names>R</given-names> </name></person-group><article-title>My real avatar has a doctor appointment in the wepital: a system for persistent, efficient, and ubiquitous medical care</article-title><source>Inf Manag</source><year>2022</year><month>12</month><volume>59</volume><issue>8</issue><fpage>103706</fpage><pub-id pub-id-type="doi">10.1016/j.im.2022.103706</pub-id></nlm-citation></ref><ref id="ref12"><label>12</label><nlm-citation citation-type="journal"><person-group person-group-type="author"><name name-style="western"><surname>Sestino</surname><given-names>A</given-names> </name><name name-style="western"><surname>D&#x2019;Angelo</surname><given-names>A</given-names> </name></person-group><article-title>My doctor is an avatar! 
The effect of anthropomorphism and emotional receptivity on individuals&#x2019; intention to use digital-based healthcare services</article-title><source>Technol Forecast Soc Change</source><year>2023</year><month>06</month><volume>191</volume><fpage>122505</fpage><pub-id pub-id-type="doi">10.1016/j.techfore.2023.122505</pub-id></nlm-citation></ref><ref id="ref13"><label>13</label><nlm-citation citation-type="journal"><person-group person-group-type="author"><name name-style="western"><surname>Decety</surname><given-names>J</given-names> </name><name name-style="western"><surname>Li</surname><given-names>J</given-names> </name></person-group><article-title>The value of empathy in medical practice: a neurobehavioral perspective</article-title><source>Soc Sci Humanit Open</source><year>2025</year><volume>12</volume><fpage>101956</fpage><pub-id pub-id-type="doi">10.1016/j.ssaho.2025.101956</pub-id></nlm-citation></ref><ref id="ref14"><label>14</label><nlm-citation citation-type="report"><person-group person-group-type="author"><name name-style="western"><surname>Tabassi</surname><given-names>E</given-names> </name></person-group><article-title>Artificial intelligence risk management framework (AI RMF 1.0)</article-title><year>2023</year><access-date>2026-04-27</access-date><publisher-name>National Institute of Standards and Technology</publisher-name><comment><ext-link ext-link-type="uri" xlink:href="https://www.nist.gov/publications/artificial-intelligence-risk-management-framework-ai-rmf-10">https://www.nist.gov/publications/artificial-intelligence-risk-management-framework-ai-rmf-10</ext-link></comment></nlm-citation></ref><ref id="ref15"><label>15</label><nlm-citation citation-type="journal"><person-group person-group-type="author"><name name-style="western"><surname>Masayoshi</surname><given-names>K</given-names> </name><name name-style="western"><surname>Katada</surname><given-names>Y</given-names> </name><name 
name-style="western"><surname>Ozawa</surname><given-names>N</given-names> </name><name name-style="western"><surname>Ibuki</surname><given-names>M</given-names> </name><name name-style="western"><surname>Negishi</surname><given-names>K</given-names> </name><name name-style="western"><surname>Kurihara</surname><given-names>T</given-names> </name></person-group><article-title>Deep learning segmentation of non-perfusion area from color fundus images and AI-generated fluorescein angiography</article-title><source>Sci Rep</source><year>2024</year><month>05</month><day>11</day><volume>14</volume><issue>1</issue><fpage>10801</fpage><pub-id pub-id-type="doi">10.1038/s41598-024-61561-x</pub-id><pub-id pub-id-type="medline">38734727</pub-id></nlm-citation></ref><ref id="ref16"><label>16</label><nlm-citation citation-type="journal"><person-group person-group-type="author"><name name-style="western"><surname>Piot</surname><given-names>MA</given-names> </name><name name-style="western"><surname>Attoe</surname><given-names>C</given-names> </name><name name-style="western"><surname>Billon</surname><given-names>G</given-names> </name><name name-style="western"><surname>Cross</surname><given-names>S</given-names> </name><name name-style="western"><surname>Rethans</surname><given-names>JJ</given-names> </name><name name-style="western"><surname>Falissard</surname><given-names>B</given-names> </name></person-group><article-title>Simulation training in psychiatry for medical education: a review</article-title><source>Front Psychiatry</source><year>2021</year><volume>12</volume><fpage>658967</fpage><pub-id pub-id-type="doi">10.3389/fpsyt.2021.658967</pub-id><pub-id pub-id-type="medline">34093275</pub-id></nlm-citation></ref><ref id="ref17"><label>17</label><nlm-citation citation-type="journal"><person-group person-group-type="author"><name name-style="western"><surname>Zhang</surname><given-names>J</given-names> </name><name 
name-style="western"><surname>Ding</surname><given-names>Y</given-names> </name><name name-style="western"><surname>Zhang</surname><given-names>H</given-names> </name><etal/></person-group><article-title>An experimental study on embodiment forms and interaction modes in affective robots for anxiety relief and emotional connection</article-title><source>Int J Hum Comput Stud</source><year>2025</year><month>09</month><volume>203</volume><fpage>103584</fpage><pub-id pub-id-type="doi">10.1016/j.ijhcs.2025.103584</pub-id></nlm-citation></ref><ref id="ref18"><label>18</label><nlm-citation citation-type="journal"><person-group person-group-type="author"><name name-style="western"><surname>Soto-Sanfiel</surname><given-names>MT</given-names> </name><name name-style="western"><surname>Wu</surname><given-names>Q</given-names> </name></person-group><article-title>How audiences make sense of deepfake resurrections: a multilevel analysis of realism, ethics, and cultural meaning</article-title><source>Comput Hum Behav</source><year>2026</year><month>01</month><volume>174</volume><fpage>108822</fpage><pub-id pub-id-type="doi">10.1016/j.chb.2025.108822</pub-id></nlm-citation></ref><ref id="ref19"><label>19</label><nlm-citation citation-type="journal"><person-group person-group-type="author"><name name-style="western"><surname>Ma&#x2019;arif</surname><given-names>A</given-names> </name><name name-style="western"><surname>Maghfiroh</surname><given-names>H</given-names> </name><etal/></person-group><article-title>Social, legal, and ethical implications of AI-generated deepfake pornography on digital platforms: a systematic literature review</article-title><source>Soc Sci Humanit Open</source><year>2025</year><volume>12</volume><fpage>101882</fpage><pub-id pub-id-type="doi">10.1016/j.ssaho.2025.101882</pub-id></nlm-citation></ref><ref id="ref20"><label>20</label><nlm-citation citation-type="journal"><person-group person-group-type="author"><name 
name-style="western"><surname>Wagner</surname><given-names>R</given-names> </name><name name-style="western"><surname>Pardi</surname><given-names>G</given-names> </name><name name-style="western"><surname>M&#x00FC;ller</surname><given-names>J</given-names> </name><name name-style="western"><surname>Brucker</surname><given-names>B</given-names> </name><name name-style="western"><surname>Schwarzer</surname><given-names>S</given-names> </name><name name-style="western"><surname>Gerjets</surname><given-names>P</given-names> </name></person-group><article-title>Listening to scientists in immersive videos: how levels of immersion and points of view influence learning experiences</article-title><source>Comput Educ</source><year>2025</year><volume>234</volume><fpage>1</fpage><lpage>19</lpage><pub-id pub-id-type="doi">10.1016/j.compedu.2025.105326</pub-id></nlm-citation></ref><ref id="ref21"><label>21</label><nlm-citation citation-type="journal"><person-group person-group-type="author"><name name-style="western"><surname>Buji&#x0107;</surname><given-names>M</given-names> </name><name name-style="western"><surname>Salminen</surname><given-names>M</given-names> </name><name name-style="western"><surname>Hamari</surname><given-names>J</given-names> </name></person-group><article-title>Effects of immersive media on emotion and memory: an experiment comparing article, 360-video, and virtual reality</article-title><source>Int J Hum Comput Stud</source><year>2023</year><month>11</month><volume>179</volume><fpage>103118</fpage><pub-id pub-id-type="doi">10.1016/j.ijhcs.2023.103118</pub-id></nlm-citation></ref><ref id="ref22"><label>22</label><nlm-citation citation-type="journal"><person-group person-group-type="author"><name name-style="western"><surname>Fridman</surname><given-names>I</given-names> </name><name name-style="western"><surname>Bylund</surname><given-names>CL</given-names> </name><name name-style="western"><surname>Elston Lafata</surname><given-names>J</given-names> 
</name></person-group><article-title>Trust of social media content and risk of making misinformed decisions: survey of people affected by cancer and their caregivers</article-title><source>PEC Innov</source><year>2024</year><volume>5</volume><fpage>100332</fpage><pub-id pub-id-type="doi">10.1016/j.pecinn.2024.100332</pub-id><pub-id pub-id-type="medline">39323933</pub-id></nlm-citation></ref><ref id="ref23"><label>23</label><nlm-citation citation-type="journal"><person-group person-group-type="author"><name name-style="western"><surname>Diwanji</surname><given-names>VS</given-names> </name></person-group><article-title>Should your brand hire virtual influencers? How realism and gender presentation shape trust and purchase intentions</article-title><source>J Retail Consum Serv</source><year>2026</year><month>01</month><volume>88</volume><fpage>104491</fpage><pub-id pub-id-type="doi">10.1016/j.jretconser.2025.104491</pub-id></nlm-citation></ref><ref id="ref24"><label>24</label><nlm-citation citation-type="journal"><person-group person-group-type="author"><name name-style="western"><surname>Letafati</surname><given-names>M</given-names> </name><name name-style="western"><surname>Otoum</surname><given-names>S</given-names> </name></person-group><article-title>On the privacy and security for e-health services in the metaverse: an overview</article-title><source>Ad Hoc Netw</source><year>2023</year><month>11</month><volume>150</volume><fpage>103262</fpage><pub-id pub-id-type="doi">10.1016/j.adhoc.2023.103262</pub-id></nlm-citation></ref><ref id="ref25"><label>25</label><nlm-citation citation-type="journal"><person-group person-group-type="author"><name name-style="western"><surname>Pirhofer</surname><given-names>J</given-names> </name><name name-style="western"><surname>B&#x00FC;kki</surname><given-names>J</given-names> </name><name name-style="western"><surname>Vaismoradi</surname><given-names>M</given-names> </name><name 
name-style="western"><surname>Glarcher</surname><given-names>M</given-names> </name><name name-style="western"><surname>Paal</surname><given-names>P</given-names> </name></person-group><article-title>A qualitative exploration of cultural safety in nursing from the perspectives of Advanced Practice Nurses: meaning, barriers, and prospects</article-title><source>BMC Nurs</source><year>2022</year><month>07</month><day>4</day><volume>21</volume><issue>1</issue><fpage>178</fpage><pub-id pub-id-type="doi">10.1186/s12912-022-00960-9</pub-id><pub-id pub-id-type="medline">35787799</pub-id></nlm-citation></ref><ref id="ref26"><label>26</label><nlm-citation citation-type="journal"><person-group person-group-type="author"><name name-style="western"><surname>Sterponi</surname><given-names>L</given-names> </name><name name-style="western"><surname>Fatigante</surname><given-names>M</given-names> </name><name name-style="western"><surname>Zucchermaglio</surname><given-names>C</given-names> </name><name name-style="western"><surname>Alby</surname><given-names>F</given-names> </name></person-group><article-title>Companions in immigrant oncology visits: uncovering social dynamics through the lens of Goffman&#x2019;s footing and Conversation Analysis</article-title><source>SSM Qual Res Health</source><year>2024</year><month>06</month><volume>5</volume><fpage>100432</fpage><pub-id pub-id-type="doi">10.1016/j.ssmqr.2024.100432</pub-id></nlm-citation></ref><ref id="ref27"><label>27</label><nlm-citation citation-type="journal"><person-group person-group-type="author"><name name-style="western"><surname>Krijger</surname><given-names>J</given-names> </name></person-group><article-title>What about justice and power imbalances? 
A relational approach to ethical risk assessments for AI</article-title><source>DISO</source><year>2024</year><volume>3</volume><fpage>56</fpage><pub-id pub-id-type="doi">10.1007/s44206-024-00139-6</pub-id></nlm-citation></ref><ref id="ref28"><label>28</label><nlm-citation citation-type="journal"><person-group person-group-type="author"><name name-style="western"><surname>Ploug</surname><given-names>T</given-names> </name><name name-style="western"><surname>J&#x00F8;rgensen</surname><given-names>RF</given-names> </name><name name-style="western"><surname>Motzfeldt</surname><given-names>HM</given-names> </name><name name-style="western"><surname>Ploug</surname><given-names>N</given-names> </name><name name-style="western"><surname>Holm</surname><given-names>S</given-names> </name></person-group><article-title>The need for patient rights in AI-driven healthcare - risk-based regulation is not enough</article-title><source>J R Soc Med</source><year>2025</year><month>08</month><volume>118</volume><issue>8</issue><fpage>248</fpage><lpage>252</lpage><pub-id pub-id-type="doi">10.1177/01410768251344707</pub-id><pub-id pub-id-type="medline">40562393</pub-id></nlm-citation></ref><ref id="ref29"><label>29</label><nlm-citation citation-type="journal"><person-group person-group-type="author"><name name-style="western"><surname>Baard</surname><given-names>P</given-names> </name><name name-style="western"><surname>Sandin</surname><given-names>P</given-names> </name></person-group><article-title>Principlism and citizen science: the possibilities and limitations of principlism for guiding responsible citizen science conduct</article-title><source>Res Ethics</source><year>2022</year><volume>18</volume><issue>4</issue><fpage>304</fpage><lpage>318</lpage><pub-id pub-id-type="doi">10.1177/17470161221116558</pub-id></nlm-citation></ref><ref id="ref30"><label>30</label><nlm-citation citation-type="journal"><person-group person-group-type="author"><name 
name-style="western"><surname>Clouser</surname><given-names>KD</given-names> </name><name name-style="western"><surname>Gert</surname><given-names>B</given-names> </name></person-group><article-title>A critique of principlism</article-title><source>J Med Philos</source><year>1990</year><month>04</month><volume>15</volume><issue>2</issue><fpage>219</fpage><lpage>236</lpage><pub-id pub-id-type="doi">10.1093/jmp/15.2.219</pub-id><pub-id pub-id-type="medline">2351895</pub-id></nlm-citation></ref><ref id="ref31"><label>31</label><nlm-citation citation-type="journal"><person-group person-group-type="author"><name name-style="western"><surname>Mittelstadt</surname><given-names>B</given-names> </name></person-group><article-title>Principles alone cannot guarantee ethical AI</article-title><source>Nat Mach Intell</source><year>2019</year><volume>1</volume><issue>11</issue><fpage>501</fpage><lpage>507</lpage><pub-id pub-id-type="doi">10.1038/s42256-019-0114-4</pub-id></nlm-citation></ref><ref id="ref32"><label>32</label><nlm-citation citation-type="journal"><person-group person-group-type="author"><name name-style="western"><surname>Whittaker</surname><given-names>L</given-names> </name><name name-style="western"><surname>Kietzmann</surname><given-names>J</given-names> </name><name name-style="western"><surname>Letheren</surname><given-names>K</given-names> </name><name name-style="western"><surname>Mulcahy</surname><given-names>R</given-names> </name><name name-style="western"><surname>Russell-Bennett</surname><given-names>R</given-names> </name></person-group><article-title>Brace yourself! 
Why managers should adopt a synthetic media incident response playbook in an age of falsity and synthetic media</article-title><source>Bus Horiz</source><year>2023</year><volume>66</volume><issue>2</issue><fpage>277</fpage><lpage>290</lpage><pub-id pub-id-type="doi">10.1016/j.bushor.2022.07.004</pub-id></nlm-citation></ref><ref id="ref33"><label>33</label><nlm-citation citation-type="book"><person-group person-group-type="author"><name name-style="western"><surname>Mylrea</surname><given-names>M</given-names> </name></person-group><person-group person-group-type="editor"><name name-style="western"><surname>Lawless</surname><given-names>W</given-names> </name><name name-style="western"><surname>Mittu</surname><given-names>R</given-names> </name><name name-style="western"><surname>Sofge</surname><given-names>D</given-names> </name><name name-style="western"><surname>Fouad</surname><given-names>H</given-names> </name></person-group><article-title>The generative AI weapon of mass destruction: evolving disinformation threats, vulnerabilities, and mitigation frameworks</article-title><source>Interdependent Human-Machine Teams: The Path to Autonomy</source><year>2024</year><publisher-name>Academic Press</publisher-name><fpage>315</fpage><lpage>347</lpage><pub-id pub-id-type="doi">10.1016/B978-0-443-29246-0.00007-9</pub-id></nlm-citation></ref><ref id="ref34"><label>34</label><nlm-citation citation-type="journal"><person-group person-group-type="author"><name name-style="western"><surname>Omrani</surname><given-names>N</given-names> </name><name name-style="western"><surname>Rivieccio</surname><given-names>G</given-names> </name><name name-style="western"><surname>Fiore</surname><given-names>U</given-names> </name><name name-style="western"><surname>Schiavone</surname><given-names>F</given-names> </name><name name-style="western"><surname>Agreda</surname><given-names>SG</given-names> </name></person-group><article-title>To trust or not to trust? 
An assessment of trust in AI-based systems: concerns, ethics and contexts</article-title><source>Technol Forecast Soc Change</source><year>2022</year><month>08</month><volume>181</volume><issue>2</issue><fpage>121763</fpage><pub-id pub-id-type="doi">10.1016/j.techfore.2022.121763</pub-id></nlm-citation></ref><ref id="ref35"><label>35</label><nlm-citation citation-type="journal"><person-group person-group-type="author"><name name-style="western"><surname>Liao</surname><given-names>SM</given-names> </name><name name-style="western"><surname>Haykel</surname><given-names>I</given-names> </name><name name-style="western"><surname>Cheung</surname><given-names>K</given-names> </name><name name-style="western"><surname>Matalon</surname><given-names>T</given-names> </name></person-group><article-title>Navigating the complexities of AI and digital governance: the 5W1H framework</article-title><source>J Responsible Technol</source><year>2025</year><month>09</month><volume>23</volume><fpage>100127</fpage><pub-id pub-id-type="doi">10.1016/j.jrt.2025.100127</pub-id></nlm-citation></ref><ref id="ref36"><label>36</label><nlm-citation citation-type="journal"><person-group person-group-type="author"><name name-style="western"><surname>Braga</surname><given-names>CM</given-names> </name><name name-style="western"><surname>Serrano</surname><given-names>MA</given-names> </name><name name-style="western"><surname>Fern&#x00E1;ndez-Medina</surname><given-names>E</given-names> </name></person-group><article-title>Towards a methodology for ethical artificial intelligence system development: a necessary trustworthiness taxonomy</article-title><source>Expert Syst Appl</source><year>2025</year><month>08</month><volume>286</volume><fpage>128034</fpage><pub-id pub-id-type="doi">10.1016/j.eswa.2025.128034</pub-id></nlm-citation></ref><ref id="ref37"><label>37</label><nlm-citation citation-type="journal"><person-group person-group-type="author"><name 
name-style="western"><surname>Pettersson</surname><given-names>M</given-names> </name><name name-style="western"><surname>Hedstr&#x00F6;m</surname><given-names>M</given-names> </name><name name-style="western"><surname>H&#x00F6;glund</surname><given-names>AT</given-names> </name></person-group><article-title>The ethics of DNR-decisions in oncology and hematology care: a qualitative study</article-title><source>BMC Med Ethics</source><year>2020</year><month>07</month><day>31</day><volume>21</volume><issue>1</issue><fpage>66</fpage><pub-id pub-id-type="doi">10.1186/s12910-020-00508-z</pub-id><pub-id pub-id-type="medline">32736556</pub-id></nlm-citation></ref><ref id="ref38"><label>38</label><nlm-citation citation-type="book"><person-group person-group-type="author"><name name-style="western"><surname>Lanerolle</surname><given-names>G</given-names> </name><name name-style="western"><surname>Roberts</surname><given-names>ES</given-names> </name><name name-style="western"><surname>Haroon</surname><given-names>A</given-names> </name><name name-style="western"><surname>Shetty</surname><given-names>A</given-names> </name></person-group><article-title>Introduction</article-title><source>Quality Assurance Management: A Comprehensive Overview of Real-World Applications for High Risk Specialties</source><year>2024</year><publisher-name>Elsevier Science</publisher-name><fpage>1</fpage><lpage>21</lpage></nlm-citation></ref><ref id="ref39"><label>39</label><nlm-citation citation-type="journal"><person-group person-group-type="author"><name name-style="western"><surname>Li</surname><given-names>S</given-names> </name><name name-style="western"><surname>Wang</surname><given-names>Z</given-names> </name><name name-style="western"><surname>Shang</surname><given-names>Y</given-names> </name><etal/></person-group><article-title>Developing federated time-to-event scores using heterogeneous real-world survival data</article-title><source>Comput Biol 
Med</source><year>2025</year><month>10</month><volume>197</volume><issue>Pt B</issue><fpage>111084</fpage><pub-id pub-id-type="doi">10.1016/j.compbiomed.2025.111084</pub-id><pub-id pub-id-type="medline">40976210</pub-id></nlm-citation></ref><ref id="ref40"><label>40</label><nlm-citation citation-type="journal"><person-group person-group-type="author"><name name-style="western"><surname>Patriarca</surname><given-names>R</given-names> </name><name name-style="western"><surname>Falegnami</surname><given-names>A</given-names> </name><name name-style="western"><surname>Costantino</surname><given-names>F</given-names> </name><name name-style="western"><surname>Di Gravio</surname><given-names>G</given-names> </name><name name-style="western"><surname>De Nicola</surname><given-names>A</given-names> </name><name name-style="western"><surname>Villani</surname><given-names>ML</given-names> </name></person-group><article-title>WAx: an integrated conceptual framework for the analysis of cyber-socio-technical systems</article-title><source>Saf Sci</source><year>2021</year><month>04</month><volume>136</volume><fpage>105142</fpage><pub-id pub-id-type="doi">10.1016/j.ssci.2020.105142</pub-id></nlm-citation></ref><ref id="ref41"><label>41</label><nlm-citation citation-type="journal"><person-group person-group-type="author"><name name-style="western"><surname>Badawy</surname><given-names>MK</given-names> </name><name name-style="western"><surname>Khamwan</surname><given-names>K</given-names> </name><name name-style="western"><surname>Carrion</surname><given-names>D</given-names> </name></person-group><article-title>A pilot study of generative AI video for patient communication in radiology and nuclear medicine</article-title><source>Health Technol</source><year>2025</year><month>03</month><volume>15</volume><fpage>395</fpage><lpage>404</lpage><pub-id pub-id-type="doi">10.1007/s12553-025-00945-z</pub-id></nlm-citation></ref><ref id="ref42"><label>42</label><nlm-citation 
citation-type="journal"><person-group person-group-type="author"><name name-style="western"><surname>Adeboye</surname><given-names>W</given-names> </name><name name-style="western"><surname>Tayal</surname><given-names>V</given-names> </name><name name-style="western"><surname>Odubanjo</surname><given-names>E</given-names> </name><etal/></person-group><article-title>Artificial intelligence in the delivery of patient care: avatar-generated videos for patient education post breast surgery</article-title><source>Eur J Surg Oncol</source><year>2024</year><month>05</month><volume>50</volume><fpage>108076</fpage><pub-id pub-id-type="doi">10.1016/j.ejso.2024.108076</pub-id></nlm-citation></ref><ref id="ref43"><label>43</label><nlm-citation citation-type="web"><article-title>Emotional documentary explores new compassionate possibilities of VR</article-title><source>Unreal Engine</source><year>2020</year><access-date>2025-10-17</access-date><comment><ext-link ext-link-type="uri" xlink:href="https://www.unrealengine.com/developer-interviews/emotional-documentary-explores-new-compassionate-possibilities-of-vr?lang=fr">https://www.unrealengine.com/developer-interviews/emotional-documentary-explores-new-compassionate-possibilities-of-vr?lang=fr</ext-link></comment></nlm-citation></ref><ref id="ref44"><label>44</label><nlm-citation citation-type="journal"><person-group person-group-type="author"><name name-style="western"><surname>Pizzoli</surname><given-names>SF</given-names> </name><name name-style="western"><surname>Monzani</surname><given-names>D</given-names> </name><name name-style="western"><surname>Vergani</surname><given-names>L</given-names> </name><name name-style="western"><surname>Sanchini</surname><given-names>V</given-names> </name><name name-style="western"><surname>Mazzocco</surname><given-names>K</given-names> </name></person-group><article-title>From virtual to real healing: a critical overview of the therapeutic use of virtual reality to cope with 
mourning</article-title><source>Curr Psychol</source><year>2023</year><volume>42</volume><issue>11</issue><fpage>8697</fpage><lpage>8704</lpage><pub-id pub-id-type="doi">10.1007/s12144-021-02158-9</pub-id><pub-id pub-id-type="medline">34429574</pub-id></nlm-citation></ref><ref id="ref45"><label>45</label><nlm-citation citation-type="journal"><person-group person-group-type="author"><name name-style="western"><surname>Liu</surname><given-names>T</given-names> </name><name name-style="western"><surname>Wang</surname><given-names>P</given-names> </name><name name-style="western"><surname>Pan</surname><given-names>D</given-names> </name><name name-style="western"><surname>Liu</surname><given-names>R</given-names> </name></person-group><article-title>Credibility of AI generated and human video doctors and the relationship to social media use</article-title><source>Front Public Health</source><year>2025</year><volume>13</volume><fpage>1559378</fpage><pub-id pub-id-type="doi">10.3389/fpubh.2025.1559378</pub-id></nlm-citation></ref><ref id="ref46"><label>46</label><nlm-citation citation-type="journal"><person-group person-group-type="author"><name name-style="western"><surname>Stokel-Walker</surname><given-names>C</given-names> </name></person-group><article-title>Deepfakes and doctors: how people are being fooled by social media scams</article-title><source>BMJ</source><year>2024</year><month>07</month><day>17</day><volume>386</volume><fpage>q1319</fpage><pub-id pub-id-type="doi">10.1136/bmj.q1319</pub-id><pub-id pub-id-type="medline">39019557</pub-id></nlm-citation></ref><ref id="ref47"><label>47</label><nlm-citation citation-type="journal"><person-group person-group-type="author"><name name-style="western"><surname>Nordling</surname><given-names>L</given-names> </name></person-group><article-title>Scientists are falling victim to deepfake AI video scams - here&#x2019;s how to fight back</article-title><source>Nature</source><year>2024</year><month>08</month><day>7</day><pub-id pub-id-type="doi">10.1038/d41586-024-02521-3</pub-id><pub-id pub-id-type="medline">39112581</pub-id></nlm-citation></ref><ref id="ref48"><label>48</label><nlm-citation citation-type="journal"><person-group person-group-type="author"><name name-style="western"><surname>Oehmen</surname><given-names>J</given-names> </name><name name-style="western"><surname>Locatelli</surname><given-names>G</given-names> </name><name name-style="western"><surname>Wied</surname><given-names>M</given-names> </name><name name-style="western"><surname>Willumsen</surname><given-names>P</given-names> </name></person-group><article-title>Risk, uncertainty, ignorance and myopia: their managerial implications for B2B firms</article-title><source>Ind Mark Manag</source><year>2020</year><month>07</month><volume>88</volume><fpage>330</fpage><lpage>338</lpage><pub-id pub-id-type="doi">10.1016/j.indmarman.2020.05.018</pub-id></nlm-citation></ref><ref id="ref49"><label>49</label><nlm-citation citation-type="journal"><person-group person-group-type="author"><name name-style="western"><surname>Singh</surname><given-names>S</given-names> </name><name name-style="western"><surname>Dhumane</surname><given-names>A</given-names> </name></person-group><article-title>Unmasking digital deceptions: an integrative review of deepfake detection, multimedia forensics, and cybersecurity challenges</article-title><source>MethodsX</source><year>2025</year><volume>15</volume><fpage>103632</fpage><pub-id pub-id-type="doi">10.1016/j.mex.2025.103632</pub-id><pub-id pub-id-type="medline">41080432</pub-id></nlm-citation></ref><ref id="ref50"><label>50</label><nlm-citation citation-type="journal"><person-group person-group-type="author"><name name-style="western"><surname>McAndrew</surname><given-names>T</given-names> </name><name name-style="western"><surname>Reich</surname><given-names>NG</given-names> 
</name></person-group><article-title>An expert judgment model to predict early stages of the COVID-19 pandemic in the United States</article-title><source>PLoS Comput Biol</source><year>2022</year><month>09</month><volume>18</volume><issue>9</issue><fpage>e1010485</fpage><pub-id pub-id-type="doi">10.1371/journal.pcbi.1010485</pub-id><pub-id pub-id-type="medline">36149916</pub-id></nlm-citation></ref><ref id="ref51"><label>51</label><nlm-citation citation-type="journal"><person-group person-group-type="author"><name name-style="western"><surname>Awodi</surname><given-names>NJ</given-names> </name><name name-style="western"><surname>Liu</surname><given-names>YK</given-names> </name><name name-style="western"><surname>Ayodeji</surname><given-names>A</given-names> </name><name name-style="western"><surname>Adibeli</surname><given-names>JO</given-names> </name></person-group><article-title>Expert judgement-based risk factor identification and analysis for an effective nuclear decommissioning risk assessment modeling</article-title><source>Prog Nucl Energy</source><year>2021</year><month>06</month><volume>136</volume><fpage>103733</fpage><pub-id pub-id-type="doi">10.1016/j.pnucene.2021.103733</pub-id></nlm-citation></ref><ref id="ref52"><label>52</label><nlm-citation citation-type="journal"><person-group person-group-type="author"><name name-style="western"><surname>Maltby</surname><given-names>KM</given-names> </name><name name-style="western"><surname>Howes</surname><given-names>E</given-names> </name><name name-style="western"><surname>Lincoln</surname><given-names>S</given-names> </name><etal/></person-group><article-title>Marine climate change risks to biodiversity and society in the ROPME Sea Area</article-title><source>Clim Risk Manag</source><year>2022</year><volume>35</volume><fpage>100411</fpage><pub-id pub-id-type="doi">10.1016/j.crm.2022.100411</pub-id></nlm-citation></ref><ref id="ref53"><label>53</label><nlm-citation 
citation-type="journal"><person-group person-group-type="author"><name name-style="western"><surname>Hallgren</surname><given-names>KA</given-names> </name></person-group><article-title>Computing inter-rater reliability for observational data: an overview and tutorial</article-title><source>Tutor Quant Methods Psychol</source><year>2012</year><volume>8</volume><issue>1</issue><fpage>23</fpage><lpage>34</lpage><pub-id pub-id-type="doi">10.20982/tqmp.08.1.p023</pub-id><pub-id pub-id-type="medline">22833776</pub-id></nlm-citation></ref><ref id="ref54"><label>54</label><nlm-citation citation-type="journal"><person-group person-group-type="author"><name name-style="western"><surname>Polit</surname><given-names>DF</given-names> </name><name name-style="western"><surname>Beck</surname><given-names>CT</given-names> </name></person-group><article-title>The content validity index: are you sure you know what&#x2019;s being reported? Critique and recommendations</article-title><source>Res Nurs Health</source><year>2006</year><month>10</month><volume>29</volume><issue>5</issue><fpage>489</fpage><lpage>497</lpage><pub-id pub-id-type="doi">10.1002/nur.20147</pub-id><pub-id pub-id-type="medline">16977646</pub-id></nlm-citation></ref><ref id="ref55"><label>55</label><nlm-citation citation-type="journal"><person-group person-group-type="author"><name name-style="western"><surname>Boateng</surname><given-names>GO</given-names> </name><name name-style="western"><surname>Neilands</surname><given-names>TB</given-names> </name><name name-style="western"><surname>Frongillo</surname><given-names>EA</given-names> </name><name name-style="western"><surname>Melgar-Qui&#x00F1;onez</surname><given-names>HR</given-names> </name><name name-style="western"><surname>Young</surname><given-names>SL</given-names> </name></person-group><article-title>Best practices for developing and validating scales for health, social, and behavioral research: a primer</article-title><source>Front Public 
Health</source><year>2018</year><volume>6</volume><fpage>149</fpage><pub-id pub-id-type="doi">10.3389/fpubh.2018.00149</pub-id><pub-id pub-id-type="medline">29942800</pub-id></nlm-citation></ref><ref id="ref56"><label>56</label><nlm-citation citation-type="journal"><person-group person-group-type="author"><name name-style="western"><surname>Haltaufderheide</surname><given-names>J</given-names> </name><name name-style="western"><surname>Ranisch</surname><given-names>R</given-names> </name></person-group><article-title>The ethics of ChatGPT in medicine and healthcare: a systematic review on Large Language Models (LLMs)</article-title><source>NPJ Digit Med</source><year>2024</year><month>07</month><day>8</day><volume>7</volume><fpage>183</fpage><pub-id pub-id-type="doi">10.1038/s41746-024-01157-x</pub-id><pub-id pub-id-type="medline">38977771</pub-id></nlm-citation></ref><ref id="ref57"><label>57</label><nlm-citation citation-type="journal"><person-group person-group-type="author"><name name-style="western"><surname>Lekens</surname><given-names>AL</given-names> </name><name name-style="western"><surname>Drageset</surname><given-names>S</given-names> </name><name name-style="western"><surname>Hansen</surname><given-names>BS</given-names> </name></person-group><article-title>Knowing how, arguing why: nurse anaesthetists&#x2019; experiences of nursing when caring for the surgical patient</article-title><source>BMC Nurs</source><year>2025</year><month>02</month><day>7</day><volume>24</volume><issue>1</issue><fpage>144</fpage><pub-id pub-id-type="doi">10.1186/s12912-025-02752-3</pub-id><pub-id pub-id-type="medline">39920699</pub-id></nlm-citation></ref><ref id="ref58"><label>58</label><nlm-citation citation-type="journal"><person-group person-group-type="author"><name name-style="western"><surname>Jansen</surname><given-names>SN</given-names> </name><name name-style="western"><surname>Kamphorst</surname><given-names>BA</given-names> </name><name 
name-style="western"><surname>Mulder</surname><given-names>BC</given-names> </name><etal/></person-group><article-title>Ethics of early detection of disease risk factors: a scoping review</article-title><source>BMC Med Ethics</source><year>2024</year><month>03</month><day>5</day><volume>25</volume><issue>1</issue><fpage>25</fpage><pub-id pub-id-type="doi">10.1186/s12910-024-01012-4</pub-id><pub-id pub-id-type="medline">38443930</pub-id></nlm-citation></ref><ref id="ref59"><label>59</label><nlm-citation citation-type="journal"><person-group person-group-type="author"><name name-style="western"><surname>Pathni</surname><given-names>RK</given-names> </name></person-group><article-title>Beyond algorithms: ethical implications of AI in healthcare</article-title><source>Med J Armed Forces India</source><year>2025</year><volume>81</volume><issue>6</issue><fpage>630</fpage><lpage>636</lpage><pub-id pub-id-type="doi">10.1016/j.mjafi.2024.10.014</pub-id><pub-id pub-id-type="medline">41268011</pub-id></nlm-citation></ref><ref id="ref60"><label>60</label><nlm-citation citation-type="book"><person-group person-group-type="author"><name name-style="western"><surname>Rauprich</surname><given-names>O</given-names> </name></person-group><person-group person-group-type="editor"><name name-style="western"><surname>Chadwick</surname><given-names>R</given-names> </name></person-group><article-title>Principlism</article-title><source>Encyclopedia of Applied Ethics</source><year>2012</year><publisher-name>Academic Press</publisher-name><fpage>590</fpage><lpage>598</lpage><pub-id pub-id-type="other">9780123739322</pub-id></nlm-citation></ref><ref id="ref61"><label>61</label><nlm-citation citation-type="journal"><person-group person-group-type="author"><name name-style="western"><surname>Bello</surname><given-names>P</given-names> </name><name name-style="western"><surname>Bridewell</surname><given-names>W</given-names> </name></person-group><article-title>Self-control on the path toward 
artificial moral agency</article-title><source>Cogn Syst Res</source><year>2025</year><month>01</month><volume>89</volume><fpage>101316</fpage><pub-id pub-id-type="doi">10.1016/j.cogsys.2024.101316</pub-id></nlm-citation></ref><ref id="ref62"><label>62</label><nlm-citation citation-type="journal"><person-group person-group-type="author"><name name-style="western"><surname>Lysaght</surname><given-names>T</given-names> </name><name name-style="western"><surname>Chan</surname><given-names>HY</given-names> </name><name name-style="western"><surname>Scheibner</surname><given-names>J</given-names> </name><name name-style="western"><surname>Toh</surname><given-names>HJ</given-names> </name><name name-style="western"><surname>Richards</surname><given-names>B</given-names> </name></person-group><article-title>An ethical code for collecting, using and transferring sensitive health data: outcomes of a modified Policy Delphi process in Singapore</article-title><source>BMC Med Ethics</source><year>2023</year><month>10</month><day>4</day><volume>24</volume><issue>1</issue><fpage>78</fpage><pub-id pub-id-type="doi">10.1186/s12910-023-00952-7</pub-id><pub-id pub-id-type="medline">37794387</pub-id></nlm-citation></ref><ref id="ref63"><label>63</label><nlm-citation citation-type="journal"><person-group person-group-type="author"><name name-style="western"><surname>Guenduez</surname><given-names>AA</given-names> </name><name name-style="western"><surname>Walker</surname><given-names>N</given-names> </name><name name-style="western"><surname>Demircioglu</surname><given-names>MA</given-names> </name></person-group><article-title>Digital ethics: global trends and divergent paths</article-title><source>Gov Inf Q</source><year>2025</year><month>09</month><volume>42</volume><issue>3</issue><fpage>102050</fpage><pub-id pub-id-type="doi">10.1016/j.giq.2025.102050</pub-id></nlm-citation></ref><ref id="ref64"><label>64</label><nlm-citation citation-type="journal"><person-group 
person-group-type="author"><name name-style="western"><surname>Vehi</surname><given-names>J</given-names> </name><name name-style="western"><surname>Mujahid</surname><given-names>O</given-names> </name><name name-style="western"><surname>Beneyto</surname><given-names>A</given-names> </name><name name-style="western"><surname>Contreras</surname><given-names>I</given-names> </name></person-group><article-title>Generative artificial intelligence in diabetes healthcare</article-title><source>iScience</source><year>2025</year><month>08</month><day>15</day><volume>28</volume><issue>8</issue><fpage>113051</fpage><pub-id pub-id-type="doi">10.1016/j.isci.2025.113051</pub-id><pub-id pub-id-type="medline">40703444</pub-id></nlm-citation></ref><ref id="ref65"><label>65</label><nlm-citation citation-type="journal"><person-group person-group-type="author"><name name-style="western"><surname>Crowe</surname><given-names>B</given-names> </name><name name-style="western"><surname>Shah</surname><given-names>S</given-names> </name><name name-style="western"><surname>Teng</surname><given-names>D</given-names> </name><etal/></person-group><article-title>Recommendations for clinicians, technologists, and healthcare organizations on the use of generative artificial intelligence in medicine: a position statement from the Society of General Internal Medicine</article-title><source>J Gen Intern Med</source><year>2025</year><month>02</month><volume>40</volume><issue>3</issue><fpage>694</fpage><lpage>702</lpage><pub-id pub-id-type="doi">10.1007/s11606-024-09102-0</pub-id><pub-id pub-id-type="medline">39531100</pub-id></nlm-citation></ref></ref-list><app-group><supplementary-material id="app1"><label>Multimedia Appendix 1</label><p>Operational tool pack (evaluator&#x2019;s checklist and monitoring triggers). This pack operationalizes the risk-ethics matrix for routine governance. Part A provides a 10- to 12-item evaluator&#x2019;s checklist. 
Part B lists monitoring indicators with example triggers. Crossing a trigger prompts rereview and reclassification per <xref ref-type="table" rid="table2">Table 2</xref> and <xref ref-type="fig" rid="figure2">Figure 2</xref>.</p><media xlink:href="jmir_v28i1e91940_app1.doc" xlink:title="DOC File, 314 KB"/></supplementary-material><supplementary-material id="app2"><label>Multimedia Appendix 2</label><p>Use case dossier template (versioned submission form). The dossier captures purpose and audience, generation pipeline, distribution channel, content and privacy, failure modes with mitigations, ethical alignment score (EAS) evidence (0-2 per principle), residual risk tier, governance ask, monitoring plan and thresholds, sign-offs, and change log. It aligns with <xref ref-type="table" rid="table1">Table 1</xref> (scales and EAS rubric), <xref ref-type="fig" rid="figure1">Figure 1</xref> (quadrant mapping), <xref ref-type="table" rid="table2">Table 2</xref> (entry rules and minimum controls), and <xref ref-type="fig" rid="figure2">Figure 2</xref> (workflow). Panels complete independent prescoring and reconciliation and record written rationales; approvals in yellow zones are time limited, with predefined triggers for rereview.</p><media xlink:href="jmir_v28i1e91940_app2.doc" xlink:title="DOC File, 236 KB"/></supplementary-material></app-group></back></article>