Published in Vol 28 (2026)

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/81628.
Integrating Generative AI Into Patient-Centered Clinical Decision Support: Viewpoint on Research and Practice Considerations

1Health Sciences Department, NORC at the University of Chicago, 1828 L Street NW, 9th Floor, Washington, DC, United States

2Patient Orator Inc, New York, NY, United States

3Informatics Review LLC, Lake Oswego, OR, United States

Corresponding Author:

Prashila Dullabh, MD


There is growing interest in understanding how generative artificial intelligence (GenAI) can support patients and caregivers in making informed health care decisions, known as patient-centered clinical decision support (PC CDS). In this viewpoint, we present example applications for GenAI-supported PC CDS for patients, caregivers, clinicians, and patient-clinician interactions and examine the opportunities, challenges, and potential solutions associated with these applications. We conducted a targeted document review of our work in the Agency for Healthcare Research and Quality’s Clinical Decision Support Innovation Collaborative focusing on GenAI-enabled PC CDS, supplemented by snowball sampling and targeted searches to identify additional applications. Findings were refined and validated through solicited feedback from a 20-member multidisciplinary expert panel. Through our work, we highlight six critical needs that must be addressed to fully realize GenAI’s potential in PC CDS: (1) engage and ensure representation of patients and caregivers in design and development; (2) build the science of effective PC CDS implementation to support patient engagement; (3) develop risk-based policies for when to use GenAI; (4) establish independent testing and vetting criteria; (5) periodically reassess to identify and address algorithmic drift and verify performance; and (6) establish policies to promote transparency and patient consent in the use of GenAI. Understanding the applications and their potential implications for health care quality is essential to further the beneficial, ethical, and safe development of GenAI-supported PC CDS.

J Med Internet Res 2026;28:e81628

doi:10.2196/81628



The clinical landscape is experiencing rapid growth in artificial intelligence (AI), especially generative AI (GenAI) systems that learn the underlying structure of their training data to create new content based on those learned patterns, such as new text, images, and data prompts [1]. This technology has sparked interest in its potential to integrate new data and analytics into patient-centered clinical decision support (PC CDS). PC CDS encompasses decision-making tools that significantly incorporate patient-centered factors across 4 key dimensions: knowledge (evidence based on comparative effectiveness research and research findings on patient-centered outcomes), data (information generated directly from patients, ie, patient-generated health data [PGHD], patient-reported outcomes [PROs], and/or preferences), delivery (engaging patients and caregivers across settings through patient portals, apps, and other digital tools), and use (facilitating shared decision-making [SDM], a process where patients, caregivers, and care teams share and discuss health information and patients’ values and preferences to reach mutually acceptable health-related decisions) [2]. Studies have suggested that GenAI has the potential to impact health care delivery, facilitate more personalized care, influence patient outcomes, and reshape the clinician-patient and clinician-caregiver relationship [3-5]. At the same time, researchers have identified barriers to realizing GenAI’s potential, including limited GenAI implementation frameworks that consider the full sociotechnical environment [6], variability in requirements for reporting and validating GenAI [7], and mixed perceptions about the utility and safety of the technology among clinicians and patients [8].
Additionally, while many articles have described GenAI’s short-term impact on clinicians and health care systems, such as relieving clinician workload and improving operational efficiency, to our knowledge, relatively few focus on the decision-making tools that support patient-centered care or on tools where the patient is the user [9-11]. Because these tools directly engage patients in understanding health information, managing their conditions, and participating in care decisions, they raise distinct expectations around health literacy, such as supporting patients’ ability to obtain, process, and understand health information [12]; self-management [13], such as strengthening patients’ problem-solving, decision-making, and action-planning skills; and SDM, such as enhancing patients’ knowledge, awareness, and ability to cope and engage in collaborative decisions with their care teams [1,14]. There is a need to understand the various ways and intricacies of leveraging GenAI to engage patients and caregivers in their care and how these tools can enhance patient-clinician communications and support the delivery of evidence-based care.

This viewpoint summarizes the novel applications of GenAI in PC CDS through illustrative use cases that can provide a better understanding of its potential among patients, caregivers, clinicians, and PC CDS developers. On the basis of these illustrative use cases, we identified common themes related to the benefits and challenges of GenAI-supported PC CDS tools on health care quality and delivery, with the purpose of acknowledging their advantages while mitigating their potential risks. Drawing from the challenges, we discuss ways to advance trustworthy GenAI-based PC CDS to chart a path forward for research and practice.


We used 3 methods to identify the use cases.

First, we conducted an in-depth review of 4 reports from the Agency for Healthcare Research and Quality’s Clinical Decision Support Innovation Collaborative that addressed AI applications, including a landscape assessment on using AI to scale PC CDS [15], a report on patient and caregiver perspectives on GenAI in PC CDS [16], and 2 assessments of AI-supported PC CDS tools [17,18].

Second, we used a snowball sampling approach to identify additional relevant literature from the reference lists and conducted targeted searches in the PubMed and Google Scholar databases. In total, we identified 53 peer-reviewed sources. From these sources, we abstracted the type of AI technology used, the specific use cases, and the intended end users of these systems.

Third, we solicited feedback from a 20-member Steering Committee including patients, clinicians, informaticians, researchers, electronic health record (EHR) developers, payers, and policymakers. After the research team (PD, CZ, NG, and AA) completed the targeted review analysis and developed a draft use case exhibit, the lead author presented the preliminary findings to the Steering Committee and invited feedback on additional GenAI-supported PC CDS use cases to consider, the categorization of use cases, the validation of the identified benefits and considerations, and recommendations regarding critical needs for the effective use of GenAI-supported PC CDS. Meeting minutes documented key points of agreement, which were subsequently incorporated by the research team.


Figure 1 depicts use cases for GenAI-supported PC CDS that operate across categories based on their primary users and functions. Before reaching end users, GenAI can streamline the technical development of PC CDS for developers and implementers by optimizing knowledge artifacts and reducing development time and cost [15,19]. For end users, patient- and caregiver-focused tools emphasize health and well-being, chronic condition self-management, and educational resources. Tools supporting patient-clinician interactions facilitate communication through conversational agents, symptom monitoring systems, and treatment personalization for patients, which are discussed through SDM with their clinician. Clinician-focused applications assist with information management, particularly for analyzing large volumes of PGHD, while also providing diagnostic support and risk prediction. The potential impact and clinical significance of these use cases will vary across specialties, care settings, and patient populations, and the figure is not intended to imply a hierarchy of importance among them.

Figure 1. Use cases of generative artificial intelligence in patient-centered clinical decision support (PC CDS) technology. PGHD: patient-generated health data.

Overview

The use cases for GenAI-supported PC CDS illuminate several crosscutting themes that highlight the benefits and challenges of implementing these systems in health care settings. The themes do not apply uniformly across all use cases but rather vary in relevance depending on factors such as the clinical context, the decisions being made, and the patient-clinician interactions involved. Some themes also naturally intersect. For example, challenges such as hallucinations and misinformation cut across multiple applications and mediate a range of GenAI’s benefits. In the following discussion, the examples specify the applicable use cases and raise challenges where they are highly relevant, illustrating how these themes can manifest across different applications.

Personalization and Precision

GenAI can support the delivery of personalized care that is responsive to a patient’s unique health profile. GenAI is equipped to rapidly process extensive amounts of patient-centered health information from a range of sources, such as patients’ EHRs, genomic data, medical devices, wearable technologies, and questionnaires assessing patients’ preferences and goals [20]. It can then leverage complex algorithms to generate personalized treatment advice and recommendations, facilitating SDM [21-25]. For example, an AI-powered remote monitoring system for diabetes can collect and analyze patients’ blood glucose levels, exercise levels, and eating habits to generate personalized meal plans and exercise routines for patients [26]. One study developed an AI-assisted Internet of Things wearable smartwatch prototype for older people to proactively detect and manage frailty through the collection of their physical activity data [27].

However, one of the key limitations of using GenAI to provide individualized health recommendations is its inconsistent reasoning capabilities and clinical judgment, particularly when it must synthesize information across long conversations, complex clinical scenarios, or high-volume data sources such as wearables. A systematic review found several issues with large language models (LLMs) such as fragile reasoning performance, diagnostic inaccuracies, and overly cautious clinical judgments across a range of use cases including treatment personalization, monitoring systems, communication support, and diagnostic support and risk prediction [28]. Others have raised concerns about personalized AI’s potential to limit exposure to critical information, fostering “echo chambers” that reinforce users’ specific beliefs [29].

Patient Engagement and Empowerment

Emerging use cases for GenAI center on its potential to empower patients through self-management and communication support. GenAI can make health information more understandable and equip patients with the information needed to take a more active role in their care. For example, health systems can use GenAI-powered chatbots or digital health assistants as patient-facing self-management resources and/or as communication support tools for clinicians to provide health information, answer questions, help assess symptoms, share medication reminders, and facilitate appointment scheduling [30,31].

GenAI has the potential to support SDM between patients and clinicians by providing data or educational resources [32]. One study explored perspectives on the use of GenAI in SDM for knee replacement surgery, finding that patients viewed GenAI as another source of information that could enhance their understanding of their risk profile, empowering them to make decisions based on their values and preferences [10]. Despite its potential, patients with less experience using technology or navigating health information from electronic sources (ie, limited digital health literacy) may find using these tools challenging, leading to a sense of disempowerment [16]. With GenAI in particular, emerging evidence suggests that patients may mistake the confident tone of model outputs for factual accuracy, increasing the likelihood that they follow guidance that is incomplete or inaccurate [33-35].

Quality of Care and Safety

GenAI shows promise in improving patient safety and quality of care. It can facilitate real-time physiological monitoring and prediction of patients’ conditions and send recommendations to patients or alerts to clinicians to support care coordination [36]. An ongoing study of a GenAI-supported PC CDS monitoring system for patients with asthma uses voice biomarker technology to recognize variations in the patient’s recorded voice and sends immediate alerts to the patient for treatment intervention. The digital tool calculates a Respiratory Symptoms Risk Score and allows for remote care coordination, as necessary [37]. GenAI can also enhance diagnostic accuracy and risk prediction based on patient health histories [38], unstructured clinical notes [39,40], PGHD [41], or medical imaging analysis [42]. In some cases, this can inform allocation of resources and treatments based on individual- and system-level data. For example, Mercy Healthcare System integrated the Chen Chemotherapy Model into their EHR to predict hospitalization risk from chemotherapy side effects in adult patients without leukemia. The texting platform prompts patients to report and rate their symptoms, then sends these PRO data to clinicians for review [43]. This allows clinicians to proactively manage symptoms before hospitalization becomes necessary. Furthermore, GenAI can summarize disparate information for clinicians to allow them more time for direct patient interactions, which can improve care quality [44]. With ambient listening technology, GenAI-supported tools convert recorded patient-clinician conversations into structured visit summary notes. These tools reduce the time clinicians spend on documentation during visits, enabling them to focus more attention on direct patient communication and interaction [45].

At the same time, overreliance on GenAI can pose risks to patient safety. One study found that GenAI models exhibited larger cognitive biases for medical decisions when compared with practicing clinicians, illustrating how LLMs could lead to diagnostic or treatment errors in complex medical cases [46]. Another study found that more than half of a sample of primary care physicians reviewing AI-generated patient portal messages containing errors did not identify and remedy all the errors, and 35% to 45% of physicians submitted erroneous messages entirely unedited, indicating the need for additional guardrails beyond physician oversight [47]. Additionally, liability questions arise when GenAI is involved in decision-making, especially for “black box” algorithms, where the internal decision-making process is hidden: users can only see the input data they provide and the resulting output, without insight into the reasoning behind medical predictions or decisions [48]. This makes it more difficult to determine who is accountable if there are adverse outcomes, or whether the health system should take responsibility if it does not properly implement, maintain, or train users on the GenAI-supported system [5,49].

Access to Care

GenAI-supported PC CDS has the potential to improve access to medical care. It can support translation service use cases by using speech recognition and generating speech output in any language, thus overcoming language barriers between patients and their clinicians [50]. It can also be used for self-management support use cases in settings where there is a shortage of health care professionals, such as in rural areas or when more immediate access to clinical expertise is needed (regardless of geographic setting) to assist patients with nonurgent medical conditions. For example, Spänig et al [51] demonstrated that an autonomous medical AI interface designed to identify patients at risk for type 2 diabetes mellitus could function as an initial point of guidance in settings with limited clinician availability by offering preliminary risk assessment and directing patients toward appropriate follow-up care.

Despite this potential, GenAI systems may produce uneven health outcomes if not carefully implemented. Models trained on limited or skewed datasets could lead to less accurate self-management recommendations or physiological monitoring predictions for different patient populations [52]. Biased GenAI outputs can result from a range of common data quality issues in training models, such as limited demographic information [53,54], inaccurate or missing data [55], and unrepresentative data that do not include adequate information on specific patient subpopulations [56,57]. While bias can be mitigated by preprocessing the data used to train AI models, grounding outputs with retrieval-augmented generation (coupling models with an external, reliable database or knowledge base) [58], selecting models that prioritize transparency, and postprocessing model output to correct for bias, these approaches require developer assessments of what constitutes fairness [59].
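As one illustration of the retrieval-augmented generation approach mentioned above, the sketch below grounds a model prompt in a small set of vetted passages. This is a minimal, hypothetical Python example: the knowledge base, lexical scoring function, and prompt wording are illustrative stand-ins for a curated clinical corpus and a production retriever, not an implementation used by any cited system.

```python
from collections import Counter
import math

# Hypothetical mini knowledge base of vetted guideline snippets; a real
# deployment would use a curated, clinically validated corpus.
KNOWLEDGE_BASE = [
    "Adults with type 2 diabetes should receive individualized glycemic targets.",
    "Blood pressure should be measured at every routine clinical visit.",
    "Annual foot examinations are recommended for patients with diabetes.",
]

def _term_vector(text: str) -> Counter:
    # Crude tokenization for illustration; production systems use embeddings.
    return Counter(text.lower().replace(".", "").split())

def _cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(
        sum(v * v for v in b.values())
    )
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank knowledge-base passages by lexical similarity to the query."""
    qv = _term_vector(query)
    ranked = sorted(KNOWLEDGE_BASE, key=lambda p: _cosine(qv, _term_vector(p)), reverse=True)
    return ranked[:k]

def grounded_prompt(patient_question: str) -> str:
    """Build a prompt that restricts the model to the retrieved, vetted passages."""
    passages = "\n".join(f"- {p}" for p in retrieve(patient_question))
    return (
        "Answer using ONLY the passages below; say 'not covered' otherwise.\n"
        f"Passages:\n{passages}\nQuestion: {patient_question}"
    )
```

The design point is that the generation step never sees ungrounded content: constraining the prompt to retrieved, vetted passages is what reduces the chance of biased or fabricated recommendations reaching the patient.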

Beyond its potential for bias and error, a wide range of factors limit GenAI’s integration into patients’ lives, which can influence access to care when a GenAI-supported PC CDS tool serves as the first point of guidance in patient-facing use cases (eg, chronic health management and treatment planning). There may be usability challenges if digital and health literacy levels, community-specific factors, and local context are not considered during the design of GenAI-supported tools [60,61]. Additionally, chief medical information officers and other executives are essential since GenAI-supported tools require technological and financial resources to deploy and train care teams on their effective use. This creates the potential to exacerbate the digital divide between patients in health care organizations with robust health information technology support systems versus patients in lower-resourced health care organizations, leaving some patients without access to GenAI-supported self-management, communication, or monitoring tools that can help them recognize when care is needed and/or remain engaged between visits.

Development of Clinical Decision Support Artifacts and Scaling

In addition to the clinician and patient and caregiver use cases, GenAI has the potential to improve the technology driving PC CDS. It can write code, map variables, create value sets [15], and generate realistic synthetic patient data that reduce the time needed for training and validation cycles [62]. GenAI can also suggest improvements to PC CDS logic [19] by analyzing alert overrides that contribute to clinician fatigue and personalizing alert criteria [63]. Furthermore, GenAI could solve a critical challenge in health care interoperability: the use of multiple incompatible information systems to record health data [64]. By using LLMs to quickly transform heterogeneous data into structured, standardized formats, it can facilitate the sharing of patient information across health systems. Despite this potential, developing interoperable GenAI systems is complex due to the quality of available patient data, the lack of implementation frameworks and standards for GenAI applications, and the cost of computational resources [65].
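To illustrate the kind of guardrail such a transformation pipeline needs, the sketch below validates an LLM's JSON output against a small standardized schema before the record is shared across systems. The field names are loosely inspired by a FHIR-style Observation but are purely illustrative assumptions; a real system would validate against actual FHIR profiles and terminology bindings.

```python
import json

# Hypothetical target schema, loosely modeled on a FHIR-style Observation;
# these field names are illustrative, not an actual FHIR profile.
REQUIRED_FIELDS = {"code": str, "value": float, "unit": str, "effective_date": str}

def normalize_observation(model_output: str) -> dict:
    """Validate and coerce an LLM's JSON output into a standardized record.

    Raises ValueError on missing fields or uncoercible types, so malformed
    model output is rejected rather than silently propagated downstream.
    """
    record = json.loads(model_output)
    normalized = {}
    for field, expected in REQUIRED_FIELDS.items():
        if field not in record:
            raise ValueError(f"missing field: {field}")
        try:
            normalized[field] = expected(record[field])
        except (TypeError, ValueError):
            raise ValueError(f"bad type for {field}: {record[field]!r}")
    return normalized

# Example: an LLM extracted a glucose reading from a free-text note.
raw = '{"code": "glucose", "value": "108", "unit": "mg/dL", "effective_date": "2025-01-15"}'
obs = normalize_observation(raw)  # "value" is coerced from "108" to 108.0
```

Keeping a deterministic validation layer between the nondeterministic model and the exchanged record is one way to make LLM-based data transformation safe enough for interoperability use.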


Overview

The successful integration of GenAI in PC CDS across the use cases requires health system leadership, researchers, developers, informaticians, and policymakers to address at least the 6 areas discussed in subsequent sections to ensure these tools deliver meaningful, safe, and unbiased benefits. As with the benefits and challenges, these areas of opportunity are interconnected in nature and vary in relevance based on the clinical context, the decisions being made, and the patient-clinician interactions involved in the use case. Several of these areas align with the National Academy of Medicine’s AI Code of Conduct framework for the development and application of responsible AI in health and medicine [66]. In addition, since PC CDS does not occur in isolation, its success depends on effective public health strategies that promote community health and well-being.

Engaging and Ensuring Representation of Patients and/or Caregivers in Design and Development

To capitalize on GenAI’s ability to support the PC CDS use cases illustrated in this manuscript, engaging patients and caregivers during the design and development process is essential [67]. As reflected in the National Academy of Medicine’s AI Code of Conduct, by involving end users in usability testing and other design activities—particularly those with varying language preferences, cultural norms, and digital and health literacy—developers can ensure GenAI-supported tools are tailored to account for the unique preferences, contexts, communication styles, and needs of individual patients [68,69]. Additionally, patient input can help ensure personalized treatment recommendations provided by GenAI are actionable for different populations [70] and accommodate the complexity of patients with multimorbidities. The degree to which patients and caregivers influence the design of GenAI-supported PC CDS tools will vary by use case, with greater influence typically possible in patient-facing applications and more limited involvement in clinician-facing functions such as diagnostic support or risk prediction.

Building the Science of Effective PC CDS Implementation to Support Patient Engagement

Advances in methods to promote the adoption of evidence-based practices are needed to address the barriers to patient engagement posed by patient-facing GenAI-supported PC CDS in particular. Research is needed on the multilevel factors that (1) influence the use of communication support tools such as chatbots and digital health assistants, including barriers such as inaccurate processing of patient inputs or a lack of personal connection [70]; (2) support their seamless integration into clinical workflows and patient lifeflows so that the information provided by the PC CDS does not undermine clinician communication; and (3) support GenAI’s ability to facilitate SDM through visualizations that contain the appropriate level of detail [71,72]. A randomized clinical trial for a GenAI-enabled clinical decision aid used some of the aforementioned strategies through the incorporation of PRO measures, patient education, and tailoring to patient preferences, resulting in improvements in patients’ decisional quality, SDM, and functional outcomes [73]. Although the research on effective PC CDS implementation is nascent, this evidence supports the importance of further exploring multilevel factors influencing patient engagement to improve uptake of GenAI-supported PC CDS.

Developing Risk-Based Policies for Deciding When GenAI Use Is Appropriate and What Level of Clinician Involvement Is Required

GenAI’s ability to integrate into such a wide range of PC CDS use cases highlights the importance of developing clear policies for its use. Risk-based guardrails should define when GenAI can operate autonomously in supporting patients’ knowledge and data capture (eg, by translating clinician-authored recommendations into plain language, generating visit summaries, or helping patients record symptoms and goals for minor issues) and where clinician review and sign-off are required before outputs are used to guide care (eg, for patients with multimorbidities who may face conflicting guidelines) [74]. Human-in-the-loop represents a promising approach for risk-based guardrails. Although its current use has largely focused on clinician-facing use cases such as verifying the accuracy and reliability of AI-generated discharge summaries or personalized recommendations, it also has potential to support patients with low digital health literacy in patient-facing use cases such as self-management support, communication support, and monitoring systems, where information is delivered digitally and patients may require additional support in interpretation. As seen in SDM research, patients with low digital health literacy benefit from clinician facilitation to interpret and apply complex health information, suggesting similar human-in-the-loop support may be beneficial for patient-facing GenAI-supported PC CDS tools [75]. While organizations bear primary responsibility for ensuring the safe and appropriate use of the GenAI tools they deploy, patients’ own GenAI literacy can also play a role in promoting appropriate use. Critical health AI literacy (the ability to critically evaluate GenAI outputs, recognize how health systems and structural factors may shape them, and use this awareness to make informed decisions) is an emerging concept that can complement organizational safeguards by helping patients engage more safely and effectively with GenAI PC CDS [76].
Overall, well-defined policies will support postdeployment monitoring and help mitigate risks such as overreliance on GenAI, unclear liability, and inappropriate delegation of clinical judgment to technology.
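A risk-based routing policy of the kind described above can be expressed as simple, auditable rules. The sketch below is a hypothetical Python illustration: the task names, risk tiers, and defaults are assumptions for demonstration only, and any real policy would be defined by the deploying organization's clinical governance process.

```python
from enum import Enum

class Action(Enum):
    AUTONOMOUS = "deliver directly to patient"
    CLINICIAN_REVIEW = "queue for clinician sign-off"

# Illustrative risk tiers only; actual tiers must be set with
# clinical governance input, not hard-coded by developers.
LOW_RISK_TASKS = {"plain_language_translation", "visit_summary", "symptom_logging"}
HIGH_RISK_TASKS = {"treatment_recommendation", "medication_change", "diagnosis"}

def route(task: str, multimorbidity: bool = False) -> Action:
    """Decide the level of clinician involvement for a GenAI output.

    Low-risk informational tasks may go straight to the patient; anything
    high-risk, unrecognized, or complicated by multimorbidity is held for
    clinician review (the safer default).
    """
    if task in LOW_RISK_TASKS and not multimorbidity:
        return Action.AUTONOMOUS
    return Action.CLINICIAN_REVIEW
```

Encoding the policy as explicit rules, rather than leaving routing decisions to the model itself, makes the guardrail auditable and supports the postdeployment monitoring discussed above.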

Establishing Independent Testing and Vetting Criteria to Ensure Safety and Accuracy

To ensure the quality and safety of GenAI-supported PC CDS, developers must rigorously evaluate and independently test these tools before deployment [74]. Unlike static decision support systems, GenAI-supported tools require initial validation not only for accuracy but also for fairness, interpretability, and robustness against biases [46]. Suggested approaches include conducting centralized testing through third parties, as well as requiring local oversight by adapting models such as the Clinical Laboratory Improvement Amendments for AI [77]. Establishing standardized oversight for GenAI helps build trust in its recommendations and ensure its deployment improves, not compromises, care quality.

Periodically Reassessing to Identify and Address Algorithmic Drift and Verify Performance

GenAI algorithms and the data they operate on evolve, necessitating reassessment by the organization implementing GenAI-supported PC CDS to identify and address algorithmic drift, where shifts in data patterns lead to degraded performance or unintended outputs [74]. Because GenAI-supported PC CDS is integrated into real clinical workflows, monitoring approaches and metrics need to remain feasible and usable by clinical and organizational leaders, not just data scientists and researchers [78]. Several early prototypes, such as recent work developing human-centered AI monitoring systems for health care, demonstrate the utility of evaluating models along 4 core dimensions: performance, process, outcomes, and fairness [78]. For GenAI-supported PC CDS, outcomes and fairness can be assessed using similar principles, that is, evaluating whether recommendations lead to desired clinical and safety outcomes overall and across patient subgroups. Performance and process evaluation are more complex for GenAI because outputs are nondeterministic and language-based, and validated measures and methods are still emerging. Monitoring must therefore address behaviors such as hallucinations [79], reasoning quality [80], sycophantic behavior [81], and input robustness or feature drift [82]. While some elements require human expert review, automated and semiautomated tools are emerging. Examples include LLM-as-judge for evaluating correctness and guideline adherence or assessing fidelity to expected answers or text-based summaries of clinical information [83]; and statistical monitors that track token-level patterns or vocabulary shifts indicating changes in model behavior [82]. 
Organizations should establish clear triggers for reassessment—such as major guideline updates, new evidence affecting care recommendations, demographic changes in the patient population, the passage of time, or early signals of performance deterioration—and establish a reassessment frequency based on the clinical risk and operational impact of the GenAI-supported application [84,85].
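As a concrete illustration of the statistical monitors mentioned above, the following sketch flags vocabulary shifts in model outputs by comparing token distributions between a baseline window and recent outputs. The distance measure (total variation) and the threshold are illustrative assumptions; a production monitor would combine multiple signals with clinically validated thresholds.

```python
from collections import Counter

def _token_dist(texts: list[str]) -> dict[str, float]:
    """Normalize token counts across a window of model outputs into frequencies."""
    counts = Counter(tok for text in texts for tok in text.lower().split())
    total = sum(counts.values())
    return {tok: c / total for tok, c in counts.items()}

def vocabulary_shift(baseline: list[str], recent: list[str]) -> float:
    """Total variation distance between baseline and recent token
    distributions, ranging from 0.0 (identical) to 1.0 (disjoint)."""
    p, q = _token_dist(baseline), _token_dist(recent)
    return 0.5 * sum(abs(p.get(t, 0.0) - q.get(t, 0.0)) for t in set(p) | set(q))

# Illustrative threshold; in practice, tune per application and risk level.
DRIFT_THRESHOLD = 0.3

def needs_reassessment(baseline: list[str], recent: list[str]) -> bool:
    """Trigger a human reassessment when the shift exceeds the threshold."""
    return vocabulary_shift(baseline, recent) > DRIFT_THRESHOLD
```

A monitor like this cannot judge clinical correctness on its own; it is an inexpensive early-warning signal meant to trigger the human and LLM-as-judge reviews described above.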

Establishing Policies to Promote Transparency and Patient Consent in the Use of GenAI

Patients need clear explanations of when and how GenAI is used in PC CDS tools that influence their care, along with the technology’s limitations [86]. The fair, appropriate, valid, effective, and safe principles [87] offer a blueprint for providing this transparency through specific, standardized technical and performance information. Furthermore, GenAI-supported PC CDS tools may require access to sensitive health information (eg, ambient listening tools and monitoring systems) with which patients may have varying levels of familiarity and comfort. Patients should have the right to opt out of the use of GenAI in their care and have full knowledge of who will access their data and how it will be used if they choose to participate. The development of patient-centered transparency and consent guidelines represents a critical area for advancing GenAI-supported PC CDS, including establishing which information is most relevant for patients to understand, determining how it should be effectively communicated, and considering when dynamic consent models may be warranted given the adaptive nature of GenAI systems [88].

Conclusions

The integration of GenAI into PC CDS holds promise for transforming health care delivery. GenAI can enhance the personalization and precision of care; empower patients through improved engagement and SDM; and support clinicians, patients, and caregivers in making more informed decisions. However, the successful implementation of GenAI in PC CDS requires consideration of several critical factors. These include addressing potential biases in GenAI algorithms, ensuring the transparency and explainability of GenAI-driven recommendations, and fostering trust among patients and clinicians. Additionally, it is essential to engage patients and caregivers in the design and development of GenAI-supported tools, develop clear policies for GenAI use, and establish rigorous testing and validation processes to ensure safety and accuracy. By addressing these challenges, the full potential of GenAI can begin to be harnessed to improve patient outcomes and advance the field of PC CDS.

Acknowledgments

The authors would like to acknowledge Edwin Lomotan, MD, and James Swiger, MBE, for their critical review and feedback on the manuscript. In addition, the authors acknowledge the following members of the Clinical Decision Support Innovation Collaborative (CDSiC) Steering Committee and CDSiC Innovation Center Planning Committee: Dr Joel Andress; Dr James Cimino; Deborah Collyar; Angela Dobes; Dr Jordan Everson; Dr Sonja Fulmer; Dr Robert Greenes; Dr Tonya Hongsermeier; Dr Kensaku Kawamoto; Dr Gilad Kuperman; Dr Brian Levy; Dr Dave Little; Dr David Lobach; Dr J Marc Overhage; Tiffany Peterson; Dr Gerasimos Petratos; Dr Jonathan Teich; Dr Lipika Samal; Wesley Sargent; Patrick Schoen; Dr Richard Schreiber; Dr Scott Weingarten; Michael Wittie; and Dr Haipeng (Mark) Zhang.

Funding

This work is based on research conducted by NORC at the University of Chicago under contract to the Agency for Healthcare Research and Quality (AHRQ), Rockville, Maryland (contract number 75Q80120D00018/75Q80121F32003). The views expressed in this paper are those of the authors and do not necessarily represent the official positions of the AHRQ or the US Department of Health and Human Services.

Data Availability

No datasets were generated or analyzed during this study.

Authors' Contributions

PD conceived of the presented idea and supervised the project. PD, CZ, NG, CP, and AA wrote the manuscript with input from all authors. DFS and KM provided critical feedback and helped shape the figure and manuscript.

Conflicts of Interest

None declared.

  1. Feuerriegel S, Hartmann J, Janiesch C, Zschech P. Generative AI. Bus Inf Syst Eng. Feb 2024;66:111-126. [CrossRef]
  2. Dullabh P, Sandberg SF, Heaney-Huls K, et al. Challenges and opportunities for advancing patient-centered clinical decision support: findings from a horizon scan. J Am Med Inform Assoc. Jun 14, 2022;29(7):1233-1243. [CrossRef] [Medline]
  3. Ouanes K, Farhah N. Effectiveness of artificial intelligence (AI) in clinical decision support systems and care delivery. J Med Syst. Aug 12, 2024;48(1):74. [CrossRef] [Medline]
  4. Alowais SA, Alghamdi SS, Alsuhebany N, et al. Revolutionizing healthcare: the role of artificial intelligence in clinical practice. BMC Med Educ. Sep 22, 2023;23(1):689. [CrossRef] [Medline]
  5. Braun M, Hummel P, Beck S, Dabrock P. Primer on an ethics of AI-based decision support systems in the clinic. J Med Ethics. Apr 3, 2020;47(12):e3. [CrossRef] [Medline]
  6. McCradden MD, Joshi S, Anderson JA, London AJ. A normative framework for artificial intelligence as a sociotechnical system in healthcare. Patterns (N Y). Nov 10, 2023;4(11):100864. [CrossRef] [Medline]
  7. Liaw ST, Liyanage H, Kuziemsky C, et al. Ethical use of electronic health record data and artificial intelligence: recommendations of the Primary Care Informatics Working Group of the International Medical Informatics Association. Yearb Med Inform. Aug 2020;29(1):51-57. [CrossRef] [Medline]
  8. Anjara SG, Janik A, Dunford-Stenger A, et al. Examining explainable clinical decision support systems with think aloud protocols. PLoS One. Sep 14, 2023;18(9):e0291443. [CrossRef] [Medline]
  9. Funer F, Schneider D, Heyen NB, et al. Impacts of clinical decision support systems on the relationship, communication, and shared decision-making between health care professionals and patients: multistakeholder interview study. J Med Internet Res. Aug 23, 2024;26:e55717. [CrossRef] [Medline]
  10. Gould DJ, Dowsey MM, Glanville-Hearst M, et al. Patients' views on AI for risk prediction in shared decision-making for knee replacement surgery: qualitative interview study. J Med Internet Res. Sep 18, 2023;25:e43632. [CrossRef] [Medline]
  11. Bjerring JC, Busch J. Artificial intelligence and patient-centered decision-making. Philos Technol. Jun 2021;34:349-371. [CrossRef]
  12. Sørensen K, Van den Broucke S, Fullam J, et al. Health literacy and public health: a systematic review and integration of definitions and models. BMC Public Health. Jan 25, 2012;12:80. [CrossRef] [Medline]
  13. Lorig KR, Holman H. Self-management education: history, definition, outcomes, and mechanisms. Ann Behav Med. Aug 2003;26(1):1-7. [CrossRef] [Medline]
  14. Elwyn G, Durand MA, Song J, et al. A three-talk model for shared decision making: multistage consultation process. BMJ. Nov 6, 2017;359:j4891. [CrossRef] [Medline]
  15. Kawamoto K, Ryan S, Heaney-Huls K, et al. Implementation, Adoption, and Scaling Workgroup: Landscape Assessment on the Use of Artificial Intelligence to Scale PC CDS. Agency for Healthcare Research and Quality; 2024.
  16. Desai P, Dobes A, Shah A, et al. Trust and Patient-Centeredness Workgroup: Patient and Caregiver Perspectives on Generative Artificial Intelligence in Patient-Centered Clinical Decision Support [Internet]. Agency for Healthcare Research and Quality (US); 2024.
  17. Zott C, Sittig DF, Gauthreaux N, et al. PAIGE Chatbot For Patient-Clinician Communication: Usability and Utility Assessment. Agency for Healthcare Research and Quality (US); 2024.
  18. Gauthreaux N, Zott C, Boxwala A, Dullabh PM, Sittig DF. Quartz App to Support Medication Adherence: Usability and Feasibility Assessment [Internet]. Agency for Healthcare Research and Quality (US); 2024.
  19. Liu S, Wright AP, Patterson BL, et al. Using AI-generated suggestions from ChatGPT to optimize clinical decision support. J Am Med Inform Assoc. Jun 20, 2023;30(7):1237-1245. [CrossRef] [Medline]
  20. Bohr A, Memarzadeh K. The rise of artificial intelligence in healthcare applications. In: Artificial Intelligence in Healthcare. Academic Press; 2020:25-60. [CrossRef]
  21. Love-Koh J, Peel A, Rejon-Parrilla JC, et al. The future of precision medicine: potential impacts for health technology assessment. Pharmacoeconomics. Dec 2018;36(12):1439-1451. [CrossRef] [Medline]
  22. Horiuchi D, Tatekawa H, Shimono T, et al. Accuracy of ChatGPT generated diagnosis from patient’s medical history and imaging findings in neuroradiology cases. Neuroradiology. Jan 2024;66(1):73-79. [CrossRef] [Medline]
  23. Fathima M, Moulana M. Revolutionizing breast cancer care: AI-enhanced diagnosis and patient history. Comput Methods Biomech Biomed Engin. Apr 2025;28(5):642-654. [CrossRef] [Medline]
  24. Ramkumar PN, Haeberle HS, Ramanathan D, et al. Remote patient monitoring using mobile health for total knee arthroplasty: validation of a wearable and machine learning-based surveillance platform. J Arthroplasty. Oct 2019;34(10):2253-2259. [CrossRef] [Medline]
  25. Fozoonmayeh D, Le HV, Wittfoth E, et al. A scalable smartwatch-based medication intake detection system using distributed machine learning. J Med Syst. Feb 28, 2020;44(4):76. [CrossRef] [Medline]
  26. Khalifa M, Albadawy M, Iqbal U. Advancing clinical decision support: the role of artificial intelligence across six domains. Comput Methods Programs Biomed Update. 2024;5:100142. Retracted in: Comput Methods Programs Biomed Update. 2025;8:100204. [CrossRef]
  27. Ciubotaru BI, Sasu GV, Goga N, et al. Prototype results of an Internet of Things system using wearables and artificial intelligence for the detection of frailty in elderly people. Appl Sci. 2023;13(15):8702. [CrossRef]
  28. Souza GD, Melo G, Schneider D. Tailoring treatment in the age of AI: a systematic review of large language models in personalized healthcare. Informatics. 2025;12(4):113. [CrossRef]
  29. Kostick-Quenet KM. A caution against customized AI in healthcare. NPJ Digit Med. Jan 7, 2025;8(1):13. [CrossRef] [Medline]
  30. Thorat V, Rao P, Joshi N, Talreja P, Shetty AR. Role of artificial intelligence (AI) in patient education and communication in dentistry. Cureus. May 7, 2024;16(5):e59799. [CrossRef] [Medline]
  31. Altamimi I, Altamimi A, Alhumimidi AS, Altamimi A, Temsah MH. Artificial intelligence (AI) chatbots in medicine: a supplement, not a substitute. Cureus. Jun 25, 2023;15(6):e40922. [CrossRef] [Medline]
  32. Abbasgholizadeh Rahimi S, Cwintal M, Huang Y, et al. Application of artificial intelligence in shared decision making: scoping review. JMIR Med Inform. Aug 9, 2022;10(8):e36199. [CrossRef] [Medline]
  33. Hart R. Chatbots can trigger a mental health crisis. What to know about “AI psychosis”. TIME. Aug 5, 2025. URL: https://time.com/7307589/ai-psychosis-chatgpt-mental-health/ [Accessed 2025-12-10]
  34. Dober C. Using generative AI for therapy might feel like a lifeline—but there’s danger in seeking certainty in a chatbot. The Guardian. Aug 3, 2025. URL: https://www.theguardian.com/commentisfree/2025/aug/03/generative-ai-chatbot-therapy-dangers-risks [Accessed 2025-12-10]
  35. Draelos RL, Afreen S, Blasko B, et al. Large language models provide unsafe answers to patient-posed medical questions. arXiv. Preprint posted online on Jul 25, 2025. [CrossRef]
  36. A novel patient-facing mobile platform to collect and implement patient-reported outcomes and voice biomarkers in underserved adult patients with asthma. Agency for Healthcare Research and Quality. URL: https://digital.ahrq.gov/ahrq-funded-projects/novel-patient-facing-mobile-platform-collect-and-implement-patient-reported [Accessed 2025-02-11]
  37. ASTHMAXcel voice mobile application to improve chronic disease management and patient outcomes. Agency for Healthcare Research and Quality. URL: https://web.archive.org/web/20251110170833/https://digital.ahrq.gov/program-overview/research-stories/asthmaxcel-voice-mobile-application-improve-chronic-disease [Accessed 2025-12-03]
  38. Feng T. Applications of artificial intelligence to diagnosis of neurodegenerative diseases. Stud Health Technol Inform. Nov 23, 2023;308:648-655. [CrossRef] [Medline]
  39. Jee J, Fong C, Pichotta K, et al. Automated real-world data integration improves cancer outcome prediction. Nature. Dec 2024;636(8043):728-736. [CrossRef] [Medline]
  40. Yang X, Chen A, PourNejatian N, et al. A large language model for electronic health records. NPJ Digit Med. Dec 26, 2022;5(1):194. [CrossRef] [Medline]
  41. Ye J, Woods D, Jordan N, Starren J. The role of artificial intelligence for the application of integrating electronic health records and patient-generated data in clinical decision support. AMIA Jt Summits Transl Sci Proc. May 31, 2024;2024:459-467. [Medline]
  42. Armato SG, Drukker K, Hadjiiski L. AI in medical imaging grand challenges: translation from competition to research benefit and patient care. Br J Radiol. Oct 2023;96(1150):20221152. [CrossRef] [Medline]
  43. DeFreitas M. Can AI help monitor chemotherapy side effects? HealthLeaders. Jan 8, 2024. URL: https://www.healthleadersmedia.com/technology/can-ai-help-monitor-chemotherapy-side-effects [Accessed 2025-12-09]
  44. Maleki Varnosfaderani S, Forouzanfar M. The role of AI in hospitals and clinics: transforming healthcare in the 21st century. Bioengineering (Basel). Mar 29, 2024;11(4):337. [CrossRef] [Medline]
  45. Eastwood B. Ambient listening in healthcare: dictation, documentation and AI. HealthTech. Aug 7, 2024. URL: https://healthtechmagazine.net/article/2024/08/ambient-listening-inhealthcare-perfcon [Accessed 2025-04-08]
  46. Wang J, Redelmeier DA. Cognitive biases and artificial intelligence. NEJM AI. Nov 27, 2024;1(12). [CrossRef]
  47. Biro JM, Handley JL, Malcolm McCurry J, et al. Opportunities and risks of artificial intelligence in patient portal messaging in primary care. NPJ Digit Med. Apr 24, 2025;8(1):222. [CrossRef] [Medline]
  48. Durán JM, Jongsma KR. Who is afraid of black box algorithms? On the epistemological and ethical basis of trust in medical AI. J Med Ethics. Mar 18, 2021:medethics-2020-106820. [CrossRef] [Medline]
  49. Maliha G, Gerke S, Cohen IG, Parikh RB. Artificial intelligence and liability in medicine: balancing safety and innovation. Milbank Q. Sep 2021;99(3):629-647. [CrossRef] [Medline]
  50. Kim H, Jin HM, Jung YB, You SC. Patient-friendly discharge summaries in Korea based on ChatGPT: software development and validation. J Korean Med Sci. Apr 29, 2024;39(16):e148. [CrossRef] [Medline]
  51. Spänig S, Emberger-Klein A, Sowa JP, Canbay A, Menrad K, Heider D. The virtual doctor: an interactive clinical-decision-support system based on deep learning for non-invasive prediction of diabetes. Artif Intell Med. Sep 2019;100:101706. [CrossRef] [Medline]
  52. Bullerman B. Implicit Bias in Healthcare: A Quick Safety Review. Joint Commission on Accreditation of Healthcare Organizations, United States of America; Mar 23, 2026. URL: https://coilink.org/20.500.12592/gbpfpg [Accessed 2024-05-01]
  53. Akay EM, Hilbert A, Carlisle BG, Madai VI, Mutke MA, Frey D. Artificial intelligence for clinical decision support in acute ischemic stroke: a systematic review. Stroke. Jun 2023;54(6):1505-1516. [CrossRef] [Medline]
  54. Jeong HK, Park C, Henao R, Kheterpal M. Deep learning in dermatology: a systematic review of current approaches, outcomes, and limitations. JID Innov. 2022;3(1):100150. [CrossRef] [Medline]
  55. Balla Y, Tirunagari S, Windridge D. Pediatrics in artificial intelligence era: a systematic review on challenges, opportunities, and explainability. Indian Pediatr. Jul 15, 2023;60(7):561-569. [Medline]
  56. Clement J, Maldonado AQ. Augmenting the transplant team with artificial intelligence: toward meaningful AI use in solid organ transplant. Front Immunol. 2021;12:694222. [CrossRef] [Medline]
  57. Buchlak QD, Esmaili N, Leveque JC, et al. Machine learning applications to clinical decision support in neurosurgery: an artificial intelligence augmented systematic review. Neurosurg Rev. Oct 2020;43(5):1235-1253. [CrossRef] [Medline]
  58. Shi W, Zhuang Y, Zhu Y, Iwinski H, Wattenbarger M, Wang MD. Retrieval-augmented large language models for adolescent idiopathic scoliosis patients in shared decision-making. Presented at: ACM-BCB 2023: 14th ACM Conference on Bioinformatics, Computational Biology, and Health Informatics; Sep 3-6, 2023. [CrossRef]
  59. Ferrara E. Fairness and bias in artificial intelligence: a brief survey of sources, impacts, and mitigation strategies. Sci. 2024;6(1):3. [CrossRef]
  60. Aung YYM, Wong DCS, Ting DSW. The promise of artificial intelligence: a review of the opportunities and challenges of artificial intelligence in healthcare. Br Med Bull. Sep 10, 2021;139(1):4-15. [CrossRef] [Medline]
  61. Nair M, Svedberg P, Larsson I, Nygren JM. A comprehensive overview of barriers and strategies for AI implementation in healthcare: mixed-method design. PLoS One. 2024;19(8):e0305949. [CrossRef] [Medline]
  62. Reddy S. Generative AI in healthcare: an implementation science informed translational path on application, integration and governance. Implement Sci. Mar 15, 2024;19(1):27. [CrossRef] [Medline]
  63. Liu S, McCoy AB, Peterson JF, et al. Leveraging explainable artificial intelligence to optimize clinical decision support. J Am Med Inform Assoc. Apr 3, 2024;31(4):968-974. [CrossRef] [Medline]
  64. Iroju O, Soriyan A, Gambo I, Olaleke J. Interoperability in healthcare: benefits, challenges and resolutions. Int J Innov Appl Stud. 2013;3(1):262-270. URL: https://ijias.issr-journals.org/abstract.php?article=IJIAS-13-090-01 [Accessed 2026-03-21]
  65. Yim D, Khuntia J, Parameswaran V, Meyers A. Preliminary evidence of the use of generative AI in health care clinical services: systematic narrative review. JMIR Med Inform. Mar 20, 2024;12:e52073. [CrossRef] [Medline]
  66. An Artificial Intelligence Code of Conduct for Health and Medicine: Essential Guidance for Aligned Action. National Academy of Medicine; 2025.
  67. Dullabh P, Dungan R, Raj M, et al. Trust & Patient-Centeredness Workgroup: Methods for Involving End-Users in PC CDS Co-Design [Internet]. Agency for Healthcare Research and Quality (US); 2023.
  68. Desai PJ, Zott C, Gauthreaux N, et al. Trust & Patient-Centeredness Workgroup: An Introductory Handbook for Patient Engagement Throughout the Patient-Centered Clinical Decision Support Lifecycle [Internet]. Agency for Healthcare Research and Quality (US); 2023.
  69. Pham Q, Gamble A, Hearn J, Cafazzo JA. The need for ethnoracial equity in artificial intelligence for diabetes management: review and recommendations. J Med Internet Res. Feb 10, 2021;23(2):e22320. [CrossRef] [Medline]
  70. Nadarzynski T, Knights N, Husbands D, et al. Achieving health equity through conversational AI: a roadmap for design and implementation of inclusive chatbots in healthcare. PLOS Digit Health. May 2, 2024;3(5):e0000492. [CrossRef] [Medline]
  71. Milne-Ives M, de Cock C, Lim E, et al. The effectiveness of artificial intelligence conversational agents in health care: systematic review. J Med Internet Res. Oct 22, 2020;22(10):e20346. [CrossRef] [Medline]
  72. Robinson R, Liday C, Lee S, et al. Artificial intelligence in health care-understanding patient information needs and designing comprehensible transparency: qualitative study. JMIR AI. 2023;2:e46487. [CrossRef] [Medline]
  73. Jayakumar P, Moore MG, Furlough KA, et al. Comparison of an artificial intelligence-enabled patient decision aid vs educational material on decision quality, shared decision-making, patient experience, and functional outcomes in adults with knee osteoarthritis: a randomized clinical trial. JAMA Netw Open. Feb 1, 2021;4(2):e2037107. [CrossRef] [Medline]
  74. Labkoff S, Oladimeji B, Kannry J, et al. Toward a responsible future: recommendations for AI-enabled clinical decision support. J Am Med Inform Assoc. Nov 1, 2024;31(11):2730-2739. [CrossRef] [Medline]
  75. Gartner B, Leysen D, Mcgowan H, et al. Digitally supported shared decision-making for exercise prescription in the secondary prevention of cardiovascular disease. Eur J Prev Cardiol. May 19, 2025;32(Suppl 1). [CrossRef]
  76. Campos H, Salmi L. Critical AI health literacy as liberation technology: a new skill for patient empowerment. NAM Perspect. 2025;12. [CrossRef]
  77. Jackson BR, Sendak MP, Solomonides A, Balu S, Sittig DF. Regulation of artificial intelligence in healthcare: Clinical Laboratory Improvement Amendments (CLIA) as a model. J Am Med Inform Assoc. Feb 1, 2025;32(2):404-407. [CrossRef] [Medline]
  78. Salwei ME, Davis SE, Reale C, et al. Human-centered design of an artificial intelligence monitoring system: the Vanderbilt Algorithmovigilance Monitoring and Operations System. JAMIA Open. Oct 2025;8(5):ooaf136. [CrossRef] [Medline]
  79. Asgari E, Montaña-Brown N, Dubois M, et al. A framework to assess clinical safety and hallucination rates of LLMs for medical text summarisation. NPJ Digit Med. May 13, 2025;8(1):274. [CrossRef] [Medline]
  80. Qiu P, Wu C, Liu S, et al. Quantifying the reasoning abilities of LLMs on clinical cases. Nat Commun. Nov 6, 2025;16(1):9799. [CrossRef] [Medline]
  81. Chen S, Gao M, Sasse K, et al. When helpfulness backfires: LLMs and the risk of false medical information due to sycophantic behavior. NPJ Digit Med. Oct 17, 2025;8(1):605. [CrossRef] [Medline]
  82. Kellogg KC, Ye B, Hu Y, Savova GK, Wallace B, Bitterman DS. Large language models require a new form of oversight: capability-based monitoring. arXiv. Preprint posted online on Nov 5, 2025. [CrossRef]
  83. Croxford E, Gao Y, First E, et al. Evaluating clinical AI summaries with large language models as judges. NPJ Digit Med. Nov 5, 2025;8(1):640. [CrossRef] [Medline]
  84. Davis SE, Walsh CG, Matheny ME. Open questions and research gaps for monitoring and updating AI-enabled tools in clinical settings. Front Digit Health. 2022;4:958284. [CrossRef] [Medline]
  85. Davis SE, Embí PJ, Matheny ME. Sustainable deployment of clinical prediction tools-a 360° approach to model maintenance. J Am Med Inform Assoc. Apr 19, 2024;31(5):1195-1198. [CrossRef] [Medline]
  86. Sittig DF, Singh H. Recommendations to ensure safety of AI in real-world clinical care. JAMA. Feb 11, 2025;333(6):457-458. [CrossRef] [Medline]
  87. Health data, technology, and interoperability: certification program updates, algorithm transparency, and information sharing. Office of the National Coordinator for Health Information Technology (ONC), Department of Health and Human Services (HHS). 2023. URL: https://healthit.gov/wp-content/uploads/2023/12/hti-1-final-rule.pdf [Accessed 2025-03-17]
  88. Dullabh P, Dhopeshwarkar R, Leaphart D, et al. Trustworthy Artificial Intelligence (TAI) for Patient-Centered Outcomes Research (PCOR). Office of the Assistant Secretary for Planning and Evaluation (ASPE); 2023.


AI: artificial intelligence
EHR: electronic health record
GenAI: generative artificial intelligence
LLM: large language model
PC CDS: patient-centered clinical decision support
PGHD: patient-generated health data
PRO: patient-reported outcome
SDM: shared decision-making


Edited by Amaryllis Mavragani; submitted 01.Aug.2025; peer-reviewed by Denise Hynes, Huasheng Lv, Karen Kier, Nicki Newton; final revised version received 03.Feb.2026; accepted 27.Feb.2026; published 01.Apr.2026.

Copyright

© Prashila Dullabh, Courtney Zott, Nicole Gauthreaux, Caroline Peterson, Abigail Aronoff, Kistein Monkhouse, Dean F Sittig. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 1.Apr.2026.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research (ISSN 1438-8871), is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.