Published in Vol 27 (2025)

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/73189.
Application of Nudges to Design Clinical Decision Support Tools: Systematic Approach Guided by Implementation Science


1Department of Family Medicine, School of Medicine, University of Colorado Anschutz Medical Campus, 1890 North Revere Court, Aurora, CO, United States

2Division of Geriatrics, School of Medicine, University of Colorado Anschutz Medical Campus, Aurora, CO, United States

3VA Eastern Colorado Geriatric Research Education and Clinical Center, Aurora, CO, United States

4UCHealth Colorado, Aurora, CO, United States

5Adult and Child Center for Outcomes Research and Delivery Science, School of Medicine, University of Colorado Anschutz Medical Campus, Aurora, CO, United States

6Department of Biomedical Informatics, School of Medicine, University of Colorado Anschutz Medical Campus, Aurora, CO, United States

7Department of Cardiology, School of Medicine, University of Colorado Anschutz Medical Campus, Aurora, CO, United States

8Department of General Internal Medicine, School of Medicine, University of Colorado Anschutz Medical Campus, Aurora, CO, United States

9Ludeman Family Center for Women’s Health Research, School of Medicine, University of Colorado Anschutz Medical Campus, Aurora, CO, United States

10Division of Pulmonary and Critical Care Medicine, Denver Health Medical Center, Denver, CO, United States

11Division of Hospital Medicine, School of Medicine, University of Colorado Anschutz Medical Campus, Aurora, CO, United States

Corresponding Author:

Katy E Trinkley, PharmD, PhD


Background: Clinical decision support (CDS) is one strategy to increase evidence-based practices by clinicians. Despite its potential, CDS tools produce mixed results and are often disliked by clinicians. Principles from behavioral economics such as “nudges” may improve the effectiveness and clinician satisfaction of CDS tools. This paper outlines a pragmatic approach grounded in implementation science to identify and prioritize how to incorporate different types of nudges into CDS tools.

Objective: The purpose of this paper is to describe a systematic and pragmatic approach grounded in implementation science to identify and prioritize how best to incorporate different types of nudges into CDS tools. We provide a case example of how this systematic approach was applied to design a CDS tool to improve guideline-concordant prescribing of mineralocorticoid receptor antagonists for patients with heart failure and reduced ejection fraction.

Methods: We applied the Messenger, Incentives, Norms, Defaults, Salience, Priming, Affect, Commitments, and Ego nudge framework and the Practical, Robust Implementation and Sustainability Model implementation science framework to systematically and pragmatically identify and prioritize different types of nudges for CDS tools. To illustrate how these frameworks can be applied in a real-life scenario, we use a case example of a CDS tool to improve guideline-concordant prescribing for patients with heart failure. We describe a process of how these frameworks can be used pragmatically by clinicians and informaticists or more technical CDS builders to apply nudge theory to CDS tools.

Results: We defined four iterative steps guided by the Practical, Robust Implementation and Sustainability Model: (1) engage partners for user-centered design, (2) develop a shared understanding of the nudge types, (3) determine the overarching CDS format, and (4) brainstorm and prioritize nudge types to address each modifiable contextual issue. These steps are iterative and intended to be adapted to align with the local resources and needs of various clinical scenarios and settings. We provide illustrative examples of how this approach was applied to the case example, including who we engaged, details of nudge design decisions, and lessons learned.

Conclusions: We present a pragmatic approach to guide the selection and prioritization of nudges, informed by implementation science. This approach can be used to comprehensively and systematically consider key issues when designing CDS to optimize clinician satisfaction, effectiveness, equity, and sustainability while minimizing the potential for unintended consequences. This approach can be adapted and generalized to other health settings and clinical situations, advancing the goals of learning health systems to expedite the translation of evidence into practice.

J Med Internet Res 2025;27:e73189

doi:10.2196/73189




Uptake of evidence-based medicine into routine practice is slow and rare [1]; it takes an estimated 17 years for just 14% of effective practices to be translated [2]. Learning health systems and the field of implementation science aim to expedite the translation of evidence into practice [3] and have had some success [4-9]. One strategy increasingly used to expedite knowledge translation is the use of clinical decision support (CDS) tools [10-13]. CDS tools are often automated within the electronic health record (EHR) and are designed to encourage clinicians to follow more evidence-based practices. While CDS interventions have shown promise in improving the translation of evidence into routine practice settings [14-19], barriers to their widespread acceptance include the need to make them more user-friendly for clinicians. Unfortunately, CDS tools are not consistently designed well with clinician needs and preferences in mind, which has led to significant clinician dissatisfaction with CDS and the phenomenon of “alert fatigue” [20-22]. Thus, opportunities remain [23] to optimize the design and implementation of CDS.

One increasingly popular opportunity to optimize CDS is to use principles from behavioral economics to augment CDS [24,25]. Behavioral economics principles draw upon cognitive science and social psychology and can be used to develop more user-friendly interventions [25]. Some of the most widely used behavioral economics tools are “nudges,” which alter the manner or environment in which decisions are made, otherwise known as “choice architecture” [25,26]. Strategic use of nudges within CDS can guide clinicians toward making guideline-concordant decisions without restricting their choice [26]. There are multiple examples of nudges positively promoting behavior change across diverse settings and populations [27]. We have found many examples of nudges being used for CDS, including efforts to increase statin prescribing during primary care visits using active choice prompts and monthly peer comparison feedback; strategies implemented across pharmacies in 37 different states to increase recombinant zoster vaccine second dose rates, which prioritized compliance with organizational goals and incentives; and efforts to increase the adoption of a pulmonary embolism risk prediction tool in an emergency department of a large academic health care system [28-31].

There are many factors to consider when applying nudges to CDS. As is the case for CDS tools generally, nudge solutions should be aligned with the specific clinical situation and consider the context, such as associated workflows, available resources, and perspectives of the target audience [15,32,33]. Nudges also need to be designed carefully to ensure positive, ethical behavior change; avoid unintended consequences [34]; and support maintenance of behavior change [35].

Across a variety of research and operational implementation projects, we have observed the need for guidance on how to select and apply the different types of nudges, particularly guidance that is systematic, comprehensive, and standardized to facilitate alignment with the clinical context [20,36-38]. Such guidance is needed to enable selection of nudges that are effective, sustainable, ethical, generalizable, and adaptable, and it can help developers create CDS with embedded nudges that are both locally relevant and scalable. Further, guidance is needed to strategically select, from among the many types of nudges that could be applied, those best aligned with the specific CDS situation and most likely to guide decision-making toward maximal impact.

The purpose of this paper is to describe a systematic and pragmatic approach grounded in implementation science to identify and prioritize how best to incorporate different types of nudges into CDS tools. We provide a case example of how this systematic approach was applied to design a CDS tool to improve guideline-concordant prescribing of mineralocorticoid receptor antagonists for patients with heart failure and reduced ejection fraction.


Overview

To develop our pragmatic approach, we convened a multidisciplinary group representing clinicians (primary care, geriatrics, cardiology, and critical care); psychologists; informaticists; implementation scientists; health services researchers; and experts in human-computer interaction, decision science, and behavioral economics. Members of the group were purposefully selected based on their relevant expertise and were engaged via email and synchronous meetings to discuss the goals of the approach, framework selection, and processes. This group was tasked with iteratively developing a systematic approach to select nudge “types” and design nudge “forms” for clinician-facing CDS tools within an EHR. Here, we define nudge “types” based on the Messenger, Incentives, Norms, Defaults, Salience, Priming, Affect, Commitments, and Ego (MINDSPACE) categories (see Framework Selection, below) and “forms” as the different ways the nudge types can be designed [39].

At the outset, we defined key characteristics and goals of this approach including that it needed to (1) be feasible and pragmatic; (2) be applicable across different types of CDS and clinical situations; (3) comprehensively consider the breadth of different types of nudges; (4) balance the potential to impact clinician behavior change with the risk of unintended consequences and resources and effort required to build the nudge and CDS; and (5) promote or foster equity, sustainability, and generalizability of the nudge and overarching CDS solution.

Framework Selection

To facilitate comprehensive consideration of different types of nudges, we selected the MINDSPACE framework [39]. While other frameworks could be used, we selected MINDSPACE because it is commonly used [40] and is easy to use for diverse audiences, including those without extensive training in psychology or behavioral economics. MINDSPACE categorizes nudges into 9 types based on how they alter the choice architecture. The MINDSPACE nudge types [39] are described in Table 1 along with the theoretical logic behind the behavior the nudge seeks to address. Each nudge type can then be applied in different ways or forms for specific situations. For example, the nudge type of salience can take on the form of bolded text or a pop-up that causes an interruption. To understand the potential effectiveness of different nudge types and forms on changing behavior, we also used the nudge “ladder” [25]. The nudge ladder categorizes nudges on an effectiveness scale of 1-5 (5=strongest/most effective), in which default options are the strongest and simply providing information is the weakest or least likely to change behavior [25]. As described in Table 1, we applied the nudge ladder ratings to the 9 MINDSPACE nudge types.

Table 1. MINDSPACE definitions for nudge types and their “ladder” strength with notes on how the study team came to a shared understanding.
Each entry lists the nudge type, its place on the nudge ladder (1=least effective, 5=most effective), the MINDSPACE definition (mechanisms), notes from the team, and examples in the CDS setting.

Messenger (ladder: 2)
  Definition: We are heavily influenced by who communicates information.
  Notes from team: Messenger also includes differential impacts on behavior when information is communicated by experts (real or perceived) versus novices, for example, or by organizational leaders or people with authoritative power versus peers.
  Examples in CDS setting:
  • Insert language or institutional branding to indicate when the alert is personalized to the clinician’s institution or recommended by specific clinic leadership versus provided by a third-party data vendor
  • Present reference links to clinical guidelines and literature from public health agencies or other governing bodies prominently within the alert

Incentives (ladder: 4)
  Definition: Our responses to incentives are shaped by predictable mental shortcuts, such as strongly avoiding losses.
  Notes from team: No amendment necessary.
  Examples in CDS setting:
  • Disable the alert if the clinician reaches a goal threshold for a behavior over a certain timeframe
  • Require clinicians to provide a reason if they choose to defer action; display reasons in a readily visible area of the patient’s chart to increase accountability

Norms (ladder: 2)
  Definition: We are strongly influenced by what others do.
  Notes from team: Amend definition to include: we are strongly influenced by norms (what others do), both real and perceived.
  Examples in CDS setting:
  • Display the average frequency of a behavior or a target level set for the clinician’s specific practice site
  • Present recommended orders in an order set

Defaults (ladder: 5)
  Definition: We “go with the flow” of pre-set options.
  Notes from team: No amendment necessary.
  Examples in CDS setting:
  • When the clinician opens an order set, display particular medication or lab orders based on the patient’s current therapies or lab-based criteria
  • Default lab monitoring orders with appropriate sequencing as a panel in the alert

Salience (ladder: 2)
  Definition: Our attention is drawn to what is novel and seems relevant to us.
  Notes from team: Amend definition to include anything that draws our attention, not necessarily because it is novel or relevant: things that stand out or draw our attention are more likely to influence our behavior.
  Examples in CDS setting:
  • Present abnormal lab values in red bolded text or larger font size
  • Incorporate the patient’s name or photo into the alert body
  • Using the current alert, trigger just-in-time communications to clinicians about future lab monitoring

Priming (ladder: 4)
  Definition: Our acts are often influenced by subconscious cues.
  Notes from team: No amendment necessary.
  Examples in CDS setting:
  • Remove native medication warnings that may be firing before the custom alert to remove a negative priming mechanism within the EHR

Affect (ladder: 2)
  Definition: Our emotional associations can powerfully shape our actions.
  Notes from team: No amendment necessary.
  Examples in CDS setting:
  • Add more options for deferring the alert’s re-appearance to minimize psychological reactance
  • Include statistics or a clinical vignette in the alert that describes the patient’s potential disease outcomes due to poor medication management

Commitments (ladder: 3)
  Definition: We seek to be consistent with our public promises and reciprocate acts.
  Notes from team: No amendment necessary.
  Examples in CDS setting:
  • Implement a patient-facing alert that allows the patient to indicate whether they are interested in receiving information about a new therapy

Ego (ladder: 2)
  Definition: We act in ways that make us feel better about ourselves.
  Notes from team: No amendment necessary.
  Examples in CDS setting:
  • In the alert, display the clinician’s percentile ranking relative to peers in performing the recommended behavior

MINDSPACE: Messenger, Incentives, Norms, Defaults, Salience, Priming, Affect, Commitments, and Ego.

EHR: electronic health record.
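The team's shared ladder ratings in Table 1 can be captured as a simple lookup to support prioritization discussions. The following Python sketch is our illustration only (the dictionary and function names are not part of the published approach); it ranks the 9 nudge types strongest-first using the ratings from Table 1.

```python
# Nudge ladder ratings from Table 1 (1 = least effective, 5 = most effective).
NUDGE_LADDER = {
    "Messenger": 2,
    "Incentives": 4,
    "Norms": 2,
    "Defaults": 5,
    "Salience": 2,
    "Priming": 4,
    "Affect": 2,
    "Commitments": 3,
    "Ego": 2,
}

def rank_nudge_types(ratings):
    """Return nudge types ordered strongest-first, ties broken alphabetically."""
    return sorted(ratings, key=lambda t: (-ratings[t], t))

print(rank_nudge_types(NUDGE_LADDER))
# → ['Defaults', 'Incentives', 'Priming', 'Commitments', 'Affect',
#    'Ego', 'Messenger', 'Norms', 'Salience']
```

Such a ranking is a starting point only; as described below, feasibility, local context, and the risk of unintended consequences also shape which nudge types are ultimately prioritized.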

While other implementation science frameworks could be used, we selected the Practical, Robust Implementation and Sustainability Model (PRISM) [41-43] because (1) it has been applied with user-centered design (UCD) and human-computer interaction principles to CDS [32], (2) it offers specific guidance on issues of sustainability and equity [41,42,44-46], (3) it includes the RE-AIM (Reach, Effectiveness, Adoption, Implementation, Maintenance) outcome measures, which can guide nudge design decisions and support pragmatic evaluation of nudges deployed, and (4) there are numerous tools and resources to assist diverse users with and without implementation science expertise through the process of applying the PRISM framework [41,42,47,48].

Use of a framework such as PRISM is key to systematically design nudges to align with the implementation context to optimize relevance, generalizability, sustainability, and equity [41,42]. An implementation science framework such as PRISM provides holistic guidance on how to assess and align complex multilevel contextual issues with the selection of nudge types and forms, along with consideration and measurement of a variety of implementation outcomes.

PRISM is illustrated in Figure 1 and includes the RE-AIM outcomes and 4 context domains that work together to enhance alignment among the context, intervention, and implementation strategies in order to maximize outcomes [43]. The PRISM context domains are (1) the characteristics of the organization and patient, (2) the perspectives of the organization and patient about the intervention including the complexity and evidence to support the intervention, (3) the implementation and sustainability infrastructure including resources available for initial and ongoing implementation, and (4) the external environment including clinical guidelines and policies or regulations.

Central to PRISM is consideration of the multilevel perspectives of organizational partners (eg, leaders, managers, staff) and patient partners (eg, family or caregiver, individual).

Figure 1. The PRISM implementation logic model. PRISM considers the dynamic interactions of the context at multiple levels of the organization, patient, or community along with factors outside the local setting (eg, external environment includes evidence-based guidelines, human-computer interaction best practices, principles of behavioral economics) and factors that influence the ability to implement or sustain the program. These contextual factors inform the design of implementation strategies (eg, fitting within usual workflows) and the intervention itself. In the case of CDS tools, the CDS can be the strategy or intervention depending on the framing of the project. Design decisions are made while proactively and iteratively considering the impact on PRISM’s RE-AIM outcomes, which include consideration of equity across all outcomes. Contextual factors and RE-AIM outcomes are informed by multilevel partner engagement across diverse perspectives to promote equity. CDS: clinical decision support; PRISM: Practical, Robust Implementation and Sustainability Model; RE-AIM: Reach, Effectiveness, Adoption, Implementation, Maintenance.

Case Example

The multidisciplinary group iteratively developed this approach, and we applied it to design a CDS tool to improve guideline-concordant prescribing of a mineralocorticoid receptor antagonist (MRA) for patients with heart failure and reduced ejection fraction (HFrEF) [49]. Despite strong evidence that MRAs improve mortality and quality of life for patients with HFrEF, suboptimal prescribing remains problematic across health systems [50,51]. We selected this use case because it is widely relevant across health systems and it can be adapted to address other gaps in care.

The MRA CDS tool was designed to be implemented within primary care and cardiology clinics across the UCHealth system. UCHealth is a regional health system that includes 14 hospitals and more than 900 clinics in academic, urban, suburban, and rural settings across Colorado and parts of Wyoming and Nebraska. UCHealth uses one integrated EHR (Epic Systems).

Ethical Considerations

This study was reviewed and approved by the Institutional Review Board. Patients and clinicians who participated in focus groups provided informed consent and were compensated for their time (patients were compensated $60 and providers were compensated $100). All local, national, regional, and international laws and regulations regarding the protection of personal information, privacy, and human rights were adhered to.


Overview

Through a series of iterative applications to the MRA case example and consensus-based discussion with the multidisciplinary group, we developed a 4-step process that can be used to systematically identify and prioritize nudges and that applies to a range of different types of nudges, CDS tools, and diverse clinical issues. These steps are outlined in Textbox 1. The four iterative steps are as follows: (1) engage partners for UCD, (2) develop a shared understanding of the nudge types, (3) determine the overarching CDS format, and (4) brainstorm and prioritize nudges to address each modifiable contextual issue.

Textbox 1. Stepwise process for applying nudges to clinical decision support (CDS) tools. This approach is guided by the Practical, Robust Implementation and Sustainability Model (PRISM) implementation science framework and leverages the Messenger, Incentives, Norms, Defaults, Salience, Priming, Affect, Commitments, and Ego (MINDSPACE) framework to categorize the various nudge types and the nudge ladder to assess the effectiveness of nudges.

Step 1: Engage partners for user-centered design

  • Assemble a team and clearly define the focus
  • Assess partners’ areas of expertise
  • Conduct partner engagement to ensure representation of perspectives (equity) and expertise
  • Assess contextual facilitators and barriers
  • Identify modifiable contextual issues that can be addressed by nudges

Step 2: Develop a shared understanding of nudge types

  • Define the nudge types in language that resonates with the team
  • Review the anticipated effectiveness of each nudge type

Step 3: Determine the overarching CDS format

  • Ensure the CDS format addresses the ultimate intended behavior change
  • Consider feasibility of embedded nudges based on the intended behavior change and overarching CDS format

Step 4: Brainstorm and prioritize nudges to address each modifiable contextual issue

  • Explore different ways the nudge types can address the modifiable issues
  • Promote divergent thinking by having multiple members of the team independently create prototypes
  • Develop a plan for iterative evaluation of the final product

These steps are iterative and intended to be adapted to align with the local resources and needs of various clinical scenarios and settings. Below, we describe these four steps and illustrate how this approach was applied to the use case of a CDS to improve prescribing of MRAs for HFrEF.

Step 1. Engage Partners for UCD

Step 1, engage partners for UCD, is a process that involves several sub-steps: assessment of the context, identification of key partners, statement of the problem the CDS will address, and identification of modifiable drivers of the clinical decision.

Aligned with implementation science principles of designing relevant, generalizable, sustainable, and equitable solutions, the first sub-step is using PRISM to assess the context of the local and external setting [52,53]. Key to assessing the context is multilevel partner engagement. PRISM can be used to identify the multilevel partners to engage, inform the input needed from the different partners, and promote equity through representation of all partner perspectives [48]. While CDS are clinician-facing tools, their recommendations should be aligned with the patient’s needs and preferences; therefore, the patient is an important partner perspective to capture. Use of an implementation science framework such as PRISM provides guidance on how to design for equity, sustainability, and scalability (generalizability) from the beginning [53].

The next sub-step is to create a clear problem statement or define the evidence-based gap, informed by the contextual assessment and input from the team. Use a team-based approach with diverse expertise and iterative engagement of multiple types of partners. It is ideal to have expertise in the areas of CDS best practices and technical build, the clinical area of interest, decision science, implementation science, and ethics, but all of this expertise is not always available within usual operations. Such expertise may be represented within the “implementation team,” which leads the design activities, or through the partner engagement process. Partners engaged should include clinician end users of the CDS and other levels of partners within the local setting who could influence the uptake of the CDS tool (eg, leaders, informatics governance, decision-makers) or who could be influenced by it (eg, patients whose care is impacted). The nature of partner engagement can range from formal semistructured interviews or focus groups to less formal meetings or even emails, meeting the goals of the engagement while also fitting within available resources [20].

For the MRA example, the goal was to design a CDS tool to improve evidence-based MRA prescribing for patients with HFrEF in outpatient cardiology and primary care clinics. We convened an implementation team with expertise in the clinical situation, implementation science, clinical informatics, and decision science. We used PRISM with UCD strategies to guide our contextual assessment and partner engagement activities [32]. To understand end user characteristics and perspectives of the CDS solution, we conducted a series of clinician focus groups. Through these focus groups, we assessed contextual factors that influence prescribing, their workflows, and preferences for receiving information within such a CDS solution [54]. Patients with HFrEF were also engaged in focus groups to ensure the CDS recommendations were patient-centered [55]. To promote equity, we used purposeful sampling of providers and patients across different types of settings (eg, academic, rural, and community settings) to ensure representation of diverse perspectives, not just the average. We engaged internal health system operational and informatics leaders and governance groups. This engagement was iterative and occurred via informal one-on-one and standing group meetings to ensure alignment with strategic priorities, gain buy-in, and get approval to deploy the CDS.

Next in the UCD process is understanding modifiable drivers of the clinical decision. Findings from the local context should be iteratively considered and mixed with findings about the context of the external environment (eg, regulations, evidence-based recommendations, reimbursement issues) to understand facilitators and barriers that can influence clinician decision-making, both the initial feasibility and ongoing sustainability of maintaining the CDS tool, and the scalability or generalizability of the tool. Throughout this process, the best practices in CDS design [32], including evidence-based principles of human-computer interaction and UCD methods, provide more granular direction on assessing CDS design decisions that consider issues of socio-technical relationships.

For the MRA example, we quantitatively assessed patterns of prescribing MRAs, including characteristics of patients, to identify additional contextual drivers of prescribing [56]. We then compared our findings to externally published literature to understand generalizability and validate the issues identified. We considered key external issues including national quality benchmarks, evidence-based recommendations for HFrEF management, and best practices in CDS design.

Finally, in Step 1 of the UCD process, modifiable issues are identified that can be addressed by nudges. Not all of what is discovered in Step 1 will be modifiable or amenable to a nudge intervention; therefore, an important part is reviewing the contextual findings to create a clear list of modifiable issues that can be addressed by nudges. Partner input may be needed to determine which issues are modifiable. For example, if the implementation team does not have CDS technical or clinical expertise to consider viability of tangible solutions to contextual issues, these perspectives will need to be engaged. Findings from the contextual assessment in Step 1 that are not amenable to nudge intervention are revisited in later steps to inform the type of nudge selected and its design or form.

In the MRA CDS tool example, the ultimate goal was getting clinicians to prescribe an evidence-based MRA for patients with HFrEF. In Step 1, we identified the following contextual drivers of the decision to prescribe an MRA that were deemed tangible issues for nudges to address within a CDS: (1) remembering to prescribe an MRA during a patient encounter, (2) overcoming clinician misconceptions that high-normal serum potassium and low-normal renal function are barriers to MRA use, and (3) the time needed to determine what laboratory monitoring is needed to safely monitor an MRA after prescribing. We identified many other contextual barriers and facilitators that were not amenable to a nudge intervention and documented these to consider later. For example, providers felt patients often hesitated to start treatment due to cost or complexity; while this was an important contextual consideration, we felt it was not amenable to a nudge intervention.
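For illustration only, contextual drivers like those above typically translate into an eligibility rule that determines when the alert fires. The Python sketch below is a hypothetical simplification: the field names, medication list, and laboratory thresholds are our assumptions for demonstration, not the actual build criteria of the MRA CDS tool.

```python
# Illustrative only: data fields and thresholds are assumed, not the
# study's actual CDS build logic.
def mra_alert_eligible(patient):
    """Return True if a hypothetical MRA alert would fire for this patient."""
    has_hfref = patient["ef_percent"] <= 40  # reduced ejection fraction
    on_mra = any(m in patient["meds"] for m in ("spironolactone", "eplerenone"))
    # Only frankly unsafe values suppress the alert; high-normal potassium
    # and low-normal renal function alone do not (addressing issue 2).
    labs_safe = patient["potassium_meq_l"] < 5.0 and patient["egfr"] >= 30
    return has_hfref and not on_mra and labs_safe

patient = {"ef_percent": 30, "meds": ["metoprolol", "lisinopril"],
           "potassium_meq_l": 4.8, "egfr": 45}
print(mra_alert_eligible(patient))  # → True
```

In practice, such criteria would be set by the clinical team and validated against guidelines and local data, as described in the contextual assessment above.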

Step 2. Develop a Shared Understanding of the Nudge Types

For Step 2, develop a shared understanding of the nudge types, it is useful to review the nudge types as a team. The focus should be on developing a shared understanding, in language that resonates with the team, of both the different nudge types and their strength in changing behavior based on the nudge ladder. To facilitate this process, we suggest using MINDSPACE and documenting shared definitions in relatable terminology along with ratings from the nudge ladder for each nudge type. Although MINDSPACE is relatively accessible to broad audiences, its jargon and categorization can still be interpreted in different ways, especially among multidisciplinary teams. We have found this to be an iterative process of coming to consensus and, ideally, integrating multidisciplinary team perspectives.

In the MRA CDS tool example, we found that the original definitions of MINDSPACE were not always intuitive and that the differentiation between the categories was not always clear. Because the original MINDSPACE definitions were interpreted differently by different members of our team, this process of developing a shared “mental model” was key to developing a common vocabulary by which we could systematically consider applying the different nudge types. Together, we iteratively worked to create definitions that resonated with the unique perspectives of our multidisciplinary team, that clearly distinguished the different categories of nudge types, and that considered the strength of the nudge. For example, salience was defined as something that draws our attention because it seems novel or relevant, but our team amended this definition to define salience as “anything that draws our attention, not necessarily because it is novel or relevant.” Table 1 provides an example of how we documented our shared understanding of the MINDSPACE nudge types and their strength.

Step 3. Determine the Overarching CDS Format

The overarching CDS tool itself is a nudge that can include additional embedded nudges. Step 3, determining the overarching CDS format, focuses on determining a format that addresses the ultimate intended behavior change (eg, appropriate prescribing), while the embedded nudges can address additional contextual drivers of the ultimate behavior change (eg, address informational needs). Deciding on the overarching CDS format can be key to guiding later decisions about what embedded nudges are feasible; therefore, Steps 3 and 4 are purposely distinct. For example, with some CDS formats, such as those embedded within a medication order, there may not be space or technical capacity to address additional contextual drivers of the ultimate behavior change. CDS within medication orders often have character limits and do not allow for functions such as hyperlinks, thus providing limited space to address contextual drivers.

Often, the overarching format of the CDS tool is broadly categorized as interruptive (eg, “pop up”) or passive. With passive CDS, either the user needs to seek out the CDS tool or the tool is presented in such a way that the user’s workflow is not interrupted (no hard stop). A decision on whether the overarching CDS tool is an interruptive or passive format is based on the severity and frequency of the targeted behavior to be changed, as well as organizational norms, priority of the issue, and end user preferences [20]. Interruptive CDS is considered a stronger nudge type based on the nudge ladder and is often reserved for higher risk clinical situations. Decisions about the overarching CDS format should be informed by traditional UCD principles and best practices in CDS design [32].
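As a purely illustrative sketch of the decision logic described above, the choice between an interruptive and a passive format could be expressed as follows; the inputs and thresholds are assumptions for illustration, not a published rule.

```python
# Hypothetical sketch of the interruptive-vs-passive decision: reserve
# interruptive ("pop-up") CDS for higher-risk situations in which the
# organization permits interruption, consistent with the nudge ladder's
# stronger rung; otherwise default to a passive format that does not
# interrupt workflow. Inputs are illustrative assumptions.

def choose_overarching_format(severity: str, org_allows_interruptive: bool) -> str:
    if severity == "high" and org_allows_interruptive:
        return "interruptive"
    return "passive"

print(choose_overarching_format("high", True))   # interruptive
print(choose_overarching_format("low", True))    # passive
```

In practice, this decision also weighs frequency of firing, organizational norms, and end user preferences, which a real implementation would need to represent.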

In the MRA example, an interruptive “pop-up” CDS was selected for the overarching format based on the severity of the clinical situation, input from clinicians, and perceived need for space to address additional contextual drivers of prescribing an MRA. Per MINDSPACE and the nudge ladder, this CDS format is a “salient” type, and the strength is “strong.” At our health system, interruptive CDS are reserved for situations in which the clinical severity warrants the interruption or when passive CDS are not feasible to address the need. A strong, salient nudge was deemed appropriate for our local context, and this format provided space to address other drivers of the behavior change.

Step 4. Brainstorm and Prioritize Nudges to Address Each Modifiable Contextual Issue

Step 4 is to brainstorm and prioritize the nudges. To support divergent and creative thinking, this step is ideally completed independently by multiple members of the team, who then share ideas for discussion and prioritization. In this step, each MINDSPACE nudge type defined in Step 2 is iteratively applied, or mapped, to each modifiable issue identified in Step 1 to define the different ways or forms a given nudge type could take within the overarching CDS format determined in Step 3. There may be multiple ways a nudge type can be implemented for each modifiable issue, and these should be explored. It is also possible that there is no feasible way to apply a given nudge type due to technical or other constraints. When this step is completed by individuals with technical knowledge and an understanding of the clinical context, such decisions about feasibility may be more efficient and immediately evident.
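The cross-mapping of nudge types to modifiable issues can be pictured as a simple grid. The Python sketch below is purely illustrative; the issue names and example forms are assumptions, not the study's actual worksheet.

```python
# Hypothetical sketch of the Step 4 brainstorming grid: each MINDSPACE
# nudge type is crossed with each modifiable contextual issue, and the
# team records candidate nudge "forms" (None when no feasible form is
# identified). All entries below are illustrative assumptions.

MINDSPACE = ["Messenger", "Incentives", "Norms", "Defaults",
             "Salience", "Priming", "Affect", "Commitments", "Ego"]

modifiable_issues = [
    "informational needs",   # eg, uncertainty about dosing
    "competing demands",     # eg, limited visit time
]

# Start with an empty grid; team members fill cells independently,
# then discuss together.
grid = {(nudge, issue): None
        for nudge in MINDSPACE for issue in modifiable_issues}

# Example entries a team member might record:
grid[("Defaults", "competing demands")] = "pre-selected medication order"
grid[("Salience", "informational needs")] = "highlighted lab values"

# Summarize coverage: which nudge types yielded at least one form?
covered = {nudge for (nudge, _), form in grid.items() if form}
print(sorted(covered))  # ['Defaults', 'Salience']
```

Empty cells make explicit which nudge types lack a feasible form for a given issue, which mirrors the finding that some types (eg, Commitments) had no practical application in the MRA example.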

In this step, we suggest using visual prototypes or mockups of how the different nudge types could address the different modifiable issues within the overarching CDS format. Mockups can be helpful to efficiently aid understanding and facilitate discussion among the team. In most cases, low-fidelity static prototypes or wireframes are sufficient to illustrate the nudge form and are conducive to rapid iterations.

For the MRA CDS tool example, two clinician informaticists familiar with the clinical situation and the technical capabilities of CDS (KT and SZ) systematically and independently mapped the MINDSPACE nudge types to the modifiable issues identified in Step 1 by creating different forms for each nudge type. They mocked up each nudge form as a low-fidelity, static prototype of the overarching CDS format (interruptive CDS). The prototypes consisted of screenshots of other interruptive CDS that they edited using Microsoft PowerPoint. Based on their knowledge of technical feasibility and the clinical situation, they did not identify any practical forms for the “Commitments” or “Incentives” nudge types. Although the team considered requiring clinicians to provide a free-text reason for deferring the prescription that would then be saved to the patient’s EHR, this was quickly deemed infeasible given technical limitations and thus was not mocked up into a prototype.

Using the mockups, the team then discusses the different forms for each modifiable contextual issue and rates them based on (1) anticipated impact on changing clinician behavior, (2) potential unintended consequences, and (3) technical feasibility to build. In this step, PRISM’s RE-AIM outcomes can prove useful in anticipating the potential impact of a nudge form on pragmatic outcomes [57,58]. As with any technology, the usability and user experience of a given nudge are important determinants to consider when anticipating impact on outcomes, especially those upstream of effectiveness, including clinician adoption and implementation fidelity. Considering the impact of a nudge form based on RE-AIM, including explicit discussion of potential unintended consequences, promotes safe and ethical use of nudges, fosters a culture of “do no harm,” and attends to issues of equity and autonomy. RE-AIM can also help anticipate the magnitude of impact a nudge form may have on various outcomes, including changing clinician behavior, and help determine whether the effort is worth the return [58].

It is important to assess technical feasibility as part of examining the overall effort required. Depending on team composition, ratings of technical feasibility may be reserved for a separate conversation with EHR technical staff (eg, analysts, builders) with this knowledge. A focused discussion regarding the ratio of “benefit to resources required” is important to design practical nudges that consider the impact within resource constraints [33]. For example, if a particular nudge form is anticipated to have low adoption and be moderately effective at changing behavior but will require a high degree of ongoing resource allocation, it may not be prioritized.
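As a rough illustration of the rating and “benefit to resources” discussion, candidate nudge forms could be tabulated and ranked as follows; the forms, scales, scores, and scoring formula are all hypothetical assumptions, not part of the published approach.

```python
# Illustrative sketch of rating candidate nudge forms on the three
# criteria described above: anticipated impact, unintended consequences,
# and technical feasibility (1 = low, 3 = high). All values are
# hypothetical assumptions for illustration.

candidates = {
    # form: (impact, unintended_consequences, feasibility)
    "pre-selected medication order": (3, 1, 2),
    "highlighted lab values":        (2, 1, 2),
    "peer-comparison message":       (2, 3, 3),  # risk of complacency
}

def value(scores):
    impact, harm, feasibility = scores
    # Favor high impact and feasibility; penalize unintended consequences.
    return (impact * feasibility) / harm

ranked = sorted(candidates, key=lambda f: value(candidates[f]), reverse=True)
print(ranked[0])  # form with the greatest anticipated value
```

In the approach described here, the ratings come from structured team discussion rather than a formula; the point of the sketch is only that a transparent, documented scoring scheme makes the benefit-to-resource trade-off explicit.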

For the MRA example, the prototypes guided a discussion between the two clinician informaticists and two other members of the team, a social psychologist and a clinician with expertise in decision science. The team discussed and rated the prototypes based on impact, unintended consequences, and technical feasibility. This discussion and rating process occurred over a series of meetings in which other members of the broader team were consulted as questions arose. For example, the heart failure specialist was consulted when questions arose about the appropriateness of encouraging certain clinical decisions and the operational capacity to include a link for referral to cardiology within the CDS.

Throughout this prioritization process, the nudges with the greatest anticipated value (high impact, minimal or no unintended consequences, low resource burden) are selected and used to iteratively refine CDS mockups that integrate the different combinations of nudge forms within the overarching CDS format in ways that are user-friendly. Principles of UCD and CDS design best practices need to be considered when designing the nudge forms, especially when combining them within the overarching CDS format to optimize human-computer interaction. To promote transparency and generalizability, it is important to document decisions and the reasons for them. Ultimately, the CDS mockups resulting from this step are iteratively tested and refined using traditional UCD approaches (eg, simulated scenarios and usability testing) to optimize usability and user experience.

Aligned with the goals of learning health systems, once a minimum viable product is defined, an evaluation plan is critical to ensuring positive impact and to iteratively refine the tool over time. This includes consideration of a study design and pragmatic outcome measurement. PRISM and PRISM’s RE-AIM outcome measures (Reach, Effectiveness, Adoption, Implementation, Maintenance) provide a framework for measuring both implementation and effectiveness outcomes iteratively over time along with the representativeness (equity) of the outcomes and important contextual drivers. More details of how to use PRISM and its RE-AIM outcomes to evaluate CDS tools within learning health systems along with examples can be found in other references [32,59].
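For illustration, pragmatic RE-AIM outcomes are often operationalized as simple proportions tracked over time. The metric definitions and numbers below are assumed for illustration and are not from the authors' evaluation protocol.

```python
# Hypothetical sketch of pragmatic RE-AIM outcome measures for a CDS
# evaluation plan; metric definitions and counts are illustrative
# assumptions, not the study's protocol.

def proportion(numerator, denominator):
    return numerator / denominator if denominator else 0.0

re_aim = {
    # Reach: eligible patients whose encounters triggered the CDS
    "reach": proportion(180, 240),
    # Adoption: clinicians who acted on the CDS at least once
    "adoption": proportion(45, 90),
    # Effectiveness: change in guideline-concordant prescribing
    "effectiveness": proportion(60, 180) - proportion(30, 180),
}

for outcome, result in re_aim.items():
    print(f"{outcome}: {result:.2f}")
```

Recomputing these proportions at regular intervals, along with their representativeness across patient subgroups, supports the iterative refinement cycle described above.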

For the MRA example, this process led to the development of three low-fidelity static prototypes of the CDS tool that integrated various combinations of the nudge types of salience, messenger, incentives, and defaults in different forms. These three prototypes were iteratively refined by sharing them with the broader implementation team and then through a traditional UCD process with potential clinician end users. Examples of the three low-fidelity prototypes and the final CDS tool are available in Multimedia Appendix 1, with illustrative examples of the different nudge types. To evaluate the impact of the MRA CDS tool, we are currently comparing 2 versions of it in a 6-month randomized controlled trial across outpatient cardiology and primary care practices. The mixed methods evaluation is guided by PRISM and PRISM’s RE-AIM outcomes and includes clinician interviews and quantitative EHR data.

Time and Resource Needs

In developing this approach, we initially met frequently, and some meetings were dedicated to trialing different approaches (eg, nominal group technique). We ultimately settled on the approach described here, which involved two 1-hour meetings between 2 members of the team (KT and SZ) who rapidly prototyped and prioritized nudge designs; three 1-hour meetings for input from experts in social psychology and decision science (KT, DM, LS, and JM); and periodic asynchronous input via email from other members of the multidisciplinary team (DM, LA, CL, AH, and JM). Notably, we did not include a patient perspective in the multidisciplinary group that designed the nudges, but we did engage patients early in Step 1 to ensure the CDS was designed with their preferences in mind.


Principal Findings and Comparison With Previous Works

We describe a new systematic and pragmatic approach to the selection and prioritization of nudges that can be used to comprehensively consider key issues in designing CDS to optimize feasibility, effectiveness, equity, and sustainability while minimizing the potential for unintended consequences. Grounded in implementation science principles and approaches, this 4-step approach involves (1) engaging partners for UCD, (2) developing a shared understanding of the nudge types, (3) determining the overarching CDS format, and (4) brainstorming and prioritizing nudges to address each modifiable contextual issue. The approach prioritizes those applications of nudges with the greatest potential to maximize the effectiveness of the intended behavior change, minimize unintended consequences, and align with the variable constraints of time and personnel available to design and build CDS tools in ways that are sustainable. We focused on clinician-facing nudges, recognizing that nudges can also be patient-facing [51].

Our findings present one way of systematically and comprehensively selecting nudges for inclusion in CDS tools using an implementation science framework, PRISM [41]. We considered other approaches, including the nominal group technique, but found them cumbersome and less practical. The approach we present is intended to be pragmatic, iterative, and rapid within the context of a learning health system, and it should flex to fit the needs, resources, and expertise available within a given health system. For example, not all health systems have access to experts in social psychology with in-depth knowledge of behavioral economics, but this limitation should not preclude them from applying this approach. In other situations, a health system may need to expedite the design and implementation of a CDS solution to address a safety issue [20], and this systematic approach may need to be abbreviated without the luxury of a series of discussions over time.

Despite growing application of nudges to CDS tools [28-31,60,61], there is a paucity of guidance on how to apply nudges, and recent calls have been made to refine such strategies [62,63]. One framework that others have used to leverage insights from behavioral economics when designing implementation strategies is Easy-Attractive-Social-Timely (EAST) [64,65]. The 4 domains of EAST—easy, attractive, social, and timely—can provide broad guidance when designing nudges for CDS but may not provide the specificity some CDS developers need, nor does EAST provide focused guidance on issues of equity, sustainability, or unintended consequences [64]. In contrast, the approach we describe provides more granular direction on how to select among the various types of nudges by leveraging the MINDSPACE framework [39] while also considering the broad implementation context via its use of the PRISM implementation science framework. Further, our integration of PRISM into this approach provides guidance on how to design nudges in ways that optimize equity and sustainability while minimizing potential unintended consequences of nudges [28-31,41-43,47,48].

When considering the research implications of our findings, this approach addresses an unmet need for guidance on how to systematically apply nudges to CDS tools in ways that allow for adaptation and tailoring to the local context but that are also replicable. This pragmatic and rigorous approach has the potential to optimize the positive impact (eg, effectiveness, equity, sustainability) of CDS tools locally by ensuring relevance, while the systematic process and use of an implementation science framework simultaneously facilitate the scalability of CDS tools and their effect [3]. In using this approach to optimize CDS tools via nudges, it is hoped that the well-known knowledge-to-practice gap can be narrowed [66,67].

Aligned with the learning health system goals of being rapid and improving the quintuple aim, we encourage the use of this approach to create minimum viable products that are not anticipated to cause harm [68]. In other words, aim for good enough and not perfect. Technology and health care are rapidly changing, and aiming for perfect is likely an unachievable goal that will only stymie progress. The goal of this systematic approach to nudges is to aim for a CDS solution that is “good enough” while being confident that no harm will result and having a plan for continual improvement over time. This approach provides guidance on how to integrate methods and principles from behavioral economics with implementation science and traditional UCD principles to intentionally design nudges (and other behavioral economics strategies) for CDS that are equitable, effective, sustainable, and generalizable. This approach considers the complexity of how the overarching CDS itself is a nudge that can have layers of embedded nudges, all of which must work together and align with the dynamically changing context of health care locally and nationally to positively change behavior [69,70].

Table 2 outlines tips for success when using this approach, and Table 3 outlines considerations that surfaced during its development.

Table 2. Recommendations for applying this systematic approach to nudges for CDSa tools.
Iterative step | Keys to success
Engage partners for UCD
  • Recruit partners within a wide range of disciplines and job roles
  • Conduct an expansive review of contributing factors to the problem, then leverage partner input to delineate which contextual factors are modifiable through CDS
  • Offer multiple formats of partner engagement ranging from participation on the implementation team to occasional emails providing usability feedback
  • Ensure the patient perspective is captured, which may include integrating them within the CDS implementation team
Develop a shared understanding of the nudge types
  • Select a nudge psychology framework to identify nudge types that is broadly interpretable by a multidisciplinary team
  • Iterate and enhance the shared “mental model” of nudge types using case examples
Determine the overarching CDS format
  • Do not overlook the format of the CDS tool itself as a potential opportunity to apply nudge psychology, in addition to nudges contained within the tool’s contents
Brainstorm and prioritize nudges to address each modifiable contextual issue
  • Complete brainstorming steps independently, then engage partners for discussion and prioritization of nudge types and forms
  • Mock up static prototype images of nudge forms to aid understanding and generate feedback
  • Systematically rate each nudge form on anticipated impact, potential unintended consequences, and technical feasibility to build
  • Document the design and decision process for future reuse

aCDS: clinical decision support.

Table 3. Frequently asked questions to consider as you apply this approach.
Steps and questions | Answers
Step 1. Engage partners for user-centered design
What areas of expertise are helpful to engage during this process?
Expertise in CDSa systems, the clinical area of interest, decision science, implementation science, and ethics is helpful whenever feasible. Additionally, consider individuals involved in the entire CDS use cycle, including health system leaders involved in governance decisions, clinician and patient end users, and informatics teams involved in the build and maintenance of the tool.
What methods of partner engagement were most effective?
Step 1 emphasizes the importance of diverse user-centered and multi-level perspectives, but this must be balanced with resource and scalability constraints. We found success in varying the formality and format of engagement activities depending on the goals of the engagement (eg, semi-structured interviews and focus groups for brainstorming, one-on-one meetings for testing technical build options). This facilitated increased representation without compromising efficiency.
Step 2. Develop a shared understanding of the nudge types
Did the team encounter any challenges with applying a nudge framework to technical build?
Not all nudge types are easily translatable into a CDS solution. However, key to this approach is the brainstorming process, where the team explored many different ways to implement each nudge type prior to rating the solutions based on technical feasibility and impact. This encouraged “out-of-the-box” problem-solving that did not rely on local standards or vendor capabilities.
Step 3. Determine the overarching CDS format
Are there differences in the strategy for determining the overarching nudge type versus embedded nudges?
While the nudge ladder can inform embedded nudges as well, we found it particularly helpful for determining the nudge type of the overarching CDS format, as it correlates well with existing CDS best practices such as aligning intrusiveness and frequency with perceived risk. Additionally, the sustainability and generalizability of the overarching CDS format should be considered.
Step 4. Brainstorm and prioritize nudges to address each modifiable contextual issue
What are some examples of potential unintended consequences?
We considered designing a “norms” nudge that would compare each clinician’s prescribing rate to that of their peers but ultimately decided that this could lead to complacency among clinicians already performing better than their peers.

aCDS: clinical decision support.

The approach we present to selecting nudges should be used as a guide and adapted as needed. When adapted, we encourage documenting and reporting how the process was adapted, to promote rigor and replicability. Those seeking to replicate this approach for their health system may also choose to use a different implementation science framework that is more familiar to them or that fits their situation better [71,72]. There are also other frameworks to categorize nudge types that could be used [25].

Strengths and Limitations

Limitations of this approach include the costs of personnel time incurred from multiple meetings; these are important to consider and balance when deciding how to apply this approach. Although our use of a multidisciplinary team-based approach is a strength, not all health systems will have access to the same types of expertise. In this paper, we describe how one health system applied the approach based on their available resources and expertise, which may not be generalizable for all health systems. However, this approach is intended to be pragmatic, and we encourage health systems to adapt it to fit within the available resources and expertise. Further, although this team-based approach may rely on consensus for decision-making, decisions are grounded in evidence-based principles of human-computer interaction and guided by principles of UCD and conceptual frameworks, namely MINDSPACE and PRISM from implementation science.

Conclusions

By using a systematic approach to selecting nudges and documenting and reporting reasons for decisions, the findings can be adapted and generalized to other health settings and clinical situations, advancing the goals of learning health systems to generate evidence that is both internally and externally valid [3]. Grounded in implementation science, this approach has the potential to improve the effectiveness, equity, and sustainability of CDS tools while minimizing the potential for unintended consequences. We hope that lessons learned using this and similar approaches will be used to make these approaches more accessible and useful to broad audiences to optimize the effectiveness of CDS and ultimately expedite the translation of evidence into practice. Future research is also needed to evaluate the impact of using this approach on CDS effectiveness.

Acknowledgments

Dr. Trinkley’s time was supported by the NHLBI (1K23HL161352). Dr. Trinkley, Dr. Maw, Meagan Bean, and Danielle Maestas Duran’s time was supported in part by AHA (23SCISA1144584). Dr. Huebschmann’s time was supported in part by NHLBI (UH3 HL151297) and NIDDK (P30 DK092923). Dr. Allen reports consulting fees from Abbott, ACI Clinical, Amgen, Boston Scientific, Cytokinetics, and Novartis.

Conflicts of Interest

None declared.

Multimedia Appendix 1

Prototypes of a Clinical Decision Support (CDS) Tool to Improve Guideline-Concordant Prescribing of Mineralocorticoid Receptor Antagonists (MRA) for Patients with Heart Failure.

DOCX File, 1838 KB

  1. McClellan M, Brown N, Califf RM, Warner JJ. Call to action: urgent challenges in cardiovascular disease: a presidential advisory from the American Heart Association. Circulation. Feb 26, 2019;139(9):e44-e54. [CrossRef] [Medline]
  2. Grant J, Green L, Mason B. Basic research and health: a reassessment of the scientific basis for the support of biomedical science. Res Eval. Dec 1, 2003;12(3):217-224. [CrossRef]
  3. Trinkley KE, Ho PM, Glasgow RE, Huebschmann AG. How dissemination and implementation science can contribute to the advancement of learning health systems. Acad Med. Oct 1, 2022;97(10):1447-1458. [CrossRef] [Medline]
  4. Lappé JM, Muhlestein JB, Lappé DL, et al. Improvements in 1-year cardiovascular clinical outcomes associated with a hospital-based discharge medication program. Ann Intern Med. Sep 21, 2004;141(6):446-453. [CrossRef] [Medline]
  5. Gianos E, Schoenthaler A, Guo Y, et al. Investigation of motivational interviewing and prevention consults to achieve cardiovascular targets (IMPACT) trial. Am Heart J. May 2018;199:37-43. [CrossRef] [Medline]
  6. Chamberlain MA, Sageser NA, Ruiz D. Comparison of anticoagulation clinic patient outcomes with outcomes from traditional care in a family medicine clinic. J Am Board Fam Pract. 2001;14(1):16-21. [Medline]
  7. Hendriks JML, de Wit R, Crijns HJGM, et al. Nurse-led care vs. usual care for patients with atrial fibrillation: results of a randomized trial of integrated chronic care vs. routine clinical care in ambulatory patients with atrial fibrillation. Eur Heart J. Nov 2012;33(21):2692-2699. [CrossRef] [Medline]
  8. Ornstein S, Jenkins RG, Nietert PJ, et al. A multimethod quality improvement intervention to improve preventive cardiovascular care: a cluster randomized trial. Ann Intern Med. Oct 5, 2004;141(7):523-532. [CrossRef] [Medline]
  9. Wright J, Bibby J, Eastham J, et al. Multifaceted implementation of stroke prevention guidelines in primary care: cluster-randomised evaluation of clinical and cost effectiveness. Qual Saf Health Care. Feb 2007;16(1):51-59. [CrossRef] [Medline]
  10. Sim I, Gorman P, Greenes RA, et al. Clinical decision support systems for the practice of evidence-based medicine. J Am Med Inform Assoc. 2001;8(6):527-534. [CrossRef] [Medline]
  11. Gold R, Sheppler C, Hessler D, et al. Using electronic health record-based clinical decision support to provide social risk-informed care in community health centers: protocol for the design and assessment of a clinical decision support tool. JMIR Res Protoc. Oct 8, 2021;10(10):e31733. [CrossRef] [Medline]
  12. Fathauer L, Meek J. Initial implementation and evaluation of a hepatitis C treatment clinical decision support system (CDSS): a nurse practitioner-driven quality improvement initiative. Appl Clin Inform. 2012;3(3):337-348. [CrossRef] [Medline]
  13. Ali SM, Giordano R, Lakhani S, Walker DM. A review of randomized controlled trials of medical record powered clinical decision support system to improve quality of diabetes care. Int J Med Inform. Mar 2016;87:91-100. [CrossRef] [Medline]
  14. Karlsson LO, Nilsson S, Charitakis E, et al. Clinical decision support for stroke prevention in atrial fibrillation (CDS-AF): rationale and design of a cluster randomized trial in the primary care setting. Am Heart J. May 2017;187:45-52. [CrossRef] [Medline]
  15. Trinkley KE, Kroehl ME, Kahn MG, et al. Applying clinical decision support design best practices with the practical robust implementation and sustainability model versus reliance on commercially available clinical decision support tools: randomized controlled trial. JMIR Med Inform. Mar 22, 2021;9(3):e24359. [CrossRef] [Medline]
  16. Blecker S, Austrian JS, Horwitz LI, et al. Interrupting providers with clinical decision support to improve care for heart failure. Int J Med Inform. Nov 2019;131:103956. [CrossRef] [Medline]
  17. Arts DL, Abu-Hanna A, Medlock SK, van Weert H. Effectiveness and usage of a decision support system to improve stroke prevention in general practice: a cluster randomized controlled trial. PLoS ONE. 2017;12(2):e0170974. [CrossRef] [Medline]
  18. Vani A, Kan K, Iturrate E, et al. Leveraging clinical decision support tools to improve guideline-directed medical therapy in patients with atherosclerotic cardiovascular disease at hospital discharge. Cardiol J. 2022;29(5):791-797. [CrossRef] [Medline]
  19. Chokshi SK, Troxel A, Belli H, et al. User-centered development of a behavioral economics inspired electronic health record clinical decision support module. Stud Health Technol Inform. Aug 21, 2019;264:1155-1158. [CrossRef] [Medline]
  20. Shakowski C, Page Ii RL, Wright G, et al. Comparative effectiveness of generic commercial versus locally customized clinical decision support tools to reduce prescription of nonsteroidal anti-inflammatory drugs for patients with heart failure. J Am Med Inform Assoc. Aug 18, 2023;30(9):1516-1525. [CrossRef] [Medline]
  21. Ancker JS, Edwards A, Nosal S, et al. Effects of workload, work complexity, and repeated alerts on alert fatigue in a clinical decision support system. BMC Med Inform Decis Mak. Apr 10, 2017;17(1):36. [CrossRef] [Medline]
  22. Murad DA, Tsugawa Y, Elashoff DA, Baldwin KM, Bell DS. Distinct components of alert fatigue in physicians’ responses to a noninterruptive clinical decision support alert. J Am Med Inform Assoc. Dec 13, 2022;30(1):64-72. [CrossRef] [Medline]
  23. Lu Y, Melnick ER, Krumholz HM. Clinical decision support in cardiovascular medicine. BMJ. May 25, 2022;377:e059818. [CrossRef] [Medline]
  24. Jenssen BP, Schnoll R, Beidas RS, et al. Cluster randomized pragmatic clinical trial testing behavioral economic implementation strategies to improve tobacco treatment for patients with cancer who smoke. J Clin Oncol. Oct 1, 2023;41(28):4511-4521. [CrossRef] [Medline]
  25. Last BS, Buttenheim AM, Timon CE, Mitra N, Beidas RS. Systematic review of clinician-directed nudges in healthcare contexts. BMJ Open. Jul 12, 2021;11(7):e048801. [CrossRef] [Medline]
  26. Patel MS, Volpp KG, Asch DA. Nudge units to improve the delivery of health care. N Engl J Med. Jan 18, 2018;378(3):214-216. [CrossRef] [Medline]
  27. Mertens S, Herberz M, Hahnel UJJ, Brosch T. The effectiveness of nudging: a meta-analysis of choice architecture interventions across behavioral domains. Proc Natl Acad Sci USA. Jan 4, 2022;119(1):e2107346118. [CrossRef] [Medline]
  28. Adusumalli S, Kanter GP, Small DS, et al. Effect of nudges to clinicians, patients, or both to increase statin prescribing: a cluster randomized clinical trial. JAMA Cardiol. Jan 1, 2023;8(1):23-30. [CrossRef] [Medline]
  29. Frederick KD, Gatwood JD, Atchley DR, et al. Exploring the early phase of implementation of a vaccine-based clinical decision support system in the community pharmacy. J Am Pharm Assoc (2003). 2020;60(6):e292-e300. [CrossRef] [Medline]
  30. Richardson S, Dauber-Decker KL, Solomon J, et al. Effect of a behavioral nudge on adoption of an electronic health record-agnostic pulmonary embolism risk prediction tool: a pilot cluster nonrandomized controlled trial. JAMIA Open. Oct 2024;7(3):ooae064. [CrossRef] [Medline]
  31. Richardson S, Dauber-Decker K, Solomon J, et al. Nudging health care providers’ adoption of clinical decision support: protocol for the user-centered development of a behavioral economics-inspired electronic health record tool. JMIR Res Protoc. Jan 18, 2023;12:e42653. [CrossRef] [Medline]
  32. Trinkley KE, Kahn MG, Bennett TD, et al. Integrating the practical robust implementation and sustainability model with best practices in clinical decision support design: implementation science approach. J Med Internet Res. Oct 29, 2020;22(10):e19676. [CrossRef] [Medline]
  33. Atkins D. So many nudges, so little time: can cost-effectiveness tell us when it is worthwhile to try to change provider behavior? J Gen Intern Med. Jun 2019;34(6):783-784. [CrossRef] [Medline]
  34. Harrison JD, Patel MS. Designing nudges for success in health care. AMA J Ethics. Sep 1, 2020;22(9):E796-E801. [CrossRef] [Medline]
  35. Manz CR, Zhang Y, Chen K, et al. Long-term effect of machine learning-triggered behavioral nudges on serious illness conversations and end-of-life outcomes among patients with cancer: a randomized clinical trial. JAMA Oncol. Mar 1, 2023;9(3):414-418. [CrossRef] [Medline]
  36. Ho PM, Glorioso TJ, Allen LA, et al. Personalized patient data and behavioral nudges to improve adherence to chronic cardiovascular medications: a randomized pragmatic trial. JAMA. Jan 7, 2025;333(1):49-59. [CrossRef] [Medline]
  37. Trinkley KE, Wright G, Allen LA, et al. Sustained effect of clinical decision support for heart failure: a natural experiment using implementation science. Appl Clin Inform. Oct 2023;14(5):822-832. [CrossRef] [Medline]
  38. Sommers SW, Tolle HJ, Trinkley KE, et al. Clinical decision support to increase emergency department naloxone coprescribing: implementation report. JMIR Med Inform. Nov 6, 2024;12:e58276. [CrossRef] [Medline]
  39. Halpern D, Paul D, Hallsworth M, King D, Vlaev I. MINDSPACE: influencing behaviour through public policy. Institute for Government; 2016. URL: https://tinyurl.com/6semkmpe [Accessed 2025-09-27]
  40. Hodson N, Powell BJ, Nilsen P, Beidas RS. How can a behavioral economics lens contribute to implementation science? Implement Sci. Apr 26, 2024;19(1):33. [CrossRef] [Medline]
  41. Feldstein AC, Glasgow RE. A practical, robust implementation and sustainability model (PRISM) for integrating research findings into practice. Jt Comm J Qual Patient Saf. Apr 2008;34(4):228-243. [CrossRef] [Medline]
  42. Rabin BA, Cakici J, Golden CA, Estabrooks PA, Glasgow RE, Gaglio B. A citation analysis and scoping systematic review of the operationalization of the Practical, Robust Implementation and Sustainability Model (PRISM). Implement Sci. Sep 24, 2022;17(1):62. [CrossRef] [Medline]
  43. Glasgow RE, Trinkley KE, Ford B, Rabin BA. The application and evolution of the practical, robust implementation and sustainability model (PRISM): history and innovations. Glob Implement Res Appl. 2024;4(4):404-420. [CrossRef] [Medline]
  44. Glasgow RE, Harden SM, Gaglio B, et al. RE-AIM planning and evaluation framework: adapting to new science and practice with a 20-year review. Front Public Health. 2019;7:64. [CrossRef] [Medline]
  45. Glasgow RE, Vogt TM, Boles SM. Evaluating the public health impact of health promotion interventions: the RE-AIM framework. Am J Public Health. Sep 1999;89(9):1322-1327. [CrossRef] [Medline]
  46. Nilsen P, editor. Implementation Science: Theory and Application. Routledge; 2024. URL: https://tinyurl.com/pmtdvsj6 [Accessed 2025-09-18] ISBN: 9781032330846
  47. Trinkley KE, Glasgow RE, D’Mello S, Fort MP, Ford B, Rabin BA. The iPRISM webtool: an interactive tool to pragmatically guide the iterative use of the practical, robust implementation and sustainability model in public health and clinical settings. Implement Sci Commun. Sep 19, 2023;4(1):116. [CrossRef] [Medline]
  48. Fort MP, Manson SM, Glasgow RE. Applying an equity lens to assess context and implementation in public health and health services research and practice using the PRISM framework. Front Health Serv. 2023;3:1139788. [CrossRef] [Medline]
  49. Heidenreich PA, Bozkurt B, Aguilar D, et al. 2022 AHA/ACC/HFSA guideline for the management of heart failure: executive summary: a report of the American college of cardiology/American heart association joint committee on clinical practice guidelines. Circulation. May 3, 2022;145(18):e876-e894. [CrossRef] [Medline]
  50. Greene SJ, Butler J, Albert NM, et al. Medical therapy for heart failure with reduced ejection fraction: the CHAMP-HF registry. J Am Coll Cardiol. Jul 24, 2018;72(4):351-366. [CrossRef] [Medline]
  51. Allen LA, Venechuk G, McIlvennan CK, et al. An electronically delivered patient-activation tool for intensification of medications for chronic heart failure with reduced ejection fraction: the EPIC-HF trial. Circulation. Feb 2, 2021;143(5):427-437. [CrossRef] [Medline]
  52. Trinkley K, Fort M, McNeal D, Green L, Huebschmann A. Furthering dissemination and implementation research: the need for more attention to external validity. In: Brownson RC, Colditz GA, Proctor EK, editors. Dissemination and Implementation Research in Health: Translating Science to Practice. 2nd ed. Oxford University Press; 2023. [CrossRef]
  53. Kwan BM, Brownson RC, Glasgow RE, Morrato EH, Luke DA. Designing for dissemination and sustainability to promote equitable impacts on health. Annu Rev Public Health. Apr 5, 2022;43:331-353. [CrossRef] [Medline]
  54. Trinkley KE, Blakeslee WW, Matlock DD, et al. Clinician preferences for computerised clinical decision support for medications in primary care: a focus group study. BMJ Health Care Inform. Apr 2019;26(1):0. [CrossRef] [Medline]
  55. Trinkley KE, Kahn MG, Allen LA, et al. Patient treatment preferences for heart failure medications: a mixed methods study. Patient Prefer Adherence. 2020;14:2225-2230. [CrossRef] [Medline]
  56. Kim R, Suresh K, Rosenberg MA, et al. A machine learning evaluation of patient characteristics associated with prescribing of guideline-directed medical therapy for heart failure. Front Cardiovasc Med. 2023;10. [CrossRef]
  57. Maw AM, Morris MA, Glasgow RE, et al. Using iterative RE-AIM to enhance hospitalist adoption of lung ultrasound in the management of patients with COVID-19: an implementation pilot study. Implement Sci Commun. Aug 12, 2022;3(1):89. [CrossRef] [Medline]
  58. Glasgow RE, Battaglia C, McCreight M, et al. Use of the reach, effectiveness, adoption, implementation, and maintenance (RE-AIM) framework to guide iterative adaptations: applications, lessons learned, and future directions. Front Health Serv. 2022;2:959565. [CrossRef] [Medline]
  59. Trinkley KE, Maw AM, Torres CH, Huebschmann AG, Glasgow RE. Applying implementation science to advance electronic health record-driven learning health systems: case studies, challenges, and recommendations. J Med Internet Res. Oct 7, 2024;26:e55472. [CrossRef] [Medline]
  60. Schnoll R, Blumenthal D, Wileyto EP, et al. Pilot trial of a behavioral economics-informed clinical decision support alert to improve inpatient tobacco use treatment rates. Nicotine Tob Res. Jun 17, 2025:ntaf129. [CrossRef] [Medline]
  61. Rowe TA, Brown T, Lee JY, et al. Development and pilot testing of EHR-nudges to reduce overuse in older primary care patients. Arch Gerontol Geriatr. Jan 2023;104:104794. [CrossRef] [Medline]
  62. Fuery MA, Clark KA, Sikand NV, et al. Electronic health record nudges to optimize guideline-directed medical therapy for heart failure. Heart Fail Rev. Jul 2025;30(4):771-776. [CrossRef] [Medline]
  63. Chen Y, Harris S, Rogers Y, Ahmad T, Asselbergs FW. Nudging within learning health systems: next generation decision support to improve cardiovascular care. Eur Heart J. Mar 31, 2022;43(13):1296-1306. [CrossRef] [Medline]
  64. EAST: four simple ways to apply behavioural insights. The Behavioural Insights Team. URL: https://www.bi.team/publications/east-four-simple-ways-to-apply-behavioural-insights/ [Accessed 2025-08-12]
  65. Waddell KJ, Rafferty MR, Edelstein J, Beidas RS. Harnessing behavioral economics to accelerate implementation in rehabilitation. Am J Phys Med Rehabil. Aug 1, 2025;104(8):750-754. [CrossRef] [Medline]
  66. Harden SM, Balis LE, Strayer T III, Wilson ML. Assess, plan, do, evaluate, and report: iterative cycle to remove academic control of a community-based physical activity program. Prev Chronic Dis. 2021;18:E32. [CrossRef]
  67. Fixsen DL, Blasé KA, Timbers GD, Wolf MM. In search of program implementation: 792 replications of the Teaching-Family Model. Behav Anal Today. 2007;8(1):96-110. [CrossRef]
  68. Nundy S, Cooper LA, Mate KS. The quintuple aim for health care improvement: a new imperative to advance health equity. JAMA. Feb 8, 2022;327(6):521-522. [CrossRef] [Medline]
  69. Chambers DA, Glasgow RE, Stange KC. The dynamic sustainability framework: addressing the paradox of sustainment amid ongoing change. Implement Sci. Oct 2, 2013;8:117. [CrossRef] [Medline]
  70. Shelton RC, Chambers DA, Glasgow RE. An extension of RE-AIM to enhance sustainability: addressing dynamic context and promoting health equity over time. Front Public Health. 2020;8:134. [CrossRef] [Medline]
  71. Damschroder LJ, Reardon CM, Widerquist MAO, Lowery J. The updated consolidated framework for implementation research based on user feedback. Implement Sci. Oct 29, 2022;17(1):75. [CrossRef] [Medline]
  72. Aarons GA, Hurlburt M, Horwitz SMC. Advancing a conceptual model of evidence-based practice implementation in public service sectors. Adm Policy Ment Health. Jan 2011;38(1):4-23. [CrossRef] [Medline]


CDS: clinical decision support
EAST: Easy-Attractive-Social-Timely
EHR: electronic health record
HFrEF: heart failure with reduced ejection fraction
MINDSPACE: Messenger, Incentives, Norms, Defaults, Salience, Priming, Affect, Commitments, and Ego
MRA: mineralocorticoid receptor antagonist
PRISM: Practical, Robust Implementation and Sustainability Model
RE-AIM: Reach, Effectiveness, Adoption, Implementation, Maintenance
UCD: user-centered design


Edited by Mohamed Estai; submitted 26.Feb.2025; peer-reviewed by Adeola Bamgboje-Ayodele, Amir Goren, Jason Hoppe; final revised version received 13.Aug.2025; accepted 13.Aug.2025; published 25.Sep.2025.

Copyright

© Katy E Trinkley, Danielle Maestas Duran, Shelley Zhang, Meagan Bean, Larry A Allen, Russell E Glasgow, Amy G Huebschmann, Chen-Tan Lin, Jason N Mansoori, Anna M Maw, James Mitchell, Laura D Scherer, Daniel D Matlock. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 25.Sep.2025.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research (ISSN 1438-8871), is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.