Viewpoint
Abstract
Recent years have seen an acceleration in the development and uptake of artificial intelligence (AI) systems by “early adopter” hospitals, caught between the pressures to “perform” and “transform” in a struggling health care system. This transformation has raised concerns among health care providers as their voices and location-specific workflows have often been overlooked, resulting in technologies that fail to integrate meaningfully into routine care and worsen rather than improve care processes. How can positive AI implementation be carried out in health care, aligned with European values? Based on a perspective that spans all stakeholders, we have created EURAID (European Responsible AI Development), a practical, human-centric framework for AI development and implementation based on agreed goals and values. We illustrate this approach through the co-development of a narrow-purpose “in-house” AI system, designed to help bridge the AI implementation gap in real-world clinical settings. This example is then expanded to address the broader challenges associated with complex, multiagent AI systems. By portraying all key stakeholders across the AI development life cycle and highlighting their roles and contributions within the process, real use cases, and methods for achieving iterative consensus, we offer a unique practical approach for safe and fast progress in hospital digital transformation in the AI age.
J Med Internet Res 2026;28:e80754. doi: 10.2196/80754
The Transformation of Future Medicine Through Artificial Intelligence Technologies
Will the slogans already heard in health care system strikes, such as “Trust Nurses, Not AI” and “AI has got to go!”[,], become more common? They reflect growing concerns about the evolving role of health care professionals (HCPs) in a changing health system, which persist despite reports that 20% of National Health Service (NHS) doctors are already using artificial intelligence (AI) daily []. Although the importance of digital transformation to enhance the efficiency of care delivery and to provide better models of care suited to the modern age [-] is well recognized within care systems [-], it often cannot be comprehensively addressed, as health care systems worldwide find themselves caught between the need to both “perform” and “transform” while “firefighting” ongoing challenges [-]. The application of AI technologies has the potential to address some of these aspects (), as it can speed digital transformation and (at least if applied well and if the associated potential barriers and uncertainties are jointly recognized and resolved) make health care more accessible, effective, and economically sustainable []. Examples of the positive impact of good AI implementation are (1) enhancement of clinical practice, particularly in areas such as diagnosis and personalized medicine [,,]; (2) workflow improvements, by supporting administrative tasks such as transcription, patient communication, and patient-related recordkeeping [,]; and (3) increased operational efficiency, through the optimization of routine processes, enabling HCPs to work in a more patient-centered way [] and potentially contributing to cost reductions [,]. With the recent introduction of “agentic AI” [-] and autonomous AI-enabled systems [,], far more systemic complexity can be handled by AI [].
| Current health system problems | Possible digital and AIa solutions | Implementation challenges and risks |
| Administrative workload unrelated to direct patient care [,], inefficient workflows, and fragmented communication burden on HCPsb. | Automation of administrative and routine tasks, and AI-driven workflow optimization, allowing people to focus on patients. | |
| Stress, duplication (eg, medical history) [], and discontinuous care resulting from disconnected devices, limited interoperability, and manual coordination. | Adjusting the hospital’s IT environment as an AI-sustained platform, characterized by high interoperability in itself and with other providers supporting seamless patient journeys. | |
| Poor information flow and HCP training deficit. | AI-supported knowledge management to build confidence in usage. | |
aAI: artificial intelligence.
bHCP: health care professional.
However, AI is not a panacea, and initial evaluations of real-world performance in clinical settings are mixed [-]. One reason is that AI implementation projects have often underestimated the importance of individual AI medical devices operating as interconnected clinical and technological infrastructures rather than as a collection of isolated, standalone algorithms. Over the next years, AI in health care needs to be seen as a set of interacting, interdependent, and flexible applications [], involving both broad- and narrow-purpose tools and models that closely interact with and reshape human workflows, while human workflows, adaptations, and experience simultaneously reshape the use of AI in ways particular to the local setting and the local approach to health care delivery.
Integration of Interactive AI Systems in Clinical Workflows Requires HCPs at the Core, Not as Observers
This future model needs HCPs at its core, not only as users interacting with AI systems, but as active participants in their co-design, procurement, implementation, monitoring, and evaluation. This idea is rooted in organizational and implementation theories, such as the “socio-technical systems theory” [], which emphasizes the importance of a holistic perspective that jointly bridges human and technological capabilities, particularly in the context of autonomous technologies [,], and the “normalization process theory” [], which acknowledges users’ cognitive participation and collective action as key determinants in implementing, embedding, and integrating complex and new interventions (eg, AI systems) in daily practice [,]. “Human-centered AI” can take a cross-theoretical perspective by viewing AI systems not as stand-alone technologies but as integral components of a broader sociotechnical system. Two perspectives are relevant: humans being able to understand AI, and AI being able to understand humans []. For example, explainable AI (XAI) methods should not only address the technical transparency of machine learning models but also focus on human understanding []. Conversely, AI systems need to take into account the needs, requirements, and mental models of humans [] and the context of clinical decisions [] to create explanations that are supportive in the clinical setting.
Yet, despite the substantial body of research on theoretical foundations, the translation of the underlying principles into the everyday implementation of AI systems and clinical reality is lagging behind [-]; key aspects are often neglected, and many implementation projects fail []. Problems often begin during the development of AI systems, which are frequently designed and tested in settings far removed from the everyday realities of clinical practice [], with HCPs and location-specific workflows often overlooked. The consequences of systems designed without sustained input from HCPs and patients [] are visible as they fail to demonstrate their suitability and worsen rather than improve processes, leading to the perception that the introduction of digital technologies into health care adds to the burden [,] (), although general relief through well-implemented work aids would be very welcome. This misalignment has been associated with increased stress among HCPs [] (including “technostress” [,]) and disconnected patient care [,], and has even resulted in other unintended negative consequences, such as HCPs resisting the use of the technologies [], using technologies in unanticipated ways [], or developing workarounds that may endanger patient care []. Insufficient digital health literacy and training among HCPs amplify these effects, leaving HCPs unprepared for the demands of interacting with intelligent systems []. Other consequences appearing in real-world implementation are model uncertainty [], “AI hallucinations” or clinically harmful recommendations, bias [], and context misalignment [], which risk fragmented care and diminish patients’ trust in technology-assisted decisions.

Improving Adoption by Co-Development Across the AI Life Cycle
Overview
The real-world challenges discussed underscore that successful AI development and implementation are less a technical task than a comprehensive change management process [] that needs active participation, transparent governance, continuous feedback, and development beyond technical metrics, including systematic real-world evaluation of human-AI interaction, and a focus on non-technical design criteria such as usability, workflow fit, trust, and acceptance.
To bridge this gap, we propose EURAID (European Responsible AI Development), a practical framework of human-centric AI development and implementation in hospitals, which is cooperative and collaborative and based on shared goals in accordance with European values according to Article 2 of the Treaty on European Union (TEU; ie, human dignity [,], freedom [,], democracy [], equality [], rule of law [], and human rights [,]) and European laws ().
| Regulation or law | Scope | Approach |
| Medical Device Regulation (MDR; 2017/745) | Governs medical devices (including digital systems) used for diagnostic or therapeutic purposes. | |
| Artificial Intelligence Act (AI Act; 2024/1689) | Governs the development, market entry, and use of AIe systems. | |
| EU Occupational Safety and Health Directive (89/391/EEC 1989) and national laws | Ensures workers’ health and safety. | |
| Professional regulations (eg, Federal Medical Code for doctors) and labor laws (eg, German Works Constitution Acts) | Defines autonomy and participation rights of HCPsh. | |
aGSPR: general safety and performance requirements.
bISO: International Organization for Standardization.
cQMS: quality management system.
dIEC: International Electrotechnical Commission.
eAI: artificial intelligence.
fGPAI: general-purpose artificial intelligence.
gLLM: large language model.
hHCP: health care professional.
In detail, we describe the appropriate stakeholder circle, the approaches needed for implementing new and highly integrated, localized, and adaptive AI models, and optimal techniques for building consensus. While this paper emphasizes that AI systems are increasingly evolving into system-level tools with broad intended purposes, it is nevertheless valuable to explore the development of a narrow-purpose, limited-functionality tool as a simple entry point into the consideration of AI system implementation. This example serves as a foundation for discussing the broader challenges associated with broad intended purposes and multiagent AI systems. We describe the co-development of an “in-house” AI system [] that is developed within a health institution to address specific needs [,], rather than the implementation of an externally developed “off-the-shelf” AI system, as this allows more aspects of the collaborative process to be described.
This pragmatic approach was developed in part through in-depth individual consultation and 4 flexible multistakeholder workshops, which are described in more detail in . By bringing together all the relevant players in the health care ecosystem, we were able to set agreed goals and processes for the development, integration, use, and oversight of health AI. Insights from the workshops, alongside the authors’ perspectives, informed the development of the overall framework presented in this viewpoint.
| Aspect | Approach |
| Stakeholder definition | |
| Identification of stakeholders | |
| Stakeholder engagement | |
aAI: artificial intelligence.
bHCP: health care professional.
cEURAID: European Responsible AI Development.
Step 1: Comprehensive and Inclusive Stakeholder Involvement to Build Consensus and Ensure Goal-Oriented Development and Implementation
The selection and active participation of stakeholders and the building of consensus are critical to the success of AI system development and implementation. The stakeholders involved should be balanced across disciplines (clinical, technical, and administrative []) and operational responsibilities (professional positions, employee representatives, etc) as well as in age and gender. In , we highlight the key stakeholders involved, and in particular their role in the implementation process. Each stakeholder is selected for their contribution, ranging from strategic aspects (management board) to safety perspectives (employee representatives, quality management, clinical experts, and users) and data-driven issues (AI system developer, data scientists, and IT and regulatory specialists). In principle, stakeholder roles are not mutually exclusive; one person may fulfill several roles simultaneously.
| Stakeholder | Important areas of stakeholder involvement and key aspects they can address |
| Management Board | The management board sets an overall vision and strategy, leading change management [,], and providing investment [] in staff, hardware, and supporting infrastructure []. They foster an institutional culture that tolerates experimentation (and failure) [], serve as the institution’s most credible communicator (ensuring transparency around risks and benefits), and manage external relationships by forging alliances with industry innovators, researchers, professional associations, and policymakers. |
| Employee Representatives | The foremost priority of employee representatives is to defend and improve working conditions, including occupational safety, workload management, and job security. Although large-scale staff redundancies are unlikely consequences of the near-term implementation of AIa in hospital health care systems, which are operating against a backdrop of large staff shortages [,], anxiety about automation and the transformation of job roles is real []. Employee representatives ensure that AI is implemented in a way that eases staff workload and safeguards their well-being and autonomy. In the mid- to long-term, they also negotiate fair compensation policies [] and career development frameworks that reflect changing roles and skills in an increasingly digital workplace. |
| AI-System Owner and teamb | The AI-System Owner holds primary accountability for the system’s performance, safety, and operational impact. They lead the project and ensure alignment with strategic goals and regulatory compliance, while understanding the users’ “pain points” from both a clinical and an organizational perspective. Their responsibilities include bridging the communication gap between technical and nontechnical language, balancing different perspectives, and developing educational approaches [] to increase user adoption. |
| Clinical Experts | Clinical experts identify clinical relevance and utility, which are interpreted and transcribed into a specific scope (intended purpose that specifies clinical indication and initial target group). They provide crucial input to clinical validation and safety, ensuring the AI system integrates effectively into workflows, as well as initiate, oversee, and conduct clinical trial–based AI studies. |
| AI System Developer | To design and develop machine learning algorithms tailored to specific needs, the AI system developer must integrate and harmonize data from different sources []. They also validate the AI model and detect and mitigate model bias to ensure the systems are fair, scalable, adaptable, and verifiable in real-world environments []. |
| Users (HCPc or patient) | Users with varying levels of digital literacy [,] provide real-world, iterative feedback on the system’s usability, workflow integration, and perceived value. They often become multipliers for AI adoption, and by their active participation in co-designing educational materials [], they support evolving digital competence among peers. |
| Data Scientist | The data scientist safeguards the quality of the data foundation on which the AI system depends during preparation, collection, and checking of the data, for example, by keeping data collection protocols and detecting data imbalance, bias, or outliers across age, sex, gender, race, or ethnicity to prevent disparities and underperformance before they arise []. |
| IT Specialists | This role provides the essential technical infrastructure and ensures secure, seamless integration with existing systems, like EHRd platforms or laboratory systems, requiring technical, syntactic, semantic, and organizational interoperability [,]. Beyond integration, they build and maintain structures for data security, access control, and real-time support, and establish data backup and disaster-recovery systems. |
| Regulatory Specialists | Regulatory specialists provide expertise in medical device and AI law, data protection, and human rights. They ensure regulatory standards (like the MDRe and the AI Regulation) are met throughout the product lifecycle, which is essential to mitigate legal risks and prevent potential breaches. |
| Notified Body | The role of the Notified Body is to assess whether medical devices meet European legislation, like the MDR. This includes determining the correct classification, evaluating legal compliance, and reviewing technical documentation [,]. The Notified Body only has a direct role where a CEf-mark is sought for medium- or high-risk AI systems. |
| Quality Management | Quality management ensures continuous patient safety by monitoring and measuring performance, outcomes, and the integrity of clinical workflows []. They establish comprehensive risk management systems (eg, handling device failures or malfunctions) [] and drive standardization. This role also promotes safe system use by co-designing educational programs [] for both HCPs and patients. |
aAI: artificial intelligence.
bRole of the stakeholders whose input is coordinated through the AI-System Owner.
cHCP: health care professional.
dEHR: electronic health record.
eMDR: Medical Device Regulation (2017/745).
fCE: Conformité Européenne.
An interactive environment, with all critical stakeholder groups adequately represented, enables and encourages the integration of stakeholder insights and experiential learnings, while promoting careful consideration of how AI systems are best built to be suited to clinical workflows, as well as where existing workflows may need to be modified to adapt to the AI system. This does not mean that every stakeholder group is involved in every decision and has an equal say in the progress of digitalization. Creating this impression could lead to disillusionment and eroded trust in digitalization, and would probably slow down the whole process. Each stakeholder group is involved in some part of the process, with their precise stages of involvement and roles depending on their potential contribution to the process, and it is essential that each stakeholder is aware of the degree of their involvement.
A crucial success factor alongside the development and implementation is the role of the “product owner,” who takes the coordinating lead. As in-house development in health care institutions often does not have a commercial development focus, we use the term “AI-System Owner” to denote the “product owner.” Although the title may vary by organization, this role usually combines entire-lifecycle product ownership responsibilities with domain expertise in health care and AI. The absence of a single person taking responsibility for the development and performance of the system will generally result in a range of negative consequences, such as poor stakeholder communication and a lack of clear vision, scope, and prioritization, as real-world examples [] have shown. We therefore highlight the AI-System Owner as a central stakeholder leading a team of other stakeholders ().

Step 2: Agreement on the Overall Goals and “Device” Purpose
The collaborative and effective implementation of an AI system into clinical workflows starts with a collective agreement on the goals of the implementation, for example, using methods such as SMART (specific, measurable, attainable, relevant, and time-bound), and in particular on the specific user (generally an HCP or patient) whose needs the system is intended to address. These identified needs are then interpreted and transcribed into a specific scope of the device, known as the “Intended Purpose,” which specifies the clinical indication, how the system addresses this clinical indication, and the (initial) target group.
Although the regulations for AI-system design and implementation do not formally require the direct involvement of any health care system actors other than the “user” of the AI and its “deployer” (in a broad sense), we argue that the sustainable and beneficial implementation of AI systems needs early and proportional agreement on goals and input from all stakeholders. This includes discussion between the management board, employee representatives, quality management, and the AI-System Owner and their team (). Later product development steps require feedback between the AI-System Owner team (including clinical experts and the users of the system) and selected stakeholders (as shown in ), with periodic management “checkpoints” to ensure that the development of the AI system is following the initially agreed plan. Given the complexity of multistakeholder involvement, it is useful to set rules for working together at the beginning and to repeatedly build consensus along the AI development life cycle, for which we highlight techniques in .
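Agreements of this kind are easier to audit if captured in a structured form from the start. The following minimal sketch (in Python) shows one way a team might record the intended purpose and SMART goals for later traceability; all field names, the readiness check, and the example values are illustrative assumptions, not requirements of EURAID, the MDR, or the AI Act.

```python
from dataclasses import dataclass, field


@dataclass
class IntendedPurpose:
    """Structured record of the agreed scope of an in-house AI system.

    Field names are illustrative; adapt them to the institution's
    quality management documentation requirements.
    """
    system_name: str
    clinical_indication: str  # the clinical problem the system addresses
    mechanism: str            # how the system addresses the indication
    target_group: str         # initial intended users (HCP or patient)
    smart_goals: list[str] = field(default_factory=list)
    stakeholders_signed_off: list[str] = field(default_factory=list)

    def is_ready_for_development(self) -> bool:
        """A simple gate: development starts only once the core fields are agreed."""
        return bool(
            self.clinical_indication
            and self.mechanism
            and self.target_group
            and self.smart_goals
            and "Management Board" in self.stakeholders_signed_off
        )


# Hypothetical example based on the discharge summary use case described below.
purpose = IntendedPurpose(
    system_name="Automated discharge summary",
    clinical_indication="Time-intensive documentation after inpatient stays",
    mechanism="Generative language model drafts discharge letters from structured EHR data",
    target_group="Hospital physicians",
    smart_goals=["Reduce median documentation time per discharge letter by 30% within 12 months"],
    stakeholders_signed_off=["Management Board", "Employee Representatives", "AI-System Owner"],
)
print(purpose.is_ready_for_development())  # True
```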

Step 3: AI System Development “In-House”
While medical devices must generally undergo a conformity assessment procedure and be marked with a CE (Conformité Européenne) mark before being used, the European Union (EU) exempts certain devices from this general obligation and allows individual health institutions to develop and use “in-house” medical AI systems involved in the diagnosis or therapy of disease without conducting a conformity assessment procedure, as long as safety standards and those for quality management are in place. Based on Article 5(5) of the EU Medical Device Regulation (MDR; 2017/745) [], this exemption applies only for in-house use on a nonindustrial scale and only if the needs of the targeted patient groups cannot be met through available and equivalent devices on the market [,]. Also covered is the in-house combination or modification of existing systems or devices [,]. For example, in , we outline 3 practical examples of AI systems that have been developed in-house in a German hospital setting, each with a unique intended purpose, clinical indication, and target group. We highlight for each the technical approach used, the stakeholders included during development, and potential prospective trial designs.
| Use case | Automated discharge summary | AIa-powered voice assistant for bedside patient support | AI-supported prevention of adverse events |
| Intended purpose | Automates and optimizes the creation of discharge letters within hospital workflows to reduce clinician workload and improve communication regarding patient care. | Enables patients at the bedside to interact via natural speech, facilitating access to medication schedules, personal calendars, diary management, and support to overcome language barriers through oral translation and simplified language. | Focuses on early and reliable detection of nursing-relevant risks by enhancing existing risk models based on structured nursing assessments and integrating LLMsb to analyze clinical progress notes and identify patient-specific risk factors. |
| Clinical indication | Addresses the challenge of time-intensive medical documentation, particularly discharge summaries following inpatient stays. | Designed for patients requiring accessible communication support, especially those experiencing language barriers, vision impairments, or limited mobility, while promoting autonomy without providing direct medical advice. | Designed to support systematic, early identification of nursing-related risks, including falls, pressure ulcers, and malnutrition, augmenting safety and enabling individualized care planning. |
| Target group | Primarily, hospital physicians with indirect patient benefits, such as improved continuity of care and efficient information transfer to general practitioners. | Hospitalized patients who require assistance in accessing information and communicating effectively. | Nursing staff responsible for patient care, as well as hospitalized patients actively involved in care processes. |
| Technical approach | Uses generative AI language models interfaced with hospital information systems to autonomously extract structured clinical data and generate contextually relevant text suggestions for documentation. | Uses on-premises LLMs within dedicated patient devices; enables localized processing of voice input streams independent of hospital system integration, thereby preserving data sovereignty. | Integrates structured clinical data, unstructured data derived from speech-to-text conversion of nursing assessments, and patient-reported outcomes to facilitate comprehensive risk detection. |
| Stakeholders included during development | Management Board, AI System Developer, AI-System Owner, IT Specialists, Clinical Experts, and Users. | Management Board, AI System Developer, IT Specialists, Clinical Experts, and Users. | Management Board, AI System Developer, AI-System Owner, IT Specialists, Clinical Experts, and Users. |
| Experience of development | Developed iteratively as a prototype, validated with real clinical data, while ensuring compliance with regulatory, privacy, and interoperability standards. | Followed an iterative development approach with thorough curation of informational content; faced technical challenges such as limited server access before full deployment of open-source models. | Development prioritized screening instruments to assess signs and symptoms of nursing care, optimization of AI risk detection models, and ensuring data privacy using pseudonymization and anonymization techniques. |
| Potential prospective trial designs | Cluster-randomized controlled trial at the ward level, comparing standard discharge processes versus AI-assisted summaries. Primary endpoints: clinician documentation time and report quality (as judged by independent review). | Patient-level crossover trial with and without AI voice assistant support. Main outcomes: patient autonomy, effectiveness of information access, and user satisfaction, controlling for intrapatient variability. | Pragmatic controlled trial in clinical wards comparing standard care with and without AI-based risk detection algorithms. Outcomes: incidence of adverse events (falls, pressure ulcers, and malnutrition), timeliness of risk identification, and changes in clinical workflow. |
aAI: artificial intelligence.
bLLM: large language model.
In contrast to commercial deployments, in-house systems offer a distinctive opportunity for embedding participatory ethics, iterative design cycles, and real-world validation and feedback loops directly into the lifecycle of medical AI. This allows the creation of a highly customized solution that fits into location-specific clinical workflows and staff practices, and it can be extended to multiple systems within the same platform and institution []. Moreover, a key advantage is the use of the hospital's own data; however, this requires a well-developed data infrastructure and processes for obtaining patient consent. Considerations include interoperability and data preparation, such as labeling (although label-free approaches are becoming more common), structuring, and collection (requirements also under the AI Regulation), in order to know which data can be used for a specific solution.
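Data preparation for such in-house development typically includes pseudonymization before records reach the development team, as in the risk-detection use case above. The sketch below illustrates one common approach, keyed hashing of a patient identifier with a secret held separately by a data trustee; the identifiers, field names, and key handling are illustrative assumptions, and unstructured free-text fields would need separate de-identification.

```python
import hashlib
import hmac

# Secret held by a data trustee and never stored alongside the research data;
# in practice it would come from a key management system, not from source code.
PEPPER = b"replace-with-secret-from-key-management"

DIRECT_IDENTIFIERS = {"patient_id", "name", "date_of_birth"}


def pseudonymize(record: dict) -> dict:
    """Drop direct identifiers and add a stable keyed pseudonym.

    The same patient always maps to the same pseudonym, preserving
    linkability across admissions, while re-identification requires
    the separately held secret.
    """
    out = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    pid = str(record["patient_id"]).encode("utf-8")
    out["pseudonym"] = hmac.new(PEPPER, pid, hashlib.sha256).hexdigest()[:16]
    return out


print(pseudonymize({"patient_id": "4711", "name": "Jane Doe", "ward": "cardiology", "age": 63}))
```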
Step 4: AI-System Testing, Validation, and Clinical Evaluation
Health care AI demands rigorous, multidimensional evaluation that encompasses not only technical performance but also clinical integration, and that verifies safety, usability, ethical robustness, and regulatory compliance.
Independent assessment of device performance can be generated through statistically sound test plans, which generate information separate from the training data set []. Since validation in real-world settings is still a bottleneck [], prospective, noninterventional silent trials [,] (where AI is tested within the clinical pathway in real time without affecting patients) can enhance transparency and facilitate informed deployment decisions. For large language models (LLMs) and, in particular, adaptive AI models that evolve over time, continuous validation frameworks are needed []. Recent studies have highlighted that substantial challenges to the reliability and safety of LLMs in health care persist, including hallucinations [], metacognitive deficiencies [], vulnerability to bias [] and data poisoning [], and problems with integration into existing workflows [], making single evaluation dimensions insufficient. Therefore, multidimensional methods can help to operationalize feasibility, score diagnostic accuracy or unsafe recommendations, and detect bias and usability issues. Examples are “QUEST” [] to score outputs, or agentic-based simulations such as “CRAFT-MD” [] for clinical workflow evaluation. Alignment with international AI standards (eg, ISO/IEC [International Organization for Standardization/International Electrotechnical Commission] 42001:2023 [], FG-AI4H [Focus Group on AI for Health] clinical evaluation framework []) further strengthens interoperability and safety.
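A silent trial of this kind can be operationalized with little infrastructure: the model scores live cases, its outputs are logged but withheld from the care team, and performance is computed once the clinical ground truth is known. The following minimal sketch assumes a binary risk model with a hypothetical `predict_proba` interface; the logging format and metrics are illustrative, not a validated trial protocol.

```python
import csv
from datetime import datetime, timezone


def log_silent_prediction(model, case_id: str, features, logfile: str = "silent_trial.csv") -> None:
    """Score a live case and log the output without surfacing it to the care team."""
    score = model.predict_proba(features)  # hypothetical interface of the in-house model
    with open(logfile, "a", newline="") as f:
        csv.writer(f).writerow([case_id, datetime.now(timezone.utc).isoformat(), score])
    # Deliberately no return value for the clinical UI: the output stays "silent".


def evaluate_silent_trial(logfile: str, ground_truth: dict, threshold: float = 0.5) -> dict:
    """Once outcomes are adjudicated, compare logged scores against ground truth."""
    tp = fp = tn = fn = 0
    with open(logfile, newline="") as f:
        for case_id, _, score in csv.reader(f):
            if case_id not in ground_truth:
                continue  # outcome not yet known for this case
            pred = float(score) >= threshold
            actual = bool(ground_truth[case_id])
            tp += pred and actual
            fp += pred and not actual
            tn += not pred and not actual
            fn += not pred and actual
    return {
        "sensitivity": tp / (tp + fn) if tp + fn else float("nan"),
        "specificity": tn / (tn + fp) if tn + fp else float("nan"),
        "n": tp + fp + tn + fn,
    }
```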
Beyond objective data and algorithm quality, subjective feedback from users is essential [,]. Evaluations should capture how AI systems integrate into existing workflows and routines, their ease of use, and their perceived performance and interface design. Researchers highlighted several approaches for evaluation, such as through integrated feedback systems [,] or through organizational internalization by creating an “AI-QI”-unit responsible for quality improvement and assurance [], interacting as a “glue” between different entities.
Evaluation should follow a risk-tiered approach that links the level of regulatory and ethical scrutiny to the severity of the health decision involved (). For instance, AI systems used for administrative optimization or appointment scheduling may require a lower level of risk mitigation, while those supporting diagnostic or therapeutic decisions demand significantly higher safeguards. This tiering can draw on the EU AI Act’s risk classes and MDR risk classifications, and should be developed in consensus with relevant stakeholders, including clinical risk management and regulatory specialists.
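Writing such a tiering down explicitly helps ensure that every new use case is classified the same way. The sketch below is purely illustrative: the tiers, classification questions, and safeguard lists are assumptions to be agreed with clinical risk management and regulatory specialists, not a restatement of the AI Act or MDR risk classes.

```python
from enum import Enum


class Tier(Enum):
    LOW = "administrative or scheduling support"
    MEDIUM = "workflow or documentation support that writes to the record"
    HIGH = "diagnostic or therapeutic decision support"


# Illustrative mapping from tier to the minimum safeguards required before deployment.
SAFEGUARDS = {
    Tier.LOW: ["usability testing", "basic performance monitoring"],
    Tier.MEDIUM: ["silent trial", "user training", "incident reporting"],
    Tier.HIGH: ["clinical validation study", "continuous monitoring",
                "human oversight of every output", "ethics review"],
}


def required_safeguards(influences_diagnosis_or_therapy: bool, writes_to_record: bool) -> list:
    """Classify a use case by its worst plausible impact on a health decision."""
    if influences_diagnosis_or_therapy:
        tier = Tier.HIGH
    elif writes_to_record:
        tier = Tier.MEDIUM
    else:
        tier = Tier.LOW
    return SAFEGUARDS[tier]


# An appointment-scheduling assistant versus a diagnostic decision support tool:
print(required_safeguards(False, False))  # low tier
print(required_safeguards(True, True))    # high tier
```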

To ensure that the AI system is compatible with European values, ethics-based auditing frameworks like capAI, grounded in the EU AI Act, can guide risk identification in each phase of the AI lifecycle from an ethical point of view []. The integration of tools like the self-assessment list for trustworthy AI (ALTAI) [], developed by the EU High-Level Expert Group on AI, into ethics-based auditing of AI systems can further support responsible usage of AI and foster user trust. Yet, ethical guidelines are just that: guidelines. They rarely or incompletely answer concrete ethical questions regarding the use of an AI system in a specific situation, such as the question of specific moral responsibility if mistakes of AI systems lead to patient harm. This is a highly discussed topic in ethics [] and becomes even more severe in the context of black-box problems, potentially leading to moral responsibility gaps []. Other unsolved ethical questions arise, for example, regarding data ownership in the context of the principle of beneficence (ie, promoting others’ benefit and preventing harm [,]) and informed consent [], or the anthropomorphization of AI []. Therefore, embedding ethical points of view into the whole life cycle of AI is necessary [].
Step 5: Development and Deployment of Training Approaches
The successful adoption of AI by hospital employees correlates with continuous development and training []. Although training is also a requirement of the EU AI Act [], it is notable that only 24% of health care institutions provide AI training programs and workshops []. This underscores a gap in education and certification, leaving clinicians without the necessary tools to harness the full potential of AI. However, there are various ways to support confidence in AI technologies among HCPs, for example, (1) by investing in comprehensive training programs that help staff gain the necessary skills [] while also extending existing programs with AI literacy, or (2) by developing and providing resources and mechanisms to build and strengthen connections among peers and innovators so they can share their AI-related knowledge and experiences []. Most importantly, AI training should be a fixed part of professional education and competency assessment, as well as included in further training (eg, through integration into continuing medical education programs) [] to build confidence in its use among the next generation of HCPs and achieve a symbiotic relationship between humans and AI [].
To build AI literacy among HCPs in a safe and controlled environment, training methods such as simulation-based modules [,] (ie, practice in realistic settings [,]), case-based exercises [], and interactive workshops [] can help users explore tools repeatedly without risking patient safety while facilitating experiential learning. Another method of providing HCPs with hands-on experience using AI tools in a controlled environment is to conduct a pilot phase, during which AI is tested by selected clinical users in a narrow area of practice, or a shadow deployment, in which AI operates in shadow mode alongside clinicians in real time and is guided by predefined safety and workflow indicators []. This will also influence trust and adoption among users and foster psychological safety, since evidence from human-computer interaction research indicates that a positive attitude toward AI is not only a function of system transparency or explainability but also depends on users’ self-efficacy, previous experience, and the perceived fairness and predictability of the system []. With regard to content, it is important to define responsibilities within the institution regarding who will take ownership of training users in basic AI literacy competencies. The AI-System Owner and their team are the best fit, as they combine all use case-relevant expertise across different perspectives, ranging from clinical experts to system developers.
Training should foster understanding of AI systems and facilitate interaction with and use of AI systems, and it is relevant not just for direct users but for all HCPs who will work alongside care systems influenced by AI () [,]. Key competencies are a basic understanding of when and how to use AI, knowledge about the use of the systems’ elements, the ability to make informed decisions based on a risk-benefit analysis, awareness of legal and ethical considerations, and the ability to adapt to new tools and applications [,]. Components of health care AI training that are generic do not need to be developed de novo by the health institution. However, specific training directly related to the AI system to be deployed will generally be required, and it is often necessary to provide ongoing training that takes account of the learning curve of HCPs in the use of the AI, emergent problems such as automation bias [] and deskilling [], and changes and further development of the AI systems.

Step 6: AI-System Deployment, Real-World Performance Monitoring, and Later Decommissioning
After model creation and testing, the goal is to place the system in real-world clinical settings to improve patient care and outcomes [] according to the previously defined overall goals and device purpose. This needs transparency and compliance with legal and ethical processes (user consent), as well as the completion of all steps required for the in-house exemption from conformity assessment (MDR Article 5(5)) or third-party approval (CE-mark). Therefore, looping in all stakeholders is needed to collaboratively address the associated challenges. A key role is played by the management board and the AI-System Owner, who provide clear external and internal communication that signals the prioritization of human well-being throughout the whole process, and by users as multipliers who promote trust for broad acceptance and use.
Involving all stakeholders also applies to the monitoring and oversight of real-world performance, as constant feedback from different perspectives is needed to improve system performance and data-related processes. The goal of monitoring is to raise an alarm when unintended or special cases occur [], which emphasizes the importance of finding solutions through collaboration and collective intelligence. The “AI-QI” unit described above could consolidate and strengthen the established stakeholder structure within the institution in the long term. In addition, algorithmic audits can serve as a framework for continuously monitoring AI systems and understanding how and why errors and adverse events occurred, while anticipating their potential consequences []. Real-world performance monitoring must also adequately account for model drift (degradation of AI system performance over time) due to changes in external factors such as patient populations, data collection, or medical practice [].
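In practice, drift monitoring can begin with a simple rolling comparison of live performance against the level established at validation, escalating to the stakeholders described above when performance degrades. A minimal sketch follows, assuming adjudicated outcome labels become available with some delay; the window size and tolerance shown are illustrative and would in practice be set with quality management based on the validation study.

```python
from collections import deque


class DriftMonitor:
    """Rolling-window accuracy check against the validated baseline."""

    def __init__(self, baseline_accuracy: float, window: int = 500, tolerance: float = 0.05):
        self.baseline = baseline_accuracy
        self.tolerance = tolerance
        self.outcomes = deque(maxlen=window)  # 1 = correct prediction, 0 = error

    def record(self, prediction, actual) -> None:
        """Append one adjudicated case to the rolling window."""
        self.outcomes.append(int(prediction == actual))

    def check(self) -> bool:
        """Return True if an alarm should be raised to the AI-QI unit."""
        if len(self.outcomes) < self.outcomes.maxlen:
            return False  # not enough live data yet
        live_accuracy = sum(self.outcomes) / len(self.outcomes)
        return live_accuracy < self.baseline - self.tolerance


monitor = DriftMonitor(baseline_accuracy=0.91)
# In deployment: call monitor.record(model_output, adjudicated_outcome) per case,
# then escalate whenever monitor.check() returns True.
```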
Running a “legacy system” usually means facing layers of technical debt, which slows down development, complicates maintenance, and creates several risks, such as the technology becoming less reliable and decreasing in performance, or systems being exposed to vulnerabilities such as cyberattacks. Decommissioning can then be an option, extracting and securing the data in a newer system []. This process needs to be carried out by IT and regulatory specialists, as well as data scientists and quality management, in consultation with users, the management board, employee representatives, and notified bodies where required.
Special Considerations for Adaptive, “Agentic,” and “Off-the-Shelf” AI Systems
Some recent AI approaches are developed so that they learn and adapt from data and feedback from the real world, allowing them to change continuously without explicit interventions from the developer [,]. Ensuring such systems are safe, effective, and of high quality while being flexible requires a more interactive and participatory approach than traditional systems that follow static and predefined rules. This is especially true when self-learning systems are combined with agentic AI systems that are able to handle multilevel tasks, coordinate tools, centralize human communication, and basically act as health care teammates [-]. Autonomous AI systems and LLM-enabled clinical decision systems have already been approved in Europe [,,]. As the approval and use increase, and as these systems continuously encounter new settings and tasks, it is essential to define clear boundaries, controlled environments with clinician oversight [], ongoing auditing [], and adequate training capacities for HCPs []. As broad models may be applied across multiple hospital departments and clinical contexts (eg, simultaneously used in an emergency department and psychiatry clinic) with dynamic or variable workflow integration, transparent communication, and iterative feedback across stakeholders (as presented in this paper) are also critical to ensure adaptability and to address the more complex ethical, legal, and social implications.
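One concrete form such boundaries can take is an allowlist-and-confirmation gate wrapped around every action an agentic system proposes. The sketch below is a deliberately simplified illustration: the tool names, risk split, and dispatch stub are hypothetical, and a real deployment would integrate with the institution's ordering and audit systems.

```python
audit_log: list = []  # every proposed action is recorded for ongoing auditing

APPROVED_TOOLS = {"schedule_followup", "draft_patient_message"}  # low-risk allowlist
HIGH_RISK_TOOLS = {"order_medication"}  # always requires clinician sign-off


def dispatch(tool: str, args: dict) -> str:
    """Stand-in for the real integration with hospital systems."""
    return f"executed {tool} with {args}"


def execute_tool(agent_request: dict, clinician_confirms) -> str:
    """Gate every tool call an agent proposes before it touches hospital systems."""
    tool = agent_request["tool"]
    audit_log.append(agent_request)  # log even blocked or deferred attempts
    if tool not in APPROVED_TOOLS | HIGH_RISK_TOOLS:
        return f"blocked: {tool} is outside the agreed boundary"
    if tool in HIGH_RISK_TOOLS and not clinician_confirms(agent_request):
        return f"deferred: {tool} awaits clinician sign-off"
    return dispatch(tool, agent_request["args"])


# A high-risk action is never executed autonomously:
print(execute_tool({"tool": "order_medication", "args": {"drug": "X"}},
                   clinician_confirms=lambda req: False))
```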
For off-the-shelf AI systems provided by external companies, the interaction between stakeholders should be focused on integration, compliance, and validation to meet operational and regulatory needs. These systems may limit the level of innovation achievable (no bottom-up activism from internal users and developers to continually contribute improvements and features that better meet unique requirements) and may lead to trust issues due to less transparency in the handling of data and underlying algorithms [], requiring proactive communication and change management. Responsibilities for monitoring and model updating, especially with proprietary algorithms, become more complex and need to be clarified between external collaborators and internal stakeholders []. Platforms for delivering off-the-shelf AI systems now allow the co-hosting of in-house developed AI models, alongside the CE-marked models, enabling both approaches to coexist, and making clear the need and possibilities for the co-design, embedding, and co-implementation of commercial and in-house approaches [].
Discussion
Studies show a persistent gap between research and clinical implementation [,], with medical AI adoption still very slow [,] and limited to a few use cases []. Reasons include the difficulty of aligning diverse stakeholder perspectives within complex health care systems, the rigidity of regulatory frameworks, and the limited consideration of design approaches from work and organizational psychology []. As a result, technological effectiveness, in the sense of medical accuracy and system performance, and user acceptance among HCPs and patients are often perceived as conflicting goals.
A balance is therefore needed between ensuring safety and enabling innovation []. EURAID finds this “sweet spot,” accelerating digital transformation in a human-centric way. Unlike existing frameworks, which focus narrowly on user perspectives [,,] or isolated implementation aspects [,-] (such as evaluation, safety, or ethics), serve as decision support tools for choosing the most fitting available AI solution [], or have a limited clinical scope [-], EURAID explicitly maps all key stakeholders across the AI development life cycle and clarifies their roles and the key aspects they can address () in co-creating, guiding, and governing “in-house” AI development and deployment. It also details stakeholder roles in real use cases and methods for achieving iterative consensus at each development stage across disciplines, reflecting shared goals in alignment with European values, and strengthens the understanding of training methods, content, and key competencies.
However, EURAID has some limitations. The resources and specialized staff needed for iterative development and testing are more limited in smaller hospitals, requiring multiple roles to be concentrated in fewer people, which can lead to a shortage of expertise but, on the other hand, may also speed up processes. Although our approach can likely better address creative problem-solving, the traditional, rigid, and hierarchical structures common in health care may hinder stakeholder selection based on contributions and expertise rather than on positions and level of seniority. Although “in-house” AI devices may not require CE marking, they are not exempt from regulation and carry legal liability implications. Health institutions must comply with a number of obligations that may discourage them from pursuing in-house development at all, which slows down both innovation and digitalization. A practical solution is to designate key staff for legal or ethical liaison roles or to establish a multidisciplinary AI advisory board and data governance council within the institution to ensure compliance and continuity.
Conclusions
EURAID is a pragmatic, solution-oriented framework, compatible with European values and regulations, that ensures barriers to “in-house” AI development and implementation in hospitals are acknowledged early and resolved through collaborative problem-solving. The underlying principle is that the likely future of medicine, driven by integrated, localized, and adaptive AI technologies, will need all critical stakeholders (which we portray individually in this paper) adequately represented, with their various perspectives embedded in the co-design, procurement, implementation, and oversight of AI systems, ensuring that digital transformation in health care truly benefits the people who will use these systems every day. Additionally, as AI systems vary by type and clinical setting, we propose a risk-tiered approach that links risk classification to the required level of human oversight, transparency, and stakeholder involvement.
To translate EURAID into action, hospitals should begin by conducting internal readiness assessments, establishing cross-functional AI governance structures, and defining clear, role-specific responsibilities for ethical, legal, technical, and clinical oversight. Regulators and professional bodies should, in parallel, create structures that connect local innovation with next-generation European legislation, for governance that is as intelligent as the technology it oversees.
Acknowledgments
We acknowledge the use of the ChatGPT language model (GPT-3.5, GPT-4, and GPT-5; OpenAI) for assisting in refining some text of this paper. Responsibility for the final manuscript lies entirely with the authors. The graphical elements in this paper were designed using Inkscape.
Funding
This work was supported by the European Commission under the Horizon Europe Program, as part of the project ASSESS-DHT (101137347) via funding to SG and RM. The views and opinions expressed herein are, however, the authors’ responsibility only, and do not necessarily reflect those of the European Union, the United Kingdom, the European Health and Digital Executive Agency (HaDEA), UK Research and Innovation (UKRI), or the National Institute for Health and Care Excellence (NICE); the European Union, United Kingdom, and granting authorities cannot be held responsible for the views, opinions, and information contained herein.
Authors' Contributions
AS and SG developed the concept of the study. AS and SG wrote the first draft of the paper. AS, MEG, MHG, FJK, JNK, EK, TL, EL, ML, RM, HSM, JO, TR, UR, M Schneider, LS, HS, MLS, NS, M Sedlmayr, RS, BS, MKW, EW, KW, AD, and SG contributed to the writing, interpretation of the content, and editing of the paper, revising it critically for important intellectual content. AS, MEG, MHG, FJK, JNK, EK, TL, EL, ML, RM, HSM, JO, TR, UR, M Schneider, LS, HS, MLS, NS, M Sedlmayr, RS, BS, MKW, EW, KW, AD, and SG had final approval of the completed version. AS, MEG, MHG, FJK, JNK, EK, TL, EL, ML, RM, HSM, JO, TR, UR, M Schneider, LS, HS, MLS, NS, M Sedlmayr, RS, BS, MKW, EW, KW, AD, and SG take accountability for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.
The authors' expertise ranges from medical device regulation (AS, SG, RM, and ML) to high-level management of digital transformation in large hospitals (AD), and includes experts in quality and clinical risk management (RS, MEG, and SG), medical informatics (ML, M Schneider, SG, MLS, and M Sedlmayr), and occupational health and safety at work (UR, LS, and TL), as well as relevant insights from clinical experts and HCPs (EW, JNK, HSM, NS, JO, and AD), AI system developers (JNK, NS, and JO), and experts in psychology and human-centered AI development (MKW, KW, and TL). In addition, we included relevant legal (EK), ethical (EL), and federal policy (MLS) perspectives, as well as perspectives from health and social accident insurance companies (TL and M Schneider), labor unions (BS), and academia (MHG, HS, TR, FJK, AS, RM, HSM, SG, JNK, and MKW).
Conflicts of Interest
SG declares a nonfinancial interest as an Advisory Group member of the EY-coordinated “Study on Regulatory Governance and Innovation in the field of Medical Devices” conducted on behalf of the Directorate-General for Health and Food Safety (SANTE) of the European Commission. He declares the following competing financial interests: SG has or has had consulting relationships with Una Health GmbH, Lindus Health Ltd, Flo Ltd, ICURA ApS, Rock Health Inc, Thymia Ltd, FORUM Institut für Management GmbH, High-Tech Gründerfonds Management GmbH, Directorate-General for Research and Innovation of the European Commission, and Ada Health GmbH, and holds share options in Ada Health GmbH. JNK declares consulting services for Bioptimus, France; Panakeia, UK; AstraZeneca, UK; and MultiplexDx, Slovakia. Furthermore, he holds shares in StratifAI, Germany, Synagen, Germany, and Ignition Lab, Germany; has received an institutional research grant from GSK; and has received honoraria from AstraZeneca, Bayer, Daiichi Sankyo, Eisai, Janssen, Merck, MSD, BMS, Roche, Pfizer, and Fresenius. JO has received travel grants from Abbott and research grants from German Heart Foundation (DSHF), German Center for Cardiovascular Research (DZHK), the University of Hamburg (UHH), and the German Federal Ministry of Education and Research (BMBF), and is co-founder and former managing director of IDM GmbH. MLS reports no conflicts of interest. The opinions expressed in this article are his own and do not necessarily reflect the views held by the German Federal Ministry of Health. None declared by the other authors.
References
- Blum K. California nurses protest 'untested' AI as it proliferates in health care. Association of Health Care Journalists. URL: https://healthjournalism.org/blog/2024/08/california-nurses-protest-untested-ai-as-it-proliferates-in-health-care/ [accessed 2024-08-09]
- Bruce G. Nurses protest AI at Kaiser Permanente. Becker's Health IT. URL: https://www.beckershospitalreview.com/healthcare-information-technology/nurses-protest-ai-at-kaiser-permanente/ [accessed 2024-04-22]
- Blease CR, Locher C, Gaab J, Hägglund M, Mandl KD. Generative artificial intelligence in primary care: an online survey of UK general practitioners. BMJ Health Care Inform. 2024;31(1):e101102. [FREE Full text] [CrossRef] [Medline]
- Fernandopulle R. We must stop trying to deliver 21st-century care with a 19th-century delivery model. MedGenMed. 2005;7(2):50. [FREE Full text] [Medline]
- Kennedy PJ. Our health system is built on an antiquated model of care. The Hill. Aug 25, 2020. URL: https://thehill.com/opinion/healthcare/513615-our-health-system-is-built-on-an-antiquated-model-of-care/ [accessed 2025-04-03]
- Mele M. Antiquated methods put patients at risk. Beckers's Clinical Leadership. URL: https://www.beckershospitalreview.com/quality/antiquated-methods-put-patients-at-risk/ [accessed 2019-03-14]
- Mauro M, Noto G, Prenestini A, Sarto F. Digital transformation in healthcare: assessing the role of digital technologies for managerial support processes. Technol Forecast Soc Change. 2024;209:123781. [CrossRef]
- Marques ICP, Ferreira JJM. Digital transformation in the area of health: systematic review of 45 years of evolution. Health Technol. 2019;10(3):575-586. [CrossRef]
- Barbieri C, Neri L, Stuard S, Mari F, Martín-Guerrero JD. From electronic health records to clinical management systems: how the digital transformation can support healthcare services. Clin Kidney J. 2023;16(11):1878-1884. [FREE Full text] [CrossRef] [Medline]
- Mulukuntla S, Pamulaparthyvenkata S. Digital transformation in healthcare: assessing the impact on patient care and safety. Int J Med Health Sci. 2020;6(3). [FREE Full text]
- Alowais SA, Alghamdi SS, Alsuhebany N, Alqahtani T, Alshaya AI, Almohareb SN, et al. Revolutionizing healthcare: the role of artificial intelligence in clinical practice. BMC Med Educ. 2023;23(1):689. [FREE Full text] [CrossRef] [Medline]
- Otero-García L, Mateos JT, Esperato A, Llubes-Arrià L, Regulez-Campo V, Muntaner C, et al. Austerity measures and underfunding of the Spanish health system during the COVID-19 pandemic-perception of healthcare staff in Spain. Int J Environ Res Public Health. 2023;20(3):2594. [FREE Full text] [CrossRef] [Medline]
- Mosciaro M, Kaika M, Engelen E. Financializing healthcare and infrastructures of social reproduction: how to bankrupt a hospital and be unprepared for a pandemic. J Soc Policy. 2022;53(2):261-279. [CrossRef]
- Dennstädt F, Hastings J, Putora PM, Schmerder M, Cihoric N. Implementing large language models in healthcare while balancing control, collaboration, costs and security. NPJ Digit Med. 2025;8(1):143. [FREE Full text] [CrossRef] [Medline]
- Borges do Nascimento IJ, Abdulazeem H, Vasanthan LT, Martinez EZ, Zucoloto ML, Østengaard L, et al. Barriers and facilitators to utilizing digital health technologies by healthcare professionals. NPJ Digit Med. 2023;6(1):161. [FREE Full text] [CrossRef] [Medline]
- Rane N, Choudhary S, Rane J. Acceptance of artificial intelligence: key factors, challenges, and implementation strategies. SSRN Electron J. 2024:19. [CrossRef]
- Karpathakis K, Morley J, Floridi L. A justifiable investment in AI for healthcare: aligning ambition with reality. SSRN Electron J. 2024. [CrossRef]
- Artificial intelligence in healthcare. European Commission. URL: https://health.ec.europa.eu/ehealth-digital-health-and-care/artificial-intelligence-healthcare_en [accessed 2025-12-20]
- McDuff D, Schaekermann M, Tu T, Palepu A, Wang A, Garrison J, et al. Towards accurate differential diagnosis with large language models. Nature. 2025;642(8067):451-457. [FREE Full text] [CrossRef] [Medline]
- Tu T, Schaekermann M, Palepu A, Saab K, Freyberg J, Tanno R, et al. Towards conversational diagnostic artificial intelligence. Nature. 2025;642(8067):442-450. [CrossRef] [Medline]
- Anderson BJ, Zia ul Haq M, Zhu Y, Hornback A, Cowan AD, Mott M, et al. Development and evaluation of a model to manage patient portal messages. NEJM AI. 2025;2(3). [FREE Full text] [CrossRef]
- Hassan H, Zipursky AR, Rabbani N, You JG, Tse G, Orenstein E, et al. Clinical implementation of artificial intelligence scribes in health care: a systematic review. Appl Clin Inform. 2025;16(4):1121-1135. [FREE Full text] [CrossRef] [Medline]
- Olson KD, Meeker D, Troup M, Barker TD, Nguyen VH, Manders JB, et al. Use of ambient AI scribes to reduce administrative burden and professional burnout. JAMA Netw Open. 2025;8(10):e2534976. [FREE Full text] [CrossRef] [Medline]
- Chatzikou M, Latsou D, Apostolidis G, Billis A, Charisis V, Rigas ES, et al. Economic evaluation of artificially intelligent (AI) diagnostic systems: cost consequence analysis of clinician-friendly interpretable computer-aided diagnosis (ICADX) tested in cardiology, obstetrics, and gastroenterology, from the HosmartAI horizon 2020 project. Healthcare (Basel). 2025;13(14):1661. [FREE Full text] [CrossRef] [Medline]
- El Arab RA, Al Moosa OA. Systematic review of cost effectiveness and budget impact of artificial intelligence in healthcare. NPJ Digit Med. 2025;8(1):548. [FREE Full text] [CrossRef] [Medline]
- Moor M, Banerjee O, Abad ZSH, Krumholz HM, Leskovec J, Topol EJ, et al. Foundation models for generalist medical artificial intelligence. Nature. 2023;616(7956):259-265. [CrossRef] [Medline]
- Zou J, Topol EJ. The rise of agentic AI teammates in medicine. The Lancet. 2025;405(10477):457. [CrossRef]
- Moritz M, Topol E, Rajpurkar P. Coordinated AI agents for advancing healthcare. Nat Biomed Eng. 2025;9(4):432-438. [CrossRef] [Medline]
- Qiu J, Lam K, Li G, Acharya A, Wong TY, Darzi A, et al. LLM-based agentic systems in medicine and healthcare. Nat Mach Intell. 2024;6(12):1418-1420. [CrossRef]
- DERM makes medical history as world's first autonomous skin cancer detection system is approved for clinical decisions in Europe. Skin Analytics. URL: https://skin-analytics.com/news/regulatory-certification/derm-class-iii-ce-mark/ [accessed 2025-12-20]
- Gilbert S, Dai T, Mathias R. Consternation as congress proposal for autonomous prescribing AI coincides with the haphazard cuts at the FDA. NPJ Digit Med. 2025;8(1):165. [FREE Full text] [CrossRef] [Medline]
- Bajwa J, Munir U, Nori A, Williams B. Artificial intelligence in healthcare: transforming the practice of medicine. Future Healthc J. 2021;8(2):e188-e194. [FREE Full text] [CrossRef] [Medline]
- Myny D, Van Goubergen D, Gobert M, Vanderwee K, Van Hecke A, Defloor T. Non-direct patient care factors influencing nursing workload: a review of the literature. J Adv Nurs. 2011;67(10):2109-2129. [CrossRef] [Medline]
- Woolhandler S, Himmelstein DU. Administrative work consumes one-sixth of U.S. physicians' working hours and lowers their career satisfaction. Int J Health Serv. 2014;44(4):635-642. [CrossRef]
- Steinkamp J, Kantrowitz JJ, Airan-Javia S. Prevalence and sources of duplicate information in the electronic medical record. JAMA Netw Open. 2022;5(9):e2233348. [FREE Full text] [CrossRef] [Medline]
- Fritz P, Kleinhans A, Raoufi R, Sediqi A, Schmid N, Schricker S, et al. Evaluation of medical decision support systems (DDX generators) using real medical cases of varying complexity and origin. BMC Med Inform Decis Mak. 2022;22(1):254. [FREE Full text] [CrossRef] [Medline]
- Kanjee Z, Crowe B, Rodman A. Accuracy of a generative artificial intelligence model in a complex diagnostic challenge. JAMA. 2023;330(1):78-80. [FREE Full text] [CrossRef] [Medline]
- Ng JJW, Wang E, Zhou X, Zhou KX, Goh CXL, Sim GZN, et al. Evaluating the performance of artificial intelligence-based speech recognition for clinical documentation: a systematic review. BMC Med Inform Decis Mak. 2025;25(1):236. [FREE Full text] [CrossRef] [Medline]
- Mathias R, McCulloch P, Chalkidou A, Gilbert S. Digital health technologies need regulation and reimbursement that enable flexible interactions and groupings. NPJ Digit Med. 2024;7(1):148. [FREE Full text] [CrossRef] [Medline]
- Appelbaum SH. Socio-technical systems theory: an intervention strategy for organizational development. Management Decision. 1997;35(6):452-463. [CrossRef]
- Behymer KJ, Flach JM. From autonomous systems to sociotechnical systems: designing effective collaborations. She Ji J Des Econ Innov. 2016;2(2):105-114. [FREE Full text] [CrossRef]
- Kudina O, Van de Poel I. A sociotechnical system perspective on AI. Minds Mach. 2024;34(3):21. [CrossRef]
- May C, Finch T. Implementing, embedding, and integrating practices: an outline of normalization process theory. Sociology. 2009;43(3):535-554. [CrossRef]
- Finch TL, Rapley T, Girling M, Mair FS, Murray E, Treweek S, et al. Improving the normalization of complex interventions: measure development based on normalization process theory (NoMAD): study protocol. Implement Sci. 2013;8:43. [FREE Full text] [CrossRef] [Medline]
- Murray E, Treweek S, Pope C, MacFarlane A, Ballini L, Dowrick C, et al. Normalisation process theory: a framework for developing, evaluating and implementing complex interventions. BMC Med. 2010;8:63. [FREE Full text] [CrossRef] [Medline]
- Riedl MO. Human-centered artificial intelligence and machine learning. Hum Behav Emerg Technol. 2019;1(1):33-36. [CrossRef]
- Dawoud K, Samek W, Eisert P, Lapuschkin S, Bosse S. Human-centered evaluation of XAI methods. IEEE; 2023. Presented at: Proceedings of the 2023 IEEE International Conference on Data Mining Workshops (ICDMW); Dec 4, 2023:912-921; Shanghai, China. [CrossRef]
- Holzinger A, Kargl M, Kipperer B, Regitnig P, Plass M, Muller H. Personas for artificial intelligence (AI): an open source toolbox. IEEE Access. 2022;10:23732-23747. [CrossRef]
- Combi C, Amico B, Bellazzi R, Holzinger A, Moore JH, Zitnik M, et al. A manifesto on explainability for artificial intelligence in medicine. Artif Intell Med. 2022;133:102423. [FREE Full text] [CrossRef] [Medline]
- Woolf SH. The meaning of translational research and why it matters. JAMA. 2008;299(2):211-213. [CrossRef] [Medline]
- Westerlund A, Sundberg L, Nilsen P. Implementation of implementation science knowledge: the research-practice gap paradox. Worldviews Evid Based Nurs. 2019;16(5):332-334. [FREE Full text] [CrossRef] [Medline]
- Sanderson C, Douglas D, Lu Q, Schleiger E, Whittle J, Lacey J, et al. AI ethics principles in practice: perspectives of designers and developers. IEEE Trans Technol Soc. 2023;4(2):171-187. [CrossRef]
- Tidjon LN, Khomh F. The different faces of AI ethics across the world: a principle-to-practice gap analysis. IEEE Trans Artif Intell. 2023;4(4):820-839. [CrossRef]
- Lukkien DRM, Nap HH, Buimer HP, Peine A, Boon WPC, Ket JCF, et al. Toward responsible artificial intelligence in long-term care: a scoping review on practical approaches. Gerontologist. 2023;63(1):155-168. [FREE Full text] [CrossRef] [Medline]
- Oludapo S, Carroll N, Helfert M. Why do so many digital transformations fail? A bibliometric analysis and future research agenda. J Bus Res. 2024;174:114528. [CrossRef]
- Wekenborg MK, Gilbert S, Kather JN. Examining human-AI interaction in real-world healthcare beyond the laboratory. NPJ Digit Med. 2025;8(1):169. [FREE Full text] [CrossRef] [Medline]
- Safi S, Thiessen T, Schmailzl KJ. Acceptance and resistance of new digital technologies in medicine: qualitative study. JMIR Res Protoc. 2018;7(12):e11072. [FREE Full text] [CrossRef] [Medline]
- Sujan M, Baber C, Salmon P, Pool R, Chozos N, Aceves-González C. Human factors and ergonomics in healthcare AI. Chartered Institute of Ergonomics & Human Factors. 2021:45. [CrossRef]
- Wosny M, Strasser LM, Hastings J. Experience of health care professionals using digital tools in the hospital: qualitative systematic review. JMIR Hum Factors. 2023;10:e50357. [FREE Full text] [CrossRef] [Medline]
- Wekenborg MK, Förster K, Schweden F, Weidemann R, Bechtolsheim FV, Kirschbaum C, et al. Differences in physicians' ratings of work stressors and resources associated with digital transformation: cross-sectional study. J Med Internet Res. 2024;26:e49581. [FREE Full text] [CrossRef] [Medline]
- Brod C. Technostress: The Human Cost of the Computer Revolution. Boston, MA. Addison-Wesley; 1984.
- Alkureishi MA, Choo ZY, Rahman A, Ho K, Benning-Shorb J, Lenti G, et al. Digitally disconnected: qualitative study of patient perspectives on the digital divide and potential solutions. JMIR Hum Factors. 2021;8(4):e33364. [FREE Full text] [CrossRef] [Medline]
- Tabche C, Raheem M, Alolaqi A, Rawaf S. Effect of electronic health records on doctor-patient relationship in Arabian gulf countries: a systematic review. Front Digit Health. 2023;5:1252227. [FREE Full text] [CrossRef] [Medline]
- Zheng K, Abraham J, Novak LL, Reynolds TL, Gettinger A. A survey of the literature on unintended consequences associated with health information technology: 2014–2015. Yearb Med Inform. 2018;25(01):13-29. [CrossRef]
- Holden RJ, Rivera-Rodriguez AJ, Faye H, Scanlon MC, Karsh B. Automation and adaptation: Nurses' problem-solving behavior following the implementation of bar coded medication administration technology. Cogn Technol Work. 2013;15(3):283-296. [FREE Full text] [CrossRef] [Medline]
- Antecedents of constructive human-AI collaboration: an exploration of human actors' key competencies. In: IFIP Advances in Information and Communication Technology. Cham, Switzerland. Springer International Publishing; 2021:113-124.
- Hüllermeier E, Waegeman W. Aleatoric and epistemic uncertainty in machine learning: an introduction to concepts and methods. Mach Learn. 2021;110(3):457-506. [FREE Full text] [CrossRef]
- Dung L. Current cases of AI misalignment and their implications for future risks. Synthese. 2023;202(5):138. [FREE Full text] [CrossRef]
- Charter of fundamental rights of the European Union (2000/C 364/01). European Parliament, the Council and the Commission of the European Union. URL: https://www.europarl.europa.eu/charter/pdf/text_en.pdf [accessed 2025-12-24]
- Human dignity in the European Union (EU). Values@VET. 2025. URL: https://valuesatvet.si/files/2025/06/Human-dignity-in-the-European-Union.pdf [accessed 2025-12-20]
- Freedom in the European Union (EU). Values@VET. URL: https://valuesatvet.si/files/2025/06/Freedom-in-the-European-Union.pdf [accessed 2025-12-20]
- EU mechanism on democracy, the rule of law and fundamental rights: European Parliament resolution of 25 October 2016 with recommendations to the Commission on the establishment of an EU mechanism on democracy, the rule of law and fundamental rights (2015/2254(INL)) (2018/C 215/25). European Parliament. 2018. URL: https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX%3A52016IP0409 [accessed 2025-12-20]
- Klamert M, Kochenov D. Article 2 TEU. In: The EU Treaties and the Charter of Fundamental Rights: A Commentary. New York. Oxford Academic; 2019:22-30.
- European Convention on Human Rights, as amended by protocols nos. 11, 14 and 15 supplemented by protocols nos. 1, 4, 6, 7, 12, 13 and 16. European Court of Human Rights. URL: https://www.echr.coe.int/documents/d/echr/convention_ENG [accessed 2025-12-20]
- Regulation (EU) 2017/745 of 5 April 2017 on medical devices, amending directive 2001/83/EC, regulation (EC) No 178/2002 and regulation (EC) No 1223/2009 and repealing council directives 90/385/EEC and 93/42/EEC. European Parliament and Council of the European Union. URL: https://eur-lex.europa.eu/eli/reg/2017/745/oj/eng [accessed 2025-03-25]
- Gilbert S, Mathias R, Schönfelder A, Wekenborg M, Steinigen-Fuchs J, Dillenseger A, et al. A roadmap for safe, regulation-compliant Living Labs for AI and digital health development. Sci Adv. 2025;11(20):eadv7719. [FREE Full text] [CrossRef] [Medline]
- Calderaro J, Morement H, Penault-Llorca F, Gilbert S, Kather JN. The case for homebrew AI in diagnostic pathology. J Pathol. 2025;266(4-5):390-394. [FREE Full text] [CrossRef] [Medline]
- Ørngreen R, Levinsen KT. Workshops as a research methodology. Electron J E-Learn. 2017;15(1):70-81. [FREE Full text]
- Concannon TW, Meissner P, Grunbaum JA, McElwee N, Guise J, Santa J, et al. A new taxonomy for stakeholder engagement in patient-centered outcomes research. J Gen Intern Med. 2012;27(8):985-991. [FREE Full text] [CrossRef] [Medline]
- Understanding healthcare workers' confidence in artificial intelligence (AI) (Part 1). NHS Artificial Intelligence (AI) Lab, Health Education England (HEE). 2022. URL: https://digital-transformation.hee.nhs.uk/building-a-digital-workforce/dart-ed/horizon-scanning/understanding-healthcare-workers-confidence-in-ai [accessed 2025-12-20]
- Hess T, Matt C, Benlian A, Wiesböck F. Options for formulating a digital transformation strategy. In: Strategic Information Management: Theory and Practice. Oxfordshire, UK. Routledge; 2020:494.
- Kejriwal M. AI in practice and implementation: issues and costs. In: Artificial Intelligence for Industries of the Future. Cham, Switzerland. Springer International Publishing; 2023:25-45.
- Džakula A, Relić D. Health workforce shortage - doing the right things or doing things right? Croat Med J. 2022;63(2):107-109. [FREE Full text] [CrossRef] [Medline]
- Global strategy on human resources for health: workforce 2030. World Health Organization. 2016. URL: https://iris.who.int/handle/10665/250368 [accessed 2025-02-13]
- Rony MKK, Parvin MR, Wahiduzzaman M, Debnath M, Bala SD, Kayesh I. "I wonder if my years of training and expertise will be devalued by machines": concerns about the replacement of medical professionals by artificial intelligence. SAGE Open Nurs. 2024;10:23779608241245220. [FREE Full text] [CrossRef] [Medline]
- Kochan TA. Artificial intelligence and the future of work: a proactive strategy. AI Mag. 2021;42(1):16-24. [CrossRef]
- Feng J, Phillips RV, Malenica I, Bishara A, Hubbard AE, Celi LA, et al. Clinical artificial intelligence quality improvement: towards continual monitoring and updating of AI algorithms in healthcare. NPJ Digit Med. 2022;5(1):66. [FREE Full text] [CrossRef] [Medline]
- Kumawat E, Datta A, Prentice C, Leung R. Artificial intelligence through the lens of hospitality employees: a systematic review. Int J Hosp Manag. 2025;124:103986. [CrossRef]
- de Hond AAH, Leeuwenberg AM, Hooft L, Kant IMJ, Nijman SWJ, van Os HJA, et al. Guidelines and quality criteria for artificial intelligence-based prediction models in healthcare: a scoping review. NPJ Digit Med. 2022;5(1):2. [FREE Full text] [CrossRef] [Medline]
- Lehne M, Sass J, Essenwanger A, Schepers J, Thun S. Why digital medicine depends on interoperability. NPJ Digit Med. 2019;2(1):79. [FREE Full text] [CrossRef] [Medline]
- Regulation (EU) 2017/746 of 5 April 2017 on in vitro diagnostic medical devices and repealing directive 98/79/EC and commission decision 2010/227/EU. European Parliament and Council of the European Union. URL: https://eur-lex.europa.eu/eli/reg/2017/746/oj/eng [accessed 2025-12-20]
- Lohr S. What ever happened to IBM's Watson? The New York Times. 2021. URL: https://www.nytimes.com/2021/07/16/technology/what-happened-ibm-watson.html [accessed 2025-12-20]
- Guidance on the health institution exemption under Article 5(5) of Regulation (EU) 2017/745 and Regulation (EU) 2017/746 (MDCG 2023-1). Medical Device Coordination Group (MDCG). 2023. URL: https://dskb.dk/wp-content/uploads/2021/09/In-house-guidance_stakeholders.pdf [accessed 2025-12-20]
- Boyle G, Melvin T, Verdaasdonk RM, Van Boxtel RA, Reilly RB. Hospitals as medical device manufacturers: keeping to the medical device regulation (MDR) in the EU. BMJ Innov. 2024;10(3):74-80. [CrossRef]
- Shaping the hospital of tomorrow with artificial intelligence [Mit Künstlicher Intelligenz das Krankenhaus von morgen gestalten]. SmartHospital.NRW. URL: https://smarthospital.nrw/ [accessed 2025-12-20]
- Good machine learning practice for medical device development: guiding principles. Medicines and Healthcare products Regulatory Agency (MHRA). URL: https://www.gov.uk/government/publications/good-machine-learning-practice-for-medical-device-development-guiding-principles/good-machine-learning-practice-for-medical-device-development-guiding-principles#guiding-principles [accessed 2021-10-27]
- Arun S, Grosheva M, Kosenko M, Robertus JL, Blyuss O, Gabe R, et al. Systematic scoping review of external validation studies of AI pathology models for lung cancer diagnosis. NPJ Precis Oncol. 2025;9(1):166. [FREE Full text] [CrossRef] [Medline]
- Wiens J, Saria S, Sendak M, Ghassemi M, Liu VX, Doshi-Velez F, et al. Do no harm: a roadmap for responsible machine learning for health care. Nat Med. 2019;25(9):1337-1340. [CrossRef] [Medline]
- McCradden MD, London AJ, Gichoya JW, Sendak M, Erdman L, Stedman I, et al. CANAIRI: the collaboration for translational artificial intelligence trials in healthcare. Nat Med. 2025;31(1):9-11. [CrossRef] [Medline]
- Hellmeier F, Brosien K, Eickhoff C, Meyer A. Beyond one-time validation: a framework for adaptive validation of prognostic and diagnostic AI-based medical devices. ArXiv. Preprint posted online on September 7, 2024. [CrossRef]
- Farquhar S, Kossen J, Kuhn L, Gal Y. Detecting hallucinations in large language models using semantic entropy. Nature. 2024;630(8017):625-630. [FREE Full text] [CrossRef] [Medline]
- Griot M, Hemptinne C, Vanderdonckt J, Yuksel D. Large language models lack essential metacognition for reliable medical reasoning. Nat Commun. 2025;16(1):642. [FREE Full text] [CrossRef] [Medline]
- Omar M, Soffer S, Agbareia R, Bragazzi NL, Apakama DU, Horowitz CR, et al. Sociodemographic biases in medical decision making by large language models. Nat Med. 2025;31(6):1873-1881. [CrossRef] [Medline]
- Alber DA, Yang Z, Alyakin A, Yang E, Rai S, Valliani AA, et al. Medical large language models are vulnerable to data-poisoning attacks. Nat Med. 2025;31(2):618-626. [CrossRef] [Medline]
- Hager P, Jungmann F, Holland R, Bhagat K, Hubrecht I, Knauer M, et al. Evaluation and mitigation of the limitations of large language models in clinical decision-making. Nat Med. 2024;30(9):2613-2622. [CrossRef] [Medline]
- Tam TYC, Sivarajkumar S, Kapoor S, Stolyar AV, Polanska K, McCarthy KR, et al. A framework for human evaluation of large language models in healthcare derived from literature review. NPJ Digit Med. 2024;7(1):258. [FREE Full text] [CrossRef] [Medline]
- Mehandru N, Miao BY, Almaraz ER, Sushil M, Butte AJ, Alaa A. Evaluating large language models as agents in the clinic. NPJ Digit Med. 2024;7(1):84. [FREE Full text] [CrossRef] [Medline]
- ISO/IEC 42001:2023 Information technology - artificial intelligence - management system. International Organization for Standardization. 2023. URL: https://www.iso.org/standard/81230.html#lifecycle [accessed 2025-12-20]
- FG-AI4H DEL7.4 - Clinical evaluation of AI for health. International Telecommunication Union. 2023. URL: https://www.itu.int/pub/T-FG-AI4H-2023-3 [accessed 2025-12-20]
- Welzel C, Cotte F, Wekenborg M, Vasey B, McCulloch P, Gilbert S. Holistic human-serving digitization of health care needs integrated automated system-level assessment tools. J Med Internet Res. 2023;25:e50158. [FREE Full text] [CrossRef] [Medline]
- Mathias R, Vasey B, Chalkidou A, Riedemann L, Melvin T, Gilbert S. Safe AI-enabled digital health technologies need built-in open feedback. Nat Med. 2025;31(2):370-375. [CrossRef] [Medline]
- Floridi L, Holweg M, Taddeo M, Amaya Silva J, Mökander J, Wen Y. capAI - A procedure for conducting conformity assessment of AI systems in line with the EU artificial intelligence act. SSRN Electron J. 2022:91. [CrossRef]
- Directorate General for Communications Networks, Content and Technology. The assessment list for trustworthy artificial intelligence (ALTAI) for self assessment. European Commission. 2020. URL: https://data.europa.eu/doi/10.2759/002360 [accessed 2025-06-07]
- Coeckelbergh M. Artificial intelligence, responsibility attribution, and a relational justification of explainability. Sci Eng Ethics. 2020;26(4):2051-2068. [FREE Full text] [CrossRef] [Medline]
- Santoni de Sio F, Mecacci G. Four responsibility gaps with artificial intelligence: why they matter and how to address them. Philos Technol. 2021;34(4):1057-1084. [FREE Full text] [CrossRef]
- Beauchamp T. The principle of beneficence in applied ethics. The Stanford Encyclopedia of Philosophy. 2019. URL: https://plato.stanford.edu/archives/spr2019/entries/principle-beneficence/ [accessed 2025-12-20]
- Varkey B. Principles of clinical ethics and their application to practice. Med Princ Pract. 2021;30(1):17-28. [FREE Full text] [CrossRef] [Medline]
- Porsdam Mann S, Savulescu J, Sahakian BJ. Facilitating the ethical use of health data for the benefit of society: electronic health records, consent and the duty of easy rescue. Philos Trans A Math Phys Eng Sci. 2016;374(2083):20160130. [FREE Full text] [CrossRef] [Medline]
- Placani A. Anthropomorphism in AI: hype and fallacy. AI Ethics. 2024;4(3):691-698. [FREE Full text] [CrossRef]
- McLennan S, Fiske A, Tigard D, Müller R, Haddadin S, Buyx A. Embedded ethics: a proposal for integrating ethics into the development of medical AI. BMC Med Ethics. 2022;23(1):6. [FREE Full text] [CrossRef] [Medline]
- Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act). European Parliament and Council of the European Union. 2024. URL: https://eur-lex.europa.eu/eli/reg/2024/1689/oj/eng [accessed 2025-12-20]
- Early successes, untapped potential, lingering questions: AI adoption in healthcare report 2024. Healthcare Information and Management Systems Society (HIMSS), Medscape. 2024. URL: https://cdn.sanity.io/files/sqo8bpt9/production/68216fa5d161adebceb50b7add5b496138a78cdb.pdf [accessed 2025-12-20]
- Schubert T, Oosterlinck T, Stevens RD, Maxwell PH, van der Schaar M. AI education for clinicians. EClinicalMedicine. 2025;79:102968. [FREE Full text] [CrossRef] [Medline]
- Zirar A, Ali SI, Islam N. Worker and workplace artificial intelligence (AI) coexistence: emerging themes and research agenda. Technovation. 2023;124:102747. [CrossRef]
- Elendu C, Amaechi DC, Okatta AU, Amaechi EC, Elendu TC, Ezeh CP, et al. The impact of simulation-based training in medical education: a review. Medicine (Baltimore). 2024;103(27):e38813. [FREE Full text] [CrossRef] [Medline]
- So HY, Chen PP, Wong GKC, Chan TTN. Simulation in medical education. J R Coll Physicians Edinb. 2019;49(1):52-57. [CrossRef]
- Datta R, Upadhyay K, Jaideep C. Simulation and its role in medical education. Med J Armed Forces India. 2012;68(2):167-172. [FREE Full text] [CrossRef] [Medline]
- Thistlethwaite JE, Davies D, Ekeocha S, Kidd JM, MacDougall C, Matthews P, et al. The effectiveness of case-based learning in health professional education. A BEME systematic review: BEME Guide No. 23. Med Teach. 2012;34(6):e421-e444. [CrossRef]
- Mukurunge E, Reid M, Fichardt A, Nel M. Interactive workshops as a learning and teaching method for primary healthcare nurses. Health SA. 2021;26:1643. [FREE Full text] [CrossRef] [Medline]
- Daye D, Wiggins WF, Lungren MP, Alkasab T, Kottler N, Allen B, et al. Implementation of clinical artificial intelligence in radiology: who decides and how? Radiology. 2022;305(3):555-563. [FREE Full text] [CrossRef] [Medline]
- Hoff KA, Bashir M. Trust in automation: integrating empirical evidence on factors that influence trust. Hum Factors. 2015;57(3):407-434. [CrossRef] [Medline]
- Jiang T, Sun Z, Fu S, Lv Y. Human-AI interaction research agenda: a user-centered perspective. Data Inf Manag. 2024;8(4):100078. [CrossRef]
- Vered M, Livni T, Howe PDL, Miller T, Sonenberg L. The effects of explanations on automation bias. Artificial Intelligence. 2023;322:103952. [CrossRef]
- Choudhury A, Chaudhry Z. Large language models and user trust: consequence of self-referential learning loop and the deskilling of health care professionals. J Med Internet Res. 2024;26:e56764. [FREE Full text] [CrossRef] [Medline]
- Ng MY, Kapur S, Blizinsky KD, Hernandez-Boussard T. The AI life cycle: a holistic approach to creating ethical AI for health decisions. Nat Med. 2022;28(11):2247-2249. [FREE Full text] [CrossRef] [Medline]
- Liu X, Glocker B, McCradden MM, Ghassemi M, Denniston AK, Oakden-Rayner L. The medical algorithmic audit. Lancet Digit Health. 2022;4(5):e384-e397. [CrossRef]
- Faust L, Wilson P, Asai S, Fu S, Liu H, Ruan X, et al. Considerations for quality control monitoring of machine learning models in clinical practice. JMIR Med Inform. 2024;12:e50437. [FREE Full text] [CrossRef] [Medline]
- Planning for managing legacy systems and decommissioning digital healthcare technologies. NHS AI and Digital Regulations Service for Health and Social Care. URL: https://www.digitalregulations.innovation.nhs.uk/regulations-and-guidance-for-adopters/all-adopters-guidance/planning-for-managing-legacy-systems-and-decommissioning-digital-healthcare-technologies/ [accessed 2023-11-13]
- Sharma A, Nayancy, Verma R. The Confluence of Cryptography, Blockchain and Artificial Intelligence. Florida, USA. CRC Press; 2025.
- MHRA, Brunel University. Project report: research into methodology for determining significant change in the way that an adaptive AI algorithm medical device is working and how such change should be regulated. GOV.UK. URL: https://www.gov.uk/government/publications/software-and-artificial-intelligence-ai-as-a-medical-device/software-and-artificial-intelligence-ai-as-a-medical-device [accessed 2025-03-29]
- The most trusted AI in mental healthcare: scale behavioral health with clinical AI. Limbic. 2025. URL: https://www.limbic.ai/ [accessed 2025-12-20]
- We provide validated information for healthcare professionals. Prof. Valmed - Validated Medical Information GmbH. URL: https://profvalmed.com/ [accessed 2025-12-20]
- Frequently asked questions. deepc GmbH. URL: https://www.deepc.ai/learn/faq [accessed 2025-12-20]
- Study on the deployment of AI in healthcare: final report. Publications Office of the European Union. 2025. URL: https://data.europa.eu/doi/10.2875/2169577 [accessed 2025-08-07]
- Eskofier BM, Klucken J. Predictive models for health deterioration: understanding disease pathways for personalized medicine. Annu Rev Biomed Eng. 2023;25(1):131-156. [FREE Full text] [CrossRef] [Medline]
- Goldfarb A, Taska B, Teodoridis F. Artificial intelligence in health care? Evidence from online job postings. AEA Pap Proc. 2020;110:400-404. [FREE Full text] [CrossRef]
- Wu K, Wu E, Theodorou B, Liang W, Mack C, Glass L, et al. Characterizing the clinical adoption of medical AI devices through U.S. insurance claims. NEJM AI. 2024;1(1). [CrossRef]
- Ulfert AS, Le Blanc P, González-Romá V, Grote G, Langer M. Are we ahead of the trend or just following? The role of work and organizational psychology in shaping emerging technologies at work. Eur J Work Organ Psychol. 2024;33(2):120-129. [CrossRef]
- Gilbert S, Anderson S, Daumer M, Li P, Melvin T, Williams R. Learning from experience and finding the right balance in the governance of artificial intelligence and digital health technologies. J Med Internet Res. 2023;25:e43682. [FREE Full text] [CrossRef] [Medline]
- Ganesan S, Somasiri N. Navigating the integration of machine learning in healthcare: challenges, strategies, and ethical considerations. J Comput Cogn Eng. 2024. [CrossRef]
- Developing healthcare workers' confidence in artificial intelligence (AI) (Part 2). NHS Artificial Intelligence (AI) Lab, Health Education England (HEE). 2023. URL: https://digital-transformation.hee.nhs.uk/building-a-digital-workforce/dart-ed/horizon-scanning/developing-healthcare-workers-confidence-in-ai [accessed 2025-12-20]
- Reddy S, Rogers W, Makinen VP, Coiera E, Brown P, Wenzel M, et al. Evaluation framework to guide implementation of AI systems into healthcare settings. BMJ Health Care Inform. 2021;28(1):e100444. [FREE Full text] [CrossRef] [Medline]
- Moreno-Sánchez PA, Ser JD, Gils MV, Hernesniemi J. A design framework for operationalizing trustworthy artificial intelligence in healthcare: requirements, tradeoffs and challenges for its clinical adoption. Information Fusion. 2025;127:103812. [CrossRef]
- Nair M, Nygren J, Nilsen P, Gama F, Neher M, Larsson I, et al. Critical activities for successful implementation and adoption of AI in healthcare: towards a process framework for healthcare organizations. Front Digit Health. 2025;7:1550459. [FREE Full text] [CrossRef] [Medline]
- Nilsen P, Svedberg P, Neher M, Nair M, Larsson I, Petersson L, et al. A framework to guide implementation of AI in health care: protocol for a cocreation research project. JMIR Res Protoc. 2023;12:e50216. [FREE Full text] [CrossRef] [Medline]
- Dagan N, Devons-Sberro S, Paz Z, Zoller L, Sommer A, Shaham G, et al. Evaluation of AI solutions in health care organizations — The OPTICA tool. NEJM AI. 2024;1(9). [CrossRef]
- Mittermaier M, Raza M, Kvedar JC. Collaborative strategies for deploying AI-based physician decision support systems: challenges and deployment approaches. NPJ Digit Med. 2023;6(1):137. [FREE Full text] [CrossRef] [Medline]
- Davahli MR, Karwowski W, Fiok K, Wan T, Parsaei HR. Controlling safety of artificial intelligence-based systems in healthcare. Symmetry. 2021;13(1):102. [FREE Full text] [CrossRef]
- Labkoff S, Oladimeji B, Kannry J, Solomonides A, Leftwich R, Koski E, et al. Toward a responsible future: recommendations for AI-enabled clinical decision support. J Am Med Inform Assoc. 2024;31(11):2730-2739. [CrossRef] [Medline]
- Lekadir K, Osuala R, Gallin C. FUTURE-AI: guiding principles and consensus recommendations for trustworthy artificial intelligence in medical imaging. ArXiv. Preprint posted online on September 20, 2021. [CrossRef]
Abbreviations
| AI: artificial intelligence |
| CE: Conformité Européenne |
| EU: European Union |
| EURAID: European Responsible AI Development |
| FG-AI4H: Focus Group on AI for Health |
| HCP: health care professional |
| ISO/IEC: International Organization for Standardization/International Electrotechnical Commission |
| LLM: large language model |
| MDR: Medical Device Regulation |
| NHS: National Health Service |
| SMART: specific, measurable, attainable, relevant, and time-bound |
| TEU: Treaty on European Union |
| XAI: explainable AI |
Edited by J Sarvestan; submitted 16.Jul.2025; peer-reviewed by I Schlömer, K-H Lin; comments to author 22.Aug.2025; revised version received 05.Nov.2025; accepted 06.Nov.2025; published 29.Jan.2026.
Copyright©Anett Schönfelder, Maria Eberlein-Gonska, Manfred Hülsken-Giesler, Florian Jovy-Klein, Jakob Nikolas Kather, Elisabeth Kohoutek, Thomas Lennefer, Elisabeth Liebert, Myriam Lipprandt, Rebecca Mathias, Hannah Sophie Muti, Julius Obergassel, Thomas Reibel, Ulrike Rösler, Moritz Schneider, Larissa Schlicht, Hannes Schlieter, Malte L Schmieding, Nils Schweingruber, Martin Sedlmayr, Reinhard Strametz, Barbara Susec, Magdalena Katharina Wekenborg, Eva Weicken, Katharina Weitz, Anke Diehl, Stephen Gilbert. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 29.Jan.2026.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research (ISSN 1438-8871), is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.