Balancing Value and Risk: Clinicians’ Perceptions and Adoption of AI-Enabled Clinical Decision Support Systems
Date Submitted: Jan 29, 2026
Open Peer Review Period: Jan 30, 2026 - Mar 27, 2026
Background: The increasing adoption of Artificial Intelligence (AI) in healthcare, particularly within Clinical Decision Support Systems (CDSSs), is transforming clinical practice and decision-making. Although AI-enabled CDSSs hold the potential to improve diagnostic accuracy, operational efficiency, and patient outcomes, their implementation also raises ethical, technical, and regulatory concerns that affect healthcare professionals’ willingness to adopt these systems.
Objective: Building on a value-based perspective, this study integrates constructs from the Unified Theory of Acceptance and Use of Technology (UTAUT) as determinants of perceived benefits and a risk-based perception model as determinants of perceived risks, developing a unified model of clinicians’ behavioural intention to adopt AI-enabled CDSSs.
Methods: A self-administered, cross-sectional survey was distributed to licensed healthcare professionals to examine how validated factors shape perceptions of the risks and benefits of AI-enabled CDSSs. Responses were collected from 215 clinicians across Italy and the United Kingdom, recruited through email invitations, academic conferences, and direct approaches within healthcare settings.
Results: Perceived Benefits emerged as the strongest positive predictor of clinicians’ intention to use AI-enabled CDSSs (β=.45, p<.001), whereas Perceived Risks had a significant negative effect (β=-.18, p=.002). Performance Expectancy and Facilitating Conditions significantly increased adoption intentions, whereas Effort Expectancy and Social Influence were not significant. Among the risk antecedents, Perceived Performance Anxiety, Communication Barriers, and Liability Concerns were significant predictors of Perceived Risks. The model explained 46% of the variance in intention to use AI-enabled CDSSs.
Conclusions: The findings offer theoretical and practical insights into the human factors influencing AI adoption in clinical practice, underscoring the importance of value alignment, professional accountability, and institutional readiness, and highlighting the need to foster clinician trust in AI tools beyond technical performance alone.
