Background: With the rapid development of artificial intelligence (AI) and related technologies, AI algorithms are being embedded into various health information technologies that assist clinicians in clinical decision making.
Objective: This study aimed to explore how clinicians perceive AI assistance in diagnostic decision making and suggest the paths forward for AI-human teaming for clinical decision making in health care.
Methods: This study used a mixed methods approach, utilizing hierarchical linear modeling and sentiment analysis through natural language understanding techniques.
Results: A total of 114 clinicians participated in online simulation surveys in 2020 and 2021. These clinicians studied family medicine and used AI algorithms to aid in patient diagnosis. Their overall sentiment toward AI-assisted diagnosis was positive and comparable with that toward diagnoses made without the assistance of AI. However, AI-guided decision making was not congruent with the way clinicians typically made decisions in diagnosing illnesses. In a quantitative survey, clinicians reported perceiving current AI assistance as unlikely to enhance their diagnostic capability and as negatively influencing their overall performance (β=–0.421, P=.02). Instead, clinicians’ diagnostic capabilities tended to be associated with well-known parameters, such as education, age, and daily habits of technology use on social media platforms.
Conclusions: This study elucidated clinicians’ current perceptions and sentiments toward AI-enabled diagnosis. Although the sentiment was positive, the current form of AI assistance may not be linked with efficient decision making, as AI algorithms are not well aligned with subjective human reasoning in clinical diagnosis. Developers and policy makers in health care could gather behavioral data from clinicians in various disciplines to help align AI algorithms with the unique subjective patterns of reasoning that humans employ in clinical diagnosis.
Artificial intelligence (AI) and related technologies are rapidly evolving as part of workplace technology to support clinicians’ decision making [, ]. AI refers to the use of a collection of intelligent technologies in “the science and engineering of intelligent machines and computational part of the ability to achieve goals in the world” [ ] or to “model intelligent behavior with minimal human intervention” [ - ]. Its influence has permeated retail, marketing, and human resource management [ ]. Specifically in health care, considering the risks of incorrect predictions, the Consumer Technology Association (CTA) has reconceptualized health care AI as “assistive intelligence,” described as “a category of AI-enabled software that ‘informs’ or ‘drives’ diagnosis or clinical management of a patient,” in which “the health care provider makes the ultimate decisions before clinical action is taken” [ ]. In both clinical and administrative areas of health care organizations, AI algorithms are increasingly embedded in health information technologies (HITs) to assist with clinical functions such as data monitoring, clinical research, and diagnostics, and to support compliance for billing and administrative tasks [ , ]. AI technology in health care is therefore expected to streamline clinicians’ clinical and administrative decision making by providing prompt data analyses and the necessary recommendations to make health care more efficient and cost-effective.
Particularly within the clinical side of health care organizations, clinical decision making is based on subjective and objective patient information and other influencing variables. Clinical decision making generally refers to the process of making a choice among options aimed at diagnosis, intervention, interaction, and evaluation within a context in which numerous interactions exist among stakeholders, background knowledge, and social and technological factors. In other words, clinicians’ decision making regarding patient health outcomes or health care diagnoses is largely influenced by numerous factors such as the idiosyncratic characteristics of patients and clinicians, consultation environments, and the technology used [ ]. In this information-rich context, the ability to access and manage appropriate and accurate information is critical for clinicians to make decisions on the patient’s behalf. As reflected in the CTA definition, AI algorithms under these circumstances need to complement human cognitive processing of electronic medical records, multimedia images, or laboratory results. Once successfully incorporated, AI assistance has been shown to help clinicians reduce adverse events outside of the intensive care unit by 44% [ ].
Thus far, views have been mixed on how AI may “assist” or “team up with” clinicians throughout the medical assessment and diagnosis processes. On the one hand, AI allows clinicians to freely follow their own paths of inquiry through patient investigations and select from a database of questions on patient history, physical examinations, and laboratory tests to make patient-specific diagnostic decisions. On the other hand, training AI with a collection of data from the clinicians’ own subjective diagnoses of various patient cases may not be ideal for fully utilizing AI techniques [ ]. Indeed, inconsistent performance has also been reported [ ]. Given that 83% of health care organizations have a strategy for AI investment and deployment in the coming years [ ], particularly for electronic health record management and diagnosis [ ], health care practice and training will follow such trends. However, as core users of AI, incumbent stakeholders in the current health care system have differential levels of technological readiness; clinicians’ limited ability to incorporate analytic results from AI may therefore keep organizations from leveraging its full potential. In other words, care providers’ positive attitudes toward and experiences with the assistance offered by AI-enabled technologies in the process of clinical diagnosis can help health care organizations reap the benefits of AI investments by enhancing consistency and quality of care and by reducing costs [ ].
However, research on whether and how clinicians assess AI assistance in decision-making processes remains limited. Thus far, in the health care AI literature, clinicians’ clinical task performance is known to be influenced by clinical task types, the evidence-based standards that support AI’s data structure, and clinical integration capacity within health care institutions. One stream of AI literature, which has focused on the types of tasks and medical diagnosis, claims that AI algorithms enhance human clinicians’ capabilities while improving efficiency in incident reporting and reducing adverse events [ , ]. However, the quality of health care diagnostic tasks between clinicians and AI continues to be characterized by broad discrepancies [ - ]. According to the literature on AI’s data structure, health information management professionals’ ability to improve coded data quality and data patterns is necessary to ensure the optimal adaptation of teaming with AI [ ]. Moreover, per the literature on the clinical integration of AI, clinicians’ perceptions should be recognized to enhance clinical diagnosis by AI [ , - ].
In summary, our literature review revealed that, although advanced AI algorithms have the potential to enhance the quality and efficiency of health care, the critical users of AI algorithms are clinicians whose roles are to understand and communicate with AI. Nonetheless, there has been limited focus in AI health care research on clinicians’ attitudes toward AI assistance in actual diagnostic decision making, which may be due to the infancy of such AI-assisted tools, lack of trust in AI from health care stakeholders, and potential health-related risks. Therefore, there is reasonable urgency to examine clinicians’ attitudes and sentiments toward AI assistance in clinical decision making, to address the gaps in the existing body of knowledge.
Background and Theory
Medical Diagnostic Knowledge Theory
Clinicians’ decision making includes diagnoses or high-complexity decisions across medical specialties. A diagnosis is viewed as an iterative process of “task categorization” by decomposing patients’ health symptoms into different task classes and matching the given conditions with the predefined disease categories based on their respective hypotheses. Clinicians are expected to predict and determine the course of action based on their knowledge and experience and by utilizing information on the features of the focal clinical situations. The theory of clinical diagnosis revolves around 2 important concepts: clinical knowledge and clinical reasoning. The former refers to the fundamental base of knowledge, and it can be subdivided into 3 categories: conceptual, strategic, and conditional knowledge [ ]. The latter refers to a holistic process of hypothesis generation, pattern recognition, context formulation, diagnostic test interpretation, differential diagnosis, and diagnostic verification, all of which provide both the language and methods of clinical problem solving [ ]. Once clinicians have obtained clinical knowledge, they then organize it using mental representations, medical scripts, or clinical examples, followed by cognitive processes to translate medical information into testable hypotheses in each context and evaluate their own clinical reasoning for clinical diagnosis [ ]. Specifically, the conceptual framework of a clinical diagnosis includes the “hypothetico-deductive model” (ie, generation of multiple competing hypotheses from preliminary information provided by patients), decision analysis, pattern recognition, and intuition [ ]. During this process, challenges exist as to how clinicians match patients’ cases with known patterns and focus on complex patient information among many treatment options [ ].
In summary, a typical clinical decision-making process consists of the speedy processing of the situational features, assessment of the relevant hypotheses, and investigations and treatments in a sequence [ ].
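The task categorization and pattern-matching steps described above can be sketched in a few lines of code. This is a deliberately simplified illustration, not the study's method: the disease profiles and symptom sets below are invented placeholders, and each candidate diagnosis is scored only by the overlap between observed symptoms and a predefined disease category.

```python
# Illustrative sketch only: disease profiles and symptoms are invented
# placeholders, not clinical data or any validated diagnostic algorithm.

DISEASE_PROFILES = {
    "lumbar strain": {"back pain", "pain on movement", "dull ache"},
    "kidney stone": {"back pain", "flank pain", "nausea"},
    "sciatica": {"back pain", "leg numbness", "shooting pain"},
}

def rank_hypotheses(observed_symptoms):
    """Score each predefined disease category by symptom overlap
    (Jaccard similarity), mimicking the pattern-recognition step."""
    observed = set(observed_symptoms)
    scores = {}
    for disease, profile in DISEASE_PROFILES.items():
        union = observed | profile
        scores[disease] = len(observed & profile) / len(union) if union else 0.0
    # Order the competing hypotheses from most to least plausible
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

hypotheses = rank_hypotheses({"back pain", "pain on movement", "dull ache"})
```

In the hypothetico-deductive terms above, the ranked list corresponds to the set of competing hypotheses a clinician would then test and refine with further inquiry.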
Such clinical reasoning by individual clinicians cannot be fully supported by AI algorithms’ generalized task categorization, pattern recognition, and matching. The current level of AI performs well for clinical knowledge creation through simple interpretations of medical images, slides, and graphs, as well as the detection of complex relational time-series patterns within data sets [ , ]. AI algorithms for clinical reasoning (eg, IBM Watson and tools ranging from chatbots to smartphone apps) are believed to function as decision support tools for making diagnoses, providing patient consultation, and detecting certain medical symptoms [ ].
In his book Deep Medicine, Eric Topol highlighted the role of AI in clinical diagnosis as augmented intelligence: some clinical knowledge formation via feature selection, task categorization, and pattern recognition can be aided by AI so that clinicians can perform clinical reasoning and make a clinical diagnosis. A clinical diagnosis is a high-complexity decision that needs to be based on information inquiry using idiosyncratic patient data and threshold-based decision rules. Without clear-cut threshold rules for decisions, AI may not fully process and analyze idiosyncratic patient cases and inaccurate descriptions from patients; it may likewise fail to incorporate the complexity of the diagnosis and predict patients’ adverse conditions [ ]. Therefore, the role of AI in clinical reasoning or clinical diagnosis is expected to be more assistive, and clinicians should communicate well with AI to prevent any adverse effects of flawed clinical reasoning on patients’ health outcomes.
Artificial Intelligence Technology Use in Practice and in Simulation
In practice and in simulation, clinicians are users of, or collaborators with, such “assistive” AI, contingent upon the extent and scope of the technologies as defined within the context. In other words, AI can refer to algorithms embedded in existing HITs or to holistic technological artifacts newly implemented as standalone software. As such, the use of AI is no longer limited to specific HITs in practice. Clinicians may have unintentionally encountered various AI technologies through system interfaces and the AI decision-making logic embedded within systems used in medicine. Owing to such mixed definitions of AI in health care, the effects of AI-assisted diagnostic performance have been mixed in prior studies. Positive performance is expected when clinicians trust and expect the performance of AI [ ], whereas information overload driven by AI algorithms can cause a loss of situational awareness, thereby negatively affecting clinicians’ decision-making abilities [ ]. In addition, patients perceive clinicians as more trustworthy than AI [ ], and AI algorithms may not be effective in diagnosing unique cases. Thus, clinicians will need to diagnose and communicate with patients alongside AI, at least for a while. The manner in which clinicians make clinical diagnoses with the support of AI technology and perceive such diagnostic decisions merits further investigation.
In contrast to these practical constraints, observing and evaluating clinicians’ health care diagnostic behaviors in safer simulation contexts has been a method of inducing behavioral change. As what is learned throughout clinical health care training has been associated with what is likely to be carried out in clinical practice, and similar technologies span practice and simulation contexts [ ], clinicians’ AI attitudes and performance have been studied and predicted from their use of simulated AI [ ]. Thus, one can examine the way clinicians have used and familiarized themselves with AI in simulation and extrapolate their behaviors in practice.
Given that clinicians’ behaviors and readiness to use this technology can be predicted from simulation experiences, we turned to a context of clinical diagnosis simulations—whereby clinicians have familiarized themselves with AI—and evaluated their decisions in a safe and controlled context. Subsequently, we examined whether and how clinicians perceive AI assistance and how their perceptions may differ from other non-AI-based diagnostic situations.
Taken together, in simulation contexts, this study explored users’ detailed experiences with AI-enabled patient diagnosis and examined the effect of AI assistance on diagnostic performance. Thus, the following research questions were formulated to shed light on clinicians’ attitudes and behavioral characteristics regarding AI assistance in patient diagnosis:
- Research question 1. During patient diagnosis, what are clinicians’ sentiments toward AI assistance, and how do their sentiments differ from other non-AI-based diagnostic situations?
- Research question 2. How does current AI assistance affect clinicians’ perceptions of enhancing diagnosis and future care task performance?
To this end, our target population was clinicians who had experienced patient diagnosis encounters using AI-based diagnostic technology, allowing us to understand clinicians’ perceptions of AI-assisted diagnosis in a controlled and safe context in which patient care was not compromised by potentially incorrect AI algorithms. We recruited clinicians who met this requirement and had used AI-based diagnostic technology in live patient simulations in a nursing simulation lab. We accessed 3 cohorts of family nurse practitioner (FNP) students who experienced 3 patient diagnosis simulations in the lab: encounters with live standard patients, encounters with AI assistance in diagnosing patients using multimedia patient information, and encounters with patient simulators (ie, lifelike mannequin patients). Each qualified participant was incentivized with a US $5 Amazon gift card for the completion and quality of their responses. Consequently, 144 clinicians were selected and invited to participate in the online simulation surveys.
In the simulation lab at the College of Nursing, AI-enabled diagnosis technology has been used in on-ground lessons, in which faculty and students collaborate to complete virtual patient cases, and in home-based diagnostic decision making [, , ]. In our university, after an AI system developer implemented the focal AI technology, trained the graduate nursing faculty on it, and completed successful go-live events, the technology was integrated into the clinical simulations for the students. The software is based on data from hundreds of actual patients compiled by experts and on AI physiology algorithms [ ]. The use of this system is particularly emphasized in the final year of the program to promote skills building and practice in clinical diagnosis.
This interactive AI incorporates intelligence from both humans and AI physiology algorithms such that evaluators on the other end of the system can recognize any patterns demonstrated by those specific users in the process of clinical diagnosis. Clinicians are known to experience some technical features of AI, such as search ability, knowledge expression function, reasoning ability, abstraction ability, speech recognition ability, and the ability to process fuzzy clinical information. In our context, reasoning ability was embedded by FNP faculty in the AI-enabled diagnosis technology so that the participants could make diagnostic decisions with AI assistance and obtain feedback after completing each patient case.
Our online simulation surveys consisted of 2 parts: survey instruments and open-ended simulation questions. First, for the survey instruments, each clinician reported their perception of AI assistance and perceived performance of patient diagnosis and overall clinical tasks using a 7-point Likert scale ranging from “strongly disagree” (1) to “strongly agree” (7). The table below presents the key survey items sourced from existing information systems (IS) literature.
As shown in the table, our study had 2 dependent variables. One was diagnostic performance, which we defined as the ability to provide health care consultations and diagnose health-related issues properly both in person and via online or telehealth platforms [ ]. We contextualized and operationalized clinicians’ diagnostic capabilities in a virtual context (mean 5.39, SD 0.13) [ ]. The other dependent variable was clinical task performance, the survey items for which were adapted from the IS literature (mean 5.48, SD 0.12) [ ].
We included control variables that accounted for personal technological traits and demographics. On the one hand, personal technological traits were measured via individual levels of technological advancement, such as personal innovativeness, technological habits [ ], and computer literacy [ ]. In the IS literature, personal innovativeness and computer literacy have been salient in explaining the technology adoption behaviors of individual users. Here, we defined personal innovativeness as “the willingness of an individual to try out any new information technology” [ ] and computer literacy as “a judgement of clinicians’ capability to use a computer” [ ]. We also included technology habits to control for the participants’ potential automatic reactions to the use of AI-enabled diagnosis technology, which includes multimedia information and AI algorithms similar to those on social media platforms [ ]. Lastly, participants’ demographic characteristics, such as self-identified gender, age, income, education, and occupation, were included in the survey to control for potential confounding effects.
Next, in the part listing open-ended simulation questions, each clinician was asked to describe their experience with patient diagnosis under 3 different diagnostic modalities: diagnosing a live patient, diagnosing a human-like mannequin, and completing AI-based diagnostic simulations with AI assistance. In the 3 simulation prompts, participants were asked to recall the patient cases they completed during the semester and to write comments using keywords or key phrases. For the AI case in particular, the scenario prompt reads as given below. After reading this prompt, participants described their “favorite” as well as “least favorite” diagnosis experiences using keywords or key phrases in 2 open-ended questions. Each clinician’s 6 diagnostic encounters were recorded as textual narratives, along with the 3 different simulation contexts of the patient diagnoses.
|Survey construct|
|---|
|Clinical task performance|
|Key control variables|
|Personal technology trait: technology habit|
|Personal technology trait: personal innovativeness|
|Personal technology trait: computer literacy|
aEach item uses a 7-point Likert scale ranging from “strongly disagree” (1) to “strongly agree” (7).
bOO system refers to artificial intelligence–enabled diagnosis technology in our research setting.
cAI: artificial intelligence.
Example scenario of artificial intelligence (AI)–based patient diagnosis.
You read a case description about a 50-year-old patient on an AI-based diagnosis system. His complaints are about back pain. “My back has been hurting for around five days. I bent over to pick something up in my print shop and I had this severe pain in my back. It hurts so much so it is hard to stand up. The pain is on and off throughout the day—maybe four times, for half an hour each, especially when I am walking around. It is mostly a dull aching pain.”
Using clinicians’ quantitative and qualitative responses from our online simulation surveys, we used a mixed methods approach to analyze the data. According to Creswell and Creswell and Clark [ ], mixed methods research “incorporates qualitative and quantitative data to solve complex research questions and hypotheses, and it is suitable for explaining what (ie, estimating overall trends in participant behaviors within the population) and obtaining an in-depth understanding of why (ie, specific individual behaviors in the subsamples).” This method is particularly useful for exploring context-specific variables and for understanding more than the importance of research variables by delving into the underlying mechanisms using causal models.
Mixed methods research includes sequential and concurrent designs such that qualitative and quantitative data can be collected sequentially in the former, whereas both types of data are obtained at the same time in the latter. Each category has a specific design typology based on the emphasis on the importance of qualitative or quantitative data, the analysis process, and theoretical emphasis (see Castro et al [ ] for an in-depth discussion on designs of mixed methods research). We applied a concurrent triangulation design, because both quantitative and qualitative data were collected concurrently, and such data were used to accurately describe and examine clinicians’ experiences and sentiments toward AI assistance in clinical decision making [ ].
We analyzed the qualitative and quantitative data using natural language understanding (NLU) and hierarchical linear modeling (HLM), respectively. Regarding the first research question—capturing clinicians’ respective sentiments in the 3 patient diagnosis simulations—we analyzed textual narratives with NLU to understand what clinicians perceived about the diagnosis process when teaming with AI and compared it with the non-AI-involved diagnostic situations. Textual comments consisted of a single sentence or a small number of keywords. Given these data characteristics, we deemed NLU, a subfield of computer science with an explicit focus on the use of computational techniques to learn, understand, and produce human language content, an adequate method; many information technology firms, such as Microsoft, Google, and IBM, have developed NLU platforms and algorithms [, ]. We utilized IBM Watson Natural Language Understanding-77 for its well-known performance and cross-evaluation results; it was more suitable for understanding each participant’s language content than traditional text mining analytics [ - ].
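For intuition only, a minimal lexicon-based scorer shows how short free-text comments can be mapped onto sentiment scores in the [–1, 1] range that the NLU output uses. This toy sketch is not the algorithm behind IBM Watson NLU, and the word lists are invented placeholders.

```python
# Toy lexicon-based sentiment scorer, for intuition only. The word lists are
# invented placeholders; commercial NLU uses far more sophisticated models.

POSITIVE = {"convenient", "thorough", "interactive", "good", "vast"}
NEGATIVE = {"strict", "hard", "difficult", "long", "technical"}

def sentiment_score(comment: str) -> float:
    """Return (pos - neg) / (pos + neg), a score in [-1, 1]."""
    words = comment.lower().split()
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return (pos - neg) / total if total else 0.0

score = sentiment_score("good learning opportunities and convenient access")
```

A comment with only positive lexicon hits scores 1.0; a comment with only negative hits scores –1.0; comments with no lexicon hits score 0.0, which is why short keyword-style narratives suit this family of methods.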
Next, for the quantitative data analysis, we implemented HLM estimation techniques to answer the second research question—measuring the effect of the current form of AI assistance on clinicians’ diagnostic decision making and care task performance. In our data, each clinician’s clinical diagnosis experience was nested within the program. In other words, under the same graduate program, each participant was exposed to the same sets of patient diagnosis simulations with 3 modalities and repeatedly reported their responses across the 3 diagnosis simulations. This context could engender statistical dependency among the responses. Moreover, in the research model, the outcome variable (ie, the clinical diagnosis) was at the individual level, and the AI assistance variable was at the group level (ie, the graduate nursing program), yielding a multilevel structure within the same research model [ ]. To mitigate such dependency and estimate this multilevel model, we used the xtmixed procedure in Stata 16.1 (StataCorp, College Station, TX) to carry out generalized linear modeling with restricted maximum likelihood (residual maximum likelihood) estimation. The xtmixed procedure fits linear mixed models that include fixed effects (ie, standard regression coefficients) and random effects, which are not estimated directly but through their variance components. The default covariance structure is independent; we also specified other covariance structures (eg, exchangeable and unstructured) to validate our mixed model results (see [ ] for an in-depth discussion of mixed models and covariance structures).
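To illustrate why the nesting matters, the sketch below computes a one-way ANOVA intraclass correlation (ICC) on invented cohort data: a high ICC means responses within a cohort are correlated, which is the statistical dependency that motivates a mixed model over ordinary least squares. This hand calculation is only a stand-in for, and is much simpler than, the REML estimation performed by Stata's xtmixed.

```python
# Illustration of within-group dependency: one-way ANOVA ICC(1) on invented,
# balanced cohort data. Not the study's estimation procedure (Stata xtmixed).

from statistics import mean

def icc_anova(groups):
    """ICC(1): share of total variance attributable to between-group variance."""
    k = len(groups)            # number of groups (cohorts)
    n = len(groups[0])         # group size (assumed balanced)
    grand = mean(x for g in groups for x in g)
    # Between-group and within-group mean squares
    msb = n * sum((mean(g) - grand) ** 2 for g in groups) / (k - 1)
    msw = sum((x - mean(g)) ** 2 for g in groups for x in g) / (k * (n - 1))
    return (msb - msw) / (msb + (n - 1) * msw)

# Members of the same cohort answer alike, so responses are not independent.
cohorts = [[5, 6, 5, 6], [3, 2, 3, 2], [7, 7, 6, 7]]
rho = icc_anova(cohorts)
```

When the ICC is far above zero, as in this toy example, ordinary regression understates the standard errors, which is why the study also reports clustered standard errors and mixed model estimates side by side.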
We explored clinicians’ perceptions of, and performance prospects for, AI assistance in patient diagnosis. These relations and sentiments are evidenced by the results of the generalized linear models as well as by qualitative text analysis with direct quotations from the health care worker sample.
A total of 114 clinicians completed our online surveys during the 2020-2021 study period (response rate: 114/144, 79.2%). In summary, 66.7% (76/114) of the participants were between 26 and 40 years old, 49.1% (56/114) were White, and 84.2% (96/114) identified as female. Additionally, 89.5% (102/114) worked either full time or part time in hospitals or clinics. In terms of education, 24.6% (28/114) had a graduate degree, whereas all participants had obtained a bachelor’s degree in nursing with prior clinical experience before joining the graduate program, as shown in the table below.
|Characteristics|Results, n (%)|
|---|---|
|Not disclosed|2 (1.8)|
|Income (US $)| |
|Prefer not to answer|28 (24.6)|
|Education| |
|Bachelor’s degree|82 (71.9)|
|Master’s degree|20 (17.5)|
|Employment status| |
|Working full time|54 (47.4)|
|Working part time|48 (42.1)|
|Race| |
|African American|22 (19.3)|
|Native Hawaiian or Pacific Islander|2 (1.8)|
|Prefer not to answer|6 (5.3)|
Research Question 1: Sentiment Analysis Results
To identify clinicians’ sentiments toward AI assistance, we compared the clinicians’ perceptions regarding AI-assisted diagnosis and non-AI diagnosis contexts. The table below presents some narrative examples from our data set, in which the clinicians described what they liked and disliked about the patient diagnosis process with the help of AI-aided technology, with a live human patient, and with a human-like mannequin (patient simulator).
The results of the NLU analyses are reported in the tables that follow. We found that clinicians perceived their patient diagnosis experience differently across the 3 simulation cases. The positive sentiment scores for the diagnosis process were 0.99 for the live patient, 0.92 for teaming up with AI assistance, and 0.41 for the patient simulator. In contrast, the negative sentiment score was the highest with the patient simulator (sentiment score = –1), followed by the live patient (sentiment score = –0.97) and AI assistance (sentiment score = –0.87). Specifically, our respondents perceived diagnostic simulations with AI technology less negatively than diagnosing a live human patient and more positively than using the patient simulator. The table below reports the positive and negative sentiment scores for all 3 cases, with scores ranging from –1 to 1.
The following tables present the most relevant keywords from the textual narratives for the 3 simulation cases. The respondents perceived AI-based diagnostic simulations positively for the following reasons: “convenient access,” “thorough assessment skills,” “student interaction convenience,” “interactive learning rationale,” “good learning opportunities,” and “[a] vast range of questions.” At the same time, they also perceived the AI-based diagnosis simulation to involve a “long system,” “strict sensitive clicking,” “differential diagnosis,” “large learning curve,” and “technical difficulties,” and they found it to be a “hard system.” These keywords indicated that the current form of AI assistance may follow a diagnostic logic that differs from clinicians’ own logic and hypotheses, and that technical difficulties could cause users to be averse to the technology.
|Narrative valence|AIa assistance context: diagnosis experience with AI assistance|Non-AI assistance context: diagnosis experience with live patients|Non-AI assistance context: diagnosis experience with HPSb|
|---|---|---|---|
|Positive comments|“I don’t feel like I am under pressure and can do it at my own pace”|“interaction, being able to gauge the patient, reading facial expressions, immediate feedback”|“Very realistic, lifelike scenarios”|
|Negative comments|“not having an orientation on how to work the system (first time user)”|“I’m not an actor, and it felt like acting; immediate feedback”|“being watched through a one-way mirror”|
aAI: artificial intelligence.
bHPS: human patient simulator. It is worth noting that the clinicians recorded their retrospective experience with the HPS, as it was not used in the graduate program.
|Sentiment|AIa assistance context: diagnosis experience with AI assistance|Non-AI assistance context: diagnosis experience with live patients|Non-AI assistance context: diagnosis experience with HPSb|
|---|---|---|---|
|Positive sentiment scorec|0.92|0.99|0.41|
|Negative sentiment scorec|–0.87|–0.97|–1.00|
aAI: artificial intelligence.
bHPS: human patient simulator. It is worth noting that the clinicians recorded their retrospective experience with the HPS, as it was not used in the graduate program.
cThe sentiment score ranged from –1 (negative) to 1 (positive).
|Keyword|Relevance scorea|
|---|---|
|AIb assistance context: diagnosis experience with AI assistance| |
|Thorough assessment skills|0.641589|
|Good practice student interaction convenience|0.62027|
|Interactive learning rationales|0.612274|
|Good learning opportunities|0.599706|
|Vast list of questions|0.542493|
|Non-AI assistance context: diagnosis experience with live patients| |
|Fast convenient real experience|0.89935|
|Telehealth: convenient fast access|0.664928|
|Physical actions convenience|0.581031|
|Best learning experience|0.540142|
|Challenging open-ended questions|0.536408|
|Non-AI assistance context: diagnosis experience with HPSc| |
|Convenient reinforcement of learning|0.633235|
aRelevance scores range from 0 to 1, reflecting that higher values indicate greater relevance.
bAI: artificial intelligence.
cHPS: human patient simulator. It is worth noting that the clinicians recorded their retrospective experience with the HPS, as it was not used in the graduate program.
|Keyword|Relevance scorea|
|---|---|
|AIb assistance context: diagnosis experience with AI assistance| |
|Strict sensitive clicking|0.634627|
|Large learning curve|0.610383|
|Results of x-rays and CTc|0.567441|
|Sound doesn’t work|0.552959|
|Non-AI assistance context: diagnosis experience with live patients| |
|Strict testing environment|0.7036|
|Face interaction complex|0.65795|
|Real clinic patients|0.59044|
|Physical examination (PE)|0.5563|
|Feeling of self-doubt|0.54479|
|Patient expresses lack of physical exam|0.53226|
|PE doesn’t correlate|0.51702|
|Non-AI assistance context: diagnosis experience with HPSd| |
|Physical examination maneuvers|0.65952|
|Lack of feelings response|0.55535|
|Live patient additional questions|0.5476|
|Actual patient experiences|0.52713|
aRelevance scores range from 0 to 1, with higher values indicating greater relevance.
bAI: artificial intelligence.
cCT: computed tomography.
dHPS: human patient simulator.
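Keyword relevance scores on a 0–1 scale, like those in the tables above, are typically produced by ranking candidate terms and normalizing the ranking. As an illustrative sketch only (not the NLU service used in the study), a frequency-based scorer that scales scores so the top term receives 1.0 could be written as:

```python
# Illustrative keyword-relevance scorer (not the study's NLU service).
# Scores terms by raw frequency, scaled so the most frequent term gets 1.0.
from collections import Counter

STOPWORDS = {"the", "a", "of", "and", "with", "to", "in"}

def keyword_relevance(text: str) -> dict:
    """Return {term: score} with scores in (0, 1], top term scored 1.0."""
    words = [w.strip(".,").lower() for w in text.split()]
    counts = Counter(w for w in words if w and w not in STOPWORDS)
    if not counts:
        return {}
    top = counts.most_common(1)[0][1]
    return {w: round(c / top, 3) for w, c in counts.items()}
```

Production keyword extraction weighs much more than frequency (position, syntax, corpus statistics), but the final relevance values are normalized to the same kind of 0–1 scale reported here.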
Quantitative Methods: Results of Mixed Models
The table below presents our findings for the second research question—the effect of AI assistance on clinicians’ clinical diagnosis and care performance from multilevel mixed effects models. Models 1 and 2 report the results for the 2 dependent variables of diagnostic performance and clinical task performance, respectively, where the independent variable is the experience with AI assistance (binary). In terms of individual-level covariates, social media use, personal innovativeness, and computer literacy were included as personal technological traits, in addition to demographic covariates. The HLM results were compared with a baseline ordinary least squares model with clustered standard errors. The effect of AI assistance was not statistically significant in explaining the variation in enhanced clinical diagnosis. Instead, education and age, which were related to clinicians’ overall practical experience, were positively associated with clinical diagnosis. The mixed model results remained qualitatively identical across the different covariance structures.
|Variables||Model 1a||Model 2b|
|OLSc (clustered SEd,e)||P values||Mixed modelf,g||P values||OLS (clustered SEd,h)||P values||Mixed modelf,g||P values|
|Constant||2.162 (1.013)||.04||2.162 (1.851)||.24||4.278 (0.898)||<.001||4.278 (1.462)||.003|
|AI assistance||–0.105 (0.185)||.57||–0.105 (0.167)||.53||–0.421 (0.192)||.03||–0.421 (0.175)||.02|
|Personal technological traits|
|Technology habit||0.232 (0.104)||.03||0.232 (0.137)||.09||0.244 (0.0803)||.004||0.244 (0.108)||.02|
|Personal innovativeness||–0.227 (0.202)||.27||–0.227 (0.197)||.25||–0.0234 (0.157)||.88||–0.0234 (0.155)||.89|
|Computer literacy||–0.161 (0.181)||.38||–0.161 (0.157)||.31||–0.257 (0.111)||.02||–0.257 (0.124)||.04|
|Female gender||–0.202 (0.615)||.74||–0.202 (0.575)||.73||0.0792 (0.495)||.87||0.0792 (0.454)||.86|
|Age: 18-25 years||1.885 (0.892)||.04||1.885 (1.473)||.20||–0.910 (0.825)||.28||–0.910 (1.163)||.43|
|Age: 26-40 years||2.339 (0.782)||.004||2.339 (1.294)||.07||–0.236 (0.704)||.74||–0.236 (1.021)||.82|
|Age: 41-55 years||2.428 (0.802)||.004||2.428 (1.321)||.07||0.102 (0.683)||.88||0.102 (1.042)||.92|
|Race: African American||–0.0232 (0.540)||.97||–0.0232 (0.842)||.98||0.00592 (0.615)||.99||0.00592 (0.664)||.99|
|Race: Asian||–0.419 (0.687)||.55||–0.419 (1.106)||.71||–0.0125 (0.745)||.99||–0.0125 (0.873)||.99|
|Race: Native Hawaiian/Pacific Islander||1.067 (0.672)||.12||1.067 (1.514)||.48||0.708 (0.724)||.33||0.708 (1.195)||.55|
|Race: White||–0.0132 (0.417)||.98||–0.0132 (0.780)||.99||0.113 (0.545)||.84||0.113 (0.615)||.85|
|Race: Other||0.209 (0.631)||.74||0.209 (0.826)||.80||0.644 (0.614)||.30||0.644 (0.652)||.32|
|Education: Bachelor’s degree||1.880 (0.600)||.003||1.880 (1.044)||.07||1.584 (0.495)||.002||1.584 (0.824)||.06|
|Education: Master’s degree||1.586 (0.738)||.04||1.586 (1.128)||.16||1.014 (0.586)||.09||1.014 (0.890)||.25|
|Education: PhD||0.380 (0.935)||.69||0.380 (1.245)||.76||–0.158 (0.764)||.84||–0.158 (0.982)||.87|
|Occupational status: working full time||–0.345 (0.413)||.41||–0.345 (0.790)||.66||–0.128 (0.439)||.77||–0.128 (0.623)||.84|
|Occupational status: working part time||0.332 (0.425)||.44||0.332 (0.801)||.68||0.326 (0.448)||.47||0.326 (0.632)||.61|
|Occupational status: unemployed||–0.760 (0.485)||.12||–0.760 (1.031)||.46||–0.719 (0.698)||.31||–0.719 (0.813)||.38|
|Urban||–0.451 (0.433)||.30||–0.451 (0.512)||.38||–0.472 (0.409)||.25||–0.472 (0.404)||.24|
aDependent variable: diagnostic performance.
bDependent variable: clinical task performance.
cOLS: ordinary least squares.
dRobust standard errors are clustered by each participant.
f57 groups (clusters).
gVariance structures were specified as unstructured, identity, and exchangeable, respectively, and the results remained qualitatively the same.
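The baseline models in the table report standard errors clustered by participant (footnote d). As a didactic sketch of how a cluster-robust ("sandwich") standard error is assembled, here is a minimal pure-Python version for a single regressor with intercept; it is an illustrative simplification, not the multilevel specification estimated in the study:

```python
# Illustrative cluster-robust ("sandwich") SE for y = a + b*x, one regressor.
# Variance: (X'X)^-1 (sum_g X_g'e_g e_g'X_g) (X'X)^-1, clusters indexed by g.
import math

def ols_clustered_se(x, y, groups):
    """Fit y = a + b*x by OLS; return (b, cluster-robust SE of b)."""
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxx = sum(v * v for v in x)
    sxy = sum(a * b for a, b in zip(x, y))
    # Closed-form OLS estimates for one regressor with intercept
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = (sy - b * sx) / n
    resid = [yi - a - b * xi for xi, yi in zip(x, y)]
    # "Bread": inverse of X'X for design matrix X = [1, x]
    det = n * sxx - sx * sx
    inv = [[sxx / det, -sx / det], [-sx / det, n / det]]
    # "Meat": sum over clusters of (X_g' e_g)(X_g' e_g)'
    meat = [[0.0, 0.0], [0.0, 0.0]]
    for g in set(groups):
        s0 = sum(e for e, gg in zip(resid, groups) if gg == g)
        s1 = sum(e * xi for e, xi, gg in zip(resid, x, groups) if gg == g)
        meat[0][0] += s0 * s0
        meat[0][1] += s0 * s1
        meat[1][0] += s1 * s0
        meat[1][1] += s1 * s1
    def matmul(A, B):
        return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
                for i in range(2)]
    V = matmul(matmul(inv, meat), inv)  # sandwich; element [1][1] is the slope
    return b, math.sqrt(V[1][1])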
This mixed methods study aimed to explore the status of AI-assisted decision-making patterns among clinicians and gain a detailed understanding of how this novel method of AI-human collaboration and decision making could progress in the future. The results from the qualitative methods showed that clinicians described the diagnostic process with the support of AI as more positive compared with encountering a live patient on their own. Nonetheless, the respondents’ keywords revealed that AI assistance impeded clinicians from formulating their own subjective diagnoses based on their clinical reasoning and that the main complaints among clinicians related to some of the steps in the AI algorithms. Moreover, our quantitative results showed that the current level of AI assistance neither explained clinicians’ perceived clinical diagnostic capability nor enhanced their care task performance. The participants believed that their ex ante qualities and capabilities, such as education, age, and daily technology habits, were more relevant in enhancing their care task performance.
Comparison With Prior Work
Our research makes 2 important contributions to the existing IS and health care literature. First, our findings from clinicians’ keywords in their textual comments demonstrated that clinicians perceived AI assistance positively. However, the current AI interface may not align with their clinical reasoning process, and therefore, such AI interface issues can negatively affect clinicians’ perception of whether AI can collaborate with them as a team member. Clinician keywords such as “long system,” “strict sensitive clicking,” “differential diagnosis,” “large learning curve,” “technical difficulties,” and “hard system” demonstrated human-computer interaction (HCI) issues. In other words, this phenomenon can be linked to the topic of the user interface and explainable or understandable case scenarios in AI research. Studies on HCI have typically focused on the effect of technology on users by considering principles, guidelines, and strategies for designing and interacting with user interfaces. The importance of the design aspect of AI (or AI from an HCI perspective) is more pronounced in health care because active involvement of stakeholders in the design stage can promote appropriation and sensemaking of the focal technology and increase the benefits of AI implementation [ , ]. Studies on machine learning–human interaction have emphasized “human-centeredness” in the use of AI such that humans and machines can integrate or work together as a team [ ]. To do so, AI must be explainable, comprehensible, useful, and usable for clinicians in the use scenarios, both in practice and in simulation.
However, some challenges exist. First, the incorporation of fast-moving machine learning techniques into common user experience designs is constrained by environmental, legal, and regulatory restrictions in real-world scenarios. Second, AI-generated automated inferences and functions under uncertainty can have unintended consequences [ ]: when scenarios yield false positives or false negatives, AI-based decision making negatively affects clinicians’ care task performance and patient health outcomes [ ]. Under the current development of AI algorithms, case scenarios or process logic may be the primary interface clinicians face. However, we also showed that the complexity of AI algorithms may disrupt clinicians’ patient diagnostic decisions because clinicians may not understand how to access the complete functions and capabilities of the AI [ ]. Thus, based on the previous literature and the current evidence in this paper, our results call for more attention to HCI issues by actively involving the end users in system development procedures and providing them with adequate education to co-create effective decision-making scenarios with AI assistance.
Second, we empirically quantified the effect of AI assistance on clinicians’ decision making in a nomological network and explored whether it can serve as a factor to enhance clinical diagnostic capability and overall health task performance. Our results demonstrated that, although clinicians interacted with AI algorithms in safer simulations, the effect of AI assistance was either negative or nonexistent in the clinicians’ diagnostic decisions. Instead, the clinicians’ ex ante personal traits, that is, education and age, were positively associated with enhanced outcomes. This finding corresponds with the existing understanding that years of training, professional background, and educational background affect diagnostic performance. Moreover, we found that AI assistance was negatively associated with the clinicians’ perceptions of task performance. This might be due to frontline health care providers’ trust in AI [ ] or implementation issues [ ] in health care. It will be worthwhile to revisit our research model to identify AI-specific factors and test downstream effects on clinicians’ AI use performance in future research.
Furthermore, we found that the clinicians’ ex ante technological traits were statistically associated with clinical decision making. First, we found that technological habits were positively linked to clinical diagnostic and health care task performance. In this study, we viewed a technological habit as a social media habit. Clinicians have used social media technologies that may embed AI algorithms and techniques in nonhealth care contexts. It is likely that processing and accessing multimedia-based information daily may help clinicians manage multimedia-based patient information on AI platforms. Second, we showed that computer literacy was negatively associated with overall health care task performance with AI assistance. In health care, clinicians’ levels of literacy vary across contexts such as information [ ], health [ ], YouTube [ ], informatics tools [ ], and computers [ ]. In particular, the concept of computer literacy is broad and encompasses hardware and software [ ], informatics tools (eg, decision support systems), handheld devices [ ], computerized statistical analysis, databases, presentation graphics, spreadsheet applications, and bibliographic database searches for evidence-based practice [ ]. Such measures of literacy are related to knowledge about focal tools or interpreting health information in health care contexts. Meanwhile, a renewed concept of computer literacy has emerged, known as “digital literacy,” or the combined knowledge, skills, and competencies necessary for thriving in a technology-saturated culture [ ]; it encompasses various forms of literacy via visual, electronic, and digital forms of expression and communication [ ]. The use of 3-dimensional virtual images in operations or virtual reality goggles in pathology laboratories [ ] or the matching of doctors with professional actors in medical improvisation sessions over Zoom [ ] are adequate examples of computer literacy with which clinicians should be equipped.
Therefore, it is necessary to redefine and contextualize computer literacy in the domain of AI use and to actively educate clinicians on this topic in both practice and training.
The findings of this study have various practical implications. With an explicit focus on the clinicians’ AI-assisted decision making, we found that their sensemaking of the diagnostic logic provided by AI and the system features may pose a challenge to the health care workforce. Our findings correspond with a recent report by the National Academy of Medicine that highlighted “augmented intelligence,” with an emphasis on the supporting role of AI in data synthesis, interpretation, and decision making for multiple health care stakeholders, such as clinicians, patients, and other related professionals. In preparing clinicians for such a change, the report suggested that their training should incorporate education programs on how to assess and use AI products and services appropriately for new and incumbent professionals alike [ ].
Moreover, to prevent AI algorithms from generating a simplified decision plan, health care providers should be involved in the case scenarios of patient diagnosis to enhance the effectiveness of the AI algorithms. What is largely neglected in the discussion is the need to close gaps between practice and clinical training for increasing AI understanding. The developers of AI need to consider the clinicians’ current levels of exposure to technology in practice and in simulation and then design clinician-centered algorithms and interfaces. To achieve the goal of human-AI collaboration in health care [ ], beyond involving clinicians in the development of clinical AI algorithms, these algorithms and their interfaces should be more human-centered. There must be assessments of diagnoses made by AI in both practice and training to reduce the gap between theory and practice in the use of AI by the health care workforce. This can be done by gathering behavioral big data from clinicians in various disciplines to help align AI algorithms with the unique, subjective patterns of reasoning that humans employ in clinical diagnosis.
As with all research, this study is not without limitations, and its results should be interpreted with caution. First, we employed purposive sampling techniques for the target population. Since our target audience was individuals with experience in clinical diagnosis in various scenarios, we targeted and recruited FNP students as our sample. As such, the response rate was relatively low, and missing values were prevalent in the data. Future research could benefit from increasing the sample size to compare group differences in greater depth. Although our sample represents the target population of this study, sampling at the national level would be beneficial for generalizing the results of this study. Second, our AI variable was operationalized as binary (1: AI assistance; 0: otherwise). Future research may consider a survey construct with items that capture the rich characteristics of multimedia technology variables in the research models. Lastly, as our results were derived from the use of AI-enabled diagnostic technology, our results may not be generalizable to other types of AI or other clinical decision-making categories.
In keeping with the recent interest in and expectations for AI-assisted decision making in health care, as a first step, our research explored clinicians’ sentiments toward AI assistance and their perceptions using sentiment analysis and a mixed methods design. Our results indicated that, while there are negative or nonexistent effects of AI assistance in enhancing clinical decision making, clinicians have positive sentiments toward AI assistance in the decision-making process, comparable with their encounters with actual patients. With this potential, we suggest that health care leaders, policy makers, and AI developers need to collect clinicians’ behavior data and revisit the design and user interface of AI to make it more clinician-centered and collaborative.
HH created and distributed the survey, analyzed the results, and drafted the entire manuscript. DG created and distributed the survey and co-authored the manuscript. Both the authors read and approved the final manuscript.
Conflicts of Interest
- Tian S, Yang W, Grange J, Wang P, Huang W, Ye Z. Smart healthcare: making medical care more intelligent. Global Health Journal 2019 Sep;3(3):62-65 [FREE Full text] [CrossRef]
- Phillips KA. The coming era of precision health. Health Affairs 2021 Feb 01;40(2):363-363. [CrossRef]
- McCarthy J. What is artificial intelligence? Stanford University. 1998. URL: http://jmc.stanford.edu/artificial-intelligence/what-is-ai/index.html [accessed 2021-12-10]
- Davenport T, Kalakota R. The potential for artificial intelligence in healthcare. Future Healthc J 2019 Jun;6(2):94-98 [FREE Full text] [CrossRef] [Medline]
- Hamet P, Tremblay J. Artificial intelligence in medicine. Metabolism 2017 Apr;69S:S36-S40 [FREE Full text] [CrossRef] [Medline]
- Kim J, Heo W. Artificial intelligence video interviewing for employment: perspectives from applicants, companies, developer and academicians. Information Technology & People 2021 Apr 01. [CrossRef]
- ANSI/CTA-2089.1-2020: Definitions/Characteristics of Artificial Intelligence in Health Care. American National Standards Institute (ANSI). 2020. URL: https://webstore.ansi.org/standards/ansi/ansicta20892020 [accessed 2021-12-10]
- Padmanabhan P. AI in healthcare: The tech is here, the users are not. CIO. 2021 May 21. URL: https://www.cio.com/article/3619551/ai-in-healthcare-the-tech-is-here-the-users-are-not.html [accessed 2021-12-10]
- Davenport T, Bean R. Optum Focuses On AI To Improve Administrative Decisions. Forbes. 2020 Oct 9. URL: https://www.forbes.com/sites/tomdavenport/2020/10/09/optum-focuses-on-ai-to-improve-administrative-decisions/ [accessed 2021-12-10]
- Smith M, Higgs J, Ellis E. Factors influencing clinical decision making. In: Higgs J, Jensen GM, Loftus S, Christensen N, editors. Clinical reasoning in the health professions. Amsterdam, Netherlands: Elsevier; 2008:89-100.
- Eisenberg JM. Sociologic influences on decision-making by clinicians. Ann Intern Med 1979 Jun 01;90(6):957-964. [CrossRef] [Medline]
- Case Study: Ochsner Health System - AI-powered Early-warning System Saves Lives. American Hospital Association. 2018 Jun. URL: https://www.aha.org/system/files/2018-06/ochner-value-initiative-warning-system-case-study.pdf [accessed 2021-12-10]
- Zhao H, Li G, Feng W. Research on Application of Artificial Intelligence in Medical Education. 2018 Presented at: International Conference on Engineering Simulation and Intelligent Control (ESAIC); August 10-11, 2018; Hunan, China URL: https://ieeexplore.ieee.org/document/8530428 [CrossRef]
- Asan O, Bayrak AE, Choudhury A. Artificial intelligence and human trust in healthcare: focus on clinicians. J Med Internet Res 2020 Jun 19;22(6):e15154 [FREE Full text] [CrossRef] [Medline]
- Wang X, Liang G, Zhang Y, Blanton H, Bessinger Z, Jacobs N. Inconsistent performance of deep learning models on mammogram classification. J Am Coll Radiol 2020 Jun;17(6):796-803. [CrossRef] [Medline]
- 3rd Annual Optum Survey on AI in Health Care. Optum. 2020. URL: https://www.optum.com/business/resources/ai-in-healthcare/2020-ai-survey.html [accessed 2021-12-10]
- Chenbrolu K, Ressler D, Varia H. Smart use of artificial intelligence in health care: Seizing opportunities in patient care and business activities. Deloitte. 2020. URL: https://www2.deloitte.com/us/en/insights/industry/health-care/artificial-intelligence-in-health-care.html [accessed 2021-12-10]
- Lavender J. Investment in AI for healthcare soars. KPMG. 2018 Nov. URL: https://home.kpmg/xx/en/home/insights/2018/11/investment-in-ai-for-healthcare-soars.html [accessed 2021-12-10]
- Maddox TM, Rumsfeld JS, Payne PRO. Questions for artificial intelligence in health care. JAMA 2019 Jan 01;321(1):31-32. [CrossRef] [Medline]
- Young IJB, Luz S, Lone N. A systematic review of natural language processing for classification tasks in the field of incident reporting and adverse event analysis. Int J Med Inform 2019 Dec;132:103971. [CrossRef] [Medline]
- Fan W, Liu J, Zhu S, Pardalos PM. Investigating the impacting factors for the healthcare professionals to adopt artificial intelligence-based medical diagnosis support system (AIMDSS). Ann Oper Res 2018 Mar 19;294(1-2):567-592. [CrossRef]
- Shen J, Zhang CJP, Jiang B, Chen J, Song J, Liu Z, et al. Artificial intelligence versus clinicians in disease diagnosis: systematic review. JMIR Med Inform 2019 Aug 16;7(3):e10010 [FREE Full text] [CrossRef] [Medline]
- Baker A, Perov Y, Middleton K, Baxter J, Mullarkey D, Sangar D, et al. A comparison of artificial intelligence and human doctors for the purpose of triage and diagnosis. Front Artif Intell 2020 Nov 30;3:543405 [FREE Full text] [CrossRef] [Medline]
- Nagendran M, Chen Y, Lovejoy CA, Gordon AC, Komorowski M, Harvey H, et al. Artificial intelligence versus clinicians: systematic review of design, reporting standards, and claims of deep learning studies. BMJ 2020 Mar 25;368:m689 [FREE Full text] [CrossRef] [Medline]
- Stanfill MH, Marc DT. Health information management: implications of artificial intelligence on healthcare data and information management. Yearb Med Inform 2019 Aug;28(1):56-64 [FREE Full text] [CrossRef] [Medline]
- Longoni C, Bonezzi A, Morewedge CK. Resistance to medical artificial intelligence. Journal of Consumer Research 2019;46(4):629-650. [CrossRef]
- Shneiderman B. International Journal of Human–Computer Interaction 2020 Mar 23;36(6):495-504. [CrossRef]
- Nadarzynski T, Miles O, Cowie A, Ridge D. Acceptability of artificial intelligence (AI)-led chatbot services in healthcare: A mixed-methods study. Digit Health 2019;5:2055207619871808 [FREE Full text] [CrossRef] [Medline]
- Kim DW, Jang HY, Kim KW, Shin Y, Park SH. Design characteristics of studies reporting the performance of artificial intelligence algorithms for diagnostic analysis of medical images: results from recently published papers. Korean J Radiol 2019 Mar;20(3):405-410 [FREE Full text] [CrossRef] [Medline]
- Charlin B, Tardif J, Boshuizen HPA. Scripts and medical diagnostic knowledge: theory and applications for clinical reasoning instruction and research. Acad Med 2000 Feb;75(2):182-190. [CrossRef] [Medline]
- Kassirer JP. Teaching clinical reasoning: case-based and coached. Acad Med 2010 Jul;85(7):1118-1124. [CrossRef] [Medline]
- Young ME, Dory V, Lubarsky S, Thomas A. How different theories of clinical reasoning influence teaching and assessment. Academic Medicine 2018;93(9):1415. [CrossRef]
- Offredy M. The application of decision making concepts by nurse practitioners in general practice. J Adv Nurs 1998 Nov;28(5):988-1000 [FREE Full text] [CrossRef] [Medline]
- Innocent P, John R. Computer aided fuzzy medical diagnosis. Information Sciences 2004 May;162(2):81-104 [FREE Full text] [CrossRef]
- Lynn LA. Artificial intelligence systems for complex decision-making in acute care medicine: a review. Patient Saf Surg 2019;13:6 [FREE Full text] [CrossRef] [Medline]
- Grover S, Sengupta S, Chakraborti T, Mishra AP, Kambhampati S. RADAR: automated task planning for proactive decision support. Human–Computer Interaction 2020 Mar 19;35(5-6):387-412. [CrossRef]
- Topol E. Deep medicine: how artificial intelligence can make healthcare human again. London, England: Hachette UK; 2019.
- Shinners L, Aggar C, Grace S, Smith S. Exploring healthcare professionals' understanding and experiences of artificial intelligence technology use in the delivery of healthcare: An integrative review. Health Informatics J 2020 Jun 30;26(2):1225-1236 [FREE Full text] [CrossRef] [Medline]
- Van De Ven AH, Johnson PE. Knowledge for theory and practice. AMR 2006 Oct;31(4):802-821. [CrossRef]
- Nagel DA, Penner JL. Conceptualizing telehealth in nursing practice: advancing a conceptual model to fill a virtual gap. J Holist Nurs 2016 Mar 09;34(1):91-104. [CrossRef] [Medline]
- Bryce DA, King NJ, Graebner CF, Myers JH. Evaluation of a diagnostic reasoning program (DxR): exploring student perceptions and addressing faculty concerns. JIME 1998 May 12;1998(1):1. [CrossRef]
- Saunders C, Ives B. DxR case study. CAIS 2000;3:1. [CrossRef]
- Goodhue DL, Thompson RL. Task-technology fit and individual performance. MIS Quarterly 1995 Jun;19(2):213. [CrossRef]
- Agarwal R, Karahanna E. Time flies when you're having fun: cognitive absorption and beliefs about information technology usage. MIS Quarterly 2000 Dec;24(4):665. [CrossRef]
- Venkatesh V, Thong JY, Xu X. Consumer acceptance and use of information technology: extending the unified theory of acceptance and use of technology. MIS Quarterly 2012;36(1):157. [CrossRef]
- Compeau DR, Higgins CA. Computer self-efficacy: development of a measure and initial test. MIS Quarterly 1995 Jun;19(2):189. [CrossRef]
- Hah H, Goldin D, Ha S. The association between willingness of frontline care providers' to adaptively use telehealth technology and virtual service performance in provider-to-provider communication: quantitative study. J Med Internet Res 2019 Aug 29;21(8):e15087 [FREE Full text] [CrossRef] [Medline]
- Creswell JW. A Concise Introduction to Mixed Methods Research. Thousand Oaks, CA: Sage Publications, Inc; 2014.
- Creswell JW, Clark VLP. Designing and Conducting Mixed Methods Research. Thousand Oaks, CA: Sage Publications, Inc; 2017.
- Castro FG, Kellison JG, Boyd SJ, Kopak A. A methodology for conducting integrative mixed methods research and data analyses. J Mix Methods Res 2010 Sep 20;4(4):342-360 [FREE Full text] [CrossRef] [Medline]
- Hirschberg J, Manning CD. Advances in natural language processing. Science 2015 Jul 17;349(6245):261-266. [CrossRef] [Medline]
- Canonico M, De Russis L. A comparison and critique of natural language understanding tools. Cloud Computing. 2018. URL: https://www.semanticscholar.org/paper/A-Comparison-and-Critique-of-Natural-Language-Tools-Canonico-Russis/fdf8bbd6dff6201c1965f81842852cbba59fe79e [accessed 2021-12-10]
- Braun D, Mendez AH, Matthes F, Langen M. Evaluating natural language understanding services for conversational question answering systems. 2017 Presented at: 18th Annual SIGdial Meeting on Discourse and Dialogue; August 2017; Saarbrücken, Germany URL: https://aclanthology.org/W17-5522/ [CrossRef]
- High R. The Era of Cognitive Systems: An Inside Look at IBM Watson and How it Works. IBM Redbooks. 2012. URL: http://www.redbooks.ibm.com/redpapers/pdfs/redp4955.pdf [accessed 2021-12-10]
- Raudenbush SW, Bryk AS. Hierarchical Linear Models: Applications and Data Analysis Methods (Advanced Quantitative Techniques in the Social Sciences). Thousand Oaks, CA: Sage Publications, Inc; 2002.
- Singer JD, Willett JB. Applied Longitudinal Data Analysis: Modeling Change and Event Occurrence. Oxford, England: Oxford University Press; 2009.
- Choudhury MD, Lee MK, Zhu H, Shamma DA. Introduction to this special issue on unifying human computer interaction and artificial intelligence. Human–Computer Interaction 2020 May 31;35(5-6):355-361. [CrossRef]
- LeRouge C, Mantzana V, Wilson EV. Healthcare information systems research, revelations and visions. European Journal of Information Systems 2017 Dec 19;16(6):669-671. [CrossRef]
- LeRouge CM, Hah H, Deckard GJ, Jiang H. Designing for the co-use of consumer health technology in self-management of adolescent overweight and obesity: mixed methods qualitative study. JMIR Mhealth Uhealth 2020 Jun 29;8(6):e18391 [FREE Full text] [CrossRef] [Medline]
- Xu W. Toward human-centered AI. interactions 2019 Jun 26;26(4):42-46. [CrossRef]
- Chancellor S, Baumer EPS, De Choudhury M. Who is the "Human" in Human-Centered Machine Learning. Proc. ACM Hum.-Comput. Interact 2019 Nov 07;3(CSCW):1-32. [CrossRef]
- Dong L, Li W, Ma D, Lv C, Chen C. The evaluation and comparison on different types of resident doctors in training through DxR Clinician system. J. Phys.: Conf. Ser 2020 Jun 01;1549(4):042076. [CrossRef]
- Park Y, Jackson G, Foreman M, Gruen D, Hu J, Das A. Evaluating artificial intelligence in medicine: phases of clinical research. JAMIA Open 2020 Oct;3(3):326-331 [FREE Full text] [CrossRef] [Medline]
- Faggella D. Everyday Examples of Artificial Intelligence and Machine Learning. Emerj. 2020 Apr 11. URL: https://emerj.com/ai-sector-overviews/everyday-examples-of-ai/ [accessed 2021-12-10]
- Forster M. Six ways of experiencing information literacy in nursing: the findings of a phenomenographic study. Nurse Educ Today 2015 Jan;35(1):195-200 [FREE Full text] [CrossRef] [Medline]
- Scott S. Health literacy education in baccalaureate nursing programs in the United States. Nurs Educ Perspect 2016;37(3):153-158. [CrossRef] [Medline]
- Skiba DJ. Nursing Education 2.0: YouTube™. Nursing Education Perspectives 2007;28(2):100-102 [FREE Full text]
- Thompson B, Skiba D. Informatics in the nursing curriculum: a national survey of nursing informatics requirements in nursing curricula. Nurs Educ Perspect 2008;29(5):312-317. [Medline]
- Lin T. A computer literacy scale for newly enrolled nursing college students: development and validation. J Nurs Res 2011 Dec;19(4):305-317. [CrossRef] [Medline]
- Campbell CJ, McDowell DE. Computer literacy of nurses in a community hospital: where are we today? J Contin Educ Nurs 2011 Aug;42(8):365-370. [CrossRef] [Medline]
- Hobbs R. Create to learn: Introduction to digital literacy. Hoboken, NJ: Wiley-Blackwell; 2017.
- Hobbs R. Reconceptualizing media literacy for the digital age. In: Martin A, Madigan D, editors. Literacies for learning in the Digital Age. London, England: Facets Press; 2006:99-109.
- Pappano L. Training the next generation of doctors in Uganda. Internal Medicine News 2010 Oct;43(16):58-59. [CrossRef]
- Landi H. Healthcare AI investment will shift to these 5 areas in the next 2 years: survey. Fierce Healthcare. 2021 Mar 09. URL: https://www.fiercehealthcare.com/tech/healthcare-executives-want-ai-adoption-to-ramp-up-here-s-5-areas-they-plan-to-focus-future [accessed 2021-12-10]
- Matheny ME, Whicher D, Thadaney Israni S. Artificial intelligence in health care: a report from the National Academy of Medicine. JAMA 2020 Feb 11;323(6):509-510. [CrossRef] [Medline]
- Lee J. Is artificial intelligence better than human clinicians in predicting patient outcomes? J Med Internet Res 2020 Aug 26;22(8):e19918 [FREE Full text] [CrossRef] [Medline]
|AI: artificial intelligence|
|CTA: Consumer Technology Association|
|FNP: family nurse practitioner|
|HIT: health information technology|
|HLM: hierarchical linear modeling|
|IS: information systems|
|NLU: natural language understanding|
Edited by G Eysenbach; submitted 11.09.21; peer-reviewed by A Maroli, B Chaudhry, N Doreswamy; comments to author 01.10.21; revised version received 26.10.21; accepted 16.11.21; published 16.12.21
Copyright
©Hyeyoung Hah, Deana Shevit Goldin. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 16.12.2021.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.