Published in Vol 23, No 9 (2021): September

Application of Artificial Intelligence in Community-Based Primary Health Care: Systematic Scoping Review and Critical Appraisal



1Department of Family Medicine, Faculty of Medicine and Health Sciences, McGill University, Montreal, QC, Canada

2Mila-Quebec AI Institute, Montreal, QC, Canada

3Department of Family Medicine and Emergency Medicine, Université Laval, Quebec City, QC, Canada

4VITAM - Centre de recherche en santé durable, Université Laval, Quebec City, QC, Canada

5Faculty of Engineering, Dayalbagh Educational Institute, Agra, India

6Quebec SPOR-Support Unit, Quebec City, QC, Canada

7Faculty of Science and Engineering, Université Laval, Quebec City, QC, Canada

8School of Nursing, University of British Columbia, Vancouver, BC, Canada

9Center for Health Services and Policy Research, University of British Columbia, Vancouver, BC, Canada

10Department of Industrial Relations, Université Laval, Quebec City, QC, Canada

11OBVIA - Quebec International Observatory on the social impacts of AI and digital technology, Quebec City, QC, Canada

12School of Social Work, University of Sherbrooke, Sherbrooke, QC, Canada

13Department of Data Science, University Pablo de Olavide, Seville, Spain

14Faculty of Nursing, Université Laval, Quebec City, QC, Canada

15Arthritis Alliance of Canada, Montreal, QC, Canada

Corresponding Author:

Samira Abbasgholizadeh Rahimi, BEng, PhD

Department of Family Medicine, Faculty of Medicine and Health Sciences

McGill University

5858 Côte-des-Neiges Road, Suite 300

Montreal, QC


Phone: 1 514 399 9218


Background: Research on the integration of artificial intelligence (AI) into community-based primary health care (CBPHC) has highlighted several advantages and disadvantages in practice, for example, regarding the facilitation of diagnosis and disease management, as well as doubts concerning the unintended harmful effects of this integration. However, a comprehensive knowledge synthesis that could shed light on AI systems tested or implemented in CBPHC has been lacking.

Objective: We intended to identify and evaluate published studies that have tested or implemented AI in CBPHC settings.

Methods: We conducted a systematic scoping review informed by an earlier study and the Joanna Briggs Institute (JBI) scoping review framework and reported the findings according to the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews) reporting guidelines. An information specialist performed a comprehensive search, from the date of inception until February 2020, in seven bibliographic databases: Cochrane Library, MEDLINE, EMBASE, Web of Science, Cumulative Index to Nursing and Allied Health Literature (CINAHL), ScienceDirect, and IEEE Xplore. The selected studies considered all populations who provide and receive care in CBPHC settings, AI interventions that had been implemented or tested or both, and outcomes related to patients, health care providers, or CBPHC systems. Risk of bias was assessed using the Prediction Model Risk of Bias Assessment Tool (PROBAST). Two authors independently screened the titles and abstracts of the identified records, read the selected full texts, and extracted data from the included studies using a validated extraction form. Disagreements were resolved by consensus, and when this was not possible, the opinion of a third reviewer was sought. A third reviewer also validated all the extracted data.

Results: We retrieved 22,113 documents. After the removal of duplicates, 16,870 documents were screened, and 90 peer-reviewed publications met our inclusion criteria. Machine learning (ML) (41/90, 45%), natural language processing (NLP) (24/90, 27%), and expert systems (17/90, 19%) were the most commonly studied AI interventions. These were primarily implemented for diagnosis, detection, or surveillance purposes. Neural networks (ie, convolutional neural networks and abductive networks) demonstrated the highest accuracy, considering the given database for the given clinical task. The risk of bias in diagnosis or prognosis studies was the lowest in the participant category (2/49, 4%) and the highest in the outcome category (22/49, 45%).

Conclusions: We observed variabilities in reporting the participants, types of AI methods, analyses, and outcomes, and highlighted the large gap in the effective development and implementation of AI in CBPHC. Further studies are needed to efficiently guide the development and implementation of AI interventions in CBPHC settings.

J Med Internet Res 2021;23(9):e29839



The use of artificial intelligence (AI) in primary health care has been widely recommended [1]. AI systems have been increasingly used in health care in general [2], given the hope that such systems may help develop and augment the capacity of humans in areas such as diagnostics, therapeutics, and the management of patient care and health care systems [2]. AI systems have the capability to transform primary health care by, for example, improving risk prediction, supporting clinical decision making, increasing the accuracy and timeliness of diagnosis, facilitating chart review and documentation, augmenting patient–physician relationships, and optimizing operations and resource allocation [3].

Community-based primary health care (CBPHC) is a society-wide approach to primary health care that involves a broad range of prevention measures and care services within communities, including health promotion, disease prevention and management, home care, and end-of-life care [4]. CBPHC incorporates health service delivery from the personal to the community level and is the first and most frequent point of contact with health care systems for patients in many countries, including Canada [4]. Beyond providing comprehensive health care and being important within health care systems, CBPHC has also been identified as essential in formulating evidence-informed public health policies [5]. Given the growing role of primary health care and CBPHC in our society [6], it is important to develop strategies that address the limitations of the existing health care system and enhance the overall quality of care delivered, alongside all other aspects of CBPHC. This includes efforts to reduce the growing workload of CBPHC providers as well as the burden of chronic diseases, decrease rates of misclassification and misdiagnosis, reduce cases of mismanaged diseases, and increase accessibility to care [7-17].

Indeed, integration of AI into CBPHC could help in a variety of ways, including identifying patterns, optimizing operations, and gaining insights from clinical big data and community-level data that are beyond the capabilities of humans. Over time, using AI in CBPHC could lessen the excessive workload of health care providers by integrating large quantities of data and knowledge into clinical practice and analyzing these data in ways humans cannot, thus yielding insights that could not otherwise be obtained. This would allow health care providers to devote their time and energy to the more human aspects of health care [18]. Several studies have reported early successes of AI systems in facilitating diagnosis and disease management in different fields, including radiology [19], ophthalmology [20], cardiology [21], orthopedics [22], and pathology [23]. However, the literature also raises doubts about using and implementing AI in health care [24,25]. These concerns include privacy and consent, the explainability of the algorithms, workflow disruption, and the “Frame Problem,” defined as unintended harmful effects arising from issues not directly addressed in patient care [26].

Despite the potential advantages, disadvantages, and doubts, there is no comprehensive knowledge synthesis that clearly identifies and evaluates AI systems that have been tested or implemented in CBPHC. Thus, we performed a systematic scoping review aiming to (1) summarize existing studies that have tested or implemented AI methods in CBPHC; (2) report evidence regarding the effects of different AI systems’ outcomes on patients, health care providers, or health care systems, and (3) critically evaluate current studies and provide future directions for AI-CBPHC researchers.

Study Design

Based on the scoping review methodological framework proposed by Levac et al [27] and the Joanna Briggs Institute (JBI) methodological guidance for scoping reviews [28], we developed a protocol with the following steps: (1) clarifying the purpose of the review and linking it to a research question; (2) identifying relevant studies while balancing feasibility with breadth and comprehensiveness; (3) working in a team to iteratively select studies and extract their data; (4) charting the extracted data, incorporating a numerical summary; (5) collating, summarizing, and reporting the results; and (6) regularly consulting stakeholders throughout the review regarding emerging and final results. This protocol is registered and available on the JBI and Open Science Framework (OSF) websites. We completed this review as per the published protocol.

We formed a multidisciplinary committee of experts in public health, primary health care, AI and data science, knowledge translation, and implementation science, as well as a patient partner and an industry partner (with expertise in the AI-health domain), whom we consulted during all steps of the scoping review. This helped us interpret the results. The screening process is shown in Figure 1. Our review is reported according to the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews) reporting guideline [29] (see Multimedia Appendix 1). Studies that did not report their study design were categorized by methodology according to the classification outlined by the National Institute for Health and Care Excellence [30].

Figure 1. PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) flowchart of the selection procedure. AI: artificial intelligence.

We used the Prediction Model Risk of Bias Assessment Tool (PROBAST) to assess the risk of bias. PROBAST includes 20 signaling questions that facilitate a structured judgment of the risk of bias, organized into four domains of potential bias related to the following: (1) participants (potential sources of bias related to participant selection methods and data sources); (2) predictor variables (potential sources of bias related to the definition and measurement of the predictors evaluated for inclusion in the model); (3) outcomes (potential sources of bias related to the definition and measurement of the outcomes predicted by the model); and (4) analyses (potential sources of bias in the statistical analysis methods) [31]. The risk of bias was judged as low, high, or unclear. If one or more domains were judged as having a high risk of bias, the overall judgment was “high risk” [31].

Eligibility Criteria

We defined our bibliographic database search strategy for peer-reviewed publications in English or French using the Population, Intervention, Comparison, Outcomes, Setting and Study (PICOS) design components [32].


Population

Studies about any population that provides health care services, including nurses, social workers, pharmacists, dietitians, public health practitioners, physicians, and community-based workers (an unregulated type of provider), were included, as were those about any population that receives CBPHC services. We adhered to the definition of CBPHC provided by the Canadian Institutes of Health Research (CIHR) (ie, the broad range of primary prevention measures, including public health, and primary care services within the community, including health promotion and disease prevention; the diagnosis, treatment, and management of chronic and episodic illness; rehabilitation support; and end-of-life care) [4]. Studies that took place at any CBPHC point of care, including community health centers, primary care networks, clinics, and outpatient departments of hospitals, were also included. Studies conducted in emergency departments were excluded.


Intervention

Only studies that “tested,” “implemented,” or “tested and implemented” AI methods, such as computer heuristics, expert systems, fuzzy logic, knowledge representation, automated reasoning, data mining, and machine learning (eg, support vector machines, neural networks, and Bayesian networks), were included. Studies related to robot-assisted care were excluded.


Comparison

No inclusion or exclusion criteria were considered.


Outcomes

The primary outcomes of interest were those related to individuals receiving care (eg, cognitive, health, and behavioral outcomes), providers of care (eg, cognitive, health, and behavioral outcomes), and health care systems (eg, process outcomes). Moreover, we analyzed the outcomes of the AI systems for their accuracy and their impact on the outcomes of care.

Analysis Methods

All study designs using qualitative, quantitative, or mixed methods were eligible for inclusion. In particular, we included experimental and quasi-experimental studies (randomized controlled trials, quasi-randomized controlled trials, nonrandomized clinical trials, interrupted time series, and controlled before-and-after studies), observational studies (cohort, case-control, cross-sectional, and case series), qualitative studies (ethnography, narrative, phenomenological, grounded theory, and case studies), and mixed methods studies (sequential, convergent).

Information Sources and Search Criteria

An information specialist, together with an epidemiologist, an AI-health care researcher, and a family physician, developed a comprehensive search strategy using Medical Subject Headings (MeSH) maintained by the National Library of Medicine. The systematic search was conducted from inception until February 2020 in seven bibliographic databases: Cochrane Library, MEDLINE, EMBASE, Web of Science, Cumulative Index to Nursing and Allied Health Literature (CINAHL), ScienceDirect, and IEEE Xplore. Retrieved records were managed with EndNote X9.2 (Clarivate) and imported into the DistillerSR review software (Evidence Partners, Ottawa, ON) to facilitate the selection process (see Multimedia Appendix 2 for the search strategies used for each database).

Study Selection Process

Title and Abstract Screening (Level 1)

Using DistillerSR, two independent reviewers conducted a pilot screening session using a questionnaire based on our eligibility criteria to test the screening tool and to reach a common understanding. Then, the two reviewers independently screened the titles and abstracts of the remaining records. A third reviewer resolved disagreements between the two reviewers.

Full-Text Screening (Level 2)

Using DistillerSR and the abovementioned questionnaire, the same two reviewers independently assessed the full texts selected at level 1 for eligibility for inclusion in the review. A third reviewer resolved conflicting decisions. For references to which we did not have full-text access, we attempted to obtain access through the interlibrary loan mechanism of the McGill University Library. Studies that met the eligibility criteria were included for full data extraction.

Data Collection

We used a data extraction form, approved by our consultative committee, that we designed based on the Cochrane Effective Practice and Organisation of Care Review Group (EPOC) data collection checklist [33]. Specifically, we extracted study characteristics (eg, design and country of the corresponding author); population characteristics (eg, number of participants and type of disease or treatment); intervention characteristics (eg, AI methods used); and outcome characteristics, including outcomes related to the patients (eg, cognitive outcomes, health outcomes, behavioral outcomes), providers of care (eg, cognitive outcomes, health outcomes, behavioral outcomes), and health care systems (eg, process outcomes).

Assessment of Risk of Bias in the Included Studies

Two reviewers independently appraised each included study that was eligible for evaluation with PROBAST, using the criteria outlined in the tool [31]. A third reviewer verified their appraisals.


Data Synthesis

We performed a descriptive synthesis [34] to describe the studies in terms of their populations (patients, primary care providers), interventions (AI systems, evaluated parameters), and outcomes. The results were arranged according to the PICOS format. The tools and techniques used for developing a preliminary synthesis included textual descriptions of the studies, grouping and clustering, and tabulation.


Throughout the steps of the review, we regularly updated all members of the research team and requested their feedback. We also presented our preliminary results during a workshop at Université Laval, Québec, Canada, with a multidisciplinary group of experts (in public health, primary care, AI and data science, knowledge translation, implementation science, as well as a patient partner, and an industry partner) and collected their comments and feedback.

Patient Involvement

Using a patient-centered approach, our team co-developed the protocol, conducted the review, and reported the results of this study. We integrated patients’ priorities within our research questions, search strategy terms, and outcomes of interest. Our patient partner was involved in each step of the research process, including the definition of the objectives, main analysis, descriptive synthesis, interpretation of preliminary and final results, and dissemination of the results obtained in this study.

We identified 16,870 unique records. After screening their titles and abstracts, 979 studies remained for full-text review. Ultimately, 90 studies met our inclusion criteria (Figure 1).

Study Characteristics

Countries and Publication Dates

The number of studies published annually has increased gradually since 1990, and especially since 2015. Figure 2 shows the timeline of the AI-based studies. The four countries publishing the most studies were the United States (32/90, 36%), the United Kingdom (15/90, 17%), China (12/90, 13%), and Australia (6/90, 7%), followed by New Zealand (4/90, 5%), Canada (4/90, 5%), Spain (3/90, 3%), India (2/90, 2%), and the Netherlands (2/90, 2%); Iran, Austria, Taiwan, Italy, France, Germany, the United Arab Emirates, Ukraine, Israel, and Cuba published 1 study each (1%). North America accounted for the highest number of studies (37/90, 41%), followed by Europe (25/90, 28%), Asia (18/90, 20%), and Oceania (10/90, 11%).

Figure 2. Distribution and timeline showing the publication of studies based on artificial intelligence.

Aims of the Included Studies

The included studies sought to describe and test or implement either a novel AI model in CBPHC (16/90, 18%) or an off-the-shelf AI model, which is a modified or improved version of existing AI models in CBPHC (74/90, 82%).

Conceptual Frameworks

Among the 90 studies, 2 (2%) reported using a sociocognitive theoretical framework [35,36]. One of these used the I-change model [35], a model that evolved from several cognitive models; it explores the process of behavioral change and the determinants related to that change and focuses on individuals’ intentions to adopt innovations [35,37]. In this study [35], the authors investigated the cognitive determinants associated with Dutch general practitioners’ intention to adopt a smoking cessation expert AI system in their practices and found that workload and time constraints were important barriers.

The second study used a continuing medical education framework [38] and compared traditional expert-led training (control group) with an online multimedia-based training activity supplemented with an AI-driven simulation feedback system (treatment group) [36]. Diagnosis accuracy significantly improved in the treatment group when compared to the control group, providing evidence supporting the efficacy of AI medical training methods.

Time Frame of the Collected Data Sets

Among the included studies, 25% (23/90) used data collected over a period of 1 year or less, 20% (17/90) used data collected over a period between 1 and 5 years, 12% (11/90) used data collected over a period between 5 and 10 years, and 9% (8/90) used data collected over a period of more than 10 years. One study (1%) used three data sets collected from three different sites over three different time periods (<1 year, 1-5 years, >10 years) [39]. The remaining studies (30/90, 33%) did not specify the time frames of their data set collections.

Population Characteristics

Sample Size

Overall, 88% (79/90) of the included studies reported their sample size. A total of 21,325,250 patients participated in the testing, training, or validation of the AI systems.

Sex, Gender, and Age

Among the 79 studies reporting their sample size, 46 (58%) reported the sex distribution and none of the studies reported on gender-relevant indicators. Further, 32 (41%) reported the participants’ mean age and standard deviation. Overall, the mean age of the participants in these studies was 60.68 (±12.15) years. Age was reported as a range in 21% (17/79) of the studies reporting the sample size, and the remaining 38% (30/79) did not report the age of their participants.


Ethnicity

Among all the included studies, 21% (19/90) reported the participants’ ethnic origins, which included Caucasian, Asian-Middle Eastern, South Asian, African, American Indian, Alaskan Native, Hispanic, Pacific Islander, Māori, and mixed (Table 1).

Table 1. Characteristics of the participants in the included studies (N=90).

Patients
    Total number: 21,325,250
    Did not report the sex: 17,422,964
    Age (years), mean (SD): 60.68 (12.15)
    Number of studies reporting the sample size of patients (n): 79

Health care providers
    Total number: 2,581
    Did not report the sex: 1,890
    Age (years), mean (SD): 48.50 (7.59)
    Number of studies reporting the sample size of health care providers (n): 17

Ethnicities reported for patients (number)
    American Indian/Alaskan Native: 13
    Mixed ethnicity: 11

Number of studies reporting patients’ ethnicities (n): 19
Number of studies reporting health care providers’ ethnicities (n): 0
Other Sociodemographic Information

Only 27% (25/90) of the included studies reported other sociodemographic characteristics of their participants. Socioeconomic status (ie, income level) was the most commonly reported (12/90, 13%). Other characteristics reported were educational status, marital status, area of residence, employment status, smoking status, and insurance status.

Health Care Providers

Among the 90 included studies, 55 (61%) reported the involvement of primary health care providers. Of these 55 studies, 41 (75%) involved general practitioners, 5 (9%) included nurses, 1 (2%) involved psychiatrists, 1 (2%) involved occupational therapists, and 1 (2%) involved an integrated care specialist. Six studies (6/55, 11%) involved general practitioners together with other types of health care providers: nurses (3/55, 5%), physician assistants (1/55, 2%), nurses, surgeons, and nonsurgeon specialists (1/55, 2%), and respirologists (1/55, 2%).

Sample Size

Among these 55 studies, 17 (31%) reported the sample size. The data pertaining to 2581 primary health care providers were collected in these studies.

Five of these 17 studies (29%) reported the sex distribution, and none reported on gender-relevant indicators. Moreover, 2 (12%) studies reported the age of the primary health care provider participants. The mean age (SD) across the studies for which we collected this information was 48.50 (±7.59) years (Table 1).

Sociodemographic Information

Out of the 17 studies, only 1 (6%) reported the primary health care providers’ locations of practice. Among the 120 providers in this study, 57 practiced in rural areas and 63 practiced in urban areas.


AI Methods

Most of the included studies (78/90, 87%) used a single (nonhybrid) AI method, and the remaining 13% (12/90) used hybrid AI models, meaning that they integrated multiple AI methods. The most commonly used methods were machine learning (ML) (41/90, 45%), natural language processing (NLP), including applied ML for NLP (24/90, 27%), and expert systems (17/90, 19%). Figure 3 illustrates the number of studies published according to the type of AI method and year of publication (see Multimedia Appendices 3 and 4 for details regarding the AI methods).

Figure 3. Number of studies published according to the artificial intelligence method used and years of publication.

Performance Measures of AI Interventions

To evaluate the performance of the AI models, we considered the following performance metrics: true positives (TP), true negatives (TN), false positives (FP), false negatives (FN), sensitivity, specificity, precision, F1 score (ie, the harmonic mean of precision and recall), and area under the curve (AUC). Among the 90 included studies, 31 (34%) did not report the performance of their models. Among the 59 studies that reported model performance, 13 (22%) used 2 or more performance measures, and the remaining 46 (78%) used one measure (see Multimedia Appendix 4 for detailed information on the AI methods used in the included studies and their performance measures).
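For reference, the count-based measures above can all be derived from the four cells of a confusion matrix, whereas AUC requires ranked prediction scores rather than counts. The following minimal sketch is our own illustration with invented counts, not code or data from any included study:

```python
def performance_measures(tp, tn, fp, fn):
    """Compute the count-based measures listed above from confusion-matrix cells."""
    sensitivity = tp / (tp + fn)   # recall: true positives among actual positives
    specificity = tn / (tn + fp)   # true negatives among actual negatives
    precision = tp / (tp + fp)     # true positives among predicted positives
    f1 = 2 * precision * sensitivity / (precision + sensitivity)  # harmonic mean
    return {"sensitivity": sensitivity, "specificity": specificity,
            "precision": precision, "f1": f1}

# Invented counts for illustration only
print(performance_measures(tp=80, tn=90, fp=10, fn=20))
```

Because each measure conditions on a different margin of the confusion matrix, reporting only one of them (as 78% of the reporting studies did) can hide poor performance on the other margins.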

Generated Knowledge

Most of the included studies (81/90, 90%) were either diagnosis- or prognosis-related or focused on surveillance; the remainder involved operational aspects (eg, resource allocation, system-level decisions) (see Multimedia Appendix 4 for detailed information).

Health Conditions

The majority of the 90 included studies (68/90, 76%) investigated the use of AI in relation to a specific medical condition. The conditions studied were vascular diseases, including hypertension, hypercholesteremia, peripheral arterial disease, and congestive heart failure (10/90, 11%) [40-49]; infectious diseases, including influenza, herpes zoster, tuberculosis, urinary tract infections, and subcutaneous infections (8/90, 9%) [50-57]; type 2 diabetes (5/90, 6%) [58-62]; respiratory disorders, including chronic obstructive pulmonary disease and asthma (6/90, 7%) [63-69]; orthopedic disorders, including rheumatoid arthritis, gout, and lower back pain (5/90, 6%) [36,39,70-72]; neurological disorders, including stroke, Parkinson disease, Alzheimer disease [73-75], and cognitive impairments (6/90, 7%) [76,77]; cancer, including colorectal cancer and head and neck cancer (4/90, 4%) [78-81]; psychological disorders, including depression and schizophrenia (3/90, 3%) [82-84]; diabetic retinopathy (3/90, 3%) [85-87]; suicidal ideations (2/90, 2%) [88,89]; tropical diseases, including malaria (2/90, 2%) [90,91]; renal disorders (2/90, 2%) [92,93]; autism spectrum disorder (2/90, 2%) [94,95]; venous disorders, including deep vein thrombosis and venous ulcers (2/90, 2%) [96,97]; and other health conditions (8/90, 9%) [98-105].

Data Sets (Training, Testing, and Validation)

In this section, we briefly explain the training, testing, and validation data sets and then present our results. The training data set is the subset of the data used to fit the initial AI model. The testing data set is the subset of the data used to evaluate the model fitted to the training data set. The validation data set is a subset of the data used to conduct an unbiased evaluation of the model fitted to the training data set while tuning the model's hyperparameters, namely the parameters whose values are used to control the learning process [106]. The evaluation of these parameters is important because it provides information about the accuracy of the predictions made by the AI model and the prospective effects of hyperparameter tuning [107].
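To make the three subsets concrete, the sketch below partitions a data set and uses only the validation subset to choose a hyperparameter, reserving the test subset for a final evaluation. This is our own hypothetical illustration: the 700/150/150 split, the placeholder scoring function, and all names are assumptions, not drawn from any included study.

```python
import random

random.seed(0)
records = list(range(1000))        # stand-in for anonymized patient records
random.shuffle(records)

train = records[:700]              # training set: fit the model
validation = records[700:850]      # validation set: tune hyperparameters
test = records[850:]               # testing set: final, held-out evaluation

def validation_score(param, data):
    # Placeholder for "train on `train` with `param`, then score on `data`";
    # this toy function simply peaks at param == 3.
    return -abs(param - 3)

# Pick the hyperparameter that scores best on the validation subset only;
# the test subset is never consulted during tuning.
best_param = max([1, 2, 3, 4], key=lambda p: validation_score(p, validation))
print(len(train), len(validation), len(test), best_param)  # → 700 150 150 3
```

The key design point mirrored here is the one the section describes: performance quoted on the validation subset is optimistic because that subset guided the tuning, so an unbiased estimate requires the untouched test subset.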

Among the 90 included studies, 9 (10%) reported on all three data sets, 33 (36%) reported on the training and testing data sets, and 36 (40%) reported on the training and validation data sets. No descriptions of these data sets were provided in 49 (54%) of the included studies.

Legal Information and Data Privacy

Legal information concerning privacy was mentioned in 4% (4/90) of the studies in our review. Although health care records were anonymized to protect participants’ information in all four of these studies, only one explicitly reported ensuring the security of data collection, storage, and sharing. The remaining studies did not report on data privacy or other legal information.

Involvement of Users


Development

Two of the 90 included studies (2%) reported on the AI developers, all of whom were engineers [60,86]. None of the studies reported the involvement of end users, including health care providers and patients, at the development stage.

Testing and Validation

Seven out of the 90 (8%) included studies reported information about those who participated in testing or validating the AI. This included general practitioners and nurses [86], engineers [60], general practitioners [51,81], occupational therapists [74], respirologists [64], and nurses [108].


The data on benefits for patients, primary health care providers, and the health care system presented in this section were extracted according to what the authors of the included studies explicitly reported as benefits in each of these categories.

Potential Benefits for Patients

The included studies reported the following potential benefits of implementing AI in CBPHC: improved treatment adherence, person-centered care, and quality of life; timelier identification of high-risk patients; faster and more cost-effective screening; better prediction of morbidities and risk factors; earlier diagnosis and prevention of diseases in older adults; and facilitated referrals.

Potential Benefits for Primary Health Care Providers

The included studies reported the following information regarding primary health care provider-related benefits of implementing AI in CBPHC: enhanced interprofessional communication and quality of primary care delivery, reduced workload of these providers, and facilitation of referrals and patient-centered care.

Other reported benefits included the use of AI as a reminder system, the application of AI tools to inform the commissioning of health care priorities, the use of an AI system as a quality improvement intervention (generating warnings in electronic medical records and analyzing clinical reports), facilitated disease monitoring, and the use of AI to reduce health risks.

Potential Benefits for the Health Care System

Studies in our review found that AI can play a role in improving individual patient care and population-based surveillance; provide predictions that inform and facilitate policy makers’ decisions regarding the effective management of hospitals; benefit community-level care; improve cost-effectiveness; and reduce burden at the system level.

Economic Aspects

Only 1 (1%) of the 90 included studies assessed the cost-effectiveness of the AI system studied. The Predicting Out-of-Office Blood Pressure in the Clinic (PROOF-BP) system, which the study authors developed for the diagnosis of hypertension in primary care, was found to be cost-effective compared with conventional blood pressure diagnostic options in primary care [49].

Challenges of Implementing AI in CBPHC

Our results suggest that the challenges of using AI in CBPHC include complications related to the variability of patient data, as well as barriers to using AI systems or participating in AI research owing to patients’ age or cognitive abilities.

With respect to the health care system, our review found challenges related to how information is recorded (eg, the use of abbreviations in medical records), poor interprofessional communication between nurses and physicians, inconsistent medical tests, and a lack of event recording in cases of communication failures. The included studies also mentioned problems related to restricted resources and administrative aspects, such as legislation and administrative approvals, as well as a lack of digital or computer literacy among primary health care providers.

In the included studies, other challenges were reported at the level of the health care system, such as the data available for use with AI, as well as at the level of AI itself (eg, system complexity and difficulty of interpretation). The following were identified as the main data-related barriers: (1) insufficient data to train, test, and validate AI systems, negatively affecting the robustness of AI models and the accuracy of their predictions; (2) poor-quality data, inaccuracies in the data, misclassifications, and a lack of representative data; (3) deidentification of protected medical data; and (4) variability in the data sets and the combining of different data sets. At the level of AI itself, computational complexity and difficulties in interpreting or explaining some AI model compositions were among the barriers.

Risk of Bias

We identified the studies that were eligible for evaluation with PROBAST. Among our included studies, 54% (49/90) were eligible to be evaluated using the PROBAST tool, and most (39/49, 80%) were at high risk of bias according to our assessment (Figure 4). With respect to the risk of bias for each of the four domains assessed, few studies presented risks regarding participants (2/49, 4%), whereas 45% (22/49) of studies exhibited risks of bias regarding outcomes. See Multimedia Appendices 5 and 6 for details on the common causes of risk in each study.
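As a minimal illustration of how the per-domain percentages in Figure 4 can be derived from study-level PROBAST judgments, the sketch below tallies invented judgments for three hypothetical studies; the data are fabricated for illustration and do not reproduce the review’s assessments.

```python
# Illustrative sketch (hypothetical data): tallying per-domain PROBAST
# judgments into "percentage of studies at high risk of bias" figures.
from collections import Counter

# Each study receives one judgment ("high", "low", or "unclear") per
# PROBAST domain. These three studies are invented for illustration.
studies = [
    {"participants": "low", "predictors": "high", "analysis": "high", "outcome": "high"},
    {"participants": "low", "predictors": "low", "analysis": "high", "outcome": "low"},
    {"participants": "high", "predictors": "low", "analysis": "low", "outcome": "high"},
]

def percent_high_risk(studies, domain):
    """Percentage of studies judged at high risk of bias for one domain."""
    counts = Counter(study[domain] for study in studies)
    return 100 * counts["high"] / len(studies)

for domain in ("participants", "predictors", "analysis", "outcome"):
    print(f"{domain}: {percent_high_risk(studies, domain):.0f}%")
```

An overall judgment per study (eg, "high" if any domain is high risk) could be tallied the same way to reproduce the "overall" bar of such a graph.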

Figure 4. Risk of bias graph: assessing risk of bias in five categories, namely overall, participants, predictors, analysis, and outcome (presented as percentages).

Principal Findings

We conducted a comprehensive systematic scoping review that included 90 studies on the use of AI systems in CBPHC and critically appraised the current studies in this area. Our results highlighted an explosion in the number of studies since 2015. We observed variability in the reporting of participants, AI methods, analyses, and outcomes, and we highlighted the large gap in the effective development and implementation of AI in CBPHC. Our review led us to the following main observations.

AI Models, Their Performance, and Risk of Bias

ML, NLP, and expert systems were the AI approaches most commonly used in CBPHC. Convolutional neural networks and abductive networks were the methods with the highest performance accuracy within the given data sets for the given tasks. We observed that a small number of studies reported on the development and testing or implementation of a new AI model, whereas most of the included studies (74/90, 82%) reported on the use and testing or implementation of an off-the-shelf AI model. Previous work has demonstrated that off-the-shelf models cannot be directly used in all clinical applications [109]. We observed a high risk of overall bias in the diagnosis- and prognosis-related studies. The highest risk of bias was in the outcome, predictor, and analysis categories of the included studies; validation (external and internal) was poorly reported, and calibration was rarely assessed. Given this high risk of bias, these AI models may not achieve the reported prediction accuracy when applied to new data sets (ie, in other settings).

Where to Use AI?

Primary health care providers are more likely to use AI systems for system-level support in administrative or health care tasks and for operational aspects than for clinical decision making [1]. However, our results show that few AI systems have been used for these purposes in CBPHC; rather, the existing AI systems are mostly diagnosis- or prognosis-related and are used for disease detection, risk identification, or surveillance. Further studies are needed to evaluate the reasons behind this tendency, as well as to establish the efficiency and accuracy of AI models for assisting clinical decision making within CBPHC settings. In our review, we found that only 2 of the 90 studies used a (sociocognitive) theoretical framework. Future research should draw on knowledge, attitude, and behavior theories to expand AI use in clinical decision making, and more effort is required to develop and validate frameworks guiding the effective development and implementation of AI in CBPHC.

Consideration of Age, Sex, and Gender

Our results show that AI-CBPHC research rarely considers sex, gender, age, and ethnicity. In general, the effect of age is rarely investigated in the AI field, and ageism is often ignored in analyses of discrimination. In health research, AI studies evaluating facial and expression recognition methods have identified bias against older adults [109]. This bias could negatively affect the accuracy of predictions made by AI systems that are commonly used by health care providers.

Furthermore, sex and gender are sources of variation in clinical conditions, affecting different aspects including prognosis, symptom manifestation, and treatment effectiveness, among others [110,111]. Despite this importance, big data analytics research examining health through the sex and gender lens has shown that current data sets are biased, given that they are incomplete with respect to gender-relevant indicators and sex-disaggregated data. Indeed, less than 35% of the indicators in international databases are fully disaggregated by sex [112]. Our results are consistent with this observation: we found that just half of the AI-CBPHC studies with patient participants, and nearly one-third of those with health care provider participants, described the sex distribution. Moreover, no AI-CBPHC research has reported on gender-relevant indicators. These aspects need to be considered in future AI-based CBPHC studies to avoid potential biases in AI systems.

Consideration of Ethnicity and Geographical Location

Less than one quarter of the included studies reported patient participants’ ethnicities, and none discussed the ethnicities of participating health care providers. Moreover, among the studies that reported patient ethnicity, we observed that the collected data related predominantly to Caucasian populations, raising questions about the representativeness of the data sets and thus about bias. Such biases could result in AI systems making predictions that discriminate against marginalized and vulnerable patient populations, ultimately leading to undesirable patient outcomes.

According to our results, most of the AI research in CBPHC has taken place in North American and European settings. Several factors contribute to ethnoracial biases in AI, including the failure to account for ethnoracial information, thereby ignoring the different effects illnesses can have on different populations [113]. Consequently, studies can yield results with historical biases as well as biases related to over- or under-representation of population characteristics in the data sets and knowledge bases used to build AI systems. In turn, stereotypes and undesirable outcomes may be amplified. Ensuring ethnic diversity in study populations, and accounting for this diversity in analyses, is imperative for developing AI systems that deliver equitable CBPHC.

Involvement of Users

Despite the many potential benefits of AI to humans, the development of AI systems is often based on “technology-centered” rather than “human-centered” design approaches [114]. Our results indicate that no AI-CBPHC study has involved end users in the system development stage, and involvement of primary health care professional users during the validation or testing stages has been rare. This can result in AI systems that do not meet the needs of health care providers and patients, suffer from poor usage scenarios, and eventually fail during implementation in clinical practice. A recent assessment showed that most existing user-centered design methods were primarily created for non-AI systems and do not effectively address the unique issues of AI systems [115]. Further effort is needed to include health care providers and patients, as the users of these AI systems, in the design, development, validation, and implementation stages in CBPHC. Nevertheless, effectively involving these users in the development, testing, and validation of AI systems remains a challenge, and further studies are required to overcome it.

Ethical and Legal Aspects

Ethical and legal challenges related to the use of AI in health care include, but are not limited to, informed consent for the use of AI; safety and transparency of personal data; algorithmic fairness, which is influenced by the aforementioned biases; liability; data protection; and data privacy. Our results indicate that ethical and legal aspects have rarely been addressed in AI-CBPHC research, except with respect to privacy and data security issues. All legal and ethical aspects and considerations need to be addressed within AI-CBPHC studies to facilitate the implementation of AI in CBPHC settings. For instance, to increase the use of AI systems by CBPHC providers, it could be useful to clarify the scenarios in which informed consent is required, as well as providers’ responsibilities regarding the use of AI systems. To improve patient outcomes related to AI use in CBPHC, it may be necessary to define the responsibilities of providers and researchers regarding the development and implementation of AI-health literacy programs for patients, together with gaining an understanding of how and when patients need to be informed about the results that AI systems yield.

Economic Aspects

AI systems can provide solutions to rising health care costs; however, only one (1%) AI-CBPHC study addressed this issue, by conducting a cost-effectiveness analysis of AI use. This is consistent with other findings showing that the cost-effectiveness of using AI in health care is rarely and inadequately reported [116,117]. Thus, further cost-effectiveness research is needed to identify the economic benefits of AI in CBPHC in terms of treatment, time and resource management, and the mitigation of human error; such evidence would be valuable because it could influence decisions for or against implementing AI in CBPHC.

AI in Clinical Practice

Our results show various barriers to and facilitators of implementing AI in clinical practice. Aspects related to data were among those most frequently mentioned. For instance, the lack of large amounts of high-quality data, especially when using modern AI methods (eg, deep learning), is a common challenge when developing AI systems for use in CBPHC. The promotion of AI-driven innovation in any setting, including CBPHC, is closely linked to data governance, open data directives, and other data initiatives, as they help to establish trustworthy mechanisms and services for sharing, reusing, and pooling data [118] that are required for the development of high-quality, data-driven AI systems.

In addition, some data security and privacy laws can create a bottleneck, limiting the use of AI systems in CBPHC and the sharing of the health care information required for developing high-performance AI systems. To facilitate the implementation and adoption of high-quality AI systems in CBPHC and ensure benefits to patients, providers, and the health care system, research providing insights into how to address these implementation challenges is needed.

Limitations of the Study

Our review has some limitations. First, given that we used the Canadian Institutes of Health Research’s definition of CBPHC to determine our inclusion criteria, and given that the definition of CBPHC differs from one country to another, our search strategy may not have captured all relevant records. Second, we excluded studies conducted in emergency care settings, even though, in many countries, emergency departments are the points of access to community-based care; relevant AI applications in those settings may therefore not have been captured. The European Commission recently released a legal framework (a risk-based approach) for broad AI governance among EU member states [118] and categorized emergency care and first aid services as “high risk.” Requirements for high-quality data, documentation and traceability, transparency, human oversight, and model accuracy and robustness are cited as strictly necessary to mitigate the risks in these settings [118].


In this systematic scoping review, we demonstrated the extent and variety of AI systems being tested and implemented in CBPHC, critically appraised these AI systems, showed that this field is growing exponentially, and exposed the knowledge gaps that remain and should be prioritized in future studies.


This study was funded by Canadian Institutes of Health Research’s Planning/Dissemination Grants (Principal Investigators [PIs]: SAR and FL), Québec SPOR SUPPORT Unit (PI: SAR) and start-up fund from McGill University (PI: SAR). We acknowledge the support from these institutions. SAR receives salary support from a Research Scholar Junior 1 Career Development Award from the Fonds de Recherche du Québec-Santé (FRQS), and her research program is supported by the Natural Sciences and Engineering Research Council (NSERC) Discovery (grant 2020-05246). FL is a Tier 1 Canada Research Chair in Shared Decision Making and Knowledge Translation. We thank the Québec-SPOR SUPPORT Unit for their methodological support and Paula L. Bush, PhD, for her revisions of the draft of this manuscript.

Conflicts of Interest

None declared.

Multimedia Appendix 1

PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews) checklist.

PDF File (Adobe PDF File), 104 KB

Multimedia Appendix 2

Full search strategy.

PDF File (Adobe PDF File), 518 KB

Multimedia Appendix 3

Timeline of artificial intelligence implementation in community-based primary health care between 1990 and 2020.

PDF File (Adobe PDF File), 668 KB

Multimedia Appendix 4

Data extracted from the included studies.

PDF File (Adobe PDF File), 324 KB

Multimedia Appendix 5

Details on the risk of bias in each evaluated study.

PDF File (Adobe PDF File), 162 KB

Multimedia Appendix 6

Risk of bias graph based on authors’ judgments about each risk of bias item presented as percentages.

PDF File (Adobe PDF File), 262 KB

  1. Liyanage H, Liaw S, Jonnagaddala J, Schreiber R, Kuziemsky C, Terry AL, et al. Artificial intelligence in primary health care: perceptions, issues, and challenges. Yearb Med Inform 2019 Aug;28(1):41-46 [FREE Full text] [CrossRef] [Medline]
  2. He J, Baxter SL, Xu J, Xu J, Zhou X, Zhang K. The practical implementation of artificial intelligence technologies in medicine. Nat Med 2019 Jan;25(1):30-36 [FREE Full text] [CrossRef] [Medline]
  3. Lin SY, Mahoney MR, Sinsky CA. Ten ways artificial intelligence will transform primary care. J Gen Intern Med 2019 Aug;34(8):1626-1630 [FREE Full text] [CrossRef] [Medline]
  4. Community-Based Primary Health Care.   URL: [accessed 2021-07-28]
  5. Canadian Institutes of Health Research. What is community-based primary health care? CBPHC Overview. 2015.   URL: [accessed 2021-07-28]
  6. Adashi EY, Geiger HJ, Fine MD. Health care reform and primary care—the growing importance of the community health center. N Engl J Med 2010 Jun;362(22):2047-2050. [CrossRef]
  7. Bodenheimer T, Chen E, Bennett HD. Confronting the growing burden of chronic disease: Can the U.S. health care workforce do the job? Health Aff (Millwood) 2009 Jan;28(1):64-74. [CrossRef] [Medline]
  8. Howard J, Clark EC, Friedman A, Crosson JC, Pellerano M, Crabtree BF, et al. Electronic health record impact on work burden in small, unaffiliated, community-based primary care practices. J Gen Intern Med 2013 Jan;28(1):107-113 [FREE Full text] [CrossRef] [Medline]
  9. The Lancet Global Health. Adding quality to primary care. The Lancet Global Health 2018 Nov;6(11):e1139. [CrossRef]
  10. Cecil E, Bottle A, Majeed A, Aylin P. Patient and health-care factors associated with potentially missed acute deterioration in primary care: a retrospective observational study of linked primary and secondary care data. The Lancet 2019 Nov;394:S30. [CrossRef]
  11. de Lusignan S, Sadek N, Mulnier H, Tahir A, Russell-Jones D, Khunti K. Miscoding, misclassification and misdiagnosis of diabetes in primary care. Diabet Med 2012 Feb;29(2):181-189. [CrossRef] [Medline]
  12. Casas Herrera A, Montes de Oca M, López Varela MV, Aguirre C, Schiavi E, Jardim JR, PUMA Team. COPD underdiagnosis and misdiagnosis in a high-risk primary care population in four Latin American countries. a key to enhance disease diagnosis: the PUMA study. PLoS One 2016 Apr;11(4):e0152266 [FREE Full text] [CrossRef] [Medline]
  13. Lanzarotto F, Crimí F, Amato M, Villanacci V, Pillan NM, Lanzini A, Brescia Coeliac Disease Study Group. Is under diagnosis of celiac disease compounded by mismanagement in the primary care setting? a survey in the Italian province of Brescia. Minerva Gastroenterol Dietol 2004 Dec;50(4):283-288. [Medline]
  14. Statham MO, Sharma A, Pane AR. Misdiagnosis of acute eye diseases by primary health care providers: incidence and implications. Med J Aust 2008 Oct;189(7):402-404. [CrossRef] [Medline]
  15. Shah TI, Clark AF, Seabrook JA, Sibbald S, Gilliland JA. Geographic accessibility to primary care providers: comparing rural and urban areas in Southwestern Ontario. Can Geogr/Le Géographe canadien 2019 Aug;64(1):65-78. [CrossRef]
  16. Forget C. The case of the vanishing Québec physicians: how to improve access to care. CD Howe Institute Commentary(410) 2014 May:1-20.
  17. Haggerty J, Pineault R, Beaulieu M, Brunelle Y, Gauthier J, Goulet F, et al. Room for improvement: patients' experiences of primary care in Quebec before major reforms. Can Fam Physician 2007 Jun;53(6):1056-1057 [FREE Full text] [Medline]
  18. Verghese A, Shah NH, Harrington RA. What this computer needs is a physician: humanism and artificial intelligence. JAMA 2018 Jan;319(1):19-20. [CrossRef] [Medline]
  19. Hosny A, Parmar C, Quackenbush J, Schwartz LH, Aerts HJWL. Artificial intelligence in radiology. Nat Rev Cancer 2018 Aug;18(8):500-510 [FREE Full text] [CrossRef] [Medline]
  20. Ting DSW, Pasquale LR, Peng L, Campbell JP, Lee AY, Raman R, et al. Artificial intelligence and deep learning in ophthalmology. Br J Ophthalmol 2019 Feb;103(2):167-175 [FREE Full text] [CrossRef] [Medline]
  21. Johnson KW, Torres Soto J, Glicksberg BS, Shameer K, Miotto R, Ali M, et al. Artificial intelligence in cardiology. J Am Coll Cardiol 2018 Jun;71(23):2668-2679 [FREE Full text] [CrossRef] [Medline]
  22. Olczak J, Fahlberg N, Maki A, Razavian AS, Jilert A, Stark A, et al. Artificial intelligence for analyzing orthopedic trauma radiographs. Acta Orthop 2017 Dec;88(6):581-586 [FREE Full text] [CrossRef] [Medline]
  23. Niazi MKK, Parwani AV, Gurcan MN. Digital pathology and artificial intelligence. Lancet Oncol 2019 May;20(5):e253-e261. [CrossRef] [Medline]
  24. Sun TQ, Medaglia R. Mapping the challenges of artificial intelligence in the public sector: evidence from public healthcare. Government Information Quarterly 2019 Apr;36(2):368-383. [CrossRef]
  25. Shaw J, Rudzicz F, Jamieson T, Goldfarb A. Artificial intelligence and the implementation challenge. J Med Internet Res 2019 Jul;21(7):e13659 [FREE Full text] [CrossRef] [Medline]
  26. Yu K, Kohane IS. Framing the challenges of artificial intelligence in medicine. BMJ Qual Saf 2019 Mar;28(3):238-241. [CrossRef] [Medline]
  27. Levac D, Colquhoun H, O'Brien KK. Scoping studies: advancing the methodology. Implement Sci 2010 Sep;5:69 [FREE Full text] [CrossRef] [Medline]
  28. Khalil H, Bennett M, Godfrey C, McInerney P, Munn Z, Peters M. Evaluation of the JBI scoping reviews methodology by current users. Int J Evid Based Healthc 2020 Mar;18(1):95-100. [CrossRef] [Medline]
  29. Tricco AC, Lillie E, Zarin W, O'Brien KK, Colquhoun H, Levac D, et al. PRISMA Extension for Scoping Reviews (PRISMA-ScR): checklist and explanation. Ann Intern Med 2018 Oct;169(7):467-473 [FREE Full text] [CrossRef] [Medline]
  30. National Institute for Health and Care Excellence. The NICE Public Health Guidance Development Process.   URL: [accessed 2021-07-28]
  31. Wolff RF, Moons KG, Riley RD, Whiting PF, Westwood M, Collins GS, et al. PROBAST: a Tool to assess the risk of bias and applicability of prediction model studies. Ann Intern Med 2019 Jan;170(1):51-58. [CrossRef]
  32. Stone PW. Popping the (PICO) question in research and evidence-based practice. Appl Nurs Res 2002 Aug;15(3):197-198. [CrossRef] [Medline]
  33. Cochrane Effective Practice and Organisation of Care Group (EPOC). Data Collection Checklist.   URL: https:/​/methods.​​sites/​​files/​public/​uploads/​EPOC%20Data%20Collection%20Checklist.​pdf [accessed 2021-07-28]
  34. Popay J, Roberts H, Sowden A, Petticrew M, Arai L, Rodgers M, et al. Guidance on the conduct of narrative synthesis in systematic reviews: a product from the ESRC methods Programme. 2006.   URL: https:/​/www.​​media/​lancaster-university/​content-assets/​documents/​fhm/​dhr/​chir/​NSsynthesisguidanceVersion1-April2006.​pdf
  35. Hoving C, Mudde AN, de Vries H. Intention to adopt a smoking cessation expert system within a self-selected sample of Dutch general practitioners. Eur J Cancer Prev 2006 Feb;15(1):82-86. [CrossRef] [Medline]
  36. McFadden P, Crim A. Comparison of the effectiveness of interactive didactic lecture versus online simulation-based CME programs directed at improving the diagnostic capabilities of primary care practitioners. J Contin Educ Health Prof 2016;36(1):32-37. [CrossRef] [Medline]
  37. de Vries H, Mudde A, Leijs I, Charlton A, Vartiainen E, Buijs G, et al. The European Smoking Prevention Framework Approach (EFSA): an example of integral prevention. Health Educ Res 2003 Oct;18(5):611-626. [CrossRef] [Medline]
  38. Collins J. The continuing professional development of physicians: from research to practice. J Am Coll Radiol 2004 Aug;1(8):609-610. [CrossRef]
  39. Zhou S, Fernandez-Gutierrez F, Kennedy J, Cooksey R, Atkinson M, Denaxas S, UK Biobank Follow-upOutcomes Group, et al. Defining disease phenotypes in primary care electronic health records by a machine learning approach: a case study in identifying rheumatoid arthritis. PLoS One 2016;11(5):e0154515 [FREE Full text] [CrossRef] [Medline]
  40. Chaudhry AP, Afzal N, Abidian MM, Mallipeddi VP, Elayavilli RK, Scott CG, et al. Innovative informatics approaches for peripheral artery disease: current state and provider survey of strategies for improving guideline-based care. Mayo Clin Proc Innov Qual Outcomes 2018 Jun;2(2):129-136 [FREE Full text] [CrossRef] [Medline]
  41. Goldstein MK, Coleman RW, Tu SW, Shankar RD, O'Connor MJ, Musen MA, et al. Translating research into practice: organizational issues in implementing automated decision support for hypertension in three medical centers. J Am Med Inform Assoc 2004;11(5):368-376 [FREE Full text] [CrossRef] [Medline]
  42. Hill NR, Ayoubkhani D, McEwan P, Sugrue DM, Farooqui U, Lister S, et al. Predicting atrial fibrillation in primary care using machine learning. PLoS One 2019;14(11):e0224582 [FREE Full text] [CrossRef] [Medline]
  43. Pakhomov SV, Jacobsen SJ, Chute CG, Roger VL. Agreement between patient-reported symptoms and their documentation in the medical record. Am J Manag Care 2008 Aug;14(8):530-539 [FREE Full text] [Medline]
  44. Pesko MF, Gerber LM, Peng TR, Press MJ. Home health care: nurse-physician communication, patient severity, and hospital readmission. Health Serv Res 2018 Apr;53(2):1008-1024 [FREE Full text] [CrossRef] [Medline]
  45. Press MJ, Gerber LM, Peng TR, Pesko MF, Feldman PH, Ouchida K, et al. Postdischarge communication between home health nurses and physicians: measurement, quality, and outcomes. J Am Geriatr Soc 2015 Jul;63(7):1299-1305. [CrossRef] [Medline]
  46. Safarova MS, Liu H, Kullo IJ. Rapid identification of familial hypercholesterolemia from electronic health records: the SEARCH study. J Clin Lipidol 2016;10(5):1230-1239. [CrossRef] [Medline]
  47. Selskyy P, Vakulenko D, Televiak A, Veresiuk T. On an algorithm for decision-making for the optimization of disease prediction at the primary health care level using neural network clustering. Family Med Prim Care Rev 2018;20(2):171-175. [CrossRef]
  48. Vijayakrishnan R, Steinhubl SR, Ng K, Sun J, Byrd RJ, Daar Z, et al. Prevalence of heart failure signs and symptoms in a large primary care population identified through the use of text and data mining of the electronic health record. J Card Fail 2014 Jul;20(7):459-464 [FREE Full text] [CrossRef] [Medline]
  49. Monahan M, Jowett S, Lovibond K, Gill P, Godwin M, Greenfield S, et al. Predicting out-of-office blood pressure in the clinic for the diagnosis of hypertension in primary care: an economic evaluation. Hypertension 2018 Feb;71(2):250-261. [CrossRef]
  50. Balas EA, Li ZR, Spencer DC, Jaffrey F, Brent E, Mitchell JA. An expert system for performance-based direct delivery of published clinical evidence. J Am Med Inform Assoc 1996;3(1):56-65 [FREE Full text] [CrossRef] [Medline]
  51. Gu Y, Kennelly J, Warren J, Nathani P, Boyce T. Automatic detection of skin and subcutaneous tissue infections from primary care electronic medical records. Stud Health Technol Inform 2015;214:74-80. [Medline]
  52. MacRae J, Love T, Baker MG, Dowell A, Carnachan M, Stubbe M, et al. Identifying influenza-like illness presentation from unstructured general practice clinical narrative using a text classifier rule-based expert system versus a clinical expert. BMC Med Inform Decis Mak 2015 Oct;15:78 [FREE Full text] [CrossRef] [Medline]
  53. Rennie TW, Roberts W. Data mining of tuberculosis patient data using multiple correspondence analysis. Epidemiol Infect 2009 Dec;137(12):1699-1704. [CrossRef] [Medline]
  54. Tou H, Yao L, Wei Z, Zhuang X, Zhang B. Automatic infection detection based on electronic medical records. BMC Bioinformatics 2018 Apr;19(Suppl 5):117 [FREE Full text] [CrossRef] [Medline]
  55. Turner NM, MacRae J, Nowlan ML, McBain L, Stubbe MH, Dowell A. Quantifying the incidence and burden of herpes zoster in New Zealand general practice: a retrospective cohort study using a natural language processing software inference algorithm. BMJ Open 2018 May;8(5):e021241 [FREE Full text] [CrossRef] [Medline]
  56. Zheng C, Luo Y, Mercado C, Sy L, Jacobsen SJ, Ackerson B, et al. Using natural language processing for identification of herpes zoster ophthalmicus cases to support population-based study. Clin Exp Ophthalmol 2019 Jan;47(1):7-14. [CrossRef] [Medline]
  57. Burton RJ, Albur M, Eberl M, Cuff SM. Using artificial intelligence to reduce diagnostic workload without compromising detection of urinary tract infections. BMC Med Inform Decis Mak 2019 Aug;19(1):171 [FREE Full text] [CrossRef] [Medline]
  58. Hertroijs DFL, Elissen AMJ, Brouwers MCGJ, Schaper NC, Köhler S, Popa MC, et al. A risk score including body mass index, glycated haemoglobin and triglycerides predicts future glycaemic control in people with type 2 diabetes. Diabetes Obes Metab 2018 Mar;20(3):681-688 [FREE Full text] [CrossRef] [Medline]
  59. Lappenschaar M, Hommersom A, Lucas PJ, Lagro J, Visscher S, Korevaar JC, et al. Multilevel temporal Bayesian networks can model longitudinal change in multimorbidity. J Clin Epidemiol 2013 Dec;66(12):1405-1416 [FREE Full text] [CrossRef] [Medline]
  60. Moreno E, Lujan MJA, Rusinol MT, Fernandez PJ, Manrique P, Trivino CA, et al. Type 2 diabetes screening test by means of a pulse oximeter. IEEE Trans Biomed Eng 2017 Feb;64(2):341-351. [CrossRef] [Medline]
  61. Perveen S, Shahbaz M, Keshavjee K, Guergachi A. Prognostic modeling and prevention of diabetes using machine learning technique. Sci Rep 2019 Sep;9(1):13805 [FREE Full text] [CrossRef] [Medline]
  62. Sayadi M, Zibaeenezhad M, Taghi Ayatollahi SM. Simple prediction of type 2 diabetes mellitus via decision tree modeling. Int Cardiovasc Res J 2017;11(2):71-76.
  63. Afzal Z, Engelkes M, Verhamme K, Janssens H, Sturkenboom M, Kors J, et al. Automatic generation of case-detection algorithms to identify children with asthma from large electronic health record databases. Pharmacoepidemiol Drug Saf 2013 Aug;22(8):826-833. [CrossRef] [Medline]
  64. Braido F, Santus P, Corsico A, Di Marco F, Melioli G, Scichilone N, et al. Chronic obstructive lung disease "expert system": validation of a predictive tool for assisting diagnosis. Int J Chron Obstruct Pulmon Dis 2018;13:1747-1753 [FREE Full text] [CrossRef] [Medline]
  65. Gautier V, Rédier H, Pujol JL, Bousquet J, Proudhon H, Michel C, et al. Comparison of an expert system with other clinical scores for the evaluation of severity of asthma. Eur Respir J 1996 Jan;9(1):58-64 [FREE Full text] [CrossRef] [Medline]
  66. Hung J, Posey J, Freedman R, Thorton T. Electronic surveillance of disease states: a preliminary study in electronic detection of respiratory diseases in a primary care setting. In: Proceedings/AMIA Annual Symposium. 1998 Presented at: AMIA Annual Symposium; 1998; Florida p. 688-692.
  67. Klann JG, Anand V, Downs SM. Patient-tailored prioritization for a pediatric care decision support system through machine learning. J Am Med Inform Assoc 2013 Dec 01;20(e2):e267-e274 [FREE Full text] [CrossRef] [Medline]
  68. MacRae J, Darlow B, McBain L, Jones O, Stubbe M, Turner N, et al. Accessing primary care Big Data: the development of a software algorithm to explore the rich content of consultation records. BMJ Open 2015 Aug;5(8):e008160 [FREE Full text] [CrossRef] [Medline]
  69. Morales DR, Flynn R, Zhang J, Trucco E, Quint JK, Zutis K. External validation of ADO, DOSE, COTE and CODEX at predicting death in primary care patients with COPD using standard and machine learning approaches. Respir Med 2018 May;138:150-155 [FREE Full text] [CrossRef] [Medline]
  70. Betancourt-Hernandez M, Viera-Lopez G, Serrano-Munoz A. Automatic diagnosis of rheumatoid arthritis from hand radiographs using convolutional neural networks. Revista Cubana De Fisica 2018;35(1):39-43.
  71. Jarvik JG, Gold LS, Tan K, Friedly JL, Nedeljkovic SS, Comstock BA, et al. Long-term outcomes of a large, prospective observational cohort of older adults with back pain. Spine J 2018 Sep;18(9):1540-1551. [CrossRef] [Medline]
  72. Kerr GS, Richards JS, Nunziato CA, Patterson OV, DuVall SL, Aujero M, et al. Measuring physician adherence with gout quality indicators: a role for natural language processing. Arthritis Care Res (Hoboken) 2015 Feb;67(2):273-279 [FREE Full text] [CrossRef] [Medline]
  73. Arroyo-Gallego T, Ledesma-Carbayo MJ, Butterworth I, Matarazzo M, Montero-Escribano P, Puertas-Martín V, et al. Detecting motor impairment in early parkinson's disease via natural typing interaction with keyboards: Validation of the neuroQWERTY approach in an uncontrolled at-home setting. J Med Internet Res 2018 Mar;20(3):e89 [FREE Full text] [CrossRef] [Medline]
  74. Lee SI, Adans-Dester CP, Grimaldi M, Dowling AV, Horak PC, Black-Schaffer RM, et al. Enabling stroke rehabilitation in home and community settings: a wearable sensor-based approach for upper-limb motor training. IEEE J Transl Eng Health Med 2018;6:1-11. [CrossRef]
  75. Tandon R, Adak S, Kaye J. Neural networks for longitudinal studies in Alzheimer's disease. Artif Intell Med 2006 Mar;36(3):245-255. [CrossRef] [Medline]
  76. Levy B, Gable S, Tsoy E, Haspel N, Wadler B, Wilcox R, et al. Machine learning detection of cognitive impairment in primary care. J Am Geriatr Soc 2018;66:S111-S111. [CrossRef]
  77. Ursenbach J, O'Connell ME, Neiser J, Tierney MC, Morgan D, Kosteniuk J, et al. Scoring algorithms for a computer-based cognitive screening tool: an illustrative example of overfitting machine learning approaches and the impact on estimates of classification accuracy. Psychol Assess 2019 Nov;31(11):1377-1382. [CrossRef] [Medline]
  78. Denny JC, Choma NN, Peterson JF, Miller RA, Bastarache L, Li M, et al. Natural language processing improves identification of colorectal cancer testing in the electronic medical record. Med Decis Making 2012;32(1):188-197. [CrossRef] [Medline]
  79. Hoogendoorn M, Szolovits P, Moons LM, Numans ME. Utilizing uncoded consultation notes from electronic medical records for predictive modeling of colorectal cancer. Artif Intell Med 2016 May;69:53-61 [FREE Full text] [CrossRef] [Medline]
  80. Kop R, Hoogendoorn M, Teije AT, Büchner FL, Slottje P, Moons LM, et al. Predictive modeling of colorectal cancer using a dedicated pre-processing pipeline on routine electronic medical records. Comput Biol Med 2016 Sep;76:30-38. [CrossRef] [Medline]
  81. Lau K, Wilkinson J, Moorthy R. A web-based prediction score for head and neck cancer referrals. Clin Otolaryngol 2018 Aug;43(4):1043-1049. [CrossRef] [Medline]
  82. Haslam N, Beck AT. Categorization of major depression in an outpatient sample. J Nerv Ment Dis 1993 Dec;181(12):725-731. [CrossRef] [Medline]
  83. Lin Y, Huang S, Simon GE, Liu S. Data-based decision rules to personalize depression follow-up. Sci Rep 2018 Mar;8(1):5064-5068 [FREE Full text] [CrossRef] [Medline]
  84. Patel R, Jayatilleke N, Broadbent M, Chang C, Foskett N, Gorrell G, et al. Negative symptoms in schizophrenia: a study in a large clinical sample of patients using a novel automated method. BMJ Open 2015 Sep;5(9):e007619. [CrossRef]
  85. Abràmoff MD, Lavin PT, Birch M, Shah N, Folk JC. Pivotal trial of an autonomous AI-based diagnostic system for detection of diabetic retinopathy in primary care offices. NPJ Digit Med 2018 Aug;1(1):39-48 [FREE Full text] [CrossRef] [Medline]
  86. Kanagasingam Y, Xiao D, Vignarajan J, Preetham A, Tay-Kearney M, Mehrotra A. Evaluation of artificial intelligence-based grading of diabetic retinopathy in primary care. JAMA Netw Open 2018 Sep;1(5):e182665 [FREE Full text] [CrossRef] [Medline]
  87. Verbraak FD, Abramoff MD, Bausch GC, Klaver C, Nijpels G, Schlingemann RO, et al. Diagnostic accuracy of a device for the automated detection of diabetic retinopathy in a primary care setting. Diabetes Care 2019 Apr 14;42(4):651-656. [CrossRef] [Medline]
  88. Anderson HD, Pace WD, Brandt E, Nielsen RD, Allen RR, Libby AM, et al. Monitoring suicidal patients in primary care using electronic health records. J Am Board Fam Med 2015 Jan;28(1):65-71 [FREE Full text] [CrossRef] [Medline]
  89. Jordan P, Shedden-Mora MC, Löwe B. Predicting suicidal ideation in primary care: An approach to identify easily assessable key variables. Gen Hosp Psychiatry 2018 Mar;51:106-111. [CrossRef] [Medline]
  90. Doukidis GI, Forster D. The potential for computer-aided diagnosis of tropical diseases in developing countries: an expert system case study. Eur J Oper Res 1990 Nov;49(2):271-278. [CrossRef]
  91. Thakur S, Dharavath R. Artificial neural network based prediction of malaria abundances using big data: a knowledge capturing approach. Clin Epidemiol Glob Health 2019 Mar;7(1):121-126. [CrossRef]
  92. Luo L, Small D, Stewart WF, Roy JA. Methods for estimating kidney disease stage transition probabilities using electronic medical records. EGEMS (Wash DC) 2013 Dec;1(3):1040 [FREE Full text] [CrossRef] [Medline]
  93. Xu G, Player P, Shepherd D, Brunskill NJ. Identifying acute kidney injury in the community--a novel informatics approach. J Nephrol 2016 Feb;29(1):93-98. [CrossRef] [Medline]
  94. Ben-Sasson A, Robins DL, Yom-Tov E. Risk assessment for parents who suspect their child has autism spectrum disorder: machine learning approach. J Med Internet Res 2018 Apr;20(4):e134 [FREE Full text] [CrossRef] [Medline]
  95. Thabtah F, Kamalov F, Rajab K. A new computational intelligence approach to detect autistic features for autism screening. Int J Med Inform 2018 Sep;117:112-124. [CrossRef] [Medline]
  96. Janssen KJ, Siccama I, Vergouwe Y, Koffijberg H, Debray T, Keijzer M, et al. Development and validation of clinical prediction models: marginal differences between logistic regression, penalized maximum likelihood estimation, and genetic programming. J Clin Epidemiol 2012 Apr;65(4):404-412. [CrossRef] [Medline]
  97. Taylor R, Taylor A, Smyth J. Using an artificial neural network to predict healing times and risk factors for venous leg ulcers. J Wound Care 2002 Mar;11(3):101-105. [CrossRef] [Medline]
  98. Chen Y, Lin C, Hong C, Lee D, Sun C, Lin H. Design of a clinical decision support system for predicting erectile dysfunction in men using NHIRD dataset. IEEE J Biomed Health Inform 2019 Sep;23(5):2127-2137. [CrossRef]
  99. Lin J, Bruni FM, Fu Z, Maloney J, Bardina L, Boner AL, et al. A bioinformatics approach to identify patients with symptomatic peanut allergy using peptide microarray immunoassay. J Allergy Clin Immunol 2012 May;129(5):1321-1328 [FREE Full text] [CrossRef] [Medline]
  100. Abdel-Aal R, Mangoud A. Modeling obesity using abductive networks. Comput Biomed Res 1997 Dec;30(6):451-471. [CrossRef] [Medline]
  101. Maizels M, Wolfe WJ. An expert system for headache diagnosis: the Computerized Headache Assessment tool (CHAT). Headache 2008 Jan;48(1):72-78. [CrossRef] [Medline]
  102. Knab JH, Wallace MS, Wagner RL, Tsoukatos J, Weinger MB. The use of a computer-based decision support system facilitates primary care physicians' management of chronic pain. Anesth Analg 2001 Sep;93(3):712-720. [CrossRef] [Medline]
  103. Betts K, Kisely S, Alati R. Predicting common maternal postpartum complications: leveraging health administrative data and machine learning. BJOG 2019 May;126(6):702-709. [CrossRef] [Medline]
  104. Penny K, Smith G. The use of data-mining to identify indicators of health-related quality of life in patients with irritable bowel syndrome. J Clin Nurs 2012 Oct;21(19-20):2761-2771. [CrossRef] [Medline]
  105. Tran T, Fang T, Pham V, Lin C, Wang PC, Lo MT. Development of an automatic diagnostic algorithm for pediatric otitis media. Otol Neurotol 2018 Sep;39(8):1060-1065. [CrossRef] [Medline]
  106. Alpaydin E. Introduction to Machine Learning. Cambridge, Massachusetts, United States: MIT Press; 2020.
  107. Bishop C. Pattern Recognition and Machine Learning. New York, NY, United States: Springer; 2006.
  108. Tariq A, Westbrook J, Byrne M, Robinson M, Baysari MT. Applying a human factors approach to improve usability of a decision support system in tele-nursing. Collegian 2017 Jun;24(3):227-236. [CrossRef]
  109. Taati B, Zhao S, Ashraf AB, Asgarian A, Browne ME, Prkachin KM, et al. Algorithmic bias in clinical populations—evaluating and improving facial analysis technology in older adults with dementia. IEEE Access 2019;7:25527-25534. [CrossRef]
  110. Cirillo D, Catuara-Solarz S, Morey C, Guney E, Subirats L, Mellino S, et al. Sex and gender differences and biases in artificial intelligence for biomedicine and healthcare. NPJ Digit Med 2020 Jun;3(1):81 [FREE Full text] [CrossRef] [Medline]
  111. Regitz-Zagrosek V. Sex and gender differences in health. Science &amp; Society Series on Sex and Science. EMBO Rep 2012 Jun;13(7):596-603 [FREE Full text] [CrossRef] [Medline]
  112. Open Data Watch. Bridging the gap: mapping gender data availability in Latin America and the Caribbean. Data2X. 2020.   URL: https://data2x.org/resource-center/bridging-the-gap-mapping-gender-data-availability-in-latin-america-and-the-caribbean/ [accessed 2021-07-28]
  113. Pham Q, Gamble A, Hearn J, Cafazzo JA. The need for ethnoracial equity in artificial intelligence for diabetes management: review and recommendations. J Med Internet Res 2021 Feb;23(2):e22320 [FREE Full text] [CrossRef] [Medline]
  114. Shneiderman B. Human-centered artificial intelligence: reliable, safe &amp; trustworthy. Int J Hum Comput Interact 2020 Mar;36(6):495-504. [CrossRef]
  115. Xu W, Dainoff MJ, Ge L, Gao Z. From human-computer interaction to human-AI Interaction: new challenges and opportunities for enabling human-centered AI. ArXiv Preprint posted online on May 12, 2021. [FREE Full text]
  116. Milne-Ives M, de Cock C, Lim E, Shehadeh M, de Pennington N, Mole G, et al. The effectiveness of artificial intelligence conversational agents in health care: systematic review. J Med Internet Res 2020 Oct;22(10):e20346 [FREE Full text] [CrossRef] [Medline]
  117. Wolff J, Pauling J, Keck A, Baumbach J. The economic impact of artificial intelligence in health care: systematic review. J Med Internet Res 2020 Feb;22(2):e16866 [FREE Full text] [CrossRef] [Medline]
  118. Proposal for a regulation laying down harmonised rules on artificial intelligence (Artificial Intelligence Act). European Commission. 2021.   URL: https://www.…/record/proposal-for-a-regulation-laying-down-harmonised-rules-on-artificial-intelligence-artificial-intelligence-act-and-amending-certain-union-legislative-acts/ [accessed 2021-07-28]

AI: Artificial Intelligence
CBPHC: Community-Based Primary Health Care
CIHR: Canadian Institutes of Health Research
CINAHL: Cumulative Index to Nursing and Allied Health Literature
EPOC: Cochrane Effective Practice and Organisation of Care Review Group
FN: False negative
FP: False positive
JBI: Joanna Briggs Institute
MeSH: Medical Subject Headings
ML: Machine Learning
NLP: Natural Language Processing
OSF: Open Science Framework
PICOS: Population, Intervention, Comparison, Outcomes, Setting and Study designs
PRISMA-ScR: Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews
PROBAST: Prediction Model Risk of Bias Assessment Tool
TN: True negative
TP: True positive

Edited by G Eysenbach; submitted 23.04.21; peer-reviewed by R Hendricks-Sturrup; comments to author 17.05.21; revised version received 29.05.21; accepted 31.05.21; published 03.09.21


©Samira Abbasgholizadeh Rahimi, France Légaré, Gauri Sharma, Patrick Archambault, Herve Tchala Vignon Zomahoun, Sam Chandavong, Nathalie Rheault, Sabrina T Wong, Lyse Langlois, Yves Couturier, Jose L Salmeron, Marie-Pierre Gagnon, Jean Légaré. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 03.09.2021.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.