Abstract
Background: Primary health care (PHC) is critical for delivering accessible and continuous care but faces persistent challenges such as workforce shortages, administrative burden, and rising multimorbidity. Artificial intelligence (AI) has the potential to support PHC by enhancing diagnosis, workflow efficiency, and clinical decision-making. However, existing research often overlooks how AI tools function within the complex realities of primary care and how clinicians and patients experience them.
Objective: This scoping review maps the landscape of AI applications in PHC, with a focus on empirical studies involving direct engagement from PHC stakeholders. The review emphasizes real-world settings, clinical workflows, and the alignment of AI tools with the values and complexity of generalist care.
Methods: Following Joanna Briggs Institute methodology and PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews) guidelines, we searched PubMed, Web of Science, and Scopus databases up to April 13, 2024. Inclusion criteria were empirical, peer-reviewed studies published in English between January 2010 and April 2024, involving direct stakeholder interaction (general practitioners, nurses, or patients) in real-world PHC settings, evaluating AI applications (eg, diagnostics, workflow optimization, and documentation). Exclusions included algorithm-only validations, pediatric populations, secondary or tertiary care contexts not explicitly addressing PHC workflows, nonempirical research (eg, editorials or protocols), and non-English studies. We used thematic analysis to synthesize findings related to study aims, AI applications, and stakeholder roles.
Results: Of 5224 identified records, 73 studies met the inclusion criteria. Studies were grouped into four main themes: (1) early intervention and decision support (n=21; 29%), (2) chronic disease management (n=16; 22%), (3) operations and patient management (n=12; 16%), and (4) acceptance and implementation experiences (n=24; 33%). AI tools frequently demonstrated strong technical accuracy, particularly in diagnostic decision support. However, implementation in routine practice was often limited by usability barriers, workflow misalignment, trust concerns, equity gaps, and financial constraints.
Conclusions: Overall, AI holds significant potential to support PHC, especially when aligned with clinical reasoning, workflow needs, and relational care models. However, persistent implementation barriers such as usability challenges, training gaps, and workflow integration issues must be addressed. The evidence included in this review is limited by heterogeneity in study design and the predominance of small-scale feasibility studies. Future research should prioritize pragmatic trials, co-design with PHC professionals, and anticipatory planning using future-oriented methods to ensure responsible and equitable implementation.
doi:10.2196/65950
Introduction
Primary health care (PHC) is the foundation of equitable, accessible, and continuous health service delivery across populations. As the first point of contact in the health system, PHC manages undifferentiated symptoms, provides preventive services, and coordinates chronic disease care. In many countries, general practitioners (GPs) deliver PHC through the family medicine model, which emphasizes continuity, comprehensiveness, and long-term therapeutic relationships []. However, PHC is increasingly challenged by workforce shortages, administrative burden, and clinician burnout [-]. These issues are intensified by aging populations, multimorbidity, and persistent health inequalities, creating an urgent need for new strategies to maintain high-quality, person-centered care [,].
Digital technologies have become integral to primary care delivery as part of efforts to improve coordination, reduce administrative workload, and support clinical decision-making. Among these innovations, artificial intelligence (AI) has emerged as a particularly influential development, with applications spanning diagnostics, workflow optimization, and documentation [,-]. As the field shifts from narrow, task-specific models to more flexible, multimodal, and generative approaches, it is becoming increasingly important to evaluate how these systems interact with everyday practice [].
Despite growing interest, the literature on AI in PHC remains fragmented. Many studies focus on specific tasks, such as risk prediction or documentation support [-]. Others examine where and by whom AI tools are developed, often highlighting the dominance of bioinformatics and the limited involvement of frontline clinicians []. Previous reviews have typically categorized AI tools by technical function or task type but have rarely examined how these tools are implemented in clinical PHC or how they support PHC values such as continuity, accessibility, and patient engagement []. With the rise of more adaptable AI systems, particularly generative models, a systematic evaluation is therefore warranted at this stage of development [].
This scoping review identifies empirical studies on AI in PHC that involve direct participation of key stakeholders, including health care providers such as GPs and nurses, as well as patients. By focusing on real-world use, workflow integration, and clinical relevance, the review offers a practice-oriented overview of current applications and highlights areas for future research and implementation.
Methods
The review was conducted following the Joanna Briggs Institute methodology for scoping reviews and is reported per the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews) guidelines [,]. The completed PRISMA-ScR checklist is provided in the supplementary files. Eligibility criteria were developed using the Population, Concept, Context framework to ensure methodological rigor []. Detailed inclusion and exclusion criteria are outlined in the table below.
| Domain | Inclusion criteria | Exclusion criteria |
| Population | PHC stakeholders directly involved with AI (GPs, nurses, other PHC clinicians, or patients) | Studies with no stakeholder interaction (such as algorithm-only validation) or pediatric patients |
| Concept | AI applications tested in practice (diagnostics, workflow, triage, documentation, etc) | Digital tools without explicit AI components or medical education usage |
| Context | Real-world PHC settings (community clinics or GP offices) | Secondary or tertiary care, unless explicitly addressing PHC workflows |
| Study design | Empirical peer-reviewed research | Editorials, reviews, protocols, and conference abstracts |
| Language | English | Non-English |
| Date range | January 1, 2010, to April 13, 2024 | Outside date range |
Alternate text: studies were included if they involved empirical, peer-reviewed research published in English between January 1, 2010, and April 13, 2024. Eligible studies focused on artificial intelligence applications implemented or tested in real-world primary health care settings, involving direct interaction with primary health care stakeholders (eg, general practitioners, nurses, or patients). Studies were excluded if they lacked stakeholder interaction (eg, algorithm-only validations), focused solely on pediatric populations, or were conducted exclusively in secondary or tertiary care contexts without relevance to primary health care workflows. Additional exclusions applied to nonempirical work (eg, editorials or protocols) and non-English publications.
PHC: primary health care.
AI: artificial intelligence.
GP: general practitioner.
A 2-step search strategy was conducted per recommended guidelines []. The complete search strategy, including database-specific queries, is provided in the supplementary files. First, a preliminary search was performed in PubMed by author GK to identify relevant keywords and indexing terms. Based on these findings, a comprehensive search was then conducted across PubMed, Web of Science, and Scopus, using a combination of controlled vocabulary (eg, MeSH [Medical Subject Headings] terms) and free-text keywords related to AI and PHC, applied with Boolean operators. Identified studies were exported to Mendeley (version 1.109.1; Elsevier) and shared among the authors for further screening.
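To illustrate how such a query is assembled (a minimal, hypothetical sketch; the term lists below are examples only, and the exact database-specific strings used in this review are provided in the supplementary files):

```python
# Hypothetical sketch of the Boolean query structure described above; the
# term lists are illustrative examples, not the review's actual search terms.
ai_terms = [
    '"Artificial Intelligence"[Mesh]',
    '"machine learning"',
    '"deep learning"',
    '"natural language processing"',
]
phc_terms = [
    '"Primary Health Care"[Mesh]',
    '"general practice"',
    '"family medicine"',
    '"general practitioner"',
]

# Synonyms within each concept are combined with OR; the two concepts are
# then combined with AND, mirroring the controlled-vocabulary plus free-text
# approach applied across PubMed, Web of Science, and Scopus.
query = "({}) AND ({})".format(" OR ".join(ai_terms), " OR ".join(phc_terms))
print(query)
```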
After the search, duplicates were removed. Titles and abstracts were independently screened by GK and BM, who assessed each study against the inclusion criteria. Studies deemed potentially relevant proceeded to the full-text review phase, where both reviewers conducted a detailed evaluation.
Screening was carried out in multiple rounds, with iterative discussions to resolve uncertainties or discrepancies. Disagreements were resolved by consensus, with GK acting as the final reviewer. Additionally, NA conducted a final scan of the included studies to ensure consistency and alignment with the eligibility criteria.
Relevant data from the included studies were extracted and aggregated in Microsoft Excel (version 2402; Microsoft Corp). The extraction included key study characteristics: title, authors, year, journal, DOI, study design, setting or context, population or participants, data sources, clinical setting, key findings, summary, and the thematic group. The full data extraction table sorted by themes is provided in .
To further structure the evaluation, emerging themes were identified through the analysis of study aims, AI applications, and stakeholder roles, facilitating a structured mapping of evidence gaps and trends []. Following an initial familiarization with the dataset, open coding was conducted manually within the generated spreadsheet. Codes were iteratively reviewed and grouped into potential themes by all 3 researchers, then refined through multiple rounds of web-based and in-person discussions. The final themes were determined based on their recurrence across studies and their relevance to the research question. These themes informed the final synthesis, providing a structured lens for evaluating the included studies.
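As a minimal, hypothetical sketch of the underlying logic (in this review, the coding and theme grouping were performed manually in the shared spreadsheet and refined through discussion), the recurrence of open codes across studies can be tallied as follows; the column name and codes are illustrative only:

```python
# Hypothetical illustration of tallying open codes across studies to surface
# candidate themes; in this review the grouping was done manually.
import pandas as pd

extraction = pd.DataFrame(
    {
        "study_id": [1, 2, 3],
        "codes": [
            "decision support; diagnostic accuracy",
            "workflow integration; clinician trust",
            "decision support; clinician trust",
        ],
    }
)

code_counts = (
    extraction["codes"]
    .str.split(";")      # one list of codes per study
    .explode()           # one row per code
    .str.strip()
    .value_counts()      # recurrence across studies
)
print(code_counts)
```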
Results
Overview
We identified 5224 records, with 1954 duplicates removed. After screening 3270 titles and abstracts, 2874 studies were excluded. A total of 396 papers were assessed in full text. Three full texts were inaccessible, and 320 were excluded based on eligibility criteria, resulting in 73 studies included in the final review ().

These 73 studies encompassed diverse study designs and methodological approaches. The majority used quantitative research designs, including diagnostic accuracy studies, validation studies, and retrospective cohort analyses. A smaller subset used mixed-methods approaches, integrating quantitative performance assessments with qualitative evaluations of AI implementation. Additionally, 2 studies applied Delphi consensus methodology or choice experiments to understand expert and stakeholder perspectives on AI in clinical workflows.
The studies were geographically diverse, with a substantial number conducted in the United Kingdom, Germany, France, and North America, alongside contributions from other European, Asian, and Australian health care systems. Data sources varied widely, ranging from electronic health records (EHRs) and telemedicine platforms to AI-powered decision support systems and digital consultation transcripts.
We conducted a thematic analysis in which each study was assigned to 1 of 4 primary themes. Of the 73 studies included, 21 explored early intervention and decision support, 16 examined comprehensive chronic disease management and coordinated care, 12 addressed primary care operations and patient management, and 24 focused on acceptance, implementation, and experiences of AI in primary care. The distribution of these themes is illustrated in the table below.
| Theme | Studies (N=73), n (%) |
| Early intervention and decision support | 21 (29) |
| Chronic disease management | 16 (22) |
| Operations and patient management | 12 (16) |
| Acceptance and implementation | 24 (33) |
Studies were thematically categorized based on their primary focus using an inductive thematic analysis. Of the 73 peer-reviewed empirical studies, 21 (28.8%) addressed early intervention and clinical decision support; 16 (21.9%) focused on chronic disease management and coordinated care pathways; 12 (16.4%) explored primary care operations and patient management, including workflow optimization; and 24 (32.9%) examined the acceptance, implementation, and lived experiences of artificial intelligence integration in primary health care.
Theme 1: Early Intervention and Decision Support
Several studies evaluated AI for earlier detection of cancer and cardiovascular conditions. One model using patient records predicted colorectal cancer with 73% sensitivity and 84% specificity, supporting earlier diagnostic decision-making []. An AI tool using routine blood-test data predicted cancer risk more accurately than a conventional statistical model, with performance scores of 86% and 80%, respectively []. Cardiovascular risk detection with an AI-interpreted electrocardiogram (ECG) program raised low-ejection-fraction heart failure diagnoses from 1.6% to 2.1% [], and a follow-up analysis found that frequent tool users were twice as likely to detect the condition []. A combined ECG-stethoscope with an AI algorithm identified reduced ejection fraction with 92% sensitivity and 80% specificity [], while the Conformité Européenne–certified PMcardio (Powerful Medical, Inc) app detected atrial fibrillation with 97% sensitivity and 99% specificity in the consultation room [].
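As a point of reference for interpreting the diagnostic figures reported in this theme, the standard definitions of these metrics, with TP, FP, TN, and FN denoting true positives, false positives, true negatives, and false negatives, are:

$$\text{Sensitivity}=\frac{TP}{TP+FN},\qquad \text{Specificity}=\frac{TN}{TN+FP},\qquad \text{Negative predictive value}=\frac{TN}{TN+FN}$$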
AI also shows promise for skin lesion assessment in primary care: an AI morphology classifier reached 68% top-1 accuracy across 44 conditions [], and a handheld elastic-scattering spectroscopy device boosted skin-cancer diagnostic sensitivity from 67% to 88% []. Teledermatology research shows that AI assistance cut biopsy and referral rates while increasing clinician-dermatologist agreement from roughly 48% to 58% across 1048 cases [], and a prospective decision-support tool for melanoma screening achieved a 99.5% negative predictive value in 253 lesions []. A feasibility pilot showed 90% sensitivity and 65% specificity for AI-assisted melanoma detection with high usability []. Four further studies reported accuracies ranging from 39% to 89%, often with sensitivities above 90% [-].
In ophthalmology, machine-learning classifiers for glaucoma referral achieved up to 60% sensitivity and 77% specificity [], while an AI-assisted telemedicine platform detected urgent retinal disease with 97% sensitivity and 99% specificity, and cut workload by 96% []. Beyond disease-specific applications, machine learning systems are demonstrating superior performance in general diagnostic tasks within primary care: a text-note classifier identified primary headache disorders with 85% accuracy versus 66% for GPs [], while 1 ensemble AI model identified significant liver fibrosis with 94% overall accuracy and a 98% negative predictive value, performing better than standard blood-based scoring methods []. AI-driven decision aids can also enhance prescribing: 1 urinary-tract-infection management tool boosted treatment success from 75% to 84% across 36 practices [], while another study on acute respiratory infections reported 39%-77% uptake of an antibiotic-prescribing aid, potentially reducing unnecessary antibiotic use [].
Overall, most tools identified in the review targeted highly relevant conditions such as cancer, cardiovascular disease, and retinal disorders, where early diagnosis is especially impactful. These tools showed high diagnostic accuracy and were often based on structured clinical data sources such as ECGs, dermoscopic images, and EHRs. Key enablers included diagnostic accuracy, alignment with existing workflows, and support for timely decision-making without undermining clinical autonomy.
Theme 2: Comprehensive Chronic Disease Management and Coordinated Care
An AI-driven system for classifying digital specialist communication messages categorized them correctly in 86% of cases while requiring only 10% of the labeled data []. Machine learning also supports chronic care in PHC: a decision support system integrating GP engagement and EHR data improved diabetes management by increasing complication-free rates by up to 12% [], while an AI-based diabetes program in Mexico achieved a 5% improvement in glycemic control, identifying subgroups that benefited most from GP-led interventions [].
Researchers have evaluated a range of AI systems for diabetic retinopathy screening: deep-learning classifiers, combined macular degeneration detection models, teleplatforms with pupil dilation, automated graders, and handheld devices. These systems achieved sensitivities of 87%-100% and specificities of 89%-98% [-]. Implementation studies for retinopathy screening have examined real-world uptake, workflow impact, and patient follow-up. One telemedical screening approach engaged 85% of clinicians [], and a real-world AI grading software increased on-time report completion by 12 percentage points but showed only moderate concordance with endocrinologists. In a low-resource setting, AI screening maintained 100% sensitivity and doubled follow-up adherence [,].
AI can also support medication management in PHC: an AI web application reduced drug-interaction detection time from 37 minutes to 33.8 seconds, detecting 75.3% of potentially inappropriate medications [], while a clinical decision support system for polymedicated older adult patients improved prescribing safety and reduced adverse drug events in feasibility testing []. AI also aids respiratory and speech disorders: a vocal-cord pathology classifier achieved an F1-score of 0.98, indicating a near-perfect balance of sensitivity and precision, and outperformed specialist review in dysphonia detection [], while qualitative research on AI-supported spirometry highlighted the need for robust validation and specialist integration [].
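For reference, the F1-score reported above is the harmonic mean of precision (positive predictive value) and recall (sensitivity):

$$F_1=\frac{2\cdot\text{precision}\cdot\text{recall}}{\text{precision}+\text{recall}}$$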
Across studies, AI supported chronic disease management by enabling earlier risk stratification, safer prescribing, and more consistent follow-up. Tools were most effective when they were embedded in existing care processes, drew on longitudinal data, and supported GP-led coordination. Rather than replacing clinical workflows, these systems helped structure care across time, improving communication, safety, and responsiveness for patients with complex needs.
Theme 3: Primary Care Operations and Patient Management
An AI model trained on 239 GP consultation recordings assigned clinical codes with approximately 50% accuracy, indicating potential for partially automating routine coding tasks []. Another AI approach accurately flagged 98% of consultations suitable for remote management, although it correctly identified the specific reason for the consultation, such as prescription renewals versus new symptoms, in only 48% of cases []. One triage AI tool matched physician assessments in only 17% of cases overall, though it performed substantially better for nonurgent (74%) than for urgent (42%) cases []. A different respiratory triage model accurately excluded pneumonia in low-risk patients, reducing unnecessary chest x-ray referrals by 25% [].
AI has been explored to streamline documentation and workflows. Ambient voice technology that automatically captures clinical conversations decreased documentation time by 28.8%, with potential to alleviate physician burnout []. Machine learning–based audits of EHRs identified 80% of GP-assessed heart failure cases and reduced screening workloads by 33%, illustrating AI’s utility in medical record analysis []. Natural language processing models examining EHR notes identified discussions of prediabetes with high sensitivity (0.98) and specificity (0.96), revealing opportunities to address care gaps through early interventions [].
An AI-based risk prediction algorithm detected 45,493 new atrial fibrillation cases at £3994 (US $5423) for each additional year of healthy life gained, demonstrating cost-effectiveness []. Budget modeling indicates that a wider rollout could cut undiagnosed atrial fibrillation by 27%, prevent 3299 strokes, and reduce health care costs []. A machine-learning–based decision-tree model revealed that GPs based lipid-lowering prescriptions on individual risk factors and sociodemographic profiles rather than on guideline-recommended absolute-risk thresholds []. Appointment no-show predictors achieved 47% sensitivity and 79% specificity, enabling targeted reminders and fewer missed visits []. When primary care physicians evaluated chart summaries generated by topic models, they rated the 100-topic version as more coherent and appropriately detailed than the 50- and 150-topic versions, indicating better interpretability [].
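The cost-effectiveness figure cited above reflects the conventional incremental cost-effectiveness ratio (ICER); as a general reference (standard definition, not specific to the cited trial), it is computed as:

$$\text{ICER}=\frac{C_{\text{intervention}}-C_{\text{comparator}}}{E_{\text{intervention}}-E_{\text{comparator}}}$$

where C denotes total costs and E denotes health effects, for example, quality-adjusted life-years gained.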
Taken together, these studies show that AI is increasingly being tested to support primary care operations, including triage, documentation, coding, and scheduling. While tools vary in performance, many have demonstrated meaningful improvements in efficiency, diagnostic support, and administrative workload reduction. AI tools addressing operational tasks were most effective when they reduced clinician burden without compromising clinical autonomy, particularly when integrated with EHRs, designed for interpretability, and applied to low-complexity tasks.
Theme 4: Acceptance, Implementation, and Experiences of AI in Primary Care
Physician attitudes, patient perspectives, usability, and system factors shape AI integration. One mixed-methods study identified optimism and perceived innovativeness as key predictors of acceptance, while privacy concerns and health awareness influenced readiness []. A survey of GPs emphasized priorities such as urgent diagnoses, integration with EHRs, and personalized care, though concerns about clinical autonomy and tool usability remained []. In a discrete choice experiment, primary care providers preferred AI for breast cancer screening as a triage support system rather than a standalone diagnostic solution [].
Stakeholders and professionals across multiple contexts highlighted factors influencing AI adoption. Younger physicians were generally more open to AI, though privacy and regulatory concerns remained a barrier []. Risk profiling and administrative support emerged as top priorities, but equity and data quality issues limited broader implementation []. Financial, technical, and attitudinal challenges were frequently cited in studies of AI-based diabetic retinopathy screening [], with cost, reimbursement, and usability ranked as key enablers of GP engagement []. Qualitative work further emphasized the gap between envisioned AI use and practical realities, underscoring the need for co-creation, high-quality data, and ethical safeguards []. Among professionals, 85.7% reported understanding AI and 91.4% expressed interest in training, though concerns about ethics and interoperability remained [].
Physician trust and system readiness also impact adoption. Interview-based research found that GPs’ concerns about autonomy and trust hindered AI uptake [], and deliberative dialogues emphasized bias, regulation, and co-design as critical for implementation []. Surveys on AI for nonmelanoma skin cancer reported enthusiasm for diagnostic support, but cost and software availability limited broader use []. Perspectives on AI-assisted skin cancer detection pointed to benefits in diagnostic accuracy and care pathways, yet highlighted bias, usability, and shifting professional roles as key concerns []. A Delphi consensus called for rigorous design, evaluation, and ethical safeguards, noting challenges with integration and workflow [].
Patient attitudes and broader system challenges further shape AI adoption. One qualitative study found that while patients supported AI for decision support, they emphasized the importance of maintaining GP autonomy and trust, particularly when sharing personal data []. Observational research on AI-enabled diabetic retinopathy screening reported improved access and uptake, with patients expressing willingness to continue screenings in general practice despite some implementation challenges []. A feasibility study on AI-based symptom checkers during the pandemic found that nearly half of patients considered them useful, though physicians raised concerns about usability and integration into clinical workflows []. In a pilot conducted in a GP waiting room, most patients, especially younger users, found an AI-driven symptom checker helpful for initial self-assessment [].
A stakeholder-informed agenda prioritized AI for documentation, triage, and decision support, with emphasis on equity, safety, and training []. Workflow analyses emphasized user-centered design, system interoperability, and communication integration as key requirements for AI decision support tools []. GPs expressed support for doctor-AI collaboration but raised concerns about usability and workflow integration []. A mixed-methods study identified equity, workflow, and technical challenges as key barriers to AI implementation []. A survey found that GPs with higher self-efficacy tended to view AI more positively []. Family physicians reported low levels of AI-related anxiety and indicated that AI-specific training could support integration [].
Taken together, the studies indicate that successful AI integration in primary care depends on clinician trust, perceived usefulness, and alignment with clinical roles. Adoption was influenced by usability, data quality, ethical transparency, and regulatory readiness. Key enablers included user-centered design, structured training, and co-creation with stakeholders, whereas barriers commonly related to interoperability and unclear clinical value. Across studies, PHC professionals were most often engaged through post hoc feedback or during tool testing, with fewer examples of involvement in the design or validation phases, and implementation success ultimately depended on addressing both technical performance and professional integration needs.
Discussion
Principal Findings
This scoping review identified a wide range of AI applications in primary care, with studies grouped around 4 thematic areas: early diagnosis, chronic disease management, operational support, and implementation experiences. Many tools demonstrated strong technical performance, though most are in the early implementation stage and are not yet integrated into routine workflows. Across themes, studies frequently identified recurring enablers and challenges, including workflow alignment, clinician trust, and training availability. These findings suggest that technical accuracy alone is not sufficient to ensure real-world adoption in primary care.
Interpretation of Findings
Several recurring patterns emerged across the included studies. The consistent performance of structured-data-based tools suggests that aligning AI inputs with standardized clinical formats may be critical for diagnostic reliability and system integration in PHC. Tools that were designed to fit within routine clinical workflows, such as those used for screening, prescribing, or documentation, tended to be more usable and were adopted more readily, particularly when they reduced administrative burden while preserving clinician autonomy. In many cases, implementation success depended more on human and organizational factors than on technical capability. These included clinician trust, perceived usefulness, availability of training, and compatibility with existing professional roles. However, few studies engaged PHC professionals during the development phase, and most reported only postimplementation feedback, limiting opportunities for early alignment with clinical needs. Patient involvement was rare and typically limited to user testing or acceptability assessments. Together, these findings suggest that effective AI tools in primary care must respond to the relational, interpretive, and operational aspects of general practice. While these design features were present in several tools, broader integration was often limited by structural constraints that are explored in the following sections.
Technical Potential Versus Real-World Constraints
The reviewed studies demonstrate AI’s potential to enhance clinical decision-making, risk stratification, and operational efficiency. Despite promising technical performance during early pilot testing, however, most AI tools for PHC remain at the proof-of-concept stage, with limited integration into clinical workflows and unclear real-world impact. Bridging this gap requires tools that demonstrate clinical value while addressing ongoing implementation challenges, including usability, workflow integration, and cost. This gap between technical feasibility and clinical usability underscores the need for AI solutions tailored to PHC’s specific workflow demands, resource constraints, and the effort required to transform routine practice.
PHC deals with broad, often undifferentiated presentations, requiring AI systems to handle multimodal data and variable clinical reasoning, unlike task-specific tools in specialized care. This challenge was evident in triage tools and symptom checkers, which performed inconsistently depending on use case and clinical context. This variability highlights the difficulty of designing AI systems that can replicate the nuanced, situation-dependent reasoning of GPs, which often relies on patient history, symptom presentation, and social context.
These challenges are compounded by broader system-level issues. Primary care providers worldwide face high levels of administrative burden and burnout, often driven by staffing shortages, complex EHR systems, and increasing time pressures. The COVID-19 pandemic further intensified these issues by accelerating the shift toward asynchronous, electronic, and nonvisit care models, while also fostering novel diagnostic pathways and forms of doctor-patient interaction []. In other sectors of health care, such as hospital administration, AI has already begun to ease such burdens through tools such as ambient digital scribes, suggesting that successful models for reducing workload exist but have yet to be fully adapted for PHC settings.
The Human-Technology Divide in AI Adoption
A key theme emerging from the literature is the tension between the efficiency gains offered by AI and the central role of personal connection in PHC. Clinicians recognize AI’s potential to reduce administrative burden, a known contributor to burnout, and to enhance diagnostic precision. However, skepticism persists over issues of autonomy, interpretability, and transparency in decision-making. While AI tools for prescribing, risk assessment, and triage have demonstrated potential, hesitation persists around the risk of undermining clinical judgment and patient-centered care.
For patients, AI’s role in expediting referrals and diagnostic pathways was generally viewed positively, particularly when it improved access or screening uptake. However, a consistent preference for human-centered care and continuity in GP relationships emerged across studies. Given PHC’s emphasis on trust, shared decision-making, and holistic care, AI must be perceived as supporting the clinician-patient relationship rather than replacing it. This suggests that AI systems designed to support clinical judgment, especially those developed through co-design with GPs and patients, are more likely to be accepted and integrated into primary care. The World Organization of Family Doctors’ Europe Future Plan 2023-2027 identified delegable tasks as one of its thematic goals, an area in which AI can help improve GPs’ work [].
Equity and Global Challenges in AI Deployment
As seen in this review, AI research in primary care is concentrated in high-income settings, and evidence from low-resource settings was limited. Tools developed in well-resourced systems may not perform reliably in environments where infrastructure, data quality, clinical workflows, or population health needs differ significantly, and without validation in diverse contexts, AI systems risk introducing bias or failing to generalize across global primary care settings.
Given this geographic concentration, inclusive AI development remains a priority. Ensuring equitable integration in primary care requires validation in diverse clinical and socioeconomic contexts. As PHC plays a critical role in promoting health equity, future AI tools should be developed with diverse data representation, bias mitigation strategies, and deployment models adapted to varied levels of health care access.
Comparison With Existing Literature
Previous reviews have established AI’s emerging role in diagnostics, chronic disease monitoring, and administrative support, but gaps remain in understanding its practical implementation in PHC workflows. This review builds on earlier work by offering a broader perspective that contextualizes AI’s challenges and opportunities within real-world PHC settings.
A scoping review on AI use in PHC identified ML, natural language processing, and expert systems as the most commonly used AI interventions in community-based PHC, primarily for diagnosis, detection, and surveillance []. Our review corroborates these findings, demonstrating AI’s role in early diagnosis, decision support, and chronic disease management while also expanding the discussion to include operational efficiency and administrative automation.
Whereas previous research found that AI research in primary care is at an early stage and often lacks interdisciplinary collaboration and end-user engagement, our study examines the practical implications of AI integration within PHC, emphasizing its impact on clinical workflows and patient outcomes [].
In other medical specialties, such as radiology and oncology, studies have similarly reported that despite promising technical developments, the real-world integration of AI tools remains limited. Common challenges across these fields include insufficient alignment with clinical workflows, limited trust in algorithmic outputs, unclear regulatory frameworks, and inadequate training for health care professionals. These issues closely resemble the barriers identified in our review of primary care, indicating that many of the obstacles to implementation are not unique to this setting. At the same time, the broader scope of patient presentations, the continuity of care, and the central role of the patient-clinician relationship in primary care may intensify these challenges. This comparison underscores the importance of developing AI implementation strategies that are not only technically robust but also sensitive to the everyday realities of general practice [-].
Aligning AI with GP Roles
Our findings can be conceptually mapped onto the fundamental roles of a GP. In this model, the physician is placed at the center of a triangle defined by acute care, chronic care, and practice management (). The theme of early intervention and decision support directly enhances acute care by enabling faster, more accurate diagnoses and interventions during urgent encounters. Similarly, the theme of comprehensive chronic disease management supports the GP’s role in long-term patient monitoring and treatment adjustments, which is essential in managing chronic conditions. Lastly, the themes addressing primary care operations and user acceptance underscore the importance of effective practice management. This aligns with the distinctive characteristics of primary care data, which are often longitudinal, heterogeneous, and rooted in undifferentiated clinical presentations. These complexities demand tools that are not only accurate but contextually sensitive to PHC’s comprehensive scope [].

Limitations
This review has several limitations. First, this study was limited to 3 indexed databases and empirical, peer-reviewed research papers, potentially excluding relevant research from other databases or gray literature sources. The cutoff date of April 13, 2024, means that newer advancements, particularly in generative AI and evolving clinical applications, are not captured.
Second, language bias is a limitation, as the review included only English-language publications, potentially omitting valuable research from non-English-speaking regions. Third, the included studies varied in design and scope, ranging from small-scale feasibility studies to retrospective analyses, making direct comparisons difficult; for this reason, we also refrained from critical appraisal.
Additionally, as a scoping review, this study aimed to map available literature rather than assess the quality or strength of evidence. Future systematic reviews with meta-analyses will be necessary to determine AI’s clinical effectiveness relative to standard care.
Future Directions
To advance beyond narrow, disease-specific pilots, future research should adopt longitudinal, system-aware designs that reflect the real-world complexity of PHC. This includes evaluating how AI interacts with multimorbidity, time constraints, and relational continuity, elements that are often absent from current trials. Integrating patient experience and generalist clinical reasoning into evaluation frameworks will also be essential.
Beyond empirical research, the development of AI in primary care would benefit from structured, anticipatory planning. Future-oriented methods (such as scenario analysis and backcasting) can help stakeholders collaboratively envision pathways for responsible implementation. These approaches are well-suited to the uncertainties and ethical stakes of AI integration and offer a shared foundation for aligning innovation with the core values of primary care [].
Conclusions
This scoping review mapped the current landscape of AI applications in PHC, identifying tools aimed at early diagnosis, chronic disease management, operational support, and implementation experiences. While many tools demonstrated promising technical performance, especially those using structured clinical data, most were in an early testing phase and had not yet been integrated into routine practice. Common enablers across studies included alignment with existing workflows, structured data inputs, and clinician trust. However, persistent challenges, such as usability concerns, training gaps, and organizational barriers, continue to limit broader adoption. These findings emphasize that the future of AI in PHC depends not only on technological capability but also on thoughtful integration into the relational and practical realities of primary care.
Acknowledgments
A generative artificial intelligence (AI) tool (ChatGPT, developed by OpenAI) was used for copyediting and language refinement during manuscript preparation.
Data Availability
All data generated or analyzed during this study are included in this published article and its supplementary information files.
Authors' Contributions
GK conceptualized this study, curated the data, performed the formal analysis, managed the project, prepared the visualizations, and drafted the original paper. BM contributed to the methodology, participated in the formal analysis, supervised the project, and revised this paper critically. NA contributed to the validation of the findings and edited this paper.
Conflicts of Interest
BM is a guest editor for Journal of Medical Internet Research. The other authors have no conflicts of interest to declare.
Explanation of the search strategy.
DOCX File, 15 KB
The database used in the scoping review.
XLSX File, 37 KB
The PRISMA-ScR checklist for this paper.
DOCX File, 86 KB
References
- Allen J, et al. The European definition of general practice / family medicine. In: The Wonca Tree-As Produced by the Swiss College of Primary Care Prepared for Wonca Europe (The European Society of General Practice/ Family Medicine). Wonca Europe; 2002:1-36. URL: https://www.woncaeurope.org/file/41f61fb9-47d5-4721-884e-603f4afa6588/WONCA_European_Definitions_2_v7.pdf [Accessed 2025-07-26]
- Shen X, Xu H, Feng J, Ye J, Lu Z, Gan Y. The global prevalence of burnout among general practitioners: a systematic review and meta-analysis. Fam Pract. Sep 24, 2022;39(5):943-950. [CrossRef]
- Abraham CM, Zheng K, Poghosyan L. Predictors and outcomes of burnout among primary care providers in the United States: a systematic review. Med Care Res Rev. Oct 2020;77(5):387-401. [CrossRef]
- Lawson E. The global primary care crisis. Br J Gen Pract. Jan 2023;73(726):3. [CrossRef] [Medline]
- Gunja MZ, et al. Stressed out and burned out: the global primary care crisis — findings from the 2022 international health policy survey of primary care physicians. The Commonwealth Fund; 2022. URL: https://www.commonwealthfund.org/publications/issue-briefs/2022/nov/stressed-out-burned-out-2022-international-survey-primary-care-physicians [Accessed 2025-07-12]
- Liaw W, Kakadiaris IA, Doubeni CA, Wilkinson JM, Korsen N, Midthun DE. Primary care artificial intelligence: a branch hiding in plain sight. Ann Fam Med. May 2020;18(3):194-195. [CrossRef] [Medline]
- Mainous AG. Will technology and artificial intelligence make the primary care doctor obsolete? Remember the Luddites. Front Med (Lausanne). 2022;9:878281. [CrossRef] [Medline]
- Liaw WR, Westfall JM, Williamson TS, Jabbarpour Y, Bazemore A. Primary care: the actual intelligence required for artificial intelligence to advance health care and improve health. JMIR Med Inform. Mar 8, 2022;10(3):e27691. [CrossRef] [Medline]
- Kueper JK. Primer for artificial intelligence in primary care. Can Fam Physician. Dec 2021;67(12):889-893. [CrossRef] [Medline]
- Turcian D, Stoicu-Tivadar V. Artificial intelligence in primary care: an overview. Stud Health Technol Inform. Jan 14, 2022;289:208-211. [CrossRef] [Medline]
- Liaw W, Kakadiaris IA. Artificial intelligence and family medicine: better together. Fam Med. 2020;52(1):8-10. [CrossRef]
- Lin SY, Mahoney MR, Sinsky CA. Ten ways artificial intelligence will transform primary care. J Gen Intern Med. Aug 2019;34(8):1626-1630. [CrossRef] [Medline]
- Howell MD, Corrado GS, DeSalvo KB. Three epochs of artificial intelligence in health care. JAMA. Jan 16, 2024;331(3):242. [CrossRef]
- Sørensen NL, Bemman B, Jensen MB, Moeslund TB, Thomsen JL. Machine learning in general practice: scoping review of administrative task support and automation. BMC Prim Care. Jan 14, 2023;24(1):14. [CrossRef] [Medline]
- Baviskar D, Ahirrao S, Potdar V, Kotecha K. Efficient automated processing of the unstructured documents using artificial intelligence: a systematic literature review and future directions. IEEE Access. 2021;9:72894-72936. [CrossRef]
- Jones OT, Calanzani N, Saji S, et al. Artificial intelligence techniques that may be applied to primary care data to facilitate earlier diagnosis of cancer: systematic review. J Med Internet Res. Mar 3, 2021;23(3):e23483. [CrossRef] [Medline]
- Kueper JK, Terry AL, Zwarenstein M, Lizotte DJ. Artificial intelligence and primary care research: a scoping review. Ann Fam Med. May 2020;18(3):250-258. [CrossRef] [Medline]
- Rahimi SA, Légaré F, Sharma G, et al. Application of artificial intelligence in community-based primary health care: systematic scoping review and critical appraisal. J Med Internet Res. Sep 3, 2021;23(9):e29839. [CrossRef] [Medline]
- Peters MDJ, Marnie C, Tricco AC, et al. Updated methodological guidance for the conduct of scoping reviews. JBI Evidence Synthesis. 2020;18(10):2119-2126. [CrossRef]
- Tricco AC, Lillie E, Zarin W, et al. PRISMA Extension for Scoping Reviews (PRISMA-ScR): checklist and explanation. Ann Intern Med. Oct 2, 2018;169(7):467-473. [CrossRef]
- Pollock D, Peters MDJ, Khalil H, et al. Recommendations for the extraction, analysis, and presentation of results in scoping reviews. JBI Evidence Synthesis. 2023;21(3):520-532. [CrossRef]
- Braun V, Clarke V. Using thematic analysis in psychology. Qual Res Psychol. Jan 2006;3(2):77-101. [CrossRef]
- Nemlander E, Ewing M, Abedi E, et al. A machine learning tool for identifying non-metastatic colorectal cancer in primary care. Eur J Cancer. Mar 2023;182:100-106. [CrossRef]
- Soerensen PD, Christensen H, Laursen SGW, Hardahl C, Brandslund I, Madsen JS. Using artificial intelligence in a primary care setting to identify patients at risk for cancer: a risk prediction model based on routine laboratory tests. Clin Chem Lab Med. Nov 25, 2022;60(12):2005-2016. [CrossRef] [Medline]
- Yao X, Rushlow DR, Inselman JW, et al. Artificial intelligence–enabled electrocardiograms for identification of patients with low ejection fraction: a pragmatic, randomized clinical trial. Nat Med. May 2021;27(5):815-819. [CrossRef]
- Rushlow DR, Croghan IT, Inselman JW, et al. Clinician adoption of an artificial intelligence algorithm to detect left ventricular systolic dysfunction in primary care. Mayo Clin Proc. Nov 2022;97(11):2076-2085. [CrossRef]
- Bachtiger P, Petri CF, Scott FE, et al. Point-of-care screening for heart failure with reduced ejection fraction using artificial intelligence during ECG-enabled stethoscope examination in London, UK: a prospective, observational, multicentre study. Lancet Digit Health. Feb 2022;4(2):e117-e125. [CrossRef] [Medline]
- Himmelreich JCL, Harskamp RE. Diagnostic accuracy of the PMcardio smartphone application for artificial intelligence-based interpretation of electrocardiograms in primary care (AMSTELHEART-1). Cardiovasc Digit Health J. Jun 2023;4(3):80-90. [CrossRef] [Medline]
- Dulmage B, Tegtmeyer K, Zhang MZ, Colavincenzo M, Xu S. A point-of-care, real-time artificial intelligence system to support clinician diagnosis of a wide range of skin diseases. J Invest Dermatol. May 2021;141(5):1230-1235. [CrossRef] [Medline]
- Jaklitsch E, Thames T, de Campos Silva T, Coll P, Oliviero M, Ferris LK. Clinical utility of an AI-powered, handheld elastic scattering spectroscopy device on the diagnosis and management of skin cancer by primary care physicians. J Prim Care Community Health. 2023;14:21501319231205979. [CrossRef] [Medline]
- Jain A, Way D, Gupta V, et al. Development and assessment of an artificial intelligence-based tool for skin condition diagnosis by primary care physicians and nurse practitioners in teledermatology practices. JAMA Netw Open. Apr 1, 2021;4(4):e217249. [CrossRef] [Medline]
- Papachristou P, Söderholm M, Pallon J, et al. Evaluation of an artificial intelligence-based decision support for the detection of cutaneous melanoma in primary care: a prospective real-life clinical trial. Br J Dermatol. Jun 20, 2024;191(1):125-133. [CrossRef]
- Helenason J, Ekström C, Falk M, Papachristou P. Exploring the feasibility of an artificial intelligence based clinical decision support system for cutaneous melanoma detection in primary care - a mixed method study. Scand J Prim Health Care. Mar 2024;42(1):51-60. [CrossRef] [Medline]
- Escalé-Besa A, Yélamos O, Vidal-Alaball J, et al. Exploring the potential of artificial intelligence in improving skin lesion diagnosis in primary care. Sci Rep. Mar 15, 2023;13(1):4293. [CrossRef] [Medline]
- Giavina-Bianchi M, de Sousa RM, Paciello VDA, et al. Implementation of artificial intelligence algorithms for melanoma screening in a primary care setting. PLoS ONE. 2021;16(9):e0257006. [CrossRef] [Medline]
- Lucius M, De All J, De All JA, et al. Deep neural frameworks improve the accuracy of general practitioners in the classification of pigmented skin lesions. Diagnostics (Basel). Nov 18, 2020;10(11):969. [CrossRef] [Medline]
- Miller IJ, Stapelberg M, Rosic N, et al. Implementation of artificial intelligence for the detection of cutaneous melanoma within a primary care setting: prevalence and types of skin cancer in outdoor enthusiasts. PeerJ. 2023;11:e15737. [CrossRef] [Medline]
- Kaskar OG, Wells-Gray E, Fleischman D, Grace L. Evaluating machine learning classifiers for glaucoma referral decision support in primary care settings. Sci Rep. May 20, 2022;12(1):8518. [CrossRef] [Medline]
- Liu X, Zhao C, Wang L, et al. Evaluation of an OCT-AI-based telemedicine platform for retinal disease screening and referral in a primary care setting. Transl Vis Sci Technol. Mar 2, 2022;11(3):4. [CrossRef] [Medline]
- Ellertsson S, Loftsson H, Sigurdsson EL. Artificial intelligence in the GPs office: a retrospective study on diagnostic accuracy. Scand J Prim Health Care. Dec 2021;39(4):448-458. [CrossRef] [Medline]
- Blanes-Vidal V, Lindvig KP, Thiele M, Nadimi ES, Krag A. Artificial intelligence outperforms standard blood-based scores in identifying liver fibrosis patients in primary care. Sci Rep. Feb 21, 2022;12(1):2914. [CrossRef] [Medline]
- Herter WE, Khuc J, Cinà G, et al. Impact of a machine learning-based decision support system for urinary tract infections: prospective observational study in 36 primary care practices. JMIR Med Inform. May 4, 2022;10(5):e27795. [CrossRef] [Medline]
- Litvin CB, Ornstein SM, Wessell AM, Nemeth LS, Nietert PJ. Adoption of a clinical decision support system to promote judicious use of antibiotics for acute respiratory infections in primary care. Int J Med Inform. Aug 2012;81(8):521-526. [CrossRef]
- Ding X, Barnett M, Mehrotra A, Tuot DS, Bitterman DS, Miller TA. Classifying unstructured electronic consult messages to understand primary care physician specialty information needs. J Am Med Inform Assoc. Aug 16, 2022;29(9):1607-1617. [CrossRef] [Medline]
- Frontoni E, Romeo L, Bernardini M, et al. A decision support system for diabetes chronic care models based on general practitioner engagement and EHR data sharing. IEEE J Transl Eng Health Med. 2020;8:3000112. [CrossRef] [Medline]
- You Y, Doubova SV, Pinto-Masis D, Pérez-Cuevas R, Borja-Aburto VH, Hubbard A. Application of machine learning methodology to assess the performance of DIABETIMSS program for patients with type 2 diabetes in family medicine clinics in Mexico. BMC Med Inform Decis Mak. Nov 12, 2019;19(1):221. [CrossRef] [Medline]
- Bhuiyan A, Govindaiah A, Deobhakta A, Hossain M, Rosen R, Smith T. Automated diabetic retinopathy screening for primary care settings using deep learning. Intell Based Med. 2021;5:100045. [CrossRef] [Medline]
- Bhuiyan A, Govindaiah A, Alauddin S, Otero-Marquez O, Smith RT. Combined automated screening for age-related macular degeneration and diabetic retinopathy in primary care settings. Ann Eye Sci. Jun 2021;6:12. [CrossRef] [Medline]
- Mehra AA, Softing A, Guner MK, Hodge DO, Barkmeier AJ. Diabetic retinopathy telemedicine outcomes with artificial intelligence-based image analysis, reflex dilation, and image overread. Am J Ophthalmol. Dec 2022;244:125-132. [CrossRef]
- Kanagasingam Y, Xiao D, Vignarajan J, Preetham A, Tay-Kearney ML, Mehrotra A. Evaluation of artificial intelligence-based grading of diabetic retinopathy in primary care. JAMA Netw Open. Sep 7, 2018;1(5):e182665. [CrossRef] [Medline]
- Verbraak FD, Abramoff MD, Bausch GCF, et al. Diagnostic accuracy of a device for the automated detection of diabetic retinopathy in a primary care setting. Diabetes Care. Apr 1, 2019;42(4):651-656. [CrossRef]
- Abràmoff MD, Lavin PT, Birch M, Shah N, Folk JC. Pivotal trial of an autonomous AI-based diagnostic system for detection of diabetic retinopathy in primary care offices. NPJ Digit Med. 2018;1:39. [CrossRef] [Medline]
- Wintergerst MWM, Bejan V, Hartmann V, et al. Telemedical diabetic retinopathy screening in a primary care setting: quality of retinal photographs and accuracy of automated image analysis. Ophthalmic Epidemiol. May 4, 2022;29(3):286-295. [CrossRef]
- Liu J, Gibson E, Ramchal S, et al. Diabetic retinopathy screening with automated retinal image analysis in a primary care setting improves adherence to ophthalmic care. Ophthalmol Retina. Jan 2021;5(1):71-77. [CrossRef] [Medline]
- Li YH, Sheu WHH, Chou CC, et al. The clinical influence after implementation of convolutional neural network-based software for diabetic retinopathy detection in the primary care setting. Life (Basel). Mar 5, 2021;11(3):1-9. [CrossRef] [Medline]
- Akyon SH, Akyon FC, Yılmaz TE. Artificial intelligence-supported web application design and development for reducing polypharmacy side effects and supporting rational drug use in geriatric patients. Front Med (Lausanne). 2023;10:1029198. [CrossRef] [Medline]
- Manzanet JMP, Fico G, Merino-Barbancho B, et al. Feasibility study of a clinical decision support system for polymedicated patients in primary care. Healthc Technol Lett. Jun 2023;10(3):62-72. [CrossRef] [Medline]
- Compton EC, Cruz T, Andreassen M, et al. Developing an artificial intelligence tool to predict vocal cord pathology in primary care settings. Laryngoscope. Aug 2023;133(8):1952-1960. [CrossRef]
- Doe G, Taylor SJ, Topalovic M, et al. Spirometry services in England post-pandemic and the potential role of AI support software: a qualitative study of challenges and opportunities. Br J Gen Pract. Dec 2023;73(737):e915-e923. [CrossRef] [Medline]
- Pyne Y, Wong YM, Fang H, Simpson E. Analysis of “one in a million” primary care consultation conversations using natural language processing. BMJ Health Care Inform. Apr 2023;30(1):e100659. [CrossRef] [Medline]
- Seguí FL, Aguilar RAE, de Maeztu G, et al. Teleconsultations between patients and healthcare professionals in primary care in Catalonia: the evaluation of text classification algorithms using supervised machine learning. Int J Environ Res Public Health. Feb 9, 2020;17(3):1093. [CrossRef] [Medline]
- Entezarjou A, Bonamy AKE, Benjaminsson S, Herman P, Midlöv P. Human- versus machine learning-based triage using digitalized patient histories in primary care: comparative study. JMIR Med Inform. Sep 3, 2020;8(9):e18930. [CrossRef] [Medline]
- Ellertsson S, Hlynsson HD, Loftsson H, Sigurðsson EL. Triaging patients with artificial intelligence for respiratory symptoms in primary care to improve patient outcomes: a retrospective diagnostic accuracy study. Ann Fam Med. May 2023;21(3):240-248. [CrossRef] [Medline]
- Owens LM, Wilda JJ, Hahn PY, Koehler T, Fletcher JJ. The association between use of ambient voice technology documentation during primary care patient encounters, documentation burden, and provider burnout. Fam Pract. Apr 15, 2024;41(2):86-91. [CrossRef]
- Raat W, Smeets M, Henrard S, et al. Machine learning optimization of an electronic health record audit for heart failure in primary care. ESC Heart Fail. Feb 2022;9(1):39-47. [CrossRef] [Medline]
- Tseng E, Schwartz JL, Rouhizadeh M, Maruthur NM. Analysis of primary care provider electronic health record notes for discussions of prediabetes using natural language processing methods. J Gen Intern Med. 2021. [CrossRef]
- Hill NR, Groves L, Dickerson C, et al. Identification of undiagnosed atrial fibrillation using a machine learning risk prediction algorithm and diagnostic testing (PULsE-AI) in primary care: cost-effectiveness of a screening strategy evaluated in a randomized controlled trial in England. J Med Econ. Dec 31, 2022;25(1):974-983. [CrossRef]
- Szymanski T, Ashton R, Sekelj S, et al. Budget impact analysis of a machine learning algorithm to predict high risk of atrial fibrillation among primary care patients. EP Europace. Sep 1, 2022;24(8):1240-1247. [CrossRef]
- Schilling C, Mortimer D, Dalziel K, Heeley E, Chalmers J, Clarke P. Using Classification and Regression Trees (CART) to identify prescribing thresholds for cardiovascular disease. Pharmacoeconomics. Feb 2016;34(2):195-205. [CrossRef]
- Ahmad MU, Zhang A, Mhaskar R. A predictive model for decreasing clinical no-show rates in a primary care setting. Int J Healthc Manag. Jul 3, 2021;14(3):829-836. [CrossRef]
- Arnold CW, Oh A, Chen S, Speier W. Evaluating topic model interpretability from a primary care physician perspective. Comput Methods Programs Biomed. Feb 2016;124:67-75. [CrossRef] [Medline]
- Ben-Gal HC. Artificial intelligence (AI) acceptance in primary care during the coronavirus pandemic: what is the role of patients’ gender, age and health awareness? A two-phase pilot study. Front Public Health. 2022;10:931225. [CrossRef] [Medline]
- Tabla S, et al. Artificial intelligence and clinical decision support systems or automated interpreters: What characteristics are expected by French general practitioners? In: Stud Health Technol Inform. IOS Press BV; 2022:887-891. [CrossRef]
- Hendrix N, Hauber B, Lee CI, Bansal A, Veenstra DL. Artificial intelligence in breast cancer screening: primary care provider preferences. J Am Med Inform Assoc. Jun 12, 2021;28(6):1117-1124. [CrossRef] [Medline]
- Alanzi T, Alotaibi R, Alajmi R, et al. Barriers and facilitators of artificial intelligence in family medicine: an empirical study with physicians in Saudi Arabia. Cureus. Nov 2023;15(11):e49419. [CrossRef] [Medline]
- Kueper JK, Terry A, Bahniwal R, et al. Connecting artificial intelligence and primary care challenges: findings from a multi stakeholder collaborative consultation. BMJ Health Care Inform. Jan 2022;29(1):e100493. [CrossRef] [Medline]
- Held LA, Wewetzer L, Steinhäuser J. Determinants of the implementation of an artificial intelligence-supported device for the screening of diabetic retinopathy in primary care – a qualitative study. Health Informatics J. Jul 2022;28(3):14604582221112816. [CrossRef]
- Wewetzer L, Held LA, Goetz K, Steinhäuser J. Determinants of the implementation of artificial intelligence-based screening for diabetic retinopathy-a cross-sectional study with general practitioners in Germany. Digit Health. 2023;9:20552076231176644. [CrossRef] [Medline]
- Terry AL, Kueper JK, Beleno R, et al. Is primary health care ready for artificial intelligence? What do primary health care stakeholders say? BMC Med Inform Decis Mak. Sep 9, 2022;22(1):237. [CrossRef] [Medline]
- Catalina QM, Fuster-Casanovas A, Vidal-Alaball J, et al. Knowledge and perception of primary care healthcare professionals on the use of artificial intelligence as a healthcare tool. Digit Health. 2023;9:20552076231180511. [CrossRef] [Medline]
- Buck C, Doctor E, Hennrich J, Jöhnk J, Eymann T. General practitioners’ attitudes toward artificial intelligence-enabled systems: interview study. J Med Internet Res. Jan 27, 2022;24(1):e28916. [CrossRef] [Medline]
- Darcel K, Upshaw T, Craig-Neil A, et al. Implementing artificial intelligence in Canadian primary care: barriers and strategies identified through a national deliberative dialogue. PLoS ONE. 2023;18(2):e0281733. [CrossRef] [Medline]
- Samaran R, L’Orphelin JM, Dreno B, Rat C, Dompmartin A. Interest in artificial intelligence for the diagnosis of non-melanoma skin cancer: a survey among French general practitioners. Eur J Dermatol. Aug 2021;31(4):457-462. [CrossRef]
- Sangers TE, Wakkee M, Moolenburgh FJ, Nijsten T, Lugtenberg M. Towards successful implementation of artificial intelligence in skin cancer care: a qualitative study exploring the views of dermatologists and general practitioners. Arch Dermatol Res. Jul 2023;315(5):1187-1195. [CrossRef] [Medline]
- Liyanage H, Liaw ST, Jonnagaddala J, et al. Artificial intelligence in primary health care: perceptions, issues, and challenges. Yearb Med Inform. Aug 2019;28(1):41-46. [CrossRef] [Medline]
- Mikkelsen JG, Sørensen NL, Merrild CH, Jensen MB, Thomsen JL. Patient perspectives on data sharing regarding implementing and using artificial intelligence in general practice - a qualitative study. BMC Health Serv Res. Apr 4, 2023;23(1):335. [CrossRef] [Medline]
- Nolan B, Daybranch ER, Barton K, Korsen N. Patient and provider experience with artificial intelligence screening technology for diabetic retinopathy in a rural primary care setting. J Maine Med Cent. 2023;5(2):2. [CrossRef] [Medline]
- Mahlknecht A, Engl A, Piccoliori G, Wiedermann CJ. Supporting primary care through symptom checking artificial intelligence: a study of patient and physician attitudes in Italian general practice. BMC Prim Care. Sep 4, 2023;24(1):174. [CrossRef] [Medline]
- Miller S, Gilbert S, Virani V, Wicks P. Patients’ utilization and perception of an artificial intelligence-based symptom assessment and advice technology in a British primary care waiting room: exploratory pilot study. JMIR Hum Factors. Jul 10, 2020;7(3):e19713. [CrossRef] [Medline]
- Upshaw TL, Craig-Neil A, Macklin J, et al. Priorities for artificial intelligence applications in primary care: a Canadian deliberative dialogue with patients, providers, and health system leaders. J Am Board Fam Med. Apr 3, 2023;36(2):210-220. [CrossRef]
- Schütze D, Holtz S, Neff MC, et al. Requirements analysis for an AI-based clinical decision support system for general practitioners: a user-centered design process. BMC Med Inform Decis Mak. Jul 31, 2023;23(1):144. [CrossRef] [Medline]
- Navarro DF, Kocaballi AB, Dras M, Berkovsky S. Collaboration, not confrontation: understanding general practitioners’ attitudes towards natural language and text automation in clinical practice. ACM Trans Comput-Hum Interact. Apr 30, 2023;30(2):1-34. [CrossRef]
- Allen MR, Webb S, Mandvi A, Frieden M, Tai-Seale M, Kallenberg G. Navigating the doctor-patient-AI relationship - a mixed-methods study of physician attitudes toward artificial intelligence in primary care. BMC Prim Care. Jan 27, 2024;25(1):42. [CrossRef] [Medline]
- Sola D, Borioli GS, Quaglia R. Predicting GPs’ engagement with artificial intelligence. Br J Healthcare Manage. Mar 2, 2018;24(3):134-140. [CrossRef]
- Başer A, Altuntaş SB, Kolcu G, Özceylan G. Artificial intelligence anxiety of family physicians in Turkey. Prog Nutr. 2021;23(S2):2021275. URL: https://www.mattioli1885journals.com/index.php/progressinnutrition/article/view/12003 [Accessed 2025-08-07]
- Position statement PRICOV endorsed by Equip—Moving forward after the COVID-19 pandemic: lessons learned in primary care. Wonca Europe. URL: https://www.woncaeurope.org/kb/position-statement-pricov-endorsed-by-equip [Accessed 2025-07-12]
- Wonca Europe future plan 2023-2027. Wonca Europe. URL: https://www.woncaeurope.org/page/wonca-europe-future-plan-2023-2027 [Accessed 2025-07-26]
- Sutton RA, Sharma P. Overcoming barriers to implementation of artificial intelligence in gastroenterology. Best Pract Res Clin Gastroenterol. Jun 2021;52-53:101732. [CrossRef]
- Strohm L, Hehakaya C, Ranschaert ER, Boon WPC, Moors EHM. Implementation of artificial intelligence (AI) applications in radiology: hindering and facilitating factors. Eur Radiol. Oct 2020;30(10):5525-5532. [CrossRef] [Medline]
- Lambert SI, Madi M, Sopka S, et al. An integrative review on the acceptance of artificial intelligence among healthcare professionals in hospitals. NPJ Digit Med. Jun 10, 2023;6(1):111. [CrossRef] [Medline]
- Meskó B, Kristóf T, Dhunnoo P, Árvai N, Katonai G. Exploring the need for medical futures studies: insights from a scoping review of health care foresight. J Med Internet Res. Oct 9, 2024;26:e57148. [CrossRef] [Medline]
Abbreviations
AI: artificial intelligence
ECG: electrocardiogram
EHR: electronic health record
GP: general practitioner
MeSH: Medical Subject Headings
PHC: primary health care
PRISMA-ScR: Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews
Edited by Andrew Coristine; submitted 30.08.24; peer-reviewed by Jacqueline Kueper, Jesse Jansen; final revised version received 25.06.25; accepted 25.06.25; published 15.08.25.
Copyright © Gellert Katonai, Nora Arvai, Bertalan Mesko. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 15.08.2025.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research (ISSN 1438-8871), is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.