Published in Vol 24, No 11 (2022): November

The Impact of Artificial Intelligence on Health Equity in Oncology: Scoping Review



1Schulich School of Medicine and Dentistry, Western University, London, ON, Canada

2Rotman Institute of Philosophy, Western University, London, ON, Canada

3Department of Pathology & Laboratory Medicine, Schulich School of Medicine, Western University, London, ON, Canada

4Library Services, London Health Sciences Centre, London, ON, Canada

5Division of Clinical Public Health, Dalla Lana School of Public Health, University of Toronto, Toronto, ON, Canada

6Bridgepoint Collaboratory for Research and Innovation, Lunenfeld Tanenbaum Research Institute, Sinai Health System, Toronto, ON, Canada

7Division of Cancer Care and Epidemiology, Department of Oncology, Queen's University, Kingston, ON, Canada

8Division of Cancer Care and Epidemiology, Department of Public Health Sciences, Queen's University, Kingston, ON, Canada

9Faculty of Information and Media Studies, Western University, London, ON, Canada

10Division of Hematology, Schulich School of Medicine and Dentistry, Western University, London, ON, Canada

11Division of Hematology, Department of Medicine, London Health Sciences Centre, London, ON, Canada

Corresponding Author:

Benjamin Chin-Yee, MA, MD

Division of Hematology

Department of Medicine

London Health Sciences Centre

800 Commissioners Rd E

London, ON, N6A 5W9


Phone: 1 519 685 8475


Background: The field of oncology is at the forefront of advances in artificial intelligence (AI) in health care, providing an opportunity to examine the early integration of these technologies in clinical research and patient care. Hope that AI will revolutionize health care delivery and improve clinical outcomes has been accompanied by concerns about the impact of these technologies on health equity.

Objective: We aimed to conduct a scoping review of the literature to address the question, “What are the current and potential impacts of AI technologies on health equity in oncology?”

Methods: Following PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews) guidelines for scoping reviews, we systematically searched MEDLINE and Embase electronic databases from January 2000 to August 2021 for records engaging with key concepts of AI, health equity, and oncology. We included all English-language articles that engaged with the 3 key concepts. Articles were analyzed qualitatively for themes pertaining to the influence of AI on health equity in oncology.

Results: Of the 14,011 records identified, 133 (0.95%) were included in our review. We identified 3 general themes in the literature: the use of AI to reduce health care disparities (58/133, 43.6%), concerns surrounding AI technologies and bias (16/133, 12%), and the use of AI to examine biological and social determinants of health (55/133, 41.4%). A total of 3% (4/133) of articles engaged with multiple themes.

Conclusions: Our scoping review revealed 3 main themes on the impact of AI on health equity in oncology, which relate to AI’s ability to help address health disparities, its potential to mitigate or exacerbate bias, and its capability to help elucidate determinants of health. Gaps in the literature included a lack of discussion of ethical challenges with the application of AI technologies in low- and middle-income countries, lack of discussion of problems of bias in AI algorithms, and a lack of justification for the use of AI technologies over traditional statistical methods to address specific research questions in oncology. Our review highlights a need to address these gaps to ensure a more equitable integration of AI in cancer research and clinical practice. The limitations of our study include its exploratory nature, its focus on oncology as opposed to all health care sectors, and its analysis of solely English-language articles.

J Med Internet Res 2022;24(11):e39748




Artificial intelligence (AI), a field that aims to create computers that can achieve human-like understanding and perform tasks normally associated with human intelligence, is finding increasing applications in health care and public health [1,2]. Machine learning (ML) is a form of AI that involves algorithms that draw on big data—data sets whose size goes beyond the capabilities of standard data analysis software—to learn to make predictions [3]. Oncology has been the focus of significant AI research and development and serves as an important area to observe and assess the early integration of AI in health care [4]. AI applications in oncology are expanding to cover a wide range of uses, from pathology and diagnostic imaging to clinical risk prediction and treatment planning for several types of cancer [5-7].

Despite its promise, the use of AI in health care raises several ethical issues, most notably concerns over bias and the potential for AI systems to adversely impact health equity. Health equity has been defined as “the absence of systematic disparities in health between groups with different levels of underlying social advantage/disadvantage” [8]. Studies have demonstrated how the use of biased data sets in training ML algorithms can exacerbate health inequities [9-11]. For example, Obermeyer et al [11] revealed how an ML algorithm trained to predict health risk consistently underestimated the health needs of Black patients because of the use of health care cost as a proxy for health. However, others have argued that AI systems can help illuminate health inequities and, if used correctly, may help address existing disparities [12-15]; for example, AI has been used to analyze search engine results from 54 African nations to guide resource allocation and improve access to care [15]. It is perhaps unsurprising, then, that a recent report from the Wellcome Trust on the ethical, social, and political challenges of AI in health care was unable to reach a clear consensus on the impact of AI on health equity [16]. Moreover, despite cancer being a major focus of AI research and development, the impact of AI on health equity in oncology remains underexplored. There is a growing literature characterizing the problems of health disparities in oncology, which range from issues of access to high-quality care and research to structural barriers in health promotion and the lack of awareness of existing health inequities [17]. Given the expanding use of AI in oncology, there is an urgent need to assess the interplay between AI technologies and health equity in oncology to better understand the social and ethical dimensions surrounding the integration of AI.


This scoping review of the literature aimed to address the question, “What are the current and potential impacts of AI applications on health equity in oncology?” We analyzed the literature on contemporary AI applications in oncology with a focus on implications for health equity to identify recurring themes as well as important gaps and areas for future research.


Our scoping review protocol followed previously established methods [18] with reporting in accordance with the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews) framework [19].

Search Strategy

We used a sensitive search strategy to identify a representative sample of the available literature on the influence of AI on health equity in oncology. The strategy combined controlled vocabularies, such as Medical Subject Headings in MEDLINE and EMTree descriptors in Embase, with free-text terms using alternative word spellings and endings for the 3 core concepts: AI (algorithm, machine learning, artificial intelligence, deep learning, and convolutional neural networks), equity (health equality, health inequality, health disparity, and socioeconomic factors), and oncology (neoplasm, cancer, squamous, and metaplasia). The comprehensive search strategy was developed by a clinical librarian (AI) with experience in conducting electronic literature searches, based on recommendations from the review authors (PI, ALL, and BCY). We searched both databases (MEDLINE and Embase, via the OVID platform) from January 2000 to August 2021; a preliminary search was performed on December 4, 2020. A detailed description of our search strategy is provided in Multimedia Appendix 1.

Eligibility Criteria and Article Screening

In addition to the database searches, the web-based search engine Google Scholar was used to identify potentially relevant studies not indexed in bibliographic databases. The bibliographies of all relevant retrieved articles were also examined to identify further relevant studies. To capture the breadth of the literature on AI and health equity in oncology, we did not impose limits based on study type and included clinical studies—that is, studies in which AI was applied and evaluated for a specific clinical intervention, whether diagnostic, prognostic, screening, or treatment planning—as well as commentaries and opinion articles. Limits were imposed to include English-language articles only, as English was the main language of proficiency of the research team, allowing for a detailed and critical examination of the selected articles. All identified records from the electronic search were imported into Covidence systematic review software (Veritas Health Innovation) for further analysis and screening.

After duplicate records were removed, 2 reviewers (PI and WSL) independently screened the titles and abstracts of the records using inclusion and exclusion criteria defined a priori: records were selected during title and abstract screening if they mentioned the core concepts (AI, health equity, and oncology) or related terms. Abstracts were excluded if they did not meet the inclusion criteria or if they involved nonhuman participants. All conflicts were resolved by a third reviewer (BCY). The list of selected abstracts was then reassessed by all 3 reviewers (PI, WSL, and BCY): records with unanimous consensus advanced to full-text review, whereas those that did not engage with the 3 key concepts were excluded, and conflicts were resolved through discussion among the reviewers. Full-text review was then conducted by all 3 authors, applying the eligibility criteria.

Data Extraction and Analysis

Data extraction and analysis involved both descriptive and qualitative components. Descriptively, we extracted data on the year of publication, country of affiliation of the senior author, type of institution of affiliation of the senior author, type of study, type of AI, cancer type, and, when available, the cost of the proposed technology. The country of affiliation of the senior author was classified as high income, middle income, or low income following the most recent United Nations classification [20]. Qualitatively, we analyzed articles for emerging themes related to health equity in oncology, inherent assumptions, and gaps in the literature. Thematic analysis followed the steps outlined by Braun and Clarke, which have been widely applied in scoping reviews of qualitative research, including in health care [21], to generate a comprehensive thematic representation of a given area of research [22]. This process involved familiarization with the data set of included articles; generation of initial codes; collation of codes into provisional themes; review of themes in relation to the initial codes and the entire data set; and definition and naming of each theme to generate a comprehensive representation of the data. Steps of data familiarization and initial coding were performed independently by 3 reviewers (PI, WSL, and BCY); steps of collation, review of themes, and definition and naming were performed through discussion between study coauthors. Articles with insufficient engagement with the 3 key concepts (ie, those that mentioned issues of bias or equity but did not elaborate on specific issues arising from AI) or with insufficient links between the core concepts (ie, those that mentioned all 3 core concepts but did not further explore their relationships) were excluded.

Selection and Characteristics of Sources of Evidence

Our search yielded 14,011 records. After removing duplicates, 10,468 records were screened, and 133 articles met the inclusion criteria [4,23-154] (Figure 1). All the records included in our review were published between 2010 and 2021, with the majority (124/133, 93.2%) published in or after 2018 (Table 1). Although a range of countries, based on the affiliation of the senior author, were represented in our review (Figure 2), most were from the United States (90/133, 67.7%). The majority were from academic centers (121/133, 91%), with a minority from government, nonprofit organizations, and industry (Table 1). Approximately half of the records involved clinical studies (68/133, 51.1%), whereas the rest were epidemiological studies, commentaries, surveys, and interviews. Most of the records drew on ML techniques to address their research question: 12.8% (17/133) of records discussed AI in general; 30.8% (41/133) did not specify the type of ML used or used multiple ML algorithms; 47.4% (63/133) used supervised ML algorithms; and smaller subsets used unsupervised ML (4/133, 3%), natural language processing (6/133, 4.5%), and reinforcement learning (2/133, 1.5%). AI was used for a wide range of applications and often a combination of applications, including epidemiological (28/133, 21.1%), diagnostic (25/133, 18.8%), prognostic (25/133, 18.8%), and screening (25/133, 18.8%; Table 1).

Figure 1. PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews) flow diagram for the identification of studies via databases and registers. AI: artificial intelligence.
Table 1. Characteristics of studies included in the scoping review (n=133).
Study characteristics: Studies, n (%)

Year of publication (a)
  2000-2017: 9 (7.5)
  2018: 13 (9.8)
  2019: 21 (15.8)
  2020: 48 (36.1)
  2021: 42 (31.6)

Type of study
  Clinical: 62 (46.6)
  Epidemiological: 40 (30)
  Review: 15 (11.3)
  Commentary: 11 (8.3)
  Survey and interviews: 5 (3.8)

Institution type (b)
  Academic: 121 (91)
  Governmental and nongovernmental organizations: 12 (9)

Type of artificial intelligence application (c)
  Screening: 41 (30.8)
  Diagnostic: 41 (30.8)
  Therapeutic: 15 (11.3)
  Prognostic: 45 (33.8)
  Epidemiological: 45 (33.8)

Type of cancer
  General: 28 (21.1)
  Gynecologic: 19 (14.3)
  Breast: 16 (12)
  Oral: 12 (9)
  Prostate: 12 (9)
  Skin: 12 (9)
  Lung: 8 (6)
  Hematologic: 6 (4.5)
  Brain: 4 (3)
  Liver: 4 (3)
  Colorectal: 3 (2.3)
  Esophageal: 3 (2.3)
  Head and neck: 2 (1.5)
  Pancreatic: 2 (1.5)
  Gastrointestinal: 1 (0.8)
  Thyroid: 1 (0.8)

(b) On the basis of affiliation of the senior author.

(c) Total numbers exceed 133 due to 26 articles falling into multiple categories.

Figure 2. Country of affiliation of senior author (map created with MapChart).

AI Applications in Specific Cancer Types

Studies from our review investigated a wide range of cancers, with general oncological applications being the dominant category (28/133, 21.1%), followed by gynecologic (19/133, 14.3%), breast (16/133, 12%), oral (12/133, 9%), prostate (12/133, 9%), and dermatologic cancers (12/133, 9%). Among the articles on gynecologic cancers, 84% (16/19) were categorized under theme 1, discussing the use of AI technologies to address disparities in gynecologic cancer screening (11/16, 69%) [23,84-93], diagnosis (4/16, 25%) [94-97], and treatment (1/16, 6%) [98]. Of the 16 articles, 15 (94%) developed AI technologies to target gynecologic cancer disparities in low- and middle-income countries (LMICs) [84-98], while 1 (6%) did so for implementation in high-income countries (HICs) [23]. The other 3 (3/19, 16%) articles fell under theme 3, discussing the use of AI to explore the genetic (1/3, 33%) [99] and social (2/3, 67%) determinants of health outcomes in gynecologic cancers [100,101]. Most of these articles were clinical studies (14/19, 74%) [84-89,93-95,97-101], 16% (3/19) were commentaries [23,90,91], 5% (1/19) was an epidemiological study [96], and 5% (1/19) was a review [92].

Articles examining breast cancer have discussed a broader range of themes relating to health equity. Of the 16 articles, 6 (38%) focused on theme 1 [24,102-106], with all 6 looking at the applications of AI in LMICs. Of the 16 articles, 2 (13%) fell under theme 2: one discussed the use of AI to mitigate bias [107], whereas the other raised the issue of how AI might exacerbate and mitigate biases in breast cancer diagnoses [108]. Of the 16 articles, 7 (44%) fell under theme 3, with 6 (86%) examining the link between social determinants [109-114] and 1 (14%) examining the link between genetic determinants of health and breast cancer [115]. Of the 16 articles, 1 (6%) fell under multiple themes [116]. In addition to touching on a wider variety of themes than gynecologic cancers, articles examining breast cancer were also more varied: 44% (7/16) were clinical studies [24,102,103,110,112-114], 25% (4/16) were epidemiological studies [104,109,111,115], 25% (4/16) were reviews [106-108,116], and 6% (1/16) was a commentary [105].

Critical Appraisal Within Sources of Evidence

We identified 3 main themes related to the impact of AI on health equity in oncology: (1) the development of AI technologies to reduce health disparities faced by populations in both LMICs and HICs; (2) the concern that biased AI algorithms might exacerbate health inequities counterposed by the hope that AI technologies might help overcome human biases; and (3) the power of AI to uncover biological and social determinants of health in oncology. Themes were further broken down into subthemes, where applicable. A full list of the articles categorized by theme can be found in Multimedia Appendices 2-5.

AI and Health Disparities


The most prominent theme in our analysis, based on the number of records, was the development of AI technologies to address health disparities in oncology (58/133, 43.6%). This included the use of AI to address disparities in access to screening, diagnostic, and therapeutic technologies for underserved populations in LMICs (53/133, 39.8%) and minority populations in HICs (3/133, 2.3%). Of 133 studies, 2 (1.5%) used AI to address disparities in both LMICs and HICs. A total of 16 articles on this theme were commentaries or reviews discussing multiple applications in cancer care. Of the 58 articles, 17 (29%) were described as pilot studies. We further divided this theme into several subthemes based on the type of AI technology, including AI applications to analyze genomic, histological, radiographic, imaging, and demographic data.

Using AI to Address Disparities in Cancer Screening and Diagnosis

The literature under this theme highlighted how technologies could improve the delivery of health care to disadvantaged populations in both LMICs and HICs. In LMICs, these technologies were aimed at rectifying 2 main problems: addressing health care personnel shortages, thereby reducing the bottleneck effect created by a low ratio of health care professionals to the populations they serve, and overcoming constraints resulting from limited medical equipment [117]. For example, point-of-care and smartphone-based technologies for oral cancer screening in low-resource settings aim to address the bottleneck effect created by a low number of health care professionals [118]. One example of AI technology aimed at addressing constraints from limited medical equipment is a mobile-based oral cancer image analysis software for use in rural India [119]. In the absence of a stable internet connection, the AI algorithm can analyze images directly on a smartphone; the images are then uploaded to a cloud server and assessed by a remote specialist when internet access is available. AI applications to address health disparities in oncology in HICs were a less explored topic, with some articles discussing algorithms to selectively target disadvantaged populations [120,121]. For instance, given the high prevalence of oral cancer in South Asian populations [155], 1 study used ML to develop a quantitative cytology program to selectively improve oral cancer screening among South Asians living in British Columbia, Canada [120].

The development of AI aimed at reducing health disparities drew on a range of data, from genomics and imaging to demographic data, all aimed at reducing demands on underresourced health care systems and improving the available medical equipment. One example is an AI image analysis algorithm for breast cancer detection that improves screening in underserved and low-resource settings by applying deep learning to novel ultrasound techniques [105,106,114]. Finally, AI has also been applied to address disparities in access to diagnostic pathology; these included examples such as decision support systems to assist with histopathological diagnosis of brain tumors in resource-poor settings [122] and image analysis of cervical lesions [97].

Studies have reported a range of outcomes, with screening and diagnostic technologies showing wide variation in sensitivity (75%-100%), specificity (71%-100%), and accuracy (61%-100%). Most studies on this topic (43/58, 74%) offered no comparison between the performance of the proposed technology and the existing standard of care. When AI algorithms were directly compared with the standard of care, the results varied: most articles noted no difference between AI algorithms and the standard of care [93,97,123-125], whereas others observed that the accuracy of AI algorithms was lower than that of human physicians [85,126]. One study noted that AI outperformed its human counterparts, detecting and staging prostate cancer in a greater number of patients [117].
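For reference, the sensitivity, specificity, and accuracy figures reported above are derived from a confusion matrix comparing an algorithm's output with a reference standard. The following is a minimal sketch; the counts and the `screening_metrics` helper are illustrative and not drawn from any reviewed study:

```python
def screening_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Compute the metrics commonly reported for screening and
    diagnostic AI tools, from confusion-matrix counts (true/false
    positives and negatives against the reference standard)."""
    return {
        "sensitivity": tp / (tp + fn),            # true cases detected
        "specificity": tn / (tn + fp),            # non-cases ruled out
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }

# Hypothetical counts for a screening tool evaluated on 200 patients:
m = screening_metrics(tp=90, fp=20, tn=80, fn=10)
print(m)  # sensitivity 0.9, specificity 0.8, accuracy 0.85
```

Because each metric has a different denominator, a tool can report high accuracy while still missing many true cases, which is one reason the reviewed studies' ranges are so wide.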

Gaps and Challenges With Using AI to Address Health Disparities

Although several articles have highlighted how AI might help address health care shortages in LMICs, a recurrent problem noted in the literature is the lack of consideration for the infrastructure and human resources necessary to implement these AI technologies. To support the use of digital technologies, and specifically AI, LMICs require both health care providers trained to use specific technologies and sufficient technological infrastructure, including buildings to house hardware and cabling to carry digital signals, enabling widespread and stable internet access; in other words, the performance of AI algorithms is intertwined with sociotechnical factors [156,157]. Although HICs may have existing technological infrastructure to implement AI technologies more readily, LMICs often lack such infrastructure [158]. Considerations such as the cost of implementation, the need for maintenance and ongoing support once implemented, the need for trained personnel to use AI technologies, and the need for technological support to allow for the integration of the developed AI technologies were rarely discussed by articles in our review. Only select articles mentioned the lack of infrastructural considerations in the development of AI technologies [87,117,119,127,128]. For example, Anirvan et al [129] noted that, “while in developed countries with a well-equipped health care model in place this may not be a problem, in poor, rural, and resource-constrained settings, it may aggravate the burdened health care system in place.”

In addition, our review identified equity issues related to the cost of AI technologies; such technologies can be costly and may not be affordable in many LMICs under existing economic circumstances. Love et al [102] developed an AI device to triage breast lumps in low-resource settings but noted that although “the device used in this study is more expensive than most LMICs settings can afford, lower cost devices are becoming more available.” However, others developed technologies that may be more affordable for LMICs, such as a gene expression assay costing US $450 that can assess samples for only US $10 each [130].

To ensure that AI technologies designed in HICs can be effectively applied in LMICs, collaboration between these 2 settings is invaluable. Of the 42 studies conducted in LMICs, 11 (26%) were led by research groups from the LMICs in question; of the remaining 31 records, 27 involved collaboration with coauthors from the specific LMIC. When such collaboration occurred, AI technologies were primarily designed in HICs and implemented in LMICs. This divide between the location of development and the location of implementation of AI in global oncology can pose a barrier to integration in LMICs due to costs [102] and infrastructural considerations [88], suggesting a need for greater attention to co-design, which refers to the involvement of end users in the design process of AI technologies [159]. Moreover, it is important to recognize that the inclusion of researchers from LMICs in the design of AI technologies alone does not guarantee widespread improvements in health for patients in these countries. Rather, benefits are often limited to select partner sites of HICs; therefore, while these technologies may help address global disparities, they may exacerbate inequities within LMICs [130]. To ensure a more equitable distribution of benefits within LMICs, research should extend beyond specific partner institutions, engaging additional stakeholders from relevant governmental and nongovernmental organizations to evaluate and implement technologies. However, our review found only limited involvement of nonacademic institutions in the included articles.

One additional problem that has been raised surrounding the use of AI in LMICs is the issue of data colonialism [160], a practice in which data are extracted from LMICs by institutions in HICs for the purposes of building algorithms whose benefits accrue primarily to stakeholders in HICs [161]. Although articles from our review did not engage directly with these issues, some did discuss important considerations for what collaboration means between HICs and LMICs [85,97,130,131]. However, there was limited acknowledgment of ethical issues arising from the involvement of LMICs as mere resources for data extraction and algorithmic training or as an exploratory ground for novel applications of AI technologies for global health.

AI and Bias


The second theme identified in our review relates to the issue of bias. Bias in AI is a widely discussed topic and has the potential to exacerbate health disparities across different populations; although bias is an inherent feature of all AI systems, the main types of bias of ethical concern are those arising in algorithmic development or data sets [162] that can result in individuals being treated unfairly based on particular characteristics [163]. In a similar vein, 1 article in our review distinguished between desirable and undesirable biases: desirable biases take group data into consideration to account for base-rate differences, whereas undesirable biases are built on inaccurate or incomplete data, which in turn leads to group discrimination [132]. For instance, overall melanoma rates are higher in men than in women [164]; a desirable bias would therefore purposefully construct a training sample for a melanoma detection algorithm with more men than women, reflecting the base rates of melanoma incidence. The authors suggest the use and integration of desirable biases to promote gender equity in health care while decreasing undesirable biases.
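The base-rate-matched sampling behind a “desirable bias” can be sketched as stratified sampling whose strata fractions mirror known incidence rates rather than the composition of the available data. This is a minimal illustration with hypothetical records and a made-up `base_rate_sample` helper, not a method from any reviewed study:

```python
import random

def base_rate_sample(records, group_key, base_rates, n, seed=0):
    """Draw a training sample whose group composition matches known
    base rates (a "desirable bias") instead of the composition of the
    available data.

    records    : list of dicts, each carrying a demographic field group_key
    base_rates : dict mapping group value -> target fraction (sums to 1)
    n          : total sample size
    """
    rng = random.Random(seed)
    by_group = {}
    for r in records:
        by_group.setdefault(r[group_key], []).append(r)
    sample = []
    for group, frac in base_rates.items():
        k = round(n * frac)
        pool = by_group.get(group, [])
        # Sample without replacement; cap at the pool size if data are scarce.
        sample.extend(rng.sample(pool, min(k, len(pool))))
    return sample

# Hypothetical example: melanoma incidence is higher in men, so the
# training sample is deliberately weighted toward men.
records = [{"sex": "M", "id": i} for i in range(500)] + \
          [{"sex": "F", "id": i} for i in range(500)]
sample = base_rate_sample(records, "sex", {"M": 0.6, "F": 0.4}, n=100)
print(sum(r["sex"] == "M" for r in sample))  # 60
```

The same mechanism produces an undesirable bias if the target fractions come from an unrepresentative data set rather than from epidemiological base rates.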

With rising concerns surrounding bias in AI [9-11] and, conversely, the hope that AI algorithms may help mitigate bias in human judgment [12,13,15], we expected to see a much larger number of articles discussing this issue; however, only 12% (16/133) of articles directly engaged with the theme of bias. These articles fell into 2 main categories: those that explored how AI algorithms might help mitigate biased judgments in physicians’ clinical practice (5/133, 3.8%) and those that argued that AI trained on biased data sets can exacerbate existing inequities (10/133, 7.5%), while 1 article (1/133, 0.8%) engaged with both subthemes.

The Use of AI to Uncover Bias in Clinical Practice

The use of AI technologies in health care can uncover biases in both data sets and physicians’ actions. For instance, head and neck cancers may develop spontaneously or in association with human papillomavirus (HPV), and characterization of such cancers as HPV-associated can affect treatment decisions [165]. Patients diagnosed with HPV-positive versus HPV-negative head and neck cancers have different demographic features, with younger individuals and individuals with more sexual partners being overrepresented in the HPV-positive group [166]. D’Souza et al [133] thus used AI to assess the use of clinical and demographic characteristics as diagnostic predictors of HPV-positive and HPV-negative head and neck cancers. However, these authors noted that clinical and demographic characteristics had only moderate accuracy in predicting HPV status, leading to a potential bias in treatment if these variables were used to predict HPV status without further investigation. In addition, AI can be used to uncover the biases found in data sets. Howard et al [134] deployed a deep learning model to assess institutional biases in data submitted to The Cancer Genome Atlas. They noted that biased digital histological signatures can stem from specific features of the institutions from which the data originate. AI algorithms may then provide prognostic information based on these institution-specific signatures rather than on the intrinsic histology of the sample.

The Use of AI to Mitigate Bias in Clinical Practice

We also identified articles that discussed the use of AI to mitigate bias in clinician decision-making. In criticizing the Fitzpatrick scale in dermatology, Okoji et al [135] argued that AI-based approaches might lead to a more objective classification system for skin typing. AI systems can identify subtle variations that are not visible to the human eye, thereby leading to more equitable dermatological assessments. However, a major caveat was the lack of discussion surrounding the populations used to train these AI algorithms in dermatology. For instance, several studies included predominantly White populations or did not specify the racial and ethnic makeup of the population used to develop their algorithms [124,136,167]. Only 1 article in our review specifically addressed this problem: to counterbalance the skewed nature of dermatologic data available for AI training, Pangti et al [137] deliberately selected patient populations to train an AI algorithm to detect skin diseases using locally generated data from India. As medical AI systems are prone to generating biased results that lead to disparities between ethnic groups, some authors proposed that stratification for minority communities that suffer from underrepresentation in training data sets could help rectify this bias [108]. Instead of a one-size-fits-all model, AI programs can be developed to target specific subpopulations. For instance, Gao and Cui [138] suggested the use of transfer learning, an AI training technique whereby knowledge gained from training an AI system on a larger data set (eg, a majority ethnic group) is transferred and applied to a smaller data set (eg, a minority ethnic group). This technique attempts to compensate for missing data from “data-disadvantaged ethnic groups by leveraging knowledge learned from other groups with more abundant data” [138]. Yet, as the authors note, data inequality remains a central issue in training ML algorithms in multiethnic populations, and differential accuracy in performance between ethnic groups is an ongoing challenge.

Biased Data Sets and Biased AI

The final category in this theme was articles discussing the use of biased data sets to train AI algorithms; surprisingly, few articles discussed this topic. For instance, Khor et al [139] used a data set with racial demographics of 53% non-Hispanic White, 22% Hispanic, and 13% Black or African American to develop a recurrence risk prediction model for adults with prostate cancer. Even with the explicit inclusion of race, they noted that the model had “worse performance in minority subgroups compared to NHW [non-Hispanic White].” Conversely, others argued that bias in training data sets of AI algorithms may not always result in decreased generalizability; for example, Gilson et al [140] suggested that biased gender representation in training data sets did not lead to decreased generalizability in an algorithm to predict survival in non–small cell lung cancer.

Gaps in the Discussion of AI and Bias

Overall, engagement with issues of bias resulting from the use of AI in oncology was limited, an unexpected finding, given that this concern is widely discussed elsewhere in the literature on AI ethics and may act as a mechanism through which AI systems exacerbate health inequities. Our findings suggest that bias remains an underexplored topic in the literature on AI in oncology. It is also worth noting that the few articles that mentioned bias often did so briefly in their limitations section, usually in reference to how biased data sets might impact the validity and generalizability of AI algorithms but without further engagement with how these issues might be mitigated or addressed by future research.

AI and Determinants of Health Outcomes


The final theme identified in our review was the use of AI to investigate the determinants of health outcomes in oncology. A total of 55 of the 133 articles (41.4%) fell under this theme and were divided into subthemes based on the determinants of health examined, ranging from biological variables (9/133, 6.8%) to social determinants of health (43/133, 32.3%), whereas 3 articles (2.3%) addressed both. This category can be understood as the use of AI as an extension of traditional statistical models in clinical and epidemiological research in oncology.

AI and Biological Determinants of Health

Several articles under this theme applied AI to genomic data to predict outcomes in patients with cancer. For instance, Li et al [141] applied AI to genomic analysis across 3 racial groups to identify the impact of differential gene expression on racial disparities in cancer prevalence. They found differential gene expression in several cancers between racial groups, which they interpreted as supporting a genetic basis for racial differences in cancer prevalence.

AI and Social Determinants of Health

Although several studies have similarly applied AI in a reductionist manner, for example, to look for a genetic basis of health disparities [115,142], others have used AI to examine additional individual, environmental, and societal factors contributing to differential health outcomes between populations. Several articles in our review applied AI to shed light on the influence of race and socioeconomic status on health outcomes in oncology. For example, An et al [143] used an ML algorithm to examine the risk factors for the development of hepatocellular carcinoma in a Korean cohort, noting that higher income is associated with a lower risk of developing hepatocellular carcinoma. Bibault et al [144] applied AI to satellite imagery to investigate the relationship between socioeconomic status and cancer prevalence, observing that “satellite features are highly correlated with individual socioeconomic and health measures that are linked to cancer prevalence.” Several studies have suggested that applying AI to demographic data could help provide more comprehensive risk stratification models in oncology [112,168,169].

AI has also been used to identify racial disparities in cancer outcomes. Tossas et al [101] used AI to predict populations at risk of delayed diagnosis of cervical cancer. They noted that more than half of the patients with a late cancer diagnosis were African American, findings that they argue can be used to target cervical cancer screening. Others have used AI to examine outcomes following neurosurgery for brain tumors, noting that minority race is an independent risk factor for an extended length of stay and increased cost [145,146].

AI has also been applied to examine the influence of rural and urban residence on cancer prevalence and outcomes. Rural residence is known to influence access to cancer treatment, with novel therapies often concentrated in academic centers located in urban settings [170]. The impact of rural residence on cancer outcomes was investigated by Zhong et al [112], who used AI to create personalized prognostication models for early invasive breast cancer in a Chinese cohort. By incorporating residential status in their algorithm, the group found that despite lower rates of breast cancer in rural populations, the associated mortality risk was significantly higher. Aghdam et al [147] used an AI algorithm to study access to stereotactic body radiation therapy for prostate cancer and noted that travel distance did not prevent access for rural patients, suggesting that income and race may be more important determinants of access to treatment.

Gaps in Using AI to Investigate Determinants of Health

For most studies in our review, there was a lack of justification for the use of AI and, more specifically, a lack of discussion as to why particular AI algorithms were chosen and what advantages they offered over other statistical methods for a given research question. AI algorithms are undeniably powerful tools for analyzing large amounts of data, and several articles highlighted the benefits of AI over other statistical methods [143,144,167,168]. However, others have argued that the use of AI has not yielded better risk prediction models compared with traditional statistical methods [169]. In their review of the efficacy of AI as opposed to traditional statistics in medicine, Rajula et al [145] noted that traditional statistics seemed more useful when the number of participants significantly outweighed the number of variables in question, whereas AI is more suitable in fields with large quantities of data, such as omics or radiodiagnostics. In light of this discussion, the use of AI to address specific research questions in oncology should be explicitly justified.

Principal Findings

In this review, we evaluated the literature on the impact of AI on health equity in oncology. We identified 14,011 records in our search, of which 133 (0.95%) engaged substantially with the core concepts of AI, health equity, and oncology. Our literature review revealed three main themes related to how AI technologies can (1) help address health disparities, (2) mitigate or exacerbate biased decision-making, and (3) elucidate the biological and social determinants of cancer outcomes. These themes relate to several issues discussed in the literature on AI and health equity in oncology and health care.

The first main theme noted in our review is how AI technologies can help address health disparities, both in LMICs and HICs. Previous scholarship examining the application of AI in global oncology has shed light on numerous practical and ethical challenges [171]. The "digital divide," often cited as a key barrier to the implementation of AI technologies in global health, refers to the inequitable distribution of the digital resources, such as computational power, technical infrastructure, and data storage, required to use AI technologies [171]. Without prioritizing investment in basic infrastructure, such as appropriate hardware to run AI programs, buildings where such hardware can be housed, and cables to carry digital signals, the utility of these technologies in the global health context should be questioned [172,173]. A number of articles identified in our review engaged with these concerns, with some researchers creating technologies with the infrastructural capacities of specific LMICs in mind and others highlighting the need for additional infrastructure to support the technology they developed [87,102,129,130,148].

Another barrier to the implementation of AI technologies in LMICs discussed in the literature is the lack of generalizability of algorithms primarily designed in HICs but applied in LMICs [170]. As some researchers have observed, data used for training AI algorithms in HICs are “notorious for their lack of diversity, and concerns have been raised about their applicability even in their home countries” [172]. These data are often skewed toward the populations, diseases, and treatments available in countries training and developing AI technologies, thereby decreasing their generalizability to populations in LMICs. Articles from our review addressed this issue, voicing concerns about the applicability of AI algorithms developed in HICs to LMICs [108,110,134,143,149-151].

Our review also focused on solutions to the challenges posed by the integration of AI technologies in a global health context, the predominant one proposed elsewhere in the literature being greater collaboration between HICs and LMICs in the development of AI technologies [171-173]. AI technologies created without appropriate consultation with the populations they are intended to serve may prove inapplicable, impractical, and unethical. For example, treatment patterns produced by Watson for Oncology, an AI decision support system trained on data and by experts from the Memorial Sloan Kettering Cancer Center, may be inapplicable to many LMICs [173]. In previous studies investigating this issue, some researchers have argued for the co-design of AI technologies, which requires the involvement of end users, specifically marginalized groups, in AI research and development to ensure the equitable distribution of the benefits of these technologies [159,174].

To improve collaboration in global health research, others have proposed that journals publishing research conducted in LMICs have the responsibility of ensuring that at least one author involved in the study is from the countries in question [175]. We observed that this standard was met in most studies conducted in LMICs included in our review (27/31, 87%). However, further steps are required to ensure meaningful collaboration with investigators and stakeholders in LMICs, beyond simple inclusion in authorship, which risks fostering tokenism. As discussed earlier, this is especially important in AI research and development focused on addressing global health inequities in oncology, which needs to engage additional stakeholders beyond select partner sites to ensure fair distribution of benefits throughout populations [130,175]. This lacuna identified by our review reflects a broader lack of global coordination in AI research to set priorities and ensure fair distribution of research opportunities and resources, which is essential to prevent AI research from perpetuating existing global health inequities.

Finally, a balance must be struck between the global dissemination of existing diagnostic and treatment technologies and the development of new technologies for global health. Our review revealed that pilot studies of AI in global oncology are particularly common. Although pilot studies can provide an important starting point, these applications will remain an ineffective means of addressing global health disparities in cancer care unless they are followed by robust evaluations of clinical effectiveness, which occur in only a minority of cases [176]. Moreover, it has been noted that most cancer deaths occurring in LMICs are due to a lack of access to already available and cost-effective diagnostic and treatment strategies, as opposed to the latest cutting-edge technology [177,178]. Exploratory research into novel technologies in global oncology may detract from the need to develop cost-effective ways to disseminate existing evidence-based technologies in cancer care.

The second major theme noted in our review was the use of biased AI algorithms in clinical decision-making, which may impact the quality and accuracy of decisions and consequently lead to adverse health outcomes for patients [179]. One approach identified in our review was the use of AI algorithms to standardize and reduce bias in clinical decision-making in oncology. A high-profile example is Watson for Oncology, an AI decision support system that has been proposed as a method of standardizing clinical decisions. Watson for Oncology uses natural language processing to provide treatment recommendations in oncology based on the latest scientific literature. Select studies have shown high concordance between treatment plans produced by Watson for Oncology and recommendations from multidisciplinary tumor boards [180-182]. Previous criticisms of this technology have pointed out problems with using concordance to assess the capability of AI technologies such as Watson for Oncology: concordance simply measures a system's ability to reproduce specific expert knowledge without evaluating the validity of applying that knowledge in different contexts [183,184]. Treatment recommendations are based on the current literature rather than novel findings produced by the AI system, and preexisting biases found in data sets will be exacerbated rather than mitigated by automation. As Murphy et al [185] note, concerns regarding implicit bias becoming embedded in AI algorithms have been widely voiced. The authors noted that implicit biases often reflect preexisting societal values and may exacerbate existing health inequities for marginalized populations. Moreover, concerns surrounding the lack of transparency in how Watson for Oncology integrates data from heterogeneous sources to arrive at decisions, including the influence of implicit value judgments found in different oncology guidelines, require further attention, specifically regarding how this might affect the application of Watson for Oncology in different global contexts and its effects on health equity.

Despite the pressing nature of these concerns, the paucity of studies on biased AI algorithms in our search was surprising. Many AI applications identified in our study were trained on select data sets from single institutions, creating a high risk of bias, a pressing concern given that algorithmic bias can exacerbate health inequities [140,186]. A prominent cause of bias is the lack of consideration of the different contexts in which an algorithm is developed and subsequently deployed. Academics wary of these concerns have argued that a generalizable AI model should be developed from data reflecting the diversity of the patients to whom it will be applied, yet "most health organizations lack the data infrastructure required to collect the data needed to optimally train these algorithms" [186,187]. Patterns detected when these algorithms are trained on majority groups may result in decreased accuracy when applied to minority groups [188]. For instance, most AI algorithms for diagnosing melanoma are trained on white-skinned individuals and thus may underperform in diagnosing lesions in persons of color [189]. Panch et al [186] note that solutions to these contextual problems involve establishing the appropriate context in which the algorithms will be used. Our literature review identified some proposed solutions, such as the application of transfer learning to improve outcomes for populations with data sparsity; stratification of groups based on race and ethnicity to mitigate bias; and multidisciplinary collaboration among clinicians, engineers, social scientists, and ethicists to aid in the contextual design and development of AI algorithms [108,135,138,190].

The final theme identified in our review was the use of AI to examine determinants of health outcomes in oncology. Social determinants of health such as education, neighborhood, social community, and socioeconomic status impact health outcomes in oncology [191], and the complex interactions between these variables suggest a potential area for AI applications. Several studies in our review applied AI to analyze large volumes of data to help elucidate the social determinants of cancer outcomes. The identification of social determinants of health can help support more comprehensive strategies to improve health equity in underserved populations [192].

However, as noted by several researchers comparing the use of AI with traditional statistical methods to analyze large amounts of data, it is not always clear what benefits the former provides over the latter to investigate the social determinants of health [145]. A systematic review compared the performance of logistic regression and ML in clinical prediction models and found no evidence that ML performs better than logistic regression [193]. Moreover, traditional statistical models are often easier to interpret than complex, multilayered ML models. Trade-offs between accuracy and transparency have been widely discussed in the literature on AI [194,195] and should be considered when deciding the method of analysis for a given research question. Appropriate and sufficient justification for the use of ML models in clinical and epidemiological oncology research is imperative.
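The finding that ML does not necessarily outperform logistic regression can be illustrated with a minimal, hypothetical simulation: when the outcome truly depends on a simple linear combination of a few variables, a standard logistic regression typically matches or exceeds a more flexible model such as a random forest. All data and parameters below are synthetic and purely illustrative; they are not drawn from any study in our review.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)

# Hypothetical cohort: many participants, few variables, and an outcome
# that depends (noisily) on a simple linear combination of predictors.
X = rng.normal(size=(2000, 5))
logit = X @ np.array([1.0, -0.5, 0.8, 0.0, 0.3])
y = (logit + rng.normal(scale=1.0, size=2000) > 0).astype(int)

# Compare cross-validated discrimination (AUC) of the two models.
lr_auc = cross_val_score(LogisticRegression(), X, y,
                         cv=5, scoring="roc_auc").mean()
rf_auc = cross_val_score(RandomForestClassifier(random_state=0), X, y,
                         cv=5, scoring="roc_auc").mean()

print(f"logistic regression AUC: {lr_auc:.3f}")
print(f"random forest AUC:       {rf_auc:.3f}")
```

When the data-generating process is more complex (strong interactions, very high-dimensional omics or imaging data), the balance can shift toward flexible ML models, which is precisely why the choice of method should be justified against the structure and scale of the data at hand.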

Ethical concerns regarding the use of AI to analyze large amounts of health care data have also been raised in the literature. In establishing a research ethics framework for health care ML, McCradden et al [196,197] note how AI can influence 2 phases of health care research: hypothesis generation and hypothesis testing. AI research focused on hypothesis generation applies computational techniques to large data sets to explore models with potential clinical applicability [197]. This type of exploratory research raises important ethical issues, such as the protection of data privacy and tensions between enabling ready access to data and the requirements of informed consent [197]. Most articles from our review under this theme fit into the hypothesis generation phase and used AI for exploratory research on the determinants of health outcomes in oncology. In our review, the discussion of ethical issues in data privacy versus the need to enable ready access to data was sparse, despite the importance of such considerations in exploratory AI research on social determinants of health, which often requires large amounts of personal health information and other sensitive data. Moreover, as previously emphasized by advocates for equity in AI, exploratory AI research also entails an ethical commitment to ensure representative data sets, including minorities and "data-impoverished" groups, to avoid biased and misleading findings [198]. Few articles from our review addressed these ethical concerns [91,128,199-201].

Finally, it is important to note that the use of AI in health care research lends itself to the analysis of quantitative and categorical data, limiting its ability to understand and explain many social and health-related phenomena. The use of race and other contested social categories in AI algorithms often relies on third-party classification in a way that risks misrepresentation [202]. Therefore, although AI may offer insights into the social determinants of health in oncology, such tools do not obviate the need for other methods, including qualitative methods, in cancer research.


Limitations

Our study has several limitations. First, the application of AI in oncology is a rapidly evolving field, and as such, the themes and gaps identified in our scoping review are necessarily provisional. To help mitigate this, we conducted a secondary search 9 months after our initial search, which yielded an additional 949 abstracts, of which 21 (2.2%) met the inclusion criteria. Despite this rapid evolution, our findings provide insights into the current state of the literature on the impact of AI on health equity in oncology and may also provide a lens for the early integration of AI technologies in health care more generally. Second, we focused our search strategy on the field of oncology and contemporary cancer research; while the themes and gaps highlighted may be illustrative of more general health equity issues arising from the integration of novel technologies in health care at large, there are likely additional themes pertaining to other areas of health care not covered by our review. Finally, our search was limited to records written in English; we were unable to include articles published in other languages, which may bias our findings toward research conducted in, and themes prevalent in, the English-speaking world; further work could involve a team of multilingual researchers to shed light on themes from non-English-language research literature.


Conclusions

In conclusion, we conducted a scoping review to characterize and assess the literature on the impact of AI on health equity in oncology. Our analysis identified 3 general themes related to how AI can be used to address health disparities, how bias might be mitigated or exacerbated by AI algorithms, and how AI can help investigate the social determinants of health. Our review also identified several gaps and areas in need of further research. These include fostering greater collaboration between HICs and LMICs in the design of AI technologies, ensuring representation in training data sets, considering the context of algorithmic development and application to mitigate bias, and recognizing ethical and methodological issues arising from the use of AI to investigate the determinants of cancer outcomes. As AI applications in oncology continue to expand, attention to these issues will be critical to prevent harm and ensure equitable distribution of the potential benefits of these technologies.


Acknowledgments

The authors would like to thank AMS Healthcare for funding this project and Dr Luke Stark for his valuable comments on an earlier draft of this manuscript.

Conflicts of Interest

None declared.

Multimedia Appendix 1

Full search strategy.

DOCX File, 15 KB

Multimedia Appendix 2

Theme 1 articles: artificial intelligence to address health disparities.

DOCX File, 23 KB

Multimedia Appendix 3

Theme 2 articles: artificial intelligence and bias.

DOCX File, 16 KB

Multimedia Appendix 4

Theme 3 articles: artificial intelligence and determinants of health.

DOCX File, 23 KB

Multimedia Appendix 5

Multiple-theme articles.

DOCX File, 14 KB

  1. Bianchi P, Labory S. The fourth industrial revolution. In: Industrial Policy for the Manufacturing Revolution. Cheltenham, United Kingdom: Edward Elgar Publishing; Jun 29, 2018.
  2. Chin-Yee B, Upshur R. The impact of artificial intelligence on clinical judgment: a briefing document. AMS Healthcare. 2019.   URL: [accessed 2022-03-06]
  3. Chui M, Manyika J, Bughin J. Big Data's Potential for Businesses. McKinsey Global Institute. 2011.   URL: [accessed 2022-03-05]
  4. Kann BH, Hosny A, Aerts HJ. Artificial intelligence for clinical oncology. Cancer Cell 2021 Jul 12;39(7):916-927 [FREE Full text] [CrossRef] [Medline]
  5. Luchini C, Pea A, Scarpa A. Artificial intelligence in oncology: current applications and future perspectives. Br J Cancer 2022 Jan 26;126(1):4-9 [FREE Full text] [CrossRef] [Medline]
  6. Niazi MK, Parwani AV, Gurcan MN. Digital pathology and artificial intelligence. Lancet Oncol 2019 May;20(5):e253-e261 [FREE Full text] [CrossRef] [Medline]
  7. Dlamini Z, Francies FZ, Hull R, Marima R. Artificial intelligence (AI) and big data in cancer and precision oncology. Comput Struct Biotechnol J 2020;18:2300-2311 [FREE Full text] [CrossRef] [Medline]
  8. Braveman P, Gruskin S. Defining equity in health. J Epidemiol Community Health 2003 Apr;57(4):254-258 [FREE Full text] [CrossRef] [Medline]
  9. O'Neil C. Weapons of Math Destruction How Big Data Increases Inequality and Threatens Democracy. New York, United States: Penguin Books Limited; 2016.
  10. Topol E. Deep Medicine How Artificial Intelligence Can Make Healthcare Human Again. New York, United States: Basic Books; 2019.
  11. Obermeyer Z, Powers B, Vogeli C, Mullainathan S. Dissecting racial bias in an algorithm used to manage the health of populations. Science 2019 Oct 25;366(6464):447-453. [CrossRef] [Medline]
  12. Chen IY, Joshi S, Ghassemi M. Treating health disparities with artificial intelligence. Nat Med 2020 Jan 13;26(1):16-17. [CrossRef] [Medline]
  13. Chen I, Szolovits P, Ghassemi M. Can AI help reduce disparities in general medical and mental health care? AMA J Ethics 2019 Feb 01;21(2):E167-E179 [FREE Full text] [CrossRef] [Medline]
  14. Pfohl S, Marafino B, Coulet A, Rodriguez F, Palaniappan L, Shah N. Creating fair models of atherosclerotic cardiovascular disease risk. In: Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society. 2019 Presented at: AIES '19: AAAI/ACM Conference on AI, Ethics, and Society; Jan 27 - 28, 2019; Honolulu HI USA. [CrossRef]
  15. Abebe R, Hill S, Vaughan JW, Small P, Schwartz HA. Using search queries to understand health information needs in Africa. arXiv 2019. [CrossRef]
  16. Fenech M, Strukelj N, Buston O. Ethical, social, and political challenges of artificial intelligence in health. Future Advocacy. 2018.   URL: [accessed 2021-12-17]
  17. Patel MI, Lopez AM, Blackstock W, Reeder-Hayes K, Moushey EA, Phillips J, et al. Cancer disparities and health equity: a policy statement from the American society of clinical oncology. J Clin Oncol 2020 Oct 10;38(29):3439-3448. [CrossRef]
  18. Levac D, Colquhoun H, O'Brien KK. Scoping studies: advancing the methodology. Implement Sci 2010 Oct 20;5:69 [FREE Full text] [CrossRef] [Medline]
  19. Tricco AC, Lillie E, Zarin W, O'Brien KK, Colquhoun H, Levac D, et al. PRISMA extension for scoping reviews (PRISMA-ScR): checklist and explanation. Ann Intern Med 2018 Oct 02;169(7):467-473 [FREE Full text] [CrossRef] [Medline]
  20. United Nations Department of Economic and Social Affairs. Statistical annex. In: World Economic Situation and Prospects. New York, United States: United Nations; 2014.
  21. Hussain M, Figueiredo M, Tran B, Su Z, Molldrem S, Eikey EV, et al. A scoping review of qualitative research in JAMIA: past contributions and opportunities for future work. J Am Med Inform Assoc 2021 Feb 15;28(2):402-413 [FREE Full text] [CrossRef] [Medline]
  22. Braun V, Clarke V. Using thematic analysis in psychology. Qual Res Psychol 2006 Jan;3(2):77-101. [CrossRef]
  23. DeStephano C, Bakkum-Gamez J, Kaunitz A, Ridgeway J, Sherman M. Intercepting endometrial cancer: opportunities to expand access using new technology. Cancer Prev Res (Phila) 2020 Jul;13(7):563-568. [CrossRef] [Medline]
  24. Choudhary T, Mishra V, Goswami A, Sarangapani J. A transfer learning with structured filter pruning approach for improved breast cancer classification on point-of-care devices. Comput Biol Med 2021 Jul;134:104432. [CrossRef] [Medline]
  25. Zhang Y. Ep1.17-02 artificial intelligence in the qualitative study of pulmonary nodules by analyzing the genetic map and imaging data of lung adenocarcinoma. J Thorac Oncol 2019 Oct;14(10):S1083-S1084. [CrossRef]
  26. Snuderl M. Abstract IA-19: machine learning and AI in molecular pathology diagnostics and clinical management of cancer. Clin Cancer Res 2021;27(5_Supplement):IA-19. [CrossRef]
  27. Tunthanathip T, Oearsakul T. Machine learning approaches for prognostication of newly diagnosed glioblastoma. Int J Nutr Pharmacol Neurol Dis 2021;11(1):57. [CrossRef]
  28. Rocha HA, Emani S, Arruda CA, Rizvi R, Garabedian P, Machado de Aquino C, et al. Non-user physician perspectives about an oncology clinical decision-support system: a qualitative study. J Clin Oncol 2020 May 20;38(15_suppl):e14061. [CrossRef]
  29. Mohamed N, van de Goor R, El-Sheikh M, Elrayah O, Osman T, Nginamau ES, et al. Feasibility of a portable electronic nose for detection of oral squamous cell carcinoma in Sudan. Healthcare (Basel) 2021 May 03;9(5):534 [FREE Full text] [CrossRef] [Medline]
  30. James BL, Sunny SP, Heidari AE, Ramanjinappa RD, Lam T, Tran AV, et al. Validation of a point-of-care optical coherence tomography device with machine learning algorithm for detection of oral potentially malignant and malignant lesions. Cancers (Basel) 2021 Jul 17;13(14):3583 [FREE Full text] [CrossRef] [Medline]
  31. Ningrum DN, Yuan S, Kung W, Wu C, Tzeng I, Huang C, et al. Deep learning classifier with patient’s metadata of dermoscopic images in malignant melanoma detection. J Multidisciplinary Healthcare 2021 Apr;14:877-885. [CrossRef]
  32. Tanriver G, Soluk Tekkesin M, Ergen O. Automated detection and classification of oral lesions using deep learning to detect oral potentially malignant disorders. Cancers (Basel) 2021 Jul 02;13(11):2766 [FREE Full text] [CrossRef] [Medline]
  33. Warin K, Limprasert W, Suebnukarn S, Jinaporntham S, Jantana P. Automatic classification and detection of oral cancer in photographic images using deep learning algorithms. J Oral Pathol Med 2021 Oct 16;50(9):911-918. [CrossRef] [Medline]
  34. Chen B, Lu M, Lipkova J, Mahmood F. Abstract PR-01: real-time, point-of-care pathology diagnosis via embedded deep learning. Clin Cancer Res 2021;27((5_Supplement)):PR-01. [CrossRef]
  35. Jin L, Tang Y, Wu Y, Coole JB, Tan MT, Zhao X, et al. Deep learning extended depth-of-field microscope for fast and slide-free histology. Proc Natl Acad Sci U S A 2020 Dec 29;117(52):33051-33060 [FREE Full text] [CrossRef] [Medline]
  36. Wadhawan T, Situ N, Lancaster K, Yuan X, Zouridakis G. SkinScan©: a portable library for melanoma detection on handheld devices. In: Proceedings of the IEEE International Symposium on Biomedical Imaging: From Nano to Macro. 2011 Presented at: IEEE International Symposium on Biomedical Imaging: From Nano to Macro; Mar 30 - Apr 02, 2011; Chicago, IL, USA. [CrossRef]
  37. Tan MC, Bhushan S, Quang T, Schwarz R, Patel KH, Yu X, et al. Automated software-assisted diagnosis of esophageal squamous cell neoplasia using high-resolution microendoscopy. Gastrointest Endosc 2021 Apr;93(4):831-8.e2 [FREE Full text] [CrossRef] [Medline]
  38. Cerrato TR. Use of artificial intelligence to improve access to initial leukemia diagnosis in low- and middle-income countries. J Clin Oncol 2020 May 20;38(15_suppl):e14117. [CrossRef]
  39. Lee C, Light A, Alaa A, Thurtle D, van der Schaar M, Gnanapragasam VJ. Application of a novel machine learning framework for predicting non-metastatic prostate cancer-specific mortality in men using the Surveillance, Epidemiology, and End Results (SEER) database. Lancet Digital Health 2021 Mar;3(3):e158-e165. [CrossRef]
  40. Matin RN, Dinnes J. AI-based smartphone apps for risk assessment of skin cancer need more evaluation and better regulation. Br J Cancer 2021 May 19;124(11):1749-1750 [FREE Full text] [CrossRef] [Medline]
  41. Khoury M, Engelgau M, Chambers D, Mensah G. Beyond public health genomics: can big data and predictive analytics deliver precision public health? Public Health Genomics 2018 Jul 17;21(5-6):244-250 [FREE Full text] [CrossRef] [Medline]
  42. Guo LN, Lee MS, Kassamali B, Mita C, Nambudiri VE. Bias in, bias out: underreporting and underrepresentation of diverse skin types in machine learning research for skin cancer detection-A scoping review. J Am Acad Dermatol 2022 Jul;87(1):157-159. [CrossRef] [Medline]
  43. Chen Z, Chen S, Liang R, Peng Z, Shen J, Zhu W, et al. Can artificial intelligence support the clinical decision making for Barcelona clinic liver cancer stage 0/a hepatocellular carcinoma in China? J Clin Oncol 2019 May 20;37(15_suppl):e15634. [CrossRef]
  44. Bradley A, van der Meer R, McKay C. Personalized pancreatic cancer management: a systematic review of how machine learning is supporting decision-making. Pancreas 2019;48(5):598-604. [CrossRef] [Medline]
  45. Pramesh CS, Sirohi B, Nag SM, Gupta S, Anderson BO, Feldman NR, et al. Prospective study of an AI enabled online intervention to increase delivery of guideline compliant cancer care, on the ground. J Clin Oncol 2020 May 20;38(15_suppl):2011. [CrossRef]
  46. Chen Z, Cao B, Edwards A, Deng H, Zhang K. A deep imputation and inference framework for estimating personalized and race-specific causal effects of genomic alterations on PSA. J Bioinform Comput Biol 2021 Jul 02;19(04):2150016. [CrossRef]
  47. Pino L, Triana I, Mejia J, Camelo M, Galvez-Nino M, Ruiz R, et al. P09.14 predictive analytics in real-world data from Peru: the new models for personalized oncology. J Thorac Oncol 2021 Mar;16(3):S294. [CrossRef]
  48. Das AK, Gopalan SS. Prevalence and predictors of routine prostate-specific antigen screening in Medicare beneficiaries in the USA: retrospective cohort analysis using machine learning. Open Public Health J 2019 Dec 31;12(1):521-531. [CrossRef]
  49. Sultana N. Predicting sun protection measures against skin diseases using machine learning approaches. J Cosmet Dermatol 2022 Feb 13;21(2):758-769. [CrossRef] [Medline]
  50. Sim J, Kim YA, Kim JH, Lee JM, Kim MS, Shim YM, et al. The major effects of health-related quality of life on 5-year survival prediction among lung cancer survivors: applications of machine learning. Sci Rep 2020 Jul 01;10(1):10693 [FREE Full text] [CrossRef] [Medline]
  51. Mahmood N, Shahid S, Bakhshi T, Riaz S, Ghufran H, Yaqoob M. Identification of significant risks in pediatric acute lymphoblastic leukemia (ALL) through machine learning (ML) approach. Med Biol Eng Comput 2020 Nov;58(11):2631-2640 [FREE Full text] [CrossRef] [Medline]
  52. Hu C, Li Q, Shou J, Zhang F, Li X, Wu M, et al. Constructing a predictive model of depression in chemotherapy patients with non-Hodgkin's lymphoma to improve medical staffs' psychiatric care. Biomed Res Int 2021 Jul 17;2021:9201235 [FREE Full text] [CrossRef] [Medline]
  53. Zhu VJ, Lenert LA, Bunnell BE, Obeid JS, Jefferson M, Halbert CH. Automatically identifying social isolation from clinical narratives for patients with prostate cancer. BMC Med Inform Decis Mak 2019 Mar 14;19(1):43 [FREE Full text] [CrossRef] [Medline]
  54. Cirstea D, Fillmore N, Yameen H, Yellapragada S, Ifeorah C, Do N, et al. Abstract 1569: racial differences in incidence and impact of TP53 deletion on outcome in African American and Caucasian veterans with multiple myeloma. Cancer Res 2019;79(13_Supplement):1569. [CrossRef]
  55. Hassoon A, Baig Y, Naimann D, Celentano D, Lansey D, Stearns V, et al. Abstract 54: addressing cardiovascular health using artificial intelligence: randomized clinical trial to increase physical activity in cancer survivors using intelligent voice assist (Amazon Alexa) for patient coaching. Circulation 2020 Mar 03;141(Suppl_1):A54. [CrossRef]
  56. Kaplan B, Halmos P, Newberg J, Sokol E, Montesion M, Albacker L, et al. Abstract 2337: pan-cancer analysis of sex differences and their associations with ancestry and genomic biomarkers in a large comprehensive genomic profiling dataset. Cancer Res 2020;80(16_Supplement):2337. [CrossRef]
  57. Shew M, New J, Bur AM. Machine learning to predict delays in adjuvant radiation following surgery for head and neck cancer. Otolaryngol Head Neck Surg 2019 Jun 29;160(6):1058-1064. [CrossRef] [Medline]
  58. Sha S, Du W, Parkinson A, Glasgow N. Relative importance of clinical and sociodemographic factors in association with post-operative in-hospital deaths in colorectal cancer patients in New South Wales: an artificial neural network approach. J Eval Clin Pract 2020 Oct 16;26(5):1389-1398. [CrossRef] [Medline]
  59. Galadima H, Adunlin G, Blando J. Abstract A006: multi-modal estimation of causal influences of environmental agents on colorectal cancer in an understudied population. Cancer Epidemiol Biomarkers Prev 2020;29(6_Supplement_2):A006. [CrossRef]
  60. Cherry DR, Chen Q, Murphy JD. A novel prediction model to identify patients with early-stage pancreatic cancer. J Clin Oncol 2020 May 20;38(15_suppl):e16801. [CrossRef]
  61. Dillon ST, Bhasin MK, Feng X, Koh DW, Daoud SS. Quantitative proteomic analysis in HCV-induced HCC reveals sets of proteins with potential significance for racial disparity. J Transl Med 2013 Oct 01;11(1):239 [FREE Full text] [CrossRef] [Medline]
  62. Ding X, Tsang S, Ng S, Xue H. Application of machine learning to development of copy number variation-based prediction of cancer risk. Genomics Insights 2014 Jun 26;7:GEI.S15002. [CrossRef]
  63. Tran P, Monlezun D, De Sirkar S, Iliescu G, Kim P, Lopez-Mattei J, et al. Takotsubo cardiomyopathy mortality and costs in cancer, PCI, and racial disparities: propensity score adjusted machine learning guided analysis of 6+ million inpatient admissions. J Card Fail 2019 Aug;25(8):S60. [CrossRef]
  64. He J, Zhang J, Chen C, Ma Y, De Guzman R, Meng J, et al. The relative importance of clinical and socio-demographic variables in prognostic prediction in non-small cell lung cancer: a variable importance approach. Med Care 2020 May;58(5):461-467. [CrossRef] [Medline]
  65. Benci JL, Vachani CC, Hampshire MK, Bach C, Arnold-Korzeniowski K, Metz JM, et al. Factors influencing delivery of cancer survivorship care plans: a national patterns of care study. Front Oncol 2019 Jan 31;9:1577 [FREE Full text] [CrossRef] [Medline]
  66. Liao Y, Li C, Xia C, Zheng R, Xu B, Zeng H, et al. Spatial distribution of esophageal cancer mortality in China: a machine learning approach. Int Health 2021 Jan 14;13(1):70-79 [FREE Full text] [CrossRef] [Medline]
  67. Fiano R, Merrick G, Innes K, LeMasters T, Mattes M, Shen C, et al. PCN159 prediction of low-value cancer care among older men with low-risk prostate cancer: a machine learning approach. Value Health 2021 Jun;24:S49. [CrossRef]
  68. Gajra A, Zettler ME, Miller KA, Blau S, Venkateshwaran SS, Sridharan S, et al. Augmented intelligence to predict 30-day mortality in patients with cancer. Future Oncol 2021 Oct;17(29):3797-3807 [FREE Full text] [CrossRef] [Medline]
  69. Raffenaud A, Gurupur V, Fernandes SL, Yeung T. Utilizing telemedicine in oncology settings: patient favourability rates and perceptions of use analysis using Chi-Square and neural networks. Technol Health Care 2019 Mar 06;27(2):115-127. [CrossRef]
  70. Muhammad W, Hart G, Nartowt B, Deng J. In silico simulation to quantify liver cancer risk with smoking. Int J Radiat Oncol Biol Phys 2019 Sep;105(1):E137. [CrossRef]
  71. Pino LE, Large E, Mejía J, Triana IC. MAIA (Medical Artificial Intelligence Assistant) as interface for a new cancer healthcare integrative platform. J Glob Oncol 2019 Oct 07;5(suppl):25. [CrossRef]
  72. Hanson HA, Martin C, O'Neil B, Leiser CL, Mayer EN, Smith KR, et al. The relative importance of race compared to health care and social factors in predicting prostate cancer mortality: a random forest approach. J Urol 2019 Dec;202(6):1209-1216. [CrossRef]
  73. Aghdam N, Arab A, Kumar D, Suy S, Dritschilo A, Lynch JJ, et al. Accessibility and utilization of stereotactic body radiation therapy for localized prostate cancer: an analysis of geodemographic clusters. J Clin Oncol 2018 May 20;36(15_suppl):e18636. [CrossRef]
  74. Ramakrishnan S, Xuan P, Qi Q, Hu Q, Ellman E, Azabdaftari G, et al. Abstract A003: epigenetic alterations as potential biologic determinants of racial health disparities. Cancer Res 2018;78(16_Supplement):A003. [CrossRef]
  75. Juacaba S, Rocha HA, Meneleu P, Hekmat R, Felix W, Arriaga YE, et al. A retrospective evaluation of treatment decision making for thyroid cancer using clinical decision support in Brazil. J Clin Oncol 2020 May 20;38(15_suppl):e19193. [CrossRef]
  76. Cheng B, Joe Stanley R, Stoecker WV, Stricklin SM, Hinton KA, Nguyen TK, et al. Analysis of clinical and dermoscopic features for basal cell carcinoma neural network classification. Skin Res Technol 2013 Feb 22;19(1):e217-e222 [FREE Full text] [CrossRef] [Medline]
  77. Urman A, Wang C, Dankwa-Mullan I, Scheinberg E, Young MJ. Harnessing AI for health equity in oncology research and practice. J Clin Oncol 2018 Oct 20;36(30_suppl):67. [CrossRef]
  78. Greatbatch O, Garrett A, Snape K. The impact of artificial intelligence on the current and future practice of clinical cancer genomics. Genet Res 2019 Oct 31;101:E9. [CrossRef]
  79. Lee J, Estevez M, Segal B, Sondhi A, Cohen A, Cherng S. Ai1 quantifying bias in ML-extracted variables for inference in clinical oncology. Value Health 2021 Jun;24:S1. [CrossRef]
  80. Agrawal N, Monlezun D, Grable C, Graham J, Marmagkiolis K, Chauhan S, et al. Abstract 13212: chronic total occlusion racial and income inequities by mortality and cost: propensity score and machine learning augmented nationally representative case-control study of 30 million hospitalizations. Circulation 2021 Nov 16;144(Suppl_1):A14107. [CrossRef]
  81. Kim J, Monlezun D, Palaskas N, Cilingirolu M, Marmagkiolis K, Iliescu C. Abstract A-8: Sex disparities in cardio-oncology treatment and mortality: Propensity score nationally representative case-control analysis with machine learning augmentation of over 30 million hospitalizations. 2021 Apr 28:S9-S111. [CrossRef]
  82. Giannitrapani K, Walling A, Gamboa R, O'Hanlon C, Canning M, Lindvall C, et al. Getting ahead of bias: qualitative pre-work for developing AI capture of palliative and end-of life quality measures. Harvard Medical School. 2020. URL: [accessed 2022-09-13]
  83. Schlemmer HP. A40 Radiomics and AI: future of cancer imaging without radiologists? In: Proceedings of the International Cancer Imaging Society (ICIS) 18th Annual Teaching Course. 2018 Presented at: International Cancer Imaging Society (ICIS) 18th Annual Teaching Course; 7-9 October 2018; Menton, France.
  84. Asiedu M, Guillermo S, Ramanujam N. Low-cost, speculum-free, automated cervical cancer screening: bringing expert colposcopy assessment to community health. Ann Glob Health 2017;83(1):199. [CrossRef]
  85. Holmström O, Linder N, Kaingu H, Mbuuko N, Mbete J, Kinyua F, et al. Point-of-care digital cytology with artificial intelligence for cervical cancer screening in a resource-limited setting. JAMA Netw Open 2021 Mar 01;4(3):e211740 [FREE Full text] [CrossRef] [Medline]
  86. Kudva V, Prasad K, Guruvare S. Android device-based cervical cancer screening for resource-poor settings. J Digit Imaging 2018 Oct 18;31(5):646-654 [FREE Full text] [CrossRef] [Medline]
  87. Hu L, Horning M, Banik D, Ajenifuja OK, Adepiti CA, Yeates K, Mehanian C, et al. Deep learning-based image evaluation for cervical precancer screening with a smartphone targeting low resource settings - engineering approach. In: Proceedings of the 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC). 2020 Presented at: 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC); Jul 20-24, 2020; Montreal, QC, Canada. [CrossRef]
  88. Xue Z, Novetsky AP, Einstein MH, Marcus JZ, Befano B, Guo P, et al. A demonstration of automated visual evaluation of cervical images taken with a smartphone camera. Int J Cancer 2020 Nov 01;147(9):2416-2423 [FREE Full text] [CrossRef] [Medline]
  89. Bae JK, Roh H, You JS, Kim K, Ahn Y, Askaruly S, et al. Quantitative screening of cervical cancers for low-resource settings: pilot study of smartphone-based endoscopic visual inspection after acetic acid using machine learning techniques. JMIR Mhealth Uhealth 2020 Mar 11;8(3):e16467 [FREE Full text] [CrossRef] [Medline]
  90. Ajenifuja KO, Belinson J, Goldstein A, Desai KT, de Sanjose S, Schiffman M. Designing low-cost, accurate cervical screening strategies that take into account COVID-19: a role for self-sampled HPV typing. Infect Agent Cancer 2020 Oct 14;15(1):61 [FREE Full text] [CrossRef] [Medline]
  91. Xue P, Ng MT, Qiao Y. The challenges of colposcopy for cervical cancer screening in LMICs and solutions by artificial intelligence. BMC Med 2020 Jun 03;18(1):169 [FREE Full text] [CrossRef] [Medline]
  92. Yang Z, Francisco J, Reese AS, Spriggs DR, Im H, Castro CM. Addressing cervical cancer screening disparities through advances in artificial intelligence and nanotechnologies for cellular profiling. Biophys Rev (Melville) 2021 Mar;2(1):011303 [FREE Full text] [CrossRef] [Medline]
  93. Castro C, Im H, Lee H, Avila-Wallace M, Weissleder R, Randall T. Harnessing artificial intelligence and digital diffraction to advance point-of-care HPV 16 and 18 detection. Gynecol Oncol 2019 Jun;154:38. [CrossRef]
  94. Asiedu M, Skerrett E, Sapiro G, Ramanujam N. Combining multiple contrasts for improving machine learning-based classification of cervical cancers with a low-cost point-of-care Pocket colposcope. In: Proceedings of the 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC). 2020 Presented at: 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC); Jul 20-24, 2020; Montreal, QC, Canada. [CrossRef]
  95. Parra S, Carranza E, Coole J, Hunt B, Smith C, Keahey P, et al. Development of low-cost point-of-care technologies for cervical cancer prevention based on a single-board computer. IEEE J Transl Eng Health Med 2020;8:1-10. [CrossRef]
  96. Asiedu MN, Simhal A, Chaudhary U, Mueller JL, Lam CT, Schmitt JW, et al. Development of algorithms for automated detection of cervical pre-cancers with a low-cost, point-of-care, pocket colposcope. IEEE Trans Biomed Eng 2019 Aug;66(8):2306-2318. [CrossRef]
  97. Hunt B, Fregnani JH, Brenes D, Schwarz RA, Salcedo MP, Possati-Resende JC, et al. Cervical lesion assessment using real-time microendoscopy image analysis in Brazil: the CLARA study. Int J Cancer 2021 Jul 15;149(2):431-441 [FREE Full text] [CrossRef] [Medline]
  98. Kisling K, Zhang L, Simonds H, Fakie N, Yang J, McCarroll R, et al. Fully automatic treatment planning for external-beam radiation therapy of locally advanced cervical cancer: a tool for low-resource clinics. J Glob Oncol 2019 Dec(5):1-9. [CrossRef]
  99. Azarianpour Esfahani S, Fu P, Mahdi H, Madabhushi A. Computational features of TIL architecture are differentially prognostic of uterine cancer between African and Caucasian American women. J Clin Oncol 2021 May 20;39(15_suppl):5585. [CrossRef]
  100. Asadi F, Salehnasab C, Ajori L. Supervised algorithms of machine learning for the prediction of cervical cancer. J Biomed Phys Eng 2020 Aug;10(4):513-522 [FREE Full text] [CrossRef] [Medline]
  101. Tossas K, Khan J, Winn R. Abstract A010: Hidden figures - an example of using machine learning to prioritize cervical cancer screening outreach. Cancer Epidemiol Biomarkers Prev 2020;29(6_Supplement_2):A010. [CrossRef]
  102. Love SM, Berg WA, Podilchuk C, López Aldrete AL, Gaxiola Mascareño AP, Pathicherikollamparambil K, et al. Palpable breast lump triage by minimally trained operators in Mexico using computer-assisted diagnosis and low-cost ultrasound. J Glob Oncol 2018 Dec(4):1-9. [CrossRef]
  103. Min J, Im H, Allen M, McFarland PJ, Degani I, Yu H, et al. Computational optics enables breast cancer profiling in point-of-care settings. ACS Nano 2018 Sep 25;12(9):9081-9090 [FREE Full text] [CrossRef] [Medline]
  104. Bakre M, Ramkumar C, Basavaraj C, Attuluri A, Madhav L, Prakash C, et al. Abstract P3-08-10: development and validation of a broad-based second generation multi marker “Morphometric IHC” test for optimal treatment planning of stage 1 and 2 breast cancer patients in low resource settings. Cancer Res 2018;78(4_Supplement):P3-08-10. [CrossRef]
  105. Lehman C, Yala A, Lamb L, Barzilay R. Abstract SP080: hidden clues in the mammogram: how AI can improve early breast cancer detection. Cancer Res 2021;81(4_Supplement):SP080. [CrossRef]
  106. Lehman C. Abstract IS-3: breast imaging in resource constrained regions: lessons from Uganda. Cancer Res 2018;78(4_Supplement):IS-3. [CrossRef]
  107. Cobb AN, Janjua HM, Kuo PC. Big data solutions for controversies in breast cancer treatment. Clin Breast Cancer 2021 Jun;21(3):e199-e203. [CrossRef] [Medline]
  108. Thrall JH, Fessell D, Pandharipande PV. Rethinking the approach to artificial intelligence for medical image analysis: the case for precision diagnosis. J Am Coll Radiol 2021 Jan;18(1 Pt B):174-179. [CrossRef] [Medline]
  109. Hou C, Zhong X, He P, Xu B, Diao S, Yi F, et al. Predicting breast cancer in Chinese women using machine learning techniques: algorithm development. JMIR Med Inform 2020 Jun 08;8(6):e17364 [FREE Full text] [CrossRef] [Medline]
  110. Sidey-Gibbons C, Pfob A, Asaad M, Boukovalas S, Lin Y, Selber JC, et al. Development of machine learning algorithms for the prediction of financial toxicity in localized breast cancer following surgical treatment. JCO Clin Cancer Inform 2021 Dec(5):338-347. [CrossRef]
  111. Sidey-Gibbons C, Asaad M, Pfob A, Boukovalas S, Lin Y, Offodile A. Machine learning algorithms to predict financial toxicity associated with breast cancer treatment. J Clin Oncol 2020 May 20;38(15_suppl):2047. [CrossRef]
  112. Zhong X, Luo T, Deng L, Liu P, Hu K, Lu D, et al. Multidimensional machine learning personalized prognostic model in an early invasive breast cancer population-based cohort in China: algorithm validation study. JMIR Med Inform 2020 Nov 09;8(11):e19069 [FREE Full text] [CrossRef] [Medline]
  113. Wheeler SB, Spees L, Biddell CB, Rotter J, Trogdon JG, Birken SA, et al. Development of a personalized follow-up care algorithm for Medicare breast cancer survivors. J Clin Oncol 2020 Oct 10;38(29_suppl):204. [CrossRef]
  114. Lehman C. Abstract IA-21: AI in an imaging center: challenges and opportunities. Clin Cancer Res 2021;27(5_Supplement):IA-21. [CrossRef]
  115. Yang X, Amgad M, Cooper LA, Du Y, Fu H, Ivanov AA. High expression of MKK3 is associated with worse clinical outcomes in African American breast cancer patients. J Transl Med 2020 Sep 01;18(1):334 [FREE Full text] [CrossRef] [Medline]
  116. Mema E, McGinty G. The role of artificial intelligence in understanding and addressing disparities in breast cancer outcomes. Curr Breast Cancer Rep 2020 May 18;12(3):168-174. [CrossRef]
  117. Zhang J, Chen Z, Wu J, Liu K. An intelligent decision-making support system for the detection and staging of prostate cancer in developing countries. Comput Math Methods Med 2020 Aug 17;2020:5363549 [FREE Full text] [CrossRef] [Medline]
  118. Uthoff RD, Song B, Sunny S, Patrick S, Suresh A, Kolur T, et al. Point-of-care, smartphone-based, dual-modality, dual-view, oral cancer screening device with neural network classification for low-resource communities. PLoS One 2018 Dec 5;13(12):e0207493 [FREE Full text] [CrossRef] [Medline]
  119. Song B, Sunny S, Li S, Gurushanth K, Mendonca P, Mukhia N, et al. Mobile-based oral cancer classification for point-of-care screening. J Biomed Opt 2021 Jun 1;26(06):065003. [CrossRef]
  120. Rock L, Datta M, Laronde D, Carraro A, Korbelik J, Harrison A, et al. Abstract 4223: conducting community oral cancer screening among South Asians in British Columbia. Cancer Res 2019;79(13_Supplement):4223. [CrossRef]
  121. Adams SJ, Mondal P, Penz E, Tyan C, Lim H, Babyn P. Development and cost analysis of a lung nodule management strategy combining artificial intelligence and lung-RADS for baseline lung cancer screening. J Am Coll Radiol 2021 May;18(5):741-751. [CrossRef] [Medline]
  122. Otero J, Garagorry F, Cabell-Izquierdo Y, Thomas D. Neuropathology Decision Support Systems for Resource-Poor Pathologists. Berlin, Heidelberg: Springer Nature; 2020.
  123. Quang T, Schwarz RA, Dawsey SM, Tan MC, Patel K, Yu X, et al. A tablet-interfaced high-resolution microendoscope with automated image interpretation for real-time evaluation of esophageal squamous cell neoplasia. Gastrointest Endosc 2016 Dec;84(5):834-841 [FREE Full text] [CrossRef] [Medline]
  124. Esteva A, Kuprel B, Novoa RA, Ko J, Swetter SM, Blau HM, et al. Dermatologist-level classification of skin cancer with deep neural networks. Nature 2017 Jan 25;542(7639):115-118. [CrossRef]
  125. Wu J, Zhuang Q, Tan Y. Auxiliary medical decision system for prostate cancer based on ensemble method. Comput Math Methods Med 2020 May 18;2020:6509596 [FREE Full text] [CrossRef] [Medline]
  126. Briercheck E, Valvert F, Solorzano E, Silva O, Puligandla M, Tala MM, et al. High accuracy, low-cost transcriptional diagnostic to transform lymphoma care in low- and middle-income countries. Blood 2019;134(Supplement_1):409. [CrossRef]
  127. Ilhan B, Lin K, Guneri P, Wilder-Smith P. Improving oral cancer outcomes with imaging and artificial intelligence. J Dent Res 2020 Mar 20;99(3):241-248 [FREE Full text] [CrossRef] [Medline]
  128. Ngwa W, Olver I, Schmeler KM. The use of health-related technology to reduce the gap between developed and undeveloped regions around the globe. Am Soc Clin Oncol Educ Book 2020 Mar;40:1-10 [FREE Full text] [CrossRef] [Medline]
  129. Anirvan P, Meher D, Singh SP. Artificial intelligence in gastrointestinal endoscopy in a resource-constrained setting: a reality check. Euroasian J Hepatogastroenterol 2020;10(2):92-97 [FREE Full text] [CrossRef] [Medline]
  130. Valvert F, Silva O, Solórzano-Ortiz E, Puligandla M, Siliézar Tala MM, Guyon T, et al. Low-cost transcriptional diagnostic to accurately categorize lymphomas in low- and middle-income countries. Blood Adv 2021 May 25;5(10):2447-2455 [FREE Full text] [CrossRef] [Medline]
  131. Timmerman R, Li B, Sarria G, Zhang Y, Perez T, Jiang S. Covering gaps in radiation oncology through artificial intelligence in low-resource countries: a survey-based analysis. Int J Radiat Oncol Biol Phys 2020 Nov;108(3):e420. [CrossRef]
  132. Lee M, Guo L, Nambudiri V. Towards gender equity in artificial intelligence and machine learning applications in dermatology. J Am Med Inform Assoc 2022 Jan 12;29(2):400-403 [FREE Full text] [CrossRef] [Medline]
  133. D'Souza G, Zhang HH, D'Souza WD, Meyer RR, Gillison ML. Moderate predictive value of demographic and behavioral characteristics for a diagnosis of HPV16-positive and HPV16-negative head and neck cancer. Oral Oncol 2010 Mar;46(2):100-104 [FREE Full text] [CrossRef] [Medline]
  134. Howard FM, Dolezal J, Kochanny S, Schulte J, Chen H, Heij L, et al. The impact of site-specific digital histology signatures on deep learning model accuracy and bias. Nat Commun 2021 Jul 20;12(1):4423 [FREE Full text] [CrossRef] [Medline]
  135. Okoji U, Taylor S, Lipoff J. Equity in skin typing: why it is time to replace the Fitzpatrick scale. Br J Dermatol 2021 Jul 22;185(1):198-199. [CrossRef] [Medline]
  136. Veronese F, Branciforti F, Zavattaro E, Tarantino V, Romano V, Meiburger KM, et al. The role in teledermoscopy of an inexpensive and easy-to-use smartphone device for the classification of three types of skin lesions using convolutional neural networks. Diagnostics (Basel) 2021 Mar 05;11(3):451 [FREE Full text] [CrossRef] [Medline]
  137. Pangti R, Mathur J, Chouhan V, Kumar S, Rajput L, Shah S, et al. A machine learning-based, decision support, mobile phone application for diagnosis of common dermatological diseases. J Eur Acad Dermatol Venereol 2021 Mar 12;35(2):536-545. [CrossRef] [Medline]
  138. Gao Y, Cui Y. Deep transfer learning for reducing health care disparities arising from biomedical data inequality. Nat Commun 2020 Oct 12;11(1):5131 [FREE Full text] [CrossRef] [Medline]
  139. Khor S, Hahn E, Haupt E, Shankaran V, Clark S, Rodriguez P, et al. AI2 the impact of including race and ethnicity in risk prediction models on racial bias. Value Health 2021 Jun;24:S1. [CrossRef]
  140. Gilson A, Du J, Janda G, Umrao S, Joel M, Choi R, et al. Abstract PO-074: the impact of phenotypic bias in the generalizability of deep learning models in non-small cell lung cancer. Clin Cancer Res 2021;27(5_Supplement):PO-074. [CrossRef]
  141. Li Y, Pang X, Cui Z, Zhou Y, Mao F, Lin Y, et al. Genetic factors associated with cancer racial disparity - an integrative study across twenty-one cancer types. Mol Oncol 2020 Nov 24;14(11):2775-2786 [FREE Full text] [CrossRef] [Medline]
  142. van Dams R, Kishan A, Nickols N, Raldow A, King C, Chang A, et al. Racial disparity in the genomic basis of radiosensitivity - an exploration of whole-transcriptome sequencing data via a machine-learning approach. Int J Radiat Oncol Biol Phys 2019 Sep;105(1):E138-E139. [CrossRef]
  143. An C, Choi JW, Lee HS, Lim H, Ryu SJ, Chang JH, et al. Prediction of the risk of developing hepatocellular carcinoma in health screening examinees: a Korean cohort study. BMC Cancer 2021 Jul 29;21(1):755 [FREE Full text] [CrossRef] [Medline]
  144. Bibault J, Bassenne M, Ren H, Xing L. Deep learning prediction of cancer prevalence from satellite imagery. Cancers (Basel) 2020 Dec 19;12(12):3844 [FREE Full text] [CrossRef] [Medline]
  145. Muhlestein WE, Akagi DS, Chotai S, Chambless LB. The impact of race on discharge disposition and length of hospitalization after craniotomy for brain tumor. World Neurosurg 2017 Aug;104:24-38 [FREE Full text] [CrossRef] [Medline]
  146. Muhlestein W, Akagi D, McManus A, Chambless L. Machine learning ensemble models predict total charges and drivers of cost for transsphenoidal surgery for pituitary tumor. J Neurosurg 2018 Sep 21;131(2):507-516. [CrossRef] [Medline]
  147. Aghdam N, Carrasquilla M, Wang E, Pepin AN, Danner M, Ayoob M, et al. Ten-year single institutional analysis of geographic and demographic characteristics of patients treated with stereotactic body radiation therapy for localized prostate cancer. Front Oncol 2020 Feb 25;10:616286 [FREE Full text] [CrossRef] [Medline]
  148. Im H, Pathania D, McFarland PJ, Sohani AR, Degani I, Allen M, et al. Design and clinical validation of a point-of-care device for the diagnosis of lymphoma via contrast-enhanced microholography and machine learning. Nat Biomed Eng 2018 Oct 23;2(9):666-674 [FREE Full text] [CrossRef] [Medline]
  149. Kar A, Wreesmann VB, Shwetha V, Thakur S, Rao VU, Arakeri G, et al. Improvement of oral cancer screening quality and reach: the promise of artificial intelligence. J Oral Pathol Med 2020 Oct 28;49(8):727-730. [CrossRef] [Medline]
  150. Lynch SM, Handorf E, Sorice KA, Blackman E, Bealin L, Giri VN, et al. The effect of neighborhood social environment on prostate cancer development in black and white men at high risk for prostate cancer. PLoS One 2020 Aug 13;15(8):e0237332 [FREE Full text] [CrossRef] [Medline]
  151. Manz CR, Chen J, Liu M, Chivers C, Regli SH, Braun J, et al. Validation of a machine learning algorithm to predict 180-day mortality for outpatients with cancer. JAMA Oncol 2020 Dec 01;6(11):1723-1730 [FREE Full text] [CrossRef] [Medline]
  152. Uthoff RD, Song B, Sunny S, Patrick S, Suresh A, Kolur T, et al. Small form factor, flexible, dual-modality handheld probe for smartphone-based, point-of-care oral and oropharyngeal cancer screening. J Biomed Opt;24(06). [CrossRef]
  153. Song B, Sunny S, Uthoff RD, Patrick S, Suresh A, Kolur T, et al. Automatic classification of dual-modality, smartphone-based oral dysplasia and malignancy images using deep learning. Biomed Opt Express 2018 Oct 10;9(11):5318-5329. [CrossRef]
  154. Ilhan B, Guneri P, Wilder-Smith P. The contribution of artificial intelligence to reducing the diagnostic delay in oral cancer. Oral Oncol 2021 May;116. [CrossRef]
  155. Ahluwalia KP. Assessing the oral cancer risk of South-Asian immigrants in New York City. Cancer 2005 Dec 15;104(12 Suppl):2959-2961 [FREE Full text] [CrossRef] [Medline]
  156. Donia J, Shaw JA. Co-design and ethical artificial intelligence for health: an agenda for critical research and practice. Big Data Soc 2021 Dec 17;8(2):205395172110652. [CrossRef]
  157. Beede E, Baylor E, Hersch F, Iurchenko A, Wilcox L, Ruamviboonsuk P, et al. A human-centered evaluation of a deep learning system deployed in clinics for the detection of diabetic retinopathy. In: Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. 2020 Presented at: CHI '20: CHI Conference on Human Factors in Computing Systems; Apr 25 - 30, 2020; Honolulu HI USA. [CrossRef]
  158. Mollura DJ, Culp MP, Pollack E, Battino G, Scheel JR, Mango VL, et al. Artificial intelligence in low- and middle-income countries: innovating global health radiology. Radiology 2020 Dec;297(3):513-520. [CrossRef] [Medline]
  159. Shaw JA, Donia J. The sociotechnical ethics of digital health: a critique and extension of approaches from bioethics. Front Digit Health 2021 Sep 23;3:725088 [FREE Full text] [CrossRef] [Medline]
  160. Couldry N, Mejias UA. Data colonialism: rethinking big data’s relation to the contemporary subject. Television New Media 2018 Sep 02;20(4):336-349. [CrossRef]
  161. Ferryman K. The dangers of data colonialism in precision public health. Glob Policy 2021 Aug 05;12(S6):90-92. [CrossRef]
  162. Veale M, Binns R. Fairer machine learning in the real world: mitigating discrimination without collecting sensitive data. Big Data Soc 2017 Nov 20;4(2):205395171774353. [CrossRef]
  163. Boddington P. Normative modes: codes and standards. In: The Oxford Handbook of Ethics of AI. Oxford, United Kingdom: Oxford University Press; 2020.
  164. Olsen CM, Thompson JF, Pandeya N, Whiteman DC. Evaluation of sex-specific incidence of melanoma. JAMA Dermatol 2020 May 01;156(5):553-560 [FREE Full text] [CrossRef] [Medline]
  165. Kobayashi K, Hisamatsu K, Suzui N, Hara A, Tomita H, Miyazaki T. A review of HPV-related head and neck cancer. J Clin Med 2018 Aug 27;7(9):241 [FREE Full text] [CrossRef] [Medline]
  166. Marur S, D'Souza G, Westra WH, Forastiere AA. HPV-associated head and neck cancer: a virus-related cancer epidemic. Lancet Oncol 2010 Aug;11(8):781-789. [CrossRef]
  167. Tognetti L, Bonechi S, Andreini P, Bianchini M, Scarselli F, Cevenini G, et al. A new deep learning approach integrated with clinical data for the dermoscopic differentiation of early melanomas from atypical nevi. J Dermatol Sci 2021 Mar;101(2):115-122. [CrossRef] [Medline]
  168. Hart GR, Roffman DA, Decker R, Deng J. A multi-parameterized artificial neural network for lung cancer risk prediction. PLoS One 2018 Oct 24;13(10):e0205264 [FREE Full text] [CrossRef] [Medline]
  169. Hart GR, Yan V, Huang GS, Liang Y, Nartowt BJ, Muhammad W, et al. Population-based screening for endometrial cancer: human vs machine intelligence. Front Artif Intell 2020 Nov 24;3:539879 [FREE Full text] [CrossRef] [Medline]
  170. Yaemsiri S, Alfier JM, Moy E, Rossen LM, Bastian B, Bolin J, et al. Healthy people 2020: rural areas lag in achieving targets for major causes of death. Health Aff (Millwood) 2019 Dec 01;38(12):2027-2031 [FREE Full text] [CrossRef] [Medline]
  171. Vayena E, Ferretti A. Big data and artificial intelligence for global health: ethical challenges and opportunities. In: Global Health: Ethical Challenges. Cambridge: Cambridge University Press; 2021.
  172. Hosny A, Aerts HJ. Artificial intelligence for global health. Science 2019 Nov 22;366(6468):955-956 [FREE Full text] [CrossRef] [Medline]
  173. Gyawali B. Does global oncology need artificial intelligence? Lancet Oncol 2018 May;19(5):599-600. [CrossRef]
  174. Harrington C, Erete S, Piper AM. Deconstructing community-based collaborative design. Proc ACM Hum Comput Interact 2019 Nov 07;3(CSCW):1-25. [CrossRef]
  175. Olusanya BO, Opoka RO. Obligations under global health partnerships in LMICs should be contractual. Lancet Glob Health 2017 Sep;5(9):e869. [CrossRef]
  176. Kaur N, Figueiredo S, Bouchard V, Moriello C, Mayo N. Where have all the pilot studies gone? A follow-up on 30 years of pilot studies in Clinical Rehabilitation. Clin Rehabil 2017 Oct 01;31(9):1238-1248 [FREE Full text] [CrossRef] [Medline]
  177. Mutebi M, Dehar N, Nogueira LM, Shi K, Yabroff KR, Gyawali B. Cancer groundshot: building a robust cancer control platform in addition to launching the cancer moonshot. Am Soc Clin Oncol Educ Book 2022 Jul(42):100-115. [CrossRef]
  178. Gyawali B, Sullivan R, Booth CM. Cancer groundshot: going global before going to the moon. Lancet Oncol 2018 Mar;19(3):288-290. [CrossRef]
  179. Featherston R, Downie LE, Vogel AP, Galvin KL. Decision making biases in the allied health professions: a systematic scoping review. PLoS One 2020 Oct 20;15(10):e0240716 [FREE Full text] [CrossRef] [Medline]
  180. Somashekhar S, Bakre M, Ramkumar C, Basavaraj C, Arun Kumar A, Madhav L, et al. Risk of recurrence prediction and optimum treatment planning for early stage breast cancer patients: a cost-effective, accurate and broad based solution for Asia. Ann Oncol 2017 Sep;28:v587. [CrossRef]
  181. Tian Y, Liu X, Wang Z, Cao S, Liu Z, Ji Q, et al. Concordance between Watson for oncology and a multidisciplinary clinical decision-making team for gastric cancer and the prognostic implications: retrospective study. J Med Internet Res 2020 Feb 20;22(2):e14122 [FREE Full text] [CrossRef] [Medline]
  182. Jie Z, Zhiying Z, Li L. A meta-analysis of Watson for Oncology in clinical application. Sci Rep 2021 Mar 11;11(1):5792 [FREE Full text] [CrossRef] [Medline]
  183. Tupasela A, Di Nucci E. Concordance as evidence in the Watson for oncology decision-support system. AI Soc 2020 Feb 01;35(4):811-818. [CrossRef]
  184. Wajcman J. Automation: is it really different this time? Br J Sociol 2017 Mar 21;68(1):119-127. [CrossRef] [Medline]
  185. Murphy K, Di Ruggiero E, Upshur R, Willison DJ, Malhotra N, Cai JC, et al. Artificial intelligence for good health: a scoping review of the ethics literature. BMC Med Ethics 2021 Feb 15;22(1):14 [FREE Full text] [CrossRef] [Medline]
  186. Panch T, Mattie H, Atun R. Artificial intelligence and algorithmic bias: implications for health systems. J Glob Health 2019 Dec;9(2):010318 [FREE Full text] [CrossRef] [Medline]
  187. Panch T, Mattie H, Celi LA. The "inconvenient truth" about AI in healthcare. NPJ Digit Med 2019;2:77 [FREE Full text] [CrossRef] [Medline]
  188. Norori N, Hu Q, Aellen FM, Faraci FD, Tzovara A. Addressing bias in big data and AI for health care: a call for open science. Patterns (N Y) 2021 Oct 08;2(10):100347 [FREE Full text] [CrossRef] [Medline]
  189. Adamson AS, Smith A. Machine learning and health care disparities in dermatology. JAMA Dermatol 2018 Nov 01;154(11):1247-1248. [CrossRef] [Medline]
  190. Silberg J, Manyika J. Notes from the AI frontier: Tackling bias in AI (and in humans). McKinsey Global Institute. 2019.   URL: [accessed 2022-03-09]
  191. McDaniel JT, Nuhu K, Ruiz J, Alorbi G. Social determinants of cancer incidence and mortality around the world: an ecological study. Glob Health Promot 2019 Mar 16;26(1):41-49. [CrossRef] [Medline]
  192. Bompelli A, Wang Y, Wan R, Singh E, Zhou Y, Xu L, et al. Social and behavioral determinants of health in the era of artificial intelligence with electronic health records: a scoping review. Health Data Sci 2021 Aug 24;2021:1-19. [CrossRef]
  193. Christodoulou E, Ma J, Collins GS, Steyerberg EW, Verbakel JY, Van Calster B. A systematic review shows no performance benefit of machine learning over logistic regression for clinical prediction models. J Clin Epidemiol 2019 Jun;110:12-22. [CrossRef] [Medline]
  194. Reddy S, Allan S, Coghlan S, Cooper P. A governance model for the application of AI in health care. J Am Med Inform Assoc 2020 Mar 01;27(3):491-497 [FREE Full text] [CrossRef] [Medline]
  195. London AJ. Artificial intelligence and black-box medical decisions: accuracy versus explainability. Hastings Cent Rep 2019 Jan 21;49(1):15-21. [CrossRef] [Medline]
  196. McCradden MD, Stephenson EA, Anderson JA. Clinical research underlies ethical integration of healthcare artificial intelligence. Nat Med 2020 Sep 09;26(9):1325-1326. [CrossRef] [Medline]
  197. McCradden MD, Anderson JA, A Stephenson E, Drysdale E, Erdman L, Goldenberg A, et al. A research ethics framework for the clinical translation of healthcare machine learning. Am J Bioeth 2022 May 20;22(5):8-22. [CrossRef] [Medline]
  198. Ibrahim SA, Charlson ME, Neill DB. Big data analytics and the struggle for equity in health care: the promise and perils. Health Equity 2020 Apr 01;4(1):99-101 [FREE Full text] [CrossRef] [Medline]
  199. Catlow J, Bray B, Morris E, Rutter M. Power of big data to improve patient care in gastroenterology. Frontline Gastroenterol 2022 May 28;13(3):237-244 [FREE Full text] [CrossRef] [Medline]
  200. Chambers DA, Amir E, Saleh RR, Rodin D, Keating NL, Osterman TJ, et al. The impact of big data research on practice, policy, and cancer care. Am Soc Clin Oncol Educ Book 2019 Jan;39:e167-e175 [FREE Full text] [CrossRef] [Medline]
  201. Xu J, Yang P, Xue S, Sharma B, Sanchez-Martin M, Wang F, et al. Translating cancer genomics into precision medicine with artificial intelligence: applications, challenges and future perspectives. Hum Genet 2019 Mar 22;138(2):109-124 [FREE Full text] [CrossRef] [Medline]
  202. Moscou S, Anderson MR, Kaplan JB, Valencia L. Validity of racial/ethnic classifications in medical records data: an exploratory study. Am J Public Health 2003 Jul;93(7):1084-1086. [CrossRef] [Medline]

AI: artificial intelligence
HIC: high-income country
HPV: human papillomavirus
LMIC: low- and middle-income country
ML: machine learning
PRISMA-ScR: Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews

Edited by T Leung; submitted 21.05.22; peer-reviewed by J Lau, SJC Soerensen , S Pesälä; comments to author 21.07.22; revised version received 11.08.22; accepted 24.08.22; published 01.11.22


©Paul Istasy, Wen Shen Lee, Alla Iansavichene, Ross Upshur, Bishal Gyawali, Jacquelyn Burkell, Bekim Sadikovic, Alejandro Lazo-Langner, Benjamin Chin-Yee. Originally published in the Journal of Medical Internet Research, 01.11.2022.

This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication, as well as this copyright and license information must be included.