Review
Abstract
Background: Effective tools for the early prediction of the onset and development of cardiac arrest (CA) are currently lacking. As machine learning (ML) attracts increasing attention from clinical researchers, some have developed ML models for predicting the occurrence and prognosis of CA, and certain models appear to outperform traditional scoring tools. However, systematic evidence substantiating the efficacy of these models is still lacking.
Objective: This systematic review and meta-analysis was conducted to evaluate the predictive value of ML for the occurrence of CA, good neurological prognosis, mortality, and the return of spontaneous circulation (ROSC), thereby providing evidence-based support for the development and refinement of clinically applicable tools.
Methods: PubMed, Embase, the Cochrane Library, and Web of Science were systematically searched from inception to May 17, 2024. The risk of bias in all prediction models was assessed using the Prediction Model Risk of Bias Assessment Tool.
Results: In total, 93 studies were selected, encompassing 5,729,721 in-hospital and out-of-hospital patients. The meta-analysis revealed that, for predicting CA, the pooled C-index, sensitivity, and specificity derived from the imbalanced validation dataset were 0.90 (95% CI 0.87-0.93), 0.83 (95% CI 0.79-0.87), and 0.93 (95% CI 0.88-0.96), respectively. On the basis of the balanced validation dataset, the pooled C-index, sensitivity, and specificity were 0.88 (95% CI 0.86-0.90), 0.72 (95% CI 0.49-0.95), and 0.79 (95% CI 0.68-0.91), respectively. For predicting the good cerebral performance category score 1 to 2, the pooled C-index, sensitivity, and specificity based on the validation dataset were 0.86 (95% CI 0.85-0.87), 0.72 (95% CI 0.61-0.81), and 0.79 (95% CI 0.66-0.88), respectively. For predicting CA mortality, the pooled C-index, sensitivity, and specificity based on the validation dataset were 0.85 (95% CI 0.82-0.87), 0.83 (95% CI 0.79-0.87), and 0.79 (95% CI 0.74-0.83), respectively. For predicting ROSC, the pooled C-index, sensitivity, and specificity based on the validation dataset were 0.77 (95% CI 0.74-0.80), 0.53 (95% CI 0.31-0.74), and 0.88 (95% CI 0.71-0.96), respectively. In predicting CA, the most significant modeling variables were respiratory rate, blood pressure, age, and temperature. In predicting a good cerebral performance category score 1 to 2, the most significant modeling variables in the in-hospital CA group were rhythm (shockable or nonshockable), age, medication use, and gender; the most significant modeling variables in the out-of-hospital CA group were age, rhythm (shockable or nonshockable), medication use, and ROSC.
Conclusions: ML is currently a promising approach for predicting the occurrence and outcomes of CA. Future research on CA may therefore attempt to systematically update traditional scoring tools by drawing on the superior performance of ML for specific outcomes, achieving artificial intelligence–driven enhancements.
Trial Registration: PROSPERO International Prospective Register of Systematic Reviews CRD42024518949; https://www.crd.york.ac.uk/prospero/display_record.php?RecordID=518949
doi:10.2196/67871
Keywords
Introduction
Background
Cardiac arrest (CA) remains a critical challenge in contemporary medicine, characterized by a dismally low survival rate and poor prognosis, and, therefore, has garnered global attention [
]. CA can be classified by location of occurrence into in-hospital CA (IHCA) and out-of-hospital CA (OHCA). Despite advancements in cardiopulmonary resuscitation techniques, global registry data indicate that the incidence and survival rates of CA have not improved significantly. The incidence of IHCA in the United States increased to 900 to 1000 per 100,000 hospitalized patients between 2008 and 2017, compared with 160 per 100,000 in the United Kingdom from 2011 to 2013 and 840 per 100,000 in China as of 2020 [ - ]. Meanwhile, the estimated average incidences of OHCA attended by emergency medical services (EMS) in North America, Asia, and Europe from 2010 to 2020 were 47.3, 45.9, and 40.6 per 100,000 people, respectively. The estimated average survival rates of IHCA from 2010 to 2020 were 25% in the United States, 18% in the United Kingdom, and only 9.4% in China. For OHCA, the estimated average survival rates during the same period were 10% to 12% in the United States, 8% in Europe, and just 3.6% in Asia [ , ]. These low survival rates also impose significant economic burdens on nations. According to relevant reviews, the cost-effectiveness threshold for CA ranged from US $20,000 to US $150,000 per quality-adjusted life year, with each life saved potentially reducing costs by US $19,000 to US $71,000 per case [ ]. Although efforts to establish CA centers began independently in various regions of the United States as early as 2000 to 2010 [
] and Germany initiated nationwide CA center certification [ ] in August 2019, aiming to provide evidence-based, bundled care to improve CA survival rates, CA remains a formidable clinical challenge. If resuscitation is not timely, the patient may lose consciousness within approximately 10 seconds, with irreversible hypoxic-ischemic brain injury occurring within 4 minutes [ ]; if resuscitation is delayed beyond 10 minutes, survival is practically impossible [ , ]. Thus, early prediction and identification of CA are critical to preventing death and poor outcomes and represent a major challenge that requires urgent clinical attention.
Objectives
However, efficient, internationally recognized, and universally accepted tools for the early prediction and identification of CA risk and adverse outcomes are scarce. In recent years, with the rapid advancement of artificial intelligence (AI), many researchers have used machine learning (ML) to address clinical challenges. Commonly used ML approaches can be broadly categorized into supervised and unsupervised learning. In supervised ML, clinical predictors are incorporated into various models whose parameters are adjusted against outcome variables to predict the probability of a positive event [
]. It is now common to see ML used to predict disease progression and even to diagnose and treat complex diseases effectively. For instance, in 2019, several authors, including Hatib et al [ ] and Wijnberge et al [ ], successfully predicted intraoperative hypotensive events using ML, leading to the clinical translation of these models into products that significantly enhanced patient safety during surgical anesthesia [ ]. By 2023, researchers had similarly affirmed the substantial potential of ML models in cancer detection, prognosis, and treatment, recognizing their exciting discoveries and contributions to advancing medical practice [ ]. These studies, all based on supervised ML and the extensive use of interpretable clinical features to construct predictive models, demonstrate the promising predictive performance of ML for clinical events across various fields. In this context, some researchers have also developed different ML models for risk prediction in CA. Recent reviews by Sem et al [ ] and Chen et al [ ] indicate that ML appears to exhibit high accuracy in both the management and risk prediction of CA. However, these reviews do not quantitatively synthesize the results of ML models, which substantially limits interpretation of the specific value of various ML models in CA and the selection of appropriate models. Therefore, we conducted this systematic review and meta-analysis of the predictive performance of ML for the occurrence of CA, good neurological prognosis after CA, mortality, and the return of spontaneous circulation (ROSC) after CA to provide evidence-based guidance for developing and updating simple, highly accurate prediction tools with directly accessible results.
Methods
Study Registration
This study was conducted in adherence to the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines and prospectively registered with PROSPERO (International Prospective Register of Systematic Reviews; ID CRD42024518949). The detailed PRISMA checklist is presented in
.
Eligibility Criteria
Detailed inclusion and exclusion criteria were defined to screen the original studies relevant to our systematic review from the retrieved literature (
).
Inclusion criteria
- Study type: the included studies must be case-control, cohort, nested case-control, case-cohort, or cross-sectional studies.
- Model construction: some studies lacked independent external validation because of limited sample sizes, but their contributions could not be dismissed. Because our analysis required synthesizing results from both the training and validation sets to assess severe overfitting, studies without external validation were also included.
- Outcomes: studies that comprehensively constructed machine learning (ML) models for cardiac arrest (CA) occurrence prediction or clinical outcomes following CA were selected.
- Language: we included original studies in English.
Exclusion criteria
- Study type: studies categorized as meta-analyses, reviews, guidelines, expert opinions, or conference abstracts and not fully peer reviewed and published were removed.
- Model construction: studies with only risk factor analysis but no construction of a complete ML model were excluded, those with a limited number of samples (<20) were not included, and those only focusing on the accuracy of univariate predictors were removed.
- Outcomes: in existing ML studies, model performance is assessed using the receiver operating characteristic curve, C-statistic, sensitivity, specificity, accuracy, recall, precision, confusion matrix, or F1-score. The few original studies that reported none of these metrics, and therefore did not adequately evaluate model performance, were excluded.
- Language: non–English-language original studies were excluded.
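All of the performance metrics named in the exclusion criteria above derive from a single diagnostic 2 × 2 confusion matrix. As a minimal sketch with hypothetical counts (not taken from any included study):

```python
def metrics(tp: int, fn: int, tn: int, fp: int) -> dict:
    """Standard performance metrics computed from a 2x2 confusion matrix."""
    sensitivity = tp / (tp + fn)                # also called recall
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)                  # positive predictive value
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return {"sensitivity": sensitivity, "specificity": specificity,
            "precision": precision, "accuracy": accuracy, "f1": f1}

# Hypothetical model: 80 of 100 events detected, 20 of 200 nonevents flagged
print(metrics(tp=80, fn=20, tn=180, fp=20))
```

A reported receiver operating characteristic curve or C-statistic summarizes these trade-offs across all thresholds, whereas the remaining metrics describe performance at a single threshold.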
Data Sources and Search Strategy
A systematic search of the PubMed, Embase, Cochrane Library, and Web of Science databases was carried out from their inception to May 17, 2024. The search strategy involved controlled vocabulary and free-text terms, with no restrictions on geographical location or publication year. The detailed search strategy is presented in
- .
Study Selection and Data Extraction
The retrieved studies were imported into EndNote X9 (Clarivate Analytics). After the exclusion of duplicates, titles and abstracts were reviewed, preliminarily eligible studies were selected, and their full texts were downloaded to determine final inclusion. An electronic spreadsheet was prepared to extract the following information: first author, publication year, author’s country, study type, patient source, prediction events, data balance, location of CA occurrence, number of cases with study events, total number of cases, numbers of cases in the training and validation sets, method of validation set generation, missing data–handling methods, and types of models used. Study selection and data extraction were conducted independently by 2 researchers, and disagreements were discussed and resolved with a third author.
Risk of Bias in the Studies
The Prediction Model Risk of Bias Assessment Tool (PROBAST) was used to assess the risk of bias in all included original studies. PROBAST comprises questions across 4 domains (participants, predictors, outcome, and statistical analysis), which together reflect the overall risk of bias and applicability. These domains consist of 2, 3, 6, and 9 questions, respectively, each with 3 possible answers (Yes or Probably yes, No or Probably no, and No information). A domain was classified as high risk if any question was answered No or Probably no, and as low risk only if every question was answered Yes or Probably yes. The overall risk of bias was rated low when all domains were deemed low risk and high when at least one domain was high risk. In total, 2 authors independently assessed the risk of bias using PROBAST and cross-checked their findings. Any discrepancies were resolved by consulting a third author.
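The PROBAST decision rule described above can be sketched as follows (the domain answers are hypothetical illustrations of the logic, not an assessment of any included study):

```python
GOOD = ("Yes", "Probably yes")
BAD = ("No", "Probably no")

def domain_risk(answers: list[str]) -> str:
    """High risk if any signaling question is answered No/Probably no;
    low risk only if every question is answered Yes/Probably yes."""
    if any(a in BAD for a in answers):
        return "high"
    if all(a in GOOD for a in answers):
        return "low"
    return "unclear"  # e.g., an answer of "No information"

def overall_risk(domains: dict[str, list[str]]) -> str:
    """Overall risk is low only when all domains are low risk,
    and high when at least one domain is high risk."""
    risks = [domain_risk(a) for a in domains.values()]
    if "high" in risks:
        return "high"
    if all(r == "low" for r in risks):
        return "low"
    return "unclear"

# A single "Probably no" in the analysis domain makes the study high risk
print(overall_risk({
    "participants": ["Yes", "Probably yes"],
    "predictors": ["Yes"] * 3,
    "outcome": ["Yes"] * 6,
    "analysis": ["Yes"] * 8 + ["Probably no"],
}))  # -> high
```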
Outcomes
The primary outcome measure was the C-index, which reflects the predictive ability of ML models for IHCA and OHCA. However, the C-index may not accurately describe predictive performance for positive events, particularly in models built on severely imbalanced data, from which the original studies often suffered. This limitation was evident in predicting the occurrence of IHCA and OHCA, a good cerebral performance category score of 1 to 2 (CPC 1-2), mortality, and ROSC. Therefore, in addition to the C-index, our primary outcome measures encompassed sensitivity and specificity. Our secondary outcome was the frequency of the variables used in the ML models.
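Because most included studies report only summary metrics, sensitivity and specificity can be converted back into the underlying 2 × 2 table when the event count and total sample size are known. A minimal sketch of one such reconstruction, with hypothetical numbers and the assumption that the number of events is reported:

```python
def reconstruct_2x2(sens: float, spec: float, n_events: int, n_total: int):
    """Recover TP/FN/TN/FP from reported sensitivity, specificity,
    and case numbers (rounded to whole patients)."""
    tp = round(sens * n_events)
    fn = n_events - tp
    n_nonevents = n_total - n_events
    tn = round(spec * n_nonevents)
    fp = n_nonevents - tn
    return tp, fn, tn, fp

# e.g., sensitivity 0.83 and specificity 0.93 with 200 arrests
# among 10,000 patients -> (166, 34, 9114, 686)
print(reconstruct_2x2(0.83, 0.93, 200, 10_000))
```

When the event count is not reported but precision is, the event count can first be back-solved from sensitivity, specificity, precision, and the total sample size before applying the same arithmetic.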
Synthesis Methods
A meta-analysis of the C-index, a measure of the overall accuracy of ML models, was carried out. When a study reported neither the 95% CI nor the SE of the C-index, the SE was estimated following the approach of Debray et al [ ]. Given the differences in included variables and the inconsistent parameters among the ML models, random-effects models were prioritized in the meta-analysis of the C-index. In addition, a meta-analysis of sensitivity and specificity was conducted with a bivariate mixed-effects model based on diagnostic 2 × 2 tables. Because most original studies did not report these tables, we derived them by (1) calculation from sensitivity, specificity, precision, and case numbers; or (2) extraction of sensitivity and specificity at the best Youden index, followed by integration of case numbers. Meta-analysis was performed only when ≥4 models were available; for <4 models, we presented the range of sensitivity and specificity. Our meta-analysis was conducted in R (version 4.2.0; R Foundation for Statistical Computing).
Results
Study Selection
A total of 1270 articles were obtained from the databases, of which 599 (47.17%) were duplicates; 471 (78.6%) of these were identified via software and 128 (21.4%) manually. After the elimination of duplicates, 671 articles were screened by title and abstract, and 169 (25.2%) were selected for full-text review. After the exclusion of conference abstracts lacking full peer-reviewed publication (19/169, 11.2%), studies with risk factor analyses but no complete ML models (22/169, 13%), studies lacking outcome indicators (28/169, 16.6%), and studies with severe statistical errors (7/169, 4.1%), 93 articles were finally included. The detailed process is illustrated in
.
Study Characteristics
The 93 selected studies were published between 2011 and 2024, covering 14 countries, primarily South Korea, China, Japan, the United States, and Singapore. Among the 93 studies, there were 23 (25%) prospective cohort studies and 3 (3%) case-control studies, with the remainder (67/93, 72%) being retrospective cohort studies. Data for 26% (24/93) of the studies were sourced from multiple centers, whereas 37% (34/93) of the studies used data from registry databases and the rest (35/93, 38%) were single-center studies. In 30% (28/93) of the studies, the predicted outcome was the occurrence of CA. In 42 studies, the predicted outcome was the neurological prognosis of patients with CA, with 10 (24%) studies focused on patients with IHCA and the remainder (n=32, 76%) focused on patients with OHCA. In 27% (25/93) of the studies, the predicted outcome was CA mortality, and in 12% (11/93) of the studies, the predicted outcome was ROSC in patients with CA. The 93 studies collectively encompassed a total of 5,729,721 cases, including 1,737,085 OHCA cases and 3,992,636 IHCA cases. Regarding the predictive models constructed, 81% (75/93) of the studies had independent validation sets, but only 27% (25/93) used external validation, primarily using k-fold cross-validation and random-sampling internal validation methods. A total of 34% (32/93) of the studies described methods to prevent data overfitting, mainly through cross-validation. In total, 17 types of ML models were involved, with logistic regression (LR), random forest (RF), deep learning, and decision trees (DTs) being the most prominent. 
In addition, these studies validated several previously established scoring tools, including the Cardiac Arrest Neurological Prognosis score, distance scoring system, Emergency Department In-Hospital Cardiac Arrest Score, FACTOR score, Modified Early Warning Score, National Early Warning Score, National Early Warning Score 2, OHCA score, proposed scoring system, Rapid Emergency Medicine Score, Simplified Acute Physiology Score II, Cardiac Arrest Hospital Prognosis score, and ROSC after CA score. The details of the included studies are shown in
and .
Study | Year of publication | Country of first author | Study type (case-control, cohort study [retrospective or prospective], nested cohort study, or case-cohort study) | Patient sources (single center, multicenter, or registration database) | Predictive events |
Wang et al [ ] | 2024 | Taiwan, China | Retrospective cohort study | Multicenter | Cardiac arrest |
Raheem et al [ ] | 2024 | Pakistan | Retrospective cohort study | Single center | Cardiac arrest |
Amacher et al [ ] | 2024 | Switzerland | Prospective cohort study | Single center | In-hospital mortality and CPC 3-5a |
Cho et al [ ] | 2024 | Republic of Korea | Retrospective cohort study | Single center | Cardiac arrest |
Shin et al [ ] | 2024 | Republic of Korea | Retrospective cohort study | Single center | Cardiac arrest |
Ding et al [ ] | 2024 | China | Prospective cohort study | Single center | CPC 1-2b and in-hospital mortality |
Nishioka et al [ ] | 2024 | Japan | Retrospective cohort study | Multicenter | CPC 3-5 |
Kajino et al [ ] | 2024 | Japan | Retrospective cohort study | Registration database | CPC 1-2 |
Pham et al [ ] | 2024 | United States | Prospective cohort study | Multicenter | Cardiac arrest |
Rahadian et al [ ] | 2024 | Japan | Prospective cohort study | Registration database | VFc or VTd |
Wang et al [ ] | 2024 | Taiwan, China | Retrospective cohort study | Registration database | Cardiac arrest |
Tsai et al [ ] | 2024 | Taiwan, China | Retrospective cohort study | Single center | CPC 1-2 |
Schweiger et al [ ] | 2024 | Switzerland | Prospective cohort study | Single center | In-hospital mortality |
Caputo et al [ ] | 2024 | Switzerland | Prospective cohort study | Registration database | ROSCe |
Lu et al [ ] | 2023 | Taiwan, China | Retrospective cohort study | Single center | Cardiac arrest |
Dünser et al [ ] | 2023 | Austria | Retrospective cohort study | Single center | NROSCf and CPC 3-5 |
Bang et al [ ] | 2023 | Republic of Korea | Retrospective cohort study | Multicenter | In-hospital mortality |
Zhang et al [ ] | 2023 | China | Retrospective cohort study | Multicenter | In-hospital mortality |
Li and Xing [ ] | 2023 | China | Retrospective cohort study | Single center | CPC 3-5 and NROSC |
Ding et al [ ] | 2023 | China | Retrospective cohort study | Single center | Cardiac arrest |
Uehara et al [ ] | 2023 | Japan | Prospective cohort study | Multicenter | CPC 1-2 |
Shin et al [ ] | 2023 | Republic of Korea | Retrospective cohort study | Registration database | CPC 1-2 and ROSC |
Kawai et al [ ] | 2023 | Japan | Retrospective cohort study | Single center | CPC 3-5 |
Imamura et al [ ] | 2023 | Japan | Retrospective cohort study | Multicenter | 30-day mortality |
Hessulf et al [ ] | 2023 | Sweden | Retrospective cohort study | Multicenter | 30-day survival |
Yoon et al [ ] | 2023 | Republic of Korea | Retrospective cohort study | Single center | CPC 3-5 |
Chang et al [ ] | 2023 | Republic of Korea | Retrospective cohort study | Registration database | ROSC, survival to discharge, and CPC 1-2 |
Wang et al [ ] | 2023 | China | Retrospective cohort study | Multicenter | ROSC |
Shinada et al [ ] | 2023 | Japan | Retrospective cohort study | Registration database | CPC 1-2 |
Xu et al [ ] | 2022 | China | Case-control study | Single center | Cardiac arrest |
Tsai et al [ ] | 2022 | Taiwan, China | Retrospective cohort study | Registration database | Cardiac arrest |
Tang et al [ ] | 2022 | China | Retrospective cohort study | Registration database | Cardiac arrest |
Kim et al [ ] | 2022 | Republic of Korea | Retrospective cohort study | Registration database | Cardiac arrest |
Chae et al [ ] | 2022 | Republic of Korea | Retrospective cohort study | Single center | Cardiac arrest |
Sun et al [ ] | 2022 | Taiwan, China | Retrospective cohort study | Single center | Cardiac arrest |
Wong et al [ ] | 2022 | Singapore | Prospective cohort study | Multicenter | Survival to discharge |
Tran et al [ ] | 2022 | United States | Prospective cohort study | Registration database | In-hospital mortality |
Rajendram et al [ ] | 2022 | Singapore | Retrospective cohort study | Multicenter | Survival to discharge and CPC 1-2 |
Rafi et al [ ] | 2022 | France | Retrospective cohort study | Single center | Cardiac arrest |
Liu et al [ ] | 2022 | Singapore | Retrospective cohort study | Registration database | ROSC |
Lin et al [ ] | 2022 | Taiwan, China | Retrospective cohort study | Registration database | CPC 1-2 |
Kawai et al [ ] | 2022 | Japan | Retrospective cohort study | Multicenter | CPC 1-2 |
Itagaki et al [ ] | 2022 | Japan | Retrospective cohort study | Single center | Brain death |
Harris et al [ ] | 2022 | United States | Retrospective cohort study | Registration database | Prehospital ROSC in pediatric OHCAg |
Harford et al [ ] | 2022 | United States | Retrospective cohort study | Registration database | CPC 1-2 |
Harford et al [ ] | 2022 | United States | Retrospective cohort study | Registration database | CPC 1-2 |
Chung et al [ ] | 2021 | Taiwan, China | Retrospective cohort study | Single center | CPC 1-2 |
Chi et al [ ] | 2021 | Taiwan, China | Retrospective cohort study | Registration database | In-hospital mortality |
Wang et al [ ] | 2021 | China | Retrospective cohort study | Multicenter | CPC 1-2 |
Bae et al [ ] | 2021 | Republic of Korea | Retrospective cohort study | Single center | CPC 3-5 |
Mueller et al [ ] | 2021 | Austria | Prospective cohort study | Single center | CPC 1-2 |
Lee et al [ ] | 2021 | Republic of Korea | Retrospective cohort study | Multicenter | Cardiac arrest |
Lim et al [ ] | 2021 | Republic of Korea | Prospective cohort study | Registration database | CPC 1-2 |
Lo and Siu [ ] | 2021 | Hong Kong, China | Retrospective cohort study | Registration database | ROSC |
Lonsain et al [ ] | 2021 | Belgium | Retrospective cohort study | Single center | 24-hour survival |
Nishioka et al [ ] | 2021 | Japan | Prospective cohort study | Registration database | CPC 1-2 |
Beom et al [ ] | 2021 | Republic of Korea | Prospective cohort study | Multicenter | Survival to discharge and CPC 1-2 |
Cheng et al [ ] | 2021 | Taiwan, China | Retrospective cohort study | Single center | CPC 1-2 |
Kim et al [ ] | 2021 | Republic of Korea | Retrospective cohort study | Registration database | Survival to discharge and CPC 1-2 |
Seo et al [ ] | 2021 | Republic of Korea | Prospective cohort study | Registration database | CPC 1-2 |
Song et al [ ] | 2021 | Republic of Korea | Retrospective cohort study | Single center | CPC 3-5 |
Sun et al [ ] | 2021 | Hong Kong, China | Retrospective cohort study | Multicenter | ROSC |
Youn et al [ ] | 2021 | Republic of Korea | Prospective cohort study | Multicenter | Significant coronary artery disease among survivors of OHCA without STEh |
Heo et al [ ] | 2021 | Republic of Korea | Prospective cohort study | Multicenter | CPC 3-5 |
Wang et al [ ] | 2020 | China | Retrospective cohort study | Registration database | CPC 1-2 |
Hong et al [ ] | 2020 | Republic of Korea | Retrospective cohort study | Single center | Cardiac arrest |
Cho et al [ ] | 2020 | Republic of Korea | Retrospective cohort study | Single center | Cardiac arrest |
Hirano et al [ ] | 2020 | Japan | Retrospective cohort study | Registration database | Death at 1 month or survival with poor neurological function (CPC 3-5) and 30-day mortality |
Okada et al [ ] | 2020 | Japan | Prospective cohort study | Registration database | CPC 1-2 |
Liu et al [ ] | 2020 | Singapore | Retrospective cohort study | Registration database | ROSC |
Elola et al [ ] | 2020 | Spain | Retrospective cohort study | Registration database | Cardiac arrest |
Hsieh et al [ ] | 2020 | Taiwan, China | Retrospective cohort study | Registration database | Cardiac arrest |
Baldi et al [ ] | 2020 | Italy | Prospective cohort study | Multicenter | Survival to hospital admission |
Li et al [ ] | 2019 | China | Case-control study | Multicenter | Cardiac arrest |
Srivilaithon et al [ ] | 2019 | Thailand | Case-control study | Single center | Cardiac arrest |
Lee et al [ ] | 2019 | Republic of Korea | Retrospective cohort study | Single center | CPC 3-5 |
Liu et al [ ] | 2019 | Taiwan, China | Retrospective cohort study | Single center | Cardiac arrest |
Jang et al [ ] | 2019 | Republic of Korea | Retrospective cohort study | Single center | Cardiac arrest |
Seki et al [ ] | 2019 | Japan | Prospective cohort study | Multicenter | 1-year survival |
Park et al [ ] | 2019 | Republic of Korea | Retrospective cohort study | Registration database | CPC 1-2 |
Kwon et al [ ] | 2019 | Republic of Korea | Retrospective cohort study | Registration database | CPC 1-2 and survival to discharge |
Kong et al [ ] | 2019 | Republic of Korea | Prospective cohort study | Multicenter | CPC 1-2 and survival to discharge |
Harford et al [ ] | 2019 | United States | Retrospective cohort study | Registration database | CPC 1-2 |
Kwon et al [ ] | 2018 | Republic of Korea | Retrospective cohort study | Multicenter | Cardiac arrest and in-hospital mortality |
Chang et al [ ] | 2018 | Taiwan, China | Retrospective cohort study | Single center | Cardiac arrest |
Shin et al [ ] | 2018 | Republic of Korea | Retrospective cohort study | Multicenter | CPC 1-2 |
Lee et al [ ] | 2017 | Republic of Korea | Retrospective cohort study | Single center | Survival to hospital discharge |
Liu et al [ ] | 2015 | Singapore | Retrospective cohort study | Single center | Cardiac arrest |
Goto et al [ ] | 2014 | Japan | Retrospective cohort study | Registration database | CPC 1-2 and 30-day survival |
Goto et al [ ] | 2013 | Japan | Retrospective cohort study | Registration database | CPC 1-2 and 30-day survival |
Hock Ong et al [ ] | 2012 | Singapore | Prospective cohort study | Single center | Cardiac arrest and in-hospital mortality |
Hayakawa et al [ ] | 2011 | Japan | Prospective cohort study | Registration database | CPC 1-2 |
aCPC 3-5: poor cerebral performance category score 3 to 5.
bCPC 1-2: good cerebral performance category score 1 to 2.
cVF: ventricular fibrillation.
dVT: ventricular tachycardia.
eROSC: return of spontaneous circulation.
fNROSC: non-ROSC.
gOHCA: out-of-hospital cardiac arrest.
hSTE: ST segment elevation.
Study | Balance of data (balanced or unbalanced) | Location of CAa | Number of cases of studied events | Total number of cases | Number of cases in the training set | Generation of validation set | Number of cases in the validation set | Handling method for missing values | Model type |
Wang et al [ ] | Unbalanced | In hospital | 474 | 224,413 | 182,716 | External validation | 41,697 | Deletion | Logistic regression, National Early Warning Score, and Modified Early Warning Score |
Raheem et al [ ] | Unbalanced | In hospital | 5483 | 97,353 | 77,886 | Internal validation | 19,467 | Deletion | Artificial neural network, random forest, and logistic regression |
Amacher et al [ ] | Balanced | In hospital and out of hospital | IHMb: 309; CPC 3-5c: 309 | 713 | —d | — | 713 | No processing | Out-of-hospital CA score, the Cardiac Arrest Hospital Prognosis score, and logistic regression |
Cho et al [ ] | Unbalanced | In hospital | 228 | 95,607 | — | External validation | 95,607 | Deletion | Deep learning, Modified Early Warning Score, and National Early Warning Score |
Shin et al [ ] | Unbalanced | In hospital | 198 | 1995 | 970 | External validation | 1025 | Deletion | Deep learning, logistic regression, random forest, and National Early Warning Score |
Ding et al [ ] | Balanced | In hospital | CPC 1-2e: 20; IHM: 30 | 53 | — | Internal validation | 53 | Deletion | Logistic regression and Cox regression |
Nishioka et al [ ] | Balanced | Out of hospital | 6486 | 7587 | 3337 | External validation | 4250 | Supplement | Logistic regression |
Kajino et al [ ] | Unbalanced | Out of hospital | 11,411 | 302,799 | 149,425 | Internal validation | 153,374 | Deletion | Deep learning |
Pham et al [ ] | Balanced | Out of hospital | 210 | 434 | 231 | External validation (multicenter) | 203 | — | Logistic regression |
Rahadian et al [ ] | Unbalanced | Out of hospital | 860 | 20,713 | 17,162 | — | 3551 | Imputation | Logistic regression, LASSOf, and random forest |
Wang et al [ ] | Unbalanced | Out of hospital | 84 | 48,371 | 32,244 | — | 16,127 | — | Logistic regression |
Tsai et al [ ] | Balanced | Out of hospital | CPC 1-2: 127 | 443 | 265 | Internal validation | 178 | — | Logistic regression |
Schweiger et al [ ] | Balanced | Out of hospital | 120 | 291 | 138 | Internal validation | 153 | Supplement | FACTOR score |
Caputo et al [ ] | Unbalanced | Out of hospital | 2719 | 12,577 | — | Internal validation | 12,577 | — | Logistic regression |
Lu et al [ ] | Unbalanced | Emergency department | 636 | 316,465 | 237,349 | Random sampling | 79,116 | Supplement | Logistic regression, random forest, National Early Warning Score 2, and XGBoostg |
Dünser et al [ ] | Balanced | Operating rooms and departments outside the ICUh | NROSCi: 390; CPC 3-5: 559 | 630 | — | Internal validation | 630 | No processing | Random forest |
Bang et al [ ] | Balanced | In hospital | 411 | 1133 | 754 | Random sampling | 379 | Deletion | Logistic regression |
Zhang et al [ ] | Balanced | In hospital | 495 | 561 | 561 | — | — | Deletion | Logistic regression |
Li and Xing [ ] | Balanced | In hospital | NROSC: 564; CPC 3-5: 229 | 851 | 851 | Internal validation (bootstrap) | — | Deletion | Logistic regression |
Ding et al [ ] | Balanced | In hospital | 1769 | 3592 | 2873 | Internal validation | 719 | Deletion | Support vector machine, random forest, XGBoost, decision tree, and logistic regression |
Uehara et al [ ] | Unbalanced | Out of hospital | 71 | 8422 | 4239 | Random sampling (1:1) | 4183 | Deletion | Logistic regression |
Shin et al [ ] | Unbalanced | Out of hospital | ROSCj: 3095; CPC 1-2: 990 | 16,992 | — | Random sampling | — | Deletion | K-nearest neighbor, decision tree, random forest, support vector machine, logistic regression, and deep learning |
Kawai et al [ ] | Balanced | Out of hospital | 254 | 321 | 257 | Random sampling (8:2) | 64 | Deletion | Deep learning |
Imamura et al [ ] | Balanced | Out of hospital | 172 | 274 | 194 | External validation (multicenter) | 80 | Deletion | Logistic regression |
Hessulf et al [ ] | Unbalanced | Out of hospital | 6191 | 55,615 | 44,492 | Random sampling | 11,123 | Algorithm | XGBoost |
Yoon et al [ ] | Balanced | Out of hospital | 74 | 131 | — | External validation | 131 | Deletion | Logistic regression |
Chang et al [ ] | Unbalanced | Out of hospital | ROSC: 11,996; STDk: 11,833; 30-day survival: 7760; CPC 1-2: 3673 | 157,654 | 157,654 | Internal validation | — | Deletion | LightGBMl |
Wang et al [ ] | Unbalanced | Out of hospital | 156 | 2685 | 2685 | Internal validation | — | Deletion | Logistic regression |
Shinada et al [ ] | Unbalanced | Out of hospital | 1128 | 5340 | 4286 | Internal validation | 1054 | Deletion | Naïve Bayes |
Xu et al [ ] | Balanced | Emergency department and out of hospital | 150 | 600 | 600 | — | — | Deletion | Logistic regression |
Tsai et al [ ] | Unbalanced | Emergency department | 623 | 325,502 | 325,502 | — | — | No processing | Logistic regression, Modified Early Warning Score, and National Early Warning Score |
Tang et al [ ] | Balanced | ICU | 107 | 486 | — | Internal validation | 486 | Algorithm | National Early Warning Score, random forest, artificial neural network, and deep learning |
Kim et al [ ] | Unbalanced | Emergency department | 5431 | 1,350,693 | 1,080,554 | Random sampling | 270,139 | Deletion | Logistic regression, XGBoost, artificial neural network, and logistic regression |
Chae et al [ ] | Unbalanced | In hospital | 573 | 34,452 | — | Random sampling | 34,452 | Supplement | Decision tree, random forest, logistic regression, and artificial neural network |
Sun et al [ | ]Unbalanced | Emergency department | 240 | 145,557 | — | External validation | 145,557 | Deletion | Emergency department, in-hospital CA score, Modified Early Warning Score, and Rapid Emergency Medicine Score |
Wong et al [ | ]Unbalanced | Out of hospital | 855 | 4776 | 3582 | Random sampling | 1194 | Deletion | Random forest |
Tran et al [ | ]Balanced | Out of hospital | 996 | 2999 | 2999 | Internal validation | — | Deletion | Logistic regression |
Rajendram et al [ | ]Unbalanced | Out of hospital | STD: 3549; CPC 1-2: 1754 | 24,897 | — | External validation | 24,897 | Deletion | Random forest |
Rafi et al [ | ]Balanced | Out of hospital | 410 | 820 | — | Internal validation | 820 | Supplement (algorithm) | Logistic regression, random forest, and artificial neural network |
Liu et al [ | ]Unbalanced | Out of hospital | 12,729 | 153,611 | 119,477 | External validation (multicenter) | 34,134 | Deletion | Random forest |
Lin et al [ | ]Unbalanced | Out of hospital | 160 | 3520 | 2816 | Random sampling (8:2) | 704 | Deletion | Decision tree and random forest |
Kawai et al [ | ]Unbalanced | Out of hospital | 286 | 8274 | 6620 | Internal validation (cross-validation) | 1654 | Deletion | Artificial neural network |
Itagaki et al [ | ]Balanced | Out of hospital | BDm: 77 | 419 | — | Internal validation (bootstrap) | 419 | Deletion | Logistic regression |
Harris et al [ | ]Unbalanced | Out of hospital | 399 | 1726 | 1381 | Random sampling | 345 | Supplement (algorithm) | Logistic regression, random forest, and LightGBM |
Harford et al [ | ]Unbalanced | Out of hospital | 379 | 1798 | 957 | Internal validation (cross-validation) | 241 and 600 | Deletion | LightGBM, XGBoost, decision tree, random forest, k-nearest neighbor, logistic regression, and deep learning |
Harford et al [ | ]Unbalanced | Out of hospital | 670 | 9595 | 5750 | Random sampling | 1445 and 2400 | Deletion | Deep learning |
Chung et al [ | ]Unbalanced | In hospital | 94 | 796 | 637 | Random sampling | 159 | No processing | Artificial neural network |
Chi et al [ | ]Balanced | In hospital | 87,311 | 168,693 | 168,693 | — | — | Deletion | HVecn |
Wang et al [ | ]Unbalanced | In hospital | 46 | 159 | 80 | External validation (multicenter) | 79 | Deletion | CANPo score |
Bae et al [ | ]Balanced | In hospital | 643 | 982 | 671 | External validation (prospective) | 311 | Deletion | Logistic regression |
Mueller et al [ | ]Balanced | In hospital | 223 | 475 | 475 | — | — | Deletion | Logistic regression |
Lee et al [ | ]Unbalanced | In hospital | 425 | 332,371 | 173,368 | External validation (multicenter) | 159,003 | Algorithm | Deep learning and Modified Early Warning Score |
Lim et al [ | ]Unbalanced | — | 892 | 8240 | 4712 | External validation (prospective) | 3528 | Deletion | Logistic regression |
Lo and Siu [ | ]Unbalanced | Out of hospital | 2787 | 8157 | 6525 | Random sampling | 1632 | Deletion | Logistic regression, random forest, and artificial neural network |
Lonsain et al [ | ]Balanced | Out of hospital | 168 | 192 | 192 | Internal validation | — | — | Logistic regression |
Mueller et al [ | ]Balanced | Out of hospital | 761 | 1874 | — | Internal validation | 1874 | — | Logistic regression |
Nishioka et al [ | ]Balanced | Out of hospital | 382 | 2354 | 1329 | External validation (prospective) | 1025 | Supplement (algorithm) | Logistic regression |
Beom et al [ | ]Balanced | Out of hospital | STD: 475; CPC 1-2: 315 | 1432 | 496 (survival prognosis validation group) and 489 (neurological prognosis validation group) | Random sampling (7:3) | STD: 227; CPC 1-2: 220 | Deletion | Logistic regression |
Cheng et al [ | ]Unbalanced | Out of hospital | 86 | 1071 | — | Random sampling (9:1) | 1071 | Deletion | Logistic regression, XGBoost, and support vector machine |
Kim et al [ | ]Unbalanced | Out of hospital | 1986 | 39,602 | 39,602 | Internal validation | — | Deletion | Random forest, LightGBM, and artificial neural network |
Seo et al [ | ]Balanced | Out of hospital | 105 | 5739 | 5739 | Internal validation | — | Supplement (algorithm) | Random forest, XGBoost, and logistic regression |
Song et al [ | ]Balanced | Out of hospital | 61 | 106 | — | External validation | 106 | Deletion | Out-of-hospital CA score |
Sun et al [ | ]Unbalanced | Out of hospital | 148 | 447 | 447 | Internal validation | — | Deletion | Logistic regression |
Youn et al [ | ]Unbalanced | Out of hospital | 127 | 331 | — | Internal validation | 331 | Deletion | Random forest, CatBoost, and logistic regression |
Heo et al [ | ]Balanced | Out of hospital | 704 | 903 | 631 | External validation (prospective) | 158 and 114 | Mean | Ensemble learning and logistic regression |
Wang et al [ | ]Balanced | In hospital and out of hospital | 114 | 262 | 262 | Internal validation | — | No processing | Logistic regression |
Hong et al [ | ]Unbalanced | Emergency department | 993 | 214,307 | 168,488 | Random sampling | 45,819 | Supplement | Modified Early Warning Score, logistic regression, artificial neural network, and random forest |
Cho et al [ | ]Unbalanced | Inpatient ward | 11 | 8039 | — | External validation | 8039 | No processing | Modified Early Warning Score and deep learning |
Hirano et al [ | ]Balanced | Out of hospital | 30-day mortality: 13,329 | 30,049 | 23,668 | Internal validation (10-fold cross-validation) | 6381 | Deletion | Logistic regression, support vector machine, random forest, artificial neural network, and multilayer perceptron |
Okada et al [ | ]Unbalanced | Out of hospital | 114 | 916 | 458 | Internal validation | 458 | — | Logistic regression |
Liu et al [ | ]Unbalanced | Out of hospital | 5190 | 63,059 | 44,141 | Internal validation | 18,918 | — | ROSC after CA score and random forest |
Elola et al [ | ]Unbalanced | Out of hospital | 55 | 162 | 96 | Internal validation (5-fold cross-validation) | 66 | — | Random forest |
Hsieh et al [ | ]Unbalanced | Out of hospital | 660 | 252,771 | 168,522 | Internal validation | 84,249 | Deletion | Logistic regression |
Baldi et al [ | ]Unbalanced | Out of hospital | 625 | 2709 | 1962 | Internal validation | 747 | — | Logistic regression |
Li et al [ | ]Unbalanced | Emergency department | 164 | 656 | 656 | Random sampling | — | Supplement (algorithm) | Decision tree |
Srivilaithon et al [ | ]Unbalanced | Emergency department | 250 | 1250 | — | External validation | 1250 | Deletion | National Early Warning Score |
Lee et al [ | ]Balanced | In hospital | 367 | 580 | 580 | — | — | Deletion | Logistic regression |
Liu et al [ | ]Unbalanced | Emergency department | 124 | 43,569 | 43,569 | Internal validation | — | No processing | AdaBoostp, random forest, naïve Bayes, decision tree, logistic regression, artificial neural network, and deep learning |
Jang et al [ | ]Unbalanced | Emergency department | 1568 | 523,852 | 261,926 | — | 261,926 | Deletion | Artificial neural network, Modified Early Warning Score, logistic regression, and random forest |
Seki et al [ | ]Unbalanced | Out of hospital | 432 | 7326 | 5718 | External validation (prospective) | 1608 | Imputation | Random forest |
Park et al [ | ]Unbalanced | Out of hospital | 2805 | 19,832 | 15,860 | Random sampling (8:2) | 3972 | Deletion | Logistic regression, XGBoost, support vector machine, random forest, and artificial neural network |
Kwon et al [ | ]Unbalanced | Out of hospital | CPC 1-2: 3812; STD: 6435 | 36,190 | 28,045 | — | 8145 | — | Deep learning, logistic regression, random forest, and support vector machine |
Kong et al [ | ]Unbalanced | Out of hospital | CPC 1-2: 156; STD: 251 | 737 | 524 | External validation | 213 | — | Logistic regression |
Harford et al [ | ]Unbalanced | Out of hospital | 250 | 2244 | 1584 | Internal validation | 660 | Supplement (algorithm) | Deep learning |
Kwon et al [ | ]Unbalanced | In hospital | CA: 415; IHM: 795 | 50,359 | 46,725 | External validation (multicenter) | 3634 | Supplement (median) | Deep learning, Modified Early Warning Score, logistic regression, and random forest |
Chang et al [ | ]Unbalanced | Emergency department | 124 | 43,569 | — | Internal validation | 43,569 | Supplement (mean) | Logistic regression, decision tree, random forest, and XGBoost |
Shin et al [ | ]Unbalanced | Out of hospital | CPC 1-2: 86 | 456 | 228 | Internal validation | 228 | Deletion | Decision tree |
Lee et al [ | ]Unbalanced | Emergency department | 21 | 111 | — | Internal validation (bootstrap) | 111 | Deletion | Logistic regression and Simplified Acute Physiology Score II |
Liu et al [ | ]Unbalanced | Emergency department | 52 | 1025 | — | Internal validation (cross-validation) | 1025 | Deletion | Proposed scoring system and distance scoring system |
Goto et al [ | ]Unbalanced | Out of hospital | CPC 1-2: 205; 30-day survival: 581 | 5379 | 3693 | External validation (prospective) | 1686 | Deletion | Decision tree |
Goto et al [ | ]Unbalanced | Out of hospital | CPC 1-2: 7769; 30-day survival: 16,332 | 390,226 | 307,896 | Internal validation | 82,330 | Deletion | Decision tree |
Hock Ong et al [ | ]Unbalanced | Emergency department | CA: 43; IHM: 86 | 925 | — | External validation | 925 | Deletion | Modified Early Warning Score and support vector machine |
Hayakawa et al [ | ]Unbalanced | Out of hospital | 244 | 1497 | 862 | External validation (prospective) | 635 | Deletion | Logistic regression |
^a CA: cardiac arrest.
^b IHM: in-hospital mortality.
^c CPC 3-5: poor cerebral performance category score 3 to 5.
^d Not provided.
^e CPC 1-2: good cerebral performance category score 1 to 2.
^f LASSO: least absolute shrinkage and selection operator.
^g XGBoost: Extreme Gradient Boosting.
^h ICU: intensive care unit.
^i NROSC: nonreturn of spontaneous circulation.
^j ROSC: return of spontaneous circulation.
^k STD: survival to discharge.
^l LightGBM: Light Gradient-Boosting Machine.
^m BD: brain death.
^n HVec: hierarchical vectorizer.
^o CANP: Cardiac Arrest Neurological Prognosis.
^p AdaBoost: Adaptive Boosting.
Risk of Bias in the Studies
After excluding previously established scoring tools, we assessed the quality of 208 ML models spanning 17 model types. In total, 24% (50/208) of the models originated from case-control studies, introducing a high risk of bias in participant selection. Regarding predictors, 1% (2/208) of the models carried a high risk of bias owing to the use of outcome information. Regarding outcome assessment, because both CA and prognosis outcomes were clearly defined using standard definitions and required no additional predictive factors, the risk of bias in this domain was low. Although the included ML models were primarily derived from large-sample statistical analyses, 9.1% (19/208) were based on a very small number of cases, with an events-per-variable value of <10. In addition, inappropriate deletion methods were applied to handle missing data in 65.4% (136/208) of the models, and only univariate analysis was used to screen predictors in 22.6% (47/208), ultimately resulting in a high risk of bias in the statistical analysis domain for 76.9% (160/208) of the models, as detailed in .
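Two of the statistical concerns flagged above, complete-case deletion of missing data and a low events-per-variable (EPV) count, can be illustrated with a minimal sketch. The records, variable names, and counts below are invented for illustration and are not drawn from the included studies:

```python
from statistics import mean

# Invented records: (predictor values, possibly None when missing; outcome 0/1).
records = [
    ({"rr": 22, "sbp": 95}, 1),
    ({"rr": None, "sbp": 120}, 0),
    ({"rr": 18, "sbp": None}, 0),
    ({"rr": 30, "sbp": 80}, 1),
    ({"rr": 16, "sbp": 130}, 0),
]

def complete_case(rows):
    """Listwise deletion: drop any record with a missing predictor."""
    return [(x, y) for x, y in rows if all(v is not None for v in x.values())]

def mean_impute(rows):
    """Replace each missing predictor with the observed mean of its column."""
    cols = rows[0][0].keys()
    col_means = {c: mean(x[c] for x, _ in rows if x[c] is not None) for c in cols}
    return [({c: (x[c] if x[c] is not None else col_means[c]) for c in cols}, y)
            for x, y in rows]

def events_per_variable(rows, n_candidate_predictors):
    """EPV = outcome events / candidate predictors; <10 flags sparse-data risk."""
    return sum(y for _, y in rows) / n_candidate_predictors

deleted = complete_case(records)   # deletion discards 2 of the 5 records
imputed = mean_impute(records)     # imputation retains all 5 records
```

Deletion both shrinks the sample and can bias it when missingness is not completely random, which is why PROBAST-style appraisal penalizes it; imputation preserves the sample size, and the EPV check flags models fit on too few events.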
Meta-Analysis
CA Occurrence
A meta-analysis of ML models for predicting CA occurrence in the training set was conducted using a random-effects model. The analysis revealed a C-index of 0.84 (95% CI 0.82-0.86; 38/208, 18.3% of the models), sensitivity of 0.78 (95% CI 0.70-0.84; 34/208, 16.3% of the models), and specificity of 0.84 (95% CI 0.80-0.88; 34/208, 16.3% of the models). Similarly, a meta-analysis of ML models for predicting CA occurrence in the validation set yielded a C-index of 0.89 (95% CI 0.87-0.91; 52/208, 25% of the models), sensitivity of 0.83 (95% CI 0.78-0.87; 43/208, 20.7% of the models), and specificity of 0.93 (95% CI 0.88-0.96; 43/208, 20.7% of the models; - ). Because the modeling data came from both balanced and imbalanced datasets and a variety of model types were used, a subgroup analysis was conducted by dataset and model type. The detailed results are presented in - .

Favorable Neurological Outcomes (CPC 1-2)
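The random-effects pooling applied throughout this meta-analysis can be sketched with a DerSimonian-Laird estimator. The study estimates and standard errors below are invented inputs for illustration, not values from the included studies:

```python
import math

def dl_pool(estimates, ses):
    """DerSimonian-Laird random-effects pooling of study-level estimates
    (eg, C-indexes) given their standard errors. Returns the pooled
    estimate, its 95% CI bounds, and the between-study variance tau^2."""
    v = [se ** 2 for se in ses]
    w = [1.0 / vi for vi in v]                        # fixed-effect weights
    sw = sum(w)
    y_fixed = sum(wi * yi for wi, yi in zip(w, estimates)) / sw
    q = sum(wi * (yi - y_fixed) ** 2 for wi, yi in zip(w, estimates))
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - (len(estimates) - 1)) / c)   # between-study variance
    w_re = [1.0 / (vi + tau2) for vi in v]            # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_re, estimates)) / sum(w_re)
    se_pooled = math.sqrt(1.0 / sum(w_re))
    return pooled, pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled, tau2

# Invented example: 3 studies reporting C-indexes with standard errors.
pooled, ci_low, ci_high, tau2 = dl_pool([0.84, 0.90, 0.88], [0.02, 0.015, 0.03])
```

Random-effects weights shrink toward equality as the between-study variance tau^2 grows, which is why heterogeneous studies contribute more evenly than under a fixed-effect model.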
A meta-analysis of ML models for predicting CPC 1-2 in the training set was conducted using a random-effects model. The results indicated a C-index of 0.90 (95% CI 0.89-0.92; 21/208, 10.1% of the models), sensitivity of 0.72 (95% CI 0.47-0.98; 15/208, 7.2% of the models), and specificity of 0.85 (95% CI 0.79-0.90; 15/208, 7.2% of the models). Similarly, the meta-analysis of ML models for predicting CPC 1-2 in the validation set revealed a C-index of 0.86 (95% CI 0.85-0.87; 69/208, 33.2% of the models), sensitivity of 0.72 (95% CI 0.61-0.81; 44/208, 21.2% of the models), and specificity of 0.79 (95% CI 0.66-0.88; 44/208, 21.2% of the models; , and Table S1 and Figures S1-S3 in ). It was hypothesized that CPC 1-2 outcomes might differ between patients experiencing IHCA and OHCA. As the real-world data closely resembled balanced datasets, a subgroup analysis was conducted exclusively on the IHCA and OHCA populations. The detailed results are shown in , and Table S1 and subgroup analysis report S1 in .

CA Mortality
A random-effects model was used for the meta-analysis of ML models predicting CA mortality in the training set. The analysis indicated a C-index of 0.80 (95% CI 0.76-0.84; 14/208, 6.7% of the models), sensitivity of 0.82 (95% CI 0.58-0.94; 7/208, 3.4% of the models), and specificity of 0.76 (95% CI 0.51-0.91; 7/208, 3.4% of the models). Similarly, a meta-analysis of ML models predicting CA mortality in the validation set revealed a C-index of 0.85 (95% CI 0.82-0.87; 28/208, 13.5% of the models), sensitivity of 0.83 (95% CI 0.79-0.87; 23/208, 11.1% of the models), and specificity of 0.79 (95% CI 0.74-0.83; 23/208, 11.1% of the models; Tables S2-S3 and Figures S4-S6 in ). It was thought that mortality rates might differ between patients experiencing IHCA and OHCA. Given that the real-world data closely resembled balanced datasets, a subgroup analysis was conducted exclusively on the IHCA and OHCA populations. The detailed analysis results are provided in Tables S2-S3 and subgroup analysis report S2 in .

ROSC Analysis
A meta-analysis of ML models for predicting ROSC following CA in the training set was conducted using a random-effects model. The analysis yielded a C-index of 0.83 (95% CI 0.79-0.88; 10/208, 4.8% of the models), sensitivity of 0.52 (95% CI 0.31-0.73; 8/208, 3.8% of the models), and specificity of 0.91 (95% CI 0.88-0.93; 8/208, 3.8% of the models). Similarly, a meta-analysis of ML models predicting ROSC in the validation set revealed a C-index of 0.77 (95% CI 0.74-0.80; 13/208, 6.3% of the models), sensitivity of 0.53 (95% CI 0.31-0.74; 6/208, 2.9% of the models), and specificity of 0.88 (95% CI 0.71-0.96; 6/208, 2.9% of the models; Tables S4-S5 and Figures S7-S9 in ). It was postulated that ROSC might differ between patients who experienced IHCA and OHCA. As the real-world data closely resembled balanced datasets, a subgroup analysis was performed solely on the IHCA and OHCA populations. The comprehensive analysis results are provided in Tables S4-S5 and subgroup analysis report S3 in .

Modeling Variables
Modeling variables were extracted and weighted for analysis from the 93 included studies, focusing on ML models for predicting CA and CPC 1-2. Among the 28 studies on predicting CA, the variables with the highest weights were respiratory rate (n=22, 79%), blood pressure (n=20, 71%), age (n=19, 68%), temperature (n=19, 68%), oxygen saturation (n=15, 54%), and airway (n=9, 32%). Among the 42 studies on predicting CPC 1-2, the modeling variables with the highest weights in the IHCA group were rhythm (shockable or nonshockable; 8/10, 80%), age (7/10, 70%), medication use (6/10, 60%), gender (5/10, 50%), and Glasgow Coma Scale (GCS; 5/10, 50%). The modeling variables with the highest weights in the OHCA group were age (25/32, 78%), rhythm (shockable or nonshockable; 24/32, 75%), medication use (18/32, 56%), ROSC (14/32, 44%), gender (12/32, 38%), no-flow time (resuscitation duration; 12/32, 38%), EMS transport (scene interval, arrival time, and response time; 12/32, 38%), defibrillation (11/32, 34%), and GCS (6/32, 19%). The detailed results of the modeling variables and weight analysis are presented in Tables S6 and S7 in .

Discussion
Summary of the Principal Findings
ML has garnered widespread attention among researchers in the management of CA, particularly for early CA risk prediction in both in-hospital and out-of-hospital populations. Our systematic review and meta-analysis demonstrated a relatively favorable predictive value of ML in the validation set for forecasting CA risk, with a C-index of 0.89. ML likewise appeared to exhibit relatively favorable predictive value for neurological outcomes (CPC 1-2) and mortality in patients who had already experienced CA, with pooled C-indexes of 0.86 and 0.85, respectively. However, in predicting ROSC following CA, ML seemed to display a predictive value comparable to that of traditional scoring tools, with a pooled C-index of 0.77.
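Discrimination summaries such as these can be translated into bedside quantities via Bayes' rule. The sketch below uses the review's pooled validation-set sensitivity (0.83) and specificity (0.93) for predicting CA occurrence; the 1% event prevalence is an assumed illustrative figure, not a value from the included studies:

```python
def predictive_values(sens, spec, prevalence):
    """Positive and negative predictive values implied by sensitivity,
    specificity, and event prevalence (Bayes' rule on expected fractions)."""
    tp = sens * prevalence              # true positives per patient screened
    fp = (1 - spec) * (1 - prevalence)  # false positives
    fn = (1 - sens) * prevalence        # false negatives
    tn = spec * (1 - prevalence)        # true negatives
    return tp / (tp + fp), tn / (tn + fn)

# Pooled sensitivity 0.83 and specificity 0.93; assumed 1% CA prevalence.
ppv, npv = predictive_values(0.83, 0.93, 0.01)
```

At a rare-event prevalence of 1%, the positive predictive value falls to roughly 11% despite the high specificity, while the negative predictive value exceeds 99%; this is why sensitivity and specificity alone, like the C-index, can overstate how actionable a positive alarm is in imbalanced populations.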
Comparison With Previous Reviews
Currently, in clinical practice, classic early warning scoring tools, including the Cardiac Arrest Risk Triage (CART) score, the Modified Early Warning Score, and the VitalPAC Early Warning Score, are commonly used for predicting the occurrence of CA. A previous review by Churpek et al [ ] found that, among these tools, the CART score had the highest accuracy in predicting CA. However, the CART score has certain limitations, with an area under the curve of 0.83, a sensitivity of 0.61 at the optimal Youden index, and a specificity of 0.84. Moreover, the CART score has not been externally validated, and the study population was limited to ward inpatients. Therefore, whether it can dynamically monitor the occurrence of CA in real-time clinical events, improve rescue success rates, and enhance patient outcomes requires prospective validation using high-quality, large-sample external data. Our summarized results reveal that ML has clinical predictive value in forecasting the occurrence of CA and demonstrates relatively favorable accuracy, with an overall C-index of 0.89, sensitivity of 0.83, and specificity of 0.93. This compares favorably with previous scoring tools, providing a clinical basis for the future development of more reliable early warning scoring systems for predicting CA.

In a recent review by Carrick et al [ ], the accuracy of scoring tools for predicting survival or neurological outcomes following CA, such as the OHCA score, the Cardiac Arrest Hospital Prognosis score, and the Good Outcome Following Attempted Resuscitation score, was summarized. These 3 tools, which have undergone rigorous clinical validation, exhibited relatively high accuracy, with C-indexes of 0.79, 0.83, and 0.76, respectively. However, our summarized results suggest that ML exhibits more favorable accuracy, with an overall C-index of 0.86, sensitivity of 0.72, and specificity of 0.79 for predicting favorable neurological outcomes. For predicting CA mortality, ML achieved an overall C-index of 0.85, sensitivity of 0.83, and specificity of 0.79.

Among the various ROSC prediction models developed for CA, the ROSC after CA score developed by Gräsner et al [ ] using data from 5471 patients with OHCA in the German Resuscitation Registry has attracted the most attention. It has been externally validated in several European and Asian countries, demonstrating relatively good accuracy, with an area under the curve of 0.736 in a recent large-scale external validation study [ ]. However, our summarized results indicated an overall C-index of 0.77 for ML, with a sensitivity of 0.53 and specificity of 0.88. Thus, ML models did not seem to significantly outperform traditional scoring tools in predicting ROSC for patients with CA.

Modeling Variables in ML
In our review, the modeling variables of the discussed models primarily originated from common clinical features. Variables such as respiratory rate, blood pressure, age, temperature, oxygen saturation, and airway were key predictors in existing ML models, and they also constitute critical variables in traditional scoring models [ , ]. Therefore, the impact of these variables on predicting the occurrence of CA is well established. However, these predictors differ to some extent from the findings of recent studies, such as the review by Andersen et al [ ], which identified CA risk factors. That review suggested that age was more closely associated with post-CA prognosis and reduced survival, whereas a history of cardiac disease, such as myocardial infarction, arrhythmias, and heart failure, was the most common risk factor for CA occurrence. Other potential risk factors included the use of certain medications, such as QT-prolonging drugs, opioids, and sedatives. Nonetheless, the review concurred that respiratory function and body temperature also had predictive significance for CA, with early interventions targeting these factors being crucial for achieving reversible outcomes [ ].

In the models we reviewed that aimed to predict CPC 1-2 outcomes in patients with IHCA and OHCA, the modeling variables with the highest weights were age, rhythm (shockable or nonshockable), medication use, ROSC, gender, no-flow time (resuscitation duration), defibrillation, EMS transport (scene interval, arrival time, and response time), and GCS. Compared to the review by Sandroni et al [ ], which highlighted the predictive value of GCS, biological markers (eg, neuron-specific enolase), and electrophysiological indicators (eg, somatosensory evoked potentials) for favorable neurological outcomes, our findings show some differences. In addition, complex variables such as medical imaging may need consideration in clinical practice. In recent years, AI methods have been widely used in medical imaging to identify disease progression and prognosis, demonstrating superior accuracy and cost-effectiveness compared to traditional clinical feature-based predictive models [ ]. Therefore, high-value variables identified in recent studies on CA occurrence and prognosis, such as electrocardiography [ ] and ultrasound [ ], are not reflected in traditional scoring tools. This raises the question of whether these more complex variables warrant further validation in an effort to identify more efficient predictive factors and to develop or update risk-scoring tools in the field of CA.

Clinical Applications of ML
Our study reveals that ML methods appear to outperform traditional scoring tools in predicting the occurrence and progression of CA. Therefore, we recommend developing simple auxiliary tools based on ML to facilitate rapid CA risk screening for both in-hospital and out-of-hospital patients, enabling the timely formulation of appropriate treatment strategies. Such ML-based CA prediction models would be particularly beneficial for emergency departments and out-of-hospital response teams. Under current circumstances, in which emergency departments worldwide face overcrowding, resource limitations, and a high influx of critically ill patients [ - ], relying solely on human assessment of CA risk from diverse clinical data could significantly compromise the efficiency of CA treatment. Furthermore, the complexity and volume of clinical data, including patient demographics, laboratory results, imaging data, and textual notes from health care providers, are continuously increasing. Thus, using ML to analyze large datasets and handle complex variables, such as clinical images, seems to be a more feasible approach [ ]. The development of simplified ML prediction tools or intelligent reading systems has the potential to mitigate risks such as treatment delays and poor prognoses in patients with CA in emergency departments. These tools could also enhance health care service quality, reduce human resource costs, and support the formulation of targeted therapeutic strategies. Similarly, ML models that incorporate real-time input of variables such as vital signs, electrocardiograms, and response times in out-of-hospital rescue scenarios can accurately predict positive CA events. This capability helps response teams avoid repeated, frequent evaluations and enables timely decisions on whether to implement preventive interventions to avert CA or, once CA has occurred, whether to initiate extracorporeal membrane oxygenation cardiopulmonary resuscitation to improve survival [ , ].

In addition, our findings indicate that data balance significantly affects ML model construction in CA-related studies. This effect is particularly pronounced when the outcome metrics are severely imbalanced. For instance, in predicting the occurrence of CA, justifying the selected predictive factors remained challenging, and the accuracy of the constructed models was often dominated by the overwhelming proportion of negative events [ ]. In such cases, the C-index hardly represented a model's actual outcome prediction accuracy. Therefore, our study also summarized the sensitivity and specificity of the ML models [ ]. Data balance in the included studies was primarily addressed using oversampling, but these studies rarely validated models constructed from balanced data against imbalanced data. This raises doubts about the accuracy of existing balanced-data models when applied to real-world cases of CA. Because CA events in the real world are often inherently imbalanced, we recommend prioritizing models constructed from real-world data or validating balanced-data models against real-world data to ascertain their true effectiveness [ ].

Ethical Considerations and Model Selection
Although ML models demonstrated relatively satisfactory accuracy in predicting the occurrence and progression of CA in our study, several challenges inherent to ML modeling should be acknowledged. For instance, compared to traditional scoring tools, ML models rely on general algorithms to generate desired outputs from specific input data, a process governed by less explicit rules [ ]. In addition, algorithmic biases may result in unrepresentative datasets [ ], and the reliability of model validation remains a concern [ ]. These issues underscore the ethical challenges associated with the application of AI in medicine, including result interpretability, algorithmic transparency, predictive fairness, and data privacy [ , ]. These concerns are specifically reflected in a patient survey on the prevention of CA occurrence and development conducted by Maris et al [ ]. The results indicate that, although AI-driven CA treatment decisions offer objective data, the absence of patient involvement and informed consent, together with limited model interpretability, means that the overuse of AI technology may ultimately undermine patient trust in physicians. This, in turn, challenges the current goal of high-quality, patient-centered care [ ] in cardiovascular disease treatment, which is built on shared decision-making [ ], respect for patient autonomy, and mutual trust. Therefore, in the high-risk, critical treatment of CA, physicians should continue to make final decisions in collaboration with ML models, drawing on evidence-based clinical experience and the values of the patients.

Given these ethical considerations, choosing among models during research remains challenging because model interpretability and accuracy must be weighed together during model construction [ ]. Selecting models with higher interpretability, such as LR, Cox regression, or DTs, can facilitate better communication, interaction, and trust between health care providers and patients; however, these models may have limited predictive value for certain outcome events [ ]. Conversely, models with poorer interpretability, including neural networks, support vector machines (SVMs), and Extreme Gradient Boosting, often perform exceptionally well in predicting outcomes [ ]. In such cases, it may become necessary to grant patients and their families greater rights to information and autonomy, enabling their active participation in medical decision-making. In our research, LR was the most frequently used model type because it facilitates the development of predictive nomograms, which are simple and easily applicable tools.

Among the 17 ML model types that we included, artificial neural networks, RF, and LR appeared to demonstrate relatively ideal predictive value in forecasting the occurrence of CA and were more frequently used by clinical researchers. In predicting neurological outcomes, LR remained the model most commonly selected by clinicians, followed by DT, RF, SVM, and others. If the aim is to develop a simplified predictive scoring scale to assist clinical practice, priority may be given to LR for its development and subsequent updates because, according to our findings, LR demonstrates relatively satisfactory accuracy and facilitates the construction of straightforward, practical predictive nomograms [ , ]. Furthermore, model interpretability is essential in real-world clinical practice. However, if the objective is to develop auxiliary applications for disease surveillance and prediction in clinical settings, more complex models may be considered; for example, neural networks, SVMs, and Extreme Gradient Boosting, which demonstrated higher accuracy in our study, could be appropriate choices. When dealing with image-based features, such as medical imaging or electrocardiograms, it may be necessary to use models with lower interpretability, such as those based on deep learning [ ], rather than confining the analysis to commonly used clinical features with stronger interpretability.

Prospects
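The conversion from LR coefficients to a simple points-based nomogram mentioned in the model-selection discussion can be sketched as follows. The coefficients, variable names, and intercept below are hypothetical values chosen purely for illustration, not estimates from the included studies:

```python
import math

# Hypothetical LR coefficients on the log-odds scale (illustration only).
coeffs = {"age_per_decade": 0.35, "nonshockable_rhythm": 1.1, "no_bystander_cpr": 0.8}
intercept = -3.0

def to_points(coeffs, per_point=None):
    """Scale coefficients to integer points, Framingham-style: one point
    corresponds to the smallest absolute coefficient unless overridden."""
    unit = per_point or min(abs(b) for b in coeffs.values())
    return {k: round(b / unit) for k, b in coeffs.items()}, unit

def risk_from_points(total_points, unit, intercept):
    """Convert a point total back to an approximate predicted probability."""
    linear_predictor = intercept + total_points * unit
    return 1.0 / (1.0 + math.exp(-linear_predictor))

points, unit = to_points(coeffs)
# points: age_per_decade -> 1, nonshockable_rhythm -> 3, no_bystander_cpr -> 2
```

Rounding to integer points trades a small amount of accuracy for a score a clinician can tally mentally, which is precisely the interpretability-accuracy compromise discussed above.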
In addition, we observed a minimal number of studies that constructed models based on artificial neural networks and ensemble learning, which exhibited highly favorable results, indicating that further validation of these models may be required in future research. Currently, in clinical practice, there is an increasing preference for using simple scoring tools based on interpretable clinical features. While opting for such tools may reduce the ethical dilemmas encountered in clinical settings, relying solely on traditional methods and highly interpretable clinical indicators, such as the Delphi method, during the development of these scoring tools seems to introduce significant bias into the constructed models. Therefore, we considered using multicenter real-world big data, using ML approaches, and incorporating a broader range of cases and clinical features to construct interpretable scoring tools and promote their application. Regarding the processing of clinical image features, our expectation lies in the development of intelligent reading tools based on deep learning methods. Nonetheless, in our study, there was limited research on deep learning based on medical imaging and ultrasounds, particularly in the field of CA, where such research remains relatively underexplored. Therefore, future research on CA should actively explore the integration of medical imaging and ultrasonography. In selecting datasets and algorithms for the development of AI prediction models, it is crucial to rigorously investigate and address the ethical shortcomings of AI applications in health care. Efforts should be made to minimize the influence of individual characteristics such as gender, race, skin color, and socioeconomic status, ensuring that population representation and sample size are carefully considered. 
Representative populations of sufficient number and quality should be selected from diverse regions, ethnicities, and age groups to establish standardized big data models, thereby maximizing the potential of AI technologies [
, ]. It is essential to adopt a multifaceted, interdisciplinary approach; strengthen data protection systems to prevent the leakage of patient information; and conduct extensive reviews to avoid biases [ ], ultimately preventing unfairness toward individuals or patient populations in the development of intelligent diagnostic or predictive tools for CA [ , ].

Advantages and Limitations of This Study
Our study has 3 strengths. First, it represents the first attempt to summarize evidence comparing ML models and scoring tools in predicting the occurrence and prognosis of CA, thereby providing an evidence-based foundation for the subsequent clinical updating and development of new scoring tools or AI early warning systems in the field of CA. Second, our study encompassed 93 original studies with large sample sizes, covering 14 countries and involving 5,729,721 patients, significantly enhancing the strength of our evidence. Third, we conducted a detailed discussion of the accuracy of different models on balanced and imbalanced data. However, this study also had the following limitations. First, most of the original studies on the prediction of CA occurrence (28/93, 30%) constructed models based solely on imbalanced data without validating them on balanced data. Second, many model validation processes relied primarily on internal validation through random sampling and lacked external multicenter validation to examine generalizability. Third, although we discussed various ML models and datasets in depth, the limited number of studies on certain ML models, together with potential differences in the predictive performance of different models for outcome events, restricted our ability to interpret the results of ML applications in CA more comprehensively. Fourth, owing to the small number of included studies, we did not strictly distinguish between IHCA and OHCA populations when summarizing the predictors of CA. Fifth, as this review included only English-language original studies, there may be potential language bias.
Conclusions
Current traditional scoring tools have demonstrated reasonably good performance in predicting the occurrence and prognosis of CA. On the basis of this review, ML appeared to offer greater advantages in predicting the occurrence of CA, neurological functional prognosis, and mortality outcomes. However, for predicting ROSC after CA, ML models did not appear to significantly outperform traditional models. Therefore, in future studies on CA, researchers may explore systematically updating traditional scoring tools by drawing on the superior performance of ML for specific outcomes. This approach would enable AI-driven enhancements within complex and diverse clinical data, thereby assisting clinicians in monitoring and providing early warnings for multiple predictive factors. For outcomes that remain difficult to predict, multicenter large-sample studies are warranted.
Acknowledgments
This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.
Data Availability
All data generated or analyzed during this study are included in this published article and its supplementary information files.
Authors' Contributions
SW, XG, and SH contributed to original draft preparation and writing. SW, QL, and FZ conducted the initial study investigation and design. SW, CZ, and ZC critically reviewed and edited the manuscript. SW, YH, and QL extracted the data. FZ and JC performed the formal analysis. QL, YH, and FZ assessed the risk of bias of the included studies. QL and FZ collaborated on project administration and supervision and serve as co–corresponding authors. All authors commented on previous versions of the manuscript and read and approved the final manuscript.
Conflicts of Interest
None declared.
PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) checklist. [PDF File (Adobe PDF File), 98 KB]
Literature search strategy in PubMed. [DOCX File, 12 KB]
Literature search strategy in the Cochrane Library. [DOCX File, 12 KB]
Literature search strategy in Embase. [DOCX File, 12 KB]
Literature search strategy in Web of Science. [DOCX File, 11 KB]
Meta-analysis results for the C-index of predictive models of in-hospital cardiac arrest risk. [DOCX File, 16 KB]
Meta-analysis results for the sensitivity and specificity of predictive models for in-hospital cardiac arrest risk. [DOCX File, 16 KB]
Forest plot of the C-index meta-analysis of prediction models of in-hospital cardiac arrest risk in the training set. [DOCX File, 46 KB]
Forest plot of the C-index meta-analysis of prediction models of in-hospital cardiac arrest risk in the validation set. [DOCX File, 43 KB]
Forest plot of the sensitivity meta-analysis of predictive models of in-hospital cardiac arrest risk in the training set. [DOCX File, 43 KB]
Forest plot of the sensitivity meta-analysis of predictive models of in-hospital cardiac arrest risk in the validation set. [DOCX File, 41 KB]
Forest plot of the specificity meta-analysis of predictive models of in-hospital cardiac arrest risk in the training set. [DOCX File, 42 KB]
Forest plot of the specificity meta-analysis of predictive models of in-hospital cardiac arrest risk in the validation set. [DOCX File, 41 KB]
Subgroup analysis for predicting cardiac arrest occurrence. [DOCX File, 11 KB]
Meta-analysis results for the C-index of predictive models of in-hospital cardiac arrest risk in balanced datasets. [DOCX File, 15 KB]
Meta-analysis results for the sensitivity and specificity of risk prediction models for in-hospital cardiac arrest in balanced datasets. [DOCX File, 14 KB]
Meta-analysis results for the C-index of prediction models for in-hospital cardiac arrest risk in imbalanced datasets. [DOCX File, 17 KB]
Meta-analysis results for the sensitivity and specificity of risk prediction models for in-hospital cardiac arrest in imbalanced datasets. [DOCX File, 15 KB]
Meta-analysis results for the C-index of predictive models of favorable neurological function (good cerebral performance category score 1-2). [DOCX File, 16 KB]
Tables/figures/subgroup analysis results. [DOCX File, 331 KB]

References
- Hirsch KG, Abella BS, Amorim E, Bader MK, Barletta JF, Berg K, et al. Critical care management of patients after cardiac arrest: a scientific statement from the American Heart Association and Neurocritical Care Society. Circulation. Jan 09, 2024;149(2):e168-e200. [FREE Full text] [CrossRef] [Medline]
- Gräsner JT, Herlitz J, Tjelmeland IB, Wnent J, Masterson S, Lilja G, et al. European Resuscitation Council Guidelines 2021: epidemiology of cardiac arrest in Europe. Resuscitation. Apr 2021;161:61-79. [CrossRef] [Medline]
- Andersen LW, Holmberg MJ, Berg KM, Donnino MW, Granfeldt A. In-hospital cardiac arrest: a review. JAMA. Mar 26, 2019;321(12):1200-1210. [FREE Full text] [CrossRef] [Medline]
- Feng X, Yuguo C. Summary of the report on cardiac arrest and cardiopulmonary resuscitation in China (2022 edition). Chin Circul J. 2023;38(10). [CrossRef]
- Zheng J, Lv C, Zheng W, Zhang G, Tan H, Ma Y, et al. Incidence, process of care, and outcomes of out-of-hospital cardiac arrest in China: a prospective study of the BASIC-OHCA registry. Lancet Public Health. Dec 2023;8(12):e923-e932. [CrossRef]
- Stanger DE, Fordyce CB. The cost of care for cardiac arrest. Resuscitation. Oct 2018;131:A7-A8. [CrossRef] [Medline]
- Donnino MW, Rittenberger JC, Gaieski D, Cocchi MN, Giberson B, Peberdy MA, et al. The development and implementation of cardiac arrest centers. Resuscitation. Aug 2011;82(8):974-978. [CrossRef] [Medline]
- Scholz KH, Andresen D, Böttiger BW, Busch HJ, Fischer M, Frey N, et al. [Quality indicators and structural requirements for Cardiac Arrest Centers-German Resuscitation Council (GRC)]. Anaesthesist. May 4, 2017;66(5):360-362. [CrossRef] [Medline]
- Drennan IR, Lin S, Thorpe KE, Morrison LJ. The effect of time to defibrillation and targeted temperature management on functional survival after out-of-hospital cardiac arrest. Resuscitation. Nov 2014;85(11):1623-1628. [CrossRef] [Medline]
- Kleinman ME, Brennan EE, Goldberger ZD, Swor RA, Terry M, Bobrow BJ, et al. Part 5: adult basic life support and cardiopulmonary resuscitation quality. Circulation. Nov 03, 2015;132(18_suppl_2):1623-1628. [CrossRef]
- Panchal AR, Bartos JA, Cabañas JG, Donnino MW, Drennan IR, Hirsch KG, et al. Part 3: adult basic and advanced life support: 2020 American Heart Association guidelines for cardiopulmonary resuscitation and emergency cardiovascular care. Circulation. Oct 20, 2020;142(16_suppl_2):S366-S468. [FREE Full text] [CrossRef] [Medline]
- Choi RY, Coyner AS, Kalpathy-Cramer J, Chiang MF, Campbell JP. Introduction to machine learning, neural networks, and deep learning. Transl Vis Sci Technol. Mar 27, 2020;9(2):14. [FREE Full text] [CrossRef] [Medline]
- Hatib F, Jian Z, Buddi S, Lee C, Settels J, Sibert K, et al. Machine-learning algorithm to predict hypotension based on high-fidelity arterial pressure waveform analysis. Anesthesiology. Oct 2018;129(4):663-674. [CrossRef] [Medline]
- Wijnberge M, Geerts BF, Hol L, Lemmers N, Mulder MP, Berge P, et al. Effect of a machine learning-derived early warning system for intraoperative hypotension vs standard care on depth and duration of intraoperative hypotension during elective noncardiac surgery: the HYPE randomized clinical trial. JAMA. Mar 17, 2020;323(11):1052-1060. [FREE Full text] [CrossRef] [Medline]
- Rajkomar A, Dean J, Kohane I. Machine learning in medicine. N Engl J Med. Apr 04, 2019;380(14):1347-1358. [CrossRef]
- Swanson K, Wu E, Zhang A, Alizadeh AA, Zou J. From patterns to patients: advances in clinical machine learning for cancer diagnosis, prognosis, and treatment. Cell. Apr 13, 2023;186(8):1772-1791. [FREE Full text] [CrossRef] [Medline]
- Sem M, Mastrangelo E, Lightfoot D, Aves T, Lin S, Mohindra R. The ability of machine learning algorithms to predict defibrillation success during cardiac arrest: a systematic review. Resuscitation. Apr 2023;185:109755. [CrossRef] [Medline]
- Chen CC, Massey SL, Kirschen MP, Yuan I, Padiyath A, Simpao AF, et al. Electroencephalogram-based machine learning models to predict neurologic outcome after cardiac arrest: a systematic review. Resuscitation. Jan 2024;194:110049. [FREE Full text] [CrossRef] [Medline]
- Debray TP, Damen JA, Riley RD, Snell K, Reitsma JB, Hooft L, et al. A framework for meta-analysis of prediction model studies with binary and time-to-event outcomes. Stat Methods Med Res. Sep 2019;28(9):2768-2786. [FREE Full text] [CrossRef] [Medline]
- Wang YM, Chiu IM, Chuang YP, Cheng CY, Lin CF, Cheng FJ, et al. RAPID-ED: a predictive model for risk assessment of patient's early in-hospital deterioration from emergency department. Resusc Plus. Mar 2024;17:100570. [FREE Full text] [CrossRef] [Medline]
- Raheem A, Waheed S, Karim M, Khan NU, Jawed R. Prediction of major adverse cardiac events in the emergency department using an artificial neural network with a systematic grid search. Int J Emerg Med. Jan 04, 2024;17(1):4. [FREE Full text] [CrossRef] [Medline]
- Amacher SA, Arpagaus A, Sahmer C, Becker C, Gross S, Urben T, et al. Prediction of outcomes after cardiac arrest by a generative artificial intelligence model. Resusc Plus. Jun 2024;18:100587. [FREE Full text] [CrossRef] [Medline]
- Cho KJ, Kim KH, Choi J, Yoo D, Kim J. External validation of deep learning-based cardiac arrest risk management system for predicting in-hospital cardiac arrest in patients admitted to general wards based on rapid response system operating and nonoperating periods: a single-center study. Crit Care Med. Mar 01, 2024;52(3):e110-e120. [FREE Full text] [CrossRef] [Medline]
- Shin Y, Cho KJ, Chang M, Youk H, Kim YJ, Park JY, et al. The development and validation of a novel deep-learning algorithm to predict in-hospital cardiac arrest in ED-ICU (emergency department-based intensive care units): a single center retrospective cohort study. Signa Vitae. 2024;20(4):83-98. [CrossRef]
- Ding G, Kuang A, Zhou Z, Lin Y, Chen Y. Development of prognostic models for predicting 90-day neurological function and mortality after cardiac arrest. Am J Emerg Med. May 2024;79:172-182. [CrossRef] [Medline]
- Nishioka N, Yamada T, Nakao S, Yoshiya K, Park C, Nishimura T, et al. External validation of updated prediction models for neurological outcomes at 90 days in patients with out-of-hospital cardiac arrest. J Am Heart Assoc. May 07, 2024;13(9):e033824. [FREE Full text] [CrossRef] [Medline]
- Kajino K, Daya MR, Onoe A, Nakamura F, Nakajima M, Sakuramoto K, et al. Development and validation of a prehospital termination of resuscitation (TOR) rule for out-of-hospital cardiac arrest (OHCA) cases using general purpose artificial intelligence (AI). Resuscitation. Apr 2024;197:110165. [CrossRef] [Medline]
- Pham HN, Holmstrom L, Chugh H, Uy-Evanado A, Nakamura K, Zhang Z, et al. Dynamic electrocardiogram changes are a novel risk marker for sudden cardiac death. Eur Heart J. Mar 07, 2024;45(10):809-819. [FREE Full text] [CrossRef] [Medline]
- Rahadian RE, Okada Y, Shahidah N, Hong D, Ng YY, Chia MY, et al. Machine learning prediction of refractory ventricular fibrillation in out-of-hospital cardiac arrest using features available to EMS. Resusc Plus. Jun 2024;18:100606. [FREE Full text] [CrossRef] [Medline]
- Wang SA, Chang CJ, Do Shin S, Chu SE, Huang CY, Hsu LM, et al. Development of a prediction model for emergency medical service witnessed traumatic out-of-hospital cardiac arrest: a multicenter cohort study. J Formos Med Assoc. Jan 2024;123(1):23-35. [FREE Full text] [CrossRef] [Medline]
- Tsai H, Chi CY, Wang LW, Su YJ, Chen YF, Tsai MS, et al. Outcome prediction of cardiac arrest with automatically computed gray-white matter ratio on computed tomography images. Crit Care. Apr 09, 2024;28(1):118. [FREE Full text] [CrossRef] [Medline]
- Schweiger V, Hiller P, Utters R, Fenice A, Cammann VL, Di Vece D, et al. A novel score to predict in-hospital mortality for patients with acute coronary syndrome and out-of-hospital cardiac arrest: the FACTOR study. Clin Res Cardiol. Apr 08, 2024;113(4):591-601. [FREE Full text] [CrossRef] [Medline]
- Caputo ML, Baldi E, Burkart R, Wilmes A, Cresta R, Benvenuti C, et al. Validation of Utstein-Based score to predict return of spontaneous circulation (UB-ROSC) in patients with out-of-hospital cardiac arrest. Resuscitation. Apr 2024;197:110113. [FREE Full text] [CrossRef] [Medline]
- Lu TC, Wang CH, Chou FY, Sun JT, Chou EH, Huang EP, et al. Machine learning to predict in-hospital cardiac arrest from patients presenting to the emergency department. Intern Emerg Med. Mar 2023;18(2):595-605. [CrossRef] [Medline]
- Dünser MW, Hirschl D, Weh B, Meier J, Tschoellitsch T. The value of a machine learning algorithm to predict adverse short-term outcome during resuscitation of patients with in-hospital cardiac arrest: a retrospective study. Eur J Emerg Med. Aug 01, 2023;30(4):252-259. [CrossRef] [Medline]
- Bang HJ, Oh SH, Jeong WJ, Cha K, Park KN, Youn CS, et al. A novel cardiac arrest severity score for the early prediction of hypoxic-ischemic brain injury and in-hospital death. Am J Emerg Med. Apr 2023;66:22-30. [CrossRef] [Medline]
- Zhang Y, Rao C, Ran X, Hu H, Jing L, Peng S, et al. How to predict the death risk after an in-hospital cardiac arrest (IHCA) in intensive care unit? A retrospective double-centre cohort study from a tertiary hospital in China. BMJ Open. Oct 05, 2023;13(10):e074214. [FREE Full text] [CrossRef] [Medline]
- Li Z, Xing J. A model for predicting return of spontaneous circulation and neurological outcomes in adults after in-hospital cardiac arrest: development and evaluation. Front Neurol. 2023;14:1323721. [FREE Full text] [CrossRef] [Medline]
- Ding X, Wang Y, Ma W, Peng Y, Huang J, Wang M, et al. Development of early prediction model of in-hospital cardiac arrest based on laboratory parameters. Biomed Eng Online. Dec 06, 2023;22(1):116. [FREE Full text] [CrossRef] [Medline]
- Uehara K, Tagami T, Hyodo H, Ohara T, Sakurai A, Kitamura N, et al. Prehospital ABC (Age, Bystander and Cardiogram) scoring system to predict neurological outcomes of cardiopulmonary arrest on arrival: post hoc analysis of a multicentre prospective observational study. Emerg Med J. Jan 2023;40(1):42-47. [CrossRef] [Medline]
- Shin SJ, Bae HS, Moon HJ, Kim GW, Cho YS, Lee DW, et al. Evaluation of optimal scene time interval for out-of-hospital cardiac arrest using a deep neural network. Am J Emerg Med. Jan 2023;63:29-37. [CrossRef] [Medline]
- Kawai Y, Kogeichi Y, Yamamoto K, Miyazaki K, Asai H, Fukushima H. Explainable artificial intelligence-based prediction of poor neurological outcome from head computed tomography in the immediate post-resuscitation phase. Sci Rep. Apr 08, 2023;13(1):5759. [FREE Full text] [CrossRef] [Medline]
- Imamura S, Miyata M, Tagata K, Yokomine T, Ohmure K, Kawasoe M, et al. Prognostic predictors in patients with cardiopulmonary arrest: a novel equation for evaluating the 30-day mortality. J Cardiol. Aug 2023;82(2):146-152. [FREE Full text] [CrossRef] [Medline]
- Hessulf F, Bhatt DL, Engdahl J, Lundgren P, Omerovic E, Rawshani A, et al. Predicting survival and neurological outcome in out-of-hospital cardiac arrest using machine learning: the SCARS model. EBioMedicine. Mar 2023;89:104464. [FREE Full text] [CrossRef] [Medline]
- Yoon JA, Kang C, Park JS, You Y, Min JH, In YN, et al. Quantitative analysis of early apparent diffusion coefficient values from MRIs for predicting neurological prognosis in survivors of out-of-hospital cardiac arrest: an observational study. Crit Care. Oct 25, 2023;27(1):407. [FREE Full text] [CrossRef] [Medline]
- Chang H, Kim JW, Jung W, Heo S, Lee SU, Kim T, et al. Machine learning pre-hospital real-time cardiac arrest outcome prediction (PReCAP) using time-adaptive cohort model based on the Pan-Asian Resuscitation Outcome Study. Sci Rep. Nov 21, 2023;13(1):20344. [FREE Full text] [CrossRef] [Medline]
- Wang JJ, Zhou Q, Huang ZH, Han Y, Qin CZ, Chen ZQ, et al. Establishment of a prediction model for prehospital return of spontaneous circulation in out-of-hospital patients with cardiac arrest. World J Cardiol. Oct 26, 2023;15(10):508-517. [FREE Full text] [CrossRef] [Medline]
- Shinada K, Matsuoka A, Koami H, Sakamoto Y. Bayesian network predicted variables for good neurological outcomes in patients with out-of-hospital cardiac arrest. PLoS One. Sep 28, 2023;18(9):e0291258. [FREE Full text] [CrossRef] [Medline]
- Xu Y, Zhang H, Zhao Z, Wen K, Tian C, Zhai Q, et al. A retrospective study: quick scoring of symptoms to estimate the risk of cardiac arrest in the emergency department. Emerg Med Int. Nov 18, 2022;2022:6889237. [FREE Full text] [CrossRef] [Medline]
- Tsai CL, Lu TC, Fang CC, Wang CH, Lin JY, Chen WJ, et al. Development and validation of a novel triage tool for predicting cardiac arrest in the emergency department. West J Emerg Med. Mar 23, 2022;23(2):258-267. [FREE Full text] [CrossRef] [Medline]
- Tang Q, Cen X, Pan C. Explainable and efficient deep early warning system for cardiac arrest prediction from electronic health records. Math Biosci Eng. Jul 08, 2022;19(10):9825-9841. [FREE Full text] [CrossRef] [Medline]
- Kim JH, Choi A, Kim MJ, Hyun H, Kim S, Chang HJ. Development of a machine-learning algorithm to predict in-hospital cardiac arrest for emergency department patients using a nationwide database. Sci Rep. Dec 16, 2022;12(1):21797. [FREE Full text] [CrossRef] [Medline]
- Chae M, Gil HW, Cho NJ, Lee H. Machine learning-based cardiac arrest prediction for early warning system. Mathematics. Jun 13, 2022;10(12):2049. [CrossRef]
- Sun JT, Chang CC, Lu TC, Lin JC, Wang CH, Fang CC, et al. External validation of a triage tool for predicting cardiac arrest in the emergency department. Sci Rep. May 24, 2022;12(1):8779. [FREE Full text] [CrossRef] [Medline]
- Wong XY, Ang YK, Li K, Chin YH, Lam SS, Tan KB, et al. Development and validation of the SARICA score to predict survival after return of spontaneous circulation in out of hospital cardiac arrest using an interpretable machine learning framework. Resuscitation. Jan 2022;170:126-133. [CrossRef] [Medline]
- Tran AT, Hart AJ, Spertus JA, Jones PG, McNally BF, Malik AO, et al. A risk-adjustment model for patients presenting to hospitals with out-of-hospital cardiac arrest and ST-elevation myocardial infarction. Resuscitation. Mar 2022;171:41-47. [FREE Full text] [CrossRef] [Medline]
- Rajendram MF, Zarisfi F, Xie F, Shahidah N, Pek PP, Yeo JW, et al. External validation of the Survival After ROSC in Cardiac Arrest (SARICA) score for predicting survival after return of spontaneous circulation using multinational pan-Asian cohorts. Front Med (Lausanne). Sep 8, 2022;9:930226. [FREE Full text] [CrossRef] [Medline]
- Rafi S, Gangloff C, Paulhet E, Grimault O, Soulat L, Bouzillé G, et al. Out-of-hospital cardiac arrest detection by machine learning based on the phonetic characteristics of the caller's voice. Stud Health Technol Inform. May 25, 2022;294:445-449. [CrossRef] [Medline]
- Liu N, Liu M, Chen X, Ning Y, Lee JW, Siddiqui FJ, et al. Development and validation of an interpretable prehospital return of spontaneous circulation (P-ROSC) score for patients with out-of-hospital cardiac arrest using machine learning: a retrospective study. EClinicalMedicine. Jun 2022;48:101422. [FREE Full text] [CrossRef] [Medline]
- Lin WC, Huang CH, Chien LT, Tseng HJ, Ng CJ, Hsu KH, et al. Tree-based algorithms and association rule mining for predicting patients' neurological outcomes after first-aid treatment for an out-of-hospital cardiac arrest during COVID-19 pandemic: application of data mining. Int J Gen Med. Sep 2022;15:7395-7405. [CrossRef]
- Kawai Y, Okuda H, Kinoshita A, Yamamoto K, Miyazaki K, Takano K, et al. Visual assessment of interactions among resuscitation activity factors in out-of-hospital cardiopulmonary arrest using a machine learning model. PLoS One. 2022;17(9):e0273787. [FREE Full text] [CrossRef] [Medline]
- Itagaki Y, Hayakawa M, Maekawa K, Kodate A, Moriki K, Takahashi Y, et al. Early prediction model of brain death in out-of-hospital cardiac arrest patients: a single-center retrospective and internal validation analysis. BMC Emerg Med. Nov 04, 2022;22(1):177. [FREE Full text] [CrossRef] [Medline]
- Harris M, Crowe RP, Anders J, D'Acunto S, Adelgais KM, Fishe JN. Identification of factors associated with return of spontaneous circulation after pediatric out-of-hospital cardiac arrest using natural language processing. Prehosp Emerg Care. 2023;27(5):687-694. [CrossRef] [Medline]
- Harford S, Del Rios M, Heinert S, Weber J, Markul E, Tataris K, et al. A machine learning approach for modeling decisions in the out of hospital cardiac arrest care workflow. BMC Med Inform Decis Mak. Jan 25, 2022;22(1):21. [FREE Full text] [CrossRef] [Medline]
- Harford S, Darabi H, Heinert S, Weber J, Campbell T, Kotini-Shah P, et al. Utilizing community level factors to improve prediction of out of hospital cardiac arrest outcome using machine learning. Resuscitation. Sep 2022;178:78-84. [FREE Full text] [CrossRef] [Medline]
- Chung CC, Chiu WT, Huang YH, Chan L, Hong CT, Chiu HW. Identifying prognostic factors and developing accurate outcome predictions for in-hospital cardiac arrest by using artificial neural networks. J Neurol Sci. Jun 15, 2021;425:117445. [CrossRef] [Medline]
- Chi CY, Ao S, Winkler A, Fu KC, Xu J, Ho YL, et al. Predicting the mortality and readmission of in-hospital cardiac arrest patients with electronic health records: a machine learning approach. J Med Internet Res. Sep 13, 2021;23(9):e27798. [FREE Full text] [CrossRef] [Medline]
- Wang G, Zhang Z, Xu X, Sun Q, Yang H, Zhang J. Derivation and validation of the CANP scoring model for predicting the neurological outcome in post-cardiac arrest patients. Neurosciences (Riyadh). Oct 2021;26(4):372-378. [FREE Full text] [CrossRef] [Medline]
- Bae DH, Lee HY, Jung YH, Jeung KW, Lee BK, Youn CS, et al. PROLOGUE (PROgnostication using LOGistic regression model for Unselected adult cardiac arrest patients in the Early stages): development and validation of a scoring system for early prognostication in unselected adult cardiac arrest patients. Resuscitation. Mar 2021;159:60-68. [CrossRef] [Medline]
- Mueller M, Grafeneder J, Schoergenhofer C, Schwameis M, Schriefl C, Poppe M, et al. Initial blood pH, lactate and base deficit add no value to peri-arrest factors in prognostication of neurological outcome after out-of-hospital cardiac arrest. Front Med (Lausanne). 2021;8:697906. [FREE Full text] [CrossRef] [Medline]
- Lee YJ, Cho KJ, Kwon O, Park H, Lee Y, Kwon JM, et al. A multicentre validation study of the deep learning-based early warning score for predicting in-hospital cardiac arrest in patients admitted to general wards. Resuscitation. Apr 22, 2021;163:78-85. [FREE Full text] [CrossRef] [Medline]
- Lim HJ, Ro YS, Kim KH, Park JH, Hong KJ, Song KJ, et al. The ED-PLANN score: a simple risk stratification tool for out-of-hospital cardiac arrests derived from emergency departments in Korea. J Clin Med. Dec 29, 2021;11(1):174. [FREE Full text] [CrossRef] [Medline]
- Lo YH, Siu YC. Predicting survived events in nontraumatic out-of-hospital cardiac arrest: a comparison study on machine learning and regression models. J Emerg Med. Dec 2021;61(6):683-694. [CrossRef] [Medline]
- Lonsain WS, De Lausnay L, Wauters L, Desruelles D, Dewolf P. The prognostic value of early lactate clearance for survival after out-of-hospital cardiac arrest. Am J Emerg Med. Aug 2021;46:56-62. [CrossRef] [Medline]
- Nishioka N, Kobayashi D, Kiguchi T, Irisawa T, Yamada T, Yoshiya K, et al. Development and validation of early prediction for neurological outcome at 90 days after return of spontaneous circulation in out-of-hospital cardiac arrest. Resuscitation. Nov 2021;168:142-150. [CrossRef] [Medline]
- Beom JH, Park I, You JS, Roh YH, Kim MJ, Park YS. Predictive model of good clinical outcomes in patients undergoing coronary angiography after out-of-hospital cardiac arrest: a prospective, multicenter observational study conducted by the Korean Cardiac Arrest Research Consortium. J Clin Med. Aug 20, 2021;10(16):3695. [FREE Full text] [CrossRef] [Medline]
- Cheng CY, Chiu IM, Zeng WH, Tsai CM, Lin CH. Machine learning models for survival and neurological outcome prediction of out-of-hospital cardiac arrest patients. Biomed Res Int. Sep 17, 2021;2021:9590131. [FREE Full text] [CrossRef] [Medline]
- Kim JW, Ha J, Kim T, Yoon H, Hwang SY, Jo IJ, et al. Developing a time-adaptive prediction model for out-of-hospital cardiac arrest: nationwide cohort study in Korea. J Med Internet Res. Jul 05, 2021;23(7):e28361. [FREE Full text] [CrossRef] [Medline]
- Seo DW, Yi H, Bae HJ, Kim YJ, Sohn CH, Ahn S, et al. Prediction of neurologically intact survival in cardiac arrest patients without pre-hospital return of spontaneous circulation: machine learning approach. J Clin Med. Mar 05, 2021;10(5):1089. [FREE Full text] [CrossRef] [Medline]
- Song HG, Park JS, You Y, Ahn HJ, Yoo I, Kim SW, et al. Using out-of-hospital cardiac arrest (OHCA) and cardiac arrest hospital prognosis (CAHP) scores with modified objective data to improve neurological prognostic performance for out-of-hospital cardiac arrest survivors. J Clin Med. Apr 22, 2021;10(9):1825. [FREE Full text] [CrossRef] [Medline]
- Sun KF, Poon KM, Lui CT, Tsui KL. Clinical prediction rule of termination of resuscitation for out-of-hospital cardiac arrest patient with pre-hospital defibrillation given. Am J Emerg Med. Dec 2021;50:733-738. [CrossRef] [Medline]
- Youn CS, Yi H, Kim YJ, Song H, Kim N, Kim WY. Early identification of resuscitated patients with a significant coronary disease in out-of-hospital cardiac arrest survivors without ST-segment elevation. J Clin Med. Dec 02, 2021;10(23):5688. [FREE Full text] [CrossRef] [Medline]
- Heo JH, Kim T, Shin J, Suh GJ, Kim J, Jung YS, et al. Prediction of neurological outcomes in out-of-hospital cardiac arrest survivors immediately after return of spontaneous circulation: ensemble technique with four machine learning models. J Korean Med Sci. Jul 19, 2021;36(28):e187. [FREE Full text] [CrossRef] [Medline]
- Wang H, Tang L, Zhang L, Zhang ZL, Pei HH. Development a clinical prediction model of the neurological outcome for patients with coma and survived 24 hours after cardiopulmonary resuscitation. Clin Cardiol. Sep 2020;43(9):1024-1031. [FREE Full text] [CrossRef] [Medline]
- Hong S, Lee S, Lee J, Cha WC, Kim K. Prediction of cardiac arrest in the emergency department based on machine learning and sequential characteristics: model development and retrospective clinical validation study. JMIR Med Inform. Aug 04, 2020;8(8):e15932. [FREE Full text] [CrossRef] [Medline]
- Cho KJ, Kwon O, Kwon JM, Lee Y, Park H, Jeon KH, et al. Detecting patient deterioration using artificial intelligence in a rapid response system. Crit Care Med. Apr 2020;48(4):e285-e289. [CrossRef] [Medline]
- Hirano Y, Kondo Y, Sueyoshi K, Okamoto K, Tanaka H. Early outcome prediction for out-of-hospital cardiac arrest with initial shockable rhythm using machine learning models. Resuscitation. Jan 2021;158:49-56. [FREE Full text] [CrossRef] [Medline]
- Okada Y, Kiguchi T, Irisawa T, Yamada T, Yoshiya K, Park C, et al. Development and validation of a clinical score to predict neurological outcomes in patients with out-of-hospital cardiac arrest treated with extracorporeal cardiopulmonary resuscitation. JAMA Netw Open. Nov 02, 2020;3(11):e2022920. [CrossRef] [Medline]
- Liu N, Ho AF, Pek PP, Lu TC, Khruekarnchana P, Song KJ, et al. Prediction of ROSC after cardiac arrest using machine learning. Stud Health Technol Inform. Jun 16, 2020;270:1357-1358. [CrossRef] [Medline]
- Elola A, Aramendi E, Rueda E, Irusta U, Wang H, Idris A. Towards the prediction of rearrest during out-of-hospital cardiac arrest. Entropy (Basel). Jul 09, 2020;22(7):758. [FREE Full text] [CrossRef] [Medline]
- Hsieh MJ, Chiang WC, Sun JT, Chang WT, Chien YC, Wang YC, et al. A prediction model for patients with emergency medical service witnessed out-of-hospital cardiac arrest. J Formos Med Assoc. May 2021;120(5):1229-1236. [FREE Full text] [CrossRef] [Medline]
- Baldi E, Caputo ML, Savastano S, Burkart R, Klersy C, Benvenuti C, et al. An Utstein-based model score to predict survival to hospital admission: the UB-ROSC score. Int J Cardiol. Jun 01, 2020;308:84-89. [FREE Full text] [CrossRef] [Medline]
- Li H, Wu TT, Yang DL, Guo YS, Liu PC, Chen Y, et al. Decision tree model for predicting in-hospital cardiac arrest among patients admitted with acute coronary syndrome. Clin Cardiol. Nov 2019;42(11):1087-1093. [FREE Full text] [CrossRef] [Medline]
- Srivilaithon W, Amnuaypattanapon K, Limjindaporn C, Imsuwan I, Daorattanachai K, Dasanadeba I, et al. Predictors of in-hospital cardiac arrest within 24 h after emergency department triage: a case-control study in urban Thailand. Emerg Med Australas. Oct 2019;31(5):843-850. [CrossRef] [Medline]
- Lee HY, Jung YH, Jeung KW, Lee BK, Youn CS, Mamadjonov N, et al. Ion shift index as a promising prognostic indicator in adult patients resuscitated from cardiac arrest. Resuscitation. Apr 2019;137:116-123. [CrossRef] [Medline]
- Liu JH, Chang HK, Wu CT, Lim WS, Wang HC, Jang JS. Machine learning based early detection system of cardiac arrest. In: Proceedings of the International Conference on Technologies and Applications of Artificial Intelligence. 2019. Presented at: TAAI 2019; November 21-23, 2019; Kaohsiung, Taiwan. [CrossRef]
- Jang DH, Kim J, Jo YH, Lee JH, Hwang JE, Park SM, et al. Developing neural network models for early detection of cardiac arrest in emergency department. Am J Emerg Med. Jan 2020;38(1):43-49. [CrossRef] [Medline]
- Seki T, Tamura T, Suzuki M, SOS-KANTO 2012 Study Group. Outcome prediction of out-of-hospital cardiac arrest with presumed cardiac aetiology using an advanced machine learning technique. Resuscitation. Aug 2019;141:128-135. [CrossRef] [Medline]
- Park JH, Shin SD, Song KJ, Hong KJ, Ro YS, Choi JW, et al. Prediction of good neurological recovery after out-of-hospital cardiac arrest: a machine learning analysis. Resuscitation. Sep 2019;142:127-135. [CrossRef] [Medline]
- Kwon JM, Jeon KH, Kim HM, Kim MJ, Lim S, Kim KH, et al. Deep-learning-based out-of-hospital cardiac arrest prognostic system to predict clinical outcomes. Resuscitation. Jun 2019;139:84-91. [CrossRef] [Medline]
- Kong T, Chung SP, Lee HS, Kim S, Lee J, Hwang SO, et al. The prognostic usefulness of the lactate/albumin ratio for predicting clinical outcomes in out-of-hospital cardiac arrest: a prospective, multicenter observational study (koCARC) study. Shock. Apr 2020;53(4):442-451. [CrossRef] [Medline]
- Harford S, Darabi H, Del Rios M, Majumdar S, Karim F, Vanden Hoek T, et al. A machine learning based model for out of hospital cardiac arrest outcome classification and sensitivity analysis. Resuscitation. May 2019;138:134-140. [CrossRef] [Medline]
- Kwon JM, Lee Y, Lee Y, Lee S, Park J. An algorithm based on deep learning for predicting in-hospital cardiac arrest. J Am Heart Assoc. Jun 26, 2018;7(13):e008678. [FREE Full text] [CrossRef] [Medline]
- Chang HK, Wu CT, Liu JH, Jang JS. Using machine learning algorithms in medication for cardiac arrest early warning system construction and forecasting. In: Proceedings of the Conference on Technologies and Applications of Artificial Intelligence. 2018. Presented at: TAAI 2018; November 30-December 2, 2018; Taichung, Taiwan. [CrossRef]
- Shin SM, Kim KS, Suh GJ, Kim K, Kwon WY, Shin J, et al. Prediction of neurological outcomes following the return of spontaneous circulation in patients with out-of-hospital cardiac arrest: retrospective fast-and-frugal tree analysis. Resuscitation. Dec 2018;133:65-70. [CrossRef] [Medline]
- Lee SW, Han KS, Park JS, Lee JS, Kim SJ. Prognostic indicators of survival and survival prediction model following extracorporeal cardiopulmonary resuscitation in patients with sudden refractory cardiac arrest. Ann Intensive Care. Aug 30, 2017;7(1):87. [FREE Full text] [CrossRef] [Medline]
- Liu T, Lin Z, Ong ME, Koh ZX, Pek PP, Yeo YK, et al. Manifold ranking based scoring system with its application to cardiac arrest prediction: a retrospective study in emergency department patients. Comput Biol Med. Dec 01, 2015;67:74-82. [CrossRef] [Medline]
- Goto Y, Maeda T, Nakatsu-Goto Y. Decision tree model for predicting long-term outcomes in children with out-of-hospital cardiac arrest: a nationwide, population-based observational study. Crit Care. Jul 27, 2014;18(3):R133. [CrossRef] [Medline]
- Goto Y, Maeda T, Goto Y. Decision-tree model for predicting outcomes after out-of-hospital cardiac arrest in the emergency department. Crit Care. Jul 11, 2013;17(4):R133. [FREE Full text] [CrossRef] [Medline]
- Hock Ong ME, Lee Ng CH, Goh K, Liu N, Koh ZX, Shahidah N, et al. Prediction of cardiac arrest in critically ill patients presenting to the emergency department using a machine learning score incorporating heart rate variability compared with the modified early warning score. Crit Care. Jun 21, 2012;16(3):R108. [FREE Full text] [CrossRef] [Medline]
- Hayakawa K, Tasaki O, Hamasaki T, Sakai T, Shiozaki T, Nakagawa Y, et al. Prognostic indicators and outcome prediction model for patients with return of spontaneous circulation from cardiopulmonary arrest: the Utstein Osaka Project. Resuscitation. Jul 2011;82(7):874-880. [CrossRef] [Medline]
- Churpek MM, Yuen TC, Edelson DP. Risk stratification of hospitalized patients on the wards. Chest. Jul 2013;143(6):1758-1765. [FREE Full text] [CrossRef] [Medline]
- Carrick RT, Park JG, McGinnes HL, Lundquist C, Brown KD, Janes WA, et al. Clinical predictive models of sudden cardiac arrest: a survey of the current science and analysis of model performances. J Am Heart Assoc. Aug 18, 2020;9(16). [CrossRef]
- Gräsner JT, Meybohm P, Lefering R, Wnent J, Bahr J, Messelken M, et al. ROSC after cardiac arrest--the RACA score to predict outcome after out-of-hospital cardiac arrest. Eur Heart J. Jul 2011;32(13):1649-1656. [CrossRef] [Medline]
- Liu N, Ong ME, Ho AF, Pek PP, Lu TC, Khruekarnchana P, et al. Validation of the ROSC after cardiac arrest (RACA) score in Pan-Asian out-of-hospital cardiac arrest patients. Resuscitation. May 2020;149:53-59. [CrossRef] [Medline]
- Bartkowiak B, Snyder AM, Benjamin A, Schneider A, Twu NM, Churpek MM, et al. Validating the electronic cardiac arrest risk triage (eCART) score for risk stratification of surgical inpatients in the postoperative setting: retrospective cohort study. Ann Surg. Jul 2019;269(6):1059-1063. [FREE Full text] [CrossRef] [Medline]
- Green M, Lander H, Snyder A, Hudson P, Churpek M, Edelson D. Comparison of the between the flags calling criteria to the MEWS, NEWS and the electronic Cardiac Arrest Risk Triage (eCART) score for the identification of deteriorating ward patients. Resuscitation. Mar 2018;123:86-91. [FREE Full text] [CrossRef] [Medline]
- Lascarrou JB, Merdji H, Le Gouge A, Colin G, Grillet G, Girardie PF, et al. Targeted temperature management for cardiac arrest with nonshockable rhythm. N Engl J Med. Dec 12, 2019;381(24):2327-2337. [CrossRef] [Medline]
- Sandroni C, Nolan J, Cavallaro F, Antonelli M. In-hospital cardiac arrest: incidence, prognosis and possible measures to improve survival. Intensive Care Med. Mar 2007;33(2):237-245. [CrossRef] [Medline]
- Dey D, Slomka PJ, Leeson P, Comaniciu D, Shrestha S, Sengupta PP, et al. Artificial intelligence in cardiovascular imaging: JACC state-of-the-art review. J Am Coll Cardiol. Mar 26, 2019;73(11):1317-1335. [FREE Full text] [CrossRef] [Medline]
- Thorén A, Rawshani A, Herlitz J, Engdahl J, Kahan T, Gustafsson L, et al. ECG-monitoring of in-hospital cardiac arrest and factors associated with survival. Resuscitation. May 2020;150:130-138. [CrossRef] [Medline]
- Teran F, Prats MI, Nelson BP, Kessler R, Blaivas M, Peberdy MA, et al. Focused transesophageal echocardiography during cardiac arrest resuscitation: JACC review topic of the week. J Am Coll Cardiol. Aug 11, 2020;76(6):745-754. [FREE Full text] [CrossRef] [Medline]
- Marzano L, Darwich AS, Jayanth R, Sven L, Falk N, Bodeby P, et al. Diagnosing an overcrowded emergency department from its electronic health records. Sci Rep. May 30, 2024;14(1):9955. [FREE Full text] [CrossRef] [Medline]
- Gross TK, Lane NE, Timm NL, Committee on Pediatric Emergency Medicine. Crowding in the emergency department: challenges and best practices for the care of children. Pediatrics. Mar 01, 2023;151(3):e2022060972. [CrossRef] [Medline]
- Austin EE, Blakely B, Tufanaru C, Selwood A, Braithwaite J, Clay-Williams R. Strategies to measure and improve emergency department performance: a scoping review. Scand J Trauma Resusc Emerg Med. Jul 15, 2020;28(1):55. [FREE Full text] [CrossRef] [Medline]
- Ngiam KY, Khor IW. Big data and machine learning algorithms for health-care delivery. Lancet Oncol. May 2019;20(5):e262-e273. [CrossRef] [Medline]
- Richardson SA, Anderson D, Burrell AJ, Byrne T, Coull J, Diehl A, et al. Pre-hospital ECPR in an Australian metropolitan setting: a single-arm feasibility assessment-the CPR, pre-hospital ECPR and early reperfusion (CHEER3) study. Scand J Trauma Resusc Emerg Med. Dec 13, 2023;31(1):100. [FREE Full text] [CrossRef] [Medline]
- Low CJ, Ling RR, Ramanathan K, Chen Y, Rochwerg B, Kitamura T, et al. Extracorporeal cardiopulmonary resuscitation versus conventional CPR in cardiac arrest: an updated meta-analysis and trial sequential analysis. Crit Care. Mar 21, 2024;28(1):57. [FREE Full text] [CrossRef] [Medline]
- Banerjee P, Dehnbostel FO, Preissner R. Prediction is a balancing act: importance of sampling methods to balance sensitivity and specificity of predictive models based on imbalanced chemical data sets. Front Chem. 2018;6:362. [FREE Full text] [CrossRef] [Medline]
- Luu J, Borisenko E, Przekop V, Patil A, Forrester JD, Choi J. Practical guide to building machine learning-based clinical prediction models using imbalanced datasets. Trauma Surg Acute Care Open. 2024;9(1):e001222. [FREE Full text] [CrossRef] [Medline]
- Efthimiou O, Seo M, Chalkou K, Debray T, Egger M, Salanti G. Developing clinical prediction models: a step-by-step guide. BMJ. Oct 03, 2024;386:e078276. [FREE Full text] [CrossRef] [Medline]
- Hinton G. Deep learning-a technology with the potential to transform health care. JAMA. Oct 18, 2018;320(11):1101-1102. [CrossRef] [Medline]
- Rajkomar A, Hardt M, Howell MD, Corrado G, Chin MH. Ensuring fairness in machine learning to advance health equity. Ann Intern Med. Dec 18, 2018;169(12):866-872. [FREE Full text] [CrossRef] [Medline]
- Staffa SJ, Zurakowski D. Statistical development and validation of clinical prediction models. Anesthesiology. Oct 01, 2021;135(3):396-405. [CrossRef] [Medline]
- Luxton DD. Recommendations for the ethical use and design of artificial intelligent care providers. Artif Intell Med. Oct 2014;62(1):1-10. [CrossRef] [Medline]
- Murphy K, Di Ruggiero E, Upshur R, Willison DJ, Malhotra N, Cai JC, et al. Artificial intelligence for good health: a scoping review of the ethics literature. BMC Med Ethics. Mar 15, 2021;22(1):14. [FREE Full text] [CrossRef] [Medline]
- Maris MT, Koçar A, Willems DL, Pols J, Tan HL, Lindinger GL, et al. Ethical use of artificial intelligence to prevent sudden cardiac death: an interview study of patient perspectives. BMC Med Ethics. May 04, 2024;25(1):42. [FREE Full text] [CrossRef] [Medline]
- Goldfarb MJ, Saylor MA, Bozkurt B, Code J, Di Palo KE, Durante A, et al. Patient-centered adult cardiovascular care: a scientific statement from the American Heart Association. Circulation. May 14, 2024;149(20):e1176-e1188. [FREE Full text] [CrossRef] [Medline]
- Dennison Himmelfarb CR, Beckie TM, Allen LA, Commodore-Mensah Y, Davidson PM, Lin G, et al. Shared decision-making and cardiovascular health: a scientific statement from the American Heart Association. Circulation. Oct 12, 2023;148(11):912-931. [CrossRef] [Medline]
- Tjoa E, Guan C. A survey on explainable artificial intelligence (XAI): toward medical XAI. IEEE Trans Neural Netw Learn Syst. Dec 2021;32(11):4793-4813. [CrossRef] [Medline]
- Martin SA, Townend FJ, Barkhof F, Cole JH. Interpretable machine learning for dementia: a systematic review. Alzheimers Dement. May 2023;19(5):2135-2149. [FREE Full text] [CrossRef] [Medline]
- Saqib K, Khan AF, Butt ZA. Machine learning methods for predicting postpartum depression: scoping review. JMIR Ment Health. Dec 24, 2021;8(11):e29838. [FREE Full text] [CrossRef] [Medline]
- Ferreira-Santos D, Amorim P, Silva Martins T, Monteiro-Soares M, Pereira Rodrigues P. Enabling early obstructive sleep apnea diagnosis with machine learning: systematic review. J Med Internet Res. Oct 30, 2022;24(9):e39452. [FREE Full text] [CrossRef] [Medline]
- International Bladder Cancer Nomogram Consortium, Bochner BH, Kattan MW, Vora KC. Postoperative nomogram predicting risk of recurrence after radical cystectomy for bladder cancer. J Clin Oncol. Aug 20, 2006;24(24):3967-3972. [CrossRef] [Medline]
- Papadimitroulas P, Brocki L, Christopher Chung N, Marchadour W, Vermet F, Gaubert L, et al. Artificial intelligence: deep learning in oncological radiomics and challenges of interpretability and data harmonization. Phys Med. Mar 2021;83:108-121. [FREE Full text] [CrossRef] [Medline]
- Chen RJ, Wang JJ, Williamson DF, Chen TY, Lipkova J, Lu MY, et al. Algorithmic fairness in artificial intelligence for medicine and healthcare. Nat Biomed Eng. Jul 28, 2023;7(6):719-742. [FREE Full text] [CrossRef] [Medline]
- Huang J, Galal G, Etemadi M, Vaidyanathan M. Evaluation and mitigation of racial bias in clinical machine learning models: scoping review. JMIR Med Inform. May 31, 2022;10(5):e36388. [FREE Full text] [CrossRef] [Medline]
- Vorisek CN, Stellmach C, Mayer PJ, Klopfenstein SA, Bures DM, Diehl A, et al. Artificial intelligence bias in health care: web-based survey. J Med Internet Res. Jul 22, 2023;25:e41089. [FREE Full text] [CrossRef] [Medline]
- McCradden MD, Joshi S, Mazwi M, Anderson JA. Ethical limitations of algorithmic fairness solutions in health care machine learning. Lancet Digit Health. May 2020;2(5):e221-e223. [FREE Full text] [CrossRef] [Medline]
- Char DS, Shah NH, Magnus D. Implementing machine learning in health care - addressing ethical challenges. N Engl J Med. Mar 15, 2018;378(11):981-983. [FREE Full text] [CrossRef] [Medline]
Abbreviations
AI: artificial intelligence
CA: cardiac arrest
CART: Cardiac Arrest Risk Triage
CPC 1-2: good cerebral performance category score 1 to 2
DT: decision tree
EMS: emergency medical services
GCS: Glasgow Coma Scale
IHCA: in-hospital cardiac arrest
LR: logistic regression
ML: machine learning
OHCA: out-of-hospital cardiac arrest
PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses
PROBAST: Prediction Model Risk of Bias Assessment Tool
RF: random forest
ROSC: return of spontaneous circulation
SVM: support vector machine
Edited by A Mavragani; submitted 23.10.24; peer-reviewed by S Netherton, M Bak; comments to author 02.12.24; revised version received 19.12.24; accepted 16.01.25; published 10.03.25.
Copyright©Shengfeng Wei, Xiangjian Guo, Shilin He, Chunhua Zhang, Zhizhuan Chen, Jianmei Chen, Yanmei Huang, Fan Zhang, Qiangqiang Liu. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 10.03.2025.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research (ISSN 1438-8871), is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.