Abstract
Emergency toxicology is a complex field requiring rapid and precise decision-making to manage acute poisonings effectively. Toxic exposures are often unpredictable, and time and resource constraints frequently challenge conventional diagnostic and treatment approaches. Artificial intelligence (AI) has emerged as a valuable tool in emergency medicine, offering the potential to enhance diagnostic accuracy, predict clinical outcomes, and improve clinical decision support systems. Despite the increasing focus on AI in medicine, its applications in emergency toxicology remain underexplored. This viewpoint provides perspectives on AI applications in emergency toxicology by highlighting key advancements, challenges, and future directions. While AI has demonstrated significant potential to improve toxicological predictions across a range of applications, challenges such as data quality, regulatory concerns, and implementation barriers still hinder its use. Further research, regulatory frameworks, and integration strategies are needed to ensure effective and ethical implementation in clinical practice.
J Med Internet Res 2025;27:e73121. doi: 10.2196/73121
Introduction
Acute poisonings and chronic exposure to chemicals represent a significant global health care burden, with an estimated 2 million lives and 53 million disability-adjusted life-years lost in 2019 []. Poisoning is a major cause of emergency department visits globally, and studies have shown that children and young adults are commonly affected [,]. Data from the United States Poison Control Centers reflect a concerning trend of rising numbers of severe toxic exposures leading to more serious outcomes []. Emergency care providers operate in a high-stakes environment where they are constantly challenged by the vast and ever-changing landscape of toxic exposures. These exposures can originate from a variety of sources, including prescription and over-the-counter pharmaceuticals; illicit drugs; industrial chemicals used in manufacturing and other industries; household products such as cleaning agents, pesticides, and paints; and natural toxins. The sheer diversity of potential toxins, coupled with the often unpredictable nature of their effects, makes the diagnosis and treatment of toxic exposures a particularly demanding aspect of emergency medicine.
A detailed risk assessment—considering the agent, dose, time of ingestion, route of exposure, drug formulation, and the individual’s underlying health conditions—is important to aid management decisions. Poisoning symptoms are often nonspecific, and the window for decontamination and targeted antidotal therapy is narrow, necessitating rapid diagnosis and intervention. However, the clinical variability of toxidromes (the constellations of signs and symptoms characteristic of particular toxin classes) can further complicate the diagnostic process and delay the initiation of appropriate treatment, leading to morbidity or mortality. Emergency care providers must therefore possess a broad knowledge base and be able to quickly assess and manage patients with a wide range of toxic exposures, often with limited information and under significant time pressure. In addition to the challenges posed by the diversity and unpredictability of toxic exposures, other challenges in the field of emergency toxicology include the limited availability of specialized expertise and resources, particularly in rural or under-resourced environments, and the difficulty of obtaining accurate medical histories in a timely manner, especially in cases of unwitnessed exposures or obtundation.
Conventional approaches, while effective in many cases, rely heavily on the clinician’s expertise and access to clinical toxicology consultation or toxicology databases, which can vary widely across health care settings. Artificial intelligence (AI) has emerged as a potential solution in many areas of health care, with proven potential to enhance diagnostic precision, predictive analytics, and clinical decision support systems across a variety of disciplines [,]. In emergency toxicology, AI holds promise for improving and expediting patient care by facilitating the rapid identification of toxins, predicting clinical trajectories, and recommending tailored treatment strategies. By using AI-driven tools such as machine learning (ML) algorithms and natural language processing (NLP), clinicians can address gaps inherent in traditional toxicology workflows, ultimately improving patient outcomes and operational efficiency. This paper explores the applications, challenges, and future directions of AI in emergency toxicology and its potential to redefine the management of acute poisonings in both high-resource and austere settings.
AI Applications in Emergency Toxicology
In the fast-paced environment of emergency medicine, clinicians are often faced with the challenge of managing complex and unpredictable toxic exposures under significant time and resource constraints. AI tools offer the opportunity to enhance diagnostic accuracy, predict outcomes, and guide treatment decisions. The following section highlights the diverse applications of AI in emergency toxicology, demonstrating its potential in clinical practice and patient care, as shown in .

Improving Diagnostic Accuracy
Poison Prediction
Obtaining an accurate exposure history in patients with acute poisoning can be challenging, as patients are often unable to convey a verbal history to attending health care providers. Diagnosis therefore depends on a constellation of clinical symptoms (the toxidrome-based approach) and on the emergency physician’s experience and clinical judgment. AI-based applications have demonstrated capabilities in identifying the causative drug in acutely poisoned patients. Early attempts by Chary et al [] to use probabilistic logic networks to mimic clinicians’ knowledge representation and decision-making in classifying toxidromes showed performance comparable to that of human experts on synthetic case scenarios of easy to intermediate difficulty. However, the logic network performed worse than two human medical toxicologists on challenging cases.
More recently, using 201,031 entries from the United States National Poison Data System (NPDS), Mehrpour et al developed ML- and deep neural network (DNN)–based models to distinguish between single-agent poisonings with eight drugs: acetaminophen, diphenhydramine, aspirin, calcium channel blockers (CCBs), sulfonylureas, benzodiazepines, bupropion, and lithium. Their ML model demonstrated an overall specificity of >92%, with >99% specificity for sulfonylureas, CCBs, lithium, and aspirin []. Meanwhile, the DNN models built with PyTorch and Keras showed specificities of 97% and 98%, respectively [].
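To make this general workflow concrete, the following is a minimal sketch, not the authors' code: a multiclass classifier trained on synthetic stand-ins for coded NPDS-style clinical features, with per-agent specificity computed from the confusion matrix. The feature matrix, labels, and model choice (a random forest from scikit-learn) are illustrative assumptions.

```python
# Minimal sketch (not the published models): a multiclass poison classifier in
# the spirit of the NPDS-based work, using synthetic data and scikit-learn.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
agents = ["acetaminophen", "diphenhydramine", "aspirin", "ccb",
          "sulfonylurea", "benzodiazepine", "bupropion", "lithium"]

# Synthetic stand-in for coded clinical features (vital signs, symptoms, labs).
X = rng.normal(size=(4000, 30))
y = rng.integers(0, len(agents), size=4000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)

# Per-class specificity: true negatives / (true negatives + false positives).
cm = confusion_matrix(y_te, clf.predict(X_te))
for i, agent in enumerate(agents):
    tn = cm.sum() - cm[i, :].sum() - cm[:, i].sum() + cm[i, i]
    fp = cm[:, i].sum() - cm[i, i]
    print(f"{agent}: specificity={tn / (tn + fp):.2f}")
```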
The development of ToxNet by Zellner et al [] from the Technical University of Munich marks a further advancement in the use of AI-based tools for poison prediction. The ToxNet architecture comprises a literature-matching network and a graph convolutional network functioning in parallel, optimized using inductive graph attention networks. This computer-aided diagnosis system, trained on data from 781,278 recorded calls, showed superior performance compared with other algorithmic models and, more critically, with clinicians experienced in clinical toxicology.
Vision Models for Vector Recognition
AI models have also demonstrated use in vector recognition. Vision language models and convolutional neural networks can recognize and classify objects and have been shown to aid diagnosis in other medical fields such as dermatology []. In emergency toxicology, these technologies have demonstrated utility in snakebite and toxic plant identification, with the potential to substantially aid emergency toxicologists. In 2019, de Castañeda et al [] published a commentary in Lancet Digital Health calling for the empowerment of neglected communities and health care providers by embracing the AI revolution for snakebite identification. This call to action was heeded by groups such as Bolon et al [], who in 2022 developed an AI model based on a vision transformer architecture, trained on a dataset of 386,006 snake photos, to identify snakes from across the world. The model achieved an unprecedented macro-averaged F1-score of 92.2%, with accuracies at the species and genus levels of 96.0% and 99.0%, respectively []. In 2023, Zhang et al conducted a systematic review of AI use in snakebite identification, concluding that AI-based methods can quickly and accurately distinguish between venomous and nonvenomous snake species [].
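As an illustration only (not the published pipeline, whose architecture, taxonomy coverage, and training data are far larger), the sketch below shows how a pretrained vision transformer backbone from torchvision might be fine-tuned for snake species classification; the number of classes and the dummy batch are placeholders.

```python
# Minimal sketch (an assumption, not the published model): fine-tuning a
# vision transformer backbone for snake species classification with PyTorch.
import torch
from torch import nn
from torchvision import models

NUM_SPECIES = 100  # placeholder; the published model covers far more taxa

# Pretrained ImageNet weights are downloaded on first use.
model = models.vit_b_16(weights=models.ViT_B_16_Weights.IMAGENET1K_V1)
model.heads.head = nn.Linear(model.heads.head.in_features, NUM_SPECIES)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One supervised step on a batch of 224x224 RGB snake photos."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Dummy batch to show the expected tensor shapes.
print(train_step(torch.randn(4, 3, 224, 224), torch.randint(0, NUM_SPECIES, (4,))))
```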
In 2021, Wagner et al [] described methods for mushroom data creation and curation to support classification tasks. Such groundwork appears to have borne fruit, as evinced by a 2024 case report on the use of Google’s Gemini AI to accurately identify a toxic plant species (Datura stramonium) in a patient presenting to Aksaray Training and Research Hospital in Turkey with restlessness, altered mental status, and hallucinations that occurred 2 hours after consumption of an herbal tea []. No botanist was available at the time, and successful treatment was administered based on Gemini’s identification of the plant seed. Later consultation with a botanist confirmed the identity of the seed in question.
Signal Data
Point-of-care tests used in the emergency department, such as electrocardiograms (ECGs), contain a wealth of signal data that may be interpretable by AI models. Chang et al [] used deep learning (DL) methods to train an AI system to detect digoxin toxicity using ECG data alone. In a human-machine comparison test, the model achieved an area under the curve (AUC) of 0.929, demonstrating noninferiority to experienced emergency and cardiovascular staff members and an emergency chief resident.
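The sketch below is an assumption about what such a model could look like rather than the study's architecture: a small 1D convolutional network in PyTorch that maps a single-lead ECG trace to a probability of digoxin toxicity; the signal length and layer sizes are arbitrary.

```python
# Minimal sketch (an assumption, not the study's architecture): a 1D CNN that
# maps a single-lead ECG trace to a probability of digoxin toxicity.
import torch
from torch import nn

class ECGToxicityNet(nn.Module):
    def __init__(self, signal_len: int = 5000):  # e.g., 10 s sampled at 500 Hz
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, stride=2), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(32, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, signal_len); output: probability of toxicity per trace.
        return torch.sigmoid(self.classifier(self.features(x).squeeze(-1)))

model = ECGToxicityNet()
print(model(torch.randn(2, 1, 5000)))  # two dummy traces
```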
AI–driven tools are reshaping the diagnostic landscape in emergency toxicology by enhancing the accuracy and efficiency of poison identification and vector recognition. These tools have the capacity to reduce diagnostic uncertainty and expedite decision-making in high-stakes environments, laying the groundwork for more consistent and effective interventions in emergency toxicology cases [].
Predictive Analytics
Triage
Early and accurate identification of patients at risk of severe outcomes using AI-based predictive models has the potential to change current triage workflows. Moulaei et al [] compared DL and ML models for predicting the need for intubation in methanol-poisoned patients using a dataset of 897 cases. Their long short-term memory (LSTM) model from the DL group, and random forest (RF) and extreme gradient boosting (XGB) models from the ML group, demonstrated specificity and sensitivity of up to 99% and 100%, respectively [].
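A minimal sketch of this kind of triage model is shown below, using synthetic data and a scikit-learn gradient-boosted classifier as a stand-in for the published RF/XGB models; the features and outcome labels are simulated for illustration only.

```python
# Minimal sketch (synthetic data; scikit-learn stand-in for the RF/XGB models):
# predicting intubation need from coded clinical features and reporting
# sensitivity and specificity.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(1)
X = rng.normal(size=(897, 20))  # illustrative feature matrix (vitals, labs, etc.)
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=897) > 1).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=1)
clf = GradientBoostingClassifier(random_state=1).fit(X_tr, y_tr)

tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
print(f"sensitivity={tp / (tp + fn):.2f}, specificity={tn / (tn + fp):.2f}")
```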
Prognosticating Clinical Outcomes
Similarly, several studies have investigated the utility of AI-based methods to forecast the clinical trajectory of patients with acute poisoning. RF models have yielded high predictive accuracy in carbon monoxide [], acetaminophen [], and diquat poisonings [], whereas other ML methods such as XGB and support vector machines (SVMs) have been applied to methadone [] and metformin poisonings []. A summary of studies investigating AI-based models for prognosis in acute poisonings is provided in the multimedia appendix.
Laboratory Analysis
AI-based tools for laboratory analysis may offer enhanced diagnostic precision in toxicology patients. Chen and Hu from Wenzhou Medical University investigated the use of SVMs for the prognosis of paraquat-poisoned patients using arterial blood gas and complete blood count indexes. Using a particle swarm optimization algorithm to optimize SVM parameters, they achieved a maximum accuracy of 76% with arterial blood gas indexes []. Separately, using complete blood count indexes yielded an accuracy of 85.2% and a specificity of 95.1% [].
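The following is an illustrative sketch, not the published pipeline: a small particle swarm searches over SVM hyperparameters (C and gamma, in log space), scored by cross-validated accuracy on synthetic stand-ins for laboratory indexes.

```python
# Minimal sketch (an illustration, not the published pipeline): tuning SVM
# hyperparameters with a small particle swarm on synthetic "laboratory" data.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=300, n_features=8, random_state=0)

def fitness(params):
    c, gamma = 10.0 ** params          # search in log10 space
    return cross_val_score(SVC(C=c, gamma=gamma), X, y, cv=5).mean()

rng = np.random.default_rng(0)
n_particles, dims, iters = 10, 2, 20
pos = rng.uniform(-3, 3, size=(n_particles, dims))   # [log10(C), log10(gamma)]
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_val.argmax()].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, dims))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, -3, 3)
    vals = np.array([fitness(p) for p in pos])
    improved = vals > pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmax()].copy()

print("best log10(C), log10(gamma):", gbest, "cv accuracy:", pbest_val.max())
```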
Clinical Decision Support Systems
The next step beyond predictive analytics involves the deployment of AI-based models as clinical decision support systems that integrate datasets and provide evidence-based recommendations. Prior studies in pharmacology have demonstrated the utility of AI models in therapeutic drug monitoring and model-informed precision dosing across a wide range of medication classes []. In emergency toxicology, Mohtarami et al [] developed an XGB model to predict the maintenance dose and duration of administration of naloxone in opioid toxicity cases, achieving an AUC of 0.97.
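As a hedged illustration of dose prediction (not the authors' model), the sketch below fits a gradient-boosted regressor, standing in for XGB, to synthetic exposure features and reports the mean absolute error of the predicted maintenance dose.

```python
# Minimal sketch (synthetic data; a gradient-boosted regressor standing in for
# the published XGB model): predicting a continuous naloxone maintenance dose
# from coded exposure and clinical features.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 12))                       # illustrative features
dose_mg = 2 + X[:, 0] + 0.5 * X[:, 1] ** 2 + rng.normal(scale=0.3, size=1000)

X_tr, X_te, y_tr, y_te = train_test_split(X, dose_mg, random_state=2)
reg = GradientBoostingRegressor(random_state=2).fit(X_tr, y_tr)
print("MAE (mg):", round(mean_absolute_error(y_te, reg.predict(X_te)), 2))
```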
Toxicovigilance
Advancements in NLP and the advent of large language models have opened possibilities for toxicovigilance through monitoring of real-time data from diverse sources such as social media. Sato et al [] from Keio University used NLP techniques to analyze 30,203 social media posts on Twitter (subsequently rebranded as X) to identify trends and patterns in drug misuse, including mentions of overdoses on medications such as codeine and pregabalin. Such monitoring could facilitate early identification of emerging threats and inform preventive strategies. Shah-Mohammadi and Finkelstein [] applied retrieval-augmented generation (RAG) models integrated with GPT-4 to improve the extraction of substance use data from clinical notes, flagging patients with risk factors for substance use disorders.
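To show the retrieval half of such a pipeline, the sketch below is a toy illustration (not the published system): clinical note chunks are ranked by TF-IDF similarity to a query about substance use, and the top chunks are assembled into a prompt. The call to a generative model such as GPT-4 is left as a placeholder, and the note text is fabricated for illustration.

```python
# Minimal sketch (an assumption, not the published system): the retrieval step
# of a RAG pipeline for substance use extraction. Chunks are embedded with
# TF-IDF, the most relevant ones retrieved, and a prompt assembled for an LLM.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

note_chunks = [
    "Patient reports drinking 6 beers daily for the past year.",
    "Past surgical history: appendectomy 2015.",
    "Admits to intermittent use of codeine cough syrup without prescription.",
]
query = "Does the note document any substance use and, if so, which substances?"

vectorizer = TfidfVectorizer().fit(note_chunks + [query])
chunk_vecs = vectorizer.transform(note_chunks)
query_vec = vectorizer.transform([query])

# Retrieve the two chunks most similar to the query.
scores = cosine_similarity(query_vec, chunk_vecs).ravel()
top_chunks = [note_chunks[i] for i in scores.argsort()[::-1][:2]]

prompt = (
    "Extract substances used, route, and frequency from the excerpts below.\n"
    + "\n".join(f"- {c}" for c in top_chunks)
)
print(prompt)  # this prompt would then be sent to a generative model such as GPT-4
```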
In addition to analyzing social media and clinical data, toxicovigilance efforts have benefited from predictive models aimed at identifying populations at risk of exposure to toxic agents. Potash et al [] developed an ML model using random forests to predict childhood lead poisoning by analyzing spatiotemporal data, housing characteristics, and blood lead level surveillance data, showing superior predictive performance compared to regression modeling []. Such targeted prediction facilitates the allocation of public health resources, enabling proactive rather than purely reactive interventions.
Patient Education
Large language models are a form of AI pretrained on vast corpora of text, image, or video data. Generative AI models such as ChatGPT are capable of producing human-like text and visual output and have demonstrated efficacy in providing clear and accurate explanations regarding poisoning symptoms, treatments, and preventive measures. In studies comparing AI-generated content with that of clinical toxicologists, ChatGPT’s responses were often indistinguishable from expert-generated content and were rated highly for readability and relevance [,]. The flexibility of generative AI-based systems allows educational materials to be tailored to individual needs, potentially improving health literacy and empowering patients to make informed decisions.
Limitations and Challenges
Data Quality and Bias
One of the most significant challenges in using AI in emergency toxicology is the quality and bias inherent in the data. Clinical datasets often suffer from incomplete records, inconsistent documentation, and noise, which can reduce the reliability of AI-based models []. Self-reported cases may lack precise details about the timing, dose, and the substances involved, introducing variability that compromises model performance. Additionally, bias in data collection—such as overrepresentation of certain demographic groups or geographic regions—can limit the generalizability of AI models []. For instance, a model trained predominantly on datasets from urban hospitals may underperform in rural or underserved settings where patterns of poisoning and health care infrastructure differ [].
Regulatory and Ethical Issues
The application of AI in emergency toxicology also raises regulatory and ethical concerns. Current regulations governing medical AI tools are evolving but remain inconsistent globally []. Validation, certification, and approval processes can be lengthy, potentially delaying the deployment of such tools. Ethical issues are also pronounced in this field, given the relative novelty of these tools. For example, AI models that make treatment recommendations could inadvertently exacerbate health inequities if trained on biased datasets. Additionally, explainability remains a critical issue, as many ML models operate as “black boxes,” making it difficult for clinicians to trust and act upon AI-driven predictions without a clear understanding of their rationale []. Such opacity may also impede regulatory acceptance.
Clinician Trust and Usability
Clinician trust in AI applications involves the willingness of providers to depend on automated systems, reflecting the perceived reliability, accuracy, and relevance of AI recommendations []. Such trust must also be bidirectional, with clinicians trusting AI outputs, and systems designed in ways that trust human inputs []. AI solutions must thus demonstrate algorithmic transparency, robustness, and sound alignment with clinical reasoning, in order to establish trust between system and user [,].
The usability of AI tools also affects clinician adoption and sustained use. Poorly designed AI interfaces contribute to cognitive overload, impede rapid decision-making, and ultimately undermine trust; conversely, intuitive and explainable interfaces that integrate into clinical workflows facilitate acceptance [].
Implementation Barriers
There are also multiple practical barriers to deploying AI solutions in clinical settings. First, integration into existing electronic health record systems can be technically challenging and resource intensive. Second, clinicians may lack the training or confidence to use AI tools effectively, leading to underutilization []. Third, the real-time nature of emergency medicine demands rapid and reliable AI predictions, which may be hindered by inadequate computational infrastructure or connectivity issues at inference time in low-resource settings. Finally, cost constraints may limit access to advanced AI tools in settings where they are most needed.
Necessity of AI Models
A critical question in the adoption of AI-based models in toxicology is whether their increased complexity is always necessary or useful. Several of the studies discussed here demonstrate that simpler statistical methods, such as logistic regression, can match or even outperform more advanced ML algorithms in certain scenarios. For example, Behnoush et al [] found that despite the reasonable performance of ML models in predicting seizures in tramadol poisoning, a logistic regression model had superior predictive performance with an equal number of important variables (AUC 0.77 vs naive Bayes AUC 0.71).
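A minimal sketch of this kind of head-to-head comparison on synthetic tabular data (an illustration, not a reproduction of the cited study) is shown below: cross-validated AUC for logistic regression versus naive Bayes on the same features.

```python
# Minimal sketch (synthetic data; an illustration of the comparison rather than
# the cited study): cross-validated AUC for logistic regression vs naive Bayes.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, n_features=10, weights=[0.8, 0.2],
                           random_state=3)
for name, model in [("logistic regression", LogisticRegression(max_iter=1000)),
                    ("naive Bayes", GaussianNB())]:
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: AUC={auc:.2f}")
```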
Given performance parity, traditional regression models have significant advantages over more complex AI models that may render them preferable for certain applications. They are more computationally efficient and more easily interpretable, and their simplicity allows clinicians to understand the contribution of individual predictors, fostering trust and facilitating integration into clinical workflows. Additionally, the scalability of AI and ML methods often comes with considerable resource requirements, including extensive computational infrastructure and large datasets. These prerequisites may not always be feasible in resource-constrained health care settings, limiting the practical application of such models. Regression models, on the other hand, can function effectively with smaller datasets and minimal computational demands, making them a robust option in many contexts.
While AI models offer undeniable potential in capturing complex, nonlinear interactions and high-dimensional patterns, their necessity should be evaluated on a case-by-case basis, and advanced methods should be used only when they demonstrably enhance predictive performance.
Future Directions
AI in emergency clinical toxicology continues to develop as new data sources and technological capabilities expand. Looking ahead, two key areas hold significant promise to transform patient assessment and intervention: (1) integrating large-scale, heterogeneous datasets and (2) using data from wearable and Internet of Things (IoT) devices.
Big Data Integration
The integration of large datasets from multiple sources holds promise for enhancing AI applications in emergency toxicology. Combining data from national databases, hospital electronic health records, and research databases can improve the robustness and accuracy of AI models []. Advanced data harmonization techniques and federated learning approaches can enable collaborative analysis while maintaining data privacy []. By incorporating real-world evidence and longitudinal and time-series data, AI models could evolve from static tools to dynamic systems capable of adapting to emerging trends and treatment strategies [].
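As a toy illustration of the federated idea (not a production framework), the sketch below has each simulated "site" fit a logistic regression locally and share only model coefficients, which are then averaged by site size; the site datasets are synthetic.

```python
# Minimal sketch (a toy illustration of federated averaging, not a production
# framework): each "site" fits a model on its own poisoning data, and only the
# coefficients, never the raw records, are pooled centrally.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import make_classification

def local_update(X, y):
    model = LogisticRegression(max_iter=1000).fit(X, y)
    return model.coef_, model.intercept_, len(y)

# Three hospitals with differently sized local datasets.
sites = [make_classification(n_samples=n, n_features=6, random_state=i)
         for i, n in enumerate([200, 500, 300])]
updates = [local_update(X, y) for X, y in sites]

total = sum(n for _, _, n in updates)
global_coef = sum(c * n for c, _, n in updates) / total      # weighted average
global_intercept = sum(b * n for _, b, n in updates) / total

print("federated coefficients:", np.round(global_coef, 2))
print("federated intercept:", np.round(global_intercept, 2))
```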
Wearable and IoT Data
The proliferation of wearable devices has broadened the scope of real-time health monitoring, with direct implications for emergency toxicology. These devices can continuously track vital signs, physical activity, biophysiometric parameters related to illicit drug use [], and even real-time concentrations of specific drugs in the body [], potentially detecting early signs of toxic exposure or overdose before the patient presents to the emergency department. As wearables become more sophisticated, AI algorithms can analyze the continuous data streams to detect anomalies, such as abrupt changes in heart rate or respiratory rate, that may suggest toxicity, enabling preemptive alerts. Further research is also necessary to demonstrate the cost-effectiveness and user-friendliness of these wearable and IoT solutions in day-to-day toxicology practice.
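A minimal sketch of such anomaly detection on a simulated wearable stream is shown below: a rolling z-score flags an abrupt heart rate surge, the kind of simple signal that a more sophisticated AI pipeline might refine before issuing an alert; the data and threshold are illustrative.

```python
# Minimal sketch (simulated data): flagging abrupt heart rate changes in a
# wearable stream with a rolling z-score before issuing a preemptive alert.
import numpy as np

rng = np.random.default_rng(4)
heart_rate = rng.normal(75, 3, size=600)      # 10 min of 1 Hz samples
heart_rate[400:] += 45                        # simulated toxicity-related surge

window = 60
alerts = []
for t in range(window, len(heart_rate)):
    baseline = heart_rate[t - window:t]
    z = (heart_rate[t] - baseline.mean()) / baseline.std()
    if abs(z) > 4:                            # illustrative alert threshold
        alerts.append(t)

print(f"first alert at t={alerts[0]} s" if alerts else "no alerts")
```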
Conclusions
The integration of AI into emergency toxicology holds significant potential to improve diagnostic accuracy, patient outcomes, and operational efficiency. While there have been promising advancements in the use of AI tools, implementation barriers as well as regulatory and ethical considerations must be addressed to enhance their adoption in this field. Future research should also explore the benefits of AI use in acute poisoning cases in terms of time and cost savings in patient care. Additionally, more prospective trials are essential to build a robust evidence base that facilitates the use of AI in real-world clinical applications.
Acknowledgments
No funding was received for this work.
Data Availability
All data generated or analyzed during this study are included in this published article and its supplementary information files.
Authors' Contributions
Conceptualization: JYMT, LPXY
Data curation: JYMT, LPXY
Supervision: JZYT
Writing – original draft: AJYN, CKWL, JYMT, LPXY, NMTC, EYN, ZYL
Writing – review & editing: DYZL, GGRS, YB
Conflicts of Interest
None declared.
Summary of studies on artificial intelligence models in acute poisoning outcome prediction.
DOCX File, 44 KB
References
- Burden of disease from chemicals. World Health Organization. URL: https://www.who.int/teams/environment-climate-change-and-health/chemical-safety-and-health/health-impacts/burden-of-disease-from-chemicals [Accessed 2024-12-28]
- World Report on Child Injury Prevention. World Health Organization; 2008. URL: https://www.who.int/publications/i/item/9789241563574 [Accessed 2025-08-19]
- Gummin DD, Mowry JB, Beuhler MC, et al. 2020 Annual Report of the American Association of Poison Control Centers’ National Poison Data System (NPDS): 38th Annual Report. Clin Toxicol (Phila). Dec 2021;59(12):1282-1501. [CrossRef] [Medline]
- Gummin DD, Mowry JB, Beuhler MC, et al. 2023 Annual Report of the National Poison Data System® (NPDS) from America’s Poison Centers®: 41st Annual Report. Clin Toxicol (Phila). Dec 2024;62(12):793-1027. [CrossRef] [Medline]
- Rajkomar A, Dean J, Kohane I. Machine learning in medicine. N Engl J Med. Apr 4, 2019;380(14):1347-1358. [CrossRef] [Medline]
- Topol EJ. High-performance medicine: the convergence of human and artificial intelligence. Nat Med. Jan 2019;25(1):44-56. [CrossRef] [Medline]
- Chary M, Boyer EW, Burns MM. Diagnosis of acute poisoning using explainable artificial intelligence. Comput Biol Med. Jul 2021;134:104469. [CrossRef] [Medline]
- Mehrpour O, Hoyte C, Delva-Clark H, et al. Classification of acute poisoning exposures with machine learning models derived from the National Poison Data System. Basic Clin Pharmacol Toxicol. Dec 2022;131(6):566-574. [CrossRef] [Medline]
- Mehrpour O, Hoyte C, Al Masud A, et al. Deep learning neural network derivation and testing to distinguish acute poisonings. Expert Opin Drug Metab Toxicol. 2023;19(6):367-380. [CrossRef] [Medline]
- Zellner T, Romanek K, Rabe C, et al. ToxNet: an artificial intelligence designed for decision support for toxin prediction. Clin Toxicol (Phila). Jan 2023;61(1):56-63. [CrossRef] [Medline]
- Salinas MP, Sepúlveda J, Hidalgo L, et al. A systematic review and meta-analysis of artificial intelligence versus clinicians for skin cancer diagnosis. NPJ Digit Med. May 14, 2024;7(1):125. [CrossRef] [Medline]
- de Castañeda RR, Durso AM, Ray N, et al. Snakebite and snake identification: empowering neglected communities and health-care providers with AI. Lancet Digit Health. Sep 2019;1(5):e202-e203. [CrossRef] [Medline]
- Bolon I, Picek L, Durso AM, Alcoba G, Chappuis F, Ruiz de Castañeda R. An artificial intelligence model to identify snakes from across the world: opportunities and challenges for global health and herpetology. PLoS Negl Trop Dis. Aug 2022;16(8):e0010647. [CrossRef] [Medline]
- Zhang J, Chen X, Song A, Li X. Artificial intelligence-based snakebite identification using snake images, snakebite wound images, and other modalities of information: a systematic review. Int J Med Inform. May 2023;173:105024. [CrossRef] [Medline]
- Wagner D, Heider D, Hattab G. Mushroom data creation, curation, and simulation to support classification tasks. Sci Rep. Apr 14, 2021;11(1):8134. [CrossRef] [Medline]
- Kokulu K, Sert ET. Artificial intelligence application for identifying toxic plant species: a case of poisoning with Datura stramonium. Toxicon. Nov 28, 2024;251:108129. [CrossRef] [Medline]
- Chang DW, Lin CS, Tsao TP, et al. Detecting digoxin toxicity by artificial intelligence-assisted electrocardiography. Int J Environ Res Public Health. Apr 6, 2021;18(7):3839. [CrossRef] [Medline]
- Moulaei K, Afrash MR, Parvin M, et al. Explainable artificial intelligence (XAI) for predicting the need for intubation in methanol-poisoned patients: a study comparing deep and machine learning models. Sci Rep. Jul 8, 2024;14(1):15751. [CrossRef] [Medline]
- Chan MJ, Hu CC, Huang WH, Hsu CW, Yen TH, Weng CH. An artificial intelligence algorithm for analyzing globus pallidus necrosis after carbon monoxide intoxication. Hum Exp Toxicol. 2023;42:9603271231190906. [CrossRef] [Medline]
- Yen JS, Hu CC, Huang WH, Hsu CW, Yen TH, Weng CH. An artificial intelligence algorithm for analyzing acetaminophen-associated toxic hepatitis. Hum Exp Toxicol. Nov 2021;40(11):1947-1954. [CrossRef] [Medline]
- Li H, Liu Z, Sun W, Li T, Dong X. Interpretable machine learning for the prediction of death risk in patients with acute diquat poisoning. Sci Rep. Jul 12, 2024;14(1). [CrossRef]
- Mehrpour O, Saeedi F, Vohra V, Hoyte C. Outcome prediction of methadone poisoning in the United States: implications of machine learning in the National Poison Data System (NPDS). Drug Chem Toxicol. Sep 2, 2024;47(5):556-563. [CrossRef]
- Mehrpour O, Saeedi F, Hoyte C, Goss F, Shirazi FM. Utility of support vector machine and decision tree to identify the prognosis of metformin poisoning in the United States: analysis of National Poisoning Data System. BMC Pharmacol Toxicol. 2022. [CrossRef] [Medline]
- Hu L, Lin F, Li H, et al. An intelligent prognostic system for analyzing patients with paraquat poisoning using arterial blood gas indexes. J Pharmacol Toxicol Methods. Mar 2017;84:78-85. [CrossRef]
- Chen H, Hu L, Li H, et al. An effective machine learning approach for prognosis of paraquat poisoning patients using blood routine indexes. Basic Clin Pharmacol Toxicol. Jan 2017;120(1):86-96. [CrossRef]
- Poweleit EA, Vinks AA, Mizuno T. Artificial intelligence and machine learning approaches to facilitate therapeutic drug management and model-informed precision dosing. Ther Drug Monit. Apr 1, 2023;45(2):143-150. [CrossRef] [Medline]
- Mohtarami SA, Mostafazadeh B, Shadnia S, et al. Prediction of naloxone dose in opioids toxicity based on machine learning techniques (artificial intelligence). Daru. Dec 2024;32(2):495-513. [CrossRef] [Medline]
- Sato R, Tsuchiya M, Ichiyama R, et al. Analysis of overdose-related posts on social media. Yakugaku Zasshi. 2024;144(12):1125-1135. [CrossRef] [Medline]
- Shah-Mohammadi F, Finkelstein J. Utilizing RAG and GPT-4 for extraction of substance use information from clinical notes. Stud Health Technol Inform. Nov 22, 2024;321:94-98. [CrossRef] [Medline]
- Potash E, Ghani R, Walsh J, et al. Validation of a machine learning model to predict childhood lead poisoning. JAMA Netw Open. Sep 1, 2020;3(9):e2012734. [CrossRef] [Medline]
- Nogué-Xarau S, Ríos-Guillermo J, Amigó-Tadín M, Grupo de Trabajo de Toxicología de la Societat Catalana de Medicina d’Urgències i Emergències (SoCMUETox). Comparing answers of artificial intelligence systems and clinical toxicologists to questions about poisoning: can their answers be distinguished? Emergencias. Oct 2024;36(5):351-358. [CrossRef] [Medline]
- Matsler N, Pepin L, Banerji S, Hoyte C, Heard K. Use of large language models to optimize poison center charting. Clin Toxicol (Phila). Jun 2024;62(6):385-390. [CrossRef] [Medline]
- Alowais SA, Alghamdi SS, Alsuhebany N, et al. Revolutionizing healthcare: the role of artificial intelligence in clinical practice. BMC Med Educ. Sep 22, 2023;23(1):689. [CrossRef] [Medline]
- Celi LA, Cellini J, Charpignon ML, et al. Sources of bias in artificial intelligence that perpetuate healthcare disparities-A global review. PLOS Digit Health. Mar 2022;1(3):e0000022. [CrossRef] [Medline]
- Abràmoff MD, Tarver ME, Loyo-Berrios N, et al. Foundational principles of Ophthalmic Imaging and Algorithmic Interpretation Working Group of the Collaborative Community for Ophthalmic Imaging Foundation. NPJ Digit Med. Sep 12, 2023;6(1). [CrossRef]
- Palaniappan K, Lin EYT, Vogel S. Global regulatory frameworks for the use of artificial intelligence (AI) in the healthcare services sector. Healthcare (Basel). Feb 28, 2024;12(5):562. [CrossRef] [Medline]
- Yang G, Ye Q, Xia J. Unbox the black-box for the medical explainable AI via multi-modal and multi-centre data fusion: a mini-review, two showcases and beyond. Inf Fusion. 2022;77:29-52. [CrossRef]
- Steerling E, Siira E, Nilsen P, Svedberg P, Nygren J. Implementing AI in healthcare-the relevance of trust: a scoping review. Front Health Serv. 2023;3:1211150. [CrossRef] [Medline]
- Sagona M, Dai T, Macis M, Darden M. Trust in AI-assisted health systems and AI’s trust in humans. npj Health Syst. Mar 28, 2025;2(1). [CrossRef]
- Starke G, Gille F, Termine A, et al. Finding consensus on trust in AI in health care: recommendations from a panel of international experts. J Med Internet Res. Feb 19, 2025;27:e56306. [CrossRef] [Medline]
- Okada Y, Ning Y, Ong MEH. Explainable artificial intelligence in emergency medicine: an overview. Clin Exp Emerg Med. Dec 2023;10(4):354-362. [CrossRef] [Medline]
- Oh S, Kim JH, Choi SW, Lee HJ, Hong J, Kwon SH. Physician confidence in artificial intelligence: an online mobile survey. J Med Internet Res. Mar 25, 2019;21(3):e12422. [CrossRef] [Medline]
- Behnoush B, Bazmi E, Nazari SH, Khodakarim S, Looha MA, Soori H. Machine learning algorithms to predict seizure due to acute tramadol poisoning. Hum Exp Toxicol. Aug 2021;40(8):1225-1233. [CrossRef] [Medline]
- Wang SY, Pershing S, Lee AY, AAO Taskforce on AI and AAO Medical Information Technology Committee. Big data requirements for artificial intelligence. Curr Opin Ophthalmol. Sep 2020;31(5):318-323. [CrossRef] [Medline]
- Xu J, Glicksberg BS, Su C, Walker P, Bian J, Wang F. Federated learning for healthcare informatics. J Healthc Inform Res. 2021;5(1):1-19. [CrossRef] [Medline]
- Moor M, Banerjee O, Abad ZSH, et al. Foundation models for generalist medical artificial intelligence. Nature. Apr 13, 2023;616(7956):259-265. [CrossRef]
- Carreiro S, Smelson D, Ranney M, et al. Real-Time mobile detection of drug use with wearable biosensors: a pilot study. J Med Toxicol. Mar 2015;11(1):73-79. [CrossRef]
- Liu Y, Li J, Xiao S, et al. Revolutionizing precision medicine: exploring wearable sensors for therapeutic drug monitoring and personalized therapy. Biosensors. Jul 12, 2023;13(7):726. [CrossRef]
Abbreviations
AI: artificial intelligence
AUC: area under the curve
CCB: calcium channel blocker
DL: deep learning
DNN: deep neural network
ECG: electrocardiogram
EHR: electronic health record
IoT: Internet of Things
ML: machine learning
NLP: natural language processing
NPDS: National Poison Data System
PSO: particle swarm optimization
RF: random forest
SVM: support vector machine
XGB: extreme gradient boosting
Edited by Taiane de Azevedo Cardoso; submitted 25.02.25; peer-reviewed by Francesk Mulita, Samah Ibrahim, Sridevi Wagle; final revised version received 08.07.25; accepted 30.07.25; published 22.08.25.
Copyright© Lorraine Pei Xian Yong, Joshua Yi Min Tung, Nicole Mun Teng Cheung, Zi Yao Lee, Ee Yang Ng, Alexander Jet Yue Ng, Clement Kee Woon Lim, Yuru Boon, Daniel Yan Zheng Lim, Gerald Gui Ren Sng, Jonathan Zhe Ying Tang. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 22.8.2025.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research (ISSN 1438-8871), is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.

