Abstract
Background: Generative artificial intelligence (GenAI) systems like Anthropic’s Claude and OpenAI’s ChatGPT are rapidly being adopted in various sectors, including health care, offering potential benefits for clinical support, administrative efficiency, and patient information access. However, real-world adoption patterns and the extent to which GenAI is used for health care–related tasks remain poorly understood, and such organic usage is distinct from performance benchmarks measured in controlled settings. Understanding these organic usage patterns is key for assessing GenAI’s impact on health care delivery and patient-provider dynamics.
Objective: This study aimed to quantify the real-world frequency and scope of health care–related tasks performed using Anthropic’s Claude GenAI. We sought to (1) measure the proportion of Claude interactions related to health care tasks versus other domains; (2) identify specific health care occupations (as per O*NET classifications) with high associated interaction volumes; (3) assess the breadth of task adoption within roles using a “digital adoption rate”; and (4) interpret these findings considering the inherent ambiguity regarding user identity (ie, professionals vs public) in the dataset.
Methods: We performed a cross-sectional analysis of more than 4 million anonymized user conversations with Claude (ie, including both free and pro subscribers) from December 2024 to January 2025, using a publicly available dataset from Anthropic’s Economic Index research. Interactions were preclassified by Anthropic’s proprietary Clio model into standardized occupational tasks mapped to the US Department of Labor’s O*NET database. The dataset did not allow differentiation between health care professionals and the general public as users. We focused on interactions mapped to O*NET Healthcare Practitioners and Technical Occupations. Main outcomes included the proportion of interactions per health care occupation, the proportion of overall health care interactions versus other categories, and the digital adoption rate (ie, distinct tasks performed via GenAI divided by the total possible tasks per occupation).
Results: Health care–related tasks accounted for 2.58% of total analyzed GenAI conversations, far lower than domains such as computing (37.22%). Within health care, interaction frequency varied notably by role. Occupations emphasizing patient education and guidance exhibited the highest proportions, including dietitians and nutritionists (6.61% of health care conversations), nurse practitioners (5.63%), music therapists (4.54%), and clinical nurse specialists (4.53%). Digital adoption rates (task breadth) ranged widely across top health care roles (13.33%-65.00%), averaging 16.92%, below the global average (21.13%). Tasks associated with medical records and health information technicians had the highest adoption rate (65.00%).
Conclusions: GenAI tools are being adopted for a measurable subset of health care–related tasks, with usage concentrated in specific, often patient-facing roles. The critical limitation of user anonymity prevents definitive conclusions regarding whether usage primarily reflects patient information–seeking behavior (potentially driven by access needs) or professional workflow assistance. This ambiguity necessitates caution when interpreting current GenAI adoption. Our findings emphasize the urgent need for strategies addressing potential impacts on clinical workflows, patient decision-making, information quality, and health equity. Future research must aim to differentiate user types, while stakeholders should develop targeted guidance for both safe patient use and responsible professional integration.
doi:10.2196/73918
Introduction
Generative artificial intelligence (GenAI) systems, developed by organizations such as Anthropic (Claude), OpenAI (ChatGPT), Google (Gemini), Microsoft (Copilot), and Meta (Llama), represent a significant technological shift and are rapidly being adopted in diverse sectors globally at an unprecedented speed [
]. Health care represents a domain where GenAI holds considerable transformative potential. Proposed applications range from enhancing clinical decision-making support [ , ] and potentially automating high-skilled cognitive tasks distinct from previous software capabilities [ ], to streamlining burdensome administrative workflows and empowering patients with more accessible, tailored health information [ , ]. The rapid proliferation and increasing sophistication of these tools necessitate a clear understanding of their current role and impact within the health care ecosystem.
Initial research has started to explore the capabilities of GenAI in health care contexts, often demonstrating performance on par with, or even exceeding, that of human clinicians on specific, well-defined tasks such as diagnostic reasoning or medical knowledge recall [
- ]. For instance, studies have compared GenAI outputs to physician performance in complex diagnostic challenges [ ] and clinical reasoning assessments [ ]. Furthermore, user trust plays a critical role in the adoption of these AI tools [ ], adding another layer to understanding real-world uptake. However, these studies typically occur in controlled or experimental settings, focusing on benchmark performance rather than quantifying the extent and nature of GenAI’s spontaneous, real-world application across the breadth of health care activities.
Consequently, a significant gap exists between understanding GenAI’s potential capabilities and characterizing its actual adoption patterns and usage frequency for health care–related tasks by both professionals and the public. Studies have confirmed that rapid AI adoption is occurring across the economy [
, ], highlighting the urgency of understanding its domain-specific implications. This widespread availability and adoption of GenAI tools raise many questions about their real-world integration [ ]. As individuals increasingly turn to free and widely available GenAI models, how frequently are these interactions related to health care? Are certain clinical roles or types of health care tasks experiencing disproportionately high engagement via GenAI? Given these questions, we sought to move beyond capability assessment and provide empirical data on the current landscape of GenAI usage for tasks associated with health care occupations.
Using Anthropic’s foundational “Economic Index” research, which provided a broad overview of artificial intelligence (AI) usage across various economic sectors [
], this cross-sectional study aimed to conduct a more focused analysis specifically within the health care domain. Leveraging the same large-scale, anonymized dataset of user interactions with Claude, our primary objectives were to (1) quantify the overall proportion of GenAI interactions dedicated to tasks associated with health care occupations, as defined by the US Department of Labor’s [ ] O*NET database [ ]; (2) identify which specific health care roles exhibit the highest frequency of associated task interactions; (3) introduce and assess a novel metric—the “digital adoption rate”—representing the breadth of GenAI adoption within these roles based on the variety of distinct occupational tasks performed using the AI; and (4) contextualize health care–related usage by comparing it with usage patterns across other major occupational sectors.
Methods
Data Source and Study Design
We leveraged a publicly available, privacy-preserving aggregated dataset derived from over 4 million anonymized conversations with Claude from free and pro subscribers, released as part of Anthropic’s “Economic Index” research [
]. This dataset reflects a broad spectrum of real-world user interactions, offering an ecologically valid basis for examining GenAI engagement with professional tasks.
Occupational Task Mapping
Our analysis used the premapped dataset created and open-sourced by Anthropic, which contains conversations already mapped to standardized occupational tasks as defined by the US Department of Labor’s O*NET database. Anthropic’s team used their proprietary Clio model to create these mappings and validated them through manual review [
]. Specifically, they used hand validation across 150 examples for task hierarchy classifications, finding that 86% of conversations were judged as correctly assigned at the base O*NET task level, 91.3% at the middle level, and 95.3% at the top level of their hierarchical framework [ ]. We focused our analysis specifically on the subset of this dataset related to tasks and occupations within the health care domain. Multimedia Appendix 1 illustrates the Clio mapping methodology that underpins the dataset in this study.
Metrics and Outcomes
- Occupation-specific GenAI usage: We computed the proportion of total GenAI conversations attributable to each O*NET Healthcare Practitioners and Technical Occupation.
- Global occupational categories: We compared health care’s share of GenAI usage against other occupational categories (eg, computing, arts and media, education) to contextualize overall representation.
- Digital adoption rate: We calculated the ratio of distinct tasks performed with GenAI to total distinct potential tasks for each occupation. For instance, if an occupation had 20 potential tasks in the O*NET database and 5 of them appeared in GenAI conversations, the adoption rate was 25%. This analysis was conducted for the top 25 health care practitioners and technical occupations.
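The adoption-rate metric above can be sketched in a few lines; this is an illustrative computation (with hypothetical task identifiers, not the study dataset) that reproduces the worked example of 5 observed tasks out of 20 potential tasks. Note the study's analysis was performed in R; Python is used here only for illustration.

```python
def adoption_rate(observed_tasks, all_tasks):
    """Digital adoption rate: share of an occupation's O*NET tasks
    that appear at least once in GenAI conversations, as a percentage."""
    observed = set(observed_tasks) & set(all_tasks)  # count only tasks on the occupation's list
    return 100 * len(observed) / len(all_tasks)

# Hypothetical occupation with 20 potential O*NET tasks, 5 of which
# appeared in conversations (matching the worked example in the text).
all_tasks = [f"task_{i}" for i in range(1, 21)]
observed = ["task_2", "task_5", "task_9", "task_12", "task_18"]

print(adoption_rate(observed, all_tasks))  # 25.0
```

Because the metric counts distinct tasks only, repeated conversations about the same task do not increase an occupation's rate; the measure captures breadth of adoption rather than volume.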
Statistical Analysis
Descriptive statistics summarized GenAI usage proportions and adoption rates. We used simple comparisons (eg, ratio of usage in health care vs other categories) to maximize interpretability while highlighting relative differences. All analyses were performed using R software (version 4.3.1; R Foundation for Statistical Computing) [
].
Ethical Considerations
The Ethics Committee of The Ohio State University determined that our study did not qualify as human subjects research requiring review, as no identifiable personal data were included in the dataset. All conversations were preanonymized and aggregated, and the data were made public by Anthropic on February 10, 2025 [
].
Results
Overall Health Care Usage
Tasks mapped to Healthcare Practitioners and Technical Occupations collectively represented 2.58% of all GenAI conversations (Table 1 and Figure 1). In contrast, computing-related fields accounted for 37.22%, followed by arts and media (10.28%) and education (9.33%).
Occupational categories | Proportion of anonymized Claude conversations (global %) |
Computer and mathematics | 37.22 |
Arts, design, entertainment, sports, and media | 10.28 |
Educational instruction and library | 9.33 |
Office and administrative support | 7.87 |
Life, physical, and social science | 6.38 |
Business and financial operations | 5.93 |
Architecture and engineering | 4.51 |
Management | 4.48 |
Production | 2.93 |
Health care practitioners and technical | 2.58 |
Sales and related | 2.27 |
Community and social service | 2.07 |
Legal | 0.90 |
Installation, maintenance, and repair | 0.72 |
Food preparation and serving-related | 0.53 |
Personal care and service | 0.46 |
Construction and extraction | 0.42 |
Protective service | 0.38 |
Transportation and material moving | 0.28 |
Health care support | 0.27 |
Building and grounds cleaning and maintenance | 0.10 |
Farming, fishing, and forestry | 0.07 |
aAbsolute values (n) are not available due to privacy-preserving preprocessing of the source data that involved filtering and aggregation steps which preclude determination of exact conversation counts.
Within-Health Care Breakdown
In the top 25 list of Healthcare Practitioners and Technical Occupations by GenAI conversation share, dietitians and nutritionists (6.61%), nurse practitioners (5.63%), music therapists (4.54%), and clinical nurse specialists (4.53%) had the highest usage (Table 2 and Figure 2). Meanwhile, other occupations such as pediatricians (1.38%) and internists (1.29%) showed more modest representation.
Top 25 health care practitioners and technical occupations | Health care–mapped conversations, % (total conversations, %) |
Dietitians and nutritionists | 6.61 (0.17) |
Nurse practitioners | 5.63 (0.15) |
Music therapists | 4.54 (0.12) |
Clinical nurse specialists | 4.53 (0.12) |
Radiologists | 4.16 (0.11) |
Exercise physiologists | 3.91 (0.10) |
Respiratory therapists | 3.86 (0.10) |
Medical records and health information technicians | 3.74 (0.10) |
Critical care nurses | 3.03 (0.08) |
Medical and clinical laboratory technicians | 2.98 (0.08) |
Genetic counselors | 2.83 (0.07) |
Pharmacists | 2.75 (0.07) |
Nuclear medicine physicians | 2.54 (0.07) |
Pathologists | 2.52 (0.07) |
Advanced practice psychiatric nurses | 2.43 (0.06) |
Art therapists | 2.42 (0.06) |
Allergists and immunologists | 2.25 (0.06) |
Dietetic technicians | 2.08 (0.05) |
Nurse midwives | 1.89 (0.05) |
Medical and clinical laboratory technologists | 1.88 (0.05) |
Physical therapists | 1.75 (0.05) |
Naturopathic physicians | 1.69 (0.04) |
Acute care nurses | 1.55 (0.04) |
General pediatricians | 1.38 (0.04) |
General internists | 1.29 (0.03) |
aHealth care practitioners and technical occupations with the highest proportion of health care–related GenAI interactions.
bAbsolute numbers (n) are not available due to privacy-preserving preprocessing of the source data that involved filtering and aggregation steps which preclude determination of exact conversation counts.
cValues show the proportion of each occupation among health care–mapped conversations and corresponding percentages of total conversations across all domains.
Adoption Rates
Digital adoption rates for top health care occupations ranged from 13.33% to 65.00%, with an average of 16.92%, compared to a global task adoption rate of 21.13% (Table 3 and Figure 3). Medical records and health information technicians (65.00%), nurse midwives (50.00%), and general pediatricians (43.75%) had the highest task adoption rates.
Top 25 health care practitioners and technical occupation adoption rates | Adoption rate, % (affected tasks/total tasks) |
Medical records and health information technicians | 65.00 (13/20) |
Nurse midwives | 50.00 (10/20) |
General pediatricians | 43.75 (7/16) |
Nurse practitioners | 37.04 (10/27) |
Dietitians and nutritionists | 34.78 (8/23) |
Dietetic technicians | 33.33 (4/12) |
Exercise physiologists | 32.00 (8/25) |
General internists | 31.58 (6/19) |
Genetic counselors | 31.58 (6/19) |
Clinical nurse specialists | 31.03 (9/29) |
Pharmacists | 30.00 (6/20) |
Radiologists | 27.59 (8/29) |
Music therapists | 26.67 (8/30) |
Pathologists | 26.32 (5/19) |
Acute care nurses | 25.93 (7/27) |
Physical therapists | 25.00 (6/24) |
Naturopathic physicians | 25.00 (5/20) |
Medical and clinical laboratory technologists | 25.00 (4/16) |
Allergists and immunologists | 25.00 (4/16) |
Advanced practice psychiatric nurses | 25.00 (6/24) |
Art therapists | 24.00 (6/25) |
Critical care nurses | 17.24 (5/29) |
Nuclear medicine physicians | 15.38 (4/26) |
Respiratory therapists | 13.64 (3/22) |
Medical and clinical laboratory technicians | 13.33 (2/15) |
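Each percentage in the table above derives directly from the task counts shown in parentheses (adopted tasks divided by total O*NET tasks). As an illustrative consistency check (values transcribed from the table; Python used for illustration, not the study's R pipeline), the following sketch verifies every row and the reported 13.33%-65% range:

```python
# (reported rate %, tasks adopted, total O*NET tasks), transcribed from the table above.
rows = [
    (65.00, 13, 20), (50.00, 10, 20), (43.75, 7, 16), (37.04, 10, 27),
    (34.78, 8, 23), (33.33, 4, 12), (32.00, 8, 25), (31.58, 6, 19),
    (31.58, 6, 19), (31.03, 9, 29), (30.00, 6, 20), (27.59, 8, 29),
    (26.67, 8, 30), (26.32, 5, 19), (25.93, 7, 27), (25.00, 6, 24),
    (25.00, 5, 20), (25.00, 4, 16), (25.00, 4, 16), (25.00, 6, 24),
    (24.00, 6, 25), (17.24, 5, 29), (15.38, 4, 26), (13.64, 3, 22),
    (13.33, 2, 15),
]

# Every reported rate equals adopted/total, rounded to two decimals.
for reported, adopted, total in rows:
    assert round(100 * adopted / total, 2) == reported

# The extremes match the range reported in the text.
print(min(r for r, _, _ in rows), max(r for r, _, _ in rows))  # 13.33 65.0
```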
Discussion
Interpreting GenAI Usage Patterns in Health Care Tasks Through Multiple Perspectives
Our analysis reveals a significant volume of GenAI interactions related to health care tasks, concentrated within specific roles, particularly those involving substantial patient interaction, education, and guidance (eg, dietitians, nurse practitioners, therapists). These findings point toward an emerging, complex integration of GenAI into the health care ecosystem. However, the anonymized nature of the data necessitates consideration of multiple interpretations about the users driving this usage and their underlying reasons.
Interpretation 1: Patient-Driven Information Seeking and Self-Management
One plausible interpretation is that the observed patterns reflect significant usage by the public. Patients may be turning to GenAI tools such as Claude for accessible health information, clarification of medical terminology, lifestyle advice, or even emotional support related to health concerns—tasks commonly associated with roles such as dietitians or nurse practitioners. Such information-seeking behavior aligns with pre-existing trends of patients consulting “Dr. Google” [
] and reflects a potential desire for readily available information [ ]. Observed patterns among ChatGPT users seeking health information, who tend to be younger, report poorer health status, and more frequently use transient care settings [ ], may further point to this behavior, particularly when timely access to health care professionals is limited [ ]. If patients are indeed primary users, this trend could signify a shift toward greater patient engagement and potential empowerment through readily accessible, personalized information. However, this interpretation also carries significant risks: the potential for consuming misinformation or AI “hallucinations,” leading to misguided self-management decisions, delayed consultation with qualified professionals, or increased health anxiety [ ]. These risks have been highlighted in studies examining user trust and reliance on chatbots for health advice [ ]. The accessibility that drives usage could simultaneously introduce harm if not accompanied by subject matter expertise, digital health literacy, and critical appraisal skills.
Interpretation 2: Professional Workflow Assistance and Efficiency
Alternatively, the concentration of usage in these roles could indicate adoption by health care professionals themselves. Dietitians, nurse practitioners, clinical nurse specialists, and others might be leveraging GenAI to enhance their workflows: drafting patient education materials, summarizing complex information, exploring differential diagnoses, generating notes, or staying abreast of new research relevant to patient counseling. These uses suggest potential automation of specific cognitive tasks previously resistant to software solutions [
]. From this perspective, GenAI acts as an efficiency tool, potentially helping overloaded professionals manage demanding workloads within a strained health care system. Indeed, workers in exposed occupations often perceive substantial productivity potential in GenAI tools like ChatGPT [ ]. This interpretation suggests a pathway toward augmenting professional capabilities, potentially improving the consistency or breadth of patient communication. Yet, risks exist here as well, including over-reliance on AI outputs that potentially contain inaccuracies or biases, deskilling in core communication or critical thinking tasks through cognitive offloading, and the ethical implications of using AI to generate content in patient care without adequate oversight or personalization.
The Underlying Influence of Health Care System Pressures
Regardless of whether patients or professionals are the primary users, the observed patterns may also indicate broader systemic issues within health care. When professional support is not easily or quickly accessible, individuals—both patients and potentially time-pressed professionals—are likely to turn to readily available tools such as GenAI. Findings that users seeking health information from ChatGPT tend to report poorer health and greater use of transient care settings lend credence to this possibility [
]. This suggests that GenAI adoption in health care contexts might be partially driven by unmet needs and access gaps within the current system. Furthermore, even when productivity benefits are perceived, adoption may be hindered by factors such as employer restrictions or the need for training [ ]. This “compensatory” use underscores the need for health care systems to adapt, not only by integrating AI safely, but by addressing the root causes that drive patients or professionals toward potentially unverified AI tools out of necessity.
Implications for the Patient-Provider Dynamic and Information Hierarchy
The convergence of these possibilities (patients potentially arriving at appointments armed with AI-generated information, and professionals using GenAI to prepare for those encounters) points toward a fundamental reshaping of information hierarchies and the patient-provider relationship. The dynamic shifts from a traditionally asymmetric flow of information to one where both parties may engage with AI-derived knowledge. This shift demands new communication strategies, enhanced critical appraisal skills from both sides, and a shared understanding of GenAI’s current capabilities and limitations [
]. This is particularly relevant given that a significant portion of patients report acting on AI-generated health information, such as changing medications or seeking referrals, and often subsequently discussing the AI output with their physicians [ ]. Addressing the potential for GenAI to exacerbate existing health inequities, especially those based on digital access or literacy, is also paramount.Future Directions
Health care organizations and regulatory bodies should develop comprehensive frameworks for responsible AI integration. These should include creating evidence-based guidelines for clinical practice integration and establishing rigorous standards for the accuracy of AI-generated health information. Ethical considerations surrounding data privacy, training bias, and the potential for exacerbating health inequities must also be central to any framework for responsible GenAI integration. Future research should investigate whether using GenAI translates into improved health outcomes and how shifts in digital literacy affect patient engagement. Importantly, developing methodologies to better differentiate between professional and lay user interactions with health-related GenAI queries will be essential for accurately interpreting adoption trends and tailoring interventions at scale.
Limitations
First, we relied on anonymized, aggregated conversations; thus, information on user credentials and demographics is not available. Second, the mapping from conversation text to occupational tasks may be imperfect, particularly when laypersons seek professional-grade information. Third, our cross-sectional design provides a single snapshot in time rather than longitudinal trends. Fourth, conversation data were limited to Claude, since Anthropic is the only company that has open-sourced such data for analysis. The demographics of Claude users, along with the reasons why users engage with Claude, may not be representative of other GenAI tools, which could fundamentally bias usage patterns. Finally, we do not know what users did with the information they received from GenAI or how it impacted health outcomes. Future analyses should account for ongoing changes in GenAI capabilities and usage patterns.
Conclusion
Generative AI is making tangible inroads into health care–related tasks, yet its adoption appears selective, with high interaction volumes associated with patient-facing roles such as dietitians and nurse practitioners. Our analysis, based on interactions with Claude, highlights this emerging pattern but calls attention to a critical ambiguity: whether this usage is primarily driven by patients seeking information and navigating access challenges, by professionals seeking workflow efficiencies, or by a combination of both.
Regardless of the primary user, this integration demands immediate attention. The observed patterns necessitate a shift beyond simply acknowledging GenAI’s potential, toward implementing proactive strategies. Future research must focus on identifying who uses these tools for health care–related tasks and understanding their motivations, potentially through methods that more accurately infer user intent or role. Concurrently, stakeholders including clinicians, educators, developers, and policymakers must develop distinct guidance—resources promoting digital health literacy and critical appraisal for patients, and evidence-based best practices for safe and effective integration into professional clinical workflows. Addressing how systemic health care pressures may be influencing this adoption is also crucial. Ensuring GenAI enhances, rather than compromises, health care delivery requires targeted action now to shape its responsible use before usage patterns solidify.
Acknowledgments
We used GenAI (OpenAI’s ChatGPT o3-mini) to improve the conciseness and clarity of our manuscript to meet word count recommendations. We take full responsibility for the integrity of the content generated.
Authors' Contributions
Conceptualization: CCQY, CEQ, GA, JC
Methodology: GA
Formal analysis: GA
Supervision: CEQ
Visualization: ES, GA
Writing – original draft: GA
Writing – review & editing: CCQY, CEQ, ES, JC
Conflicts of Interest
None declared.
Global percentage of GenAI conversations by occupational categories.
PNG File, 92 KB
Top 25 health care practitioners and technical occupational titles by GenAI conversation share. Values in parentheses show each occupation's share of total global GenAI conversations, while the primary percentage represents the share within the health care practitioner group.
PNG File, 98 KB
GenAI task adoption rate for top 25 health care practitioners and technical occupations. Values show each occupation's adoption rate as a percentage of possible tasks, with the raw task counts (total adopted tasks/total available tasks) shown in parentheses.
PNG File, 114 KB
References
- Bick A, Blandin A, Deming DJ. The rapid adoption of generative AI. National Bureau of Economic Research. Feb 2025. [CrossRef]
- Li YH, Li YL, Wei MY, Li GY. Innovation and challenges of artificial intelligence technology in personalized healthcare. Sci Rep. Aug 16, 2024;14(1):18994. [CrossRef] [Medline]
- Takita H, Kabata D, Walston SL, et al. Diagnostic performance comparison between generative AI and physicians: a systematic review and meta-analysis. Health Informatics. Preprint posted online in 2024. [CrossRef]
- Webb M. The impact of artificial intelligence on the labor market. SSRN Journal. URL: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3482150 [Accessed 2025-05-27]
- Cabral S, Restrepo D, Kanjee Z, et al. Clinical reasoning of a generative artificial intelligence model compared with physicians. JAMA Intern Med. May 1, 2024;184(5):581-583. [CrossRef] [Medline]
- Kanjee Z, Crowe B, Rodman A. Accuracy of a generative artificial intelligence model in a complex diagnostic challenge. JAMA. Jul 3, 2023;330(1):78-80. [CrossRef] [Medline]
- Brodeur PG, Buckley TA, Kanjee Z, et al. Superhuman performance of a large language model on the reasoning tasks of a physician. arXiv. Preprint posted online on Dec 14, 2024. [CrossRef]
- Choudhury A, Shamszare H. Investigating the impact of user trust on the adoption and use of ChatGPT: survey analysis. J Med Internet Res. Jun 14, 2023;25:e47184. [CrossRef] [Medline]
- Humlum A, Vestergaard E. The adoption of ChatGPT (forthcoming). SSRN Journal. 2024. URL: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4827166 [Accessed 2025-05-27]
- Roth E. ChatGPT now has over 300 million weekly users. The Verge. URL: https://www.theverge.com/2024/12/4/24313097/chatgpt-300-million-weekly-users [Accessed 2024-04-12]
- Handa K, Tamkin A, McCain M, et al. Which economic tasks are performed with AI? Evidence from millions of Claude conversations. arXiv. Preprint posted online on Feb 11, 2025. [CrossRef]
- Occupational employment and wage statistics. US Bureau of Labor Statistics. 2025. URL: https://www.bls.gov/oes/ [Accessed 2025-05-27]
- O*NET online. US Department of Labor. 2025. URL: https://www.onetonline.org/ [Accessed 2025-05-27]
- Tamkin A, McCain M, Handa K, et al. Clio: privacy-preserving insights into real-world AI use. arXiv. Preprint posted online on Dec 18, 2024. [CrossRef]
- R Core Team. A language and environment for statistical computing version 4.3.1. R Foundation for Statistical Computing. 2023. URL: https://www.r-project.org/ [Accessed 2025-05-27]
- Anthropic. Anthropic economic index [dataset]. Hugging face. Feb 10, 2025. URL: https://huggingface.co/datasets/Anthropic/EconomicIndex [Accessed 2025-02-12]
- Stukus DR. How Dr Google Is impacting parental medical decision making. Immunol Allergy Clin North Am. Nov 2019;39(4):583-591. [CrossRef]
- Lee K, Hoti K, Hughes JD, Emmerton LM. Consumer use of “Dr Google”: a survey on health information-seeking behaviors and navigational needs. J Med Internet Res. Dec 29, 2015;17(12):e288. [CrossRef] [Medline]
- Ayo-Ajibola O, Davis RJ, Lin ME, Riddell J, Kravitz RL. Characterizing the adoption and experiences of users of artificial intelligence-generated health information in the United States: cross-sectional questionnaire study. J Med Internet Res. Aug 14, 2024;26:e55138. [CrossRef] [Medline]
- Feyman Y, Asfaw DA, Griffith KN. Geographic variation in appointment wait times for US military veterans. JAMA Netw Open. Aug 1, 2022;5(8):e2228783. [CrossRef] [Medline]
Abbreviations
AI: artificial intelligence
GenAI: generative artificial intelligence
Edited by Javad Sarvestan; submitted 13.03.25; peer-reviewed by Adeola Bakare, Chiamaka Pamela Agu, Odumbo Oluwole, Oluranti Akinsola, Varun Varma Sangaraju; final revised version received 30.04.25; accepted 01.05.25; published 30.05.25.
Copyright © Gabriel Alain, James Crick, Ella Snead, Catherine C Quatman-Yates, Carmen E Quatman. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 30.5.2025.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research (ISSN 1438-8871), is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.