Published on 23.7.2025 in Vol 27 (2025)

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/71418.
Perceptions and Attitudes of Chinese Oncologists Toward Endorsing AI-Driven Chatbots for Health Information Seeking Among Patients with Cancer: Phenomenological Qualitative Study


1Division of Abdominal Tumor Multimodality Treatment, Cancer Center, West China Hospital, Sichuan University, Chengdu, China

2Department of Gynecology and Obstetrics Nursing, West China Second University Hospital, Sichuan University, Chengdu, China

3West China School of Nursing, Sichuan University, Chengdu, China

4Key Laboratory of Birth Defects and Related Disease of Women and Children, Sichuan University, Chengdu, China

5Division of Internal Medicine, Institute of Integrated Traditional Chinese and Western Medicine, West China Hospital, Sichuan University, Chengdu, China

6Department of Radiation Oncology, Hainan Affiliated Hospital of Hainan Medical University (Hainan General Hospital), #19 Xiuhua Road, Xiuying District, Haikou, China

Corresponding Author:

Zhaojun Li, MD


Background: Chatbots driven by large language model artificial intelligence (AI) have emerged as potential tools to enhance health information access for patients with cancer. However, their integration into patient education raises concerns among oncologists. Limited literature has examined the perceptions and attitudes of oncologists toward endorsing AI-driven chatbots for health information seeking.

Objective: This study aims to explore the perceptions and attitudes of Chinese oncologists toward endorsing AI-driven chatbots to patients with cancer.

Methods: In this phenomenological qualitative study, we purposively sampled oncologists from 4 hospitals in Southwest and East China and conducted semistructured interviews with 24 participants between November 19, 2024, and December 21, 2024. The data saturation principle was observed to determine the end point of data collection. Data were analyzed using the Colaizzi method.

Results: The participants were aged 42.0 (range 29‐53) years on average, including 9 (37%) female and 15 (62%) male participants. The participants had an average of 8.8 (range 1‐25) years in oncology. Of the participants, 7 (29%) had recommended AI chatbots to patients. Three key themes were revealed from analysis of interview transcriptions, including perceived benefits, significant concerns, and impacts on doctor-patient dynamics. Benefits included enhanced accessibility and potential support for chronic condition management. Concerns centered on liability, misinformation, lack of personalization, privacy and data security risks, and patient readiness and education. Oncologists stressed a dual impact of AI chatbots on doctor-patient dynamics, recognizing the potential for improved communication and risks of trust erosion due to overreliance on AI.

Conclusions: While recognizing the potential of AI-driven chatbots to enhance accessibility of health information and chronic disease management, Chinese oncologists report significant concerns, including liability, misinformation, lack of personalization, privacy and data security risks, and patient readiness. Addressing the challenges requires comprehensive solutions, such as clear policies and guidelines, rigorous testing and validation, institutional endorsement, and robust patient and provider education. Future efforts should focus on resolving the barriers while leveraging the strengths of AI technology to support patient-centered care in a safe, effective, and ethical manner.

J Med Internet Res 2025;27:e71418

doi:10.2196/71418

Keywords



Introduction

Chatbots driven by large language model (LLM) artificial intelligence (AI) have emerged as innovative tools in health care [1-3]. They leverage advanced natural language processing to interact with users in real time, simulating human conversation to provide information, answer queries, and offer support based on a vast repository of data [4]. Equipped with LLMs, they are capable of understanding complex questions and delivering contextually relevant responses. This makes them valuable assets for health information dissemination [3].

The integration of AI-driven chatbots in health care has been growing rapidly, given their potential to bridge gaps in patient education and support. The tools can assist individuals in understanding medical conditions, exploring treatment options, and navigating health systems. Their ability to operate 24/7 offers a unique advantage in addressing patients’ nonurgent concerns and reducing reliance on in-person consultations for routine information [5-7]. Additionally, they hold promise in alleviating the burden on health care providers by streamlining communication and enhancing patient engagement [8,9]. Globally, widely recognized AI-driven chatbots, such as ChatGPT, have gained prominence for their versatility in responding to health-related inquiries [10]. In China, domestic tools such as Kimichat have been developed to cater to local linguistic and cultural contexts.

Accessible and reliable health information is crucial for patients with cancer. It enables them to make informed decisions about their treatment, manage symptoms, and improve their overall quality of life. In cancer care, patients often face complex medical decisions that require a clear understanding of their condition, treatment options, potential side effects, and prognosis [11,12]. Reliable information can also help patients and their families cope with the emotional and psychological challenges associated with a cancer diagnosis and foster a sense of control and preparedness [13,14]. However, patients with cancer frequently encounter significant challenges in accessing relevant information. One major issue is the prevalence of misinformation, particularly from unverified web-based sources. Patients who turn to search engines or social media platforms for answers may encounter inaccurate, incomplete, or overly generalized information that can mislead them or exacerbate their anxiety [15,16]. Additionally, limited access to health care resources further complicates information-seeking efforts. Face-to-face consultations with oncology specialists are often infrequent or unavailable, and patients may struggle to obtain timely answers, which leaves gaps in their understanding of their disease and its management. Even in urban centers, the high patient-to-doctor ratio often results in brief consultations, leaving little time for detailed explanations of medical conditions or treatments. Innovative solutions such as AI-driven chatbots, by providing reliable and easily accessible health information to patients with cancer, could therefore complement traditional health care delivery.

According to current reports, LLM AI chatbots, such as ChatGPT, have significantly impacted patient education by providing accessible health information and personalized support. They have been used to interpret complex medical data, generate patient-friendly educational materials, and answer health-related queries, which has enhanced patient engagement and health literacy [17]. However, evaluations of the chatbots reveal mixed results regarding accuracy and reliability. For instance, studies assessing ChatGPT’s responses to medical queries have noted variability in accuracy, safety, relevance, and readability. Challenges such as the risk of disseminating misinformation, lack of personalization, and ethical concerns related to patient privacy have been noted [18-20].

Oncologists are at the forefront of cancer care. They serve as clinical decision makers and trusted advisors in patient education. Their role has evolved to include guiding patients through increasingly complex treatment options and emerging digital health tools. In today’s rapidly changing health care landscape, the endorsement of innovative technologies, such as AI-driven chatbots, is critical, as it can significantly influence patient trust and engagement. Despite oncologists’ central impact on patient care and information dissemination, their unique perspectives on these digital tools have received limited attention. This study seeks to bridge that gap by exploring Chinese oncologists’ perceptions and attitudes toward endorsing AI-driven chatbots for health information seeking among patients with cancer. Using a phenomenological qualitative approach, we sought to capture their lived experiences, illuminate the pivotal role they play in integrating digital innovations into clinical practice, and identify both facilitators of and barriers to adoption.


Methods

Study Setting

The study was conducted among oncologists working in 4 major hospitals in Southwest and East China. The primary investigator (PI; first author) is from West China Hospital, which is one of the country’s largest comprehensive hospitals and a leading regional center for cancer care. The other 3 hospitals included 2 major comprehensive hospitals and one specialized cancer hospital, all recognized for their strong oncology services. The hospitals were chosen to ensure diverse representation of oncologists’ experiences and perspectives, given a wide range of institutional settings and patient populations.

Study Design

We used a phenomenological qualitative design to explore the perceptions and attitudes of oncologists toward endorsing AI-driven chatbots to their patients for health information seeking. The phenomenological approach was chosen for its ability to capture and interpret the subjective experiences and insights of participants and provide a deeper understanding of the study topic [21]. The design was deemed appropriate for investigating the attitudes and concerns of oncologists in the context of integrating AI technologies into patient education and care.

Sampling Strategy

A purposive sampling strategy was used, which ensured the selection of participants who had appropriate backgrounds and could provide relevant, rich, and detailed insights [22].

Participant Enrollment

Licensed oncologists actively practicing at the selected hospitals, with at least 2 years of clinical experience, were recruited for the study. Participants were required to have experience in providing patient education and using AI-driven chatbots, including ChatGPT or similar Chinese chatbots. Those who expressed interest in participating, after being given detailed study information, were enrolled.

Notably, we set a minimum requirement of 2 years of clinical experience to ensure that participants had adequate exposure to the complexities of real-world clinical practice, as well as exposure to the AI chatbots emerging in the last 2 years. The threshold was meant to ensure that the collected data are grounded in experienced perspectives, thereby enhancing the credibility and validity of the findings by relying on the judgments of practitioners who have encountered realistic clinical scenarios and patient interactions.

To recruit participants, invitations were extended via phone calls through the PI’s professional network. Given the PI’s senior standing in oncology and long-standing relationships with a wide range of practitioners across different hospital settings in Southwest and East China, the purposive sampling strategy enabled the inclusion of oncologists with diverse demographic backgrounds, clinical experiences, and levels of exposure to AI-driven chatbots. This targeted approach ensured a comprehensive representation of perspectives, thereby mitigating potential recruitment bias related to network-based sampling. A 20‐30 minute semistructured interview was conducted either immediately or at a later time convenient for the participant.

Data Collection

Data were collected through semistructured phone interviews. An interview guide was developed and pilot-tested with 3 oncologists to refine the questions and flow (Multimedia Appendix 1). The pilot interviews and transcripts were excluded from the final analysis. Each interview phone call was audio-recorded with permission. Notes were also taken to capture key observations. Data collection continued until data saturation was achieved, as determined by the point where no new analytic information emerged in 3 consecutive interviews [23]. Participants’ demographic data were collected before interviews using a standard demographic information form, which was compiled and desensitized for subsequent analysis.

Data Analysis

Data were analyzed using the Colaizzi method, a rigorous approach that systematically guides researchers through extracting and organizing significant statements, formulating meanings, and clustering themes [24]. We chose the Colaizzi method over other phenomenological techniques, such as the Giorgi or van Manen approaches, for several reasons. First, unlike the other 2 methods, the practice of returning to participants for validation of the identified meanings enhances the credibility and confirmability of our findings. This was especially important as our findings might be used to support sensitive clinical decisions. Second, the Colaizzi method is particularly suited for capturing the in-depth details of lived experiences, which aligns with our objective of exploring the complex perceptions and attitudes of oncologists. It ensures a comprehensive interpretation of the data while maintaining transparency throughout the analysis, which strengthens the study’s methodological rigor. The process involved multiple steps. The researchers first read the transcripts at least 3 times to gain familiarity and then extracted significant statements and formulated meanings. The meanings were clustered into themes and subthemes, which were returned to participants for validation. Any discrepancies were discussed until consensus was achieved.

Study Rigor

To ensure the trustworthiness of the study, we implemented several strategies, including member checking for ensuring credibility, where participants reviewed and validated the findings; an audit trail documenting key decisions in the study process for dependability; regular reflexivity discussions among the research team to acknowledge and mitigate potential biases due to their personal and professional backgrounds and assumptions; and a reflexivity statement for outlining the researchers’ positions, potential influences on the study, and steps taken to minimize bias throughout the research process (Multimedia Appendix 2) [25-27]. Although we did not formally assess intercoder reliability or conduct external validation of the coding process, multiple researchers independently coded the data and subsequently discussed discrepancies until consensus was reached.

Ethical Considerations

The study was ethically approved by the Ethics Committee of West China Hospital, Sichuan University (HXLL0751). Verbal informed consent was obtained from each participant at the beginning of their interview, including explicit permission for the interview to be recorded. The consent process was documented as part of the audio recording. The study was conducted in accordance with the principles of the Declaration of Helsinki and relevant regulatory codes and guidelines for human subject protection. No compensation of any form was provided to participants for their participation.

All identifiers were anonymized. Participants were assigned unique identifiers (eg, P1 and P2). Audio recordings and transcripts were stored securely on a password-protected flash drive, which was kept by the PI and was accessible only to the research team. Consent forms and other sensitive documents were stored in a locked drawer in the PI’s office. Participants were informed about their right to withdraw from the study at any time without any repercussions.


Results

Overview

A total of 29 candidate oncologists were contacted, with 5 (17%) of them declining participation due to unavailability for interview. Eventually, 24 oncologists were interviewed between November 19, 2024, and December 21, 2024, by which time data saturation was achieved. The interviews lasted 21.7 (range 16-25) minutes on average. Notably, although the interviews were relatively short given the phenomenological nature of this study, the depth and richness of the data were not compromised, as the busy schedules and direct communication style of Chinese oncologists enabled them to convey focused, meaningful insights efficiently. The participants were aged 42.0 (range 29-53) years on average, including 9 (37%) female and 15 (62%) male participants. The participants had an average of 8.8 (range 1-25) years in oncology. Of the participants, 7 (29%) had recommended AI chatbots to patients. Table 1 summarizes the demographic characteristics of participants.

Table 1. Summary of participant demographic characteristics (N=24).

Characteristic: n (%)

Sex
  Female: 9 (37)
  Male: 15 (62)

Age (years)
  <35: 6 (25)
  35-45: 9 (37)
  >45: 9 (37)

Prior exposure to AI chatbots
  Chinese chatbots: 7 (29)
  ChatGPT and Chinese chatbots: 17 (71)

Ever recommended an AI chatbot to a patient
  Yes: 7 (29)
  No: 17 (71)

Expressed reluctance to use an AI chatbot in clinical practice
  Yes: 19 (79)
  No: 5 (21)

AI: artificial intelligence.

Key Themes

Three overarching themes with 7 subthemes emerged from the interviews: the perceived benefits of AI chatbots; 5 significant concerns, namely liability, misinformation, lack of personalization, patient privacy and data security, and patient readiness; and the impact of chatbots on the doctor-patient relationship.

Theme 1: Perceived Benefits of AI Chatbots

Participants cited 2 main potential benefits of AI chatbots in oncology care: their ability to enhance accessibility and to support chronic disease management. While most participants saw them generally as supplementary to professional care, some recognized their value in improving patient education and engagement, especially for routine queries or long-term disease management.

Subtheme 1.1: Accessibility and Convenience

The 24/7 availability of AI chatbots was the most frequently mentioned advantage. Participants noted that the tools could help bridge gaps in access to health information. By addressing nonurgent queries, they were believed to be an effective asset to reduce the workload of health care providers while enabling patients to seek basic information independently.

One biggest advantage [of the AI chatbots] is that they are readily available. Patients can access them at any time, even late at night. For minor concerns, such as looking up symptoms or side effects, patients don’t have to wait to see a doctor.
[P2]
It’s not feasible for patients to call a hospital or doctor for small queries. The AI [chatbots] is particularly useful in this situation. Patients can get immediate answers to their questions.
[P7]
Subtheme 1.2: Potential for Chronic Condition Management

Participants also recognized the potential for AI chatbots to support patients managing chronic conditions, as a result of their high availability. By providing timely answers when patients have concerns or questions in their chronic disease management, the tools could help them adhere to treatment plans more effectively.

Patients with cancer have many questions about their care and rehabilitation on a daily basis. If the questions are not answered timely, they might lose track of how to manage their care effectively. The AI [chatbots] can fill this gap. They can provide instant answers, for example about new symptoms, medications, side effects, or even dietary recommendations.
[P10]

Theme 2: Concerns

Participants expressed significant concerns about integrating AI chatbots into real-world oncology care. The concerns primarily revolved around issues of liability, accuracy, lack of personalization, patient privacy and data security, and the readiness of patients to use the tools effectively. Some participants even emphasized that the challenges must be addressed before chatbots can be widely recommended or trusted in clinical practice.

Subtheme 2.1: Liability Issues

The question of accountability was a recurring concern among most participants, even those who had already recommended the tools to patients. Oncologists were uncertain about who would bear responsibility if patients experienced adverse outcomes after following chatbot recommendations. This led to hesitation in endorsing or recommending them in clinical practice.

For certain, liability is the biggest concern. If a patient follows advice given by AI and something goes wrong, who will be held responsible? As a doctor, I cannot recommend a tool to patients unless I myself am certain whether it’s safe and reliable and know that I’m safe from the liabilities. Too much risk at this point.
[P6]
Patients are not [medical] professionals. They are likely to assume that the information from a tool recommended by a doctor or hospital is trustworthy. But what if the AI [chatbot] provides wrong advice, for example because of incorrect source material or a programming error? It’s not just the chatbot developers who will face troubles. We doctors, too. What if something goes wrong with the patient after they follow the AI [chatbot’s] advice, for example, taking a wrong medication? Who will be held responsible?
[P19]
Subtheme 2.2: Misinformation Risk

Another frequently raised concern was the risk of chatbots providing inaccurate or misleading information. This was particularly worrisome for patients with low health literacy, who might misinterpret chatbot responses or take them at face value without consulting a health care professional.

I have always been skeptical about the AI [chatbot’s] responses. Can they be outdated or incorrect because of wrong source data?
[P11]
Sometimes, AI [chatbots] can oversimplify complex medical information. For example, they might give a generic explanation of symptoms that doesn’t account for the patient’s conditions. It’s fine if the patient doesn’t act on it, but what if the patient doesn’t really understand it and becomes unwilling to see a doctor because they already have the “answer.”
[P24]
Subtheme 2.3: Lack of Personalization

Several participants indicated concern about the inability of chatbots to provide personalized recommendations. They emphasized that oncology care often requires advice tailored to individual patient needs that cannot be generalized.

Every patient is unique, especially in oncology. I have tried the AI [chatbots] myself. Most of them don’t have functionalities to enter information on a patient’s personal conditions. The responses are generic... Yes, you can manually enter a patient’s condition, but it takes a lot of work and you have to enter it every time in a new chat session.
[P5]
Subtheme 2.4: Patient Privacy and Data Security

Patient privacy and data security concerns were another significant issue raised by participants. Oncologists were wary about how patient data would be collected, stored, and used by AI systems and the companies behind them, especially given the sensitive nature of medical data.

The state is very serious about protecting patient privacy and medical data security. We are completely blind when it comes to the AI [chatbots]. Even if the patient is willing to enter their medical data, how do you know what the AI or the company would do with it? What if the information is leaked? Will I be held responsible because I recommended it to the patient?
[P5]
Subtheme 2.5: Patient Readiness and Education

Participants emphasized that not all patients are equipped to use AI chatbots effectively. Factors such as age, technological literacy, and familiarity with digital tools could limit the accessibility of the technologies for certain populations.

Some older patients don’t even know how to use their smartphones. It could be even more difficult for them to use AI.
[P7]
There is this possible usability issue. Some patients, for example older patients or those who are not familiar with digital products, they may not be able to use the AI [chatbots] effectively.
[P22]

Theme 3: Impact on Doctor-Patient Dynamics

Participants discussed how chatbots could influence the dynamics of doctor-patient interactions. Some believed chatbots might enhance communication by helping patients prepare better for consultations.

AI [chatbots] can help patients better prepare for subsequent consultations. For example, they may already have asked basic questions and gotten answers. This allows them to focus on more complex issues during the consultation.
[P12]

On the other hand, others expressed concern that overreliance on the tools could lead to conflicts or misunderstandings.

There is a risk. Some patients might trust the chatbot more than the doctor. If the AI’s advice contradicts the doctor’s, this could create unnecessary tension, even undermine trust in us doctors.
[P16]

Discussion

Principal Findings

In this qualitative study of Chinese oncologists, we investigated their perceptions and attitudes toward endorsing AI-driven chatbots to patients with cancer. Three key themes were revealed from analysis of interview transcriptions, including perceived benefits, significant concerns, and impacts on doctor-patient dynamics. Benefits included enhanced accessibility and potential support for chronic condition management. Concerns centered on liability, misinformation, lack of personalization, privacy and data security risks, and patient readiness and education. Oncologists stressed a dual impact of AI chatbots on doctor-patient dynamics, recognizing potential for improved communication and risks of trust erosion due to overreliance on AI.

Comparison to Prior Work

According to our findings, fewer than a third of participants had recommended the tools to patients, despite all having used chatbots themselves. This hesitation reflects a cautious approach among oncologists, shaped by a balance between recognizing the potential benefits of chatbots, such as enhancing accessibility and supporting chronic disease management, and grappling with significant concerns, including liability, misinformation, lack of personalization, and privacy and data security risks. The finding is consistent with prior reports on Chinese oncologists’ perceptions and attitudes [28,29]. There seems to be a gap between personal experience with AI tools and professional endorsement, which suggests significant challenges in integrating such technologies into real-world oncology care.

In addition to their high availability, a widely recognized strength of LLM AIs in current literature [30], participants emphasized the potential of AI chatbots to support chronic disease management, particularly in cancer care. Their 24/7 accessibility enables patients to address concerns promptly, which is essential for maintaining adherence to complex treatment plans. Participants noted that timely responses from chatbots could prevent patients from feeling neglected or uncertain. This may enhance their ability to manage their care independently. The finding is consistent with existing studies showing that LLM AIs improve aspects of chronic disease management [31-34]. AI chatbots are therefore promising as supplementary tools in oncology care, with the potential to improve patient engagement, treatment adherence, and overall outcomes.

As the main findings of this study, participants expressed substantial concerns about recommending AI chatbots to patients in real-world practice, including liability, misinformation, lack of personalization, patient privacy and data security, and the readiness of patients. The issue of liability was particularly prominent. In particular, many participants questioned who would be held accountable if patients experience adverse outcomes after following chatbot advice. Even those who had previously recommended chatbots were hesitant, citing liability concerns as a major barrier. Liability concerns are a common challenge when integrating new technologies into formal or professional processes.

In health care, previous studies have documented similar apprehensions with the adoption of telemedicine platforms and wearable health devices. For instance, in telemedicine implementation, questions arose regarding who would be held responsible if a remote consultation resulted in a misdiagnosis due to technological limitations [35]. Similarly, the use of wearable health monitors, such as fitness trackers or medical-grade devices such as continuous glucose monitors, has raised concerns about the accuracy of the data and the potential consequences of acting on incorrect or incomplete information [36]. Therefore, liability concerns are not unique to AI chatbots but are a recurring theme when technology intersects with professional accountability.

To address this issue, solutions at multiple levels are necessary. At the policy level, clear regulations and guidelines must be established to define the appropriate use of AI chatbots in clinical practice and delineate accountability in cases of adverse outcomes. Such policies should include standards for chatbot development, data validation, and evidence-based responses to ensure their reliability. Additionally, the involvement of professional regulatory bodies to formally approve and monitor the use of these tools can provide an added layer of accountability and confidence for both patients and providers.

At the institutional level, hospitals and health care organizations must play a pivotal role in facilitating the safe integration of AI chatbots. Formal endorsement by institutions, including selection of validated tools, development of hospital-endorsed chatbot guidelines, provision of AI literacy training for care providers and patients, and integration of these tools into existing clinical workflows, can help establish trust. Additionally, providing training for health care providers on how to endorse and monitor chatbot use and ensuring that patients are aware of the chatbot’s capabilities and limitations are essential steps. Such institutional involvement would shift some of the accountability burden from individual practitioners to a broader, system-level responsibility, thereby reducing hesitancy among health care providers.

Misinformation is a frequently reported concern with LLM AI chatbots, including issues related to accuracy, the quality of training datasets, the reliability of the source data, and so-called “hallucinations” [37]. LLMs might rely on vast and unfiltered internet data to generate responses, which can contain inaccuracies, biases, and outdated information [38,39]. Hallucinations occur when the chatbot generates plausible-sounding but factually incorrect or irrelevant responses [40]. Additionally, the inherent variance in responses to similar prompts could further undermine their reliability [41,42]. These are realistic risks for patients seeking consistent and accurate information.

To mitigate the concerns, it is critical to test and validate AI chatbots rigorously before recommending them to patients. Health care institutions should select chatbots based on performance in controlled validation studies, focusing on accuracy, consistency, and their ability to provide evidence-based responses. If existing LLMs fail to meet clinical needs, institutions could consider customizing specialized LLM agents trained on validated medical datasets and updated guidelines specific to oncology. Regular updates and testing are also essential to ensure that chatbots remain aligned with current medical knowledge and best practices.
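As a minimal, purely illustrative sketch of what such controlled validation might involve, the Python snippet below (the function names, the `ask_chatbot` interface, and the benchmark content are hypothetical and not part of this study) scores repeated chatbot answers to clinician-curated oncology questions for keyword coverage and response consistency; a real validation study would rely on expert rating rather than keyword matching.

```python
# Hypothetical sketch (not from the study): benchmarking a chatbot against
# clinician-reviewed reference answers for accuracy and consistency.
from typing import Callable, Dict, List


def keyword_score(answer: str, required_keywords: List[str]) -> float:
    """Fraction of clinician-specified key facts mentioned in the answer."""
    answer_lower = answer.lower()
    hits = sum(1 for kw in required_keywords if kw.lower() in answer_lower)
    return hits / len(required_keywords) if required_keywords else 0.0


def evaluate_chatbot(ask_chatbot: Callable[[str], str],
                     benchmark: List[Dict],
                     repeats: int = 3) -> Dict[str, float]:
    """Ask each benchmark question several times; report mean accuracy and consistency."""
    accuracy, consistency = [], []
    for item in benchmark:
        scores = [keyword_score(ask_chatbot(item["question"]), item["required_keywords"])
                  for _ in range(repeats)]
        mean = sum(scores) / repeats
        accuracy.append(mean)
        # Consistency penalizes answers whose coverage varies across repeats.
        consistency.append(1.0 - max(abs(s - mean) for s in scores))
    n = len(benchmark)
    return {"mean_accuracy": sum(accuracy) / n, "mean_consistency": sum(consistency) / n}


# Example benchmark item; the clinical content is illustrative only.
benchmark = [{
    "question": "What are common side effects of oxaliplatin?",
    "required_keywords": ["peripheral neuropathy", "nausea", "fatigue"],
}]
```

An institution could run such a harness against each candidate chatbot and rerun it whenever the underlying model or clinical guidelines are updated, in line with the regular retesting suggested above.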

User feedback is another valuable tool for addressing misinformation. Implementing processes to gather and analyze feedback from both patients and health care providers can help identify and rectify inaccuracies. Improving users’ prompting skills can also enhance the accuracy of chatbot responses [43]. Training users to frame their queries effectively, such as including relevant context and specific details, can reduce ambiguities and improve the relevance of the chatbot’s answers. Providing tutorial materials or conducting workshops on effective questioning techniques would be a practical way to enhance user interactions with chatbots. Similarly, physicians should be equipped to guide patients on how to use chatbots responsibly and effectively. Finally, clear instructions on the limitations of AI chatbots should accompany their implementation. Patients must understand that chatbots are supplementary tools, not substitutes for professional medical advice.

The concern over the lack of personalization in AI chatbots may not be as substantial as perceived by participants. Improved prompting skills, where users provide specific details about their health conditions or concerns, can significantly enhance the relevance and accuracy of responses. For example, entering contextual information such as medications, symptoms, or treatment history can allow chatbots to tailor recommendations more effectively. Training both patients and health care providers to use precise and structured prompts can help bridge the gap in personalization.
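To make this concrete, the short Python sketch below (with hypothetical field names and wording, not a tool evaluated in this study) shows how a structured, context-rich prompt could be assembled from a patient’s own details, in contrast to a bare one-line question.

```python
# Hypothetical sketch: building a context-rich prompt from structured patient details.
from dataclasses import dataclass, field
from typing import List


@dataclass
class PatientContext:
    diagnosis: str
    current_treatment: str
    medications: List[str] = field(default_factory=list)
    recent_symptoms: List[str] = field(default_factory=list)


def build_prompt(context: PatientContext, question: str) -> str:
    """Combine the question with explicit context so the chatbot can tailor its answer."""
    return (
        "You are answering a question from a patient with cancer. "
        "Use the context below, do not give a diagnosis, and advise the patient "
        "to confirm anything important with their oncologist.\n"
        f"Diagnosis: {context.diagnosis}\n"
        f"Current treatment: {context.current_treatment}\n"
        f"Medications: {', '.join(context.medications) or 'none reported'}\n"
        f"Recent symptoms: {', '.join(context.recent_symptoms) or 'none reported'}\n"
        f"Question: {question}"
    )


# A vague prompt versus a structured one (illustrative content only).
vague = "Is it normal to feel tingling?"
structured = build_prompt(
    PatientContext(
        diagnosis="stage III colon cancer",
        current_treatment="FOLFOX chemotherapy, cycle 4",
        medications=["oxaliplatin", "5-fluorouracil", "ondansetron"],
        recent_symptoms=["tingling in fingertips"],
    ),
    "Is it normal to feel tingling, and when should I contact my care team?",
)
```

The same idea underpins the training of patients and providers discussed above: the more relevant context a query carries, the less the chatbot has to guess.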

Moreover, some chatbots are already equipped with functionalities to integrate personal health data to deliver more customized responses. For instance, iFlyhealth, a Chinese AI chatbot that allows users to input their medical records, health checkup reports, and other personal information, demonstrates the potential to provide contextually relevant guidance [44]. The functionalities enable chatbots to adapt their advice to the unique needs of individual users, especially in oncology care, where personalized care is critical.

However, the integration of personal health data introduces a parallel concern: patient privacy and data security. Participants in this study expressed concerns about the risk of sensitive health information being mishandled or accessed without consent. For chatbots to achieve meaningful personalization without compromising privacy, robust data security measures must be in place, such as end-to-end encryption, secure storage systems, and strict access controls. Additionally, transparent communication with users about how their data will be collected, stored, and used is essential to build trust and mitigate privacy concerns [45-47]. Balancing the benefits of personalization with the need for stringent privacy protections is a critical challenge for the adoption of AI chatbots in oncology care. While personalization enhances the utility of these tools, addressing privacy concerns is pivotal to ensuring their acceptance and widespread use. Future research should explore ways to achieve this balance, including the development of secure, locally hosted AI models with minimized data exposure.
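As one small, hypothetical illustration of minimized data exposure, the sketch below strips obvious direct identifiers from free text before it would leave the user’s device; the patterns shown are examples only, and such client-side redaction complements, rather than replaces, encryption, access controls, and institutional review of a chatbot’s data handling.

```python
# Hypothetical sketch: redacting obvious direct identifiers before text is sent to a chatbot.
# Pattern-based redaction is only a first line of defense; it does not replace encryption,
# access controls, or institutional review of the chatbot's data handling.
import re

REDACTION_PATTERNS = [
    (re.compile(r"\b\d{11}\b"), "[PHONE]"),              # 11-digit mobile numbers
    (re.compile(r"\b\d{17}[\dXx]\b"), "[NATIONAL_ID]"),  # 18-character national ID numbers
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),
]


def redact(text: str) -> str:
    """Replace obvious identifiers with placeholders before the text leaves the device."""
    for pattern, placeholder in REDACTION_PATTERNS:
        text = pattern.sub(placeholder, text)
    return text


example = redact("My phone is 13800138000 and my report is under ID 11010519491231002X.")
```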

Patient readiness and education are important considerations when adopting novel technologies such as AI chatbots [48,49]. However, this should not be overstressed to the detriment of broader implementation. While some patients, particularly older adults or those less familiar with technology, may face challenges in using chatbots, the principle of adoption should focus on serving the majority. Most patients are likely to adapt quickly, and lower readiness among a minority should not be a reason to hinder the rollout of the tools. Instead, targeted efforts, such as simplified chatbot designs, caregiver assistance, or personalized training, can address their needs.

The integration of AI-driven chatbots in oncology care presents a dual impact on doctor-patient dynamics. On one hand, the tools offer enhanced accessibility and convenience, potentially improving patient preparation for consultations by addressing routine inquiries and freeing up time for more complex discussions. This can streamline communication and enhance the efficiency of health care delivery. On the other hand, there are concerns that patients might overrely on chatbots, potentially leading to trust issues if their advice contradicts that of health care providers. In the Chinese context, cultural factors may further influence how patients perceive and interact with the technologies. This finding aligns with prior reports [50-52]. Clear communication about the role and limitations of chatbots is essential, in addition to developing guidelines to ensure that such tools complement rather than replace human interaction. A balanced approach is crucial to harness the benefits of AI while preserving the integrity of the doctor-patient relationship in oncology care.

Limitations and Future Directions

This study has several limitations. First, conducting semistructured interviews via phone calls inherently limits the collection of nonverbal cues, which may hinder a full understanding of the emotional context behind responses. Although the interviewer initiated a brief ice-breaking conversation to foster rapport, the absence of visual feedback remains a constraint. Second, the requirement of a minimum of 2 years of clinical experience ensured that participants had sufficient professional and technical exposure. However, it may have excluded newer practitioners who could offer innovative perspectives on the integration of digital health technologies. Future research should consider broadening the participant criteria to enhance representativeness. Third, we did not assess intercoder reliability or externally validate the coding process, but used independent coding and consensus among researchers. A more stringent process may further enhance the rigor of future studies. Fourth, as a qualitative study focused on a specific group of Chinese oncologists from selected hospitals, the findings should be interpreted and adopted with caution in other clinical settings or cultural contexts. For example, the dynamics of patient challenges to an oncologist’s judgment or issues of physician liability in endorsing an AI chatbot may differ across regions. Finally, as participants were known professionally to the interviewer, this familiarity might have inhibited candid discussion of controversial opinions and confined recruitment to those willing to engage in interviews, thereby narrowing the range of perspectives captured. These limitations should be taken into account when interpreting and applying the study’s findings. Follow-up qualitative and further quantitative studies may evaluate the long-term evolution of oncologists’ perceptions and attitudes, as well as the impact of AI-driven chatbots on patient outcomes and care efficiency.

Implications for Practice and Policy Making

Clinicians should be guided by clear, evidence-based protocols and institutional policies that address key concerns such as liability, accuracy, and data privacy. Training both health care providers and patients on the appropriate use of these tools is critical, particularly to ensure that chatbots are leveraged as supportive adjuncts rather than replacements for professional advice. Moreover, a collaborative framework involving regulatory bodies can foster the development of robust validation processes and real-time monitoring systems, ultimately ensuring that chatbot apps enhance, rather than compromise, the integrity of the doctor-patient relationship and patient safety in oncology practice.

Conclusions

While recognizing the potential of AI-driven chatbots to enhance accessibility of health information and chronic disease management, Chinese oncologists report significant concerns, including liability, misinformation, lack of personalization, privacy and data security risks, and patient readiness. Addressing the challenges requires comprehensive solutions, such as clear policies and guidelines, rigorous testing and validation, institutional endorsement, and robust patient and provider education. Future efforts should focus on resolving the barriers while leveraging the strengths of AI technology to support patient-centered care in a safe, effective, and ethical manner.

Data Availability

The datasets generated or analyzed during this study are not publicly available due to institutional privacy policy but are available from the corresponding author on reasonable request.

Conflicts of Interest

None declared.

Multimedia Appendix 1

Interview guide.

PDF File, 212 KB

Multimedia Appendix 2

Reflexivity statement.

PDF File, 127 KB

  1. Cung M, Sosa B, Yang HS, et al. The performance of artificial intelligence chatbot large language models to address skeletal biology and bone health queries. J Bone Miner Res. Mar 22, 2024;39(2):106-115. [CrossRef] [Medline]
  2. Schukow C, Smith SC, Landgrebe E, et al. Application of ChatGPT in routine diagnostic pathology: promises, pitfalls, and potential future directions. Adv Anat Pathol. Jan 1, 2024;31(1):15-21. [CrossRef] [Medline]
  3. Kurniawan MH, Handiyani H, Nuraini T, Hariyati RTS, Sutrisno S. A systematic review of artificial intelligence-powered (AI-powered) chatbot intervention for managing chronic illness. Ann Med. Dec 2024;56(1):2302980. [CrossRef] [Medline]
  4. Mhatre A, R. Warhade S, Pawar O, Kokate S, Jain S, M E. Leveraging LLM: implementing an advanced AI chatbot for healthcare. Int J Innovative Sci Res Technol. 2024:3144-3151. [CrossRef]
  5. Athota L, Shukla VK, Pandey N, Rana A. Chatbot for healthcare system using artificial intelligence. Presented at: 2020 8th International Conference on Reliability, Infocom Technologies and Optimization (Trends and Future Directions) (ICRITO); Jun 4-5, 2020; Noida, India. [CrossRef]
  6. Tustumi F, Andreollo NA, Aguilar-Nascimento JD. Future of the language models in healthcare: the role of Chatgpt. Arq Bras Cir Dig. 2023;36:e1727. [CrossRef] [Medline]
  7. Shinde NV, Akhade A, Bagad P, Bhavsar H, Wagh SK, Kamble A. Healthcare chatbot system using artificial intelligence. Presented at: 2021 5th International Conference on Trends in Electronics and Informatics (ICOEI); Jun 3-5, 2021; Tirunelveli, India. [CrossRef]
  8. Wen B, Norel R, Liu J, Stappenbeck T, Zulkernine F, Chen H. Leveraging large language models for patient engagement. Presented at: The Power of Conversational AI in Digital Health 2024 IEEE International Conference on Digital Health (ICDH); Jul 7-13, 2024; Shenzhen, China. [CrossRef]
  9. Jelić G, Tartalja DM. How can AI-powered solutions improve communication in healthcare? Presented at: 7th International Scientific Conference ITEMA Recent Advances in Information Technology, Tourism, Economics, Management and Agriculture; Oct 26, 2023; Croatia. [CrossRef]
  10. Hamidi A, Roberts K. Evaluation of AI chatbots for patient-specific EHR questions. arXiv. Preprint posted online on Jun 5, 2023. [CrossRef]
  11. Ferraris G, Monzani D, Coppini V, et al. Barriers to and facilitators of online health information-seeking behaviours among cancer patients: a systematic review. Digital Health. 2023;9:20552076231210663. [CrossRef] [Medline]
  12. Grace JG, Schweers L, Anazodo A, Freyer DR. Evaluating and providing quality health information for adolescents and young adults with cancer. Pediatr Blood Cancer. Oct 2019;66(10):e27931. [CrossRef] [Medline]
  13. Ghorbani F, Zare M, Nabavi F, Vashani H, Bari A. Effect of education and telephone counseling on caregiver strain and unmet needs in family caregivers and self-care behaviors in patients with cancer: a randomized clinical trial. Evidence Based Care. 2020;10:51-60. [CrossRef]
  14. Martínez-Miranda P, Casuso-Holgado MJ, Jesús Jiménez-Rejano J. Effect of patient education on quality-of-life, pain and fatigue in breast cancer survivors: a systematic review and meta-analysis. Clin Rehabil. Dec 2021;35(12):1722-1742. [CrossRef] [Medline]
  15. Teplinsky E, Ponce SB, Drake EK, et al. Online medical misinformation in cancer: distinguishing fact from fiction. JCO Oncol Pract. Aug 2022;18(8):584-589. [CrossRef] [Medline]
  16. Loeb S, Langford AT, Bragg MA, Sherman R, Chan JM. Cancer misinformation on social media. CA Cancer J Clin. 2024;74(5):453-464. [CrossRef] [Medline]
  17. Sallam M. ChatGPT utility in healthcare education, research, and practice: systematic review on the promising perspectives and valid concerns. Healthcare (Basel). Mar 19, 2023;11(6):887. [CrossRef] [Medline]
  18. Toiv A, Saleh Z, Ishak A, et al. Digesting digital health: a study of appropriateness and readability of ChatGPT-generated gastroenterological information. Clin Transl Gastroenterol. Nov 1, 2024;15(11):e00765. [CrossRef] [Medline]
  19. Wei Q, Yao Z, Cui Y, Wei B, Jin Z, Xu X. Evaluation of ChatGPT-generated medical responses: a systematic review and meta-analysis. J Biomed Inform. Mar 2024;151:104620. [CrossRef] [Medline]
  20. Park KU, Lipsitz S, Dominici LS, et al. Generative artificial intelligence as a source of breast cancer information for patients: proceed with caution. Cancer. Jan 1, 2025;131(1):e35521. [CrossRef] [Medline]
  21. Frechette J, Bitzas V, Aubry M, Kilpatrick K, Lavoie-Tremblay M. Capturing lived experience: methodological considerations for interpretive phenomenological inquiry. Int J Qual Methods. Jan 1, 2020;19. [CrossRef]
  22. Campbell S, Greenwood M, Prior S, et al. Purposive sampling: complex or simple? Research case examples. J Res Nurs. Dec 2020;25(8):652-661. [CrossRef] [Medline]
  23. Alam M. A systematic qualitative case study: questions, data collection, NVivo analysis and saturation. QROM. Aug 20, 2020;16(1):1-31. [CrossRef]
  24. Praveena KR, Sasikumar S. Application of Colaizzi’s method of data analysis in phenomenological research. Medico Legal Update. 2021. [CrossRef]
  25. Johnson JL, Adkins D, Chauvin S. A review of the quality indicators of rigor in qualitative research. Am J Pharm Educ. Jan 2020;84(1):7120. [CrossRef] [Medline]
  26. Harley B, Cornelissen J. Rigor with or without templates? The pursuit of methodological rigor in qualitative research. Organ Res Methods. Apr 2022;25(2):239-261. [CrossRef]
  27. Dyar KL. Qualitative inquiry in nursing: creating rigor. Nurs Forum. Jan 2022;57(1):187-200. [CrossRef] [Medline]
  28. Li M, Xiong X, Xu B, Dickson C. Chinese oncologists’ perspectives on integrating AI into clinical practice: cross-sectional survey study. JMIR Form Res. Jun 5, 2024;8:e53918. [CrossRef] [Medline]
  29. Li M, Xiong X, Xu B. Attitudes and perceptions of Chinese oncologists towards artificial intelligence in healthcare: a cross-sectional survey. Front Digital Health. 2024;6:1371302. [CrossRef] [Medline]
  30. Khan N, Khan Z, Koubaa A, Khan MK, Salleh RB. Global insights and the impact of generative AI-ChatGPT on multidisciplinary: a systematic review and bibliometric analysis. Conn Sci. Dec 31, 2024;36(1). [CrossRef]
  31. Dave P. Using AI to increase medication adherence. Asian J Dental Health Sci. 2024;4(2):38-43. [CrossRef]
  32. Babel A, Taneja R, Malvestiti FM, Monaco A, Donde S. Artificial intelligence solutions to increase medication adherence in patients with non-communicable diseases. Front Digital Health. 2021;3:669869. [CrossRef] [Medline]
  33. Pavlopoulos A, Rachiotis T, Maglogiannis I. An overview of tools and technologies for anxiety and depression management using AI. Appl Sci (Basel). 2024;14(19):9068. [CrossRef]
  34. Singh A. Empowering patients with AI-driven personalized medicine: a paradigm shift in chronic disease management. IJAR. 2024;12(8):1031-1038. [CrossRef]
  35. Nittari G, Khuman R, Baldoni S, et al. Telemedicine practice: review of the current ethical and legal challenges. Telemed e-Health. Dec 2020;26(12):1427-1437. [CrossRef] [Medline]
  36. Shajari S, Kuruvinashetti K, Komeili A, Sundararaj U. The emergence of AI-based wearable sensors for digital health technology: a review. Sensors (Basel). Nov 29, 2023;23(23):9498. [CrossRef] [Medline]
  37. Williamson SM, Prybutok V. The era of artificial intelligence deception: unraveling the complexities of false realities and emerging threats of misinformation. Information. 2024;15(6):299. [CrossRef]
  38. Mousavi SM, Alghisi S, Riccardi G. DyKnow: dynamically verifying time-sensitive factual knowledge in llms. 2024. Presented at: Findings of the Association for Computational Linguistics; Aug 11-16, 2024; Bangkok, Thailand. [CrossRef]
  39. Perelkiewicz M, Poswiata R. A review of the challenges with massive web-mined corpora used in large language models pre-training. arXiv. Preprint posted online on Jul 10, 2024. [CrossRef]
  40. Agarwal V, Jin Y, Chandra M, Choudhury M, Kumar S, Sastry N. MedHalu: hallucinations in responses to healthcare queries by large language models. arXiv. Preprint posted online on Sep 29, 2024. [CrossRef]
  41. Cao B, Cai D, Zhang Z, Zou Y, Lam W. On the worst prompt performance of large language models. arXiv. Preprint posted online on Jun 8, 2024. [CrossRef]
  42. Bonagiri V, Vennam S, Gaur M, Kumaraguru P. Measuring moral inconsistencies in large language models. arXiv. Preprint posted online on Jan 26, 2024. [CrossRef]
  43. Balaji DK, Lokesha A, G C, et al. Optimizing prompt length and specificity for enhanced AI chatbot responses. ijcsrr. 2024;07(9). [CrossRef]
  44. About us. Xunfei Healthcare. URL: https://www.iflyhealth.com/en/about.html [Accessed 2024-12-10]
  45. Yang J, Chen YL, Por LY, Ku CS. A systematic literature review of information security in chatbots. Appl Sci (Basel). 2023;13(11):6355. [CrossRef]
  46. May R, Denecke K. Security, privacy, and healthcare-related conversational agents: a scoping review. Inform Health Soc Care. Apr 3, 2022;47(2):194-210. [CrossRef] [Medline]
  47. Hasal M, Nowaková J, Saghair KA, Abdulla H, Snášel V, Ogiela L. Chatbots: security, privacy, data protection, and social aspects. Concurr Comput. Oct 10, 2021;33(19). [CrossRef]
  48. Smrke U, Špes T, Mlakar I, Musil B, Plohl N. Technophobia mediates the associations between age, education level, and readiness to adopt new (health) technology among aging adults. J Appl Gerontol. Mar 2025;44(3):497-507. [CrossRef] [Medline]
  49. Jagde AK, Shrivastava R, Feine J, Emami E. Patients’ E-readiness to use E-health technologies for oral health. PLoS One. 2021;16(7):e0253922. [CrossRef] [Medline]
  50. Hryciw BN, Fortin Z, Ghossein J, Kyeremanteng K. Doctor-patient interactions in the age of AI: navigating innovation and expertise. Front Med (Lausanne). 2023;10:1241508. [CrossRef] [Medline]
  51. Choudhury A, Chaudhry Z. Large language models and user trust: consequence of self-referential learning loop and the deskilling of health care professionals. J Med Internet Res. Apr 25, 2024;26:e56764. [CrossRef] [Medline]
  52. Sezgin E. Artificial intelligence in healthcare: complementing, not replacing, doctors and healthcare providers. Digital Health. 2023;9:20552076231186520. [CrossRef] [Medline]


AI: artificial intelligence
LLM: large language model
PI: primary investigator


Edited by Andrew Coristine; submitted 17.01.25; peer-reviewed by Moustaq Karim Khan Rony, Ning An; final revised version received 12.06.25; accepted 12.06.25; published 23.07.25.

Copyright

© Lijuan Zeng, Qiaoqi Li, Qiaoqi Li, Ying Zhang, Zhaojun Li. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 23.7.2025.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research (ISSN 1438-8871), is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.