Background: Many potential benefits for the uses of chatbots within the context of health care have been theorized, such as improved patient education and treatment compliance. However, little is known about the perspectives of practicing medical physicians on the use of chatbots in health care, even though these individuals are the traditional benchmark of proper patient care.
Objective: This study aimed to investigate the perceptions of physicians regarding the use of health care chatbots, including their benefits, challenges, and risks to patients.
Methods: A total of 100 practicing physicians across the United States completed a Web-based, self-report survey to examine their opinions of chatbot technology in health care. Descriptive statistics and frequencies were used to examine the characteristics of participants.
Results: A wide variety of positive and negative perspectives were reported on the use of health care chatbots, including their importance in helping patients manage their own health and their benefits for physical, psychological, and behavioral health outcomes. More consistent agreement occurred with regard to the administrative benefits associated with chatbots; many physicians believed that chatbots would be most beneficial for scheduling doctor appointments (78%, 78/100), locating health clinics (76%, 76/100), or providing medication information (71%, 71/100). Conversely, many physicians believed that chatbots cannot effectively care for all of the patients’ needs (76%, 76/100), cannot display human emotion (72%, 72/100), and cannot provide detailed diagnoses and treatment because they do not know all of the personal factors associated with the patient (71%, 71/100). Many physicians also stated that health care chatbots could be a risk to patients if they self-diagnose too often (74%, 74/100) or do not accurately understand the diagnoses (74%, 74/100).
Conclusions: Physicians believed in both costs and benefits associated with chatbots, depending on the logistics and specific roles of the technology. Chatbots may have a beneficial role to play in health care in supporting, motivating, and coaching patients as well as in streamlining organizational tasks; in essence, chatbots could become a surrogate for nonmedical caregivers. However, concerns remain about the inability of chatbots to comprehend the emotional state of humans as well as about areas where expert medical knowledge and intelligence are required.
Chatbots, also known as conversational agents, interactive agents, virtual agents, virtual humans, or virtual assistants, are artificial intelligence programs designed to simulate human conversation via text or speech. Many positive viewpoints on the potential uses of health care chatbots have been expressed within the marketing and business world [- ]; however, little scientific research has examined their effectiveness in real-world patient scenarios, that is, in improving health outcomes [ , ]. Chatbots are commonly used in marketing applications, such as guiding consumers through electronic commerce websites, answering questions related to products and services, helping troubleshoot problems with internet service, acting as a personal concierge, or providing consumer advice. In the context of health care, chatbots or healthbots are intended to provide personalized health and therapy information to patients, offer relevant products and services, and suggest diagnoses and recommend treatments based on patient symptoms.
Chatbots in health care may have the potential to provide patients with access to immediate medical information, recommend diagnoses at the first sign of illness, or connect patients with suitable health care providers (HCPs) across their community [, ]. Theoretically, in some instances, chatbots may be better suited to meeting patient needs than a human physician because they have no biological gender, age, or race and exhibit no bias toward patient demographics. Chatbots do not get tired, fatigued, or sick, and they do not need to sleep; they are cost-effective to operate and can run 24 hours a day, which is especially useful for patients who may have medical concerns outside of their doctor’s operating hours. Chatbots can also communicate in multiple languages to better suit the needs of individual patients.
Early research has demonstrated benefits of using health care chatbots, such as supporting diagnostic decision making [, ], promoting and increasing physical activity [ ], and delivering cognitive behavioral therapy for psychiatric and somatic disorders [ - ], providing effective, acceptable, and practical health care with accuracy comparable with that of human physicians. Patients may also feel that chatbots are safer interaction partners than human physicians and may be willing to disclose more medical information and report more symptoms to chatbots [ , ]. However, despite the demonstrated efficacy and cost-effectiveness of health care chatbots, the technology is often associated with poor adoption by physicians and poor adherence by patients [ ]. This may be because of a perceived lack of quality or accountability in computerized chatbots as opposed to traditional face-to-face interactions with human physicians.
Although there are some instances showing the effectiveness of health care–related chatbots for certain outcomes, it is still not entirely clear whether this technology is better overall in improving the various clinical health outcomes of patients and why it is not more highly adopted compared with traditional methods of care, that is, information coming from a human physician. Although chatbot technology for health care is continually advancing, little is known about the perspectives of practicing medical physicians on the use of chatbots in health care. It would thus seem beneficial to have medical expert opinions on the use of this technology that is intended to supplement or even replace specific roles of HCPs. The purpose of this study was to examine the perspectives of practicing medical physicians on the use of health care chatbots for patients. As human physicians have been the traditional benchmark for treating patients for hundreds of years, a crucial objective of investigating the use of chatbots for delivering health care should be to understand the perspective of medical experts who actually practice health care in their daily occupations. As physicians are the primary point of care for patients, their approval is an important gateway to the dissemination of chatbots into medical practice. The findings of this research will help to either justify or attenuate enthusiasm for health care chatbot applications as well as direct future work to better align with the needs of HCPs.
A total of 100 participants completed the survey (28 females, 69 males, and 3 preferred not to say; age range=28-73 years; mean age 44.9, SD 12.0). Participants were general practitioners (GPs) with a Doctor of Medicine (MD) degree (years of practice range=2-36; mean years of practice 14.7, SD 9.0). Participants were located across 32 states of the United States, with participants in each of the 4 main interstate regions of Northeast (27/100, 27%; 5 states), Midwest (21/100, 21%; 8 states), South (29/100, 29%; 13 states), and West (23/100, 23%; 6 states).
Participants were sampled from a large database of physicians who had previously agreed to take part in market research. The survey was administered by Sermo, a private social media network for licensed physicians, which randomly selected registered physicians from its panel across the United States. The Sermo research network comprises over 400,000 registered physicians in the United States, representing roughly 40% of the US physician population [ ]. As this study was the first of its kind and exploratory in nature, examining the subjective opinions of physicians, no explicit statistical hypotheses were evaluated. The sample size of 100 was arbitrarily chosen to gather a preliminary view of physicians’ perspectives on chatbots in health care and yields a margin of error of approximately 9.8% at a 95% confidence level relative to the entire US physician population.
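The quoted figure can be sanity-checked with the standard normal-approximation margin of error for a sample proportion, z·√(p(1−p)/n), using the worst case p = 0.5 and z = 1.96 for a 95% confidence level (a minimal sketch; the finite-population correction for a population of several hundred thousand physicians is negligible at n = 100):

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """Normal-approximation margin of error for a sample proportion.

    p = 0.5 is the worst case (maximum variance); z = 1.96 corresponds
    to a 95% confidence level.
    """
    return z * math.sqrt(p * (1 - p) / n)

print(f"{margin_of_error(100):.1%}")  # 9.8%
```

Note that because the margin of error shrinks with √n, quadrupling the sample size would only halve it.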
Invitees were sent an email inviting them to complete the survey, accessible via an embedded Web link. The only inclusion criteria were being a GP with an MD degree practicing within the United States; no restrictions on age, gender, or previous use of chatbots were applied. Exclusion criteria included any nonphysician or specialist physician, who may have held a bias toward the use of health care chatbots for patients; in contrast, GPs are typically the primary point of care for a broad range of family medicine–related health issues, so these participants were targeted for the research. All participants gave informed consent to complete the survey, and the study received full ethics clearance from Advarra Institutional Review Board Services, an independent ethics committee.
Survey and Procedure
Survey questions were designed in consultation with medical scientists, Web developers, data scientists, and technology specialists with expertise in digital medicine. The survey was organized into 5 main sections, including (1) usage of health care chatbots, (2) perceived benefits of health care chatbots to patients, (3) perceived challenges of health care chatbot usage, (4) perceived risks of health care chatbots, and (5) physicians’ perceptions of health care chatbots in the role of a physician.
Participants were asked to take part in a research study involving a Web-based self-report survey that examined physician perceptions of the benefits, challenges, and risks of using chatbots in health care. Participants provided demographic information and reported their opinions on the use of chatbot technology for treating patients. At the beginning of the survey, participants were given an explicit definition of a chatbot to provide a baseline description for the context of the survey questions. Participants were given the following definition:
Chatbots, also known as conversational agents, interactive agents, virtual agents, virtual humans, or virtual assistants, are computer software applications that run automated tasks or scripts designed to simulate human conversation. Chatbots are artificial intelligence (AI) programs that can generate and retrieve information for the interaction with human users via text or computer voice generation.
Participants were asked to answer all the survey questions for chatbots in the context of health care, referring to the use of chatbots for health-related issues.
Data were analyzed using descriptive statistics and frequencies to examine the characteristics of participant responses to survey items on health care chatbots. Preliminary analyses revealed no major differences across factors of age, gender, or years of practice. Thus, the entire sample was reported holistically.
Usage of Health Care Chatbots
A total of 30% (30/100) of participants indicated that they had direct personal experience with the use of chatbots for health-related issues. Physicians were also given a list of currently available health care chatbots to examine their familiarity with some of the interfaces that patients could potentially access. Physicians’ use of these health care chatbots, which are intended to provide personalized health and therapy information, offer relevant products and services to patients, and suggest diagnoses and recommend treatments based on patients’ symptoms, is shown in the accompanying table. The findings indicated that most of the currently available chatbots were neither used nor even heard of by most physicians.
Of the 30 participants who have used health care chatbots previously, 4 (13%) were very satisfied, 10 (33%) were somewhat satisfied, 8 (27%) were neither satisfied nor dissatisfied, and 8 (27%) were somewhat dissatisfied with their application. Of all the physicians in the survey, 18% (18/100) stated that their patients use health care chatbots (24%, 24/100, stated that patients did not use them), but the majority (58%, 58/100) were unsure or did not know whether their patients use them.
In total, 42% (42/100) of physicians believed that chatbots are either very important (9%, 9/100) or somewhat important (33%, 33/100) in health care, whereas 26% (26/100) believed that they are somewhat unimportant (18%, 18/100) or very unimportant (8%, 8/100); 32% (32/100) of physicians believed that they are neither important nor unimportant. Similarly, 44% (44/100) of physicians stated that they would be very likely (9%, 9/100) or somewhat likely (35%, 35/100) to prescribe the use of health care chatbots to their patients within the next 5 years; 34% (34/100) of physicians stated that they would be somewhat unlikely (22%, 22/100) or very unlikely (12%, 12/100) to do so. A total of 40% (40/100) of physicians also indicated that they would be very likely (11%, 11/100) or somewhat likely (29%, 29/100) to recommend the prescription of health care chatbots to their HCP colleagues, whereas 37% (37/100) indicated that they would be somewhat unlikely (25%, 25/100) or very unlikely (12%, 12/100) to do the same.
Perceived Benefits of Health Care Chatbots to Patients
Participants were asked to what extent they thought health care chatbots would benefit patients in specific areas of health management. An average of 42% (42/100) agreed to some extent with the potential benefits associated with health care chatbots, whereas an average of 25% (25/100) disagreed to some extent with these same benefits. More than half of physicians agreed that health care chatbots could help patients better manage their own health (54%, 54/100), improve access and timeliness to care (53%, 53/100), or reduce travel time to their HCP (52%, 52/100); almost half of physicians believed that health care chatbots could prevent unnecessary visits to HCPs (49%, 49/100) or that patients may disclose more information to chatbots compared with HCPs (41%, 41/100).
In terms of specific health-related outcomes of chatbot use for patients, an average of 45% (45/100) of physicians believed in some type of physical, psychological, or behavioral health benefit to patients. More than half of physicians believed that health care chatbots could improve nutrition or diet (65%, 65/100), enhance medication or treatment adherence (60%, 60/100), increase activity or exercise (55%, 55/100), or reduce stress (51%, 51/100).
| Currently available health care chatbots^a | Used it, % | Heard of it but never used it, % | Neither heard of it nor used it, % |
| --- | --- | --- | --- |

^a For simplicity, as there were exactly 100 participants in the study, only percentages have been reported, unless otherwise stated.
| Perceived benefits of health care chatbots^a | Strongly agree, % | Somewhat agree, % | Neither agree nor disagree, % | Somewhat disagree, % | Strongly disagree, % |
| --- | --- | --- | --- | --- | --- |
| Help patients better manage their own health | 7 | 47 | 26 | 17 | 3 |
| Improve quality of patient care | 8 | 27 | 36 | 21 | 8 |
| Help provide more personalized treatment | 7 | 21 | 36 | 27 | 9 |
| Reduce travel time to health care provider | 20 | 32 | 30 | 12 | 6 |
| Prevent unnecessary visits to health care providers | 11 | 38 | 28 | 19 | 4 |
| Patients may disclose more information to chatbots compared with health care providers | 12 | 29 | 36 | 15 | 8 |
| Increase patient privacy | 8 | 19 | 40 | 20 | 13 |
| Improve access and timeliness to care | 15 | 38 | 32 | 12 | 3 |
| Average across variables | 11 | 31 | 33 | 18 | 7 |

^a For simplicity, as there were exactly 100 participants in the study, only percentages have been reported, unless otherwise stated.
| Perceived health care outcome benefits of using chatbots^a | Yes, % | No, % | Do not know or not sure, % |
| --- | --- | --- | --- |
| Activity or exercise increase | 55 | 25 | 20 |
| Alcohol consumption reduction | 31 | 36 | 33 |
| Blood glucose reduction | 49 | 26 | 25 |
| Blood pressure reduction | 34 | 32 | 34 |
| Cognitive behavioral therapy | 41 | 30 | 29 |
| Sleep quality or quantity improvement | 34 | 35 | 31 |
| Smoking reduction or cessation | 47 | 24 | 29 |
| Stress reduction or management | 51 | 23 | 26 |
| Psychological well-being increase | 48 | 28 | 24 |
| Weight loss or decrease in body mass index | 45 | 30 | 25 |
| Average across variables | 45 | 29 | 27 |

^a For simplicity, as there were exactly 100 participants in the study, only percentages have been reported, unless otherwise stated.
| Perceived logistical benefits of using chatbots^a | Yes, % | No, % | Do not know or not sure, % |
| --- | --- | --- | --- |
| Locating health clinics or health care providers in a specific area | 76 | 11 | 13 |
| Scheduling doctor appointments | 78 | 13 | 9 |
| Monitoring patient calls to the reception desk of health clinics | 49 | 22 | 29 |
| Processing medical invoices or bill payments | 48 | 28 | 24 |
| Assessing emergency triage in hospitals | 30 | 48 | 22 |
| Reminders for medication/treatment compliance | 76 | 11 | 13 |
| Renewing medication prescriptions | 56 | 25 | 19 |
| Gathering health insurance information | 65 | 16 | 19 |
| Answering medication frequently asked questions | 70 | 15 | 15 |
| Providing medication side effects and drug interactions | 68 | 15 | 17 |
| Providing medication use or misuse instructions | 71 | 12 | 17 |
| Average across variables | 62 | 20 | 18 |

^a For simplicity, as there were exactly 100 participants in the study, only percentages have been reported, unless otherwise stated.
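As a quick arithmetic check, the reported "Average across variables" for the Yes column of the logistical-benefits table is the unweighted mean of the 11 item percentages, rounded to the nearest integer:

```python
# "Yes, %" values for the 11 logistical-benefit items, in table order.
yes_pct = [76, 78, 49, 48, 30, 76, 56, 65, 70, 68, 71]

average_yes = round(sum(yes_pct) / len(yes_pct))
print(average_yes)  # 62, matching the reported average across variables
```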
Regarding logistical benefits of using health care chatbots for patients, the majority of physicians (62%, 62/100, on average) saw advantages for the organization, planning, and management of administrative tasks associated with health care. Most notably, many physicians believed that chatbots would be most beneficial for scheduling doctor appointments (78%, 78/100), locating health clinics or HCPs in a specific area (76%, 76/100), administering reminders for medication or treatment compliance (76%, 76/100), providing medication use or misuse instructions (71%, 71/100), or answering medication frequently asked questions (70%, 70/100). In contrast, almost half of physicians believed that chatbots could not assess emergency triage in hospitals (48%, 48/100).
Perceived Challenges Associated With Health Care Chatbots for Patients
On average, over half of the physicians (53%, 53/100) agreed to some extent with various challenges associated with the use of health care chatbots for patients, whereas an average of only 14% (14/100) disagreed with these challenges. Most notably, 76% (76/100) of physicians believed that chatbots cannot effectively care for the full extent of patients’ needs, 72% (72/100) believed that chatbots cannot understand or display human emotion, and 58% (58/100) believed that chatbots lack the intelligence or knowledge to accurately assess patients.
| Perceived challenges associated with using health care chatbots^a | Strongly agree, % | Somewhat agree, % | Neither agree nor disagree, % | Somewhat disagree, % | Strongly disagree, % |
| --- | --- | --- | --- | --- | --- |
| Patient data privacy and confidentiality | 17 | 31 | 39 | 11 | 2 |
| Chatbots cannot understand or display human emotion | 36 | 36 | 23 | 4 | 1 |
| Chatbots lack the intelligence or knowledge to accurately assess patients | 19 | 39 | 27 | 11 | 4 |
| Chatbots offer poor health-related advice | 12 | 28 | 47 | 13 | 0 |
| Chatbots cannot effectively care to the full extent of the patients’ needs | 43 | 33 | 17 | 7 | 0 |
| Chatbots take too much time to use | 6 | 24 | 48 | 20 | 2 |
| Most of my patients do not have access to the necessary technology for chatbots services | 15 | 35 | 26 | 18 | 6 |
| Average across variables | 21 | 32 | 32 | 12 | 2 |

^a For simplicity, as there were exactly 100 participants in the study, only percentages have been reported, unless otherwise stated.
| Perceived risks associated with using health care chatbots | Strongly agree, % | Somewhat agree, % | Neither agree nor disagree, % | Somewhat disagree, % | Strongly disagree, % |
| --- | --- | --- | --- | --- | --- |
| Patients may abuse the use of chatbots and self-diagnose too often | 21 | 53 | 20 | 5 | 1 |
| Patients will receive lesser quality assessments | 24 | 38 | 28 | 8 | 2 |
| Patients may not accurately understand the diagnoses | 26 | 48 | 22 | 2 | 2 |
| Chatbots cannot provide detailed clarification on patient assessment | 35 | 36 | 19 | 8 | 2 |
| Patients may not feel adequately connected to their health care providers | 34 | 36 | 22 | 8 | 0 |
| Chatbots may indirectly harm patients by not knowing all of the personal factors associated with the patient | 28 | 41 | 24 | 6 | 1 |
| Average across variables | 28 | 42 | 23 | 6 | 1 |
Perceived Risks Associated With Health Care Chatbots for Patients
The great majority of physicians (70%, 70/100, on average) expressed concern about risks associated with health care chatbots for patients, whereas only 7% (7/100) disagreed with these potential risks. Over 60% (60/100) of physicians agreed with every type of risk presented to them, including the perceptions that patients may abuse the use of chatbots and self-diagnose too often (74%, 74/100), that patients may not accurately understand the diagnoses (74%, 74/100), and that chatbots cannot provide detailed clarification on patient assessment (71%, 71/100). Physicians also felt that patients may not feel adequately connected to their HCPs (70%, 70/100) or that chatbots may indirectly harm patients by not knowing all of the personal factors associated with the patient (69%, 69/100).
Physicians’ Perceptions of Health Care Chatbots in the Role of a Physician
Physicians were asked how much they believed health care chatbots would help or impede their work in their daily occupational role, on a sliding scale from 0% (impede my work) to 100% (help me). Responses spanned the entire scale and averaged near the neutral midpoint (observed range 0%-100%; mean 47.4%, median 50%, SD 25.6%).
Finally, physicians were asked how likely it would be, in the future, for health care chatbots to play a more significant role in patients’ health than their HCP. A total of 49% (49/100) expressed that this would be very likely (15%, 15/100) or somewhat likely (34%, 34/100) to happen, whereas 25% (25/100) expressed that this would be somewhat unlikely (15%, 15/100) or very unlikely (10%, 10/100) to happen.
A total of 100 practicing GPs participated in an online research survey that examined their perceived benefits, challenges, and risks of using chatbots in health care. Overall, the findings demonstrated that physicians have a wide variety of perspectives on the use of health care chatbots for patients, with few major skews to one side or the other in agreement levels across a variety of characteristics. Almost half of the physicians perceived health care chatbots to be important for patients, especially for helping patients better manage their own health. Almost half of the physicians also stated that they would be likely to prescribe the use of the technology to patients and recommend it to their colleagues. About half of the physicians also agreed that chatbots would benefit the physical, psychological, and behavioral health outcomes of patients, such as diet improvement, medication adherence, exercise frequency, or stress reduction. The other half of physicians was roughly equally divided between opposing and being neutral toward the perceived importance and benefits of health care chatbots. In addition, patient privacy did not emerge as a polarizing issue.
With regard to the use of health care chatbots within the occupational role of an HCP, physicians believed that the technology would almost equally help and impede their overall workplace duties. Approximately half of the physicians also believed that health care chatbots would eventually play a more significant role in patients’ health than their HCP. For the most part, these results indicated an almost equal number of supporters of health care chatbots, with the remainder either indifferent or opposed to the technology.
More consistent agreement on the use of health care chatbots was apparent with reference to their potential logistical benefits as well as their challenges and potential risks to patients. For example, the great majority of physicians believed in administrative benefits associated with chatbots, especially for scheduling doctor appointments, locating health clinics or HCPs, administering reminders for medication compliance, providing treatment instructions, and answering commonly asked medication questions. In contrast, the majority of physicians believed that chatbots cannot effectively care for all of the patients’ needs, cannot understand or display human emotion, lack the intelligence to accurately assess patients, cannot provide detailed clarification on patient assessment, cannot assess emergency health situations, or may indirectly harm patients by not knowing all of the personal factors associated with the patient. In addition, many physicians stated that health care chatbots will be associated with the risk that patients may self-diagnose too often, patients may not understand the diagnoses, or that patients may not feel adequately connected to their primary physician.
These findings highlight the perception that chatbot technology may be advantageous to use in less complicated roles, such as administrative and organizational tasks, but more challenges and risks may be associated with their use in complex roles that involve more personalized knowledge of the patient. This suggests that health care duties involving an expert human touch and those that need a high degree of accuracy are perceived to be a poor choice for chatbot use compared with receiving overall treatment from an actual physician.
The many perceived challenges and risks associated with health care chatbots would need to be addressed before the technology is widely endorsed by practicing physicians. These challenges may stem from concerns involving regulation and remuneration of physicians, which supports other relevant research demonstrating that physicians are less likely to use telemedicine services if they are not adequately compensated for their time and effort [- ]. Addressing the perceived barriers around health care chatbots would, therefore, require cooperation by health care institutions, policy makers, HCPs, and patients alike.
One limitation of this research is that it examined the subjective perceptions of GPs as opposed to specialist physicians, who may have more experience with, and different opinions on, the use of health care chatbots, depending on their roles in patient health. In addition, all physicians practiced within the United States, which may be associated with a different level of enthusiasm toward digital medicine technology compared with other countries and cultures. Some research and viewpoints on health care chatbots have been published by international researchers around the world [- ]; however, to the best of our knowledge, this was the first study to examine physicians’ perspectives on the direct use of chatbots in their practice. Future research should examine how different samples of HCPs, in different environments, perceive health care chatbots for use with their patients.
Physicians believe in both costs and benefits associated with chatbots, depending on the logistics and specific roles of the technology. The areas where physicians believed chatbots would be most helpful were in improving nutrition, diet, and treatment compliance as well as in logistical tasks such as scheduling appointments, locating clinics, and providing medication reminders. The major perceived challenges were the inability of chatbots to understand emotions and to address the full extent of a patient’s needs. Physicians also agreed that there were significant risks associated with chatbots, including inaccurate medical information.

These findings suggest that physicians may be comfortable using chatbots to automate simple logistical tasks but do not believe that chatbots are advanced enough to replace complex decision-making tasks requiring an expert medical opinion. This is not to say that health care chatbots carry a particular stigma; rather, it suggests that improvements are needed before future use can overcome the risks and challenges associated with the technology.

Nevertheless, nearly half of the physicians believed that health care chatbots could replace a major role of human HCPs sometime in the future. However, chatbots may be best applied to helping physicians rather than replacing them. Chatbots are cost-effective to run and can automate repetitive administrative tasks, freeing time for physicians to provide higher quality, personalized, and empathetic care to their patients. This research lays the foundation for future investigations of the factors influencing physician adoption of chatbots. Providing physicians with evidence-based research on the advantages and disadvantages of this emerging technology will help inform its most appropriate use to complement their practice rather than impede their work.
The authors would like to thank all the participants, project members, supporters, and researchers at Klick Inc for the successful development, implementation, and evaluation of this research. The authors would also like to acknowledge Gaurav Baruah and Peter Leimbigler for their helpful comments on the research design and survey. This research was internally funded and received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors.
Conflicts of Interest
- Brooke S. Chatbots Magazine. How chatbots will shape the future of healthcare URL: https://chatbotsmagazine.com/how-chatbots-will-shape-the-future-of-healthcare-fa8e30cebb1c?gi=7238e40fd943 [accessed 2019-02-25] [WebCite Cache]
- Cade J. Medium. No appointment necessary — The rise of bots in healthcare URL: https://medium.com/gene-global/no-appointment-necessary-the-rise-of-bots-in-healthcare-4d42c5539547 [accessed 2019-02-25] [WebCite Cache]
- Farkash Z. Chatbots Life. Chatbot for healthcare: chatbots can be money-savers for hospitals and clinics URL: https://chatbotslife.com/chatbot-for-healthcare-chatbots-can-be-money-savers-for-hospitals-and-clinics-5c0c942b4a7a?gi=7510b540e6d0 [accessed 2019-02-25] [WebCite Cache]
- Luma Health. Medium. Using bots in healthcare URL: https://medium.com/healthfurther/using-bots-in-healthcare-42f2cd1f5e4b [accessed 2019-02-25] [WebCite Cache]
GP: general practitioner
HCP: health care provider
MD: Doctor of Medicine
Edited by G Eysenbach; submitted 22.11.18; peer-reviewed by B Davis, R Alkoudmani; comments to author 31.01.19; revised version received 05.02.19; accepted 09.02.19; published 05.04.19

Copyright
©Adam Palanica, Peter Flaschner, Anirudh Thommandram, Michael Li, Yan Fossat. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 05.04.2019.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on http://www.jmir.org/, as well as this copyright and license information must be included.