Published in Vol 21, No 4 (2019): April

What Do Patients Say About Doctors Online? A Systematic Review of Studies on Patient Online Reviews



1Department of Health Administration and Policy, George Mason University, Fairfax, VA, United States

2School of Public Health, Texas A&M University, College Station, TX, United States

3Arnold School of Public Health, University of South Carolina, Columbia, SC, United States

4Department of Health Kinesiology, Texas A&M University, College Station, TX, United States

5Department of Communication, Texas A&M University, College Station, TX, United States

Corresponding Author:

Y Alicia Hong, PhD

Department of Health Administration and Policy

George Mason University

4400 University Drive, MS 1J3

Fairfax, VA, 22030

United States

Phone: 1 703 993 1929


Related Article: This is a corrected version; see the correction statement.

Background: The number of patient online reviews (PORs) has grown significantly, and PORs have played an increasingly important role in patients’ choice of health care providers.

Objective: The objective of our study was to systematically review studies on PORs, summarize the major findings and study characteristics, identify literature gaps, and make recommendations for future research.

Methods: A major database search was completed in January 2019. Studies were included if they (1) focused on PORs of physicians and hospitals, (2) reported qualitative or quantitative results from analysis of PORs, and (3) were peer-reviewed empirical studies. Study characteristics and major findings were synthesized using predesigned tables.

Results: A total of 63 studies (69 articles) met the above criteria and were included in the review. Most studies (n=48) were conducted in the United States, including Puerto Rico; the remainder were from Europe, Australia, Canada, and China. Earlier studies (published before 2010) used content analysis with small sample sizes; more recent studies retrieved and analyzed larger datasets using machine learning technologies. The number of PORs ranged from fewer than 200 to over 700,000. About 90% of the studies focused on clinicians, typically specialists such as surgeons; 27% covered health care organizations, typically hospitals; some studied both. A majority of PORs were positive, and patients' comments on their providers were favorable. Although most studies were descriptive, some compared PORs with traditional surveys of patient experience and found a high degree of correlation, whereas others compared PORs with clinical outcomes and found a low level of correlation.

Conclusions: PORs contain valuable information that can generate insights into quality of care and the patient-provider relationship, but this information has not been systematically used for studies of health care quality. With the advancement of machine learning and data analysis tools, we anticipate more research on PORs based on testable hypotheses and rigorous analytic methods.

Trial Registration: International Prospective Register of Systematic Reviews (PROSPERO) CRD42018085057

J Med Internet Res 2019;21(4):e12521



People have increasingly turned to the internet to share their clinical experiences and to compare physicians and medical treatments [1,2]. Hundreds if not thousands of patient online reviews (PORs) appear daily on the crowdsourcing platforms of patient review websites (PRWs) and carry growing influence in patients' medical decision making [1-4]. In earlier debates about PORs, some physicians expressed skepticism; they worried that most PORs were posted by disgruntled patients who were not able to assess the technical quality of health care delivery [5]. Furthermore, physicians are unable to refute a negative review without jeopardizing patient confidentiality [6], and it is nearly impossible to verify whether comments were left by actual patients [3]. Also, even with an increasing number of PORs, most rated physicians average only a handful of ratings, which is unlikely to reflect the full range of impressions made by a physician who sees hundreds of patients each year [6]. Proponents of PORs, however, argue that patients are consumers of services and therefore have a right to express their opinions about the services they pay for, and that PORs provide timely and direct customer feedback [3,6,7].

Despite the ongoing debates on whether PORs can improve the quality of care [8,9], the number of PORs has grown exponentially in the past decade [1,10,11]. A recent national survey in the United States revealed that 59% of participants reported PORs were very important or somewhat important when choosing a physician, though PORs were endorsed less frequently than other factors such as word of mouth from family and friends and whether the physician accepted one’s insurance [2].

The proliferation of PORs and the popularity of PRWs have occurred in 2 somewhat overlapping contexts. The first is that ubiquitous internet access has facilitated online consumer behaviors, characterized by "electronic word of mouth" [12]: people go online to rate the products and services they purchase and check online ratings before buying. Health care consumer behaviors, though lagging behind other consumer behaviors, are rapidly catching up [3]. The other context is the movement toward patient empowerment and self-determination in medical care, alongside the growing recognition of patient experience and patient satisfaction in evaluating health care quality [13,14]. For example, the Centers for Medicare & Medicaid Services (CMS) has a set of Core Quality Measures for Healthcare, and "patient experience" is one of the 7 critical domains [15]. Traditional government- or health care organization (HCO)-initiated surveys have incorporated patient-reported outcome measures into their routine quality questionnaires, but it takes years to conduct these surveys and analyze the data, and few patients have access to or understand the results [13]. Within such contexts, PORs have become a consumer-driven alternative that can provide almost instant feedback on the health care experience.

The increasing weight of PORs in patients' health care decision making has led to a growing number of research studies on PORs and PRWs [1,11]. Some scholars have advocated for attaching more scientific value to PORs [7,16]. Others have evaluated the quality of PRWs and examined public perceptions and use of PRWs [2,10,17,18], concluding that research on, and use of, PRWs remained limited [17,18]. To date, no systematic review of POR studies has been available. Accordingly, we conducted a systematic review with the aims of synthesizing existing studies on PORs by summarizing study characteristics, research designs, analytical methods, and major findings. We depict the trend of POR research, identify literature gaps, and make recommendations for future research.

Inclusion and Exclusion Criteria

On the basis of the research objectives mentioned above, we defined the following criteria before starting the literature search. The inclusion criteria were as follows: (1) studies that focused on PORs of physicians or hospitals, (2) studies that reported qualitative or quantitative results from analyses of PORs, and (3) peer-reviewed studies written in English. The exclusion criteria were (1) studies that did not report empirical outcomes from analyses of PORs and (2) editorials, reviews, or commentaries. Excluded studies, for example, focused on physicians' responses to online reviews [19], reported innovative methods for analyzing PORs without reporting analytical results [20,21], or focused on characteristics of patients who had used PRWs without reporting POR-related outcomes [2,22].

Data Sources and Selection

Following the principles of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses [23], we searched the major databases PubMed, EMBASE, CINAHL, and Science Direct in January 2019. Search terms from previously published studies were used [17,18,24], including rating sites (websites), review sites (websites), online reviews (ratings), doctor (physician and hospital) ratings, and patient reviews (ratings). As shown in Figure 1, the initial search identified 2837 articles. After reviewing titles and abstracts to determine relevance and removing duplicates, 90 articles were reviewed in full text, and 48 articles met the inclusion and exclusion criteria. Next, we hand-searched the reference sections of these 48 articles and consulted experts in the field, yielding 26 additional articles. After removing duplicates, we included 69 articles (representing 63 studies) in the review; articles that reported findings based on the same data source, similar design, and the same research questions were counted as one study. The systematic review protocol was registered in PROSPERO: International Prospective Register of Systematic Reviews.

Figure 1. Flow chart of the literature search and article retrieval.

Data Extraction and Synthesis

Two researchers independently reviewed all articles and extracted the following information using a predesigned table: authors and publication year, study time and location, PRWs used in the study, type of providers studied, number of PORs and providers analyzed, study design and analytical approach, and key findings. Intercoder reliability, calculated using Cohen kappa, was 0.86. In the key-finding analysis, the researchers listed bullet points of major findings from each study and discussed discrepancies until consensus was reached. Owing to the heterogeneity of the studies, no study appraisal was carried out. This review was not focused on a single health outcome; instead, we aimed to identify and synthesize available POR studies, and no meta-analysis was conducted.
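The Cohen kappa statistic used here corrects raw agreement between two coders for agreement expected by chance. As a minimal illustration (the include/exclude decisions below are hypothetical, not the study's actual coding data), kappa can be computed in a few lines:

```python
from collections import Counter

def cohen_kappa(coder_a, coder_b):
    """Chance-corrected agreement between two coders (Cohen kappa)."""
    n = len(coder_a)
    # Observed agreement: proportion of items both coders labeled identically.
    po = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected chance agreement, from each coder's marginal label frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    pe = sum(freq_a[label] * freq_b[label] for label in freq_a) / (n * n)
    return (po - pe) / (1 - pe)

# Hypothetical include/exclude decisions for 5 candidate articles.
coder_1 = ["include", "include", "exclude", "include", "exclude"]
coder_2 = ["include", "exclude", "exclude", "include", "exclude"]
print(round(cohen_kappa(coder_1, coder_2), 2))  # → 0.62
```

Note that the observed agreement here is 0.8, but kappa is lower (0.62) because some of that agreement would be expected by chance alone; a kappa of 0.86, as reported above, indicates strong intercoder reliability.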

Study Time and Location

A total of 63 studies (69 articles) were included in the review (Multimedia Appendix 1) [1,3,10,11,25-85]. Although PRWs have been available for more than 2 decades, the earliest study on PORs was published in 2009 [25], and most of the studies (61/63, 96.8%) were published after 2010. Out of 63 studies, 48 were conducted in the United States, including Puerto Rico, 5 in Germany [26-31], 3 in the United Kingdom [32-34], 3 in China [35-38], 3 in the Netherlands [39-41], 1 in Australia [42], and 1 in Canada [86] (Table 1).

Table 1. Study locations.

Country            n (%)
United States      48 (76.2)
Germany            5 (7.9)
United Kingdom     3 (4.8)
China              3 (4.8)
The Netherlands    3 (4.8)
Australia          1 (1.6)
Canada             1 (1.6)

Note: one study was conducted in both China and the United States.

Figure 2. Patient review websites (PRWs) used in the studies.

Patient Review Websites

Most studies (36/63, 57.1%) retrieved PORs from multiple PRWs, whereas 27 studies (42.9%) used a single PRW for data analysis. Some earlier studies googled "patient review" or providers' names to retrieve PORs, and the most popular or heavily promoted PRWs emerged through such searches. The PRWs used in these studies varied across countries. For example, in the United Kingdom, most physicians and hospitals were rated on the National Health Service Choices website, which was the single PRW used in the UK studies [32-34]. The most popular PRW in Germany was Jameda [28-31], whereas HaoDF was used in China [35-38]. In the United States, a large number of PORs have accrued on the most popular PRWs, including generic consumer review sites such as Yelp, Yahoo, and Google, as well as specialized PRWs such as RateMDs, HealthGrades, and Vitals (see Figure 2).

Studies that compared multiple PRWs found low correlations between these sites [43,44]. For example, Nwachukwu et al reported that correlations (r) between PRWs ranged from approximately 0.32 to 0.51 (P<.001) [43]. Physicians rated on one PRW were rated differently on other PRWs, and no PRW contained all consensus core domains of quality measures [45]. Some studies questioned the reliability of PRWs, given that most physicians have only a very small number of ratings [10,46].
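Many of the correlations reported in this review are rank correlations (Spearman ρ), which compare how two sources order the same providers rather than the raw scores. A minimal pure-Python sketch, using made-up mean ratings for 8 physicians on two hypothetical PRWs (not data from any reviewed study):

```python
def rankdata(values):
    """Assign ranks 1..n by sorted order (toy version; no tie handling)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0] * len(values)
    for rank, idx in enumerate(order, start=1):
        ranks[idx] = rank
    return ranks

def spearman_rho(x, y):
    """Spearman rank correlation via the sum-of-squared-rank-differences formula."""
    n = len(x)
    d2 = sum((rx - ry) ** 2 for rx, ry in zip(rankdata(x), rankdata(y)))
    return 1 - 6 * d2 / (n * (n * n - 1))

# Hypothetical mean ratings of the same 8 physicians on two PRWs.
site_a = [4.5, 3.8, 4.9, 2.7, 4.1, 3.5, 4.8, 3.0]
site_b = [4.0, 4.2, 4.6, 3.9, 3.7, 4.4, 4.5, 4.1]
print(round(spearman_rho(site_a, site_b), 2))  # → 0.52
```

A value near 0.5, as in this toy example, means the two sites only moderately agree on which physicians rank highest, which is consistent with the low-to-moderate inter-PRW correlations described above.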

Figure 3. Various types of providers reviewed.

Types of Providers Reviewed

Out of the 63 studies, 17 (27%) reported PORs of HCOs, including hospitals, urgent care centers (emergency departments), and nursing homes [27,32,34,39,40,44,47-53,87,88], and 55 (87.3%) focused on clinicians (some studied both). Of the 55 studies that reported PORs of clinicians, 14 (25.5%) included multiple types of physicians (general practitioners and specialists), 2 (3.6%) focused on general practitioners [33,54,55], 1 (1.8%) reported on physical therapists [88], 2 (3.6%) reported on dentists [30,88], and the remaining 36 (65.5%) focused on specialists, including surgeons, dermatologists, urologists, and Ob/Gyns. Of these 36 studies on specialists, 21 (58.3%) focused on various types of surgeons (see Figure 3).

Number of Providers Reviewed and Number of Ratings

The number of health care providers reviewed ranged from 20 to 212,933 with a median of 600. The number of PORs analyzed ranged from 30 to 2,685,066 with a median of 5439. The number of PORs included in the analyses has grown substantially in the past 9 years.

Not all physicians had an online rating. In Germany, only 37% of physicians were rated online [29]; a recent study in 3 metropolitan areas of the United States reported that 34% of physicians did not have any PORs and most physicians had no more than 1 review [10]. Even among physicians with online reviews, the number of PORs varied significantly across specialties. For example, 96% of cardiologists in the United States were rated online [56], whereas 25% of the hospitals included in the Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) had Yelp ratings [51]. Specialists were twice as likely to be rated online as general practitioners [26]; radiologists and pathologists were least likely to be rated online [42].

Study Design and Analytical Approaches

A considerable number of studies (27/63, 42.9%) were descriptive and reported only frequency analyses, including the average numbers of ratings per provider, the percentages of providers that have been reviewed online, and the mean scores of PORs. Studies that focused on HCOs and specialists typically identified the providers from a directory before searching for their PORs. In contrast, studies that focused on all types of providers typically retrieved PORs directly from PRWs without a preselected list of providers.

A total of 19 studies (19/63, 30.2%) analyzed the narrative comments of PORs. Earlier studies used traditional qualitative methods to extract major themes from these comments [3,25,32,54,55]. Recent studies have applied more advanced techniques such as natural language processing (NLP). For example, topic models such as Latent Dirichlet Allocation have been used to automatically cluster POR comments by topic [35,36,38,50,87]. Such advanced analytical methods have enabled content analysis of hundreds of thousands of narrative comments.

More than half of the studies (38/63, 60.3%) employed a comparative design. They typically compared PORs with (1) traditional surveys of patient experience [27,34], (2) providers' characteristics [28,29,33,34,57,58], (3) clinical outcomes such as patients' readmission or mortality rates [52,56,59], and (4) traditional "gold standard" health care quality indicators (eg, HCAHPS structural and quality-of-care measures) [39,44,45,47,50,53]. Furthermore, 1 study compared PORs between China and the United States [38], and 2 studies compared PORs across different PRWs [43,44].

General Findings of Patient Online Reviews

Most patients made positive comments about their providers and would recommend their providers to families and friends. Out of the 63 studies, 27 (42.9%) reported mean POR scores, which ranged from 2.37 to 4.51 (out of 5) with a median of 4 and an unweighted mean of 3.89.

The studies that analyzed the patients’ narrative comments found that these comments covered the entire health care encounter of the facility and staff [54,55], including physicians’ demeanor, staff friendliness, empathy, and cost [60,61]; patients also cared about the ease of scheduling, time spent with patients, and wait time [62].

Relationship of Patient Online Reviews and Providers’ Characteristics

Studies that compared PORs with provider characteristics found that physicians with higher ratings tended to (1) be female and younger [29,43,46]; (2) have a greater online presence [58]; (3) be board certified, with extensive training experience and graduation from a highly rated medical school [1,89]; and (4) have active status and more years in practice [63]; ratings also varied by (5) specialty [37,88] and (6) location [86]. However, some studies found no association between PORs and gender, region, or academic proclivity [46,64,65]. Furthermore, 1 study found that surgeons with a higher volume of procedures had higher POR ratings and better comments [59]. Patient characteristics also affected PORs; for example, female patients, seniors, and patients covered by private insurance were more likely to provide positive PORs [27,30].

Relationship of Patient Online Reviews and Traditional Patient Surveys

As summarized in Table 2, convergent findings suggested a strong association between PORs and traditional patient satisfaction surveys. For example, several studies found moderate-to-high degrees of correlation between PORs and HCAHPS patient experience measures [31,33,34,53,66] and the Press Ganey Medical Practice Survey of patient satisfaction [67]. Content analysis studies also reported considerable overlap between the narrative comments of PORs and the thematic domains of HCAHPS surveys [47,50,51]; some of these studies also identified additional domains not included in HCAHPS surveys [50,51]. Similar correlation findings were reported from studies in Germany [31] and the United Kingdom [33,34]. Furthermore, 2 studies from the Netherlands reported that hospitals under supervision or inspection by authorities had lower POR ratings [39,40].

Relationship of Patient Online Reviews and Clinical Outcomes and Other Quality Measures

Table 2 also summarizes the relationship between PORs and clinical outcomes and other quality measures. Most of the 9 studies on the relationship between PORs and clinical outcomes reported weak or no association [11,31,33,52,53,56,66,68,69]. For instance, a study of PORs on cardiologists found no correlation (Spearman ρ=−0.06; P=.13) between PORs and mortality rates following coronary artery bypass surgery [56]. Similarly, Greaves et al found only weak correlations between PORs and clinical outcomes of providers in the United Kingdom (Spearman ρ=−0.18 to 0.18; P<.001) [33]. By contrast, Bardach et al compared Yelp PORs of hospitals with quality measures from HCAHPS and found that higher POR scores were associated with better clinical outcomes, including lower mortality and readmission rates [53]. Studies also reported significant but low degrees of association between PORs and (1) patients' likelihood of visiting their primary care physicians within 14 days of discharge [11], (2) cost of care [11,66], (3) 30-day readmission and length of stay [69], and (4) other hospital-level CMS quality measures [44].

Table 2. Studies comparing patient online reviews with traditional health care quality indicators.

Greaves et al, 2012 [33]
Comparators: (1) Mailed patient surveys. (2) Clinical outcomes from the National Health Service (NHS) Information Centre and NHS Comparators (eg, proportion of patients with diabetes receiving flu vaccinations, proportion of hypertensive patients with controlled blood pressure, proportion of diabetic patients with controlled HbA1c, percentage of low-cost statin prescribing, cervical screening rate, admission rates for ambulatory care sensitive conditions, and proportion of achieved clinical Quality and Outcomes Framework (QOF) points). N (PORs)=16,592; N (physicians)=4934.
Results: (1) ρ=0.37 to 0.48, P<.001 for correlation of PORs and surveys. (2) ρ=−0.18 to 0.18, P<.001 for correlation of PORs and clinical outcomes.

Greaves et al, 2012 [34]
Comparator: Traditional survey of patient experience. N (PORs)=9,9997; N (physicians)=146.
Results: ρ=0.13 to 0.49, P<.001 for correlation of PORs and survey.

Segal et al, 2012 [59]
Comparator: Volume of surgeries. N (PORs)=588; N (surgeons)=600.
Results: High-volume surgeons had higher mean POR values than low-volume surgeons, but the effect size was weak.

Bardach et al, 2013 [53]
Comparators: (1) Overall hospital ratings on HCAHPS. (2) Individual hospital HCAHPS domain scores (eg, nurse communication, pain control). (3) Hospital 30-day mortality and 30-day readmission rates. N (PORs)=3796; N (hospitals)=962.
Results: Pearson correlation (n=270), ρ=0.49, P<.001 for 3 of 4 measures. Higher ratings were associated with lower mortality and readmission rates.

Wallace et al, 2014 [11]
Comparators: (1) Likelihood of a patient visiting their primary care physician within 14 days of hospital discharge. (2) Health care expenditure. N (PORs)=58,110; N (physicians)=19,636.
Results: (1) Regression model for sentiment generated from POR comments and the comparator, r2=.21, P=.03. (2) Regression model for POR ratings combined with topics generated from POR comments, r2=.25.

Glover et al, 2015 [52]
Comparator: 30-day hospital-wide all-cause unplanned readmission rate (HWR). PORs=Facebook comments; N (hospitals)=136.
Results: Independent-sample t test (n=315 vs 364), POR=4.15 (SD 0.31) vs 4.05 (SD 0.41), P<.01; more PORs were associated with a lower HWR.

Emmert et al, 2015 [31]
Comparators: (1) Quality measures on cost of medication, a type 2 diabetes-related intermediate outcome measure, and patient/doctor ratio from a German integrated health care network (QuE). (2) German patient satisfaction survey from QuE. N (PORs)=1179 on Jameda, 991 on Weisse Liste; N (physicians)=69.
Results: (1) Spearman rank correlation (n=991): ρ=0.297 to 0.384, P<.05 for cost per prescription; ρ=0.478, P<.05 for patients with HbA1c target values; ρ=−0.316 to −0.289, P<.05 for patient/doctor ratio on Weisse Liste; (n=1179): ρ=0.298, P<.05 for cost per case; ρ=0.298 to 0.386, P<.05 for patient/doctor ratio on Jameda. (2) Spearman rank correlation (n=991): ρ=−0.347 to −0.372, P<.05 for 3 of 4 measures on Weisse Liste; (n=1179): ρ=−0.391 to 0.640, P<.05 for all measures on Jameda.

Okike et al, 2016 [56]
Comparator: Risk-adjusted mortality rate. N (PORs)=NA (see footnote); N (surgeons)=590.
Results: Pearson correlation (n=590), r=−.06, P=.13.

Bardach et al, 2016 [51]
Comparator: Researcher-identified HCAHPS domains. N (PORs)=244 (narratives); N (hospitals)=193.
Results: Content analysis; 139/244 (57%) of POR comments mentioned HCAHPS domains.

Kilaru et al, 2016 [47]
Comparator: HCAHPS inpatient care surveys. N (PORs)=1736; N (emergency departments)=100.
Results: Content analysis; considerable overlap between POR themes and HCAHPS domains.

Ranard et al, 2016 [50]
Comparator: Researcher-identified HCAHPS domains. N (PORs)=16,862; N (hospitals)=1352.
Results: Content analysis; POR comments covered 7 of 11 HCAHPS domains and introduced 12 new domains not in HCAHPS.

Emmert et al, 2018 [44]
Comparator: Hospital-level quality measures from the CMS. N (PORs)=1000; N (hospitals)=623.
Results: (1) Spearman correlation ρ=±0.143, P<.05 for 13 of 29 measures; (2) Spearman correlation ρ=±0.114, P<.05 for 7 of 29 measures, indicating weak association.

Trehan et al, 2018 [68]
Comparator: Total knee replacement (TKR) outcomes: infection rate, 30-day readmission rate, 90-day readmission rate, and revision surgery. N (PORs)=NA; N (surgeons)=174.
Results: Kruskal-Wallis one-way analysis of variance (ANOVA on ranks) showed no correlation.

Campbell et al, 2018 [66]
Comparators: (1) HCAHPS patient satisfaction measures. (2) HCAHPS hospital-wide 30-day readmission rate. (3) Medicare spending per beneficiary ratio. N (PORs)=NA; N (hospitals)=136.
Results: (1) Multivariable linear regression (n=136), r2=.16 to .5, P<.05 for 21 of 23 measures; Pearson correlation (n=136), r=.27 to .61, P<.005 for 19 of 23 measures. (2) Multivariable linear regression, r2=−.58, P<.10 for readmission rate. (3) Multivariable linear regression, r2=−.006, P=.731 for Medicare spending per beneficiary. Overall weak association.

Jarari et al, 2018 [71]
Comparator: Nursing Home Compare (NHC) website quality measures.
Results: POR ratings differed significantly from NHC ratings.

Chen et al, 2018 [67]
Comparator: Press Ganey Medical Practice Survey of patient satisfaction. N (PORs)=NA; N (physicians)=200.
Results: Pearson correlation (n=226), r=.18, P<.001.

Daskivich et al, 2018 [69]
Comparators: Specialty-specific performance scores (adherence to Choosing Wisely measures, 30-day readmissions, length of stay, and adjusted cost of care), primary care physician peer-review scores, and administrator peer-review scores.
Results: Multivariable linear regression (n=30), r=−.04, P=.04.

NA: not available.

To the best of our knowledge, this is the first systematic review of studies on PORs. The 63 studies included in this review reflect a decade of peer-reviewed publications on PORs from 7 countries; their study designs and key findings have been summarized. Earlier studies tended to report characteristics of PORs, whereas later studies tended to compare PORs with traditional patient surveys or clinical outcomes.

Principal Findings

Our summary of the 63 existing studies on PORs revealed that the health care providers (including clinicians and HCOs) being reviewed represented only a small fraction of the total health care workforce. The number of reviews per clinician varied from zero to hundreds, indicating a very skewed distribution of PORs. Compared with general practitioners, specialists, especially surgeons, were more likely to be reviewed and included in analyses of PORs. Overall, the online ratings and comments were positive. Only a small number of studies examined the correlations of PORs with patient satisfaction and clinical outcomes. These studies suggested that PORs were highly correlated with the "patient experience" measured by traditional patient surveys, whereas findings on whether PORs were consistent with traditional measures of clinical outcomes remained inconclusive. Notably, the reviewed studies identified several domains of patient experience that were not covered by traditional patient surveys such as HCAHPS [50,51].

The current literature on PORs suggests a relatively new but fast-growing field. The number of published studies was small when compared with the exponential growth of PORs. Therefore, we have made the following recommendations for future studies on PORs.

First, studies with rigorous design, longitudinal nature, and larger samples are needed. POR studies present challenges of data acquisition and processing because of the nature of large and heterogeneous online data. The latest Web crawling techniques have enabled efficient retrieval of large quantities of POR data. Advanced analytical techniques such as machine learning and NLP can be employed to expedite large-scale analysis of PORs.
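As a toy illustration of the kind of automated text processing this recommendation refers to, the sketch below scores review sentiment with a handcrafted word list. This is far simpler than the trained NLP models used in the reviewed studies, and the lexicon and comments are invented for illustration:

```python
# Tiny illustrative sentiment lexicon (real pipelines use trained models).
POSITIVE = {"kind", "caring", "friendly", "excellent", "recommend", "helpful"}
NEGATIVE = {"rude", "dismissive", "wait", "billing", "terrible", "unhelpful"}

def sentiment_score(comment):
    """Net positive-minus-negative word count, normalized by comment length."""
    words = comment.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return score / max(len(words), 1)

# Invented review-style comments (hypothetical data).
reviews = [
    "very kind and helpful doctor, would recommend",
    "rude staff and a terrible wait for billing questions",
]
for r in reviews:
    print(f"{sentiment_score(r):+.2f}  {r}")
```

Even this crude scorer separates clearly positive from clearly negative comments; scaled up with trained models and large crawled corpora, the same idea supports the hypothesis-testing analyses the paragraph above calls for.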

Second, most existing studies focused on specialists in metropolitan areas [10,70-72]; more studies are needed on other types of health care providers and on those who serve nonmetropolitan areas. Studies of consumer-based assessments for underrepresented types of HCOs, such as nursing homes, public health services, and substance use treatment centers, are minimal or missing in the literature; only 1 study reported PORs for nursing homes [49]. Many of these HCOs serve vulnerable populations who are not typical PRW users, but their family caregivers and other advocates may also provide valid PORs.

Third, we anticipate more studies that go beyond simple descriptive analysis and test theory-based hypotheses to yield clinical and policy implications. In recent years, we have observed emerging studies that compared PORs with traditional measures of patient experience and clinical outcomes. However, the current literature is limited by a lack of consistent POR reporting and by insufficient advanced statistical analysis of POR data and their relationship with quality measures. We call for more empirical studies with meaningful hypotheses, rigorous designs, and appropriate data analytics.

Finally, we have observed that PORs have begun to cluster on a small number of popular PRWs (Figure 2). With the recent announcement of Amazon's entrance into health care [90], online reviews by health care consumers may become even more concentrated. Whether and how the clustering of PORs on a few dominant commercial PRWs would affect consumer health behaviors and health care quality remains unstudied.

Policy Implications

The growing body of literature on PORs indicates its increasing importance in patients’ decision making, which provides policy and practice implications for health care providers, patients, PRW owners, and policy makers.

Notably, health care providers should not underestimate the importance of PORs. Instead, they should recognize the importance of PRWs for their "digital brand" and stay aware of the PORs posted on popular PRWs [91]. Physicians can use anonymous PORs to evaluate patient satisfaction and assess patients' needs. In addition, friendly and personalized responses to PORs may enhance positive patient-provider communication [19].

From a consumer's perspective, patients need to know that only a small proportion of physicians have been reviewed online and that an average rating score might not be a sufficient basis for choosing a doctor, given the tendency of consumers to provide feedback on experiences that are unusually positive or negative. As posting about the health care experience becomes more commonplace, we anticipate a "consumer's guide" to help patients navigate PORs and make more informed choices [92,93].

For PRW owners: because PORs are often unstructured, not risk adjusted, and unverifiable, PRWs should take more social responsibility by adding design components that enable identity authentication, removing inflammatory or abusive comments, and helping patients use PRWs without being misled by misinformation [3,4,94]. We also call for a consistent rating scheme to facilitate the evaluation of providers using data from various PRWs.

For policy makers, whether PORs can be used as an indicator of health care quality remains controversial; nevertheless, policy makers and health care providers should acknowledge and embrace their increasing importance for patients [7,95]. PORs can reflect instant feedback on patients' medical encounters, the context of their ratings, and what they truly value. Some of the constructs of patient experience identified from analyzing PORs can be used to strengthen or complement current measures of health care quality and to provide rapid recognition of quality perception gaps, along with service corrections or other proactive quality interventions when needed [96]. Although we recognize the growing weight of PORs in consumer health behaviors and the potential of PORs for improving health care quality, we call for broader collaboration among key stakeholders, including patients, caregivers, health care providers, PRW owners, policy makers, and health services researchers, to engage in conversations and joint efforts to construct a positive patient-provider feedback loop.

Some potential biases should be noted when interpreting the results of this review. First, this review was limited to published studies that analyzed PORs; the findings therefore reflect those studies rather than the full universe of PORs. Given the vast and ever-growing number of PORs, only a small fraction has been studied and published. Second, only a small proportion of patients actually rate their medical encounters. These motivated patients are more likely to be younger, female, living in metropolitan areas, and spending more time online [4]; the existing PORs are therefore potentially biased. These biases are not methodological flaws in the conduct of this systematic review, but they warrant caution when interpreting study findings.


Limitations

In addition to the potential biases above, several limitations of this study should be noted. Although we searched the major databases thoroughly, it is possible that some relevant studies were missed. Because we concluded the search in January 2019, a few recently accepted papers were not included. Our search was limited to peer-reviewed literature, so we may have missed gray literature that is equally important for POR research. Additionally, because our review was limited to literature published in English, articles published in other languages were not covered. Finally, owing to the heterogeneity in study design and outcome reporting, we did not carry out an appraisal of study quality; and because the number of PORs per study ranged from a few dozen to hundreds of thousands and the ratings were based on different scales, we did not conduct a meta-analysis.


Conclusions

The current body of peer-reviewed literature on PORs is still small but growing rapidly. We found that PORs tended to be positive overall and that their narratives provide insights into multiple domains of patient experience and health care quality. We call for more research on PORs using rigorous designs and large samples, along with better use of POR information by patients, physicians, and policy makers. We also advocate for recommendations or guidelines on POR use to help patients make informed choices and to foster the application of PORs in improving health care quality.


Acknowledgments

The study was supported by the PESCA award and T3 grant from Texas A&M University.

Conflicts of Interest

None declared.

Multimedia Appendix 1

Summaries of published studies on patient online reviews (63 studies consisting of 69 articles).

PDF File (Adobe PDF File), 190KB

  1. Gao GG, McCullough JS, Agarwal R, Jha AK. A changing landscape of physician quality reporting: analysis of patients' online ratings of their physicians over a 5-year period. J Med Internet Res 2012;14(1):e38 [FREE Full text] [CrossRef] [Medline]
  2. Hanauer DA, Zheng K, Singer DC, Gebremariam A, Davis MM. Public awareness, perception, and use of online physician rating sites. J Am Med Assoc 2014 Feb 19;311(7):734-735. [CrossRef] [Medline]
  3. Lagu T, Hannon NS, Rothberg MB, Lindenauer PK. Patients' evaluations of health care providers in the era of social networking: an analysis of physician-rating websites. J Gen Intern Med 2010 Sep;25(9):942-946 [FREE Full text] [CrossRef] [Medline]
  4. Emmert M, Meier F, Pisch F, Sander U. Physician choice making and characteristics associated with using physician-rating websites: cross-sectional study. J Med Internet Res 2013;15(8):e187 [FREE Full text] [CrossRef] [Medline]
  5. Chang JT, Hays RD, Shekelle PG, MacLean CH, Solomon DH, Reuben DB, et al. Patients' global ratings of their health care are not associated with the technical quality of their care. Ann Intern Med 2006 May 2;144(9):665-672. [CrossRef] [Medline]
  6. Jain S. Googling ourselves--what physicians can learn from online rating sites. N Engl J Med 2010 Jan 7;362(1):6-7. [CrossRef] [Medline]
  7. Schlesinger M, Grob R, Shaller D, Martino SC, Parker AM, Finucane ML, et al. Taking patients' narratives about clinicians from anecdote to science. N Engl J Med 2015 Aug 13;373(7):675-679. [CrossRef] [Medline]
  8. Bacon N. Will doctor rating sites improve standards of care? Yes. Br Med J 2009;338:b1030. [Medline]
  9. McCartney M. Will doctor rating sites improve the quality of care? No. Br Med J 2009;338:b1033. [Medline]
  10. Lagu T, Metayer K, Moran M, Ortiz L, Priya A, Goff SL, et al. Website characteristics and physician reviews on commercial physician-rating websites. J Am Med Assoc 2017 Dec 21;317(7):766-768 [FREE Full text] [CrossRef] [Medline]
  11. Wallace BC, Paul MJ, Sarkar U, Trikalinos TA, Dredze M. A large-scale quantitative analysis of latent factors and sentiment in online doctor reviews. J Am Med Inform Assoc 2014;21(6):1098-1103 [FREE Full text] [CrossRef] [Medline]
  12. Hennig-Thurau T, Gwinner KP, Walsh G, Gremler DD. Electronic word-of-mouth via consumer-opinion platforms: What motivates consumers to articulate themselves on the internet? J Interact Market 2004 Jan;18(1):38-52. [CrossRef]
  13. Fung CH, Lim Y, Mattke S, Damberg C, Shekelle PG. Systematic review: the evidence that publishing patient care performance data improves quality of care. Ann Intern Med 2008 Jan 15;148(2):111-123. [Medline]
  14. Giordano LA, Elliott MN, Goldstein E, Lehrman WG, Spencer PA. Development, implementation, and public reporting of the HCAHPS survey. Med Care Res Rev 2010 Feb;67(1):27-37. [CrossRef] [Medline]
  15. Center for Medicare and Medicaid Services. Core Quality Measures   URL: [accessed 2018-10-17] [WebCite Cache]
  16. Lagu T, Lindenauer PK. Putting the public back in public reporting of health care quality. J Am Med Assoc 2010 Oct 20;304(15):1711-1712. [CrossRef] [Medline]
  17. Emmert M, Sander U, Pisch F. Eight questions about physician-rating websites: a systematic review. J Med Internet Res 2013;15(2):e24 [FREE Full text] [CrossRef] [Medline]
  18. Reimann S, Strech D. The representation of patient experience and satisfaction in physician rating sites. A criteria-based analysis of English- and German-language sites. BMC Health Serv Res 2010;10:332 [FREE Full text] [CrossRef] [Medline]
  19. Emmert M, Sauter L, Jablonski L, Sander U, Taheri-Zadeh F. Do physicians respond to web-based patient ratings? An analysis of physicians' responses to more than one million web-based ratings over a six-year period. J Med Internet Res 2017 Jul 26;19(7):e275 [FREE Full text] [CrossRef] [Medline]
  20. Alemi F, Torii M, Clementz L, Aron DC. Feasibility of real-time satisfaction surveys through automated analysis of patients' unstructured comments and sentiments. Qual Manag Health Care 2012;21(1):9-19. [CrossRef] [Medline]
  21. Greaves F, Ramirez-Cano D, Millett C, Darzi A, Donaldson L. Use of sentiment analysis for capturing patient experience from free-text comments posted online. J Med Internet Res 2013;15(11):e239 [FREE Full text] [CrossRef] [Medline]
  22. Terlutter R, Bidmon S, Röttl J. Who uses physician-rating websites? Differences in sociodemographic variables, psychographic variables, and health status of users and nonusers of physician-rating websites. J Med Internet Res 2014;16(3):e97 [FREE Full text] [CrossRef] [Medline]
  23. Moher D, Shamseer L, Clarke M, Ghersi D, Liberati A, Petticrew M, et al. Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015 statement. Syst Rev 2015 Jan;4:1 [FREE Full text] [CrossRef] [Medline]
  24. Verhoef LM, van de Belt TH, Engelen LJ, Schoonhoven L, Kool RB. Social media and rating sites as tools to understanding quality of care: a scoping review. J Med Internet Res 2014;16(2):e56 [FREE Full text] [CrossRef] [Medline]
  25. Black EW, Thompson LA, Saliba H, Dawson K, Black NM. An analysis of healthcare providers' online ratings. Inform Prim Care 2009;17(4):249-253 [FREE Full text] [Medline]
  26. Emmert M, Sander U, Esslinger AS, Maryschok M, Schöffski O. Public reporting in Germany: the content of physician rating websites. Methods Inf Med 2012;51(2):112-120. [CrossRef] [Medline]
  27. Drevs F, Hinz V. Who chooses, who uses, who rates: the impact of agency on electronic word-of-mouth about hospitals stays. Health Care Manage Rev 2014;39(3):223-233. [CrossRef] [Medline]
  28. Emmert M, Meier F, Heider A, Dürr C, Sander U. What do patients say about their physicians? an analysis of 3000 narrative comments posted on a German physician rating website. Health Policy 2014 Oct;118(1):66-73. [CrossRef] [Medline]
  29. Emmert M, Meier F. An analysis of online evaluations on a physician rating website: evidence from a German public reporting instrument. J Med Internet Res 2013;15(8):e157 [FREE Full text] [CrossRef] [Medline]
  30. Emmert M, Halling F, Meier F. Evaluations of dentists on a German physician rating website: an analysis of the ratings. J Med Internet Res 2015;17(1):e15 [FREE Full text] [CrossRef] [Medline]
  31. Emmert M, Adelhardt T, Sander U, Wambach V, Lindenthal J. A cross-sectional study assessing the association between online ratings and structural and quality of care measures: results from two German physician rating websites. BMC Health Serv Res 2015;15(1):414 [FREE Full text] [CrossRef] [Medline]
  32. Lagu T, Goff SL, Hannon NS, Shatz A, Lindenauer PK. A mixed-methods analysis of patient reviews of hospital care in England: implications for public reporting of health care quality data in the United States. Jt Comm J Qual Patient Saf 2013 Jan;39(1):7-15. [Medline]
  33. Greaves F, Pape UJ, Lee H, Smith DM, Darzi A, Majeed A, et al. Patients' ratings of family physician practices on the internet: usage and associations with conventional measures of quality in the English National Health Service. J Med Internet Res 2012;14(5):e146 [FREE Full text] [CrossRef] [Medline]
  34. Greaves F, Pape UJ, King D, Darzi A, Majeed A, Wachter RM, et al. Associations between internet-based patient ratings and conventional surveys of patient experience in the English NHS: an observational study. BMJ Qual Saf 2012 Jul;21(7):600-605. [CrossRef] [Medline]
  35. Hao H, Zhang K. The voice of Chinese health consumers: a text mining approach to web-based physician reviews. J Med Internet Res 2016 May 10;18(5):e108 [FREE Full text] [CrossRef] [Medline]
  36. Hao H. The development of online doctor reviews in China: an analysis of the largest online doctor review website in China. J Med Internet Res 2015 Jun 1;17(6):e134 [FREE Full text] [CrossRef] [Medline]
  37. Zhang W, Deng Z, Hong Z, Evans R, Ma J, Zhang H. Unhappy patients are not alike: content analysis of the negative comments from China's Good Doctor website. J Med Internet Res 2018 Jan 25;20(1):e35 [FREE Full text] [CrossRef] [Medline]
  38. Hao H, Zhang K, Wang W, Gao G. A tale of two countries: international comparison of online doctor reviews between China and the United States. Int J Med Inform 2017 Mar;99:37-44. [CrossRef] [Medline]
  39. Kool RB, Kleefstra SM, Borghans I, Atsma F, van de Belt TH. Influence of intensified supervision by health care inspectorates on online patient ratings of hospitals: a multilevel study of more than 43,000 online ratings. J Med Internet Res 2016 Dec 15;18(7):e198 [FREE Full text] [CrossRef] [Medline]
  40. Hendrikx RJ, Spreeuwenberg MD, Drewes HW, Struijs JN, Ruwaard D, Baan CA. Harvesting the wisdom of the crowd: using online ratings to explore care experiences in regions. BMC Health Serv Res 2018 Oct 20;18(1):801 [FREE Full text] [CrossRef] [Medline]
  41. van de Belt TH, Engelen LJL, Verhoef LM, van der Weide MJ, Schoonhoven L, Kool RB. Using patient experiences on Dutch social media to supervise health care services: exploratory study. J Med Internet Res 2015;17(1):e7 [FREE Full text] [CrossRef] [Medline]
  42. Atkinson S. Current status of online rating of Australian doctors. Aust J Prim Health 2014;20(3):222-223. [CrossRef] [Medline]
  43. Nwachukwu BU, Adjei J, Trehan SK, Chang B, Amoo-Achampong K, Nguyen JT, et al. Rating a Sports Medicine Surgeon's. HSS J 2016 Oct;12(3):272-277 [FREE Full text] [CrossRef] [Medline]
  44. Emmert M, Meszmer N, Schlesinger M. A cross-sectional study assessing the association between online ratings and clinical quality of care measures for US hospitals: results from an observational study. BMC Health Serv Res 2018 Dec 5;18(1):82 [FREE Full text] [CrossRef] [Medline]
  45. Ramkumar PN, Navarro SM, Chughtai M, La T, Fisch E, Mont MA. The patient experience: an analysis of orthopedic surgeon quality on physician-rating sites. J Arthroplasty 2017 Dec;32(9):2905-2910. [CrossRef] [Medline]
  46. Kirkpatrick W, Abboudi J, Kim N, Medina J, Maltenfort M, Seigerman D, et al. An assessment of online reviews of hand surgeons. Arch Bone Jt Surg 2017 May;5(3):139-144 [FREE Full text] [Medline]
  47. Kilaru AS, Meisel ZF, Paciotti B, Ha YP, Smith RJ, Ranard BL, et al. What do patients say about emergency departments in online reviews? A qualitative study. BMJ Qual Saf 2016 Jan;25(1):14-24. [CrossRef] [Medline]
  48. Tran NN, Lee J. Online reviews as health data: examining the association between availability of health care services and patient star ratings exemplified by the Yelp Academic dataset. JMIR Public Health Surveill 2017 Jul 12;3(3):e43 [FREE Full text] [CrossRef] [Medline]
  49. Johari K, Kellogg C, Vazquez K, Irvine K, Rahman A, Enguidanos S. Ratings game: an analysis of Nursing Home Compare and Yelp ratings. BMJ Qual Saf 2018 Aug;27(8):619-624. [CrossRef] [Medline]
  50. Ranard BL, Werner RM, Antanavicius T, Schwartz HA, Smith RJ, Meisel ZF, et al. Yelp reviews of hospital care can supplement and inform traditional surveys of the patient experience of care. Health Aff (Millwood) 2016 Apr;35(4):697-705. [CrossRef] [Medline]
  51. Bardach NS, Lyndon A, Asteria-Peñaloza R, Goldman LE, Lin GA, Dudley RA. From the closest observers of patient care: a thematic analysis of online narrative reviews of hospitals. BMJ Qual Saf 2016 Dec;25(11):889-897 [FREE Full text] [CrossRef] [Medline]
  52. Glover M, Khalilzadeh O, Choy G, Prabhakar AM, Pandharipande PV, Gazelle GS. Hospital evaluations by social media: a comparative analysis of Facebook ratings among performance outliers. J Gen Intern Med 2015 Oct;30(10):1440-1446 [FREE Full text] [CrossRef] [Medline]
  53. Bardach NS, Asteria-Peñaloza R, Boscardin WJ, Dudley RA. The relationship between commercial website ratings and traditional hospital performance measures in the USA. BMJ Qual Saf 2013 Mar;22(3):194-202 [FREE Full text] [CrossRef] [Medline]
  54. López A, Detz A, Ratanawongsa N, Sarkar U. What patients say about their doctors online: a qualitative content analysis. J Gen Intern Med 2012 Jun;27(6):685-692 [FREE Full text] [CrossRef] [Medline]
  55. Detz A, López A, Sarkar U. Long-term doctor-patient relationships: patient perspective from online reviews. J Med Internet Res 2013;15(7):e131 [FREE Full text] [CrossRef] [Medline]
  56. Okike K, Peter-Bibb TK, Xie KC, Okike ON. Association between physician online rating and quality of care. J Med Internet Res 2016 Dec 13;18(12):e324 [FREE Full text] [CrossRef] [Medline]
  57. Riemer C, Doctor M, Dellavalle RP. Analysis of online ratings of dermatologists. JAMA Dermatol 2016 Feb;152(2):218-219. [CrossRef] [Medline]
  58. Trehan SK, DeFrancesco CJ, Nguyen JT, Charalel RA, Daluiski A. Online patient ratings of hand surgeons. J Hand Surg Am 2016 Jan;41(1):98-103. [CrossRef] [Medline]
  59. Segal J, Sacopulos M, Sheets V, Thurston I, Brooks K, Puccia R. Online doctor reviews: do they track surgeon volume, a proxy for quality of care? J Med Internet Res 2012;14(2):e50 [FREE Full text] [CrossRef] [Medline]
  60. Smith RJ, Lipoff JB. Evaluation of dermatology practice online reviews: lessons from qualitative analysis. JAMA Dermatol 2016 Feb;152(2):153-157. [CrossRef] [Medline]
  61. Goshtasbi K, Lehrich BM, Moshtaghi O, Abouzari M, Sahyouni R, Bagheri K, et al. Patients' online perception and ratings of neurotologists. Otol Neurotol 2019 Jan;40(1):139-143. [CrossRef] [Medline]
  62. Bakhsh W, Mesfin A. Online ratings of orthopedic surgeons: analysis of 2185 reviews. Am J Orthop (Belle Mead NJ) 2014 Aug;43(8):359-363. [Medline]
  63. Haglin JM, Eltorai AE, Kalagara S, Kingrey B, Durand WM, Aidlen JP, et al. Patient-rated trust of spine surgeons: influencing factors. Global Spine J 2018 Oct;8(7):728-732 [FREE Full text] [CrossRef] [Medline]
  64. Ellimoottil C, Hart A, Greco K, Quek ML, Farooq A. Online reviews of 500 urologists. J Urol 2013 Jun;189(6):2269-2273. [CrossRef] [Medline]
  65. Skrzypecki J, Przybek J. Physician review portals do not favor highly cited US ophthalmologists. Semin Ophthalmol 2018;33(4):547-551. [CrossRef] [Medline]
  66. Campbell L, Li Y. Are Facebook user ratings associated with hospital cost, quality and patient satisfaction? A cross-sectional analysis of hospitals in New York State. BMJ Qual Saf 2018 Dec;27(2):119-129. [CrossRef] [Medline]
  67. Chen J, Presson A, Zhang C, Ray D, Finlayson S, Glasgow R. Online physician review websites poorly correlate to a validated metric of patient satisfaction. J Surg Res 2018 Jul;227:1-6. [CrossRef] [Medline]
  68. Trehan SK, Nguyen JT, Marx R, Cross MB, Pan TJ, Daluiski A, et al. Online patient ratings are not correlated with total knee replacement surgeon-specific outcomes. HSS J 2018 Jul;14(2):177-180. [CrossRef] [Medline]
  69. Daskivich TJ, Houman J, Fuller G, Black JT, Kim HL, Spiegel B. Online physician ratings fail to predict actual performance on measures of quality, value, and peer review. J Am Med Inform Assoc 2017 Sep 8;25(4):401-407. [CrossRef] [Medline]
  70. Frost C, Mesfin A. Online reviews of orthopedic surgeons: an emerging trend. Orthopedics 2015 Apr;38(4):e257-e262. [CrossRef] [Medline]
  71. Kadry B, Chu LF, Kadry B, Gammas D, Macario A. Analysis of 4999 online physician ratings indicates that most patients give physicians a favorable rating. J Med Internet Res 2011;13(4):e95 [FREE Full text] [CrossRef] [Medline]
  72. Dorfman RG, Purnell C, Qiu C, Ellis MF, Basu CB, Kim JY. Happy and unhappy patients: a quantitative analysis of online plastic surgeon reviews for breast augmentation. Plast Reconstr Surg 2018 Dec;141(5):663e-673e. [CrossRef] [Medline]
  73. Sobin L, Goyal P. Trends of online ratings of otolaryngologists: what do your patients really think of you? JAMA Otolaryngol Head Neck Surg 2014 Jul;140(7):635-638. [CrossRef] [Medline]
  74. Lewis P, Kobayashi E, Gupta S. An online review of plastic surgeons in southern California. Ann Plast Surg 2015 May;74(Suppl 1):S66-S70. [CrossRef] [Medline]
  75. Murphy GP, Awad MA, Osterberg EC, Gaither TW, Chumnarnsongkhroh T, Washington SL, et al. Web-based physician ratings for California physicians on probation. J Med Internet Res 2017 Aug 22;19(8):e254 [FREE Full text] [CrossRef] [Medline]
  76. Donnally CJ, Roth ES, Li DJ, Maguire JA, McCormick JR, Barker GP, et al. Analysis of internet review site comments for spine surgeons: how office staff, physician likeability, and patient outcome are associated with online evaluations. Spine (Phila Pa 1976) 2018 Dec 15;43(24):1725-1730. [CrossRef] [Medline]
  77. Donnally CJ, McCormick JR, Li DJ, Maguire JA, Barker GP, Rush AJ, et al. How do physician demographics, training, social media usage, online presence, and wait times influence online physician review scores for spine surgeons? J Neurosurg Spine 2018 Nov 1:1-10. [CrossRef] [Medline]
  78. Donnally CJ, Li DJ, Maguire JA, Roth ES, Barker GP, McCormick JR, et al. How social media, training, and demographics influence online reviews across three leading review websites for spine surgeons. Spine J 2018 Nov;18(11):2081-2090. [CrossRef] [Medline]
  79. Randhawa S, Viqar A, Strother J, Prabhu AV, Xia F, Heron D, et al. How do patients rate their radiation oncologists in the modern era: an analysis of Cureus 2018 Sep 17;10(9):e3312 [FREE Full text] [CrossRef] [Medline]
  80. Kalagara S, Eltorai AE, DePasse JM, Daniels AH. Predictive factors of positive online patient ratings of spine surgeons. Spine J 2019 Jan;19(1):182-185. [CrossRef] [Medline]
  81. Zhang J, Omar A, Mesfin A. Online ratings of spine surgeons: analysis of 208 surgeons. Spine (Phila Pa 1976) 2018 Dec 15;43(12):E722-E726. [CrossRef] [Medline]
  82. Prabhu AV, Randhawa S, Clump D, Heron DE, Beriwal S. What do patients think about their radiation oncologists? An assessment of online patient reviews on Healthgrades. Cureus 2018 Feb 6;10(2):e2165 [FREE Full text] [CrossRef] [Medline]
  83. Jack RA, Burn MB, McCulloch PC, Liberman SR, Varner KE, Harris JD. Does experience matter? A meta-analysis of physician rating websites of orthopaedic surgeons. Musculoskelet Surg 2018 Apr;102(1):63-71. [CrossRef] [Medline]
  84. Daskivich T, Luu M, Noah B, Fuller G, Anger J, Spiegel B. Differences in online consumer ratings of health care providers across medical, surgical, and allied health specialties: observational study of 212,933 providers. J Med Internet Res 2018 May 9;20(5):e176 [FREE Full text] [CrossRef] [Medline]
  85. McGrath RJ, Priestley JL, Zhou Y, Culligan PJ. The validity of online patient ratings of physicians: analysis of physician peer reviews and patient ratings. Interact J Med Res 2018 Apr 9;7(1):e8 [FREE Full text] [CrossRef] [Medline]
  86. Liu JJ, Matelski JJ, Bell CM. Scope, breadth, and differences in online physician ratings related to geography, specialty, and year: observational retrospective study. J Med Internet Res 2018 Mar 7;20(3):e76 [FREE Full text] [CrossRef] [Medline]
  87. Agarwal AK, Mahoney K, Lanza AL, Klinger EV, Asch DA, Fausti N, et al. Online ratings of the patient experience: emergency departments versus urgent care centers. Ann Emerg Med 2018 Nov 1:1-8 (forthcoming). [CrossRef] [Medline]
  88. Geletta S. Measuring patient satisfaction with medical services using social media generated data. Int J Health Care Qual Assur 2018 Mar 12;31(2):96-105. [CrossRef] [Medline]
  89. Cloney M, Hopkins B, Shlobin N, Dahdaleh NS. Online ratings of neurosurgeons: an examination of web data and its implications. Neurosurgery 2018 Apr 3;83(6):1143-1152. [CrossRef] [Medline]
  90. Farr. CNBC. As Amazon moves into health care, here's what we know—and what we suspect—about its plans   URL: [accessed 2018-10-17] [WebCite Cache]
  91. Wald JT, Timimi FK, Kotsenas AL. Managing physicians' medical brand. Mayo Clin Proc 2017 Dec;92(4):685-686. [CrossRef] [Medline]
  92. Yaraghi N, Wang W, Gao GG, Agarwal R. How online quality ratings influence patients' choice of medical providers: controlled experimental survey study. J Med Internet Res 2018 Mar 26;20(3):e99 [FREE Full text] [CrossRef] [Medline]
  93. Guo L, Jin B, Yao C, Yang H, Huang D, Wang F. Which doctor to trust: a recommender system for identifying the right doctors. J Med Internet Res 2016 Dec 7;18(7):e186 [FREE Full text] [CrossRef] [Medline]
  94. Strech D. Ethical principles for physician rating sites. J Med Internet Res 2011;13(4):e113 [FREE Full text] [CrossRef] [Medline]
  95. Fischer S, Emmert M. A review of scientific evidence for public perspectives on online rating websites of healthcare providers. In: Challenges and Opportunities in Health Care Management. Cham: Springer; 2015:279-290.
  96. Greaves F, Ramirez-Cano D, Millett C, Darzi A, Donaldson L. Harnessing the cloud of patient experience: using social media to detect poor quality healthcare. BMJ Qual Saf 2013 Mar;22(3):251-255. [CrossRef] [Medline]

CMS: Center for Medicare and Medicaid Services
HCO: health care organization
HCAHPS: Hospital Consumer Assessment of Healthcare Providers and Systems
NLP: natural language processing
POR: patient online review
PRW: patient review website

Edited by G Eysenbach; submitted 17.10.18; peer-reviewed by S Gupta, M Paul; comments to author 06.12.18; revised version received 16.12.18; accepted 31.01.19; published 08.04.19


©Y Alicia Hong, Chen Liang, Tiffany A Radcliff, Lisa T Wigfall, Richard L Street. Originally published in the Journal of Medical Internet Research (, 08.04.2019.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on, as well as this copyright and license information must be included.