Background: Physician-rating websites are increasingly being used by patients to help guide physician choice. As such, an understanding of these websites and the factors that influence ratings is valuable to physicians.
Objective: We sought to perform a comprehensive analysis of online urology ratings information, with a specific focus on the relationship between number of ratings or comments and overall physician rating.
Methods: We analyzed urologist ratings on the Healthgrades website. The data retrieval focused on physician and staff ratings information. Our analysis included descriptive statistics of physician and staff ratings and correlation analysis between physician or staff performance and overall physician rating. Finally, we performed a best-fit analysis to assess for an association between number of physician ratings and overall rating.
Results: From a total of 9921 urology profiles analyzed, there were 99,959 ratings and 23,492 comments. Most ratings were either 5 (“excellent”) (67.53%, 67,505/99,959) or 1 (“poor”) (24.22%, 24,218/99,959). All physician and staff performance ratings demonstrated a positive and statistically significant correlation with overall physician rating (P<.001 for all analyses). Best-fit analysis demonstrated a negative relationship between number of ratings or comments and overall rating until physicians achieved 21 ratings or 6 comments. Thereafter, a positive relationship was seen.
Conclusions: In our study, a dichotomous rating distribution was seen with more than 90% of ratings being either excellent or poor. A negative relationship between number of ratings or comments and overall rating was initially seen, after which a positive relationship was demonstrated. Combined, these data suggest that physicians can benefit from understanding online ratings and that proactive steps to encourage patient rating submissions may help optimize overall rating.
Recent data demonstrate that most Americans use the internet to search for health information [- ]. In addition, a large percentage of patients obtain information about physicians through internet resources and identify online websites as important in their choice of health care providers [ , ]. A prior study evaluating patient trends reported that 59% of the US population considered physician-rating websites (PRWs) to be somewhat important in choosing their health care providers [ ]. At the same time, there has been tremendous growth in the number of PRWs [ ]. At least 28 PRWs display information about physician training and allow users to rate physician or staff characteristics.
Although criticisms regarding the validity of PRWs are often raised by physicians, these data show the importance of online physician reputations. In addition to guiding patients' choices as health care consumers, online rating systems are part of a more widespread focus on the patient experience. Accordingly, the Hospital Quality Alliance was established in an effort to promote transparency of care quality reporting. The initiatives of the Hospital Quality Alliance and Medicare are seen in publicly available data focused on core care measures, including patient surveys about their care (Hospital Consumer Assessment of Healthcare Providers and Systems). As such measures of patient experience become more commonly used to assess care quality and influence reimbursement models (eg, value-based purchasing), it becomes even more important that physicians maintain a working knowledge of online patient reviews.
Even so, investigation suggests that many physicians have little familiarity with PRWs, do not commonly check their own reviews, and spend minimal time managing their digital reputation. Although reputation management is a frequent focus in the commerce and marketing literature, little has been written about online reputation management for physicians. Suggestions for optimizing online ratings within the general literature include actively encouraging patients to submit ratings and responding to negative comments online [ ].
Within the urology literature, we could identify only two studies focused on the assessment of online ratings [, ]. Thus, we sought to comprehensively assess online ratings in a large cohort of urologists. Specific study aims included the assessment of the relationship between number of ratings and the overall mean rating. We hypothesized that number of ratings would demonstrate a positive correlation with overall rating, as this may reflect initiatives by certain physicians to actively manage their reputation and encourage patients to submit online comments or ratings. We also sought to characterize the distribution of ratings and to assess for a correlation between individual physician and staff characteristic ratings and overall rating.
We conducted an analysis of urologic physician ratings and related information on the website Healthgrades. Data retrieval was facilitated using Java (version 8). Specific focus was placed on aggregating data related to physician and staff ratings, including number and distribution of ratings, number of comments, physician performance characteristics, as well as office and staff performance characteristics.
In brief, overall physician ratings are provided as a score between 1 and 5 (1=poor; 5=excellent). Ratings are also available for a specific physician (trustworthiness, explains conditions well, answers questions, time well spent) and staff (scheduling, office environment, staff friendliness) performance variables. Each of these physician and staff performance variables is also rated on a score of 1 to 5.
Inclusion and exclusion criteria were designed in an effort to focus on a cohort of actively practicing urologists and exclude those that may be in residency, retired, or deceased. Accordingly, physicians with a known age of 35 to 74 years were included. These age criteria were selected after an initial data review of age-related ratings distribution, which revealed that the majority of physicians with ages younger than 35 or older than 74 years had zero ratings. Physicians without data specifically detailing age or an age estimation (years out from medical school) were also excluded.
Analysis first focused on descriptive statistics to assess overall physician rating, number of ratings or comments per physician, and ratings related to specific physician and staff performance variables. Variables are presented as mean and standard deviation. We then assessed for a Pearson correlation between physician and staff performance variables and overall physician rating. Finally, we performed a best-fit analysis to assess for an association between number of physician ratings or comments and overall rating. Statistical analysis was performed using R (version 3.4.1). All tests were performed with α=0.05. The University of Virginia (Charlottesville, VA) institutional review board determined that this study met the criteria for nonhuman research (IRB #: 20592).
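The correlation step above can be illustrated with a minimal sketch. The published analysis was performed in R; the following Python fragment, using hypothetical sample data rather than the study data, simply shows how a Pearson coefficient of the kind reported in the Results is computed.

```python
import math


def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)


# Hypothetical per-physician means: "answers questions" rating vs overall rating
answers = [4.8, 4.5, 2.1, 3.9, 4.9, 1.5, 4.2]
overall = [4.9, 4.4, 2.0, 3.8, 5.0, 1.4, 4.1]
print(round(pearson_r(answers, overall), 3))
```

A coefficient near 1 would indicate the kind of strong positive association the study reports between performance variables and overall rating.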
Data were retrieved for 14,430 urologists, of whom 9921 met the inclusion criteria and were included in the study analysis. A total of 99,959 ratings and 23,492 comments were seen across these 9921 urologists. The mean number of ratings and comments per urologist was 10.1 (SD 4.3) and 2.4 (SD 6.0), respectively. In addition, a wide range in the number of ratings (0-395) and comments (0-241) per urologist was seen. Analysis demonstrated that 1554 of 9921 physicians (15.66%) had zero ratings and 4077 of 9921 (41.09%) had zero comments.
The distribution of ratings is shown in the figure. The vast majority of ratings were either 5 (“excellent”) (67.53%, 67,505/99,959) or 1 (“poor”) (24.22%, 24,218/99,959). The mean overall physician rating was 3.9 (SD 1.7). Physician and staff performance variable statistics and their correlations with overall rating are detailed in the table below. All physician and staff performance ratings demonstrated a positive and statistically significant correlation with overall physician rating. The correlation coefficients (R values) for trustworthiness and answers questions were highest. Physician measures had higher correlations with overall rating than did staff measures.
Best-fit analyses of the relationship between number of ratings and overall physician rating, as well as between number of comments and overall physician rating, are shown in the figures, with locally weighted smoothing added for clarity. A negative relationship between the number of ratings and overall rating was seen until physicians achieved 21 ratings; thereafter, a positive relationship was seen. Similarly, a U-shaped relationship was seen between number of comments and overall rating, with the transition point being six comments.
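The study's best-fit analysis was performed in R with locally weighted smoothing; as a rough illustrative sketch only, a simple quadratic least-squares fit can locate the bottom of a U-shaped relationship of this kind. All data in the example below are hypothetical, not the study data.

```python
def fit_quadratic(xs, ys):
    """Least-squares fit of y = a*x^2 + b*x + c via the 3x3 normal equations."""
    n = len(xs)
    s = lambda p: sum(x ** p for x in xs)  # power sums of x
    A = [[s(4), s(3), s(2)],
         [s(3), s(2), s(1)],
         [s(2), s(1), n]]
    rhs = [sum(y * x ** 2 for x, y in zip(xs, ys)),
           sum(y * x for x, y in zip(xs, ys)),
           sum(ys)]
    # Gaussian elimination with partial pivoting.
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            for c in range(col, 3):
                A[r][c] -= f * A[col][c]
            rhs[r] -= f * rhs[col]
    # Back substitution.
    coef = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        coef[r] = (rhs[r] - sum(A[r][c] * coef[c] for c in range(r + 1, 3))) / A[r][r]
    return coef  # (a, b, c)


def nadir(xs, ys):
    """x-value where the fitted U-shaped curve bottoms out: -b / (2a)."""
    a, b, _ = fit_quadratic(xs, ys)
    return -b / (2 * a)
```

For example, applying `nadir` to hypothetical (number of ratings, mean rating) pairs whose ratings dip and then recover would return the transition point, analogous to the 21-rating nadir reported above. A quadratic vertex is a cruder summary than locally weighted smoothing, which makes no assumption about the curve's shape away from the minimum.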
| Performance variable | Rating, mean (SD) | Correlation with overall rating^a, R | P value |
| --- | --- | --- | --- |
| Explains conditions well | 3.99 (0.88) | .953 | <.001 |
| Answers questions | 4.20 (0.70) | .957 | <.001 |
| Time well spent | 4.02 (0.76) | .947 | <.001 |
| Office environment | 3.96 (0.88) | .796 | <.001 |
| Staff friendliness | 3.98 (0.88) | .817 | <.001 |

^a Overall rating, mean (SD)=3.87 (1.72).
This study reveals several important findings. To our knowledge, this is the largest study to comprehensively analyze the online ratings of a nationwide sample of urologists. Overall, more than 80% of urologists had at least one rating, demonstrating the use of PRWs by patients. These data are consistent with prior studies evaluating urologist ratings, which found that only a small percentage of physicians had no ratings associated with their profiles [, ]. Interestingly, other investigations demonstrated that a large percentage of physicians overall have no ratings, but that specialists are twice as likely as generalists to have online ratings [ , ]. Combined with our investigation, these data highlight the high utilization of PRWs within the surgical community and the need for physicians to be aware of their online ratings.
Second, the vast majority of ratings submitted were either excellent (5) or poor (1), with almost one-quarter of ratings being 1. Other studies have described ratings distributions, with most ratings being positive. Kadry and colleagues demonstrated that approximately two of three patient reviews were favorable across 23 specialties. Lagu et al [ ] found that 88% of ratings were positive for both generalists and specialists (defined as a rating of 3 or greater on a 4-point scale). A prior study also demonstrated that most comments on PRWs are positive [ ]. Generally, our findings are consistent with these prior investigations suggesting that most ratings are either excellent or poor [ ]. Notably, in a review of physician ratings on RateMDs.com, Gao et al [ ] found that 42% of ratings fell between 2 and 4 on a 5-point scale. This differs significantly from our results, which demonstrate an extreme dichotomy of ratings.
Further, all performance variables assessed correlated strongly with overall physician rating, suggesting that each of these variables influences a patient’s overall satisfaction with the visit. A prior study demonstrated a statistically significant correlation between staff and physician ratings. In addition, Kadry and colleagues [ ] reported a strong correlation between diverse dimensions of the patient appointment and overall rating; the dimensions assessed included communication skills (eg, listens and answers questions) and access (eg, ease of appointment, punctuality). Our analysis adds to this literature by assessing further variables that may influence patient satisfaction.
Most notably, our analysis demonstrated a U-shaped relationship between number of ratings and overall mean rating; a similar relationship was observed between number of comments and overall mean rating. Accordingly, before a physician accrued 21 patient ratings, a negative relationship was demonstrated between number of ratings and overall mean rating, followed by a positive relationship. Similarly, a rating nadir was seen at six comments, after which a positive relationship was seen. We hypothesize that this relationship arises from the outsized impact that a single poor rating can have on the overall mean when there are few ratings. Prior opinion supports this theory, suggesting that when there are few ratings, one outlying value or comment can have a disproportionately large influence.
Combined, these findings emphasize the importance of urologists actively knowing and managing their online reputation. Experience with online reputation management suggests that a single negative review likely has a greater influence than multiple positive evaluations. Despite this, a large percentage of physicians do not check their online profiles [ , ]. Physician criticism of PRWs is understandable given previous studies demonstrating inconsistencies between patient ratings and quality of care [ - ]. However, given data demonstrating the rapid increase in patient utilization of PRWs, it is important that physicians have a working knowledge of their online reputation. Further, patients are increasingly providing online reviews of hospitals and treatment centers [ ]. As patient satisfaction with physicians can also be influenced by the patient experience at the hospitals where those physicians provide care, physician awareness of these facility reviews is also important.
In addition, physicians should consider active steps to help optimize their ratings. Foremost, our data suggest that efforts should focus on building the total volume of reviews on PRWs. Suggested approaches involve the use of marketing collateral to solicit reviews, including patient cards, videos, and emails. In addition, patients can be encouraged to complete online ratings and surveys at the time of the encounter, offering a more proactive approach [ ]. Finally, appropriately addressing negative comments or providing personalized responses to reviews is suggested as a method of demonstrating a physician’s focus on the patient experience to other potential patients visiting the PRW [ ]. This is important given a study showing that only 39% of physicians agree with their profile ratings [ ]. Further study is ongoing at our institution to assess specific methods of optimizing patient engagement and ratings.
Beyond commercial PRWs, attention to additional online forums can help optimize physicians’ digital reputations. One such method is using online professional networking websites (eg, Doximity) to publish professional accomplishments. The creation of a personal blog offers another way for physicians to share information with patients [ ]. Further, the use of social media pages (eg, Facebook) can be an effective method of managing online reputation; indeed, social media presence, such as number of Facebook followers, has been shown to correlate with US News and World Report reputation score [ ]. Finally, utilizing noncommercial PRWs can also be valuable because the percentage of positive comments has been shown to be higher on health systems’ own review websites than on commercial PRWs [ ].
Study limitations include the focus on ratings from a single PRW; accordingly, the findings may not be representative of trends across all PRWs. Healthgrades was selected because it is the most widely used PRW. Supporting this choice, a prior systematic review showed that Healthgrades is the PRW most commonly assessed in published investigations [ ]. In addition, our study aim was to systematically assess ratings information across a large cohort of urologists through the use of Java programming; indeed, our cohort consisted of almost 10,000 urology profiles. However, given this methodology, analysis of text comments was not possible. Similarly, given the variability in rating scales and domains across PRWs, the inclusion of multiple PRWs and systematic comparison between them is difficult. Novel methods of automated analysis of text reviews have recently been reported and may allow for a more comprehensive study of text-based patient reviews in the future [ , ].
Nonetheless, we believe our study conclusions are strengthened by the large size of our cohort. A prior systematic review of studies on patient online reviews demonstrated that, in general, the number of providers with online reviews reported in investigations represented only a small percentage of the total workforce. Recent data from the American Board of Medical Specialties report 13,039 board-certified urologists [ ]. Although there may be additional urologists without board certification, these data suggest that our study captured over 75% of all urologists within the United States, highlighting the significance of our cohort size. In addition, our data represent a large and diverse sample across various regions, practice types, physician ages, and other physician characteristics. Future study is ongoing to assess the potential relationship between these variables and online ratings. Such study is important given conflicting data regarding the relationship between physician characteristics (such as gender, practice experience, and academic productivity) and patient online ratings [ ].
In conclusion, our study demonstrates that most online urologist profiles have received ratings. Further, a dichotomous rating distribution is seen, with more than 90% of ratings being either poor or excellent. A negative relationship between number of ratings and overall rating is initially seen, following which a positive relationship is demonstrated. Combined, these data suggest that physicians can benefit from understanding ratings associated with their online profile and that proactive steps to optimize their rating may be helpful.
Conflicts of Interest
- Reimann S, Strech D. The representation of patient experience and satisfaction in physician rating sites. A criteria-based analysis of English- and German-language sites. BMC Health Serv Res 2010;10:332 [FREE Full text] [CrossRef] [Medline]
- Segal J. The role of the Internet in doctor performance rating. Pain Physician 2009;12(3):659-664 [FREE Full text] [Medline]
- The Harris Poll. 2010 Aug 04. "Cyberchondriacs" on the rise? URL: https://tinyurl.com/y3xkhqz5 [accessed 2019-05-29] [WebCite Cache]
- Hanauer DA, Zheng K, Singer DC, Gebremariam A, Davis MM. Public awareness, perception, and use of online physician rating sites. JAMA 2014 Feb 19;311(7):734-735. [CrossRef] [Medline]
- Kuehn BM. More than one-third of US individuals use the Internet to self-diagnose. JAMA 2013 Feb 27;309(8):756-757. [CrossRef] [Medline]
- Lagu T, Metayer K, Moran M, Ortiz L, Priya A, Goff SL, et al. Website characteristics and physician reviews on commercial physician-rating websites. JAMA 2017 Dec 21;317(7):766-768 [FREE Full text] [CrossRef] [Medline]
- CMS.gov: Centers for Medicare & Medicaid Services. Hospital Compare URL: https://tinyurl.com/y5bfcqz2 [accessed 2019-05-31] [WebCite Cache]
- Waxer JF, Srivastav S, DiBiase CS, DiBiase SJ. Investigation of radiation oncologists' awareness of online reputation management. JMIR Cancer 2019 Apr 01;5(1):e10530 [FREE Full text] [CrossRef] [Medline]
- Baker T. Physicians Practice. 2015 Nov 13. Five online reputation management strategies for physicians URL: http://www.physicianspractice.com/marketing/five-online-reputation-management-strategies-physicians [accessed 2019-05-31] [WebCite Cache]
- Asanad K, Parameshwar PS, Houman J, Spiegel BM, Daskivich TJ, Anger JT. Online physician reviews in female pelvic medicine and reconstructive surgery: what do patients really want? Female Pelvic Med Reconstr Surg 2018;24(2):109-114. [CrossRef] [Medline]
- Ellimoottil C, Hart A, Greco K, Quek ML, Farooq A. Online reviews of 500 urologists. J Urol 2013 Jun;189(6):2269-2273. [CrossRef] [Medline]
- Emmert M, Sander U, Esslinger AS, Maryschok M, Schöffski O. Public reporting in Germany: the content of physician rating websites. Methods Inf Med 2012;51(2):112-120. [CrossRef] [Medline]
- Kadry B, Chu LF, Kadry B, Gammas D, Macario A. Analysis of 4999 online physician ratings indicates that most patients give physicians a favorable rating. J Med Internet Res 2011;13(4):e95 [FREE Full text] [CrossRef] [Medline]
- Lagu T, Hannon NS, Rothberg MB, Lindenauer PK. Patients' evaluations of health care providers in the era of social networking: an analysis of physician-rating websites. J Gen Intern Med 2010 Sep;25(9):942-946 [FREE Full text] [CrossRef] [Medline]
- Lagu T, Norton CM, Russo LM, Priya A, Goff SL, Lindenauer PK. Reporting of patient experience data on health systems' websites and commercial physician-rating websites: mixed-methods analysis. J Med Internet Res 2019 Mar 27;21(3):e12007 [FREE Full text] [CrossRef] [Medline]
- Hu N, Pavlou P, Zhang J. Overcoming the J-shaped distribution of product reviews. Commun ACM 2009 Oct;52(10):e1 [FREE Full text]
- Gao GG, McCullough JS, Agarwal R, Jha AK. A changing landscape of physician quality reporting: analysis of patients' online ratings of their physicians over a 5-year period. J Med Internet Res 2012;14(1):e38 [FREE Full text] [CrossRef] [Medline]
- Sobin L, Goyal P. Trends of online ratings of otolaryngologists: what do your patients really think of you? JAMA Otolaryngol Head Neck Surg 2014 Jul;140(7):635-638. [CrossRef] [Medline]
- ReputationManagement.com. URL: https://www.reputationmanagement.com/ [accessed 2018-08-14]
- Johnson C. Survey finds physicians very wary of doctor ratings. Physician Exec 2013;39(1):6-8, 10, 12. [Medline]
- Kennedy GD, Tevis SE, Kent KC. Is there a relationship between patient satisfaction and favorable outcomes? Ann Surg 2014 Oct;260(4):592-598; discussion 598 [FREE Full text] [CrossRef] [Medline]
- Tevis SE, Kennedy GD, Kent KC. Is there a relationship between patient satisfaction and favorable surgical outcomes? Adv Surg 2015;49:221-233 [FREE Full text] [CrossRef] [Medline]
- Okike K, Peter-Bibb TK, Xie KC, Okike ON. Association between physician online rating and quality of care. J Med Internet Res 2016 Dec 13;18(12):e324 [FREE Full text] [CrossRef] [Medline]
- Campbell L, Li Y. Are Facebook user ratings associated with hospital cost, quality and patient satisfaction? A cross-sectional analysis of hospitals in New York State. BMJ Qual Saf 2018 Dec;27(2):119-129. [CrossRef] [Medline]
- Trehan SK, Nguyen JT, Marx R, Cross MB, Pan TJ, Daluiski A, et al. Online patient ratings are not correlated with total knee replacement surgeon-specific outcomes. HSS J 2018 Jul;14(2):177-180. [CrossRef] [Medline]
- Hong YA, Liang C, Radcliff TA, Wigfall LT, Street RL. What do patients say about doctors Online? A systematic review of studies on patient online reviews. J Med Internet Res 2019 Apr 08;21(4):e12521 [FREE Full text] [CrossRef] [Medline]
- Emmert M, Sauter L, Jablonski L, Sander U, Taheri-Zadeh F. Do physicians respond to web-based patient ratings? An analysis of physicians' responses to more than one million web-based ratings over a six-year period. J Med Internet Res 2017 Jul 26;19(7):e275 [FREE Full text] [CrossRef] [Medline]
- Prabhu AV, Kim C, De Guzman E, Zhao E, Madill E, Cohen J, et al. Reputation management and content control: an analysis of radiation oncologists' digital identities. Int J Radiat Oncol Biol Phys 2017 Dec 01;99(5):1083-1091. [CrossRef] [Medline]
- Triemstra JD, Poeppelman RS, Arora VM. Correlations between hospitals' social media presence and reputation score and ranking: cross-sectional analysis. J Med Internet Res 2018 Nov 08;20(11):e289 [FREE Full text] [CrossRef] [Medline]
- Rivas R, Montazeri N, Le NX, Hristidis V. Automatic classification of online doctor reviews: evaluation of text classifier algorithms. J Med Internet Res 2018 Nov 12;20(11):e11141 [FREE Full text] [CrossRef] [Medline]
- Li J, Liu M, Li X, Liu X, Liu J. Developing embedded taxonomy and mining patients' interests from web-based physician reviews: mixed-methods approach. J Med Internet Res 2018 Dec 16;20(8):e254 [FREE Full text] [CrossRef] [Medline]
- ABMS Board Certification Report: 2016-2017. Chicago, IL: American Board of Medical Specialties; 2017. URL: http://www.abms.org/media/139572/abms_board_certification_report_2016-17.pdf [accessed 2019-05-31] [WebCite Cache]
PRW: physician-rating website
Edited by G Eysenbach; submitted 06.10.18; peer-reviewed by K Zheng, R Smith; comments to author 05.03.19; revised version received 23.03.19; accepted 02.05.19; published 02.07.19

Copyright
©C William Pike, Jacqueline Zillioux, David Rapp. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 02.07.2019.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on http://www.jmir.org/, as well as this copyright and license information must be included.