
Published on 26.03.18 in Vol 20, No 3 (2018): March

Preprints (earlier versions) of this paper are available at http://preprints.jmir.org/preprint/8986, first published Sep 16, 2017.


    Original Paper

    How Online Quality Ratings Influence Patients’ Choice of Medical Providers: Controlled Experimental Survey Study

    1Department of Operations and Information Management, University of Connecticut, Stamford, CT, United States

    2Center for Technology Innovation, The Brookings Institution, Washington, DC, United States

    3Department of Decision, Operations and Information Technologies, Robert H Smith School of Business, University of Maryland at College Park, College Park, MD, United States

    Corresponding Author:

    Niam Yaraghi, PhD

    Center for Technology Innovation

    The Brookings Institution

    1755 Massachusetts Ave NW

    Washington, DC, 20036

    United States

    Phone: 1 2027632073

    Email: niam.yaraghi@uconn.edu


    ABSTRACT

    Background: In recent years, Web-based ratings from both commercial and government efforts have rapidly changed the information environment in which patients learn about physician quality. However, little is known about how various types of Web-based ratings affect individuals’ choice of physicians.

    Objective: The objective of this research was to measure the relative importance of Web-based quality ratings from governmental and commercial agencies on individuals’ choice of primary care physicians.

    Methods: In a choice-based conjoint experiment conducted on a sample of 1000 Amazon Mechanical Turk users in October 2016, individuals were asked to choose their preferred primary care physician from pairs of physicians with different ratings in clinical and nonclinical aspects of care provided by governmental and commercial agencies.

    Results: The relative log odds of choosing a physician increases by 1.31 (95% CI 1.26-1.37; P<.001) and 1.32 (95% CI 1.27-1.39; P<.001) units when the government clinical ratings and commercial nonclinical ratings move from 2 to 4 stars, respectively. The relative log odds of choosing a physician increases by 1.12 (95% CI 1.07-1.18; P<.001) units when the commercial clinical ratings move from 2 to 4 stars. The relative log odds of selecting a physician with 4 stars in nonclinical ratings provided by the government is 1.03 (95% CI 0.98-1.09; P<.001) units higher than that of a physician with 2 stars in this rating. The log odds of selecting a physician with 4 stars in nonclinical government ratings relative to a physician with 2 stars is 0.23 (95% CI 0.13-0.33; P<.001) units higher for females compared with males. A similar star increase in nonclinical commercial ratings increases the relative log odds of selection by female respondents by 0.15 (95% CI 0.04-0.26; P=.006) units.

    Conclusions: Individuals perceive nonclinical ratings provided by commercial websites as important as clinical ratings provided by government websites when choosing a primary care physician. There are significant gender differences in how the ratings are used. More research is needed on whether patients are making the best use of different types of ratings, as well as the optimal allocation of resources in improving physician ratings from the government’s perspective.

    J Med Internet Res 2018;20(3):e99

    doi:10.2196/jmir.8986




    Introduction

    To improve quality, foster competition, promote transparency, and help patients make informed decisions, it is critical for patients to have access to reliable information and make cognizant choices about their medical providers [1,2]. In recent years, a concerted effort in the United States has been put in place to develop and publicly report quality measures of medical providers [3].

    The Centers for Medicare and Medicaid Services (CMS) is the most prominent governmental agency in the United States that collects, aggregates, and reports quality measures of different aspects of medical care. Through initiatives such as Hospital Compare [4], CMS reports quality data on both clinical and nonclinical aspects of medical services offered by different providers. Surgical complications, infections, readmission, and death rates are examples of metrics that measure the clinical aspects of medical care. Surveys of patients’ experiences, such as the Hospital Consumer Assessment of Healthcare Providers and Systems, capture metrics that measure nonclinical aspects of care. In parallel with CMS, private and commercial agencies such as Vitals [5], RateMDs [6], and ProPublica [7] also collect and report quality metrics on both clinical and nonclinical aspects of care. Recent research shows that although the ratings provided by commercial agencies may be inconsistent with each other [8], they are more comprehensive and cover a broader range of domains than what is included in ratings reported by CMS [9,10].

    Ratings of health care providers are growing in importance and popularity [11-18], affecting both the revenue and the reputation of medical providers [19-22]. For example, when CMS released its quality metrics of nursing homes to the public, the market share of 1-star facilities decreased by 8%, whereas the market share of 5-star facilities increased by more than 6% [23]. Similar effects have also been documented for hospitals [24]. Although nonclinical ratings provided by commercial agencies are correlated with the conventional measures of patient experience as reported by governmental agencies [25,26], the relationship between patient reviews and medical outcomes is not clear. Some studies find that patient satisfaction reported as nonclinical ratings is not associated with clinical outcomes [27-32], whereas others report a strong association between these two types of ratings [33,34]. For a review of literature on the association between the social media reviews and the clinical quality outcomes, see Verhoef et al [35].

    Despite the significant differences between the types (clinical and nonclinical) and the sources (governmental and commercial agencies) of ratings, variations in their relative significance for patient choice of medical providers are not known. The purpose of this research was to fill this gap by uncovering the relative importance of these ratings in the decision-making processes of different groups of patients.


    Methods

    Data Source

    We used a primary dataset consisting of responses of 1000 individuals who were each paid 50 cents to participate in an online experiment through Amazon Mechanical Turk (AMT) in October 2016. These individuals were all master users of AMT and lived in the United States. According to AMT, a user achieves a master distinction by consistently completing requests with a high degree of accuracy. Masters must continue to pass AMT’s statistical monitoring to maintain their status [36].

    Table 1 provides a comparison of demographics between the sample in this study and the US population. Compared with the US population, our sample consisted of younger, more educated, and less affluent adults. Although our sample of AMT users was therefore younger and more technologically savvy than the US population, we relied on it for two reasons. First, given the question posed in this research, the sample did not need to be representative of the US population; it only had to represent individuals who use information resources available on the Internet. As this study compared the importance of two information resources that are exclusively Web-based, its sample had to consist of individuals who could use resources on the Web. Second, prior research shows that despite limitations, data gathered from “AMT samples are at least as reliable as those obtained via traditional methods. Overall, AMT can be used to obtain high-quality data inexpensively and rapidly” [37].

    Study Design

    To determine how ratings on different attributes affect individuals’ evaluations of medical providers, we designed an experiment and conducted a choice-based conjoint analysis [38] as a rigorous method of eliciting preferences [39]. We describe the method below.

    The combination of 2 categories (clinical and nonclinical) and 2 sources (governmental and commercial agencies) resulted in 4 different types of ratings: clinical ratings provided by a governmental agency, nonclinical ratings provided by a governmental agency, clinical ratings provided by a commercial agency, and nonclinical ratings provided by a commercial agency. In this research, we use “governmental agency” and “public agency” interchangeably. We assigned a high or low value to each type of rating and thereby created 16 profiles of hypothetical physicians. In a 1-to-4-star rating system, to induce appropriate variation, we used 2 stars to indicate low ratings and 4 stars to indicate high ratings. Each profile represented a physician with different ratings on the 4 categories. These profiles were balanced, meaning that each of the 2 levels (2 and 4 stars) of each of the 4 types of ratings appeared the same number of times across the physician profiles. Using these 16 profiles, we then created 8 pairs of physicians such that the 4 types of ratings in each pair were orthogonal [40]. This ensured that any pair of levels from different rating types appeared the same number of times in the design. We used the %mktex macro [41] in SAS software (version 9.4) to create the balanced and orthogonal design. Table 2 shows the 16 profiles in 8 pairs.
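    The full factorial underlying this design is small enough to enumerate directly. The following Python sketch (an illustration, not the authors’ SAS %mktex code; the attribute names are hypothetical) builds the 2^4 = 16 profiles and verifies the balance property described above:

```python
from itertools import product

# Four rating types, each at 2 ("low") or 4 ("high") stars.
# Attribute names are illustrative, not from the paper's SAS code.
ATTRIBUTES = ["gov_clinical", "gov_nonclinical",
              "com_clinical", "com_nonclinical"]

# Full factorial: every combination of the two levels across the
# four attributes yields 2**4 = 16 physician profiles.
profiles = [dict(zip(ATTRIBUTES, levels))
            for levels in product([2, 4], repeat=4)]

# Balance check: each level of each attribute appears in exactly
# half (8) of the 16 profiles.
for attr in ATTRIBUTES:
    for level in (2, 4):
        assert sum(1 for p in profiles if p[attr] == level) == 8
```

    Pairing the 16 profiles into 8 orthogonal pairs additionally requires that combinations of levels across different rating types co-occur equally often, which the authors achieved with the %mktex macro.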

    In a Web-based interface, we first provided respondents with a brief tutorial on different sources and types of ratings. Specifically, we described the public agency as “the department of Health and Human Services, which is a branch of the federal government” and the commercial agency as “websites such as Yelp, RateMDs, Healthgrades, Vitals, Zocdoc, and DoctorScorecard.”

    Table 1. Characteristics of 949 respondents and the US population.
    Table 2. Physician profiles used in the choice-based conjoint experiment. “Government” indicates that a public agency provides the ratings, and “Commercial” indicates that a private organization provides the ratings. In the Web-based interface, the hypothetical physician profiles in each pair were shown side-by-side, and respondents were asked to choose the physician they preferred. The sequence of the pairs and of the attributes in each profile was generated randomly to ensure that the order of presentation of the attributes did not influence the respondent’s choice. The values of 2 or 4 in the table indicate a 2- or 4-star rating, respectively, in the physician profiles provided to respondents in the Web-based experiment.
    Figure 1. Screenshot of the choice-based conjoint experiment.

    We also distinguished clinical and nonclinical ratings and explained to the survey respondents that clinical ratings by the public agency were determined “based on official statistics on how often physicians provide care that research shows leads to the best results for patients” and nonclinical ratings by the public agency were determined based on “a national survey that asks patients about their experiences with staff, nurses, and doctors during a recent visit to the doctor.” Similarly, we explained that clinical ratings provided by the commercial agency were determined by “the patient online reviews about how patients evaluate the medical expertise of the doctor” and nonclinical ratings provided by the commercial agency were created based on “patient online reviews about their experiences with staff, nurses, and doctors during a recent visit to the doctor.” To assess whether respondents correctly distinguished the types and sources of ratings, at the end of the survey we asked them to describe each type of rating in their own words. Our examination of their responses confirmed that all respondents had fully understood the different ratings.

    We then presented the 8 pairs of hypothetical physician profiles in a random sequence and asked respondents to choose the physician they preferred in each pair. A screenshot of 1 of the 8 comparison pairs is presented in Figure 1, which corresponds to the choices in pair 7 as shown in Table 2. To simulate a realistic decision-making scenario, we asked the respondents to imagine that they had moved to a new town and had to choose a new primary care physician based solely on the 4 types of ratings provided to them. This approach ensured that the choice of the respondents in our experiment was driven only by the ratings and was not confounded by any factor outside of our model, such as insurance coverage, location, or race of the physician [42,43].

    Once respondents finished the evaluation of physicians in the 8 pairs, we asked them a series of questions designed to evaluate their health status, medical literacy, trust in Web-based reviews, and trust in government as 4 composite indexes. We conducted factor analysis to operationalize these 4 constructs using validated items that we derived from prior literature in information systems [44,45] and medicine [46,47]. Details on the items, composite indexes, and factor analysis are provided in Multimedia Appendix 1.
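    As a rough illustration of how such composite indexes are commonly operationalized (the authors’ exact items and factor analysis are in Multimedia Appendix 1; this simple z-score average is an assumption, not their method), each survey item can be standardized and the standardized items averaged into one index per respondent:

```python
from statistics import mean, pstdev

def composite_index(item_responses):
    """Average of z-scored items. item_responses is a list of
    columns, one list of respondent answers per survey item.
    Illustrative only: the paper derived its indexes via factor
    analysis of validated items, not a simple mean."""
    n = len(item_responses[0])
    z_items = []
    for col in item_responses:
        mu, sd = mean(col), pstdev(col)
        z_items.append([(x - mu) / sd for x in col])
    # Composite for respondent i = mean of their z-scores.
    return [mean(z[i] for z in z_items) for i in range(n)]

# Hypothetical 5-point Likert answers from 4 respondents on
# 3 "trust in Web-based reviews" items.
trust = composite_index([[1, 3, 4, 5], [2, 3, 5, 4], [1, 2, 4, 5]])
```

    By construction, the composite is centered at zero across respondents, so positive values indicate above-average trust within the sample.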

    One potential concern with the study design was that respondents may not complete the choice task thoughtfully. To detect and filter the responses that were provided hastily and without careful attention, we included 2 trap questions in the experiment.

    The first trap question was the choice of physicians in the eighth pair (shown in Table 2), one of which was superior on all of the 4 types of ratings and clearly dominated the pair. A respondent’s choice of an inferior physician indicated lack of attention to the experiment. The second trap question asked, “How happy will you be if you receive a letter from Internal Revenue Service that says you should pay a large amount of taxes to the government?” We assumed that a respondent did not pay attention to the question if she chose “extremely happy” or “happy” as a response to this question.

    Statistical Analysis

    Our research design fit the multinomial logit model with clustered error terms [48,49]. Following the suggestions of Kuhfeld [50], we used the PHREG procedure [51] in SAS software for the estimation. In this model, the dependent variable was binary and indicated the choice that a respondent made from a pair of hypothetical physician profiles. The 4 types of ratings in each profile constituted our main independent variables. In the multinomial logit model used in this study, the probability that a respondent chose a specific physician in a pair was a function of the attributes of that specific physician as well as the attributes of the other physician in the pair. The PHREG procedure not only allowed us to account for the conditional dependency of choices between the alternatives in a pair but also adjusted for the correlation between the 8 choices made by the same respondent. Using this model, we could examine the relative importance of the 4 types of ratings. We further explored whether patient attributes, such as age, gender, and income, moderated the impact of the ratings. To statistically compare the effects of different regression coefficients, we implemented the tests provided by Paternoster et al [52].
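    For a two-alternative choice set, the conditional (multinomial) logit reduces to a binary logit on attribute differences: the probability of choosing physician A over B is the logistic function of β·(x_A − x_B). The following Python sketch illustrates this with the full-model coefficients reported in Table 3 (the actual estimation used SAS PHREG with clustered errors; this fragment only shows how fitted coefficients translate into choice probabilities):

```python
from math import exp

# Full-model log-odds coefficients (per move from 2 to 4 stars):
# government clinical, government nonclinical, commercial
# clinical, commercial nonclinical.
BETA = {"gc": 1.31, "gnc": 1.03, "yc": 1.12, "ync": 1.32}

def p_choose_a(profile_a, profile_b):
    """Probability of choosing physician A in a pair under the
    conditional logit: logistic of the utility difference.
    Profiles code each rating as 1 (4 stars) or 0 (2 stars)."""
    diff = sum(BETA[k] * (profile_a[k] - profile_b[k]) for k in BETA)
    return 1.0 / (1.0 + exp(-diff))

# A dominant pair, analogous to the trap pair: A has 4 stars on
# every rating, B has 2 stars on every rating.
a = {"gc": 1, "gnc": 1, "yc": 1, "ync": 1}
b = {"gc": 0, "gnc": 0, "yc": 0, "ync": 0}
p = p_choose_a(a, b)  # close to 1: A dominates on all ratings
```

    When the two profiles are identical, the utility difference is zero and the predicted probability is exactly 0.5, which is why only differences in ratings move the choice.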


    Results

    On the basis of the answers to the 2 trap questions, we excluded 51 observations from our initial sample of 1000 responses. We retained the remaining 949 responses for further analysis (Table 1). We present the estimation results of our multinomial logit model in Table 3.

    As shown in the last (full model) column of Table 3, the relative log odds of choosing a physician increased by 1.31 (95% CI 1.26-1.37; P<.001) and 1.32 (95% CI 1.27-1.39; P<.001) units when the government clinical ratings and commercial nonclinical ratings moved from 2 to 4 stars, respectively. The importance of these 2 types of ratings was statistically equivalent (P=.49). By comparison, the relative log odds of choosing a physician increased by a modest 1.12 (95% CI 1.07-1.18; P<.001) units when the commercial clinical ratings moved from 2 to 4 stars. The relative log odds of selecting a physician with 4 stars in nonclinical ratings provided by the government was 1.03 (95% CI 0.98-1.09; P<.001) units higher than that of a physician with 2 stars in this rating. The difference between the effects of government nonclinical ratings and commercial clinical ratings on patients’ choice of a primary care physician was statistically significant (P=.04). The difference between the effects of clinical ratings provided by the government and those provided by a commercial agency was statistically significant (P<.001). Likewise, the difference between the government clinical ratings and the government nonclinical ratings was also statistically significant (P<.001).
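    These log-odds units are easier to interpret as odds ratios. Exponentiating the full-model coefficients (a standard transformation; the odds ratios below are derived, not reported in the paper) shows, for example, that moving government clinical ratings from 2 to 4 stars multiplies the odds of a physician being chosen by roughly 3.7:

```python
from math import exp

# Full-model log-odds changes for a 2-to-4-star move (Table 3).
log_odds = {
    "government clinical":    1.31,
    "government nonclinical": 1.03,
    "commercial clinical":    1.12,
    "commercial nonclinical": 1.32,
}

# Odds ratio = exp(log-odds change).
odds_ratios = {k: round(exp(v), 2) for k, v in log_odds.items()}
# e.g. government clinical -> 3.71, commercial nonclinical -> 3.74
```

    On this scale the near-equality of government clinical and commercial nonclinical ratings (3.71 vs 3.74) is immediately visible.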

    One standard deviation improvement in a patient’s health status increased the relative log odds of choosing a physician with 4 stars in commercial nonclinical ratings by 0.18 (95% CI 0.13-0.24; P<.001) units and decreased the relative log odds of choosing a physician with 4 stars in government clinical ratings by 0.14 (95% CI 0.08-0.19; P<.001) units.

    Medical literacy had no statistically significant effect on how patients evaluated different types of ratings. As the level of trust in overall Web-based ratings increased, the importance of nonclinical ratings provided by a commercial agency also increased. One standard deviation increase in a patient’s trust in Web-based reviews increased the relative log odds of choosing a physician with 4 stars in nonclinical commercial ratings by 0.07 (95% CI 0.02-0.13; P=.05) units. Unsurprisingly, as the patients’ level of trust in the government increased, the importance of clinical ratings provided by government increased, whereas the importance of nonclinical ratings provided by a commercial agency decreased.

    One standard deviation increase in a patient’s trust in government increased the relative log odds of choosing a physician with 4 stars in government clinical ratings by 0.20 (95% CI 0.15-0.25; P<.001) units and decreased the relative log odds of choosing a physician with 4 stars in commercial nonclinical ratings by 0.15 (95% CI 0.10-0.21; P<.001) units. These trends remained consistent even when we included more variables in our model. We also examined how patients’ demographic characteristics of gender, race, income, education, marital status, and age affected the importance of each of the 4 ratings in their evaluation of primary care physicians. Table 4 presents the results. The log odds of selecting a physician with 4 stars in nonclinical government ratings relative to a physician with 2 stars was 0.23 (95% CI 0.13-0.33; P<.001) units higher for females compared with males. A similar star increase in nonclinical commercial ratings increased the relative log odds of selection by female patients by an additional 0.15 (95% CI 0.04-0.25; P=.006) units, compared with males.

    Table 3. The relative importance of different types and sources of ratings on patients’ choice. GC: clinical ratings provided by a public agency (government). GNC: nonclinical ratings provided by a public agency (government). YC: clinical ratings provided by a commercial agency (commercial). YNC: nonclinical ratings provided by a commercial agency (commercial).
    View this table
    Table 4. Interaction of ratings and patient characteristics. GC: clinical ratings provided by a public agency (government). GNC: nonclinical ratings provided by a public agency (government). YC: clinical ratings provided by a commercial agency (commercial). YNC: nonclinical ratings provided by a commercial agency (commercial).
    View this table

    Discussion

    Principal Findings

    To the best of our knowledge, this was the first research that, using a conjoint analysis, uncovered how individuals use Web-based ratings to compare and choose medical providers. We found that the clinical ratings provided by the government and the nonclinical ratings provided by a commercial agency were significantly more important for patient choice than nonclinical ratings provided by the government or clinical ratings provided by commercial agencies. We also found some differences in the importance of ratings based on the sociodemographic and health characteristics of respondents. Healthier patients paid more attention to nonclinical ratings, especially those from a commercial agency; for these patients, the importance of clinical ratings, notably those provided by the government, was lower. We found that female patients gave more importance than males to nonclinical ratings provided by both public and commercial agencies. In comparison with other races, white respondents paid less attention to the nonclinical ratings provided by the government; there was no other difference between racial groups in the importance of different types of ratings in the physician choice decision. Income did not play a role in how respondents used the ratings in their decision. As patients got older, nonclinical ratings provided by the government and clinical ratings provided by a commercial agency became less important in how they evaluated medical providers.

    A particular strength of this study was that we utilized a carefully controlled experimental design to observe the revealed preferences of participants rather than merely asking them to state their preferences in response to a questionnaire, which could otherwise be subject to attribution or social desirability biases. The preferences elicited in this experiment provided a more natural context, even in hypothetical settings, and gave us greater confidence that the effects we observed within the sample were driven by the conjoint attributes rather than by other unobserved factors.

    Limitations

    One limitation of our study was that we rated the attributes of the physicians with either 2 or 4 stars, whereas in reality, ratings usually have 5 levels, between 1 and 5 stars. We limited the ratings to only 2 levels to reduce the number of possible combinations: had we considered 5 levels for each rating, the number of possible physician profiles would have surged from 16 to 625, and respondents could not reasonably compare this many physician profiles with each other. A second limitation of this study was that, in comparison with the general US population, its sample was drawn from younger, more educated, and less affluent individuals. Although samples from AMT have been shown to respond similarly to representative samples of the US population [37], the results from the study must be interpreted in light of the characteristics of the sample. Third, this study focused only on American respondents, and therefore its findings may not generalize to individuals outside of the United States, because constructs such as medical literacy, health status, and trust in government vary significantly across individuals from different countries. Moreover, the presence of commercial websites and the availability of alternative government websites also vary across countries, which further limits generalizability. Finally, we did not ask respondents whether they were familiar with the sources of information they were asked to evaluate, primarily because our major focus was on the source (ie, government vs commercial) rather than a specific website. Future experiments could also ask respondents about their familiarity with the sources of information that they are asked to evaluate.

    Future Research

    There are 3 potential areas for further research. The first is to examine how familiar individuals are with the sources of information provided by governmental and commercial agencies. Although most individuals are now fairly familiar with the commercial rating websites, knowledge about the other sources of information provided through governmental websites may be limited. It would be useful to quantify the level of awareness of such information as a precursor to designing appropriate policies to inform the public. The second is to replicate this study on an international sample to investigate how individuals outside of the United States rely on different sources and types of information for choosing their primary care physicians. Finally, the relative importance of Web-based ratings in comparison with other factors such as insurance coverage, recommendations of family and friends, and proximity to patients’ residence is still unclear and could be investigated in future research.

    Policy Recommendations

    The findings of this research have implications for policy makers and medical providers. Although the government has expended substantial resources on clinical quality ratings, our study indicates a need to also acknowledge the importance of nonclinical measures. This is consistent with the recent CMS efforts and policy recommendations [53] to tie reimbursements to patient satisfaction. To the extent that nonclinical ratings appear to be more important for healthier patients, it clearly underscores the important role played by the “experience” of interacting with a physician for individuals whose visits to the doctor are likely to be preventive rather than curative. Primary care providers can consider ways in which the patient’s experience can be improved, such as reduced waiting time and more empathetic interactions, which will eventually be reflected in the nonclinical ratings they receive. The results of this study could also encourage a public relations campaign to increase public awareness of the reviews that are government maintained and are more clinically based. Our result on gender differences in the relative salience of nonclinical ratings further revealed the importance of improving the patient experience for providers who are focused on women’s health services.

    With respect to patients’ age, we found that older patients and those who trusted government more paid more attention to government-provided ratings. This is corroborated by prior literature, which documents that citizens who trust government more are also more satisfied with government websites [54]. We therefore recommend that CMS create website content and user experiences that are tailored for Medicare beneficiaries and older patients as they rely on government-provided information more than the younger patients. Our results also indicated that commercial websites can be more successful in attracting younger individuals. If CMS intends to expand its audience, it should consider information dissemination strategies that appeal to patients in this segment.

    Given the recent apprehensions expressed about the quality and representativeness of ratings provided by commercial websites [55], it is a matter of some concern that patients gave as much importance to commercial ratings of nonclinical aspects of care as they did to government ratings of clinical aspects of care. This is likely a result of the richness of the information that patients believe they can receive from other patients who have interacted with the medical provider. It might also be driven by other factors, such as the first-mover advantage of commercial organizations, which began rating a wide variety of services earlier than governmental agencies. To that end, our findings suggest that patients have developed a preference for commercial websites for experience-based ratings of medical providers, that is, ratings that primarily capture information about the patient’s experience with the medical provider. Thus, government agencies that offer similar ratings should pay careful attention to improving the usability of the information while concurrently addressing any perceptual obstacles that may prevent consumers from using these ratings.

    Conclusions

    Our research shows that patients pay equal attention to both clinical and nonclinical ratings when choosing a primary care physician. To obtain information about clinical ratings, they rely more on government sources, whereas for information on nonclinical ratings, they rely more on commercial sources. Both public and private agencies expend significant resources to design metrics, collect data, calculate ratings, and report them to the public. These resources are limited and should be optimally allocated to the type of ratings that consumers appreciate and will use the most. The findings of this research highlight the importance of efforts from government agencies such as CMS to improve its reporting of nonclinical ratings. Given the importance of nonclinical ratings in patients’ decision making, we recommend that medical providers pay close attention to their nonclinical ratings on commercial websites as they represent a consequential source of customer feedback for improving the patient experience. Ultimately, the overarching objective of all rating sources must be focused on protecting patients from incorrect or misleading data, while simultaneously educating them on how best to interpret and make best use of the information presented.

    Acknowledgments

    GG and WW are partially supported by NSF Career Award #1254021.

    Conflicts of Interest

    None declared.

    Multimedia Appendix 1

    How online quality ratings influence patients’ choice of medical providers: a controlled experimental survey study appendix (online-only material).

    PDF File (Adobe PDF File), 69KB

    References

    1. Yegian J, Dardess P, Shannon M, Carman K. Engaged patients will need comparative physician-level quality data and information about their out-of-pocket costs. Health Aff (Millwood) 2013;32(2):328-337. [Medline]
    2. Hibbard J, Slovic P, Peters E, Finucane M. Strategies for reporting health plan performance information to consumers: evidence from controlled studies. Health Serv Res 2002;37(2):291-313. [CrossRef] [Medline]
    3. James J, Felt-Lisk S, Werner R, Agres T, Schwartz A, Dentzer S. Issuelab. 2012. Health policy brief: public reporting on quality and costs   URL: https:/​/www.​issuelab.org/​resource/​health-affairs-rwjf-health-policy-brief-public-reporting-on-quality-and-costs.​html [accessed 2018-02-20] [WebCite Cache]
    4. Medicare. Hospital compare   URL: https://www.medicare.gov/hospitalcompare/search.html? [accessed 2018-02-16] [WebCite Cache]
    5. Vitals.   URL: http://www.vitals.com/ [accessed 2018-02-16] [WebCite Cache]
    6. RateMDs. Doctor reviews and ratings near you   URL: https://www.ratemds.com/dc/washington/ [accessed 2017-02-13] [WebCite Cache]
    7. Wei S, Pierce O, Allen M. ProPublica. Surgeon scorecard   URL: https://projects.propublica.org/surgeons/ [accessed 2018-02-16] [WebCite Cache]
    8. Austin J, Jha A, Romano P, Singer S, Vogus T, Wachter R, et al. National hospital ratings systems share few common scores and may generate confusion instead of clarity. Health Aff (Millwood) 2015;34(3):423-430. [Medline]
    9. Ranard B, Werner R, Antanavicius T, Schwartz H, Smith R, Meisel Z, et al. Yelp reviews of hospital care can supplement and inform traditional surveys of the patient experience of care. Health Aff (Millwood) 2016;35(4):697-705. [Medline]
    10. Greaves F, Ramirez-Cano D, Millett C, Darzi A, Donaldson L. Harnessing the cloud of patient experience: using social media to detect poor quality healthcare. BMJ Qual Saf 2013;22(3):4. [CrossRef]
    11. Reimann S, Strech D. The representation of patient experience and satisfaction in physician rating sites. A criteria-based analysis of English- and German-language sites. BMC Health Serv Res 2010;10(1):332. [CrossRef]
    12. Gao GG, McCullough JS, Agarwal R, Jha AK. A changing landscape of physician quality reporting: analysis of patients' online ratings of their physicians over a 5-year period. J Med Internet Res 2012 Feb;14(1):e38 [FREE Full text] [CrossRef] [Medline]
    13. Hanauer D, Zheng K, Singer D, Gebremariam A, Davis M. Public awareness, perception, and use of online physician rating sites. JAMA 2014;311(7):734-735. [Medline]
    14. Greaves F, Millett C. Consistently increasing numbers of online ratings of healthcare in England. J Med Internet Res 2012 Jun 29;14(3):e94 [FREE Full text] [CrossRef] [Medline]
    15. Findlay S. Consumers' interest in provider ratings grows, and improved report cards and other steps could accelerate their use. Health Aff (Millwood) 2016;35(4):688-696. [Medline]
    16. McLennan S, Strech D, Reimann S. Developments in the frequency of ratings and evaluation tendencies: a review of German physician rating websites. J Med Internet Res 2017;19(8):e299. [Medline]
    17. Zwijnenberg N, Hendriks M, Bloemendal E, Damman O, de Jong JD, Delnoij D, et al. Patients' need for tailored comparative health care information: a qualitative study on choosing a hospital. J Med Internet Res 2016 Nov 28;18(11):e297 [FREE Full text] [CrossRef] [Medline]
    18. Emmert M, Meszmer N, Sander U. Do health care providers use online patient ratings to improve the quality of care? results from an online-based cross-sectional study. J Med Internet Res 2016 Sep 19;18(9):e254 [FREE Full text] [CrossRef] [Medline]
    19. Sobin L, Goyal P. Trends of online ratings of otolaryngologists: what do your patients really think of you? JAMA Otolaryngol Head Neck Surg 2014 Jul;140(7):635-638. [CrossRef] [Medline]
    20. Mehta SJ. Patient satisfaction reporting and its implications for patient care. AMA J Ethics 2015 Jul 01;17(7):616-621 [FREE Full text] [CrossRef] [Medline]
    21. Bakhsh W, Mesfin A. Online ratings of orthopedic surgeons: analysis of 2185 reviews. Am J Orthop (Belle Mead NJ) 2014 Aug;43(8):359-363. [Medline]
    22. Elliott M, Beckett M, Lehrman W, Cleary P, Cohea C, Giordano L, et al. Understanding the role played by Medicare's patient experience points system in hospital reimbursement. Health Aff (Millwood) 2016;35(9):1680. [CrossRef]
    23. Werner RM, Konetzka RT, Polsky D. Changes in consumer demand following public reporting of summary quality ratings: an evaluation in nursing homes. Health Serv Res 2016 Jun;51 Suppl 2:1291-1309. [CrossRef] [Medline]
    24. Chandra A, Finkelstein A, Sacarny A, Syverson C. Health care exceptionalism? performance and allocation in the US health care sector. Am Econ Rev 2016 Aug;106(8):2110-2144 [FREE Full text] [CrossRef] [Medline]
    25. Bardach NS, Asteria-Peñaloza R, Boscardin WJ, Dudley RA. The relationship between commercial website ratings and traditional hospital performance measures in the USA. BMJ Qual Saf 2013 Mar;22(3):194-202 [FREE Full text] [CrossRef] [Medline]
    26. Greaves F, Pape U, King D, Darzi A, Majeed A, Wachter R, et al. Associations between internet-based patient ratings and conventional surveys of patient experience in the English NHS: an observational study. BMJ Qual Saf 2012;21(7):605. [Medline]
    27. Franks P, Fiscella K, Shields C, Meldrum S, Duberstein P, Jerant A, et al. Are patients’ ratings of their physicians related to health outcomes? Ann Fam Med 2005;3(3):234. [Medline]
    28. Gray BM, Vandergrift JL, Gao GG, McCullough JS, Lipner RS. Website ratings of physicians and their quality of care. JAMA Intern Med 2015 Feb;175(2):291-293. [CrossRef] [Medline]
    29. Campbell L, Li Y. Are Facebook user ratings associated with hospital cost, quality and patient satisfaction? a cross-sectional analysis of hospitals in New York state. BMJ Qual Saf 2018;27(2):119-129. [Medline]
    30. Okike K, Peter-Bibb T, Xie K, Okike O. Association between physician online rating and quality of care. J Med Internet Res 2016;18(12):e324. [CrossRef]
    31. Greaves F, Pape U, King D, Darzi A, Majeed A, Wachter R, et al. Associations between web-based patient ratings and objective measures of hospital quality. Arch Intern Med 2012;172(5):435-436. [Medline]
    32. Greaves F, Pape U, Lee H, Smith D, Darzi A, Majeed A, et al. Patients' ratings of family physician practices on the internet: usage and associations with conventional measures of quality in the English National health service. J Med Internet Res 2012;14(5):e146. [Medline]
    33. Glickman SW, Boulding W, Manary M, Staelin R, Roe MT, Wolosin RJ, et al. Patient satisfaction and its relationship with clinical quality and inpatient mortality in acute myocardial infarction. Circ Cardiovasc Qual Outcomes 2010 Mar;3(2):188-195 [FREE Full text] [CrossRef] [Medline]
    34. Isaac T, Zaslavsky AM, Cleary PD, Landon BE. The relationship between patients' perception of care and measures of hospital quality and safety. Health Serv Res 2010 Aug;45(4):1024-1040 [FREE Full text] [CrossRef] [Medline]
    35. Verhoef L, Van de Belt T, Engelen L, Schoonhoven L, Kool R. Social media and rating sites as tools to understanding quality of care: a scoping review. J Med Internet Res 2014;16(2):e56. [Medline]
    36. Mturk. Amazon mechanical turk   URL: https://www.mturk.com/worker/help [accessed 2018-02-16] [WebCite Cache]
    37. Buhrmester M, Kwang T, Gosling S. Amazon's mechanical turk: a new source of inexpensive, yet high-quality, data? Perspect Psychol Sci 2011;6(1):3-5. [CrossRef]
    38. Green PE, Srinivasan V. Conjoint analysis in marketing: new developments with implications for research and practice. J Mark 1990;54(4):19.
    39. Ryan M, Farrar S. Using conjoint analysis to elicit preferences for health care. BMJ 2000 Jun 03;320(7248):1530-1533 [FREE Full text] [Medline]
    40. Hauser J. MIT. 2007. A note on conjoint analysis   URL: http://www.mit.edu/~hauser/Papers/NoteonConjointAnalysis.pdf [accessed 2018-02-16] [WebCite Cache]
    41. Kuhfeld W. SAS. Orthogonal arrays   URL: http://support.sas.com/techsup/technote/ts723.html [accessed 2018-02-16] [WebCite Cache]
    42. Saha S, Taggart SH, Komaromy M, Bindman AB. Do patients choose physicians of their own race? Health Aff (Millwood) 2000;19(4):76-83 [FREE Full text] [Medline]
    43. Blizzard R. Gallup News. 2005. Healthcare panel: how do people choose hospitals?   URL: http://news.gallup.com/poll/19402/Healthcare-Panel-How-People-Choose-Hospitals.aspx [accessed 2018-02-16] [WebCite Cache]
    44. Bonelli S. Bright Local. 2016. Local consumer review survey 2016   URL: https://www.brightlocal.com/learn/local-consumer-review-survey/ [accessed 2018-02-16] [WebCite Cache]
    45. McKnight D, Choudhury V, Kacmar C. Developing and validating trust measures for e-commerce: an integrative typology. Inf Syst Res 2002;13(3):334-359.
    46. Wallston K, Cawthon C, McNaughton C, Rothman R, Osborn C, Kripalani S. Psychometric properties of the brief health literacy screen in clinical practice. J Gen Intern Med 2014;29(1):119-126. [Medline]
    47. Chew L, Bradley K, Boyko E. Brief questions to identify patients with inadequate health literacy. Fam Med 2004;36(8):588-594. [Medline]
    48. McFadden D. Conditional logit analysis of qualitative choice behavior. In: Zarembka P, editor. Frontiers in Econometrics. New York: Academic Press; 1973   URL: https://elsa.berkeley.edu/reprints/mcfadden/zarembka.pdf [accessed 2018-02-16] [WebCite Cache]
    49. Manski C, McFadden D, editors. Structural Analysis of Discrete Data with Econometric Applications. Cambridge, MA: MIT Press; 1981   URL: https://emlab.berkeley.edu/~mcfadden/discrete/front_matter.pdf [accessed 2018-02-16] [WebCite Cache]
    50. Kuhfeld W. SAS. Cary, NC: SAS; 2005. Marketing research methods in SAS   URL: https://support.sas.com/techsup/technote/mr2010.pdf [accessed 2018-02-20] [WebCite Cache]
    51. SAS. Cary, NC: SAS Institute; 2008. The PHREG procedure   URL: https://support.sas.com/documentation/cdl/en/statugphreg/61816/PDF/default/statugphreg.pdf [accessed 2018-02-20] [WebCite Cache]
    52. Paternoster R, Brame R, Mazerolle P, Piquero A. Using the correct statistical test for the equality of regression coefficients. Criminology 1998;36(4):859-866. [Medline]
    53. Jha A. JAMA Forum. 2017. Payment power to the patients   URL: https://newsatjama.jama.com/2017/05/22/jama-forum-payment-power-to-the-patients/ [accessed 2018-02-16] [WebCite Cache]
    54. Welch E, Hinnant C, Moon M. Linking citizen satisfaction with e-government and trust in government. J Public Adm Res Theory 2005;15(3):371-391. [CrossRef]
    55. Lagu T, Metayer K, Moran M, Ortiz L, Priya A, Goff SL, et al. Website characteristics and physician reviews on commercial physician-rating websites. JAMA 2017 Feb 21;317(7):766. [CrossRef]


    Abbreviations

    AMT: Amazon Mechanical Turk
    CMS: Centers for Medicare and Medicaid Services


    Edited by G Eysenbach; submitted 16.09.17; peer-reviewed by H Miller, T Kool, E Leas; comments to author 12.10.17; revised version received 21.11.17; accepted 10.12.17; published 26.03.18

    ©Niam Yaraghi, Weiguang Wang, Guodong (Gordon) Gao, Ritu Agarwal. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 26.03.2018.

    This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on http://www.jmir.org/, as well as this copyright and license information must be included.