Background: Although online physician rating information is popular among Chinese health consumers, the limited number of reviews greatly hampers the effective usage of this information. To date, little has been discussed on the variables that influence online physician rating from the users’ perspective.
Objective: This study aims to investigate the factors associated with the actual behavior and intention of generating online physician rating information in urban China.
Methods: A web-based cross-sectional survey was conducted, and the valid responses of 1371 Chinese health consumers were recorded. Informed by a pilot interview, we analyzed the effects of demographic, health-related, cognitive, and technology-related variables on the generation of online physician rating information. Binary multivariate logistic regression, multiple linear regression, one-way analysis of variance, and independent samples t tests were performed to analyze the rating behavior and intentions of the health consumers. The survey instrument was designed based on the existing literature and the pilot interview.
Results: In this survey, 56.7% (778/1371) of the responders used online physician rating information, and 20.9% (287/1371) of the responders rated the physicians on the physician rating website at least once (posters). The actual physician rating behavior was mainly predicted by health-related factors and was significantly associated with seeking web-based physician information (odds ratio [OR] 5.548, 95% CI 3.072-10.017; P<.001), usage of web-based physician service (OR 2.771, 95% CI 1.979-3.879; P<.001), health information-seeking ability (OR 1.138, 95% CI 0.993-1.304; P=.04), serious disease development (OR 2.699, 95% CI 1.889-3.856; P<.001), good medical experience (OR 2.149, 95% CI 1.473-3.135; P<.001), altruism (OR 0.612, 95% CI 0.483-0.774; P<.001), self-efficacy (OR 1.453, 95% CI 1.182-1.787; P<.001), and trust in online physician rating information (OR 1.315, 95% CI 1.089-1.586; P=.004). Some factors influencing the intentions of the posters and nonposters rating the physicians were different, and the rating intention was mainly determined by cognitive and health-related factors. For posters, seeking web-based physician information (β=.486; P=.007), using web-based medical service (β=.420; P=.002), ability to seek health information (β=.193; P=.002), rating habits (β=.105; P=.02), altruism (β=.414; P<.001), self-efficacy (β=.102; P=.06), trust (β=.351; P<.001), and perceived ease of use (β=.275; P<.001) served as significant predictors of the rating intention. For nonposters, ability to seek health information (β=.077; P=.003), chronic disease development (β=.092; P=.06), bad medical experience (β=.047; P=.02), rating habits (β=.085; P<.001), altruism (β=.411; P<.001), self-efficacy (β=.171; P<.001), trust (β=.252; P<.001), and perceived usefulness of rating physicians (β=.109; P<.001) were significantly associated with the rating intention.
Conclusions: We showed that different factors affected the physician rating behavior and rating intention. Health-related variables influenced the physician rating behavior, while cognitive variables were critical in the rating intentions. We have proposed some practical implications for physician rating websites and physicians to promote online physician rating information generation.
With the growth of user-generated content and the prevalent use of mobile devices, several industries (ie, food service, travel, and e-commerce) have been gathering web-based reviews, and many websites in these industries have become reliable and effective. The health care system has also started garnering web-based reviews, even though the review platforms developed slowly in the initial years. Just as people review products, health consumers can post their opinions on the health care they received and access other patients’ opinions of the care provided by physicians. In particular, information on physician rating websites appears to play an increasingly important role in the lives of health consumers and has attracted the attention of medical practitioners. A study [ ] showed that in 2007, only 3%-7% of health consumers in the United States used physician rating websites, but the proportion increased to 23% in 2012 [ ], 25% in 2013 in Germany [ ], 42% in 2014 in the United States [ ], and 43.6% in 2016 in Germany [ ]. Online physician rating information seems to increasingly influence patients’ choice of medical practitioners [ ]. In contrast, physicians have often reported negative attitudes toward online physician rating information [ - ] because they fear that a limited number of reviews could produce bias and that negative web-based reviews could ruin their reputation [ , ]. Previous studies have presented content analyses of physician rating websites across clinical specialties [ - ], and the average number of reviews per physician was found to be low [ - ], even though the number of reviews has increased rapidly in the past few years [ ]. Emmert et al [ ] reported that only 11.03% of Germans had posted ratings on a physician rating website in 2013, and this percentage increased to 23% in 2016 [ ].
The limited number of reviews is the key factor that has affected the adoption of online physician rating information by both physicians and consumers. Thus, it is important to investigate the factors that predict the generation of online physician rating information from the perspective of health consumers.
Previous studies have mainly focused on the usage of online physician rating information and the related factors. Terlutter et al reported that women, young adults, and people with higher education levels or chronic diseases used physician rating websites more than their counterparts. Galizzi et al [ ] found that White British people and people with high incomes were less likely to use physician rating websites. Further, female participants, widows, and those with high health care utilization showed a significantly higher likelihood of being aware of physician rating websites [ ]. In China, owing to the government’s promotion of the “internet + health care” strategy, physician rating websites are becoming increasingly popular among urban citizens. Hao et al [ - ] conducted a content analysis of Chinese physician rating websites and identified factors related to physician ratings. Zhang et al [ ] analyzed the negative comments on physician rating websites to identify possible solutions for improving patient satisfaction. Li et al [ ] developed a hierarchical topic taxonomy to uncover the latent structure of physician reviews and illustrated its application in mining data on patients’ interests. Deng et al [ ], Han et al [ ], and Li and Hubner [ ] investigated how web-based ratings and other factors influence the selection of physicians by Chinese consumers. However, studies on physician rating websites in the Chinese context are still limited, and little is known about the factors influencing the generation of online physician rating information.
To fill this research gap, we first conducted a web-based pilot interview, recruiting 30 Chinese citizens with different education levels, occupational backgrounds, and hometowns. We introduced several Chinese physician rating websites at the beginning of the interview; thereafter, the participants reported their experience of using physician rating websites. Only 5 of the 30 participants had generated online physician rating information. Finally, the participants were asked why they did or did not generate online physician rating information. Following a qualitative analysis procedure, 3 researchers transcribed and coded the data, and we identified the factors related to the generation of online physician rating information. These factors were divided into three dimensions, namely, health- and habit-related factors (ie, usage of web-based physician service, ability to seek health information, health conditions, experience of medical service, and rating habits in the e-commerce context), cognitive factors (ie, altruism, self-efficacy, and trust in online physician rating information), and technology-related factors (ie, perceived usefulness and perceived ease of use). Cognitive factors have often been reported to be associated with knowledge-sharing behavior, and technology-acceptance factors with system adoption [ ]. Using physician rating websites to rate physicians constitutes both knowledge-sharing and system-adoption behavior. In this study, we applied a procedure similar to that used by Terlutter et al [ ], Galizzi et al [ ], and Emmert et al [ ] to empirically explore the significant factors that predict the actual behavior and intention of rating physicians on physician rating websites. The results of this study will be useful for understanding the web-based rating behavior of health consumers and for further promoting the development of physician rating websites.
Since physicians in rural areas are seldom rated on physician rating websites, our study focused on physicians in the urban regions of China. We used the snowball sampling method to recruit participants through web-based social networking. First, we selected 160 WeChat friends of varying gender, education levels, and occupational backgrounds to complete the web-based questionnaire. Second, we requested these participants to invite friends of varying gender, education levels, and occupational backgrounds to participate in the web-based survey. In total, we received 1556 responses from September 2018 to October 2018 and from August 2019 to October 2019. Of these responders, 185 were excluded from the analysis because of inconsistent answer patterns (eg, flatliners, contradictions) or because they rushed through the questionnaire, leaving incomplete answers in an implausibly short time. Finally, this study considered the responses of 1371 valid respondents. We paid each participant 2 RMB (US $0.30) to compensate for their time.
The researchers designed a survey based on the existing literature [ , ] and their pilot interview. All items, except the categorical variables, were measured on a 7-point Likert scale, with options ranging from “strongly disagree” to “strongly agree.” To ensure the validity of the scales in our questionnaire, we adopted measurement items from the existing literature and modified some items to fit the online physician rating scenario based on our pilot study with 30 Chinese citizens. We calculated the mean values of the multiple items as predictor scores after checking each measurement’s internal reliability.
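The internal-reliability check and composite scoring described above can be sketched as follows. This is a minimal stdlib illustration with hypothetical toy responses (not the study data); the paper itself used SPSS for these computations.

```python
from statistics import variance

def cronbach_alpha(items):
    """Cronbach's alpha for a list of item-score columns.

    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)
    """
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]
    item_var = sum(variance(col) for col in items)
    return k / (k - 1) * (1 - item_var / variance(totals))

# Hypothetical 7-point Likert responses (4 respondents x 3 items)
items = [
    [5, 4, 6, 3],  # item 1
    [5, 5, 6, 2],  # item 2
    [4, 4, 7, 3],  # item 3
]
alpha = cronbach_alpha(items)                    # ~0.93: acceptable reliability
scores = [sum(r) / len(r) for r in zip(*items)]  # per-respondent item mean = predictor score
```

With an acceptable alpha, the per-respondent mean across items serves as the scale score entered into the regression models.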
The questionnaire was created in English. One researcher translated it into Chinese, and then another researcher translated it back into English to ensure the consistency of the content. After developing the Chinese questionnaire, 3 researchers in health informatics were invited to assess the ease of understanding, logical consistencies, item sequence, and contextual relevance of the items in the questionnaire. We made some minor modifications based on their suggestions. Furthermore, a pilot test was conducted with 20 participants, and the items were modified slightly.
Rating Behavior, Rating Intention, and Demographic Variables
To ensure that the respondents understood the online physician rating system, a screenshot of a physician rating website was presented in the introduction before the respondents answered the questions. The actual behavior of rating a physician was assessed by asking whether the respondents had previously rated physicians on physician rating websites (0=no, 1=yes). We defined participants as posters if they had rated a physician on a physician rating website at least once and as nonposters if they had never done so. The intention of rating a physician was assessed using a 3-item scale adapted from the study by Ajzen. This scale was found to be reliable (mean 5.064, SD 1.189; Cronbach α=.949). Additionally, data on demographic variables such as age, gender, education level, marital status, monthly income, daily internet use, and the number of vulnerable family members were also collected.
Health and Habit-Related Variables
The usage of web-based physician information was assessed by asking respondents whether they had ever sought physician information on the internet (0=no, 1=yes). The usage of web-based physician service was assessed by asking participants whether they had ever booked or consulted a physician on the internet (0=no, 1=yes). The ability to seek health information was assessed using 2 items adapted from the model developed by Richard et al. The scale (mean 4.649, SD 1.350; Cronbach α=.890) assessed the participant’s ability to search for web-based health information. The health conditions of the participants or their family members were measured using the following 2 questions: “Did you or your family members develop any chronic disease in the past 2 years?” and “Did you or your family members develop any serious disease in the past 2 years?” (0=no, 1=yes). Medical experience was measured using the following statements: “I had a very good medical experience in the past 2 years” and “I had a very bad medical experience in the past 2 years” (0=no, 1=yes). In the e-commerce era, some consumers have the habit of posting reviews after web-based transactions, and this habit proved relevant to the online physician rating scenario in our pilot interview. The rating habit in the e-commerce context was assessed using an item adapted from a previous study [ ]: “Rating the product after a web-based transaction has become a habit for me” (1=strongly disagree, 7=strongly agree).
Altruism is a behavior intended to benefit another, even when this action may involve sacrificing one’s own welfare. A 3-item altruism scale was adapted from previous studies [ , ] and applied in our pilot interview. This scale was found to be reliable (mean 5.438, SD 1.042; Cronbach α=.910). Self-efficacy refers to an individual’s belief in or estimate of his or her own ability to perform a particular task [ ]. The self-efficacy scale was adapted from prior studies [ , ] and applied in our pilot interview, and the scale was found to be reliable (mean 4.594, SD 1.202; Cronbach α=.770). Trust refers to a situation in which one party willingly relies on the actions of another party [ ]. The scale for trust in online physician rating information was adapted from a previous study [ ], and this scale was also found to be reliable (mean 4.711, SD 1.224; Cronbach α=.885).
Perceived usefulness refers to the usefulness of using physician rating websites to rate physicians, and perceived ease of use refers to the ease of using physician rating websites to rate physicians. Based on our pilot interview and previous studies [ , ], we adapted reliable scales for perceived usefulness (mean 5.147, SD 1.162; Cronbach α=.842) and perceived ease of use (mean 4.405, SD 1.127; Cronbach α=.742).
Data were downloaded from the web-based questionnaire database to computers in our laboratory at Nanjing University, China. Two independent research assistants examined the data and removed 185 unqualified cases. Data analyses were conducted using SPSS 23.0 (IBM Corp). We examined the descriptive statistics for all variables. Since we focused on participants who were aware of physician rating websites before completing our survey, binary logistic regression analysis was performed to examine the effect of the variables on the likelihood of generating online physician rating information. Multiple linear regression, one-way analysis of variance (ANOVA), and independent samples t tests (two-tailed) were performed to investigate the factors influencing the physician rating intentions of posters and nonposters. Before performing linear regression, we applied data screening procedures to identify problematic patterns in the data set: we checked linearity, multivariate normality, multicollinearity (using variance inflation factors [VIFs]), autocorrelation (using the Durbin-Watson statistic), and homoscedasticity, and found the data suitable for linear regression. A bootstrapping procedure (with 5000 bootstrap samples) was used in our regression models.
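The bootstrapping step can be illustrated with a percentile bootstrap for a simple statistic. This is a stdlib sketch on hypothetical ratings, not the study’s SPSS bootstrap of regression coefficients, but the resampling logic is the same: draw samples with replacement, recompute the statistic, and take percentiles of the resulting distribution.

```python
import random
from statistics import mean

def bootstrap_ci(data, stat=mean, n_boot=5000, alpha=0.05, seed=42):
    """Percentile bootstrap CI: resample with replacement n_boot times."""
    rng = random.Random(seed)
    stats = sorted(stat(rng.choices(data, k=len(data))) for _ in range(n_boot))
    lo = stats[int(n_boot * alpha / 2)]
    hi = stats[int(n_boot * (1 - alpha / 2)) - 1]
    return lo, hi

# Hypothetical 7-point ratings (not the survey data); sample mean is 5.2
ratings = [5, 6, 4, 7, 5, 6, 3, 5, 6, 4, 5, 7, 4, 6, 5]
lo, hi = bootstrap_ci(ratings)  # 95% CI for the mean rating
```

In the regression setting, the same resampling is applied to the cases and the model is refit on each bootstrap sample to obtain empirical CIs for the coefficients.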
Demographic Data of the Participants
The demographic characteristics of the participants are presented in the table below. Most participants were aged between 25 and 40 years. Of the 1371 participants, 789 (57.6%) were women and 980 (71.5%) were married. The monthly income of 69.1% (947/1371) of the participants ranged between 3000 RMB (US $435) and 12,000 RMB (US $1740). With respect to the education level, 68.5% (939/1371) of the participants had completed college or a higher level of education. Of the 1371 participants, 778 (56.7%) used online physician rating information and 287 (20.9%) rated physicians on physician rating websites.
|Demographic characteristics||Value, n (%)|
|Income (RMB ¥)a|
|County/bureau level||337 (24.6)|
|Provincial level||402 (29.3)|
|Middle school||432 (31.5)|
|Children and elders|
|Daily internet use|
|Tb≤3 h||158 (11.5)|
|3<T≤5 h||303 (22.1)|
|5<T≤7 h||292 (21.3)|
|7<T≤9 h||294 (21.4)|
|9<T≤11 h||324 (23.6)|
|Online physician rating awareness|
|Online physician rating usage|
|Online physician rating generation|
aA currency exchange rate of RMB ¥1=US $0.14 is applicable.
Factors Associated With the Actual Behavior of Rating Physicians on Physician Rating Websites
We focused on participants who were aware of physician rating websites before our survey (n=972), and the tables below show the results of 4 binary logistic regressions. In the first step, a binary logistic regression was performed with having rated a physician or not (yes=1, no=0) as the criterion and the demographic variables as predictors (Model 1). Model 1 was significant. Age (β=.146; P=.06), monthly income (β=.197; P=.003), and education level (β=–.308; P=.02) were associated with the likelihood of rating a physician on physician rating websites. However, we also noticed that the Nagelkerke R2 (0.041) was quite low and the –2 log-likelihood (1130.138) was high. These results indicated that the demographic variables explained only a small part of the actual rating behavior.
Then, we entered the health- and habit-related variables into regression Model 2. The model improved with a Nagelkerke R2 change of 0.238. The regression coefficients were significant for the following variables: experience of seeking physician information on the internet (β=1.713; P<.001), usage of web-based physician service (β=1.019; P<.001), ability to seek health information (β=.129; P=.04), development of serious diseases (β=.993; P<.001), and good medical experience (β=.765; P<.001). We also noticed that gender (β=.410; P=.02) and marital status (β=–.441; P=.047) were significant factors associated with the actual rating behavior, while age (β=.116; P=.16) was not significant after health-related factors were considered.
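In a logistic regression, each reported odds ratio is simply the exponential of the corresponding coefficient. As a quick arithmetic check on the figures above, exponentiating β=1.713 (seeking physician information) recovers the reported OR of 5.548 up to rounding of the published coefficient:

```python
import math

beta = 1.713                 # coefficient for seeking physician information (Model 2)
odds_ratio = math.exp(beta)  # ~5.55; reported as 5.548 from the unrounded beta

# Interpretation: having sought physician information on the internet multiplies
# the odds of rating a physician by roughly 5.5, all else held constant.
```

The CI bounds reported in the tables follow the same transformation, ie, the exponential of the coefficient’s confidence limits.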
Following Model 2, the cognitive variables were entered into Model 3, which was also significant (P<.001). Furthermore, Model 3 showed a minor improvement over Model 2, with a Nagelkerke R2 change of 0.035. The factors significant in Model 2 remained significant. Altruism was negatively related (β=–.492; P<.001) to the likelihood of rating physicians. Self-efficacy (β=.374; P<.001) and trust in online physician rating information (β=.274; P=.004) were significantly and positively related to the likelihood of the rating behavior.
Based on Model 3, the technology-related variables were entered into Model 4. However, Nagelkerke R2 barely improved (0.316 vs 0.314), and the regression coefficients of perceived usefulness (P=.42) and perceived ease of use (P=.33) were not significant. The variables significant in Model 2 and Model 3 remained significant in Model 4.
In addition to the reliability indices mentioned above, collinearity statistics were examined using VIF and tolerance values. The VIF scores did not exceed 2.479, and the tolerance values were not lower than 0.411. According to the criteria proposed by Montgomery and Peck, the VIF should be lower than 10 and the tolerance value should be higher than 0.1; thus, our results indicated that multicollinearity was not a serious concern.
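The VIF and tolerance diagnostics relate to each other through one formula: for predictor j, VIF_j = 1/(1 − R_j²), where R_j² comes from regressing predictor j on all the other predictors, and tolerance is simply 1 − R_j². A minimal sketch with a hypothetical two-predictor case (where R_j² reduces to the squared correlation between the two predictors):

```python
def vif_from_r2(r_squared):
    """VIF and tolerance given the R^2 of regressing one predictor on the rest."""
    tolerance = 1 - r_squared
    return 1 / tolerance, tolerance

# Hypothetical correlation between two predictors (not the study data)
r = 0.5
vif, tol = vif_from_r2(r ** 2)  # VIF = 1/(1 - 0.25) ~ 1.33, tolerance = 0.75
# Rule of thumb used above: VIF < 10 and tolerance > 0.1 indicate no serious collinearity
```

The study’s worst case (VIF 2.479, tolerance 0.411) thus corresponds to an R_j² of about 0.59 for the most collinear predictor, well within the rule-of-thumb bounds.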
|Model 1a||Model 2b|
|β||Sig.c||ORd||95% CI of OR||β||Sig.||OR||95% CI of OR|
|Number of children and elders||.080||.09||1.084||0.985-1.192||.042||.44||1.043||0.938-1.159|
|Daily internet use||–.046||.40||0.955||0.858-1.063||–.086||.15||0.917||0.815-1.033|
|Health- and habit-related variables|
|Seeking physician information||—||—||—||—||1.713||<.001||5.548||3.072-10.017|
|Usage of web-based medical service||—||—||—||—||1.019||<.001||2.771||1.979-3.879|
|Health information seeking ability||—||—||—||—||.129||.04||1.138||0.993-1.304|
|Good medical experience||—||—||—||—||.765||<.001||2.149||1.473-3.135|
|Bad medical experience||—||—||—||—||.267||.13||1.306||0.926-1.843|
aχ2 /Sig.: 27.887 (df=7) /<.001; –2 log-likelihood: 1130.138; Nagelkerke R2: 0.041.
bχ2 /Sig.: 210.132 (df=15) /<.001; –2 log-likelihood: 947.892; Nagelkerke R2: 0.279.
cSig.: significance probability.
dOR: odds ratio.
|Model 3a||Model 4b|
|β||Sig.c||ORd||95% CI of OR||β||Sig.||OR||95% CI of OR|
|Number of children and elders||.019||.73||1.019||0.915-1.136||.019||.74||1.019||0.914-1.135|
|Daily internet use||–.068||.27||0.935||0.829-1.054||–.068||.27||0.934||0.828-1.054|
|Health and habit-related variables|
|Seeking physician information||1.810||<.001||6.113||3.355-11.137||1.812||<.001||6.121||3.357-11.159|
|Usage of web-based medical service||.951||<.001||2.589||1.838-3.647||.939||<.001||2.559||1.814-3.610|
|Health information seeking ability||.147||.04||1.158||1.009-1.330||.150||.04||1.162||1.010-1.336|
|Good medical experience||.800||<.001||2.227||1.515-3.273||.788||<.001||2.200||1.372-2.849|
|Bad medical experience||.325||.07||1.384||0.971-1.971||.322||.07||1.380||0.968-1.967|
|Perceived ease of use||—||—||—||—||.087||.33||1.091||0.917-1.298|
aχ2 /Sig.: 240.174 (df=18) / <.001; –2 log-likelihood: 917.851; Nagelkerke R2: 0.314.
bχ2 /Sig.: 241.789 (df=20) / <.001; –2 log-likelihood: 916.236; Nagelkerke R2: 0.316.
cSig.: significance probability.
dOR: odds ratio.
Predictive Factors for the Intention of Rating a Physician on Physician Rating Websites
To investigate how the variables influence the rating intention of the participants differently, we divided the sample into 2 groups, namely, the posters group and the nonposters group. Using hierarchical multiple regression analyses, we tested the effects of the different dimensions of factors on the rating intention. After controlling for the demographic variables, we found that the health-, cognitive-, and technology-related variables explained 21.3%, 38.1%, and 5.5% of the increased variance in the rating intention of the posters, respectively, and 12.8%, 48.1%, and 0.4% of the increased variance for the nonposters. The VIF and tolerance values showed that multicollinearity was not a concern in any model.
The table below displays the final models. For posters who rated physicians on the physician rating websites, the health- and habit-related variables, that is, seeking physician information on the internet (β=.486; P=.007), using web-based medical services (β=.420; P=.002), ability to seek health information (β=.193; P=.002), and rating habits (β=.105; P=.02), were found to be significantly and positively related to the rating intention. The cognitive variables, that is, altruism (β=.414; P<.001), self-efficacy (β=.102; P=.06), and trust in online physician rating information (β=.351; P<.001), were also significant predictors of the rating intention. Perceived usefulness was not significantly associated with the rating intention (β=–.031; P=.63), while perceived ease of use (β=.275; P<.001) was a significant predictor.
For nonposters who did not rate physicians on the physician rating websites, the ability to seek health information (β=.077; P=.003), development of chronic disease (β=.092; P=.06), bad medical experience (β=.047; P=.02), and rating habits (β=.085; P<.001) were found to be associated with the rating intention. As in the posters group, altruism (β=.411; P<.001), self-efficacy (β=.171; P<.001), and trust (β=.252; P<.001) were also predictors of the rating intention. Since nonposters had not posted web-based physician reviews, perceived ease of use (β=.017; P=.505) was not significantly associated with the rating intention, but perceived usefulness (β=.109; P=.001) was a significant predictor.
|Postersa (n=287)||Nonpostersb (n=1084)|
|β||Sig.c||95% CI||β||Sig.||95% CI|
|Number of children and elders||.180||.13||–0.054||0.414||–.063||.31||–0.184||0.058|
|Daily internet use||.010||.89||–0.142||0.162||–.004||.93||–0.091||0.083|
|Health- and habit-related variables|
|Seeking physician information||.486||.007||0.134||0.838||–.075||.14||–0.173||0.024|
|Usage of web-based medical service||.420||.002||0.157||0.684||.030||.53||–0.065||0.126|
|Health information seeking ability||.193||.002||0.071||0.316||.077||.003||0.025||0.128|
|Good medical experience||–.005||.96||–0.208||0.198||.000||.99||–0.094||0.093|
|Bad medical experience||.034||.68||–0.124||0.191||.047||.02||0.021||0.232|
|Perceived ease of use||.275||<.001||0.199||0.350||.017||.505||–0.032||0.066|
aF /Sig.: 123.812 (df=20)/ <.001; R2: 0.617.
bF /Sig.: 114.296 (df=20)/ <.001; R2: 0.623.
cSig.: significance probability.
Furthermore, we used one-way ANOVA and independent samples t tests to compare posters (n=287) and nonposters (n=1084) on the rating intention and the related factors. Following the suggestion by Fritz et al, the Cohen d was used to estimate the effect size. The table below shows that posters had a higher level of rating intention than nonposters (t1369=5.569; P<.001). Regarding self-efficacy, the 2 groups differed as expected (t1369=5.771; P<.001), with posters reporting higher self-efficacy than nonposters. Posters also trusted the information on physician rating websites to a greater extent than nonposters (t1369=5.549; P<.001). The 2 groups did not differ significantly in altruism at the P<.05 level (t1369=1.697; P=.09). Additionally, posters perceived higher levels of usefulness (t1369=3.020; P=.003) and ease of use (t1369=3.928; P<.001) than nonposters. With regard to the effect size, a Cohen d value of 0.2 indicates a small effect and a value of 0.5 indicates a medium effect. Thus, the effect sizes for rating intention, self-efficacy, and trust were medium, while those for perceived usefulness and perceived ease of use were small.
|Variables||Poster, mean (SD)||Nonposter, mean (SD)||t (df)||P value||Cohen d|
|Rating intention||5.495 (1.120)||5.064 (1.189)||5.569 (1369)||<.001||0.373|
|Altruism||5.556 (1.074)||5.438 (1.042)||1.697 (1369)||.09||0.112|
|Self-efficacy||5.052 (1.229)||4.594 (1.202)||5.771 (1369)||<.001||0.377|
|Trust||5.150 (1.058)||4.711 (1.224)||5.549 (1369)||<.001||0.358|
|Perceived usefulness||5.380 (1.150)||5.147 (1.162)||3.020 (1369)||.003||0.202|
|Perceived ease of use||4.700 (1.085)||4.405 (1.127)||3.928 (1369)||<.001||0.267|
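The Cohen d values in the table follow from the group means and SDs via the pooled standard deviation, d = (m1 − m2) / s_pooled. A minimal sketch using the rating-intention row above; because the published means and SDs are rounded, the result (≈0.37) differs slightly from the reported 0.373:

```python
import math

def cohens_d(m1, s1, n1, m2, s2, n2):
    """Cohen d for two independent groups, using the pooled standard deviation."""
    pooled_var = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)
    return (m1 - m2) / math.sqrt(pooled_var)

# Rating intention: posters vs nonposters (values taken from the table above)
d = cohens_d(5.495, 1.120, 287, 5.064, 1.189, 1084)  # ~0.37; reported as 0.373
```

By the conventional benchmarks cited above (0.2 small, 0.5 medium), this places the rating-intention difference between a small and medium effect.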
Previous studies on online physician rating information have mainly focused on the usage of online physician ratings and the related factors [ - , ], and only 2 studies [ , ] have reported the proportion (11% and 23%) of people who rated physicians on physician rating websites. Our study focused on the urban Chinese population and found that 20.9% (287/1371) of the respondents had rated physicians on physician rating websites at least once. An important aspect of our study is that we investigated the factors that predicted the actual behavior of rating physicians on physician rating websites. Since only 56.7% (778/1371) of the participants had used online physician rating information, we examined the effects of different factors on the rating intentions of the posters and nonposters. Our results also show that the factors affecting the actual rating behavior and the rating intention were different, even though the rating behavior was positively related to the rating intention in our partial correlation analysis (r=.148; P<.001).
Our study shows that sociodemographic variables alone cannot produce a satisfactory model for predicting the actual behavior of rating physicians on physician rating websites. Even though monthly income and education level were significantly correlated with the rating behavior (Model 1), the Nagelkerke R2 (0.041) of the logistic regression model was low. We also found that gender and marital status were significantly associated with the rating behavior once the health and cognitive variables were included. The change in Nagelkerke R2 indicated that it was necessary to integrate additional health and cognitive variables to predict the rating behavior more satisfactorily. Health-related factors played an important role in predicting the likelihood of the rating behavior. In our study, participants who had sought physician information on the internet, had used web-based physician services, or had a higher ability to seek health information were more likely to rate physicians on physician rating websites. Since there have been incidents of poor physician-patient relationships and severe cases of vicious attacks on medical professionals, particularly in China [ ], many health consumers check physician information on the internet and seek web-based health information to avoid unpleasant medical experiences. Seeking web-based health information increased their awareness of online physician rating information and motivated them to rate physicians. Development of serious diseases and good medical experience were also predictors of the rating behavior. This result corroborates those of previous studies that found a large number of positive reviews on physician rating websites [ - ]. Further, altruism was negatively related to the rating behavior, indicating that egoistic motivation played a role and that nonposters may have overstated their altruism with respect to generating online physician rating information.
Self-efficacy reflects an optimistic self-belief that one can perform a task, and it was found to be positively related to the rating behavior. In a web-based context, trust is always a big concern, and it was found to be positively related to the usage of online physician rating information [ ]. In our study, trust in online physician rating information was also positively related to the rating behavior. However, as most participants had not used physician rating websites, perceived usefulness and perceived ease of use were not significantly associated with the rating behavior.
Regarding the rating intention, the cognitive factors explained the largest share of the variance, and the factors influencing the rating intention of posters and nonposters differed. The common factors were health information-seeking ability, rating habit, altruism, self-efficacy, and trust in online physician rating information. However, most health- and technology-related variables that predicted the rating intentions of the posters and nonposters were different. Among the health-related variables, the rating intention of the posters was mainly predicted by the usage of web-based health information or services, while the rating intention of the nonposters was associated with health status and medical experience. Although our results indicated that serious disease development and good medical experience predicted the actual rating behavior, our linear regression model demonstrated that chronic disease and bad medical experience were associated with the rating intentions of the nonposters after they became aware of physician rating websites. Additionally, perceived usefulness was associated with the rating intention of the nonposters, and perceived ease of use was associated with the rating intention of the posters. Further, we noticed that the posters rated the rating intention, altruism, trust in online physician rating information, perceived usefulness, and perceived ease of use higher than the nonposters did.
Based on the results of our study, we have some recommendations for physician rating websites and physicians, who are the stakeholders of online physician rating information generation. Commercial physician rating websites are the main sources for health consumers to access online physician rating information; thus, a large amount of online physician rating information is necessary and critical for the development of physician rating websites. A large amount of online physician rating information can be generated as follows. First, physician rating websites need to cooperate with widely used search engines and social media to increase the awareness of these websites among health consumers, since our results indicated that many consumers were unaware of these websites, and that usage of web-based physician information could improve online physician rating information generation behavior and rating intention. Although Chinese physician rating websites have provided services for many years and are top-ranked in search engine results pages, most health consumers are still unfamiliar with these websites and are uncertain about their reliability. Second, physician rating websites need to cooperate with hospitals officially. Health consumers have high levels of trust in public hospitals; thus, cooperation with hospitals would enhance consumers' trust and improve the usability of commercial physician rating websites. Our findings suggested that trust was positively related to both the physician rating behavior and the rating intention. In fact, reviews on some physician rating websites in China increased greatly after these websites began providing booking services for hospitals, and these websites are becoming increasingly popular among health consumers in cities. Third, physician rating websites must provide additional incentives for health consumers to generate online physician rating information. Knowledge sharing is an altruistic behavior.
However, our results indicated that altruism was negatively related to the actual rating behavior; egoism may play an important role in the actual rating behavior. Thus, a better incentive mechanism is needed to attract health consumers to rate physicians on physician rating websites. Fourth, physician rating websites need to cooperate with physicians and provide web-based medical services in addition to online physician rating information. In the past 2 years, many physicians in China have begun to use physician rating websites to provide web-based medical services, which has greatly increased the number of reviews and the usability of these websites.
The results of our study could also be interesting for physicians. Online physician rating information is important for physicians to build their reputation and succeed in their careers. Thus, physicians need to actively encourage patients to generate online physician ratings through the following measures. First, physicians should attend to patients' medical experiences. We found that a good medical experience predicted the actual behavior of rating the physician on physician rating websites. This finding is consistent with previous studies, which found that reviews for physicians are mostly positive [, ]. Physicians need not worry about negative reviews ruining their reputation, even though a bad medical experience was positively related to the rating intention of nonposters. Physicians are encouraged to show empathy to their patients, who may consequently provide positive reviews about them. Second, physicians should recommend physician rating websites to their patients and encourage them to provide online physician ratings after receiving the medical service. Physician recommendations would increase patients' trust in online physician rating information and directly lead to the generation of more reviews. Even though it may be embarrassing to be rated by patients, physicians should accept that online physician rating information could help them improve their medical services.
Limitations and Future Directions
This study has some limitations. First, we used a snowball sampling method and focused on well-educated people in China who were younger than 46 years. There is a possibility of selection bias among respondents, even though they are the potential users of online physician rating information. Thus, a large randomized sample would certainly be desirable in future studies. Second, we only tested the altruistic motivation, which was found to be negatively related to the rating behavior. Future studies should analyze how egoistic motivation directly affects the rating behavior. Third, we did not differentiate people with bad medical experiences from people with good medical experiences. Medical experience could be an interesting variable to focus on, considering the special patient-physician relationship in China. Researchers should explore whether the type of medical experience has a nuanced effect on the intention to post online physician rating information, given the strained physician-patient relationships in China. Finally, the factors influencing the actual behavior and the intention of rating physicians were quite different in our study. Since many participants were unaware of physician rating websites before our survey, it would be better to examine how these factors affect their actual rating behavior. Even though intention is predictive of future behavior, the self-reported intention might be exaggerated. A long-term follow-up study is needed to investigate how different factors affect the actual rating behavior after health consumers become aware of online physician rating information.
Since the limited number of web-based reviews greatly hampers the effective usage of physician rating information, it is important to discuss the variables that influence the generation of physician rating information from the health consumer’s perspective. Our cross-sectional study shows that factors affecting the physician rating behavior and rating intention are different. We found that health-related variables influenced the physician rating behavior while cognitive variables were critical in the rating intentions. Based on our findings, we have provided some practical suggestions for physician rating websites and physicians to promote the generation of online physician rating information.
The research was supported by the Philosophy and Social Science Foundation of the Guangdong Province (GD19YTS01), MOE (Ministry of Education in China) Project of Humanities and Social Sciences (20YJCZH039), Guangdong Basic and Applied Basic Research Joint Foundation (2019A1515110347), and Guangzhou Philosophy and Social Sciences Foundation (2020GZQN38). We sincerely appreciate the research participants in this study. We also thank the editors and the anonymous reviewers for their thoughtful comments and constructive suggestions during the review process.
XH was the project leader and a major contributor to drafting the manuscript. BL was responsible for data collection and was another major contributor to drafting the manuscript. TZ and JQ were responsible for data collection and data analysis. All authors read and approved the final manuscript.
Conflicts of Interest
Survey questionnaire (DOCX file, 360 KB)
- Pasternak A, Scherger J. Online reviews of physicians: what are your patients posting about you? Fam Pract Manag 2009;16(3):9-11 [FREE Full text] [Medline]
- Tu H, Lauer J. Word of mouth and physician referrals still drive health care provider choice. Res Brief 2008 Dec(9):1-8. [Medline]
- Hanauer DA, Zheng K, Singer DC, Gebremariam A, Davis MM. Public awareness, perception, and use of online physician rating sites. JAMA 2014 Feb 19;311(7):734-735. [CrossRef] [Medline]
- Emmert M, Meier F, Pisch F, Sander U. Physician Choice Making and Characteristics Associated With Using Physician-Rating Websites: Cross-Sectional Study. J Med Internet Res 2013 Aug 28;15(8):e187. [CrossRef]
- Leslie J. Software Advice. 2014 Nov 19. Patient Use of Online Reviews URL: http://www.softwareadvice.com/medical/industryview/online-reviews-report-2014/ [accessed 2019-04-23]
- McLennan S, Strech D, Meyer A, Kahrass H. Public Awareness and Use of German Physician Ratings Websites: Cross-Sectional Survey of Four North German Cities. J Med Internet Res 2017 Nov 09;19(11):e387. [CrossRef]
- McBride DL. Parental use of online physician rating sites. J Pediatr Nurs 2015;30(1):268-269. [CrossRef] [Medline]
- Wangler J, Jansky M. How Family Practitioners Judge Physician Rating Websites - Assessments, Experiences, Effects. ZFA 2017;93(5):216-220. [CrossRef]
- Samora JB, Lifchez SD, Blazar PE, American Society for Surgery of the Hand EthicsProfessionalism Committee. Physician-Rating Web Sites: Ethical Implications. J Hand Surg Am 2016 Jan;41(1):104-110.e1. [CrossRef] [Medline]
- Holliday AM, Kachalia A, Meyer GS, Sequist TD. Physician and Patient Views on Public Physician Rating Websites: A Cross-Sectional Study. J GEN INTERN MED 2017 Feb 1;32(6):626-631. [CrossRef] [Medline]
- Gordon HS. Consider Embracing the Reviews from Physician Rating Websites. J Gen Intern Med 2017 Jun;32(6):599-600 [FREE Full text] [CrossRef] [Medline]
- Ellimoottil C, Leichtle S, Wright C, Fakhro A, Arrington A, Chirichella T, et al. Online physician reviews: the good, the bad and the ugly. Bull Am Coll Surg 2013 Sep;98(9):34-39. [Medline]
- McLennan S. Quantitative Ratings and Narrative Comments on Swiss Physician Rating Websites: Frequency Analysis. J Med Internet Res 2019 Jul 26;21(7):e13816 [FREE Full text] [CrossRef] [Medline]
- Liu J, Hou S, Evans R, Xia C, Xia W, Ma J. What Do Patients Complain About Online: A Systematic Review and Taxonomy Framework Based on Patient Centeredness. J Med Internet Res 2019 Aug 07;21(8):e14634 [FREE Full text] [CrossRef] [Medline]
- Hong YA, Liang C, Radcliff TA, Wigfall LT, Street RL. What Do Patients Say About Doctors Online? A Systematic Review of Studies on Patient Online Reviews. J Med Internet Res 2019 Apr 8;21(4):e12521. [CrossRef]
- Pike CW, Zillioux J, Rapp D. Online Ratings of Urologists: Comprehensive Analysis. J Med Internet Res 2019 Jul 02;21(7):e12436 [FREE Full text] [CrossRef] [Medline]
- Kirkpatrick W, Abboudi J, Kim N, Medina J, Maltenfort M, Seigerman D, et al. An Assessment of Online Reviews of Hand Surgeons. Arch Bone Jt Surg 2017 May;5(3):139-144 [FREE Full text] [Medline]
- Okike K, Peter-Bibb TK, Xie KC, Okike ON. Association Between Physician Online Rating and Quality of Care. J Med Internet Res 2016 Dec 13;18(12):e324 [FREE Full text] [CrossRef] [Medline]
- Lagu T, Metayer K, Moran M, Ortiz L, Priya A, Goff SL, et al. Website Characteristics and Physician Reviews on Commercial Physician-Rating Websites. JAMA 2017 Feb 21;317(7):766-768 [FREE Full text] [CrossRef] [Medline]
- Emmert M, Halling F, Meier F. Evaluations of dentists on a German physician rating Website: an analysis of the ratings. J Med Internet Res 2015 Jan 12;17(1):e15 [FREE Full text] [CrossRef] [Medline]
- Terlutter R, Bidmon S, Röttl J. Who uses physician-rating websites? Differences in sociodemographic variables, psychographic variables, and health status of users and nonusers of physician-rating websites. J Med Internet Res 2014 Mar 31;16(3):e97 [FREE Full text] [CrossRef] [Medline]
- Galizzi MM, Miraldo M, Stavropoulou C, Desai M, Jayatunga W, Joshi M, et al. Who is more likely to use doctor-rating websites, and why? A cross-sectional study in London. BMJ Open 2012;2(6):e001493 [FREE Full text] [CrossRef] [Medline]
- Hao H. The development of online doctor reviews in China: an analysis of the largest online doctor review website in China. J Med Internet Res 2015 Jun 01;17(6):e134 [FREE Full text] [CrossRef] [Medline]
- Hao H, Zhang K. The Voice of Chinese Health Consumers: A Text Mining Approach to Web-Based Physician Reviews. J Med Internet Res 2016 May 10;18(5):e108 [FREE Full text] [CrossRef] [Medline]
- Hao H, Zhang K, Wang W, Gao G. A tale of two countries: International comparison of online doctor reviews between China and the United States. Int J Med Inform 2017 Mar;99:37-44. [CrossRef]
- Zhang W, Deng Z, Hong Z, Evans R, Ma J, Zhang H. Unhappy Patients Are Not Alike: Content Analysis of the Negative Comments from China's Good Doctor Website. J Med Internet Res 2018 Jan 25;20(1):e35 [FREE Full text] [CrossRef] [Medline]
- Li J, Liu M, Li X, Liu X, Liu J. Developing Embedded Taxonomy and Mining Patients' Interests From Web-Based Physician Reviews: Mixed-Methods Approach. J Med Internet Res 2018 Aug 16;20(8):e254 [FREE Full text] [CrossRef] [Medline]
- Deng Z, Hong Z, Zhang W, Evans R, Chen Y. The Effect of Online Effort and Reputation of Physicians on Patients' Choice: 3-Wave Data Analysis of China's Good Doctor Website. J Med Internet Res 2019 Mar 08;21(3):e10170 [FREE Full text] [CrossRef] [Medline]
- Han X, Qu J, Zhang T. Exploring the impact of review valence, disease risk, and trust on patient choice based on online physician reviews. Telemat Informatics 2019 Dec;45:101276. [CrossRef]
- Li S, Hubner A. The Impact of Web-Based Ratings on Patient Choice of a Primary Care Physician Versus a Specialist: Randomized Controlled Experiment. J Med Internet Res 2019 Jun 28;21(6):e11188 [FREE Full text] [CrossRef] [Medline]
- Ma WW, Chan A. Knowledge sharing and social media: Altruism, perceived online attachment motivation, and perceived online relationship commitment. Comput Human Behav 2014 Oct;39:51-58. [CrossRef]
- Szajna B. Software Evaluation and Choice: Predictive Validation of the Technology Acceptance Instrument. MIS Q 1994 Sep;18(3):319-324. [CrossRef]
- Ajzen I. The theory of planned behavior. Organ Behav Hum Decis Process 1991 Dec;50(2):179-211. [CrossRef]
- Richard M, Chebat J, Yang Z, Putrevu S. A proposed model of online consumer behavior: Assessing the role of gender. J Bus Res 2010 Sep;63(9-10):926-934. [CrossRef]
- Shiau W, Luo MM. Continuance intention of blog users: the impact of perceived enjoyment, habit, user involvement and blogging time. Behav Inf Technol 2013 Jun;32(6):570-583. [CrossRef]
- Harris J. Altruism: Should it be Included as an Attribute of Medical Professionalism? Heal Prof Educ 2018 Mar;4(1):3-8. [CrossRef]
- Zhang X, Liu S, Deng Z, Chen X. Knowledge sharing motivations in online health communities: A comparative study of health professionals and normal users. Comput Human Behav 2017 Oct;75:797-810. [CrossRef]
- Chang HH, Chuang S. Social capital and individual motivations on knowledge sharing: Participant involvement as a moderator. Inf Manag 2011 Jan;48(1):9-18. [CrossRef]
- Bandura A. Self-efficacy: Toward a unifying theory of behavioral change. Adv Behav Res Ther 1978 Jan;1(4):139-161. [CrossRef]
- Zhang X, de Pablos PO, Xu Q. Culture effects on the knowledge sharing in multi-national virtual classes: A mixed method. Comput Human Behav 2014 Feb;31:491-498. [CrossRef]
- Tamjidyamcholo A, Bin Baba MS, Tamjid H, Gholipour R. Information security – Professional perceptions of knowledge-sharing intention under self-efficacy, trust, reciprocity, and shared-language. Comput Educ 2013 Oct;68:223-232. [CrossRef]
- Mayer RC, Davis JH, Schoorman FD. An Integrative Model Of Organizational Trust. AMR 1995 Jul;20(3):709-734. [CrossRef]
- Sparks BA, Browning V. The impact of online reviews on hotel booking intentions and perception of trust. Tour Manag 2011 Dec;32(6):1310-1323. [CrossRef]
- Karahanna E, Straub DW. The psychological origins of perceived usefulness and ease-of-use. Inf Manag 1999 Apr;35(4):237-250. [CrossRef]
- Davis FD. Perceived Usefulness, Perceived Ease of Use, and User Acceptance of Information Technology. MIS Q 1989 Sep;13(3):319. [CrossRef]
- Montgomery DC, Peck EA, Vining G. Introduction to Linear Regression Analysis. New York: Wiley-Interscience; Feb 26, 2007:323-341.
- Fritz CO, Morris PE, Richler JJ. Effect size estimates: Current use, calculations, and interpretation. J Exp Psychol Gen 2012;141(1):2-18. [CrossRef]
- The Lancet. Protecting Chinese doctors. The Lancet 2020 Jan;395(10218):90. [CrossRef]
- Emmert M, Meier F. An analysis of online evaluations on a physician rating website: evidence from a German public reporting instrument. J Med Internet Res 2013 Aug 06;15(8):e157 [FREE Full text] [CrossRef] [Medline]
- Emmert M, Sander U, Esslinger AS, Maryschok M, Schöffski O. Public Reporting in Germany: the Content of Physician Rating Websites. Methods Inf Med 2018 Jan 19;51(02):112-120. [CrossRef]
- Ellimoottil C, Hart A, Greco K, Quek ML, Farooq A. Online Reviews of 500 Urologists. J Urol 2013 Jun;189(6):2269-2273. [CrossRef]
- Lagu T, Hannon NS, Rothberg MB, Lindenauer PK. Patients' evaluations of health care providers in the era of social networking: an analysis of physician-rating websites. J Gen Intern Med 2010 Sep;25(9):942-946 [FREE Full text] [CrossRef] [Medline]
ANOVA: analysis of variance
VIF: variance inflation factor
Edited by G Eysenbach; submitted 17.04.19; peer-reviewed by M Duplaga, S Bidmon; comments to author 03.10.19; revised version received 17.12.19; accepted 29.03.20; published 04.06.20
Copyright
©Xi Han, Bei Li, Tingting Zhang, Jiabin Qu. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 04.06.2020.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on http://www.jmir.org/, as well as this copyright and license information must be included.