Figure.  Mean Weighted Ratings for Urologists in California Stratified by Practice Volume

The horizontal line in each box indicates the median, while the top and bottom borders of each box mark the 75th and 25th percentiles, respectively. The whiskers above and below each box mark the 90th and 10th percentiles, respectively. The points beyond the whiskers are outliers above the 90th or below the 10th percentile.

Table.  Multivariable Regression Analysis for Predictors of Online Weighted Ratings for Urologists in California
Research Letter
July 2018

Association of Patient Volume With Online Ratings of California Urologists

Author Affiliations
  • 1Department of Surgery, Washington University, St Louis, Missouri
  • 2Department of Urology, University of California, San Francisco
  • 3Department of Surgery, King Abdulaziz University, Rabigh, Saudi Arabia
  • 4Department of Surgery, Dell Medical School, University of Texas, Austin
  • 5Department of Biostatistics and Epidemiology, University of California, San Francisco
JAMA Surg. 2018;153(7):685-686. doi:10.1001/jamasurg.2018.0149

Online reviews are an increasingly popular tool for patients to evaluate and choose physicians.1 Although the accuracy, utility, and meaning of online reviews are debated by physicians, the patient perspective is a valued component of the physician-patient relationship and is likely to increase in importance.2,3 Reliable online reviews provide guidance for health care consumers as well as feedback to physicians. Online reviews are influenced by many factors, including patient wait times; however, little else is known about physician practice patterns and their effect on reviews.4 We evaluated Medicare billing data and online reviews of urologists in California, with the hypothesis that urologists with higher-volume practices would have lower patient ratings, potentially owing to shorter physician-patient interactions and increased wait times.

Methods

We retrospectively reviewed Medicare data from January 1 to December 31, 2014, on the 665 urologists in California. We obtained data from propublica.com tracking physician billing and reimbursement for patients receiving Medicare, including the number of patients seen and the number of services billed. We also recorded the sex and practice setting (academic vs private practice) of each urologist; practice settings were considered academic if they were associated with a residency training program. The number of reviews and mean score (range, 1-5, where 1 indicates the poorest rating and 5 the best) were then obtained from 4 websites (Ratemd.com, Healthgrades.com, Vitals.com, and Yelp.com). We compared urologists' weighted ratings, stratified by number of patients seen, number of Medicare services billed, sex, and practice setting. Data analysis was performed from January 1 to June 30, 2017. Univariable and multivariable linear regression analyses were performed; confounding variables were chosen a priori for each analysis and included in the multivariable model. A Wilcoxon-type test was used to test for trend. All tests were 2-sided, and P < .05 was considered significant. The study was approved by the University of California–San Francisco institutional review board, which granted a waiver of informed consent because the data accessed were publicly available.
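The letter does not publish its analysis code. The minimal sketch below illustrates how a weighted mean rating across the 4 websites and the multivariable linear model described above could be assembled in Python; it is not the authors' implementation. The file name and column names (for example, n_reviews_yelp, medicare_patients, academic) are hypothetical placeholders, and weighting each site's mean score by its review count is an assumption implied, but not explicitly specified, by the letter.

```python
# Illustrative sketch only (not the authors' code). Column and file names are
# hypothetical; the review-count weighting is an assumption.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("ca_urologists_2014.csv")  # hypothetical assembled data set

sites = ["ratemd", "healthgrades", "vitals", "yelp"]
counts = df[[f"n_reviews_{s}" for s in sites]].fillna(0).to_numpy()
scores = df[[f"rating_{s}" for s in sites]].fillna(0).to_numpy()

# Weighted rating: each website's mean score weighted by its number of reviews.
df["weighted_rating"] = (counts * scores).sum(axis=1) / counts.sum(axis=1)
df = df.dropna(subset=["weighted_rating"])  # drops urologists with no reviews

# Multivariable linear regression on Medicare patient volume (per 100 patients),
# total services billed, sex, and practice setting (academic vs private).
df["patients_per_100"] = df["medicare_patients"] / 100
model = smf.ols(
    "weighted_rating ~ patients_per_100 + services_billed + male + academic",
    data=df,
).fit()
print(model.summary())
```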

Results

Of the 665 urologists in California with Medicare patients in 2014, the mean total number of reviews in the 4 websites combined was 10, and 651 urologists had at least 1 rating. Ratemd.com had 325 reviews, Healthgrades.com had 600 reviews, Vitals.com had 604 reviews, and Yelp.com had 236 reviews. Among the study sample, there were 600 male urologists and 581 urologists who worked in a nonacademic setting. Mean weighted ratings for academic urologists were 4.2 (95% CI, 4-4.3), compared with 3.7 (95% CI, 3.6-3.8) for their nonacademic peers (P < .001). The Figure demonstrates the difference in ratings for academic and private practice urologists broken down into tertiles by number of patients seen. Female urologists had similar mean weighted ratings compared with men (3.9 [95% CI, 3.7-4.2] vs 3.8 [95% CI, 3.7-3.8]; P = .10).

The median number of Medicare patients seen per physician in 2014 was 426 (interquartile range, 241-693), with 2293 (interquartile range, 845-5139) total services billed. There was a significant trend toward higher ratings for urologists who saw fewer Medicare patients, using a Wilcoxon-type test for trend.
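Continuing from the hypothetical data frame sketched in the Methods section, the snippet below shows one way to form the patient-volume tertiles used in the Figure. The authors report a Wilcoxon-type test for trend; SciPy has no direct equivalent, so a Kruskal-Wallis test across tertiles is shown here purely as a stand-in, which is a different and coarser test than the one actually used.

```python
# Illustrative sketch only; continues from the data frame (df) assembled above.
# Kruskal-Wallis across tertiles is a stand-in, not the authors' trend test.
from scipy.stats import kruskal

df["volume_tertile"] = pd.qcut(
    df["medicare_patients"], q=3, labels=["low", "middle", "high"]
)
tertile_ratings = [
    grp["weighted_rating"].dropna() for _, grp in df.groupby("volume_tertile")
]
print(df.groupby("volume_tertile")["weighted_rating"].mean())  # Figure-style summary
print(kruskal(*tertile_ratings))
```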

The multivariable analysis of ratings controlled for sex, practice setting, and total services billed (Table). Academic practice setting was associated with higher ratings, whereas a larger Medicare patient load was associated with poorer ratings: for every 100 patients seen, ratings decreased by 0.04 (P = .001). A greater number of services billed was associated with lower ratings on univariable analysis, but this association was not significant in the multivariable model.
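As an illustrative calculation based on the reported coefficient (not a result stated in the letter), a urologist at the 75th percentile of Medicare volume (693 patients) would be predicted to score roughly (693 − 241) / 100 × 0.04 ≈ 0.18 points lower than a urologist at the 25th percentile (241 patients), with sex, practice setting, and services billed held constant.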

Limitations of this study include the use of propublica.com Medicare data, which may not accurately represent a physician’s non-Medicare patient population.

Discussion

Online patient ratings for urologists in California were lower for those with higher-volume practices. Research in other specialties suggests that physicians with busier practices have longer wait times and spend less time with patients, which are major drivers of ratings.4 Univariable analysis suggested that ratings were actually poorer for doctors who billed for more services. For urologists, many of these services are invasive procedures, which may contribute to lower ratings.

Female urologists had no difference in ratings from their male counterparts, although other fields have shown higher ratings for women.5 The large difference between ratings for academic and nonacademic urologists was surprising. The perception of seeing an expert in a particular subspecialty may be appealing to patients and drive higher ratings. More research is needed to determine what factors lead to more satisfied patients.6

Article Information

Accepted for Publication: December 28, 2017.

Corresponding Author: Gregory P. Murphy, MD, Department of Surgery, Washington University, 4960 Children’s Pl, Campus Box 8242, St Louis, MO 63110 (murphyg@wustl.edu).

Published Online: March 21, 2018. doi:10.1001/jamasurg.2018.0149

Author Contributions: Drs Murphy and Breyer had full access to all the data in the study and take responsibility for the integrity of the data and the accuracy of the data analysis.

Study concept and design: Murphy, Awad, Gaither, Breyer.

Acquisition, analysis, or interpretation of data: All authors.

Drafting of the manuscript: Murphy, Osterberg, Baradaran.

Critical revision of the manuscript for important intellectual content: Murphy, Awad, Tresh, Gaither, Baradaran, Breyer.

Statistical analysis: Murphy, Awad, Gaither, Breyer.

Administrative, technical, or material support: Murphy, Tresh, Baradaran.

Study supervision: Murphy, Baradaran, Breyer.

Conflict of Interest Disclosures: None reported.

Funding/Support: Funding from the Alafi Foundation was used to obtain Medicare data from propublica.com.

Role of the Funder/Sponsor: Funding from the Alafi Foundation supported the design and conduct of the study, specifically data collection. The Alafi Foundation had no role in the management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; or the decision to submit the manuscript for publication.

References
1. Hanauer DA, Zheng K, Singer DC, Gebremariam A, Davis MM. Public awareness, perception, and use of online physician rating sites. JAMA. 2014;311(7):734-735.
2. Okike K, Peter-Bibb TK, Xie KC, Okike ON. Association between physician online rating and quality of care. J Med Internet Res. 2016;18(12):e324.
3. Murphy GP, Awad MA, Osterberg EC, et al. Web-based physician ratings for California physicians on probation. J Med Internet Res. 2017;19(8):e254.
4. Emmert M, Adelhardt T, Sander U, Wambach V, Lindenthal J. A cross-sectional study assessing the association between online ratings and structural and quality of care measures: results from two German physician rating websites. BMC Health Serv Res. 2015;15:414.
5. Nwachukwu BU, Adjei J, Trehan SK, et al. Rating a sports medicine surgeon's 'quality' in the modern era: an analysis of popular physician online rating websites. HSS J. 2016;12(3):272-277.
6. Lee V. Transparency and trust—online patient reviews of physicians. N Engl J Med. 2017;376(3):197-199.