Table 1. Spearman Correlation Coefficients and Agreement in Performance Rankings of FY09 Surgical Quality Measuresa
Table 2. Agreement in High, Average, and Low Hospital Surgical Performance Using FY09 Surgical Quality Measuresa
Research Letter
Association of VA Surgeons
November 2014

Measuring Surgical Quality: Which Measure Should We Trust?

Author Affiliations
  • 1Center for Healthcare Organization and Implementation Research, VA Boston Healthcare System, Boston, Massachusetts
  • 2Department of Surgery, Boston University School of Medicine, Boston, Massachusetts
  • 3Department of Operations and Technology Management, Boston University School of Management, Boston, Massachusetts
  • 4Department of Surgery, VA Boston Healthcare System, Boston, Massachusetts
  • 5Harvard Medical School, Boston, Massachusetts
JAMA Surg. 2014;149(11):1210-1212. doi:10.1001/jamasurg.2014.373

The use of surgical quality measures to target quality improvement efforts and evaluate hospital performance is now standard. Surgical quality in Veterans Affairs (VA) hospitals is measured by the VA Surgical Quality Improvement Program (VASQIP),1 the Surgical Care Improvement Program (SCIP),2 and the Patient Safety Indicators (PSIs).3 Each approach has a different perspective on surgical quality and uses a different source of data. For example, the VASQIP evaluates 30-day postoperative morbidity and mortality outcomes among other parameters, the SCIP measures compliance with specific perioperative processes of care, and the PSIs calculate the rates of potentially preventable, inpatient, surgical adverse events using administrative data. We explored the correlation between VASQIP, SCIP, and PSI measures and how consistently they identified high- and low-performing VA hospitals.

Methods

We used quality indicator data from fiscal year 2009 (ie, from October 1, 2008, to September 30, 2009) from 67 VA hospitals with advanced surgical programs. We obtained the hospitals’ VASQIP morbidity and mortality observed to expected ratios and SCIP compliance scores from the 2010 VA Facility Quality and Safety Report.4 We ran the PSI software on hospital surgical discharge data to generate observed and risk-adjusted rates for each of the 7 postoperative PSIs. We then adapted the PSI composite software to develop a PSI surgery composite score calculated for the postoperative PSIs using numerator-based weights.5 Using these 4 quality measures, we ranked hospitals and examined the correlation between ranks. We also identified the top and bottom 25% of hospitals, and calculated the number of hospitals with high or low performance on multiple indicators.
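The ranking and agreement analysis described above can be sketched in a few lines of Python. This is a minimal illustration, not the study's analysis code: the hospital scores, their distributions, and the simple no-ties quartile cutoffs below are all assumptions made for demonstration.

```python
import random

random.seed(0)
N_HOSPITALS = 67  # VA hospitals with advanced surgical programs, as in the study

# Hypothetical scores (illustrative only): for both measures here, a lower
# observed-to-expected ratio means better performance.
vasqip_oe = [random.gauss(1.0, 0.2) for _ in range(N_HOSPITALS)]
psi_composite = [random.gauss(1.0, 0.3) for _ in range(N_HOSPITALS)]

def rank(xs):
    """0-based rank of each value (no ties assumed for continuous scores)."""
    order = sorted(range(len(xs)), key=xs.__getitem__)
    r = [0] * len(xs)
    for pos, i in enumerate(order):
        r[i] = pos
    return r

def spearman_rho(x, y):
    """Spearman correlation = Pearson correlation computed on the ranks."""
    rx, ry = rank(x), rank(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) *
           sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

def performance_category(scores):
    """Best quartile -> 'high', worst quartile -> 'low', rest -> 'average'."""
    r, q = rank(scores), len(scores) // 4
    return ["high" if ri < q else "low" if ri >= len(scores) - q else "average"
            for ri in r]

rho = spearman_rho(vasqip_oe, psi_composite)
cats_a = performance_category(vasqip_oe)
cats_b = performance_category(psi_composite)
# Fraction of hospitals placed in the same performance category by both measures
agreement = sum(a == b for a, b in zip(cats_a, cats_b)) / N_HOSPITALS
```

With 67 hospitals, the quartile cut puts 16 hospitals each in the high and low groups and 35 in the average group; the agreement statistic is the analogue of the category-agreement percentages reported in the Results.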

Results

Few comparisons yielded significant correlations. Only the hospital VASQIP morbidity observed to expected ratio and the hospital PSI surgery composite score had a significant, albeit weak, association (r = 0.267, P = .03) (Table 1). Agreement on whether hospitals were high, average, or low performers was similarly limited: the SCIP compliance score and the PSI surgery composite score agreed on performance category for 45% of the hospitals (the highest agreement), whereas the SCIP compliance score and the VASQIP mortality observed to expected ratio agreed for only 37% (the lowest agreement). Although none of the hospitals performed well on all 4 measures, 5 of the 67 hospitals (7%) were in the top 25% on 3 of the measures. On all 4 measures, 7 hospitals (10%) were considered average, and 1 hospital (1%) was in the bottom 25% (Table 2).

Discussion

High performance on one type of surgical quality measure was not associated with high performance on another. The lack of correlation between the VASQIP measures, the SCIP compliance score, and the PSI surgery composite score suggests that these indicators measure different dimensions of surgical quality. Information from multiple quality measures is useful in directing individual facilities toward different quality improvement activities. However, from the perspective of comparing facilities, these differences highlight the importance of examining more than 1 measure.

Our findings illustrate the potential confusion that may be associated with multiple, poorly correlated measures that purport to measure quality. However, the confusion arises only when quality is conceptualized as an underlying latent construct that is reflected in the individual indicators. When quality is conceptualized as a construct created by combining individual indicators that reflect different dimensions of quality, the low correlation of individual indicators does not create a problem. In fact, as noted by Feinstein,6 combining uncorrelated dimensions into a composite measure is more consistent with clinical needs than a composite created from multiple dimensions of the same phenomena. We postulate that measures such as the PSIs, VASQIP, and SCIP could be used to develop a single composite measure of quality that encompasses several aspects of surgical quality. In the future, a single composite measure of surgical quality could provide more actionable information for patients, health care professionals, and policy makers as they attempt to differentiate hospital performance.
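The numerator-based weighting used for the PSI surgery composite5 generalizes naturally to the kind of cross-program composite proposed above. A hedged sketch follows; the indicator names, event counts, and performance ratios are hypothetical, not study data:

```python
# Hypothetical national event counts (numerators) per indicator and one
# hospital's performance ratios (O:E-style; 1.0 = as expected).
# All names and values are illustrative assumptions.
event_counts = {"PSI": 120, "VASQIP morbidity": 45, "SCIP noncompliance": 80}
ratios = {"PSI": 1.1, "VASQIP morbidity": 0.8, "SCIP noncompliance": 1.3}

# Numerator-based weights: each indicator's share of total events,
# so more frequent adverse events carry more weight in the composite.
total_events = sum(event_counts.values())
weights = {k: n / total_events for k, n in event_counts.items()}

# The composite is the weighted average of the ratios; weights sum to 1.
composite = sum(weights[k] * ratios[k] for k in ratios)
```

Weighting by event frequency means a dimension on which adverse events are common dominates the composite, which is one defensible design choice; alternatives (equal weights, variance-based weights) would emphasize different aspects of quality.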

Article Information

Corresponding Author: Hillary J. Mull, PhD, MPP, Center for Healthcare Organization and Implementation Research, VA Boston Healthcare System, 150 S Huntington Ave, 152M, Boston, MA 02130 (hillary.mull@va.gov).

Published Online: September 24, 2014. doi:10.1001/jamasurg.2014.373.

Author Contributions: Dr Mull had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.

Study concept and design: Mull, Chen, Shwartz, Itani.

Acquisition, analysis, or interpretation of data: Mull, Chen, Shwartz, Rosen.

Drafting of the manuscript: Mull, Itani.

Critical revision of the manuscript for important intellectual content: All authors.

Statistical analysis: Mull, Chen, Shwartz.

Obtained funding: Rosen.

Administrative, technical, or material support: Chen, Itani.

Study supervision: Rosen.

Conflict of Interest Disclosures: None reported.

Funding/Support: This research was funded by VA Health Services Research and Development Service grant SDR 07-002 (Dr Rosen, principal investigator).

Role of the Funder/Sponsor: The VA Health Services Research and Development Service had no role in the design and conduct of the study; collection, management, analysis, or interpretation of the data; preparation, review, or approval of the manuscript; and decision to submit the manuscript for publication.

Previous Presentation: This paper was presented at the 38th Annual Surgical Symposium of the Association of VA Surgeons; April 7, 2014; New Haven, Connecticut.

References
1.
Khuri  SF, Daley  J, Henderson  W,  et al; National VA Surgical Quality Improvement Program.  The Department of Veterans Affairs’ NSQIP: the first national, validated, outcome-based, risk-adjusted, and peer-controlled program for the measurement and enhancement of the quality of surgical care. Ann Surg. 1998;228(4):491-507.
2.
Bratzler  DW.  The Surgical Infection Prevention and Surgical Care Improvement Projects: promises and pitfalls. Am Surg. 2006;72(11):1010-1016.
3.
Patient Safety Indicators overview. Agency for Healthcare Research and Quality website. http://www.qualityindicators.ahrq.gov/Modules/psi_overview.aspx. Accessed March 3, 2013.
4.
Department of Veterans Affairs; Veterans Health Administration (VHA). 2010 VHA Facility Quality and Safety Report. http://www.va.gov/health/docs/HospitalReportCard2010.pdf. Published October 2010. Accessed August 13, 2014.
5.
Agency for Healthcare Research and Quality (AHRQ). AHRQ Quality Indicators: Composite Measures User Guide for the Patient Safety Indicators (PSI), Version 4.2. Rockville, MD: AHRQ; 2010.
6.
Feinstein  AR.  Multi-item “instruments” vs Virginia Apgar’s principles of clinimetrics. Arch Intern Med. 1999;159(2):125-128.