Objective
To evaluate hospital comparison Web sites for general surgery by (1) performing a systematic Internet search, (2) evaluating Web site quality, and (3) exploring possible areas of improvement.
Design
A systematic Internet search was performed to identify hospital quality comparison Web sites in September 2006. Publicly available Web sites were rated on accessibility, data/statistical transparency, appropriateness, and timeliness. A sample search was performed to determine ranking consistency.
Results
Six national hospital comparison Web sites were identified: 1 government (Hospital Compare [Centers for Medicare and Medicaid Services]), 2 nonprofit (Quality Check [Joint Commission on Accreditation of Healthcare Organizations] and Hospital Quality and Safety Survey Results [Leapfrog Group]), and 3 proprietary sites (names withheld). For accessibility and data transparency, the government and nonprofit Web sites were best. For appropriateness, the proprietary Web sites were best, comparing multiple surgical procedures using a combination of process, structure, and outcome measures. However, none of these sites explicitly defined terms such as complications. Two proprietary sites allowed patients to choose ranking criteria. Most data on these sites were 2 years old or older. A sample search of 3 surgical procedures at 4 hospitals demonstrated significant inconsistencies.
Conclusions
Patients undergoing surgery are increasingly using the Internet to compare hospital quality. However, a review of available hospital comparison Web sites shows suboptimal measures of quality and inconsistent results. This may be partially because of a lack of complete and timely data. Surgeons should be involved with quality comparison Web sites to ensure appropriate methods and criteria.
The Internet has become ubiquitous in our lives. Approximately 70% of Americans use the Internet and average more than 32 hours per month online.1,2 We shop, pay our bills, socialize, and, more than ever before, get health-related information online. A total of 113 million Americans searched for health-related information on the Web in 2006.3 Previously, health-related Internet use was mainly to get information on a specific disease, dietary issues, or exercise; however, this is no longer true. Of health-related Internet users, 29% search for information on specific hospitals or physicians.3 This is the fastest-growing category of search, up 38% in the past 4 years.3
At the same time that Americans are increasingly using the Internet to find hospitals and physicians, there has also been a call from payers and the public for transparency and accountability in health care. There are a number of data sources available. Hospitals in all states must submit detailed information (eg, procedure and diagnostic codes) on all Medicare patients to the Centers for Medicare and Medicaid Services (CMS). In addition, 21 states collect data on all patients discharged from nonfederal acute care hospitals (Arizona, California, Florida, Iowa, Maine, Maryland, Massachusetts, Nevada, New Hampshire, New Jersey, New York, North Carolina, Oregon, Pennsylvania, Rhode Island, Texas, Utah, Vermont, Virginia, Washington, and Wisconsin), and all Veterans Affairs hospitals provide data through the National Surgical Quality Improvement Program. Unfortunately, these data are not readily available to the public because they are generally in a raw and unusable form. For the public to use these data, they must be interpreted and presented in a user-friendly manner. The public and private sectors are attempting to bring these data to the patient through the creation of publicly available hospital comparison Web sites.
Hospital comparison Web sites allow patients to search for hospitals within a given area and compare them based on their performance on various quality measures. These sites are likely to influence the health care system because payers support them and the public believes in them. A recent Harris Interactive/Wall Street Journal poll shows that 57% of adults believe that assessment by third-party organizations that monitor health care quality is a fair way to measure and compare quality of care. This is comparable to the percentage of adults who believed it was fair to judge care based on tests that measure how well physicians are caring for chronic diseases (61%) and the rate of preventive screening tests (57%).4 In addition, General Motors, the nation's largest private purchaser of health care, believes, “American consumers should know as much about the medical care they receive as they do about the vehicles they purchase,”5 and is providing employees access to a hospital comparison Web site as part of its health care benefit. General Motors is not alone, with most major insurers, including Wellpoint, Humana, United Healthcare, Aetna, Cigna, and many state and regional Blue Cross and Blue Shield insurers, providing access to this type of information.5
Surgical patients often have time to research hospitals before an elective operation; thus, surgeons are likely to be affected by hospital comparison Web sites. Yet, to our knowledge, there is little in the literature that examines these Web sites and their content. The overarching aim of the present study is to evaluate these hospital comparison Web sites as they apply to general surgery. Specifically, we will (1) perform a systematic search of the Internet for hospital comparison Web sites, (2) evaluate these Web sites, and (3) explore possible areas of improvement.
A systematic search of the Internet to find hospital comparison Web sites was performed in September 2006 using the 4 most popular Internet search engines: Google, Yahoo!, MSN/Windows Live, and AOL.6-9 These sites are used in approximately 90% of all Internet searches performed in the United States.10 Six specific terms were entered into each of the 4 sites to ensure standardized searches. The 6 terms were hospital quality, hospital quality comparison, hospital comparison, hospital ranking, surgical quality comparison, and surgical ranking. The first 3 pages of results for each search term (in each search engine) were examined. Each search engine has 10 Web site search results per page and an additional variable number of paid advertisement links. We examined search results and advertised links because patients would see both when searching for hospital comparison Web sites. Thus, 30 sites and all advertisement links for each search term were examined to determine if they compared hospitals based on surgical quality. In addition, any pertinent links within the identified Web sites were followed and examined. In total, 846 Web site links were examined.
From the previously described Internet search strategy, Web sites were subsequently included in this study if they met 3 inclusion criteria. The site needed to (1) rank and compare hospitals based on surgical quality measures, (2) rank hospitals nationally (ie, not be restricted to 1 region or state), and (3) be accessible to the public (ie, not restricted to patients in a given insurance plan). Web sites meeting all 3 criteria made up the Web site study sample.
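For illustration only, the following Python sketch models the scale of the search and the 3-part inclusion filter described above. The engine and term lists are taken from the text; the CandidateSite record and its fields are hypothetical conveniences, not part of the study's actual review, which was performed manually.

```python
# Illustrative sketch only: the engine and term lists come from the article,
# but the CandidateSite record and the filter below are hypothetical; the
# actual review of 846 links was performed manually.
from dataclasses import dataclass
from itertools import product

ENGINES = ["Google", "Yahoo!", "MSN/Windows Live", "AOL"]
TERMS = [
    "hospital quality", "hospital quality comparison", "hospital comparison",
    "hospital ranking", "surgical quality comparison", "surgical ranking",
]
PAGES_PER_SEARCH = 3   # first 3 result pages examined per engine-term pair
RESULTS_PER_PAGE = 10  # 10 organic results per page, plus a variable number of ads


@dataclass
class CandidateSite:
    url: str
    ranks_by_surgical_quality: bool  # criterion 1: ranks hospitals on surgical quality
    national_scope: bool             # criterion 2: not restricted to one region or state
    publicly_accessible: bool        # criterion 3: not restricted to one insurance plan


def meets_inclusion_criteria(site: CandidateSite) -> bool:
    """A site enters the study sample only if it satisfies all 3 criteria."""
    return (site.ranks_by_surgical_quality
            and site.national_scope
            and site.publicly_accessible)


searches = list(product(ENGINES, TERMS))  # 4 engines x 6 terms = 24 searches
organic_links = len(searches) * PAGES_PER_SEARCH * RESULTS_PER_PAGE
print(organic_links)  # 720 organic links, before advertisement and internal links

# Hypothetical example record for illustration.
example = CandidateSite("https://example.org", True, True, True)
print(meets_inclusion_criteria(example))  # True
```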
Data collection and analysis
Web sites identified through the systematic search previously described were compared for content and examined for trends. Standardized data were collected from each of the selected Web sites. These data included general information, such as name and URL (both withheld for proprietary sites), and more detailed information on Web site accessibility, transparency, appropriateness, timeliness, and consistency.
Web site accessibility was assessed by examining 3 criteria: cost, requirement of sign-up, and visibility. Web sites were considered most accessible if they were free, did not require sign-up, and were highly visible. Visibility was determined using the Google search engine because it accounts for more than 50% of Internet searches performed.10 A Web site was scored with higher visibility if it appeared on the first page of results for any of the search terms.
Data transparency was assessed by examining 3 criteria: data source, statistical/analytical method, and risk adjustment. Web sites were considered most transparent if their data sources were provided and available to the public, if their statistical and analytical methods were explicitly defined such that their calculations could be reproduced, and if they performed and explicitly defined their risk adjustment method (if outcome measures were used).
Appropriateness was assessed by examining the quality measures used in ranking hospitals. Web sites that used a greater variety of measures, including measures of process, structure, and outcomes, were considered more appropriate. In addition, Web sites that used procedure-specific measures (ie, ranked hospitals by specific operations) were considered more appropriate.
Timeliness was assessed by examining the age of the data used to rank hospitals. Web sites with data less than 1 year old were considered timely.
Consistency was assessed by examining the results of procedure-specific sample searches. To provide concrete illustrations of use and resulting outcomes of the hospital comparison Web sites, sample searches were performed comparing 4 Los Angeles–area hospitals (a large academic medical center, a large county hospital, a large private hospital, and a moderate-sized private hospital) on 3 common procedures (laparoscopic cholecystectomy, hernia repair, and colectomy).
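As a minimal sketch of how the rating dimensions defined above could be recorded per Web site, the hypothetical Python record below mirrors the accessibility, transparency, appropriateness, and timeliness criteria (consistency was judged separately from the sample searches). The field names, roll-up methods, and example values are our own illustration, not the scoring instrument actually used.

```python
# Hypothetical per-site evaluation record mirroring the criteria above.
# Field names, roll-up methods, and the example values are illustrative;
# the study's ratings were made qualitatively, not computed by this code.
from dataclasses import dataclass


@dataclass
class SiteEvaluation:
    name: str
    # Accessibility: cost, sign-up requirement, visibility
    free: bool
    no_signup_required: bool
    first_page_hits: int             # of the 6 search terms, how many gave a first-page result
    # Transparency: data source, methods, risk adjustment
    data_source_public: bool
    methods_reproducible: bool
    risk_adjustment_defined: bool
    # Appropriateness: breadth of measures and procedure specificity
    measure_types: frozenset         # subset of {"process", "structure", "outcome"}
    procedure_specific: bool
    # Timeliness
    data_age_years: float

    def accessible(self) -> bool:
        return self.free and self.no_signup_required and self.first_page_hits > 0

    def transparent(self) -> bool:
        return (self.data_source_public
                and self.methods_reproducible
                and self.risk_adjustment_defined)

    def timely(self) -> bool:
        return self.data_age_years < 1.0


# Illustrative record loosely patterned on the government site's profile.
hospital_compare = SiteEvaluation(
    name="Hospital Compare",
    free=True, no_signup_required=True, first_page_hits=3,
    data_source_public=True, methods_reproducible=True, risk_adjustment_defined=True,
    measure_types=frozenset({"process"}), procedure_specific=False,
    data_age_years=2.0,
)
print(hospital_compare.accessible(), hospital_compare.transparent(), hospital_compare.timely())
```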
Using the search strategy, inclusion criteria, and definitions previously described, 6 Web sites were identified. One was a government Web site (CMS's Hospital Compare).11,12 Two Web sites were run by nonprofit organizations: the Joint Commission on Accreditation of Healthcare Organizations' (JCAHO’s) Quality Check and the Leapfrog Group's Hospital Quality and Safety Survey Results.13 Three Web sites were proprietary (sites A, B, and C [names and URLs withheld]). Many of the sites that were identified in the search, but not included in this study, were insurance company sites that provided a regionally restricted hospital comparison tool available only to their enrollees. Another commonly identified category of Web site not included in this study was state-specific quality comparison sites (eg, the New York State Coronary Artery Bypass Grafting Reporting System).
Three criteria were used to assess Web site accessibility: cost, need for sign-up, and ease of Web site identification (Table 1). The CMS's Hospital Compare and the JCAHO's Quality Check were rated as the most accessible overall. Both were free, required no sign-up or log-in, and were highly visible. Quality Check was more visible (first page for 4 of 6 search terms) than Hospital Compare (first page for 3 of 6 terms), but all of Quality Check's links were paid advertisements vs none for Hospital Compare. The Leapfrog Group's Hospital Quality and Safety Survey Results was free and required no log-in, but was not as visible (first page for 1 of 6 terms). The proprietary sites were much less accessible. All required sign-up and log-in, and 2 of the 3 required a modest annual fee. While Web site B was most visible (first page for 6 of 6 terms), direct links to the other proprietary sites could not be found on the first page for any of the search terms.
The CMS, JCAHO, and Leapfrog Group Web sites were most transparent (Table 1). They provided data sources and statistical methods (including the risk adjustment method when appropriate) such that their calculations could be repeated and their exact results duplicated. The 3 proprietary sites did provide data source information and some general information about statistical and risk adjustment methods; however, this information was not explicitly stated and, therefore, was not reproducible (ie, a researcher with the same data could not repeat their calculations and duplicate their results). In addition, some of their quality measures were ill defined. For example, the term complications was used and loosely defined, but it was not clear whether a higher than expected complication rate for coronary artery bypass grafting meant that a hospital had more sternotomy infections or urinary tract infections.
Appropriateness of hospital comparisons
To judge the appropriateness of the hospital comparisons made by these Web sites, we examined the breadth of the quality measures used (Table 1). The CMS and JCAHO Web sites used only Medicare's surgical infection prevention process measures to compare hospital surgical quality. The 2 surgical infection prevention measures used were as follows: (1) whether antibiotics were given within 1 hour of surgery start time and (2) whether antibiotics were discontinued within 24 hours after surgery (48 hours for cardiac surgery). While these are likely valid measures, they do not seem sufficient to judge the quality of an entire surgical department. Furthermore, neither site gives procedure-specific measures.
The Leapfrog Group's hospital quality comparison criteria focused mainly on structural components of care: computerized order entry, intensivist-staffed intensive care units, and procedural volume (mortality and some process measures are used to compare coronary artery bypass grafting quality in 4 states). While the Leapfrog Group did compare hospitals at the procedural level, only 4 high-risk surgical procedures were examined (coronary artery bypass grafting, pancreatic resection, esophagectomy, and abdominal aortic aneurysm repair), and these procedures compose a small percentage of overall surgical procedures in this country.
The 3 proprietary sites were more complete. All 3 allowed users to compare hospitals on multiple common operations. Web site A compared hospitals on structural (eg, procedural volume and cost), process (eg, surgical infection prevention measures), and outcome (eg, complications, mortality, and patient satisfaction) measures. Web site C compared hospitals on structural (procedural volume and cost) and outcome (mortality, complications, and length of stay) measures. Web site B compared hospitals only on outcome measures (mortality or complications). Only Web sites A and C allowed patients to use their personal preferences to rank hospitals. For example, on these sites, patients may choose to have different criteria, such as mortality, complications, and cost, included or not included in the ranking computation.
Timeliness and consistency
None of the Web sites provided real-time data (Table 1). All data sources for which date information was given were more than 1 year old.
To determine the consistency of these Web sites, sample searches were performed on the 3 proprietary sites, as previously described. Each of the 3 sites had a different method and criteria for scoring hospitals; however, each site essentially ranked hospitals as average, above average, or below average for a given procedure. Because these sites were masked, the ratings were reported as average, best, or worst for all 3 sites regardless of the actual terms used. For any given search, multiple hospitals could receive the same ranking. The CMS and JCAHO Web sites were not included in these searches because they do not rank hospitals at the procedural level. The Leapfrog Group's Web site was not included because it does not report on any of the searched procedures.
For laparoscopic cholecystectomy, the Web sites were consistent (Table 2). Hospital 3 was consistently ranked highly, hospitals 1 and 2 were in the middle, and hospital 4 was at the bottom. For hernia repair, the Web sites performed poorly because of a lack of data and/or reporting (Table 2). Web site B did not report on hernia repair. Web site C only reported information for hospital 1, stating that the others did not meet minimum annual volume criteria and, thus, could not be ranked. For colectomy, the Web sites also performed poorly, but in this case it was because their results were contradictory (Table 2). For example, hospital 2 was ranked best by Web sites B and C but worst by Web site A. Similarly, hospital 4 was ranked worst by Web site A but best by Web site C.
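The short Python sketch below illustrates the kind of cross-site agreement check underlying these comparisons: for each hospital-procedure pair, it collects the ratings reported by the 3 proprietary sites and flags pairs rated best by one site and worst by another. The rating entries shown are a small hypothetical subset patterned on the colectomy example above, not the full sample-search results in Table 2.

```python
# Illustrative agreement check across the 3 proprietary sites. The rating
# entries below are a small hypothetical subset patterned on the colectomy
# example in the text, not the full sample-search results in Table 2.
from collections import defaultdict

# (site, hospital, procedure) -> rating; None means the site did not report.
ratings = {
    ("A", "hospital 2", "colectomy"): "worst",
    ("B", "hospital 2", "colectomy"): "best",
    ("C", "hospital 2", "colectomy"): "best",
    ("A", "hospital 4", "colectomy"): "worst",
    ("B", "hospital 4", "colectomy"): None,
    ("C", "hospital 4", "colectomy"): "best",
}

by_pair = defaultdict(set)
for (site, hospital, procedure), rating in ratings.items():
    if rating is not None:
        by_pair[(hospital, procedure)].add(rating)

# Flag hospital-procedure pairs rated "best" by one site and "worst" by another.
for pair, observed in sorted(by_pair.items()):
    if {"best", "worst"} <= observed:
        print("contradictory rankings for", pair, "->", sorted(observed))
```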
Health care is becoming increasingly transparent. While transparency is likely good for quality of care, it must be implemented appropriately. The present study examines data that are available to the public on the Internet. Using a standardized search strategy, uniform definitions, and explicit inclusion criteria, we identified Web sites that rated surgical care and outcomes.
We have examined hospital comparison Web sites for many factors, including accessibility, data transparency, appropriateness, timeliness, and consistency. We identified 1 government, 2 nonprofit, and 3 proprietary Web sites. In general, the government and nonprofit Web sites were more accessible and transparent with their data. However, the proprietary sites provided more detailed information that was procedure specific and complete. None of the Web sites provided information that was less than 1 year old.
Previous research14-17 has shown that public reporting of data does improve performance on quality measures within the medical realm (eg, acute myocardial infarction quality measures). In addition, consumer surveys have shown that a hospital's reputation is affected by participation in public reporting of quality measures.15 However, to our knowledge, only 1 study18 has evaluated the content and validity of 1 hospital comparison Web site. That study looked only at acute myocardial infarction measures and found that the Web site discriminated well in the aggregate, but poorly between individual hospitals. To our knowledge, our study is the first to review hospital comparison Web sites and how they rate surgical departments. We found that these Web sites are inconsistent in their level of detail and in their results.
The federal government, states, and third-party organizations (the Leapfrog Group and the American College of Surgeons) are collecting increasing amounts of performance data on hospitals and, more recently, individual physicians. At the same time, in part because of the increasing cost of health care, payers are demanding more accountability in medicine. Consequently, these hospital comparison Web sites have arisen to fill the need for user-friendly dissemination of publicly collected data. While some may doubt that these Web sites are having a significant influence on patient care, the health care information industry believes otherwise and is investing accordingly. In late 2006, WebMD purchased the 5-year-old hospital comparison Web site company, Subimo, for $60 million.19
These Web sites may significantly affect our practice as surgeons in the future. At this time, the government and nonprofit Web sites do not provide sufficient procedure-level information for most of our patients. Thus, patients must turn to the more detailed proprietary Web sites, and this poses several concerns. First, these sites do not provide rankings based on current information. In general, rankings are based on data that are 2 or more years old. This dated information may penalize hospitals for the performance of surgeons who no longer operate at their facility. Second, as seen in the sample searches previously described, rankings can be quite inconsistent and contradictory, even for the most common procedures, which may reflect differences in statistical methods and risk adjustment. Finally, financial motivation is also a concern. None of the Web sites require payment from hospitals or surgeons to be included, but as they gain influence in the health care market, this is a potential conflict of interest.
Some of the concerns raised previously arise from the proprietary nature of the Web sites (eg, lack of transparency), but some arise from limitations in overall data quality. The data available are fragmented and inconsistent. For example, 21 states provide all-patient discharge data, but the data are collected by coders, are not audited, and are not uniform across states. The most useful data would be consistent across hospitals and states, providing uniform and high-quality data points based on standardized definitions. One example of high-quality data collection is the National Surgical Quality Improvement Program, a national, risk-adjusted, peer-controlled program with audited data collection by trained surgical clinical reviewers/abstractors; it collects procedural volume data and 45 preoperative, 17 intraoperative, and 33 outcomes data points.20 Another example is the American College of Surgeons' National Cancer Database Electronic Quality Improvement Packet, a data audit and feedback program that uses an existing electronic system to communicate data on quality measures for quality improvement. The American College of Surgeons analyzes data received from more than 1400 National Cancer Database hospitals and provides feedback on the tabulated results, including the quality of the raw data that each hospital submits (eg, areas in which data are missing or questionable). Preliminary findings have demonstrated that this feedback has led to improvement in the quality of data being resubmitted.21 Improvements in the quality of data will allow for more accurate reporting of hospital quality. In addition to higher-quality data, more timely data are needed. Hospital comparison Web sites report data that are, in general, 2 or more years old. Many organizations are trying to address this issue; for example, the Electronic Quality Improvement Packet is initiating a Rapid Case Ascertainment system such that data audit, analysis, and feedback are performed within 4 to 6 months of diagnosis. Such real-time data collection would be novel in terms of reporting.
If accurate databases with uniform measures capable of timely reporting existed across the country, then a consolidation of the number of Web sites that rank or compare hospitals might be warranted. A collaboration of organizations to collect and present uniform data may lead to a more accessible, transparent, appropriate, consistent, and timely mechanism for comparing hospitals for specific surgical procedures. Improvement of hospital comparison Web sites is particularly important for surgeons because it probably will not be long before data on individual physicians are available. The CMS has established the Physician Quality Reporting Initiative, which will collect data on 74 physician-specific quality measures starting in July 2007.22
In summary, the present article assesses the information available on the Internet for evaluating and comparing hospitals on surgical quality. Six Web sites were evaluated, demonstrating significant variations in data accessibility, transparency, appropriateness, timeliness, and consistency. Further work is needed to improve these issues, particularly accessibility for patients, the quality and type of data reported, the statistical methods, and the criteria by which hospitals and specific operations are compared. It is probably important that surgeons be involved with the development of such reporting Web sites so that the comparisons accurately and appropriately reflect the quality of surgical care.
Correspondence: Michael J. Leonardi, MD, 10833 Le Conte Ave, 72-215 Center for Health Sciences, Box 956904, Los Angeles, CA 90095-6904 (mjleonardi@mednet.ucla.edu).
Accepted for Publication: May 19, 2007.
Author Contributions: Study concept and design: Leonardi, McGory, and Ko. Acquisition of data: Leonardi, McGory, and Ko. Analysis and interpretation of data: Leonardi, McGory, and Ko. Drafting of the manuscript: Leonardi, McGory, and Ko. Critical revision of the manuscript for important intellectual content: Leonardi, McGory, and Ko. Statistical analysis: Leonardi, McGory, and Ko. Obtained funding: Leonardi. Administrative, technical, and material support: Leonardi. Study supervision: Leonardi, McGory, and Ko.
Financial Disclosure: None reported.
Funding/Support: This study was supported by the Robert Wood Johnson Clinical Scholars Program.
Previous Presentation: This paper was presented at the 78th Annual Meeting of the Pacific Coast Surgical Association; February 20, 2007; Kohala Coast, Hawaii; and is published after peer review and revision. The discussions that follow this article are based on the originally submitted manuscript and not the revised manuscript.
John Hunter, MD, Portland, Oregon: This is a very timely and illuminating paper. It is a gift to us all when a couple of very bright surgeons take such a confusing array of Internet offerings and organize their observations so clearly.
My observations and questions are these:
1. It appears that the publicly available Web sites (CMS, JCAHO, Leapfrog) have little of the data that the public wants and the proprietary Web sites have lots of the data that the public wants, but it is inconsistently reliable and you gotta pay to get it. How frustrating. The authors make the plea that surgeons be involved in reporting, but how are we to be involved? If we get too close to the data, the public won't believe us. If we stay too removed, we won't believe the data reported.
The ACS [American College of Surgeons] NSQIP [National Surgical Quality Improvement Program] tool is the most likely vehicle available to provide accurate data across a broad range of procedures, but some hospitals don't want to participate for economic reasons (it costs $100 000 to get started, and then there is the annual cost of a nurse). Is NSQIP the answer? Is the price a real barrier to universal participation or a convenient place to hide for hospitals with poor performance?
2. How should we feel about conclusions drawn from 2-year-old administrative data? If we agree that the data are as likely to be misleading as helpful, how do we get better data to our patients? If the ACS takes this on—which they should—it is unlikely we will be able to make the Google search engine start at the ACS Web site without paying an arm and a leg for this privilege.
3. Provider-specific data. This is a bit of a slippery slope. As the ACGME [Accreditation Council for Graduate Medical Education] recognizes that systems-based practice is the present and the future, where does the individual provider fit into this wave of data transparency? Certainly, the surgeon who has cut 10 bile ducts accidentally is probably not a high-quality provider, no matter how high the denominator of total cases. This is a rare problem indeed.
More frequently, the difficulty is that current risk adjustment strategies do not discern subtle, but important, differences in practice, well-known to the medical staff of a hospital. Take the master surgeon who does all the open redo bariatric cases in patients with a BMI [body mass index, which is calculated as weight in kilograms divided by height in meters squared] of greater than 60 and compare him to his partner who does only first-time laparoscopic cases in patients of equivalent age with a BMI of 35 to 45. We expect higher mortality from the master surgeon, but the database browser may shun the more experienced surgeon because his mortality rate is “too high.” None of the databases mentioned contain the risk adjustment strategy to correct for these differences in case mix.
And what about 2 partners who work together, coding all cases in the name of the senior surgeon? A patient searching for data on the junior partner will not see this contribution. Should we provide the Internet shopper surgeon-specific outcome data, recognizing how easily this is misreported and misinterpreted, or should we advocate for institution-specific data only?
Dr Ko: We have heard throughout this meeting how everyone in health care is really trying to improve quality. To Err Is Human came out in the late 1990s and over the subsequent 7 to 8 years a lot has been accomplished in health care and a lot has been accomplished in surgery, with a fair amount due to Dr Scott Jones at the ACS, Division of Research and Optimal Patient Care. With his vision and leadership, he helped to initiate the private sector NSQIP program, the guidelines review programs, and the bariatric accreditation program, among other things. As surgeons, while we have a lot going for us and we are doing a lot, it seems like the quality efforts are really going to further explode. We are seeing increasing amounts of pay for performance, we are collecting and evaluating our SCIP [Surgical Care Improvement Program] data, and all of the alphabet soup groups like the NQF [National Quality Forum] and the AQA [Ambulatory Care Quality Alliance] are trying to identify ways for us to evaluate, demonstrate, and improve our quality. Finally, as Dr Russell notes, we are going to be regulated more, and we're going to be measured.
One of the reasons to measure our quality is for us to see our own data and to improve. But, another reason is for our patients. At present, if somebody has an esophageal or a colon cancer, how do they find a good hospital and a surgeon to help them? Word of mouth is 1 option. Following their own doctor's recommendation is another option. A third possibility is the Internet. The Internet has definitely exploded and has given us an exhaustive amount of information at our fingertips. But, how is the surgical information on the Internet? Is it appropriate? Is it reliable? Is it consistent? Is it what we would want? These are the types of questions that Dr Leonardi attempted to address in this study.
What we found was that the information available on the Internet is not perfect. In fact, some of the sites have a ways to go. But, what's important is that it is a start. Clearly there is more work that needs to be done, but these sites have offered information upon which we can build. Importantly, however, we believe that surgeons need to be more involved with this information sharing—otherwise, suboptimal data with suboptimal analyses may be shared on the Web, and we’ll only have ourselves to blame.
Now to address Dr Hunter's questions. The first question asked how surgeons can get involved, whether the data from NSQIP are the answer, and about cost issues. If everyone took a step back and thought about what would be the ideal thing to do to measure quality, I think everyone would agree that the data used for measurement have to be of good quality and representative. It would be something that is maybe mandatory, something that would probably be reproducible, something that is standardized so that, for example, the definition and labeling of a wound infection in hospital A are the same as a wound infection in hospitals B, C, and D. We would also need to have a system that minimizes gaming. If you think about these issues altogether, the NSQIP may be one of the most ideal, if not the most ideal, programs to do this for a number of reasons. First, for data collection, there is a trained data collector; it's not the surgeon, it's not the nurse, and it's not the resident. It is somebody who is trained and pretty much does this full-time. What is very important also is that there are quality control measures at every NSQIP hospital. For example, that data collection person goes through lots of training and is also audited. The collected data come in, and if they don't look consistent with what that hospital has done in the last 1000 cases, this issue will be brought up.
Now, Dr Hunter mentioned the cost of it, and also a lot of people have been talking about value, which is quality per cost. Hospitals are really thinking about that. There are a couple of things to mention in this regard. First, with good data and good data feedback, quality should improve. If quality improves and there are fewer complications, then costs will consequently go down as well. It's been shown repeatedly in many areas that high quality saves money.
Second, having good data is helpful for pay-for-performance issues. As many in the audience probably know, our hospitals are reporting on the SCIP. Well, CMS is now allowing hospitals to use the data collected in NSQIP to fulfill the SCIP criteria. So, not only will the hospitals gain the value of knowing their risk-adjusted outcomes with high-quality data but they will also be able to use these same data to fulfill the CMS requirement for SCIP.
Is NSQIP perfect now? No, but it continually is improving quality, and quality improvement is iterative. And in terms of being a reliable, appropriate, and helpful data source, NSQIP is probably one of the best programs available.
Your next question asked whether or not we can trust administrative data. I guess the best answer is that we can sometimes trust administrative data. But equally important to recognize is the issue of whether we are looking at the right variables. In your paper, there was a good discussion of what the appropriate measures to use are. This is very important. What metric should we use? For example, for an outpatient breast procedure, for an elective inguinal hernia repair, or for a Nissen for that matter, inpatient mortality is probably not the best and most helpful measure to use. However, inpatient mortality is available and, for that reason, I suspect some of the "report card" Web sites report it. So, getting back to the administrative databases, if we are examining a procedure where mortality is an appropriate outcome metric, that's great. You can use administrative databases for that; they are cheap and they are pretty reliable for inpatient mortality. If, on the other hand, you want to look at quality of life or patient satisfaction, that clearly is not available.
The third issue that Dr Hunter raised was hospital quality vs surgeon quality. I think everyone in this room will agree that there are some procedures that are more dependent on the surgeon and there are some things that are more dependent on the hospital, and there is probably everything in between. In point of fact, we are still learning—how to measure, what to measure, and who to measure. From what we know thus far, however, there is not one size that fits all. And while we are right now mostly measuring hospital quality and that is what is being most routinely reported, this is going to change. Surgeons will be increasingly measured and reported. And also as many predict that in 10 years, 80% of our procedures will be outpatient, outpatient facilities will be measured as well, anything from the freestanding outpatient surgical center to your office. We have to develop ways to take into account the things raised by Dr Hunter, such as case mix, and other factors important to quality and outcomes. At the very least, we need to ensure that data being used are reliable, valid, and appropriate. In brief, this whole area of quality measurement, monitoring, and reporting is expanding—and surgeons should probably be very involved.
Jeffrey Pearl, MD, San Francisco, California: I hope the Program Committee understands that these are important topics, and in the future we would like to have more panel discussions about these issues. The administrative databases are the key to what is available on the Internet for most people who look for this information, and I just recently learned that these databases are based entirely on extraction of data from the medical record by the coders. The coders use certain terminology, so "hypokalemia" is very important to them but "down arrow k" has no meaning to them whatsoever and won't be used. I am wondering, do you think the ACS could help us by taking on ICD-9 [International Classification of Diseases, Ninth Revision] coding to help us get more money for what we do? Could they help us by teaching the language of the coding abstractor so that we could describe our patient issues to the world better?
Dr Ko: Short answer, yes. Improving coding is important. And advocacy for reimbursement is important, and is ongoing at the college.
Thomas R. Russell, MD, Chicago, Illinois: The whole area of quality improvement and patient safety has become a densely populated landscape. Everybody is into it. We go to these meetings, and the insurance companies express their views, the employers have their views, and it all gets very confusing.
We have to seize this opportunity to influence the quality arena. If the MDs [doctors of medicine] don't do it, the MBAs [masters of business administration] will do it. This much is clear.
Transparency is the word of the day now, not only transparency with respect to quality but also cost. This is a significant change in the direction of health care reform today. So, we need to increase the visibility of our efforts in this area, including creating greater awareness about the ACS NSQIP. We need to position ourselves as drivers of this quality improvement movement.
So thanks for bringing this up today. Much still needs to be done. We are in the early beginnings of this game, and it's incumbent upon us to lead this effort.
References
14. Marshall MN, Shekelle PG, Leatherman S, Brook RH. The public release of performance data: what do we expect to gain? a review of the evidence. JAMA. 2000;283(14):1866-1874.
15. Hibbard JH, Stockard J, Tusler M. Hospital performance reports: impact on quality, market share, and reputation. Health Aff (Millwood). 2005;24(4):1150-1160.
16. Lindenauer PK, Remus D, Roman S, et al. Public reporting and pay for performance in hospital quality improvement. N Engl J Med. 2007;356(5):486-496.
17. Robinowitz DL, Dudley RA. Public reporting of provider performance: can its impact be made greater? Annu Rev Public Health. 2006;27:517-536.
18. Krumholz HM, Rathore SS, Chen J, Wang Y, Radford MJ. Evaluation of a consumer-oriented Internet health care report card: the risk of quality ratings based on mortality data. JAMA. 2002;287(10):1277-1287.
20. Khuri SF, Daley J, Henderson W, et al. The Department of Veterans Affairs' NSQIP: the first national, validated, outcome-based, risk-adjusted, and peer-controlled program for the measurement and enhancement of the quality of surgical care: National VA Surgical Quality Improvement Program. Ann Surg. 1998;228(4):491-507.