Context The majority of those who use the National Practitioner Data Bank (NPDB) consider it an
important source of information for peer review activities. However, concern has been raised
that hospitals may be underreporting physicians with performance problems to the NPDB.
Objective To examine variation in clinical privileges action reporting by hospitals
to the NPDB, changes in reporting over time, and the association of hospital
characteristics with reporting.
Design Retrospective cohort study of privileges action reports to the NPDB
between 1991 and 1995, linked with the 1992 and 1995 databases from the Annual
Survey of Hospitals conducted by the American Hospital Association.
Setting and Participants A total of 4743 short-term, nonfederal, general medical/surgical hospitals
throughout the United States that were continuously open from 1991 through 1995
and registered with the NPDB.
Main Outcome Measures (1) Reporting of 1 or more privileges actions during the 5-year study
period and (2) privileges action reporting rates (numbers of actions reported
per 100,000 admissions).
Results Study hospitals reported 3328 privileges actions between 1991 and 1995;
34.2% reported 1 or more actions during the period. The range of privileges
action reporting rates for these hospitals was 0.40 to 52.27 per 100,000 admissions,
with an overall rate of 2.36 per 100,000 admissions. The proportion of hospitals
reporting an action decreased from 11.6% in 1991 to 10.0% in 1995 (P=.008). After adjustment for other factors, urban hospitals had significantly
higher reporting than rural hospitals (adjusted odds ratio [OR], 1.21 [95%
confidence interval {CI}, 1.02-1.43]), while members of the Council of Teaching
Hospitals of the Association of American Medical Colleges had significantly
lower reporting than nonmembers (adjusted OR, 0.54 [95% CI, 0.40-0.73]). There
were notable regional differences in reporting, with the East South Central
region having the lowest rate per 100,000 admissions (1.49 [95% CI, 1.33-1.65]).
Conclusions The results of this study indicate a low and declining level of hospital
privileges action reporting to the NPDB. Several potential explanations exist,
1 of which is that the information reported to the NPDB is incomplete.
The National Practitioner Data Bank (NPDB) serves as a central repository
of information about health care providers' malpractice payments, adverse
licensure actions, professional membership restrictions, and adverse hospital
privileging actions.1 Hospitals are required
to query the NPDB for all new staff appointments and at least once every 2
years for all existing medical staff. They are also required to report actions
that affect the clinical privileges of their medical staff. Reportable actions
include reduction, restriction, suspension, or revocation of clinical privileges
for at least 31 days; the voluntary resignation of clinical privileges either
while peer review of a potential quality concern is taking place or in lieu
of the peer review process; and the denial of clinical privileges to a new
or existing medical staff member when peer review judgment is involved.2
Two surveys found that the majority of those entities using the NPDB
rated it as an important source of information for peer review activities.3,4 The utility of the NPDB, however, is
dependent on its ability to provide complete and accurate information to its
users.
A 1995 Office of Inspector General report raised concern that there
may be underreporting by hospitals of physicians with performance problems
to the NPDB.5 The report found that over 3
years, about 75% of all hospitals, some of them facilities with more than
300 beds, had not reported a single privileges action to the NPDB. The report
also found great variation in privileges action reporting from state to state.
Finally, the number of privileges action reports submitted by hospitals was
less than half the number of reports of licensure actions submitted by state
licensing boards during the same 3-year period. If there is underreporting
of privileges actions to the NPDB, this would undermine the NPDB's effectiveness
as a quality-improvement tool.
In this study, we used 5 years of data from the NPDB linked to information
from the American Hospital Association (AHA) to further examine (1) reporting
of hospital privileges actions to the NPDB, (2) associations between hospital
characteristics and reporting, and (3) changes in reporting over time. We
hypothesized that physicians would be reluctant to take reportable privileges
actions against their peers, resulting in a decrease in reporting over the
study period.
Hospitals listed as open by the AHA for 1 or more years between 1991
and 1995 were hand linked to hospitals registered with the NPDB from its inception
through 1995. Study hospitals were short-term, nonfederal, and general medical/surgical
facilities open throughout the study period.
Hospital characteristics were obtained from the AHA's database of US
hospitals. For almost all variables, we used information from the 1995 AHA
database to describe the hospitals in this study. The percentage of active
and associate staff who were board certified was no longer included in the
1995 survey, so for that variable we used the 1992 database, the only other
year available to us. Data on hospital characteristics were linked to information
that the NPDB maintains on each privileges action taken by hospitals, including
the date of the action and the length of the action.
Variables Describing Clinical Privileges Actions
We identified whether a hospital had submitted a privileges action report
in each of the study years, and calculated a rate of privileges action reports
per 100,000 admissions for each hospital. The number of admissions in each
hospital over 5 years was calculated using 1992 and 1995 AHA data. We assumed
relative constancy in number of admissions, and applied 1992 admission figures
to the years 1991-1993 and the 1995 admissions to the years 1994-1995. Reports
that had been submitted initially and later rescinded were not included.
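To make the denominator construction concrete, the following is a minimal sketch in Python; the function names, argument names, and example figures are illustrative assumptions, not values or code taken from the study's analysis files.

```python
def five_year_admissions(adm_1992: float, adm_1995: float) -> float:
    """Approximate 1991-1995 admissions under the assumption of relative
    constancy: the 1992 figure stands in for 1991-1993 and the 1995 figure
    for 1994-1995."""
    return 3 * adm_1992 + 2 * adm_1995


def reporting_rate(nonrescinded_reports: int, adm_1992: float, adm_1995: float) -> float:
    """Privileges action reports per 100,000 admissions, 1991-1995."""
    return 100_000 * nonrescinded_reports / five_year_admissions(adm_1992, adm_1995)


# Hypothetical hospital: 8000 admissions in 1992, 9000 in 1995, 1 report.
print(round(reporting_rate(1, 8000, 9000), 2))  # ~2.38 per 100,000 admissions
```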
We used individual hospital-based variables to calculate both the percentage
of hospitals reporting any privileges action and the rate of privileges actions
per 100,000 admissions over the study period for groups of hospitals. This
aggregate privileges action rate sums the number of actions for all hospitals
with a particular characteristic (eg, rural hospitals) over the 5 years, divides
by the total number of admissions for those same hospitals, and expresses
the rate per 100,000 admissions.
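Restated as a formula (notation ours: $H_c$ is the set of hospitals with characteristic $c$, $a_h$ the number of non-rescinded actions reported by hospital $h$, and $n_h$ its estimated 1991-1995 admissions):

$$ R_c = 100{,}000 \times \frac{\sum_{h \in H_c} a_h}{\sum_{h \in H_c} n_h} $$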
Hospital Characteristics of Interest
We were interested in the effect of a number of hospital characteristics
on hospital privileges action reporting: (1) governance (for-profit; nongovernment,
not-for-profit; government, nonfederal [from here on referred to as state
and local]), (2) accreditation by the Joint Commission on Accreditation of
Healthcare Organizations (JCAHO), (3) membership in the Council of Teaching
Hospitals of the Association of American Medical Colleges (AAMC), (4) urban/rural
location (urban defined as metropolitan statistical area counties, rural defined
as all other counties), (5) number of beds (<100, 100-299, ≥300), (6)
osteopathic vs nonosteopathic, (7) whether the hospital was contract managed,
(8) whether a nursing home was part of the hospital, (9) percentage of board-certified
physicians on staff, and (10) region of the country, defined by the standard
Census Bureau divisions (Table 1).
We examined variation in privileges action reporting rates for individual
hospitals, the change in reporting for all study hospitals over the study
period, and differences in reporting between groups of hospitals, stratified
by number of beds. χ2 Tests were used to assess the statistical
significance of differences between dichotomous outcome variables. A jackknife
procedure was used to estimate the confidence intervals around the aggregate
privileges action rates and to determine the statistical significance of differences
in rates between hospital groups.
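The exact jackknife construction is not specified in the text; the sketch below shows one standard leave-one-hospital-out version in Python, applied to the aggregate rate defined above, with purely illustrative toy data.

```python
import numpy as np


def aggregate_rate(actions: np.ndarray, admissions: np.ndarray) -> float:
    """Privileges actions per 100,000 admissions, pooled over hospitals."""
    return 100_000 * actions.sum() / admissions.sum()


def jackknife_ci(actions: np.ndarray, admissions: np.ndarray, z: float = 1.96):
    """Leave-one-hospital-out jackknife standard error of the aggregate
    rate, with an approximate 95% confidence interval."""
    n = len(actions)
    theta_hat = aggregate_rate(actions, admissions)
    # Rate recomputed with each hospital deleted in turn.
    loo = np.array([
        aggregate_rate(np.delete(actions, i), np.delete(admissions, i))
        for i in range(n)
    ])
    se = np.sqrt((n - 1) / n * np.sum((loo - loo.mean()) ** 2))
    return theta_hat, (theta_hat - z * se, theta_hat + z * se)


# Toy data: reported actions and 5-year admissions for 5 hypothetical hospitals.
acts = np.array([0, 1, 0, 2, 1])
adms = np.array([20_000, 35_000, 15_000, 60_000, 40_000])
rate, (lower, upper) = jackknife_ci(acts, adms)
```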
We used multiple logistic regression for the dichotomous outcome of whether
a hospital had reported any privileges actions, and multiple linear regression
for the natural log of the number of privileges actions reported, to identify
the independent effect of each hospital characteristic on reporting while
controlling for the other hospital characteristics and factors, including
number of admissions.
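As an illustration of this modeling step only: the sketch below assumes a flat analysis file with one row per hospital; the file name, variable names, and the handling of hospitals with zero reported actions are assumptions for illustration and do not reproduce the study's exact covariate coding.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical analysis file: one row per hospital.
df = pd.read_csv("hospitals.csv")

covariates = ("urban + jcaho + coth_member + C(ownership) + C(bed_size)"
              " + C(region) + admissions")

# Logistic regression: did the hospital report any privileges action?
logit_fit = smf.logit(f"reported_any ~ {covariates}", data=df).fit()

# Linear regression on the natural log of the number of actions reported.
# (How zero-action hospitals were handled is not stated in the paper;
# clipping at 1 is one illustrative choice, not the authors' method.)
df["log_actions"] = np.log(df["n_actions"].clip(lower=1))
ols_fit = smf.ols(f"log_actions ~ {covariates}", data=df).fit()

print(logit_fit.summary())
print(ols_fit.summary())
```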
Of the 6903 hospitals listed as open for 1 or more years between 1991
and 1995 by the AHA, 6754 (97.8%) were linked with hospitals registered with
the NPDB. Of these 6754 hospitals, 6009 (89.0%) were open for all 5 study
years. Of these, 4743 were short-term, nonfederal, general medical/surgical
facilities and were included in our analyses.
The majority of
the 4743 study hospitals were nongovernment, not-for-profit, and accredited
by JCAHO (Table 1). They were
about equally distributed between urban and rural areas. Nearly 30% had nursing
homes attached.
The 4743 study hospitals reported 3328 clinical
privileges actions over the 5-year study period; about a third of the hospitals
(34.2%) reported at least 1 action over the study period. The range of the
privileges action rates for individual hospitals that had taken actions was
between 0.40 and 52.27 per 100,000 admissions. The overall privileges action
rate for the study hospitals in aggregate was 2.36 per 100,000 admissions.
There was a decrease both in the percentage of hospitals reporting
at least 1 privileges action and in the rate of privileges actions over the
study period (Table 2). These
findings were generally consistent across hospitals with differing characteristics.
While the proportion of hospitals taking privileges actions decreased over
time, the proportion of actions that were permanent or indefinite increased
significantly, from 79.6% in 1991 to 85.1% in 1995 (P=.01).
Urban hospitals and hospitals
accredited by JCAHO were more likely to have reported 1 or more privileges
actions and had higher rates of reported actions per 100,000 admissions than
their counterparts for nearly all bed size categories (Table 3). State and local hospitals were the least likely of the 3
ownership types to have reported actions and had the lowest reporting rates
in almost all bed size categories. The majority of hospitals that
were members of the Council of Teaching Hospitals of the AAMC had 300 or more
beds. Within this bed size category, hospital members of the Council of Teaching
Hospitals of the AAMC had lower rates of reported privileges actions and were
less likely to have reported an action than nonmember hospitals.
There were significant regional
differences in privileges action reporting, with some of the lowest reporting
by hospitals in the East South Central region (ie, Alabama, Kentucky, Mississippi,
and Tennessee).
Regression analysis confirmed several of these
findings (Table 4). After controlling
for other factors, urban hospitals and JCAHO-accredited hospitals were significantly
more likely to report actions than their counterparts. Hospitals that were
members of the AAMC Council of Teaching Hospitals were significantly less
likely to report actions than nonmembers. These findings were essentially
the same when we used the total number of privileges actions (natural log)
as the dependent variable, although the finding for JCAHO-accredited hospitals
was no longer statistically significant. In this regression, hospitals with
300 or more beds reported significantly more privileges actions than hospitals
with fewer beds.
This study found evidence of a low and declining level of clinical privileges
action reporting by hospitals to the NPDB. The total number of clinical privileges
actions reported was small, and decreased over the study period. The variation
between individual hospitals in reporting was great. More than 65% of all
study hospitals, including more than 250 (40.7%) of the large hospitals with
300 or more beds, reported no privileges actions during the 5 years of the study.
There are a number of potential explanations for this low level of clinical
privileges action reporting to the NPDB.
Low Level of Quality-of-Care Problems
Data from other studies suggest that a low level of quality-of-care problems
is an unlikely explanation for the low level of reporting. The Harvard
Medical Practice Study estimated that 1% of New York's hospitalizations involved
adverse events due to negligence (an estimated 27,179 events in New York State
during 1984).6 This figure was similar to the
0.8% negligence rate found in the California Medical Association's Medical
Insurance Feasibility Study.7 While many negligent
events that occur in hospitals may not prompt clinical privileges action,
figures such as these suggest that the average of 47 privileges actions reported
by New York hospitals in each of our study years is low.
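As a rough order-of-magnitude check (our comparison, which assumes for illustration that the 1984 New York estimate is broadly representative of the study years):

$$ \frac{47\ \text{reported actions per year}}{27{,}179\ \text{estimated negligent adverse events per year}} \approx 0.0017, $$

that is, fewer than 2 privileges actions per 1000 estimated negligent events.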
Underreporting of Clinical Privileges Actions Taken
Underreporting of clinical privileges actions taken is unlikely to fully
account for the low level of reporting, as it would suggest that hospitals
were failing to comply with legislation requiring clinical privileges action
reporting. Failure to comply with this legislation puts hospitals at risk
of losing the immunity protection for their peer review processes provided
by the Health Care Quality Improvement Act of 1986. On the other hand, there
may be room for interpretation of the reporting requirements to the NPDB,
leading to variation between hospitals in the types of actions reported. In
addition, a related analysis of the association between the strength of state-imposed
penalties for not reporting privileges actions and the level of reporting
to the NPDB found that the 3 states with strong penalties (>$5000) had significantly
higher reporting.8 The 1995 Office
of Inspector General report also found that at least 1 state licensing board
charged with sending privileges reports to the NPDB had made administrative
errors leading to underreporting of privileges actions.
Preferential Imposition of Penalties That Did Not Require Reporting
Two studies provide evidence that preferential imposition of penalties
that did not require reporting is a plausible explanation. A 1994 survey of
all short-stay general, rural hospitals in Washington, Alaska, Montana, and
Idaho found that 20% had increased the use of peer review decisions that did
not require reporting to the NPDB (eg, monitoring professional activities
or requiring continuing medical education without restricting privileges,
and imposing privileges actions of <31 days) in the prior 2 years.3 A second 1994 survey of 807 hospitals and 76 health
maintenance organizations found that in the prior 2 years 9.4% of hospitals
and 13.1% of health maintenance organizations reported that practitioners
had offered concessions to avoid having a reportable action taken against
them.4 These studies suggest that the NPDB
has had a paradoxical effect on hospital-based peer review activities.
Shift From Individual Peer Review to Continuous Quality Improvement
Activities
Over the study period, hospital quality improvement programs were shifting
emphasis from individual provider review to continuous quality improvement
(also known as total quality management). Continuous quality improvement emphasizes
quality improvement through examination and revision of systems and processes
within a facility rather than targeting individuals and their practices.9,10 The implementation of continuous quality
improvement programs could contribute to a decline in privileges actions either
by decreasing the frequency of negligent actions or shifting resources away
from peer review activities that result in privileges actions.
Substitution of Licensing Board Actions for Hospital Privileges Actions
Hospitals are not required to report the loss of privileges for physicians
who lose them through a licensing action. There were nearly 3 times as many
licensing actions (11,680, not including reinstatements and revisions)
as privileges actions over the study period. In addition, licensing actions
increased between 1991 and 1995, while privileges actions decreased, raising
the possibility that licensure actions were substituting for privileges actions.
Despite these suggestive trends, we believe this is an unlikely explanation
for the low level of hospital privileges action reporting.
First, the study's finding that the proportion of permanent or indefinite
hospital privileges actions increased over the study period suggests that
it was physicians with less severe problems whose privileges actions declined
over the study period. This group was unlikely to have had licensing actions
substituted for their privileges actions. Second, licensing actions are much
more severe penalties than loss of hospital privileges. In Washington State,
licensing actions take an average of 9 to 12 months from complaint to action
(Maryella Jansen, Washington State Medical Quality Assurance Commission, oral
communication, February 26, 1999). It is unlikely that hospitals would maintain
a physician's privileges for this length of time if he/she had performance
problems severe enough to warrant licensing action. If this is the case, it
supports the conclusion that hospitals may be underascertaining physicians
with performance problems or using penalties that do not require reporting.
Low Level of Detection of Physicians With Performance Problems
The explanation that physicians with performance problems are detected at
a low level is supported by the data presented above from the Harvard Medical
Practice Study6 and by the higher level of licensing
actions, which together suggest that many more quality-of-care problems exist
than are reflected in clinical privileges action reporting. In addition, the literature is replete
with articles discussing the many barriers to effective peer review, such
as lack of specific qualifications and training of physicians performing peer
review, poor agreement between physicians regarding quality of care, personal
or professional ties that can bias physician judgments about quality of care,
and fear of litigation.11-16
The evidence from this study cannot be used to definitively identify
the causes for the low and declining level of clinical privileges action reporting.
However, supporting evidence from other sources, together with the high degree
of dissatisfaction with the concept and operation of the NPDB reported in
the early 1990s, suggests that underascertainment of physicians with performance
problems and the use of penalties that do not require reporting were the most
significant contributors to these findings.17-23
This study also found that some hospital characteristics were associated
with clinical privileges action reporting levels. Hospitals accredited by
JCAHO, which must uphold certain peer review standards to maintain their accreditation,
were more likely to report actions than nonaccredited hospitals. This supports
the idea that hospitals with some surveillance of their peer review processes
may be more likely to take clinical privileges actions against physicians
with performance problems. The finding of lower privileges action reporting
for member hospitals of the AAMC Council of Teaching Hospitals, on the other
hand, could be due to less effective peer review activity or the employment
of physicians with fewer performance problems in these teaching hospitals.
The latter explanation is supported by 3 recent studies24-26
suggesting that teaching hospitals may provide a higher quality of care than
nonteaching hospitals. Rural hospitals, which had lower action reporting than
urban hospitals, may be most likely to experience some of the barriers to
effective peer review (eg, difficulty objectively evaluating a peer with whom
the reviewer has close personal or professional ties or an economic relationship).12,27-30
In addition, rural hospitals may allocate fewer resources than urban facilities
to peer review. Alternately, rural hospitals may have provided higher-quality
care. Previous work in this area has been mixed, with some reporting higher,31,32 others lower,24
and still others equivalent quality of care.25
This study is limited by the fact that we could report clinical privileges
action rates using only admissions as the denominator, which results in rates
that are difficult to interpret. Admissions were used as a surrogate for physician
exposure. Is a rate of 5 privileges actions per 100,000 admissions high or
low? Is the difference between 5 and 3 privileges actions per 100,000 admissions
meaningful? Using physicians as the denominator would have created more easily
interpretable results, but was impossible due to the inconsistent nature of
the information on active medical staff size reported to the AHA.
Ensuring the highest quality health care system is of paramount importance
to the health care profession and the public. Support for rigorous surveillance
of the quality of health care providers led Congress to authorize the establishment
of the NPDB. This unique resource is one of only a few truly national data
sets that can provide information on the quality of care rendered by our country's
physicians. Organizations and institutions use it routinely in their day-to-day
decisions about credentialing and licensing of individual physicians. This
study's finding of a low level of clinical privileges action reporting suggests
that the information reported to the NPDB may be incomplete. This is not unexpected,
given the barriers to effective peer review that have been reported, including
fear of liability and preexisting personal and professional ties between peer
reviewers and their colleagues under review. In addition, the decline in privileges
action reporting over the study period raises the possibility that the NPDB
itself may be serving as a disincentive to effective hospital peer review
practices. Given the critical importance of quality of care, and our national
interest in fostering the movement of high-quality physicians to places of
need, it is axiomatic that some sort of national reporting system should capture
and make available data about problematic practitioners. To this end, it
is important to develop new strategies for ensuring effective hospital peer
review, as well as to find ways to minimize the disincentive that the NPDB
may have on peer review activities.
1. Mullan F, Politzer RM, Lewis CT, Bastacky S, Rodak J, Harmon RG. The National Practitioner Data Bank: report from the first year. JAMA. 1992;268:73-79.
2. US Department of Health and Human Services. National Practitioner Data Bank Guidebook. Rockville, Md: Health Resources and Services Administration, US Dept of Health and Human Services; 1996. HRSA Publication 95-255.
3. Neighbor WE, Baldwin LM, West PA, Hart LG. Rural hospitals' experience with the National Practitioner Data Bank. Am J Public Health. 1997;87:663-666.
4. US Department of Health and Human Services. National Practitioner Data Bank: User Satisfaction With Reporting and Querying and Usefulness of Disclosure Information for Decision Making 1992-1994. Rockville, Md: Health Resources and Services Administration, US Dept of Health and Human Services; 1995.
5. Office of Inspector General. Hospital Reporting to the National Practitioner Data Bank. Rockville, Md: Office of Inspector General, US Dept of Health and Human Services; 1995. Publication OEI-01-94-00050.
6. Brennan TA, Leape LL, Laird NM, et al. Incidence of adverse events and negligence in hospitalized patients: results of the Harvard Medical Practice Study I. N Engl J Med. 1991;324:370-376.
7. California Medical Association. Report on the Medical Insurance Feasibility Study. San Francisco, Calif: California Medical Association; 1977.
8. Scheutzow SO. State medical peer review: high cost but no benefit—is it time for a change? Am J Law Med. 1999;25:7-60.
9. Wakefield DS, Helms CM. The role of peer review in a health care organization driven by TQM/CQI. Jt Comm J Qual Improv. 1995;21:227-231.
10. Berwick DM. Continuous improvement as an ideal in health care. N Engl J Med. 1989;320:53-56.
11. Hershey N, Bontempo LC. Assessing peer review in the quest for improved medical services: part 2. Qual Assur Util Rev. 1990;5:7-11.
12. Hershey N. Assessing peer review in the quest for improved medical services and the implications for education in quality assessment: part 4. Qual Assur Util Rev. 1990;5:130-137.
13. Goldman RL. The reliability of peer assessments of quality of care. JAMA. 1992;267:958-960.
14. Rubin HR, Rogers WH, Kahn KL, Rubenstein LV, Brook RH. Watching the doctor-watchers: how well do peer review organization methods detect hospital care quality problems? JAMA. 1992;267:2349-2354.
15. Hayward RA, McMahon LF, Bernard AM. Evaluating the care of general medicine inpatients: how good is implicit review? Ann Intern Med. 1993;118:550-556.
16. Epstein BH, Kaufman A. Hospital peer review: a new proposal. JAMA. 1994;271:1485.
17. Faria MA. The Data Bank: why it should be abolished. J Med Assoc Ga. 1991;80:477-478.
18. Gale AH. When bad things happen to good doctors. Mo Med. 1992;89:720-726.
19. Larson K. The National Practitioner Data Bank: from the physician's perspective. Minn Med. 1990;73:35-37.
20. Johnson ID. Reports to the National Practitioner Data Bank. JAMA. 1991;265:407-408, 410-411.
21. Coleman WO. AMA House calls for dismantling of national physician data bank. J Okla State Med Assoc. 1992;85:35.
22. Ryzen E. The National Practitioner Data Bank: problems and proposed reforms. J Leg Med. 1992;13:409-462.
23. Snelson EA. Physicians under surveillance: the National Practitioner Data Bank. Minn Med. 1993;76:31-33.
24. Keeler EB, Rubenstein LV, Kahn KL, et al. Hospital characteristics and quality of care. JAMA. 1992;268:1709-1714.
25. Brennan TA, Hebert LE, Laird NM, et al. Hospital characteristics associated with adverse events and substandard care. JAMA. 1991;265:3265-3269.
26. Hartz AJ, Krakauer H, Kuhn EM, et al. Hospital characteristics and mortality rates. N Engl J Med. 1989;321:1720-1725.
27. Roberts CC. Quality assurance and risk management in small and rural hospitals: the roles of trustees, administration, and medical staff. Q Rev Biol. 1987;13:205-208.
28. Wingert TD, Christianson JB, Moscovice IS. Quality assurance issues raised by proposed limited-service rural hospitals. Qual Assur Util Rev. 1991;6:38-46.
29. Hershey N. Compensation and accountability: the way to improve peer review. Qual Assur Util Rev. 1992;7:23-29.
30. Hershey N. Assessing peer review in the quest for improved medical services: part 3. Qual Assur Util Rev. 1990;5:63-68.
31. Welch HG, Larson EH, Hart LG, Rosenblatt RA. Readmission after surgery in Washington State rural hospitals. Am J Public Health. 1992;82:407-411.
32. Larson EH, Hart LG, Rosenblatt RA. Is non-metropolitan residence a risk factor for poor birth outcome in the US? Soc Sci Med. 1997;45:171-188.