[Figure 1.—Overall site scores for all 1994 proficiency testing challenges by testing group.]
[Table 1.—Percentage of Unsatisfactory Test Event Scores for Hospitals and Independent Laboratories and All Other Testing Sites by Analyte, Test, or Test Specialty From 1994]
[Table 2.—Odds Ratios of Unsatisfactory Proficiency Testing Event Performance for All Other Testing Sites Compared With Hospital and Independent Laboratories From 1994]
Toward Optimal Laboratory Use
February 11, 1998

Variation in Proficiency Testing Performance by Testing Site

Author Affiliations

From the Centers for Disease Control and Prevention, Public Health Practice Program Office, Division of Laboratory Systems, Atlanta, Ga.

 

Toward Optimal Laboratory Use section editor: George D. Lundberg, MD, Editor, JAMA.

JAMA. 1998;279(6):463-467. doi:10.1001/jama.279.6.463
Context.— Congress enacted the Clinical Laboratory Improvement Amendments of 1988 (CLIA) to promote uniform quality and standards among all testing sites in the United States. The performance indicators specified in the legislation are proficiency testing (PT) performance and periodic inspections.

Objective.— To evaluate variation in PT performance by type of testing facility during the first year of compulsory participation under CLIA.

Design.— All 1994 PT score data electronically reported to the Health Care Financing Administration as a component of compliance with the CLIA regulations were obtained. Over 1.2 million PT event scores from 17,058 unique testing sites were sorted into 2 groups based on the type of testing facility: hospitals and independent laboratories (HI) and all other testing sites (AOT).

Main Outcome Measures.— Satisfactory and unsatisfactory performance rates for HI and AOT for each analyte and/or test, according to the criteria specified by the CLIA regulations.

Results.— The aggregate rates of satisfactory event performance for all regulated analytes, tests, and specialties were 97% and 91% for the HI and AOT groups, respectively. The aggregate odds ratio for unsatisfactory PT event performance for the AOT group compared with the HI group was 2.89, with a range of 2.19 to 7.51 for the individual analytes.

Conclusion.— There was a consistent difference in PT performance during the first full year of compulsory PT under the CLIA regulations based on the type of testing facility performing the analysis. Traditional testing sites achieved higher rates of satisfactory performance than newly regulated, alternative testing sites.

PROFICIENCY TESTING (PT) is an external quality control tool in which simulated patient samples are tested by participating laboratories and individual laboratory performance is assessed by comparison with the collective performance of all participants. Effective January 1, 1994, the regulations implementing the Clinical Laboratory Improvement Amendments of 1988 (CLIA) have required PT for a prescribed group of analytes, tests, and testing specialties for all testing sites performing moderate- or high-complexity testing. This study examines the variability in PT performance between traditional and alternative testing sites for the first full year of compulsory PT. Satisfactory and unsatisfactory challenge performance is evaluated by analyte, test, and testing specialty. The odds ratios for PT event failures for alternative testing sites compared with traditional laboratories are also reported.

The CLIA legislation established federal regulation of all sites performing testing on "materials derived from the human body for the purpose of providing information for the diagnosis, prevention, or treatment of any disease . . . [of] human beings."1 Although some state regulation of laboratories existed before CLIA was enacted, only traditional testing sites such as hospitals and independent laboratories, which were engaged in interstate commerce or receiving Medicare payments, were subject to federal regulation. Therefore, the CLIA legislation imposed uniform, site-neutral laboratory practice standards for the first time on the other 90% of sites performing clinical laboratory testing. Inherent in the site-neutrality model on which the CLIA regulations are based is this notion: the quality of laboratory test results, as measured by accuracy and reliability, should be equivalent irrespective of the testing site. The CLIA model specifies minimum standards for quality control, quality assurance, and personnel requirements, which were designed to ensure site-neutral test comparability. Proficiency testing performance and inspection of testing sites are the 2 quality indicators specified in the CLIA model.

The Health Care Financing Administration (HCFA) indicates there are 157,002 sites currently performing clinical laboratory testing in the United States (HCFA Online Survey, Certification and Reporting database, unpublished data, January 1997). Only 10% of these testing sites are in traditional settings, ie, hospitals and independent laboratories. The remaining 90% of testing sites in the United States include a broad array of 21 different types of facilities and locations at which clinical laboratory testing is conducted in conjunction with other services. Physicians' office laboratories account for 64% of these alternative testing sites. Ancillary health care providers, such as nursing homes, home health agencies, blood banks, mobile health units, ambulatory surgery centers, hospice providers, community clinics, rehabilitation facilities, end-stage renal disease dialysis facilities, and residential care facilities for the mentally retarded, collectively account for an additional 22% of alternative testing sites.

Under CLIA regulations, only testing sites performing moderate- or high-complexity testing are required to participate in PT. The 55,601 such testing sites that are not in a CLIA-exempt state are required to have their performance monitored by either HCFA or a US Department of Health and Human Services (DHHS)–approved accrediting organization. During 1994, 40,711 testing sites were being monitored by HCFA. This study examines the 17,058 testing sites for which electronic PT score data were submitted to HCFA by approved PT providers as part of the testing sites' compliance with CLIA regulations. The study was conducted to determine the comparability of testing performed in traditional and alternative testing sites by examining the PT performance of the 2 groups during the first year of compulsory PT enrollment.

METHODS

The modern PT process consists of 3 yearly testing events for each analyte, test, or testing specialty for which the participant laboratories have enrolled. Each testing event involves the shipment of 5 samples from the PT provider to each participating testing facility, which subsequently tests the samples and reports the results back to the PT provider. The PT provider analyzes all reported results and grades sample results reported by the testing facilities. For regulatory purposes, an overall test event score is reported for each analyte, test, or testing specialty based on a facility's performance with each of the 5 samples; a score of 100% equates to satisfactory performance on 5 of 5 samples, and 80% represents satisfactory performance on 4 of 5 samples.
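
To make the scoring rule concrete, the sketch below grades a single testing event in Python; the function names and the pass/fail data layout are illustrative conveniences, not terminology from the CLIA regulations.

```python
# A minimal sketch of the event-scoring rule described above.
# Names and data layout are illustrative, not taken from the regulation.

def event_score(sample_results: list) -> int:
    """Overall test event score (percent) for one analyte or test.

    Each of the 5 samples shipped per event is graded pass (True)
    or fail (False); the score is the percentage graded satisfactory,
    so 5 of 5 -> 100 and 4 of 5 -> 80.
    """
    assert len(sample_results) == 5, "each testing event comprises 5 samples"
    return 100 * sum(sample_results) // len(sample_results)

def is_unsatisfactory(score: int) -> bool:
    """Under CLIA, an event score below 80% is unsatisfactory."""
    return score < 80

print(event_score([True, True, True, True, False]))  # 80: still satisfactory
print(is_unsatisfactory(60))                         # True: 3 of 5 fails the event
```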

Proficiency testing event scores for the 3 testing events of 1994 were obtained from HCFA for all testing sites participating in the following DHHS-approved PT programs: the American Association of Bioanalysts, College of American Pathologists, External Comparative Evaluation for Laboratories, American Academy of Pediatrics, American Academy of Family Physicians, American Osteopathic Association, and Medical Laboratory Evaluation programs. Over 1.2 million individual test event scores were sorted into 2 groups: (1) hospitals and independent laboratories (HI) and (2) all other testing sites (AOT). Scores were grouped by merging the PT data with the HCFA Online Survey, Certification and Reporting database, a comprehensive database used by HCFA to administer the CLIA program. Satisfactory and unsatisfactory performance rates were determined for the 2 laboratory groups for each of the analytes, tests, or testing specialties according to the criteria specified in the CLIA regulations, under which failure to attain an overall testing event score of at least 80% constitutes unsatisfactory performance.2 χ² test statistics and logit odds ratios were calculated for each analyte, test, or specialty using SAS statistical software.3 An aggregate odds ratio, including all reported testing events for all analytes, tests, and specialties, was also calculated for both laboratory groups.
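
The analysis itself was performed with SAS/STAT3; a rough Python rendering of the per-analyte calculation is sketched below. The event counts are invented for illustration (the study's raw counts are not reported in the text), and the confidence interval uses the standard large-sample log-odds approximation.

```python
# Sketch of the per-analyte 2x2 comparison: chi-square test plus odds
# ratio with a 95% CI. Counts are hypothetical, for illustration only.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: AOT, HI; columns: unsatisfactory, satisfactory event scores.
table = np.array([[900, 9100],    # hypothetical AOT event counts
                  [350, 9650]])   # hypothetical HI event counts

chi2, p, dof, _ = chi2_contingency(table)

# Odds ratio for unsatisfactory performance, AOT vs HI.
(a, b), (c, d) = table
odds_ratio = (a * d) / (b * c)
se_log_or = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
ci_low, ci_high = np.exp(np.log(odds_ratio) + np.array([-1.96, 1.96]) * se_log_or)

print(f"chi2 = {chi2:.1f}, P = {p:.2g}")
print(f"OR = {odds_ratio:.2f} (95% CI, {ci_low:.2f}-{ci_high:.2f})")
```

Aggregating every reported event for all analytes, tests, and specialties into one such table yields the overall comparison reported in the Results.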

RESULTS

A total of 17,058 unique testing sites were included in this study. These testing sites represent all the participants in the 7 PT programs included in the study that designated HCFA as a recipient of their PT results as part of their compliance with the CLIA program in 1994. Forty-three percent of the testing sites were in the HI group; the remaining 57% were in the AOT group. The percentage of unsatisfactory test event scores for the 30 most commonly offered, regulated analytes, tests, or specialties in AOT ranged from 1.3% to 5.6% for the HI group and from 3.6% to 15.0% for the AOT group. The aggregate rate of satisfactory test event performance for all regulated analytes, tests, and specialties was 97% for the HI group and 91% for the AOT group. A complete listing of the percentage of unsatisfactory test event scores for the 30 most commonly offered, regulated analytes, tests, or specialties in AOT is shown by testing site group in Table 1. In each case, the percentage of unsatisfactory test event scores is higher for the AOT group than for the HI group. When the testing site is the unit of analysis instead of the analyte or test, 73% of HI and 68% of AOT passed all challenges undertaken in 1994. The overall site scores for all 1994 PT challenges are shown by testing group in Figure 1.

Odds ratios indicating the estimated relative risk for an unsuccessful PT challenge among the AOT group compared with the HI group were calculated for the 30 most commonly offered, regulated analytes, tests, and specialties among AOT. The odds ratios for these 30 procedures ranged from a high of 7.51 for potassium to a low of 2.19 for bacteriology; all 30 odds ratios were statistically significant at the 95% confidence level. The aggregate odds ratio for unsatisfactory PT performance for all regulated analytes, tests, and specialties for the AOT group compared with the traditional laboratories of the HI group was 2.89; that is, we estimated that an observed unsatisfactory event score was 2.89 times more likely to have been received from an AOT than from an HI. A complete list of the logit odds ratios for unsatisfactory PT performance for the 30 most commonly offered, regulated analytes, tests, and specialties among AOT compared with HI is shown in Table 2.

COMMENT

The results of this analysis indicate disparate PT performance between traditional laboratories and alternative testing sites. Having established a difference in the PT performance of these 2 groups, we must consider the implications of this finding. However, to adequately evaluate the findings of this report, it is essential to establish the context in which the observations were made. Therefore, a brief review of the history, attributes, limitations, and functions of PT is presented.

Organized PT in the United States originated in 1946 with a small regional program in Philadelphia, Pa.4 Soon after, the College of American Pathologists conducted the first 2 nationwide PT surveys in 1947 and 1948, and the national Sunderman Proficiency Testing Service was established in 1949. These early surveys demonstrated a disturbing degree of interlaboratory variation in test results among the participant laboratories.5 The professional laboratory community responded quickly by voluntarily espousing PT as an educational tool and an external quality control tool to help identify testing problems in the clinical laboratory.

The participation in, and growth of, PT programs paralleled the growth of clinical laboratory testing in general. This growth was reinforced by the collective experience of laboratory professionals and PT providers, who found that PT served the important function of bringing to light otherwise unnoticed or undetected problems with laboratory testing, as evidenced by a compelling finding: PT performance improved over time.6,7 This finding appeared to indicate improved interlaboratory quality, and presumably intralaboratory quality, due at least in part to participation in PT programs.

Proficiency testing assumed a new role as a federal regulatory tool with the introduction of CLIA in 1967.8 The impact of this new role was mitigated by 2 factors: the law applied only to laboratories engaged in interstate commerce, and many of these laboratories were already voluntarily participating in PT. Enactment of the CLIA legislation of 1988, however, broadened the role of PT further by creating a uniform set of standards for all sites performing clinical laboratory testing.

The 1988 CLIA legislation specifies that the standards applied to testing sites will "assure consistent performance by laboratories . . . of valid and reliable laboratory examinations. . . " and further stipulates the standards will take into consideration issues related to the complexity of the testing procedures performed by testing sites.1 Therefore, CLIA standards are based on the parallel principles of site neutrality and the complexity model of testing. The CLIA regulatory model stipulated in the legislation specifies the comparability of laboratory performance, which is inherent in the concept of site neutrality, is to be monitored by PT performance and periodic on-site inspections of testing facilities.1

The scope of PT is limited to the analytic portion of the total testing process. Proficiency testing cannot and does not directly address some important pre- and postanalytic steps such as specimen collection, processing and storage, and customary reporting procedures, which contribute to the overall accuracy and reliability of test results reported by a testing site. The relative importance of the preanalytic and postanalytic processes in the overall accuracy and reliability of final test results reported by a testing site should not be underestimated. In a review of incidents captured by one hospital laboratory's quality assurance program, Ross and Boone9 reported more than 90% of the detected problems occurred in either the pre- or postanalytic phase of the total testing process. A study of transfusion medicine monitoring practices by Boone et al10 demonstrated a similar pattern, with 96% of the reported defects occurring in either the pre- or postanalytic phase of testing. In a recent investigation of laboratory problems detected in office-based, primary care practices, Nutting et al11 reported a comparable pattern, with 93% of detected problems attributed to either pre- or postanalytic processes.

Also to be considered when evaluating PT performance is the testing event itself. A PT event is a nonrandom sample of the work performed at a given testing site and is therefore subject to all the limitations and biases inherent in such a process. The issue of how representative PT performance is of routine laboratory performance has been addressed by several investigators. In a survey conducted by Cembrowski and Vanderlinde,12 most respondents reported using extraordinary testing protocols, ie, protocols above and beyond those used for routine patient specimens, when testing PT samples. Studies conducted in 1977 and 1982 by the Centers for Disease Control and Prevention found participant laboratories' performance on mailed PT samples was superior to that observed when blind PT samples were tested by the same laboratories.13,14 These findings suggest PT performance represents the best analytic work a laboratory is capable of producing. A study by Jenny and Jackson,15 however, found PT performance to be a reliable predictor of routine patient testing for theophylline. Another recent study by the Model Performance Evaluation Program at the Centers for Disease Control and Prevention16 found comparable error rates for blind and open samples tested for human immunodeficiency virus antibodies. One plausible explanation for these seemingly contradictory findings is the impact of the overall accuracy of testing for a given analyte on the predictive value of PT, ie, the impact of the "prevalence" of inaccurate tests on the sensitivity and specificity of PT as an instrument to detect the inaccuracies.
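
That explanation can be made concrete by treating PT as a screening test for inaccurate routine testing. In the sketch below, the sensitivity, specificity, and prevalence values are invented solely to show the arithmetic: when inaccurate testing is rare, even a reasonably sensitive and specific PT event yields failures that are mostly false alarms.

```python
# Illustrative only: PT viewed as a screening test for inaccurate
# routine testing, with made-up operating characteristics.

def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Positive predictive value of a PT failure via Bayes' rule."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Same detector, different "prevalence" of inaccurate testing.
for prev in (0.20, 0.05, 0.01):
    print(f"prevalence {prev:.0%}: PPV of a PT failure = {ppv(0.8, 0.9, prev):.0%}")
# prevalence 20%: 67%; prevalence 5%: 30%; prevalence 1%: 7%
```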

Several additional variables unique to the PT process, and often unrelated to the testing site's routine performance level, can contribute to unsatisfactory PT performance. Examples include, but are not limited to, the nuances of PT report forms, use of differing units of measure, and matrix effects wherein the PT sample medium has an effect on analyte measurement.17-20 Several investigators have noted the relative contributions of these variables to the outcome of a PT event.21-24

Although there are limits to the sensitivity and specificity of PT in detecting substandard clinical laboratory performance, PT is clearly an important tool in achieving this end. However, the relationship of participation in a PT program to improved laboratory performance relies entirely on the response initiated in the testing site when a potential problem is identified by unsatisfactory PT performance. Throughout the 50-year history of PT in the United States, the relevant literature has been peppered with reports demonstrating improved PT performance over time.25-32 These studies are testimony to the successful commitment of the laboratory community to implement PT as a tool to improve the overall quality of clinical laboratory testing. Some have argued that improved PT performance does not reflect improved clinical laboratory performance, but rather the process of laboratories becoming acclimated to the PT process and its intrinsic subtleties. Several empirical studies, however, indicate this is not the case. In 1987, Hoeltge and Duckworth23 reviewed the PT performance of laboratories accredited by the College of American Pathologists over a 2-year period. Analytes were targeted by the investigators for review by the respective laboratory directors on the basis of repeated unacceptable results in the College of American Pathologists Survey Program. The study reported 50% of the investigations revealed a problem in instrumentation, methods, or other technical aspects of testing; only 4.4% of the target cases were attributed to the survey format or materials. The investigations spurred by unacceptable PT performance in this study led to improved performance with 88% of the targeted analytes on subsequent PT evaluations.23 In 1994, Ehrmeyer et al22 of the Wisconsin State Laboratory of Hygiene reported similar findings: unsatisfactory PT performance brought attention to correctable problems in the laboratory testing process, and PT performance improved after a prior unsatisfactory evaluation. A review of PT performance and experience with the College of American Pathologists' programs demonstrated "consistent and statistically significant improvement in performance for the first 3 to 4 years of proficiency testing."32 The report notes PT error rates continue to decline over time, suggesting the improved PT performance represents true improvement in laboratory performance and not acclimation to the PT process. Additional studies support the notion of PT participation contributing to an overall improvement in laboratory performance by demonstrating a positive association between participation in PT and decreasing interlaboratory SDs and coefficients of variation with PT samples.6,29,33

Evaluation of the findings of our study suggests there is room for improvement, particularly among the alternative testing sites. The unsatisfactory test event performance rates for the 3 most commonly offered and regulated tests and specialties among the AOT are particularly notable: glucose, 15.0%; hemoglobin, 9.1%; and bacteriology, 7.2%. The less than optimal PT performance of the alternative testing sites, most of which were not previously regulated, is not unexpected; other researchers have reported similar findings with newly regulated laboratories.22,34 Some of the observed difference in the performance of the alternative testing sites vs the hospital and independent laboratories is probably attributable to the previously discussed factors that are not directly related to the quality of daily performance. However, given the consistently superior performance of the traditional laboratories across all analytes, and the magnitude of the observed odds ratios, it is likely the disparity also represents a difference in the actual quality of daily laboratory performance between the 2 groups. It is important to consider why this performance disparity exists.

Like CLIA, prior state and federal clinical laboratory regulations have stressed the importance of personnel standards, quality control and quality assurance programs, and participation in PT. Therefore, the staffs of previously regulated laboratories generally include laboratory professionals who have had the benefit of specific training in each of these areas. Previously unregulated, alternative testing sites, however, may not have this advantage. HCFA indicates the staffs of alternative testing sites are less likely to include a laboratory professional than hospital and independent laboratories (HCFA Online Survey, Certification and Reporting database, unpublished data, January 1997). The absence of a laboratory professional in a testing site may leave the site in the undesirable position of having the best of intentions, but a lack of expertise to carry the intentions to fruition. Similarly, many alternative testing sites are directed by physicians who may not have been exposed to quality laboratory practice principles during their training. A recent study by Ferris et al35 showed clinical laboratory training was available in only 15% to 60% of primary care residency programs. Other economic, managerial, and technical factors, such as collective differences in the testing methods used by different types of testing facilities, may also contribute to the performance differential observed between the traditional laboratories and the alternative testing sites.

Like most investigations, this study presents new challenges. The common ground between the enactment of the CLIA legislation and the health care professionals affected by the CLIA standards is a fundamental commitment to deliver the best possible care to patients. Results of this study demonstrate the PT performance among the alternative testing sites is less satisfactory than that of traditional hospital and independent laboratories. Given the body of evidence in the peer-reviewed literature, it is prudent to theorize that PT performance does at least partially reflect daily clinical laboratory performance. If all testing sites are to provide comparably accurate and reliable test results, there is clearly work to be done. The laboratory and health care community at large must work together to assure all individuals involved in the performance of clinical laboratory testing have the requisite knowledge and experience to provide optimally accurate and reliable test results. A "positive" PT result, ie, one indicating unsatisfactory performance, provides information that may not have been known otherwise. Just as a positive laboratory test result, in the hands of an adequately trained and knowledgeable clinician, can be an opportunity for positive intervention in a patient's condition, unsatisfactory PT can be an opportunity for adequately trained and knowledgeable testing site staff to improve the accuracy and reliability of patient test results. Both individual testing sites and specific groups of testing sites can benefit from continued monitoring of PT performance to help assess the success or failure of intervention efforts to improve the overall quality of clinical laboratory testing.

References
1. Clinical Laboratory Improvement Amendments of 1988 (CLIA). Pub L No. 100-578, 42 USC 201 (1988).
2. 42 CFR §493.803-493.865 (1993).
3. SAS Institute Inc. SAS/STAT, Version 6: User's Guide. 4th ed. Cary, NC: SAS Institute Inc; 1989:851-879.
4. Belk WP, Sunderman FW. A survey of the accuracy of chemical analysis in clinical laboratories. Am J Clin Pathol. 1947;17:853-861. Reprinted in: Arch Pathol Lab Med. 1988;112:320-326.
5. Sunderman FW. The history of proficiency testing/quality control. Clin Chem. 1992;38:1205-1209.
6. Hanson DJ. Improvements in medical laboratory performance. Postgrad Med. 1969;46:51-56.
7. Lamotte LC. The impact of laboratory improvement programs on laboratory performance: the CLIA 67 experience. Health Lab Sci. 1977;14:213-223.
8. Clinical Laboratory Improvement Act of 1967 (CLIA). Pub L No. 90-174, 42 USC 216 (1967).
9. Ross JW, Boone DJ. Assessing the effect of mistakes in the total testing process on the quality of patient care. In: Proceedings of the 1989 Institute on Critical Issues in Health Laboratory Practice: Improving the Quality of Health Management Through Clinician and Laboratorian Teamwork. Wilmington, Del: EI DuPont de Nemours Co Inc; 1991:173.
10. Boone DJ, Steindel SJ, Herron R, et al. Transfusion medicine monitoring practices: a study of the College of American Pathologists/Centers for Disease Control and Prevention Outcomes Working Group. Arch Pathol Lab Med. 1995;119:999-1006.
11. Nutting PA, Main DS, Fischer PM, et al. Problems in laboratory testing in primary care. JAMA. 1996;275:635-639.
12. Cembrowski GS, Vanderlinde RE. Survey of special practices associated with College of American Pathologists proficiency testing in the Commonwealth of Pennsylvania. Arch Pathol Lab Med. 1988;112:374-376.
13. Boone DJ, Hansen HJ, Hearn TL, Lewis DS, Dudley D. Laboratory evaluation and assistance efforts: mailed, on-site, and blind proficiency testing surveys conducted by the Centers for Disease Control. Am J Public Health. 1982;72:1364-1368.
14. Lamotte LC, Guerrant GO, Lewis DS, Hall CT. Comparison of laboratory performance with blind and mail-distributed proficiency testing samples. Public Health Rep. 1977;92:554-560.
15. Jenny RW, Jackson KY. Proficiency test performance as a predictor of accuracy of routine patient testing for theophylline. Clin Chem. 1993;39:76-81.
16. Schalla WO, Blumer SO, Taylor RN, et al. HIV blind performance evaluation: a method for assessing HIV-antibody testing performance. In: Proceedings of the 1995 Institute: Frontiers in Laboratory Practice Research. Atlanta, Ga: Centers for Disease Control and Prevention; 1996: Abstract 24.
17. Naito HK, Kwak YS. Matrix effects on proficiency testing materials: impact on accuracy of cholesterol measurements in laboratories in the nation's largest hospital system. Arch Pathol Lab Med. 1993;117:345-351.
18. Ross JW, Myers GL, Gilmore BF, Cooper GR, Naito HR, Eckfeldt J. Matrix effects and the accuracy of cholesterol analysis. Arch Pathol Lab Med. 1993;117:393-400.
19. Posner A. Problems formulating method-insensitive proficiency testing materials. Arch Pathol Lab Med. 1993;117:422-424.
20. Kaufman HW, Gochman N. College of American Pathologists Conference XXIII on matrix effects and accuracy assessment in clinical chemistry: report of working group on method development. Arch Pathol Lab Med. 1993;117:427-428.
21. Steindel SJ, Howanitz PJ, Renner SW, Sarewitz SJ. Short-term Studies of the Laboratory's Role in Quality Care: Laboratory Proficiency Testing Data Analysis and Critique. Northfield, Ill: College of American Pathologists; 1995:1-11.
22. Ehrmeyer SS, Burmeister BJ, Laessig RH, Hassemer DJ. Laboratory performance in a state proficiency testing program: what can a laboratorian take home? J Clin Immunoassay. 1994;17:223-230.
23. Hoeltge GA, Duckworth JK. Review of proficiency testing performance of laboratories accredited by the College of American Pathologists. Arch Pathol Lab Med. 1987;111:1011-1014.
24. Lanphear BJ, Burmeister BJ, Ehrmeyer SS, Laessig RH, Hassemer DJ. Review of actual proficiency testing performance under CLIA '67 (March 14, 1990) rules: perspective from the first year's data. Clin Chem. 1992;38:1254-1259.
25. Merritt BR, McHugh RB, Kimball AC, Bauer H. A two-year study of clinical chemistry determinations in Minnesota hospitals. Minn Med. 1965;48:939-956.
26. Gilbert RK. Progress and analytic goals in clinical chemistry. Am J Clin Pathol. 1975;63(suppl 6):960-973.
27. Taylor RN, Fulford KM. Assessment of laboratory improvement by the Center for Disease Control Diagnostic Immunology Proficiency Testing Program. J Clin Microbiol. 1981;13:356-368.
28. Jones RN, Edson DC. Antibiotic susceptibility testing accuracy: review of the College of American Pathologists microbiology survey, 1972-1983. Arch Pathol Lab Med. 1985;109:595-601.
29. Rickman WJ, Monical C, Waxdal MJ. Improved precision in the enumeration and absolute numbers of lymphocyte phenotypes with long-term monthly proficiency testing. Ann N Y Acad Sci. 1993;677:53-58.
30. Stanton N. Blood lead proficiency testing: overview of the federally sponsored program in the US. J Int Fed Clin Chem. 1993;5:158-161.
31. Wood DE, Palmer J, Missett P, Whitby JL. Proficiency testing in parasitology: an educational tool to improve laboratory performance. Am J Clin Pathol. 1994;102:490-494.
32. Tholen D, Lawson NS, Cohen T, Gilmore B. Proficiency test performance and experience with College of American Pathologists' programs. Arch Pathol Lab Med. 1995;119:307-311.
33. Hain RF. Proficiency testing in the physician's office laboratory: an ounce of prevention. South Med J. 1972;65:608-610.
34. Crawley R, Belsey R, Brock D, Baer DM. Regulation of physicians' office laboratories: the Idaho experience. JAMA. 1986;255:374-382.
35. Ferris DG, Hamrick HJ, Pollock PG, et al. Physician office laboratory education and training in primary care residency programs. Arch Fam Med. 1995;4:34-39.