Article
November 22, 1995

Quality of Chart Review for Quality of Care

Author Affiliations

Pennsylvania State University College of Medicine, Hershey

JAMA. 1995;274(20):1585-1586. doi:10.1001/jama.1995.03530200021021
Abstract

To the Editor.—Dr Ellerbeck and colleagues1 support the accuracy of their abstraction of clinical indicators from medical records using κ statistics based on duplicate reviews of 912 records. However, the κ statistic is inappropriate and uninformative in this application, since κ measures agreement, corrected for chance, between reviewers when the true finding is not known,2 whereas in chart abstraction the presence or absence of an indicator can be ascertained with certainty.

More appropriate measures of accuracy are sensitivity and specificity.3 In this setting, sensitivity is the proportion of cases correctly identified when the indicator is present in the medical record, while specificity is the proportion of cases correctly identified as having no indicator present.

Using κ can mislead. For example, suppose the prevalence of the indicator is 10% in a sample of 1000 records, the sensitivity of the record abstraction is 75% (75 correct of 100
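The preview is truncated before the letter's numerical example concludes. As a minimal sketch of the kind of calculation being described, the following computes sensitivity, specificity, and Cohen's κ for a hypothetical 2×2 comparison of abstraction against the known chart content, using the stated prevalence (10%), sample size (1000), and sensitivity (75%); the 90% specificity and the choice to compute κ against the true finding are assumptions, not figures from the letter.

```python
# Hypothetical illustration (not the letter's completed example):
# contrast kappa with sensitivity/specificity for chart abstraction.

n = 1000
prevalence = 0.10      # indicator truly present in 10% of records (from the letter)
sensitivity = 0.75     # 75 of 100 true positives abstracted correctly (from the letter)
specificity = 0.90     # assumed value; the preview cuts off before this is given

present = n * prevalence          # 100 records with the indicator
absent = n - present              # 900 records without it
tp = sensitivity * present        # 75 correctly identified as present
fn = present - tp                 # 25 missed
tn = specificity * absent         # 810 correctly identified as absent
fp = absent - tn                  # 90 falsely recorded as present

# Observed agreement and chance-expected agreement for Cohen's kappa,
# treating the true chart finding as the second "rater"
p_o = (tp + tn) / n
abstractor_pos = (tp + fp) / n
abstractor_neg = (fn + tn) / n
p_e = abstractor_pos * prevalence + abstractor_neg * (1 - prevalence)
kappa = (p_o - p_e) / (1 - p_e)

print(f"sensitivity = {tp / present:.2f}, specificity = {tn / absent:.2f}")
print(f"observed agreement = {p_o:.3f}, kappa = {kappa:.2f}")
```

Under these assumed numbers, raw agreement is 88.5% with sensitivity 75% and specificity 90%, yet κ is only about 0.50, because the low prevalence of the indicator depresses κ; this is the sense in which κ alone can mislead about abstraction accuracy.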
