Article
November 11, 1992

The κ Statistic

Author Affiliations

University of California, San Diego

JAMA. 1992;268(18):2513. doi:10.1001/jama.1992.03490180045013
Abstract

To the Editor.—Dr Goldman1 used the κ statistic to evaluate peer assessments of quality of care. He concluded that "physician agreement regarding quality of care is only slightly better than the level expected by chance." Of course, the real concern is not the level of agreement per se, but what "agreement" implies. Common sense suggests that if there is a low level of agreement, then physician judgments about quality of care are inaccurate.

Goldman's conclusion is based on the κ statistic, which is an index of interrater agreement. He used a standard2 that κ less than 0.40 represents "poor" agreement, together with the finding that few κ coefficients greater than 0.40 are reported in the literature on peer assessments. However, there are difficulties in the interpretation of κ,3,4 and the reported κ statistics do not support the conclusion.

The difficulties in the interpretation of the
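For reference, the index discussed here is conventionally defined (assuming Cohen's κ for two raters, since the letter does not restate the formula) as the observed proportion of agreement corrected for the agreement expected by chance:

    % Cohen's kappa: chance-corrected proportion of interrater agreement
    % p_o = observed proportion of agreement between the two raters
    % p_e = proportion of agreement expected by chance alone
    \[
      \kappa = \frac{p_o - p_e}{1 - p_e}
    \]

Under this definition, κ = 1 indicates perfect agreement and κ = 0 indicates agreement no better than chance; a value below 0.40 is conventionally, though not unambiguously, read as "poor" agreement.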
