To the Editor.
—Dr Goldman1 used the κ statistic to evaluate peer assessments of quality of care. He concluded that "physician agreement regarding quality of care is only slightly better than the level expected by chance." Of course, the real concern is not the level of agreement per se, but what "agreement" implies. Common sense suggests that if there is a low level of agreement, then physician judgments about quality of care are inaccurate.

Goldman's conclusion is based on the κ statistic, an index of interrater agreement. He relied on a standard2 under which a κ of less than 0.40 represents "poor" agreement, together with the finding that few κ coefficients greater than 0.40 are reported in the literature on peer assessments. However, there are difficulties in the interpretation of κ,3,4 and the reported κ statistics do not support the conclusion.

The difficulties in the interpretation of the …
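For readers unfamiliar with the statistic, a minimal sketch of its definition follows; the formula is the standard one for two raters, and the worked numbers are illustrative assumptions, not figures taken from the letter or from Goldman's study. The statistic is

\[
\kappa = \frac{p_o - p_e}{1 - p_e},
\]

where $p_o$ is the observed proportion of agreement and $p_e$ is the proportion of agreement expected by chance from the raters' marginal rates. As a hypothetical illustration of why interpretation can be difficult: if each of two reviewers rates 90% of cases "acceptable" and the two agree on 85% of cases overall, then $p_e = 0.9 \times 0.9 + 0.1 \times 0.1 = 0.82$, so

\[
\kappa = \frac{0.85 - 0.82}{1 - 0.82} \approx 0.17.
\]

That is, seemingly substantial raw agreement can still fall well below the 0.40 standard when one category predominates.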