Article
November 11, 1992

The κ Statistic-Reply

Author Affiliations

Washington, DC

JAMA. 1992;268(18):2513-2514. doi:10.1001/jama.1992.03490180045014
Abstract

In Reply.—Dr Berry appears to suggest that accuracy should be used in studies such as mine to evaluate the scientific usefulness of peer assessments. However, the calculation of accuracy or other measures of validity requires a "gold standard" against which these ratings can be measured. No such gold standard currently exists; instead, as discussed in my article, peer assessment is typically used as the standard against which the validity of other measures of quality is evaluated. Thus, the scientific value of peer ratings can only be studied by measuring agreement among reviewers. The absence of a gold standard is typical of studies in which observer variability is assessed through the use of κ. The only question usually facing investigators in these studies is what measure of agreement to use, not whether to measure agreement or accuracy.

Berry also cites the often-noted1-5 relationship between prevalence and κ values as
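The prevalence sensitivity of κ alluded to above can be illustrated numerically. The sketch below (illustrative only; the function and data are not drawn from the letter) computes Cohen's κ for two raters and shows that two rating pairs with identical observed agreement can yield very different κ values once category prevalence becomes skewed.

```python
from collections import Counter

def cohens_kappa(ratings1, ratings2):
    """Cohen's kappa for two raters judging the same items.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement
    and p_e is the agreement expected by chance from each rater's
    marginal category frequencies.
    """
    n = len(ratings1)
    p_o = sum(a == b for a, b in zip(ratings1, ratings2)) / n
    counts1, counts2 = Counter(ratings1), Counter(ratings2)
    p_e = sum((counts1[c] / n) * (counts2[c] / n)
              for c in set(counts1) | set(counts2))
    return (p_o - p_e) / (1 - p_e)

# Balanced prevalence: 8/10 observed agreement, each rater rates
# half the items positive.
balanced = cohens_kappa([1, 1, 0, 0, 1, 0, 1, 0, 1, 0],
                        [1, 0, 0, 0, 1, 0, 1, 1, 1, 0])   # ≈ 0.60

# Skewed prevalence: still 8/10 observed agreement, but 9/10 items
# rated positive by each rater, so chance agreement is high.
skewed = cohens_kappa([1, 1, 1, 1, 1, 1, 1, 1, 1, 0],
                      [1, 1, 1, 1, 1, 1, 1, 1, 0, 1])     # ≈ -0.11
```

With observed agreement held fixed at 0.80, κ falls from roughly 0.60 to a slightly negative value as prevalence shifts from 50% to 90%, because the chance-expected agreement p_e rises with the skew. This is the behavior the cited literature on prevalence and κ describes.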
