Abstract
Percent agreement and Pearson's correlation coefficient are frequently used to represent inter-examiner reliability, but these measures can be misleading. The use of percent agreement to measure inter-examiner agreement should be discouraged, because it does not account for agreement due solely to chance. Caution must also be used when interpreting Pearson's correlation, because its value is unaffected by systematic bias between examiners. Analyses of data from a reliability study show that even though percent agreement and kappa were consistently high among three examiners, reliability as measured by Pearson's correlation was inconsistent. This study shows that correlation and kappa can be used together to uncover non-random examiner error.
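The two shortcomings described above can be made concrete with a small numerical sketch. The following Python code (hypothetical ratings, not the study's data) computes percent agreement, Cohen's kappa, and Pearson's correlation for two contrived cases: one where a systematic bias leaves Pearson's r at a perfect 1.0 despite zero exact agreement, and one where a rare finding inflates percent agreement while kappa shows agreement is no better than chance.

```python
# A minimal sketch (not the study's actual data or code) illustrating why
# percent agreement and Pearson's r can mislead, and how kappa differs.
import numpy as np
from scipy.stats import pearsonr


def percent_agreement(a, b):
    """Proportion of subjects on which two examiners give identical scores."""
    a, b = np.asarray(a), np.asarray(b)
    return np.mean(a == b)


def cohens_kappa(a, b):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    a, b = np.asarray(a), np.asarray(b)
    categories = np.union1d(a, b)
    p_o = np.mean(a == b)                                             # observed agreement
    p_e = sum(np.mean(a == c) * np.mean(b == c) for c in categories)  # chance agreement
    return (p_o - p_e) / (1.0 - p_e)


# Case 1: examiner B systematically scores one unit higher than examiner A.
# Pearson's r is a perfect 1.0, yet the examiners never agree exactly.
exam_a = np.array([0, 1, 2, 3, 4, 0, 1, 2, 3, 4])
exam_b = exam_a + 1
print(pearsonr(exam_a, exam_b)[0])        # 1.0  (blind to systematic bias)
print(percent_agreement(exam_a, exam_b))  # 0.0
print(cohens_kappa(exam_a, exam_b))       # negative: worse than chance

# Case 2: a rare finding. Both examiners call almost every subject "0",
# so raw agreement is high largely by chance; kappa is far lower.
rare_a = np.array([0] * 18 + [1, 0])
rare_b = np.array([0] * 18 + [0, 1])
print(percent_agreement(rare_a, rare_b))  # 0.90
print(cohens_kappa(rare_a, rare_b))       # about -0.05 (no better than chance)
```

Reading the two statistics side by side, as the abstract suggests, is what exposes the problem: high correlation with low kappa points to a systematic (non-random) difference between examiners, while high percent agreement with low kappa points to agreement driven mostly by chance.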
