Abstract
Coefficients that assess the reliability of data-making processes (coding text, transcribing interviews, or categorizing observations into analyzable terms) are mostly conceptualized in terms of the agreement that a set of coders, observers, judges, or measuring instruments exhibits. When variation is low, reliability coefficients reveal their dependency on an often neglected phenomenon: the amount of information that reliability data provide about the reliability of the coding process or the data it generates. This paper explores the concept of reliability, simple agreement, three conceptions of chance used to correct that agreement, and sources of information deficiency, and it develops two measures of information about reliability, akin to the power of a statistical test, intended as companions to traditional reliability coefficients, especially Krippendorff's (2004, pp. 221–250; Hayes & Krippendorff, 2007) alpha.

References

Hayes, A. F., & Krippendorff, K. (2007). Answering the call for a standard reliability measure for coding data. Communication Methods and Measures, 1, 77–89.

Krippendorff, K. (2004). Content analysis: An introduction to its methodology (2nd ed.). Thousand Oaks, CA: Sage.
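A note on the computation the abstract refers to (not part of the article): chance-corrected coefficients compare observed to chance-expected disagreement, and Krippendorff's alpha takes the form alpha = 1 - D_o/D_e. The Python sketch below computes alpha for nominal data; the function name nominal_alpha and the data layout (one row per coder, one column per unit, None for a missing value) are assumptions of this illustration, not the authors' implementation. It also exposes the degeneracy the abstract highlights: when the data show little or no variation, the expected disagreement D_e shrinks toward zero and the coefficient ceases to be informative.

from collections import Counter

def nominal_alpha(data):
    """Krippendorff's alpha for nominal data.
    data: one list per coder, one entry per unit; None marks a missing value.
    """
    o = Counter()                        # coincidence matrix, o[(c, k)]
    for unit in zip(*data):              # transpose: iterate over units
        values = [v for v in unit if v is not None]
        m = len(values)
        if m < 2:
            continue                     # a unit with < 2 values is unpairable
        for i, c in enumerate(values):
            for j, k in enumerate(values):
                if i != j:               # every ordered pair within the unit
                    o[(c, k)] += 1.0 / (m - 1)
    n_c = Counter()                      # marginal totals per category
    for (c, _), count in o.items():
        n_c[c] += count
    n = sum(n_c.values())                # number of pairable values
    if n <= 1:
        return None                      # too little data to assess reliability
    d_o = sum(count for (c, k), count in o.items() if c != k)
    d_e = (n * n - sum(v * v for v in n_c.values())) / (n - 1)
    return 1.0 - d_o / d_e if d_e > 0 else None  # D_e = 0: no variation, alpha undefined

# Two coders, six units, one missing value, one disagreement: alpha is about 0.53.
print(nominal_alpha([[1, 2, 3, 3, 2, 1],
                     [1, 2, None, 3, 2, 2]]))
# No variation at all: D_e = 0, so alpha is undefined (returns None). This is the
# information deficiency under low variation that the abstract addresses.
print(nominal_alpha([[1, 1, 1], [1, 1, 1]]))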