Putting double marking to the test: a framework to assess if it is worth the trouble

Abstract
Background: Assigning a mark that accurately measures the quality of students' work is a challenge in essay-type assessments, which require an element of judgement and fairness from the markers. Double marking such assessments has been seen as a way of improving the reliability of the mark, but the analysis often looks only for absolute agreement between markers rather than at all aspects of reliability.
Aim: To develop an analytic process that examines the components and meanings of reliability calculations and can be used to assess the value of double marking a piece of work.
Methods: An undergraduate case study assessment in General Practice was used as an illustration. Datasets of double marking were collected retrospectively for 1999-2000 and prospectively for 2002-03. Inter-marker agreement and its effect on the reliability of the final mark awarded to students were assessed using methods appropriate to the type of data collected, together with Generalisability Theory.
Results: The data were used to illustrate how to interpret the results of Bland and Altman plots, ANOVA tables and Cohen's kappa calculations. Generalisability Theory showed that, although there was reasonable agreement between markers, the reliability of the mark awarded to each student was still only moderate, probably because of unexplained variability elsewhere in the process. Possible reasons for this variability are discussed.
Conclusions: A flowchart of the decisions and actions needed to judge whether a piece of work should be double marked has been constructed.
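As an illustrative sketch only (not part of the published study), the Python snippet below shows the kinds of calculations the framework draws on for a double-marked assessment: Bland and Altman limits of agreement between two markers' marks, Cohen's kappa on pass/fail decisions, and variance components with generalisability coefficients from a simple students x markers design. The marker data and the pass mark of 50 are hypothetical assumptions, not figures from the case study.

```python
import numpy as np

# Hypothetical double-marking data: one mark per student from each of two
# markers (percentages). These numbers are illustrative only.
marks = np.array([
    [62.0, 58.0],
    [48.0, 52.0],
    [71.0, 66.0],
    [55.0, 57.0],
    [40.0, 46.0],
    [67.0, 61.0],
    [53.0, 49.0],
    [75.0, 70.0],
    [59.0, 63.0],
    [44.0, 41.0],
])
marker1, marker2 = marks[:, 0], marks[:, 1]

# --- Bland and Altman limits of agreement ---------------------------------
diffs = marker1 - marker2
mean_diff = diffs.mean()
sd_diff = diffs.std(ddof=1)
loa = (mean_diff - 1.96 * sd_diff, mean_diff + 1.96 * sd_diff)
print(f"Mean difference {mean_diff:.1f}, 95% limits of agreement "
      f"{loa[0]:.1f} to {loa[1]:.1f}")

# --- Cohen's kappa on pass/fail decisions (pass mark of 50 assumed) -------
pass1 = marker1 >= 50
pass2 = marker2 >= 50
p_observed = np.mean(pass1 == pass2)
p_chance = (pass1.mean() * pass2.mean()
            + (1 - pass1.mean()) * (1 - pass2.mean()))
kappa = (p_observed - p_chance) / (1 - p_chance)
print(f"Cohen's kappa for pass/fail agreement: {kappa:.2f}")

# --- Generalisability analysis (students x markers, one mark per cell) ----
n_students, n_markers = marks.shape
grand = marks.mean()
student_means = marks.mean(axis=1)
marker_means = marks.mean(axis=0)

ms_students = n_markers * np.sum((student_means - grand) ** 2) / (n_students - 1)
ms_markers = n_students * np.sum((marker_means - grand) ** 2) / (n_markers - 1)
residual = marks - student_means[:, None] - marker_means[None, :] + grand
ms_error = np.sum(residual ** 2) / ((n_students - 1) * (n_markers - 1))

# Estimated variance components (negative estimates truncated at zero).
var_error = ms_error
var_markers = max((ms_markers - ms_error) / n_students, 0.0)
var_students = max((ms_students - ms_error) / n_markers, 0.0)

# Generalisability (relative) and dependability (absolute) coefficients
# for a design with n_markers markers per piece of work.
g_coeff = var_students / (var_students + var_error / n_markers)
phi_coeff = var_students / (var_students + (var_markers + var_error) / n_markers)
print(f"G coefficient {g_coeff:.2f}, Phi coefficient {phi_coeff:.2f}")
```

A pattern of reasonable limits of agreement and kappa alongside a modest G coefficient would mirror the situation described above: markers agree fairly well, yet the reliability of each student's final mark remains only moderate because of variance not attributable to the markers.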