An Evidence-Based Approach to Distractor Generation in Multiple-Choice Language Tests

Abstract
The purpose of this project is to explore the feasibility of a new approach to producing evidence-based distractor sets. We use Common Wrong Answers (CWAs), together with the associated performance data, drawn from candidate responses to open gap-fill tasks, to produce distractor sets for multiple-choice gap-fill tasks based on the same texts. We then investigate whether these distractor sets are effective for use in language tests, through both empirical analysis and qualitative review, and consider the potential impact on the production process for test material. The project explores an innovative method of content development and raises the possibility of an approach to item production in which test items are semi-automatically generated in less time without compromising quality or reliability. Although the approach is specific to one task type, it is hoped that further research will extend its applications and deliver a version that may be operationalised across different task types in the development of language assessments.
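The core idea described above — tallying candidate responses to an open gap and promoting the most frequent wrong answers to distractors — can be sketched as follows. This is a minimal illustration only, not the authors' implementation; the function name `select_distractors`, the normalisation rules, and the sample responses are all hypothetical assumptions.

```python
from collections import Counter

def select_distractors(responses, key, n_distractors=3):
    """Tally Common Wrong Answers (CWAs) from candidate responses to one
    open gap and return the n most frequent as a candidate distractor set
    for a multiple-choice version of the same gap.

    Illustrative sketch: an operational version would also apply the
    performance-data filters and qualitative review described in the paper.
    """
    # Normalise responses and drop correct answers (simple lowercasing
    # here; a real system would need a more careful matching policy).
    wrong = [r.strip().lower() for r in responses
             if r.strip().lower() != key.strip().lower()]
    counts = Counter(wrong)
    return [answer for answer, _ in counts.most_common(n_distractors)]

# Hypothetical responses to the gap "She ___ to school every day." (key: "goes")
responses = ["goes", "go", "went", "go", "going", "goes", "went", "go"]
print(select_distractors(responses, key="goes"))  # ['go', 'went', 'going']
```

In practice, frequency alone would be only the first filter; as the abstract notes, the resulting sets would still be evaluated empirically and through expert review before use.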