Automatic Essay Assessment

Abstract
Computational techniques for scoring essays have recently come into use. Their bases and development methods raise both old and new measurement issues. Because they come principally from the computer and cognitive sciences, however, they have received little attention from the educational measurement community. We briefly survey the state of the technology, then describe one such system, the Intelligent Essay Assessor (IEA). IEA is based largely on Latent Semantic Analysis (LSA), a machine-learning model that induces the semantic similarity of words and passages from the analysis of large bodies of domain-relevant text. IEA's dominant variables are computed from comparisons with pre-scored essays whose content LSA measures as highly similar. Across many validation studies with a wide variety of topics and test-takers, IEA scores agreed with human graders as closely as the graders agreed with each other. The technique also supports other educational applications. Critical measurement questions are posed and discussed.
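The scoring scheme sketched in the abstract, projecting a new essay into an LSA semantic space and comparing it with pre-scored essays of similar content, can be illustrated with a toy example. The Python sketch below is not the IEA implementation: the training essays and scores are hypothetical placeholders, and the term weighting (raw counts), number of latent dimensions, and similarity-weighted nearest-neighbor scoring are arbitrary choices made only to show the general shape of an LSA-based comparison.

```python
import numpy as np

# Hypothetical pre-scored training essays (toy placeholders, not real data).
train_essays = [
    "photosynthesis converts light energy into chemical energy in plants",
    "plants use sunlight water and carbon dioxide to make glucose",
    "the mitochondria is the powerhouse of the cell",
    "cells produce energy through respiration in the mitochondria",
]
train_scores = np.array([5.0, 4.0, 2.0, 3.0])  # hypothetical human scores

def term_doc_matrix(docs, vocab=None):
    """Build a raw term-by-document count matrix."""
    if vocab is None:
        vocab = sorted({w for d in docs for w in d.split()})
    index = {w: i for i, w in enumerate(vocab)}
    X = np.zeros((len(vocab), len(docs)))
    for j, d in enumerate(docs):
        for w in d.split():
            if w in index:
                X[index[w], j] += 1
    return X, vocab

# LSA: a truncated SVD of the term-document matrix yields a low-rank semantic space.
X, vocab = term_doc_matrix(train_essays)
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2                          # number of latent dimensions (toy value)
Uk, sk = U[:, :k], s[:k]

def lsa_vector(doc):
    """Fold a document's term counts into the k-dimensional LSA space."""
    counts, _ = term_doc_matrix([doc], vocab)
    return (counts[:, 0] @ Uk) / sk

def score_essay(essay, n_neighbors=3):
    """Score an essay as a cosine-similarity-weighted average of the most similar pre-scored essays."""
    v = lsa_vector(essay)
    train_vecs = np.array([lsa_vector(d) for d in train_essays])
    sims = train_vecs @ v / (np.linalg.norm(train_vecs, axis=1) * np.linalg.norm(v) + 1e-12)
    top = np.argsort(sims)[-n_neighbors:]
    w = np.clip(sims[top], 0, None)
    return float(w @ train_scores[top] / (w.sum() + 1e-12))

print(score_essay("plants make glucose from sunlight and carbon dioxide"))
```

In this sketch the predicted score is a weighted average of the human scores attached to the most semantically similar training essays; the actual IEA combines such comparisons with additional variables, as described in the body of the article.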