Abstract
Self-assessment has been used widely in language testing research but has produced variable results. In many quarters self-assessment is considered a viable alternative to formal second language assessment for placement and criterion-referenced interpretations, although variation in self-assessment validity coefficients suggests potential difficulty in accurate interpretation. This article first summarizes the research literature by means of a formal meta-analysis conducted on 60 correlations reported in the second language testing literature. These correlations are the basis for estimates of median effect sizes for second language speaking, listening, reading and writing tests. The second phase of the study is an empirical analysis of the validity of a self-assessment instrument. A total of 236 ‘just-instructed’ English as a foreign language learners completed self-assessments of functional English skills derived from instructional materials and from general proficiency criteria; the learners’ teachers also provided assessments of each learner. The criterion variable was an achievement test written to assess mastery of the just-completed course materials. Contrastive multiple regression analyses revealed differential validities for self-assessment relative to teacher assessment, depending on the extent of learners’ experience with the language skill being self-assessed.