To Score or Not to Score? A Simulation Study on the Performance of Test Scores, Plausible Values, and SEM, in Regression With Socio-Emotional Skill or Personality Scales as Predictors
Open Access
- 15 October 2021
- journal article
- research article
- Published by Frontiers Media SA in Frontiers in Psychology
Abstract
This article addresses a fundamental question in the study of socio-emotional skills, personality traits, and related constructs: “To score or not to score?” When researchers use test scores or scale scores (i.e., fallible point estimates of a skill or trait) as predictors in multiple regression, measurement error in these scores tends to attenuate the regression coefficient of the skill and inflate those of the covariates. Unlike in cognitive assessments, it is not fully established how severe this bias can be in socio-emotional skill assessments, that is, how well test scores recover the true regression coefficients compared with methods designed to account for measurement error: structural equation modeling (SEM) and plausible values (PV). The types of scores considered in this study are standardized mean scores (SMS), regression factor scores (RFS), empirical Bayes modal (EBM) scores, weighted maximum likelihood estimates (WLE), and expected a posteriori (EAP) estimates. We present a simulation study in which we compared these approaches under conditions typical of socio-emotional skill and personality assessments. We examined the performance of the five types of test scores, PV, and SEM with regard to two outcomes: (1) percent bias in the regression coefficient of the skill in predicting an outcome, and (2) percent bias in the regression coefficient of a covariate. We varied the number of items, factor loadings/item discriminations, sample size, and the relative strength of the relationship between the skill and the outcome. Results revealed that although the different types of test scores were highly correlated with one another, the ensuing bias in regression coefficients varied considerably. The magnitude of bias was highest for WLE with short scales of low reliability.
Bias when using SMS or WLE test scores was sometimes large enough to lead to erroneous research conclusions with potentially adverse implications for policy and practice (up to 55% for the regression coefficient of the skill and 20% for that of the covariate). EAP, EBM, and RFS performed better, producing only small bias in some conditions. Additional analyses showed that the performance of test scores also depended on whether standardized or unstandardized scores were used. Only PV and SEM performed well in all scenarios and emerged as the clearly superior options. We recommend that researchers use SEM, and preferably PV, in studies on the (incremental) predictive power of socio-emotional skills.
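The attenuation and inflation described in the abstract can be illustrated with a minimal simulation. This is only a sketch, not the study's actual design: the data-generating values (true coefficients 0.4 and 0.3, a skill–covariate correlation of 0.5, and a score reliability of 0.6, typical of a short scale) are illustrative assumptions, and the fallible score is a simple standardized composite rather than any specific scoring method from the article.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Latent skill and a correlated covariate (both standardized, r = 0.5).
skill = rng.normal(size=n)
covariate = 0.5 * skill + np.sqrt(1 - 0.5**2) * rng.normal(size=n)

# Outcome generated from the TRUE latent skill and the covariate.
beta_skill, beta_cov = 0.4, 0.3
outcome = beta_skill * skill + beta_cov * covariate + rng.normal(size=n)

# Fallible test score with reliability 0.6 (an assumed value),
# then standardized, as with a standardized mean score.
reliability = 0.6
score = np.sqrt(reliability) * skill + np.sqrt(1 - reliability) * rng.normal(size=n)
score /= score.std()

def ols(predictors, y):
    """OLS slopes (intercept included in the fit, then dropped)."""
    design = np.column_stack([np.ones(len(y)), *predictors])
    return np.linalg.lstsq(design, y, rcond=None)[0][1:]

b_true = ols([skill, covariate], outcome)   # recovers ~[0.4, 0.3]
b_score = ols([score, covariate], outcome)  # skill coefficient attenuated,
                                            # covariate coefficient inflated
print("true-skill regression:    ", b_true.round(2))
print("fallible-score regression:", b_score.round(2))
```

Under these settings the skill coefficient shrinks from 0.4 to roughly 0.27 while the covariate coefficient is inflated from 0.3 to roughly 0.39, the same qualitative pattern the simulation study reports for error-prone scores.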