Abstract
Data from international large-scale assessments (ILSAs) of schooled populations indicate that boys have considerably poorer literacy skills than girls. New evidence from a household-based ILSA, the Organisation for Economic Co-operation and Development (OECD) Survey of Adult Skills (PIAAC), indicates that the gender gap in literacy is negligible, even though its assessment framework is similar to that of one of the most widely used school-based assessments, the Programme for International Student Assessment (PISA). Individual-level data from 15-, 16-, and 17-year-olds in countries that administered both assessments were used to estimate and compare literacy gender gaps across the two assessments, after accounting for differences in target population, response rates, scoring scheme, test length, mode of delivery, the prevalence of items involving different stimuli (e.g., types of texts), and the cognitive processes test-takers must engage in to solve assessment items (e.g., accessing and retrieving information, or reflecting on and evaluating information presented in the text). These differences explain only part of the discrepancy in estimated literacy gender gaps across the two studies: Even when these factors are considered, gender gaps remain large in PISA and small (though imprecisely estimated) in PIAAC. The potential roles of test-taking motivation and administration conditions in explaining the differences across the studies, as well as implications for research and policy, are discussed.

Educational Impact and Implications Statement

This work compares literacy gender gaps in the teenage years across two low-stakes international large-scale assessments: the Programme for International Student Assessment (PISA) and the Organisation for Economic Co-operation and Development (OECD) Survey of Adult Skills (PIAAC). The findings show that estimates of literacy gender gaps differ between the two assessments: Boys significantly underachieve compared with girls in PISA, but no gender gap could be identified in PIAAC. These results suggest that before embarking on major policy reforms designed to ensure that boys acquire literacy skills, motivated by boys' poor showing in large-scale assessments as well as school tests, it would be important to evaluate whether and how assessments reflect all of what boys know and can do, whether assessments are comprehensive enough to capture dimensions of literacy in which boys may be more proficient, and, crucially, whether the assessments provide incentives for boys to show test administrators what they know and can do.