Measurement Instruments for the Social Sciences
Journal Information

EISSN: 2523-8930
Published by:
Springer Science and Business Media LLC
Total articles ≈ 38
Latest articles in this journal
Published: 10 October 2022
Measurement Instruments for the Social Sciences, Volume 4, pp 1-24; https://doi.org/10.1186/s42409-022-00041-2
Abstract:
Religiousness and spirituality are important in the study of psychology for several reasons: They are central to identity and values; they have been reported as being positively associated with health and well-being; and they capture (and perhaps lead to) the largest measurable psychological differences between societies. At five items, the Duke University Religion Index (DUREL) is an efficient measure, which advantageously distinguishes between religious sentiment and activity, and between formal versus private involvement. This project extends its internal validation throughout the world, with formal tests of measurement invariance in three languages in Namibia (Study 1) and in a global sample of 26 countries (Study 2). Results confirmed a two-subscale factorial structure of Religious Activity (combining organizational and non-organizational activities) and Intrinsic Religiosity in Namibia and in half of the 26-country samples. In 13 other countries, fit was best for a one-factor model. Fit was problematic where there was too little intra-national variance: in China and Japan, where religious involvement is universally low, and in Tanzania, where it is universally high. Scalar measurement invariance was found for the one-factor structure across 13 samples and for the two-factor structure across 11 samples. External validation of the scale is examined using psychological and sociodemographic variables. This validation of the DUREL supports its use across contexts, facilitating increased attention to this important aspect of both personality and culture.
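The scalar invariance reported in this abstract is conventionally established by comparing nested multi-group CFA models (configural → metric → scalar) with a chi-square difference test. A minimal sketch of that comparison, using hypothetical fit statistics rather than values from the article:

```python
from scipy.stats import chi2


def chi_square_difference(chisq_restricted, df_restricted,
                          chisq_free, df_free):
    """Chi-square difference (likelihood-ratio) test between two nested
    multi-group CFA models, e.g. a scalar model (equal loadings and
    intercepts) against a metric model (equal loadings only)."""
    delta_chisq = chisq_restricted - chisq_free
    delta_df = df_restricted - df_free
    p_value = chi2.sf(delta_chisq, delta_df)
    return delta_chisq, delta_df, p_value


# Hypothetical fit statistics for illustration only: the more
# restricted (scalar) model necessarily has higher chi-square and df.
d_chi, d_df, p = chi_square_difference(412.8, 190, 388.1, 178)
```

A non-significant p-value would indicate that the added equality constraints do not meaningfully worsen fit, supporting the more restricted invariance level; in practice this test is usually supplemented with changes in CFI and RMSEA.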
Published: 30 September 2022
Measurement Instruments for the Social Sciences, Volume 4, pp 1-23; https://doi.org/10.1186/s42409-022-00038-x
Abstract:
This paper examines the feasibility of ex-post harmonisation strategies using European Values Study (EVS) Wave 5 (2017–2020) and European Social Survey (ESS) Round 9 (2018–2019) data across 17 countries. The study provides an empirical assessment of the comparability of four items measuring religious behaviours (belonging to a religious denomination at present/in the past, attendance at religious services, and praying) captured in both surveys. The novelty of this paper lies in the analytical comparison of religiosity indicators that are rarely assessed from a comparative perspective. The harmonisation strategy was based upon several analytical techniques that seek to determine similarities and differences between the selected items in terms of (a) their validity, by examining their correlations with a set of sociodemographic and substantive correlates, (b) their distributions, supplemented by visual comparisons and relevant statistical tests, and (c) their shares of non-substantive responses. The findings pointed to the greatest consistency among the partial correlations, where individual religiosity produced the most differences between the surveys. The distributions produced the most discrepancies, which also corresponded to lower similarity across variable categories as gauged by Duncan’s index. This paper is descriptive and exploratory in its aim. It can be taken as a jumping-off point for future research in which the time series of these two surveys, and potentially others, can be examined across aggregate levels (e.g. birth cohorts, countries).
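Duncan’s index of dissimilarity, mentioned above as a gauge of similarity across variable categories, is half the sum of the absolute differences between two proportion distributions. A minimal sketch, with hypothetical (not article) attendance distributions:

```python
def duncan_index(p, q):
    """Duncan's index of dissimilarity between two categorical
    distributions given as proportions that each sum to 1.
    Interpretable as the share of respondents in one survey who
    would need to change category for the distributions to match."""
    assert abs(sum(p) - 1.0) < 1e-9 and abs(sum(q) - 1.0) < 1e-9
    return 0.5 * sum(abs(pi - qi) for pi, qi in zip(p, q))


# Hypothetical response distributions for one religiosity item
# (e.g. service attendance bands) in two surveys.
evs = [0.40, 0.25, 0.20, 0.15]
ess = [0.35, 0.30, 0.15, 0.20]
d = duncan_index(evs, ess)  # 0 = identical, 1 = fully disjoint
```

Values near 0 indicate that the two surveys capture the item with very similar category distributions, which is the kind of evidence the harmonisation assessment relies on.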
Published: 24 September 2022
Measurement Instruments for the Social Sciences, Volume 4, pp 1-13; https://doi.org/10.1186/s42409-022-00040-3
Abstract:
Individuals hold normative ideas about the just distribution of goods and burdens within a social aggregate. These normative ideas guide the evaluation of existing inequalities and refer to four basic principles: (1) equality stands for an equal distribution of rewards and burdens; (2) need calls for a distribution that takes individual needs into account; (3) equity suggests a distribution based on merit; and (4) the entitlement principle suggests that ascribed (e.g., gender) and achieved status characteristics (e.g., occupational prestige) should determine the distribution of goods and burdens. Past research has argued that preferences for these principles vary with social position as well as the social structure of a society. The Basic Social Justice Orientations (BSJO) scale was developed to assess agreement with the four justice principles but so far has only been fielded in Germany. Round 9 of the European Social Survey (ESS R9, with data collected in 2018/2019) is the first time four items of the BSJO scale (one item per justice principle) were included in a cross-national survey program, offering the unique opportunity to study both within- and between-country variation. To facilitate substantive research on preference for equality, equity, need, and entitlement, this report provides evidence on measurement quality in 29 European countries from ESS R9. Analyzing response distributions, non-response, reliability, and associations with related variables, we find supportive evidence that the four items of the BSJO scale included in ESS R9 produce low non-response rates, estimate agreement with the four distributive principles reliably, and show the expected correlations with related concepts. Researchers should, however, remember that the BSJO scale, as implemented in ESS R9, provides only single manifest indicators, which therefore may not cover the full spectrum of the underlying distributive principles but focus on specific elements of them.
Published: 5 September 2022
Measurement Instruments for the Social Sciences, Volume 4, pp 1-11; https://doi.org/10.1186/s42409-022-00037-y
Abstract:
Contact-tracing smartphone apps that rely on users’ private data have been proposed as important tools in fighting the COVID-19 pandemic. The use of these apps, however, has sparked new debates on the value of data privacy. Several earlier studies have investigated citizens’ willingness to use such apps. We propose a set of questions as a new measurement instrument that goes beyond eliciting acceptance and aims at quantifying users’ willingness to pay (WTP) for data privacy. We assess some aspects of the measurement instrument pertaining to its validity. We find that the central assumptions of our theoretical model are met, suggesting that the instrument serves as a good starting point for measuring WTP. For example, we found a rather low WTP for data privacy in times of a pandemic, with high consent rates to data sharing and a majority of respondents willing to pay at most 10€ to avoid sharing their data. Nevertheless, several improvements to the instrument are possible and should be addressed by future research. We also encourage researchers to field the refined version in larger samples that include the offline population.
Published: 3 September 2022
Measurement Instruments for the Social Sciences, Volume 4, pp 1-20; https://doi.org/10.1186/s42409-022-00039-w
Abstract:
International large-scale assessments (LSAs), such as the Programme for International Student Assessment (PISA), provide essential information about the distribution of student proficiencies across a wide range of countries. The repeated assessments of the distributions of these cognitive domains offer policymakers important information for evaluating educational reforms and receive considerable attention from the media. Furthermore, the analytical strategies employed in LSAs often define methodological standards for applied researchers in the field. Hence, it is vital to critically reflect on the conceptual foundations of analytical choices in LSA studies. This article discusses the methodological challenges in selecting and specifying the scaling model used to obtain proficiency estimates from the individual student responses in LSA studies. We distinguish design-based inference from model-based inference. It is argued that for the official reporting of LSA results, design-based inference should be preferred because it allows for a clear definition of the target of inference (e.g., country mean achievement) and is less sensitive to specific modeling assumptions. More specifically, we discuss five analytical choices in the specification of the scaling model: (1) the specification of the functional form of item response functions, (2) the treatment of local dependencies and multidimensionality, (3) the consideration of test-taking behavior for estimating student ability, and the role of country differential item functioning (DIF) for (4) cross-country comparisons and (5) trend estimation. This article’s primary goal is to stimulate discussion about recently implemented changes and suggested refinements of the scaling models in LSA studies.
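The “functional form of item response functions” discussed in choice (1) typically means deciding between logistic models such as the Rasch (1PL) and 2PL models. A minimal sketch of the two-parameter logistic item response function (parameter values below are illustrative, not from the article):

```python
import math


def irf_2pl(theta, a, b):
    """Two-parameter logistic item response function: the probability of
    a correct response given ability theta, item discrimination a, and
    item difficulty b. The Rasch (1PL) model is the special case a = 1
    for all items."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))


# At theta == b the probability of a correct response is exactly 0.5.
p_match = irf_2pl(theta=0.0, a=1.0, b=0.0)
```

The choice matters for reporting because a constant discrimination (Rasch) keeps country comparisons closer to simple score sums, whereas freeing a per item weights items differently in the proficiency estimate.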
Published: 21 June 2022
Measurement Instruments for the Social Sciences, Volume 4, pp 1-12; https://doi.org/10.1186/s42409-022-00036-z
Abstract:
The question order effect refers to the phenomenon that previous questions may affect the cognitive response process and respondents’ answers: previous questions generate a context or frame within which later questions are interpreted. At the same time, in online surveys, the visual design may also shift responses. Past empirical research has yielded considerable evidence for the impact of question order on measurement, but few studies have investigated how question order effects vary with visual design. Our main research question was whether question order effects differ between item-by-item formats and grid formats. The study uses data from an online survey experiment conducted on a non-probability-based online panel in Hungary in 2019. We used the welfare-related questions of Round 8 of the ESS. We manipulated the questionnaire by changing the position of a question that calls forth negative stereotypes about such social benefits and services. We further varied the visual design by presenting the questions on separate pages (item-by-item) or in one grid. The results show that placing the priming question right before the target item significantly shifted respondents’ attitudes in a negative direction, but the effect was significant only when questions were presented on separate pages. A possible reason behind this finding may be that respondents engage in deeper cognition when questions are presented separately. The grid format, on the other hand, was robust against question order; in addition, we found little evidence of stronger satisficing on grids. The findings highlight that mixing item-by-item and grid formats in online surveys may introduce measurement inequivalence, especially when question order effects are expected.
Published: 12 May 2022
Measurement Instruments for the Social Sciences, Volume 4, pp 1-8; https://doi.org/10.1186/s42409-022-00033-2
Abstract:
This article describes the context, development, objectives, and content of three instruments. They stem from two questionnaires used in the ERiK-Surveys 2020 and the Corona-KiTa-Study (CKS), two multi-perspective surveys developed by the German Youth Institute to measure quality in early childhood education and care (ECEC) as well as the challenges posed by the coronavirus pandemic and responses to them. The three instruments focus on (1) childcare center directors’ subjective level of information about pandemic-related regulations in the ERiK questionnaire and the extent of implementation of (2) hygiene and (3) protective measures in ECEC in the CKS questionnaire. First analyses suggest good performance and quality of the instruments. Further analyses (e.g., regarding validity and reliability) will be carried out. The instruments seem promising for future research, for example regarding medical questions in the field of ECEC.
Published: 15 April 2022
Measurement Instruments for the Social Sciences, Volume 4, pp 1-6; https://doi.org/10.1186/s42409-022-00034-1
Abstract:
This study examined the validity of the Japanese version of the Benign and Malicious Envy Scale (BeMaS) with Japanese undergraduate student and non-student samples. Previous studies have identified two types of envy, benign and malicious, that motivate different types of behavior. However, the validity of the BeMaS, developed to measure these two types of dispositional envy, has not been adequately confirmed in East Asian countries. Furthermore, it is unclear whether the two-factor structure of the BeMaS is identical across different samples. Thus, in this study, we examined which of the Japanese words describing envy, urayamashii or netamashii, is suitable for the Japanese BeMaS. Additionally, we tested the validity of the scale’s two-factor model across undergraduate student and non-student samples. The questionnaire survey results confirmed the validity of the BeMaS’s two-factor structural model in both samples, and the goodness of fit was better for urayamashii than for netamashii. Moreover, measurement invariance across the two samples was established in the configural and metric models.
Published: 21 March 2022
Measurement Instruments for the Social Sciences, Volume 4, pp 1-8; https://doi.org/10.1186/s42409-022-00032-3
Abstract:
We explore mask-wearing behavior during the coronavirus pandemic using the Self-Appraisal of Masking Instrument (SAMI). We situate this survey-based instrument within a theory in which the decision to mask reflects social identity, an associated identity standard, and appraisals that generate feelings about oneself. Analyses of SAMI’s empirical properties reveal that masking-specific emotional reactions are distinct from emotional reports related to current events and politics (discriminant validity). We also uncover evidence of predictive validity: expressed feelings about masking predict future voting more than 6 months later. We recommend SAMI to researchers interested in studying mask resistance in an increasingly polarized political climate, and the intuition behind SAMI could prove useful in other research contexts in which health decisions reflect a conscious comparison to standards held by those who share an identity or will otherwise pass judgment.
Correction
Published: 23 February 2022
Measurement Instruments for the Social Sciences, Volume 4, pp 1-1; https://doi.org/10.1186/s42409-022-00031-4