Journal of Clinical Epidemiology

Journal Information
ISSN / EISSN : 0895-4356 / 1878-5921
Published by: Elsevier BV (DOI prefix: 10.1016)
Total articles ≅ 8,403
Current Coverage
SCOPUS
MEDICUS
MEDLINE
PUBMED
SCIE
Archived in
EBSCO
SHERPA/ROMEO

Latest articles in this journal

G.F.N. Berkelmans, S.H. Read, S. Gudbjörnsdottir, S.H. Wild, S. Franzen, Y. van der Graaf, N.P. Paynter, J.A.N. Dorresteijn
Published: 20 January 2022
Journal of Clinical Epidemiology; https://doi.org/10.1016/j.jclinepi.2022.01.011

The publisher has not yet granted permission to display this abstract.
Lindy Boyette, Corine Latour, Lieuwe de Haan, Jos Twisk
Published: 18 January 2022
Journal of Clinical Epidemiology; https://doi.org/10.1016/j.jclinepi.2022.01.005

Abstract:
OBJECTIVE: To compare estimates of effect and variability obtained from standard linear regression analysis and hierarchical multilevel analysis with those from cross-classified multilevel analysis under various scenarios.
STUDY DESIGN AND SETTING: We performed a simulation study based on a data structure from an observational study in clinical mental health care. We used a Markov chain Monte Carlo (MCMC) approach to simulate 18 scenarios, varying sample size, cluster size, effect size, and between-group variance. For each scenario, we performed standard linear regression, multilevel regression with a random intercept at the patient level, multilevel regression with a random intercept at the nursing-team level, and cross-classified multilevel analysis.
RESULTS: Applying cross-classified multilevel analysis had a negligible influence on the effect estimates. However, ignoring the cross-classification led to underestimation of the standard errors of the covariates at the two cross-classified levels and to invalidly narrow confidence intervals, which may lead to incorrect statistical inference. Varying the sample size, cluster size, effect size, and variance had no meaningful influence on these findings.
CONCLUSION: For cross-classified data structures, using a cross-classified multilevel model yields valid estimates of the precision of effects and thereby supports correct inference.
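As a hedged illustration of the comparison described in this abstract (not the authors' code), the sketch below simulates data with crossed patient and nursing-team random intercepts and contrasts a standard linear regression, which ignores the cross-classification, with a cross-classified multilevel model expressed through variance components in statsmodels; the sample sizes, variances, covariate, and variable names are all illustrative assumptions.

```python
# A minimal sketch, not the authors' code: simulate cross-classified data
# (patients crossed with nursing teams) and compare a standard linear
# regression that ignores the clustering with a cross-classified multilevel
# model. Sizes, variances, and the covariate are illustrative assumptions.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n_patients, n_teams, n_obs = 50, 10, 500

patient = rng.integers(0, n_patients, n_obs)
team = rng.integers(0, n_teams, n_obs)
x_team = rng.normal(size=n_teams)          # a team-level covariate
x = x_team[team]

# True model: fixed slope 0.5 plus crossed random intercepts.
u_patient = rng.normal(scale=1.0, size=n_patients)
u_team = rng.normal(scale=0.5, size=n_teams)
y = 0.5 * x + u_patient[patient] + u_team[team] + rng.normal(size=n_obs)

df = pd.DataFrame({"y": y, "x": x, "patient": patient, "team": team})

# (a) Standard linear regression, ignoring the cross-classification.
ols = smf.ols("y ~ x", data=df).fit()
print("OLS SE of x:", ols.bse["x"])

# (b) Cross-classified multilevel model: crossed random intercepts are
# expressed as variance components within a single all-encompassing group.
vc = {"patient": "0 + C(patient)", "team": "0 + C(team)"}
ccmm = sm.MixedLM.from_formula("y ~ x", groups=np.ones(n_obs),
                               vc_formula=vc, data=df).fit()
print(ccmm.summary())  # compare the SE of x with the OLS value above
```

Comparing the standard error of the team-level covariate across the two outputs should qualitatively reproduce the underestimation the abstract reports when the cross-classification is ignored.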
Eva Draborg, Jane Andreasen, Carsten Bogh Juhl, Jennifer Yost, Klara Brunnhuber, Karen A. Robinson, Hans Lund
Published: 15 January 2022
Journal of Clinical Epidemiology; https://doi.org/10.1016/j.jclinepi.2022.01.007

The publisher has not yet granted permission to display this abstract.
Published: 15 January 2022
Journal of Clinical Epidemiology; https://doi.org/10.1016/j.jclinepi.2022.01.010

Abstract:
Objectives: Interrupted time series (ITS) is a type of non-randomised design commonly used to evaluate public health policy interventions, and the impact of exposures, at the population level. Meta-analysis may be used to combine results from ITS across studies (in the context of systematic reviews) or across sites within the same study. We aimed to examine the statistical approaches, methods, and completeness of reporting in reviews that meta-analyse results from ITS.
Study Design and Setting: Eight electronic databases were searched to identify reviews (published 2000-2019) that meta-analysed at least two ITS. The characteristics of the included reviews, the statistical methods used to analyse the ITS and to meta-analyse their results, the effect measures, and the risk-of-bias assessment tools were extracted.
Results: Of the 4,213 records identified, 54 reviews were included. Nearly all reviews (94%) used two-stage meta-analysis, most commonly fitting a random-effects model (69%). Among the 41 reviews that re-analysed the ITS, linear regression (39%) and ARIMA (20%) were the most commonly used methods; 38% adjusted for autocorrelation. The most commonly meta-analysed effect measure was an immediate level change (46/54 reviews). Reporting of the statistical methods and ITS characteristics was often incomplete.
Conclusion: Improvement is needed in the conduct and reporting of reviews that meta-analyse results from ITS.
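As a hedged sketch of the two-stage approach this abstract describes (not the review's own code), the example below fits a segmented regression to each simulated site, extracts the immediate level-change estimate and its standard error, and pools the estimates with a DerSimonian-Laird random-effects model; the sites, series lengths, and effect sizes are assumptions for illustration.

```python
# A minimal sketch, not the review's code: two-stage meta-analysis of
# interrupted time series results. Stage 1 fits a segmented regression per
# (simulated) site and extracts the immediate level-change estimate; stage 2
# pools the estimates with DerSimonian-Laird random-effects weights.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

def simulate_site(n_pre=24, n_post=24, level_change=2.0):
    """Monthly outcome with a linear trend and an immediate level change."""
    t = np.arange(n_pre + n_post)
    post = (t >= n_pre).astype(int)
    y = 10 + 0.1 * t + level_change * post + rng.normal(size=t.size)
    return pd.DataFrame({"y": y, "time": t, "post": post,
                         "time_after": post * (t - n_pre)})

# Stage 1: segmented regression per site; the coefficient on `post` is the
# immediate level change. (A HAC/Newey-West fit, e.g. fit(cov_type="HAC",
# cov_kwds={"maxlags": 1}), could be used to adjust for autocorrelation.)
estimates, variances = [], []
for _ in range(5):
    site = simulate_site(level_change=rng.normal(2.0, 0.5))
    fit = smf.ols("y ~ time + post + time_after", data=site).fit()
    estimates.append(fit.params["post"])
    variances.append(fit.bse["post"] ** 2)
est, var = np.array(estimates), np.array(variances)

# Stage 2: DerSimonian-Laird random-effects pooling.
w = 1.0 / var
q = np.sum(w * (est - np.sum(w * est) / np.sum(w)) ** 2)
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (q - (len(est) - 1)) / c)   # between-site heterogeneity
w_re = 1.0 / (var + tau2)
pooled = np.sum(w_re * est) / np.sum(w_re)
pooled_se = np.sqrt(1.0 / np.sum(w_re))
print(f"Pooled level change: {pooled:.2f} (SE {pooled_se:.2f}, tau^2 = {tau2:.2f})")
```

The stage-1 model here is a simple level-and-slope segmented regression; as the abstract notes, only a minority of the reviewed re-analyses adjusted for autocorrelation, which the HAC option in the comment would address.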