PsychologySkim
beginner

The puzzling relationship between multi-laboratory replications and meta-analyses of the published literature

Molly Lewis et al. (2022)

Published
Feb 1, 2022
Journal
Royal Society Open Science · Vol. 9 · No. 2
DOI
10.1098/rsos.211499

At a Glance

Meta-analyses and multi-laboratory replications (MLRs) disagree, but their estimates are still strongly correlated (r = 0.72), so don't throw the baby out with the bathwater.

Summary

A re-analysis of and commentary on Kvarven et al. (2019). Lewis et al. show that although meta-analytic estimates are systematically larger than the corresponding MLR estimates, the two are strongly correlated (r = 0.72), meaning meta-analyses are informative rather than worthless. Using sensitivity analyses for publication bias (worst-case selection models), they find that publication bias alone cannot fully account for the discrepancy in 8 of 15 cases. They consider alternative explanations: genuine effect heterogeneity arising from minor methodological differences, differential intervention fidelity, and possible context-sensitivity of social-psychological effects. The core conclusion: the discrepancy is real and still largely unexplained.

Method Snapshot

re-analysis

Background

Read Kvarven et al. (2019) first

An alternative view of the results of Kvarven et al. (2019): the meta-analyses themselves may not be to blame; something else drives the discrepancy. Useful for general context and variety.
