
Estimating the reproducibility of psychological science

Open Science Collaboration (2015)

Published
Aug 28, 2015
Journal
Science · Vol. 349 · No. 6251

At a Glance

100 replications, 5 indicators, 1 uncomfortable conclusion — psychology's published effects are substantially weaker than reported.

Summary

270 researchers replicated 100 studies from three major psychology journals (PSCI, JPSP, JEP:LMC). Judged by five indicators of replication success: only 36% of replications achieved p < .05 (vs. 97% of originals); the mean replication effect size was half the original; 47% of original effect sizes fell within the replication 95% CI; 39% of effects were subjectively rated as having replicated; and meta-analytically combining the original and replication left 68% of effects significant.
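Two of these indicators can be made concrete for a single original/replication pair. The sketch below is illustrative, not the paper's analysis code: it assumes correlation-type effect sizes converted via Fisher's z transform (the paper's common metric), and the study sizes in the usage example are hypothetical.

```python
import math

def fisher_z(r):
    """Fisher's z transform of a correlation coefficient."""
    return math.atanh(r)

def indicators(r_orig, n_orig, r_rep, n_rep):
    """Compute two replication indicators for one study pair:
    (a) does the original effect size fall inside the replication's 95% CI,
    (b) is the fixed-effect (inverse-variance-weighted) combination of the
        original and replication estimates significant at the 5% level.
    Effect sizes are correlations; SE of Fisher's z is 1/sqrt(n - 3)."""
    crit = 1.96  # two-sided 5% normal critical value
    z_o, z_r = fisher_z(r_orig), fisher_z(r_rep)
    se_o = 1 / math.sqrt(n_orig - 3)
    se_r = 1 / math.sqrt(n_rep - 3)
    # (a) CI coverage, evaluated on the Fisher-z scale
    lo, hi = z_r - crit * se_r, z_r + crit * se_r
    orig_in_ci = lo <= z_o <= hi
    # (b) fixed-effect meta-analysis of the two estimates
    w_o, w_r = 1 / se_o**2, 1 / se_r**2
    z_comb = (w_o * z_o + w_r * z_r) / (w_o + w_r)
    se_comb = math.sqrt(1 / (w_o + w_r))
    combined_sig = abs(z_comb / se_comb) > crit
    return orig_in_ci, combined_sig

# Hypothetical pair: original r = .40 (n = 80), replication r = .20 (n = 120).
# The original estimate falls outside the replication CI, yet the pooled
# estimate remains significant — the pattern behind the 47% vs. 68% figures.
print(indicators(0.40, 80, 0.20, 120))  # → (False, True)
```

This illustrates why the indicators can disagree: a replication may fail to contain the original estimate (evidence the original was inflated) while the pooled evidence still points to a nonzero effect.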

Cognitive psychology effects replicated better than social psychology effects (50% vs. 25% by significance). Replication success was predicted by the strength of the original evidence (lower p-values and larger effect sizes) rather than by team expertise or replication quality. The authors attribute much of the gap to publication and reporting biases inflating original estimates, and emphasize that the results do not establish any individual effect as true or false but show that the field's cumulative evidence base is less certain than assumed.

Method Snapshot

Pre-registered replications

Background

Basic statistics

The results are interesting (poor reproducibility; halved effect sizes), but the conclusions are overly cautious, possibly out of fear of the community's reaction.

ES