  • Estimating the Difference B...
    Polanin, Joshua R.; Tanner-Smith, Emily E.; Hennessy, Emily A.

    Review of educational research, 03/2016, Volume: 86, Issue: 1
    Journal Article

    Practitioners and policymakers rely on meta-analyses to inform decision making around the allocation of resources to individuals and organizations. It is therefore paramount to consider the validity of these results. A well-documented threat to the validity of research synthesis results is the presence of publication bias, a phenomenon in which studies with large and/or statistically significant effects are more likely to be published than studies with small or null effects. We investigated this phenomenon empirically by reviewing meta-analyses published in top-tier journals between 1986 and 2013 that quantified the difference between effect sizes from published and unpublished research. We reviewed 383 meta-analyses, of which 81 had sufficient information to calculate an effect size. Results indicated that published studies yielded larger effect sizes than unpublished studies (d = 0.18, 95% confidence interval [0.10, 0.25]). Moderator analyses revealed that the difference was larger in meta-analyses that included a wide range of unpublished literature. We conclude that intervention researchers require continued support to publish null findings and that meta-analyses should include unpublished studies to mitigate the potential bias from publication status.
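
    The sketch below is not the authors' code; it is a minimal illustration, under simplified assumptions, of how the gap between published and unpublished effect sizes within a single meta-analysis could be expressed as a standardized mean difference (Cohen's d) with a normal-approximation 95% confidence interval. The study-level effect sizes and function names are hypothetical.

    ```python
    import math

    def pooled_sd(a, b):
        # Pooled standard deviation of two samples of study-level effect sizes.
        na, nb = len(a), len(b)
        ma, mb = sum(a) / na, sum(b) / nb
        va = sum((x - ma) ** 2 for x in a) / (na - 1)
        vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
        return math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))

    def smd_with_ci(published, unpublished, z=1.96):
        # Cohen's d for published minus unpublished mean effect sizes,
        # with an approximate 95% CI from the usual large-sample variance.
        n1, n2 = len(published), len(unpublished)
        d = (sum(published) / n1 - sum(unpublished) / n2) / pooled_sd(published, unpublished)
        var_d = (n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2))
        se = math.sqrt(var_d)
        return d, (d - z * se, d + z * se)

    # Hypothetical study-level effect sizes from one meta-analysis.
    published_es = [0.35, 0.42, 0.28, 0.51, 0.30]
    unpublished_es = [0.12, 0.20, 0.05, 0.25]

    d, (lo, hi) = smd_with_ci(published_es, unpublished_es)
    print(f"d = {d:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
    ```

    In the meta-review itself, such within-synthesis differences would then be pooled across the 81 eligible meta-analyses to arrive at an overall estimate such as the reported d = 0.18.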