Peer reviewed | Open access
  • p-Hacking and Publication Bias ...
    Friese, Malte; Frankenbach, Julius

    Psychological Methods, 08/2020, Volume 25, Issue 4
    Journal Article

    Science depends on trustworthy evidence. Thus, a biased scientific record is of questionable value: it impedes scientific progress, and the public receives advice based on unreliable evidence, with potentially far-reaching detrimental consequences. Meta-analysis is a technique that can be used to summarize research evidence. However, meta-analytic effect size estimates may themselves be biased, threatening the validity and usefulness of meta-analyses for promoting scientific progress. Here, we offer a large-scale simulation study to elucidate how p-hacking and publication bias distort meta-analytic effect size estimates under a broad array of circumstances reflecting the realities of a variety of research areas. The results revealed that, first, very high levels of publication bias can severely distort the cumulative evidence. Second, p-hacking and publication bias interact: at relatively high and low levels of publication bias, p-hacking does comparatively little harm, but at medium levels of publication bias, p-hacking can contribute considerably to bias, especially when the true effects are very small or approach zero. Third, p-hacking can severely increase the rate of false positives. A key implication is that, in addition to preventing p-hacking, policies in research institutions, funding agencies, and scientific journals need to make the prevention of publication bias a top priority to ensure a trustworthy base of evidence.

    Translational Abstract

    In recent years, the trustworthiness of psychological science has been questioned. A major concern is that many research findings are less robust than the published evidence suggests. Several reasons may contribute to this state of affairs. Two prominently discussed ones are that (a) researchers use questionable research practices (so-called p-hacking) when analyzing the data of their empirical studies, and (b) studies with results consistent with expectations are more likely to be published than studies that "failed" (publication bias). The present large-scale simulation study estimates the extent to which meta-analytic effect sizes are biased by different degrees of p-hacking and publication bias, considering several factors that may influence this bias (e.g., the true effect of the phenomenon of interest). Results show that both p-hacking and publication bias contribute to a potentially severely biased impression of the overall evidence. This is especially the case when the true effect under investigation is very small or does not exist at all. Severe publication bias alone can exert considerable bias; p-hacking exerts considerable bias only when there is also publication bias. However, p-hacking can severely increase the rate of false positives, that is, findings suggesting that a study found a real effect when, in reality, no effect exists. A key implication of the present study is that, in addition to preventing p-hacking, policies in research institutions, funding agencies, and scientific journals need to make the prevention of publication bias a top priority to ensure a trustworthy base of evidence.
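    To make these mechanisms concrete, the Python sketch below simulates the basic logic the abstract describes: studies are p-hacked via optional stopping (one of many possible p-hacking strategies), nonsignificant results are suppressed with some probability (publication bias), and the surviving effect sizes are averaged. This is a minimal illustration only, not the authors' simulation code; the paper's design space is far broader, and every name and number here (run_study, naive_meta, pub_bias = 0.9, the sample sizes) is an illustrative assumption.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def run_study(true_d, n=30, p_hack=False):
    # Simulate one two-group study with standardized true effect true_d.
    # If p_hack is True, crudely mimic "optional stopping": start with a
    # smaller sample, test, and add participants in batches until the
    # result is significant in the expected direction (or batches run out).
    batches, per_batch = (4, n // 2) if p_hack else (1, n)
    x = np.empty(0)
    y = np.empty(0)
    for _ in range(batches):
        x = np.concatenate([x, rng.normal(true_d, 1.0, per_batch)])
        y = np.concatenate([y, rng.normal(0.0, 1.0, per_batch)])
        _, p = stats.ttest_ind(x, y)
        if p < 0.05 and x.mean() > y.mean():
            break  # stop as soon as the test "works"
    pooled_sd = np.sqrt((x.var(ddof=1) + y.var(ddof=1)) / 2)
    return (x.mean() - y.mean()) / pooled_sd, p

def naive_meta(true_d, k=300, pub_bias=0.9, p_hack=False):
    # Run k studies. A study is always "published" if it is significant in
    # the expected direction; otherwise it is published only with
    # probability 1 - pub_bias. The meta-analytic estimate is a naive
    # unweighted mean of the published effect sizes (a deliberate
    # simplification of real meta-analytic weighting).
    published = []
    for _ in range(k):
        d, p = run_study(true_d, p_hack=p_hack)
        if (p < 0.05 and d > 0) or rng.random() > pub_bias:
            published.append(d)
    return float(np.mean(published)), len(published)

for true_d in (0.0, 0.2):
    for hack in (False, True):
        est, m = naive_meta(true_d, p_hack=hack)
        print(f"true d = {true_d:.1f}, p-hacking = {hack!s:5}: "
              f"meta-analytic d ~ {est:.2f} from {m} published studies")

    Under these assumptions, the published average tends to overshoot the true effect, and even a true effect of zero can yield an apparently positive meta-analytic estimate, echoing the abstract's points about interacting biases and inflated false-positive rates.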