Peer reviewed, Open access
  • From Discovery to Justifica...
    Witte, Erich H; Zenker, Frank

    Frontiers in Psychology, 10/2017, Volume: 8, Issue: OCT
    Journal Article

    The gold standard for an empirical science is the replicability of its research results. But the estimated average replicability rate of key effects that top-tier psychology journals report falls between 36 and 39% (objective vs. subjective rate; Open Science Collaboration, 2015). So the standard mode of applying null-hypothesis significance testing (NHST) fails to adequately separate stable from random effects. Therefore, NHST does not fully convince as a statistical inference strategy. We argue that the replicability crisis is "home-made" because more sophisticated strategies can deliver results whose successful replication is sufficiently probable. Thus, we can overcome the replicability crisis by integrating empirical results into genuine research programs. Instead of continuing to narrowly evaluate only the stability of data against random fluctuations (the context of discovery), such programs evaluate rival hypotheses against stable data (the context of justification).
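    To give a concrete sense of why a significant result alone need not replicate, the following Python sketch simulates a pool of NHST-style studies and direct replication attempts. All parameters (share of true effects, effect size, sample sizes) are illustrative assumptions, not figures from the paper; under these assumptions the share of significant originals that replicate comes out similarly low to the 36-39% range the Open Science Collaboration reported, without any questionable research practices in the simulation.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)

# Illustrative assumptions (not values from the paper):
n_studies = 10_000   # hypothetical pool of original studies
prop_true = 0.5      # share of studies probing a real (stable) effect
effect_d = 0.4       # standardized effect size of real effects (Cohen's d)
n_per_group = 30     # per-group sample size, original and replication
alpha = 0.05         # conventional NHST significance threshold

def significant(d, n, rng):
    """Run one two-sample t-test with true standardized effect d."""
    a = rng.normal(0.0, 1.0, n)
    b = rng.normal(d, 1.0, n)
    return stats.ttest_ind(a, b).pvalue < alpha

published = replicated = 0
for _ in range(n_studies):
    d = effect_d if rng.random() < prop_true else 0.0
    if significant(d, n_per_group, rng):        # original study "finds" the effect
        published += 1
        if significant(d, n_per_group, rng):    # direct replication attempt
            replicated += 1

print(f"Share of significant originals that replicate: {replicated / published:.2f}")
```

    The point of the sketch is the abstract's own point: modestly powered tests filtered only by p < .05 pass through a mix of stable and random effects, so significance by itself does not separate the two.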