Scholars traditionally receive career credit for a paper based on where in the author list they appear, but position in an author list often carries little information about what each researcher actually contributed. "Contributorship" refers to a movement to formally document the nature of each researcher's contribution to a project. We discuss the emerging CRediT standard for documenting contributions and describe a web-based app and R package called tenzing that is designed to facilitate its use. tenzing makes it easier for researchers on a project to plan and record their contributions and to document them in a journal article.
Attitude research has capitalized on evaluative conditioning procedures to gain insight into how evaluations are formed and may be changed. In evaluative conditioning, a conditioned stimulus (CS; e.g., an unfamiliar soda brand) is paired with an unconditioned stimulus (US) of affective value (e.g., a pleasant picture). Following this pairing, a change in CS liking may be observed (e.g., the soda brand is liked better). A question with far-reaching theoretical and practical implications is whether the change in CS liking is found when participants feel they do not remember the CS–US pairings at the time an evaluation is produced about the CS. Here, we introduce a new conditional judgment procedure, the two-button-sets (TBS) task, for probing evaluative conditioning effects without feelings of remembering about the valence of the US paired with the CS. In three experiments, the TBS is (1) successfully validated, (2) used to provide preliminary information on the feeling-of-remembering question, and (3) used to examine an affect-consistent bias in memory judgments for CS–US pairings. Results do not support evaluative effects in the absence of feelings of remembering, and they oppose the view that affect-consistent bias is limited to memory uncertainty. We discuss these findings in light of previous evidence and of dual-learning models of attitudes. We also discuss limitations and research avenues related to the new procedure.
The credibility of scientific claims depends upon the transparency of the research products upon which they are based (e.g., study protocols, data, materials, and analysis scripts). As psychology navigates a period of unprecedented introspection, user-friendly tools and services that support open science have flourished. However, the plethora of decisions and choices involved can be bewildering. Here we provide a practical guide to help researchers navigate the process of preparing and sharing the products of their research (e.g., choosing a repository, preparing their research products for sharing, structuring folders, etc.). Being an open scientist means adopting a few straightforward research management practices, which lead to less error-prone, reproducible research workflows. Further, this adoption can be piecemeal: each incremental step towards complete transparency adds positive value. Transparent research practices not only improve the efficiency of individual researchers but also enhance the credibility of the knowledge generated by the scientific community.
Informed Bayesian survival analysis. Bartoš, František; Aust, Frederik; Haaf, Julia M.
BMC Medical Research Methodology, 09/2022, Volume 22, Issue 1. Journal article. Peer reviewed. Open access.
Abstract
Background
We provide an overview of Bayesian estimation, hypothesis testing, and model-averaging and illustrate how they benefit parametric survival analysis. We contrast the Bayesian framework with the currently dominant frequentist approach and highlight advantages, such as the seamless incorporation of historical data, continuous monitoring of evidence, and the incorporation of uncertainty about the true data-generating process.
Methods
We illustrate the application of the outlined Bayesian approaches on an example data set, retrospectively re-analyzing a colon cancer trial. We assess the performance of Bayesian parametric survival analysis and of maximum likelihood survival models with AIC/BIC model selection in fixed-n and sequential designs with a simulation study.
Results
In the retrospective re-analysis of the example data set, the Bayesian framework provided evidence for the absence of a positive treatment effect of adding Cetuximab to the FOLFOX6 regimen on disease-free survival in patients with resected stage III colon cancer. Furthermore, the Bayesian sequential analysis would have terminated the trial 10.3 months earlier than the standard frequentist analysis. In a simulation study with sequential designs, the Bayesian framework on average reached a decision in almost half the time required by the frequentist counterparts, while maintaining the same power and an appropriate false-positive rate. Under model misspecification, the Bayesian framework resulted in a higher false-negative rate than the frequentist counterparts, which manifested as a higher proportion of undecided trials. In fixed-n designs, the Bayesian framework showed slightly higher power, slightly elevated error rates, and lower bias and RMSE when estimating treatment effects in small samples. We found no noticeable differences for survival predictions. We have made the analytic approach readily available to other researchers in the RoBSA R package.
Conclusions
The outlined Bayesian framework provides several benefits when applied to parametric survival analyses. It uses data more efficiently, is capable of considerably shortening the length of clinical trials, and provides a richer set of inferences.
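To make the idea of Bayesian parametric survival analysis concrete, the sketch below works out the simplest possible case: an exponential hazard with a conjugate gamma prior, where censored follow-up times enter the likelihood only through total time at risk. This is an illustrative toy example, not the model-averaged approach implemented in the RoBSA package; the function name and toy data are invented for illustration.

```python
import math

def exponential_survival_posterior(times, events, a=1.0, b=1.0):
    """Posterior of an exponential hazard rate under a Gamma(a, b) prior.

    With an exponential likelihood, the gamma prior is conjugate:
    posterior shape = a + number of observed events,
    posterior rate  = b + total follow-up time (censored times included).
    """
    d = sum(events)   # observed events (event indicator 1; 0 = censored)
    t = sum(times)    # total time at risk across all patients
    return a + d, b + t

# Toy data: follow-up times in months; 1 = event observed, 0 = censored.
times = [2.0, 5.5, 7.0, 3.2, 9.9, 4.1]
events = [1, 1, 0, 1, 0, 1]

shape, rate = exponential_survival_posterior(times, events)
post_mean_hazard = shape / rate               # posterior mean of the rate
median_survival = math.log(2) / post_mean_hazard
print(shape, round(rate, 1), round(post_mean_hazard, 3))
```

Because the posterior is available in closed form, it can be updated one patient at a time, which is what makes the continuous evidence monitoring described above straightforward in the Bayesian framework.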
The article proposes a view of evaluative conditioning (EC) as resulting from judgments based on learning instances stored in memory. It is based on the formal episodic memory model MINERVA 2. Additional assumptions specify how the information retrieved from memory is used to inform specific evaluative dependent measures. The present approach goes beyond previous accounts in that it uses a well-specified formal model of episodic memory; it is however more limited in scope as it aims to explain EC phenomena that do not involve reasoning processes. The article illustrates how the memory-based-judgment view accounts for several empirical findings in the EC literature that are often discussed as evidence for dual-process models of attitude learning. It sketches novel predictions, discusses limitations of the present approach, and identifies challenges and opportunities for its future development.
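The core retrieval computation of MINERVA 2 (Hintzman's similarity-cubed activation) can be sketched briefly; the feature coding and the CS/US vector layout below are illustrative assumptions, not the article's actual implementation.

```python
import numpy as np

def minerva2_echo(probe, traces):
    """Echo intensity and content for a probe in MINERVA 2.

    Each trace is a vector of features in {-1, 0, +1}. Similarity between
    probe and trace is the dot product normalized by the number of features
    nonzero in either vector; activation is similarity cubed, so retrieval
    is dominated by the most similar traces.
    """
    probe = np.asarray(probe, dtype=float)
    traces = np.asarray(traces, dtype=float)
    # Per trace: count features nonzero in probe or trace (at least 1).
    n_rel = np.maximum((np.abs(probe) + np.abs(traces) > 0).sum(axis=1), 1)
    sim = traces @ probe / n_rel
    act = sim**3
    intensity = act.sum()     # summed activation: a familiarity signal
    content = act @ traces    # activation-weighted blend of stored traces
    return intensity, content

# Toy example: CS features in positions 0-2, US valence in position 3.
traces = [
    [1, 1, -1, 1],    # CS paired with a positive US
    [1, 1, -1, 1],
    [-1, 1, 1, -1],   # a different stimulus paired with a negative US
]
intensity, content = minerva2_echo([1, 1, -1, 0], traces)
print(content[3] > 0)   # retrieved valence for the CS probe is positive
```

Probing memory with the CS features alone retrieves a positively valenced echo, which is the kind of memory-based signal the judgment account assumes feeds into evaluative measures.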
The multibridge R package allows a Bayesian evaluation of informed hypotheses Hᵣ applied to frequency data from an independent binomial or multinomial distribution. multibridge uses bridge sampling to efficiently compute Bayes factors for the following hypotheses concerning the latent category proportions θ: (a) hypotheses that postulate equality constraints (e.g., θ₁ = θ₂ = θ₃); (b) hypotheses that postulate inequality constraints (e.g., θ₁ < θ₂ < θ₃ or θ₁ > θ₂ > θ₃); (c) hypotheses that postulate combinations of inequality constraints and equality constraints (e.g., θ₁ < θ₂ = θ₃); and (d) hypotheses that postulate combinations of (a)–(c) (e.g., θ₁ < (θ₂ = θ₃), θ₄). Any informed hypothesis Hᵣ may be compared against the encompassing hypothesis Hₑ that all category proportions vary freely, or against the null hypothesis H₀ that all category proportions are equal. multibridge facilitates the fast and accurate comparison of large models with many constraints and of models for which relatively little posterior mass falls in the restricted parameter space. This paper describes the underlying methodology and illustrates the use of multibridge through fully reproducible examples.
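For a single inequality constraint, the Bayes factor against the encompassing hypothesis has a simple Monte Carlo estimator: the ratio of the posterior to the prior probability that the constraint holds (the encompassing-prior identity of Klugkist and Hoijtink). The Python sketch below uses this simpler estimator rather than multibridge's bridge sampling, which is what makes multibridge preferable precisely when this ratio involves very small probabilities; the function name and counts are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2024)

def bf_ordered_vs_encompassing(counts, draws=200_000):
    """Monte Carlo Bayes factor for Hr: theta1 < theta2 < theta3
    against the encompassing hypothesis He (all proportions free).

    Under a uniform Dirichlet(1, 1, 1) prior, BF_re equals the posterior
    probability of the ordering divided by its prior probability (1/6
    by exchangeability), both estimated here by simple Monte Carlo.
    """
    counts = np.asarray(counts, dtype=float)
    prior = rng.dirichlet(np.ones_like(counts), size=draws)
    post = rng.dirichlet(counts + 1.0, size=draws)  # conjugate update

    def p_ordered(theta):
        return np.mean((theta[:, 0] < theta[:, 1]) & (theta[:, 1] < theta[:, 2]))

    return p_ordered(post) / p_ordered(prior)

# Observed category frequencies consistent with the hypothesized ordering.
bf = bf_ordered_vs_encompassing([10, 30, 60])
print(bf > 1)   # data support theta1 < theta2 < theta3 over He
```

With many constrained categories the prior (and often posterior) probability of the restricted region becomes vanishingly small, so this direct estimator breaks down; bridge sampling, as used by multibridge, remains accurate in that regime.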
Power priors for replication studies. Pawel, Samuel; Aust, Frederik; Held, Leonhard; ...
Test (Madrid, Spain), 03/2024, Volume 33, Issue 1. Journal article. Peer reviewed. Open access.
The ongoing replication crisis in science has increased interest in the methodology of replication studies. We propose a novel Bayesian analysis approach using power priors: The likelihood of the original study's data is raised to the power of α, and then used as the prior distribution in the analysis of the replication data. Posterior distributions and Bayes factor hypothesis tests related to the power parameter α quantify the degree of compatibility between the original and replication study. Inferences for other parameters, such as effect sizes, dynamically borrow information from the original study. The degree of borrowing depends on the conflict between the two studies. The practical value of the approach is illustrated on data from three replication studies, and the connection to hierarchical modeling approaches is explored. We generalize the known connection between normal power priors and normal hierarchical models for fixed parameters and show that normal power prior inferences with a beta prior on the power parameter α align with normal hierarchical model inferences using a generalized beta prior on the relative heterogeneity variance I². The connection illustrates that power prior modeling is unnatural from the perspective of hierarchical modeling, since it corresponds to specifying priors on a relative rather than an absolute heterogeneity scale.
In the field of evaluative conditioning (EC), two opposing theories (propositional single-process theory versus dual-process theory) are currently being discussed in the literature. The present set of experiments tests a crucial prediction to adjudicate between these two theories: Dual-process theory postulates that evaluative conditioning can occur without awareness of the contingency between conditioned stimulus (CS) and unconditioned stimulus (US); in contrast, single-process propositional theory postulates that EC requires CS-US contingency awareness. In a set of three studies, we experimentally manipulate contingency awareness by presenting the CSs very briefly, thereby rendering them unlikely to be processed consciously. We address potential issues with previous studies on EC with subliminal or near-threshold CSs that limited their interpretation. Across two experiments, we consistently found an EC effect for CSs presented for 1000 ms and consistently failed to find an EC effect for briefly presented CSs. In a third pre-registered experiment, we again found evidence for an EC effect with CSs presented for 1000 ms, and we found some indication of an EC effect for CSs presented for 20 ms.
Evaluative conditioning is one of the most widely studied procedures for establishing and changing attitudes. The surveillance task is a highly cited evaluative-conditioning paradigm and one that is claimed to generate attitudes without awareness. The potential for evaluative-conditioning effects to occur without awareness continues to fuel conceptual, theoretical, and applied developments. Yet few published studies have used this task, and most are characterized by small samples and small effect sizes. We conducted a high-powered (N = 1,478 adult participants), preregistered close replication of the original surveillance-task study (Olson & Fazio, 2001). We obtained evidence for a small evaluative-conditioning effect when "aware" participants were excluded using the original criterion, therefore replicating the original effect. However, no such effect emerged when three other awareness criteria were used. We suggest that there is a need for caution when using evidence from the surveillance-task effect to make theoretical and practical claims about "unaware" evaluative-conditioning effects.
Nonserious answering behavior increases noise and reduces experimental power; it is therefore one of the most important threats to the validity of online research. A simple way to address the problem is to ask respondents about the seriousness of their participation and to exclude self-declared nonserious participants from analysis. To validate this approach, a survey was conducted in the week prior to the German 2009 federal election to the Bundestag. Serious participants answered a number of attitudinal and behavioral questions in a more consistent and predictively valid manner than did nonserious participants. We therefore recommend routinely employing seriousness checks in online surveys to improve data validity.