Given that an effect size of d = .4 is a good first estimate of the smallest effect size of interest in psychological research, we already need over 50 participants for a simple comparison of two within-participants conditions if we want to run a study with 80% power. This exceeds current practice. In addition, as soon as a between-groups variable or an interaction is involved, samples of 100, 200, or even more participants are needed. As long as we do not accept these facts, we will keep running underpowered studies with unclear results. Addressing the issue requires a change in the way research is evaluated by supervisors, examiners, reviewers, and editors. The present paper describes reference numbers for the designs psychologists use most often, including single-variable between-groups and repeated-measures designs with two and three levels, and two-factor designs involving either two repeated-measures variables or one between-groups variable and one repeated-measures variable (split-plot design). The numbers are given for the traditional, frequentist analysis with p < .05 and for Bayesian analysis with BF > 10. These numbers provide researchers with a standard for determining (and justifying) the sample size of an upcoming study. The article also describes how researchers can improve the power of their study by including multiple observations per condition per participant.
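The within-participants case above can be sketched numerically: a minimal power analysis that searches for the smallest sample size giving 80% power for a paired t-test at d = 0.4. This is an illustrative sketch using the exact noncentral-t power formula, not the paper's own tables.

```python
# Minimal power-analysis sketch: smallest n for a two-sided paired
# (within-participants) t-test with d = 0.4, alpha = .05, power = .80.
import math
from scipy.stats import t, nct

def paired_t_power(n, d=0.4, alpha=0.05):
    """Exact power of a two-sided paired t-test via the noncentral t."""
    df = n - 1
    ncp = d * math.sqrt(n)              # noncentrality parameter
    t_crit = t.ppf(1 - alpha / 2, df)   # two-sided critical value
    return (1 - nct.cdf(t_crit, df, ncp)) + nct.cdf(-t_crit, df, ncp)

n = 5
while paired_t_power(n) < 0.80:         # search for the smallest adequate n
    n += 1
print(n, round(paired_t_power(n), 3))   # just over 50 participants
```

The same search, with the noncentrality parameter adjusted for group sizes, reproduces the much larger samples the abstract cites once between-groups variables or interactions enter the design.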
The purpose of this study is to construct and implement lessons emphasizing “model versatilization,” and to clarify the effectiveness and challenges of these lessons by examining students’ mathematical modeling competencies. “Model versatilization” refers to activities intended to construct a mathematical model that can be applied to various structurally similar real events, through the process of creating mathematical models and expanding their range of applicability. To this end, lessons emphasizing “model versatilization” were structured based on a series of newly developed modeling materials and a lesson-design framework. Lesson protocols, students’ worksheets, and learning impressions were then analyzed. As a result, students demonstrated modeling competencies related to an awareness of similarities between problems of other real events and the mathematical model created for one real-event problem, thus demonstrating one aspect of the effectiveness of the lessons.
We present an early version of a Susceptible–Exposed–Infected–Recovered–Deceased (SEIRD) mathematical model based on partial differential equations coupled with a heterogeneous diffusion model. The model describes the spatio-temporal spread of the COVID-19 pandemic, and aims to capture dynamics also based on human habits and geographical features. To test the model, we compare the outputs generated by a finite-element solver with measured data over the Italian region of Lombardy, which has been heavily impacted by this crisis between February and April 2020. Our results show a strong qualitative agreement between the simulated forecast of the spatio-temporal COVID-19 spread in Lombardy and epidemiological data collected at the municipality level. Additional simulations exploring alternative scenarios for the relaxation of lockdown restrictions suggest that reopening strategies should account for local population densities and the specific dynamics of the contagion. Thus, we argue that data-driven simulations of our model could ultimately inform health authorities to design effective pandemic-arresting measures and anticipate the geographical allocation of crucial medical resources.
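The reaction-diffusion idea behind such PDE models can be illustrated with a toy one-dimensional diffusive SIR sketch solved by explicit finite differences. All parameter values are hypothetical, and this is a stand-in for the concept only, not the paper's finite-element SEIRD solver.

```python
import numpy as np

# Toy 1-D reaction-diffusion SIR model (hypothetical parameters): infection
# grows locally and spreads spatially via diffusion, a simplified analogue of
# the PDE-based SEIRD formulation. Explicit Euler in time, central differences
# in space, periodic boundaries.
nx, L = 100, 100.0                    # grid points, domain length (km)
dx = L / nx
dt = 0.01                             # time step (days), small for stability
beta, gamma, nu = 0.4, 0.1, 0.5       # infection, recovery, diffusion rates

S = np.ones(nx)                       # susceptible density (normalized)
I = np.zeros(nx)
I[nx // 2] = 0.01                     # seed a small outbreak at the center
S[nx // 2] -= 0.01
R = np.zeros(nx)

def laplacian(u):
    # second-order central difference with periodic boundaries
    return (np.roll(u, 1) + np.roll(u, -1) - 2 * u) / dx**2

for _ in range(int(30 / dt)):         # simulate 30 days
    new_inf = beta * S * I            # local transmission term
    recov = gamma * I                 # local recovery term
    dS = -new_inf + nu * laplacian(S)
    dI = new_inf - recov + nu * laplacian(I)
    S += dt * dS
    I += dt * dI
    R += dt * recov

print(f"peak infected density after 30 days: {I.max():.3f}")
```

The heterogeneous-diffusion aspect of the actual model would correspond to making `nu` vary in space with population density and geography.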
We propose a compartmental mathematical model for the spread of the COVID-19 disease with a special focus on the transmissibility of super-spreader individuals. We compute the basic reproduction number threshold, study the local stability of the disease-free equilibrium in terms of the basic reproduction number, and investigate the sensitivity of the model with respect to the variation of each of its parameters. Numerical simulations show the suitability of the proposed COVID-19 model for the outbreak that occurred in Wuhan, China.
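The threshold role of the basic reproduction number can be shown with a generic compartmental sketch. This uses a plain SIR model with hypothetical parameters, not the authors' super-spreader model: when R0 = beta/gamma exceeds 1, the disease-free equilibrium is unstable and an outbreak depletes the susceptible pool.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Generic SIR sketch (hypothetical parameters): R0 = beta/gamma acts as the
# threshold governing stability of the disease-free equilibrium (S, I, R) = (N, 0, 0).
beta, gamma, N = 0.5, 0.2, 1.0       # transmission rate, recovery rate, population
R0 = beta / gamma                    # basic reproduction number
print(f"R0 = {R0:.2f}: disease-free equilibrium is "
      f"{'unstable' if R0 > 1 else 'locally stable'}")

def sir(t, y):
    S, I, R = y
    new_inf = beta * S * I / N
    return [-new_inf, new_inf - gamma * I, gamma * I]

sol = solve_ivp(sir, (0, 200), [0.999, 0.001, 0.0])
S_inf = sol.y[0, -1]
print(f"final susceptible fraction: {S_inf:.3f}")  # large outbreak since R0 > 1
```

A sensitivity analysis like the one in the paper would then examine how R0 and the outbreak size respond to small perturbations of `beta`, `gamma`, and the additional super-spreader parameters.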
Mutations in ubiquitously expressed presenilin genes (PSENs) lead to early-onset familial Alzheimer's disease (FAD), but patients carrying these mutations also suffer from heart disease. To elucidate the cardiac-myocyte-specific effects of PSEN1 ΔE9, we studied cardiomyocytes derived from induced pluripotent stem cells (iPSC-CMs) from patients carrying the AD-causing PSEN1 exon 9 deletion (PSEN1 ΔE9). When compared with their isogenic controls, PSEN1 ΔE9 cardiomyocytes showed increased sarcoplasmic reticulum (SR) Ca2+ leak that was resistant to blockage of ryanodine receptors (RyRs) by tetracaine or of inositol trisphosphate receptors (IP3Rs) by 2-APB. The SR Ca2+ leak did not affect the electrophysiological properties of the hiPSC-CMs, but experiments and in silico simulations indicate that the leak induces a diastolic buildup of Ca2+ near the perinuclear SR and reduces the releasable Ca2+ during systole. This demonstrates that the PSEN1 ΔE9-induced SR Ca2+ leak has specific effects in iPSC-CMs, reflecting their unique structural and calcium-signaling features. The results shed light on the physiological and pathological mechanisms of PSEN1 in cardiac myocytes and explain the intricacies of comorbidity associated with AD-causing mutations in PSEN1.
•PSEN1 ΔE9 mutation causes sarcoplasmic reticulum Ca2+ leak in hiPSC-cardiomyocytes.
•Increased Ca2+ leak causes a diastolic buildup of Ca2+ near the perinuclear SR.
•SR Ca2+ leak reduces the releasable Ca2+ during systole.
•The leak does not affect electrical properties of the hiPSC-cardiomyocytes.
•In silico simulations reveal complex autoregulation of SR Ca2+ loading in iPSC-CMs.
The mathematical modelling of the coronavirus disease-19 (COVID-19) pandemic has been attempted by a wide range of researchers from the very beginning of cases in India. Initial analysis of the available models revealed large variations in scope, assumptions, predictions, course, effect of interventions, effect on health-care services, and so on. Thus, a rapid review was conducted for narrative synthesis and to assess the correlation between predicted and actual values of cases in India.
A comprehensive, two-step search strategy was adopted, wherein databases such as Medline, Google Scholar, MedRxiv, and BioRxiv were searched. This was followed by hand-searching for articles and contacting known modellers for unpublished models. The data from the included studies were extracted independently by two investigators and checked by a third researcher.
Based on the literature search, 30 articles were included in this review. For the narrative synthesis, data from the studies were summarized in terms of assumptions, model used, predictions, main recommendations, and findings. The Pearson correlation coefficient (r) between predicted and actual values (n = 20) was 0.7 (p = 0.002), with R2 = 0.49. For the Susceptible, Infected, Recovered (SIR) model and its variants (n = 16), r was 0.65 (p = 0.02). The correlation for long-term predictions could not be assessed owing to a paucity of information.
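The correlation analysis above can be sketched as follows, using made-up predicted and observed case counts (not the review's extracted data) purely to show the computation of r, its p-value, and R2.

```python
import numpy as np
from scipy.stats import pearsonr

# Illustrative sketch of the review's correlation analysis: Pearson's r
# between model-predicted and actually observed case counts. The numbers
# below are hypothetical placeholders, not data extracted by the review.
predicted = np.array([1200, 3400, 8000, 15000, 30000, 52000])
actual    = np.array([1000, 3100, 9000, 14000, 35000, 48000])

r, p = pearsonr(predicted, actual)
r_squared = r ** 2   # share of variance in actual values explained by predictions
print(f"r = {r:.2f}, p = {p:.4f}, R^2 = {r_squared:.2f}")
```

In the review itself this was done once over all model-prediction pairs (n = 20) and once restricted to SIR-type models (n = 16).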
The review has shown the importance of assumptions and a strong correlation for short-term predictions, but uncertainty for long-term predictions. Thus, short-term predictions may be revised as more data become available. The assumptions, too, need to be expanded and firmed up as the pandemic evolves.
Joint modeling of decisions and neural activation has the potential to provide significant advances in linking brain and behavior. However, methods of joint modeling have been limited by difficulties in estimation, often due to high dimensionality and the challenges of simultaneous estimation. In the current article, we propose a method of model estimation that draws on state-of-the-art Bayesian hierarchical modeling techniques and uses factor analysis as a means of dimensionality reduction and inference at the group level. This hierarchical factor approach can adopt any model for the individual and distill the relationships of its parameters across individuals through a factor structure. We demonstrate the significant dimensionality reduction gained by factor analysis and good parameter recovery, illustrate a variety of factor-loading constraints that can be used for different purposes and research questions, and present three applications of the method to previously analyzed data. We conclude that this method provides a flexible and usable approach with interpretable outcomes that are primarily data-driven, in contrast to the largely hypothesis-driven methods often used in joint modeling. Although we focus on joint modeling methods, this model-based estimation approach could be used for any high-dimensional modeling problem. We provide open-source code and accompanying tutorial documentation to make the method accessible to any researcher.
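The core dimensionality-reduction idea can be sketched with an ordinary (non-Bayesian) factor analysis on simulated per-participant parameters. This is a simplified stand-in for the authors' Bayesian hierarchical implementation; the data and dimensions are invented for illustration.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Sketch of factor-analytic dimensionality reduction at the group level:
# each participant's vector of (behavioral + neural) model parameters is
# distilled into a few latent factors. Simulated data, hypothetical sizes;
# not the authors' Bayesian hierarchical estimator.
rng = np.random.default_rng(0)
n_subjects, n_params, n_factors = 60, 12, 3

# Simulate parameters generated from a low-rank latent structure plus noise.
loadings = rng.normal(size=(n_params, n_factors))
scores = rng.normal(size=(n_subjects, n_factors))
params = scores @ loadings.T + 0.1 * rng.normal(size=(n_subjects, n_params))

fa = FactorAnalysis(n_components=n_factors).fit(params)
latent = fa.transform(params)        # (n_subjects, n_factors)
print(latent.shape)                  # 12 parameters summarized by 3 factors
```

The factor-loading constraints discussed in the article correspond, in this sketch, to fixing selected entries of the loading matrix to zero so that specific factors link only designated behavioral or neural parameters.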