The use of scientific computing tools is now customary for solving problems at many levels of complexity in the applied sciences. The scientific community's great need for reliable software provides a continuous stimulus to develop new and better-performing numerical methods that can capture the particular features of the problem at hand. This has been the case in many different areas of numerical analysis, and this Special Issue aims to cover some important developments in various areas of application.
A practical introduction to epidemiology, biostatistics, and research methodology for the whole health care community. This comprehensive text, extensively revised with new material and additional topics, takes a practical approach to introducing health professionals and students to epidemiology, biostatistics, and research methodology. It draws examples from a wide range of topics and covers all of the main contemporary health research methods, including survival analysis, Cox regression, and systematic reviews and meta-analysis, the explanation of which goes beyond introductory concepts. This second edition of Quantitative Methods for Health Research: A Practical Interactive Guide to Epidemiology and Statistics also helps develop the critical skills that will prepare students to move on to more advanced and specialized methods. A clear distinction is made between knowledge and concepts that all students should ensure they understand and those that can be pursued further by readers who wish to do so. Self-assessment exercises throughout the text help students explore and reflect on their understanding, and a program of practical exercises in SPSS (using a prepared data set) helps to consolidate the theory and develop skills and confidence in data handling, analysis, and interpretation.
Highlights of the book include:
* Combining epidemiology and biostatistics to demonstrate the relevance and strength of statistical methods
* Emphasis on the interpretation of statistics, using examples from a variety of public health and health care situations to stress relevance and application
* Use of concepts related to examples of published research to show the application of methods and the balance between ideals and the realities of research in practice
* Integration of practical data analysis exercises to develop skills and confidence
* A student companion website providing guidance on data handling in SPSS and the study data sets referred to in the text

Quantitative Methods for Health Research, Second Edition is a practical learning resource for students, practitioners, and researchers in public health, health care, and related disciplines, providing both a course book and a useful introductory reference.
This book is the first on the market to treat single- and multi-period risk measures (risk functionals) in a thorough, comprehensive manner. It combines the treatment of the properties of these risk measures with the related aspects of decision making under risk.
This book describes the new generation of discrete choice methods, focusing on the many advances made possible by simulation. Researchers use these statistical methods to examine the choices that consumers, households, firms, and other agents make. Each of the major models is covered: logit; generalized extreme value, or GEV (including nested and cross-nested logits); probit; and mixed logit; plus a variety of specifications that build on these basics. Simulation-assisted estimation procedures are investigated and compared, including maximum simulated likelihood, the method of simulated moments, and the method of simulated scores. Procedures for drawing from densities are described, including variance reduction techniques such as antithetics and Halton draws. Recent advances in Bayesian procedures are explored, including the use of the Metropolis-Hastings algorithm and its variant, Gibbs sampling. No other book brings together all of these fields, which have arisen in the past 20 years. The procedures are applicable in many fields, including energy, transportation, environmental studies, health, labor, and marketing.
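To make the simulation idea concrete, here is a minimal sketch of how Halton draws can be used to simulate a mixed logit choice probability. All parameter values and the two-alternative setup are illustrative assumptions, not taken from the book: the random coefficient is assumed Normal(mu, sigma), each low-discrepancy Halton point is transformed into a normal variate, and the conditional logit probability is averaged over draws.

```python
import math
from statistics import NormalDist

def halton(n, base=2):
    # Generate the first n points of the Halton low-discrepancy
    # sequence in (0, 1) for a given prime base.
    seq = []
    for i in range(1, n + 1):
        f, r = 1.0, 0.0
        while i > 0:
            f /= base
            r += f * (i % base)
            i //= base
        seq.append(r)
    return seq

def mixed_logit_prob(v_chosen, v_other, mu, sigma, n_draws=100):
    # Simulated probability for a two-alternative mixed logit with a
    # single random coefficient beta ~ Normal(mu, sigma): transform
    # each Halton point to a normal draw, compute the logit
    # probability conditional on that draw, and average.
    nd = NormalDist(mu, sigma)
    total = 0.0
    for u in halton(n_draws):
        beta = nd.inv_cdf(u)
        e1 = math.exp(beta * v_chosen)
        e2 = math.exp(beta * v_other)
        total += e1 / (e1 + e2)
    return total / n_draws

# Illustrative call: attribute levels 1.0 vs 2.0, beta ~ N(-1.0, 0.5)
p = mixed_logit_prob(v_chosen=1.0, v_other=2.0, mu=-1.0, sigma=0.5)
```

Because Halton points cover the unit interval more evenly than pseudo-random draws, the simulated probability typically stabilizes with far fewer draws; antithetics achieve a similar variance reduction by pairing each draw with its mirror image.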
Abstract: Despite having important biological implications, insertion and deletion (indel) events are often disregarded or mishandled during phylogenetic inference. In multiple sequence alignment, indels are represented as gaps and are estimated without considering the distinct evolutionary histories of insertions and deletions. Consequently, indels are usually excluded from subsequent inference steps, such as ancestral sequence reconstruction and phylogenetic tree search. Here, we introduce indel-aware parsimony (indelMaP), a novel way to treat gaps under the parsimony criterion by considering insertions and deletions as separate evolutionary events and accounting for long indels. By identifying the precise location of an evolutionary event on the tree, we can separate overlapping indel events and use affine gap penalties for long-indel modeling. Our indel-aware approach harnesses the phylogenetic signal from indels, including them in all inference stages. Validation and comparison to state-of-the-art inference tools on simulated data show that indelMaP is best suited to densely sampled datasets with closely to moderately related sequences, where it can reach alignment quality comparable to probabilistic methods and accurately infer ancestral sequences, including indel patterns. Due to its remarkable speed, our method is well suited to epidemiological datasets, eliminating the need for downsampling and enabling the exploitation of the additional information provided by dense taxonomic sampling. Moreover, indelMaP offers new insights into the indel patterns of biologically significant sequences and advances our understanding of genetic variability by treating gaps as crucial evolutionary signals rather than mere artefacts.
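The affine gap penalty mentioned in the abstract can be illustrated with a tiny sketch. The penalty values below are arbitrary placeholders, not indelMaP's actual parameters: the point is only that opening an indel is charged once while extending it is cheap, so one long indel is scored as a single event rather than many independent single-site events.

```python
def affine_gap_cost(length, gap_open=2.5, gap_ext=0.5):
    # Affine penalty: one opening charge plus a small charge per
    # extended position, so a long indel is treated as one event.
    return gap_open + gap_ext * (length - 1)

# A single 10-residue deletion vs. ten independent 1-residue deletions
long_indel = affine_gap_cost(10)      # one opening, nine extensions
ten_short = 10 * affine_gap_cost(1)   # ten separate openings
```

Under a linear (per-site) gap cost the two scenarios would score identically, which is exactly the mishandling of long indels that an affine scheme avoids.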
Theoretical guarantees for causal inference using propensity scores rest in part on the scores behaving like conditional probabilities. However, scores between zero and one do not necessarily behave like probabilities, especially when output by flexible statistical estimators. We perform a simulation study to assess the error in estimating the average treatment effect before and after applying a simple and well-established postprocessing method to calibrate the propensity scores. We observe that postcalibration reduces the error in effect estimation and that larger improvements in calibration result in larger improvements in effect estimation. Specifically, we find that expressive tree-based estimators, which are often initially less well calibrated than logistic regression-based models, tend to show larger improvements relative to logistic regression-based models. Given the improvement in effect estimation and the fact that postcalibration is computationally cheap, we recommend its adoption when modeling propensity scores with expressive models.
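As a rough illustration of the postprocessing step described above, the sketch below calibrates raw propensity scores with isotonic regression via the pool-adjacent-violators algorithm (one common, well-established calibration method; the abstract does not specify which method the study used) and then plugs the calibrated scores into an inverse-probability-weighted estimate of the average treatment effect. The toy data are invented for illustration.

```python
def pav_calibrate(scores, labels):
    # Isotonic calibration (pool-adjacent-violators): fit a monotone
    # map from raw scores to empirical treatment frequencies.
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    blocks = []  # each block is [label_sum, weight]
    for i in order:
        blocks.append([float(labels[i]), 1.0])
        # Merge adjacent blocks whose averages violate monotonicity.
        while (len(blocks) > 1
               and blocks[-2][0] / blocks[-2][1] > blocks[-1][0] / blocks[-1][1]):
            v, w = blocks.pop()
            blocks[-1][0] += v
            blocks[-1][1] += w
    calibrated = [0.0] * len(scores)
    k = 0
    for v, w in blocks:
        for _ in range(int(w)):
            calibrated[order[k]] = v / w
            k += 1
    return calibrated

def ipw_ate(outcomes, treated, propensity, eps=1e-6):
    # Inverse-probability-weighted estimate of the average treatment
    # effect; eps guards against division by zero.
    n = len(outcomes)
    t_term = sum(y * t / max(p, eps)
                 for y, t, p in zip(outcomes, treated, propensity)) / n
    c_term = sum(y * (1 - t) / max(1 - p, eps)
                 for y, t, p in zip(outcomes, treated, propensity)) / n
    return t_term - c_term

# Hypothetical toy data: raw scores from an expressive model,
# treatment indicators, and observed outcomes.
scores = [0.2, 0.4, 0.3, 0.6, 0.8, 0.7]
treated = [0, 1, 0, 1, 1, 0]
outcomes = [1.0, 2.0, 0.5, 2.5, 3.0, 1.5]

cal = pav_calibrate(scores, treated)
ate = ipw_ate(outcomes, treated, cal)
```

Isotonic regression is only one option; Platt scaling (a logistic fit on the raw scores) is the other standard choice, and both are cheap relative to fitting the propensity model itself.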