Clustered survival data are encountered in many scientific disciplines, including human and veterinary medicine, biology, epidemiology, public health, and demography. Frailty models provide a powerful tool to analyse clustered survival data. In contrast to the large number of research publications on frailty models, relatively few statistical software packages contain frailty models, and it is difficult for statistical practitioners and graduate students to understand frailty models from the existing literature. This book provides an in-depth discussion and explanation of the basics of frailty model methodology for such readers. The discussion includes parametric and semiparametric frailty models and accelerated failure time models. Common techniques to fit frailty models include the EM algorithm, penalised likelihood techniques, Laplacian integration, and Bayesian techniques. More advanced frailty models for hierarchical data are also included. Real-life examples are used to demonstrate how particular frailty models can be fitted and how the results should be interpreted. The programs to fit all the worked-out examples in the book are available on the Springer website, with most of the programs developed in the freeware packages R and WinBUGS. The book starts with a brief overview of some basic concepts in classical survival analysis, collecting what is needed for reading the chapters on the more complex frailty models.
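The shared frailty idea behind these models can be sketched in a few lines: each cluster carries an unobserved multiplicative frailty acting on the hazard, which induces dependence among the survival times within a cluster. A minimal illustrative simulation in Python (the book's own programs are in R and WinBUGS; all names and parameter values below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_shared_frailty(n_clusters=2000, cluster_size=2,
                            baseline_rate=1.0, frailty_var=0.5):
    """Simulate clustered survival times under a shared gamma frailty.

    Each cluster i draws a frailty Z_i ~ Gamma(shape=1/theta, scale=theta),
    so E[Z] = 1 and Var[Z] = theta.  Conditional on Z_i, event times in
    the cluster are exponential with hazard Z_i * baseline_rate.
    """
    theta = frailty_var
    z = rng.gamma(shape=1.0 / theta, scale=theta, size=n_clusters)
    hazards = np.repeat(z, cluster_size) * baseline_rate
    times = rng.exponential(1.0 / hazards).reshape(n_clusters, cluster_size)
    return times

times = simulate_shared_frailty()
# The shared frailty induces positive within-cluster dependence,
# visible here as a positive rank correlation between cluster members.
rho = np.corrcoef(times[:, 0].argsort().argsort(),
                  times[:, 1].argsort().argsort())[0, 1]
```

Fitting such a model (rather than simulating from it) is exactly where the EM, penalised likelihood, and Bayesian techniques covered in the book come in.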
How powerful new methods in nonlinear control engineering can be applied to neuroscience, from fundamental model formulation to advanced medical applications.
In this paper, we provide efficient estimators and honest confidence bands for a variety of treatment effects, including local average treatment effects (LATE) and local quantile treatment effects (LQTE), in data-rich environments. We can handle very many control variables, endogenous receipt of treatment, heterogeneous treatment effects, and function-valued outcomes. Our framework covers the special case of exogenous receipt of treatment, either conditional on controls or unconditionally as in randomized controlled trials. In the latter case, our approach produces efficient estimators and honest bands for (functional) average treatment effects (ATE) and quantile treatment effects (QTE). To make informative inference possible, we assume that key reduced-form predictive relationships are approximately sparse. This assumption allows the use of regularization and selection methods to estimate those relations, and we provide methods for post-regularization and post-selection inference that are uniformly valid (honest) across a wide range of models. We show that a key ingredient enabling honest inference is the use of orthogonal or doubly robust moment conditions in estimating certain reduced-form functional parameters. We illustrate the use of the proposed methods with an application to estimating the effect of 401(k) eligibility and participation on accumulated assets. The results on program evaluation are obtained as a consequence of more general results on honest inference in a general moment-condition framework, which arises from structural equation models in econometrics. Here, too, the crucial ingredient is the use of orthogonal moment conditions, which can be constructed from the initial moment conditions.
We provide results on honest inference for (function-valued) parameters within this general framework, where any high-quality machine learning methods (e.g., boosted trees, deep neural networks, random forests, and their aggregated and hybrid versions) can be used to learn the nonparametric/high-dimensional components of the model. These include a number of supporting auxiliary results that are of major independent interest: namely, we (1) prove uniform validity of a multiplier bootstrap, (2) offer a uniformly valid functional delta method, and (3) provide results for sparsity-based estimation of regression functions for function-valued outcomes.
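The multiplier bootstrap in (1) can be illustrated in a toy one-dimensional setting: rather than resampling observations, the centered scores are perturbed by i.i.d. standard normal multipliers, and the quantiles of the perturbed averages calibrate the band. A hedged sketch only; the paper's uniform, function-valued version is substantially more involved:

```python
import numpy as np

rng = np.random.default_rng(1)

def multiplier_bootstrap_ci(x, n_boot=2000, alpha=0.05):
    """Gaussian multiplier bootstrap interval for a scalar mean.

    Instead of resampling observations, perturb the centered scores with
    i.i.d. N(0, 1) multipliers:  T_b = mean(xi_b * (x - mean(x))).
    The (1 - alpha) quantile of |T_b| calibrates the band half-width.
    """
    n = x.size
    centered = x - x.mean()
    xi = rng.standard_normal((n_boot, n))       # one multiplier per observation per draw
    t_boot = (xi * centered).mean(axis=1)
    crit = np.quantile(np.abs(t_boot), 1 - alpha)
    return x.mean() - crit, x.mean() + crit

x = rng.normal(loc=2.0, scale=1.0, size=500)
lo, hi = multiplier_bootstrap_ci(x)
```

In the functional case the same multiplier draws are shared across all points of the function, which is what delivers bands that are honest uniformly over the index set.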
Statistics in Medicine, Third Edition makes medical statistics easy to understand for students, practicing physicians, and researchers. The book begins with databases from clinical medicine and uses such data to give multiple worked-out illustrations of every method. The text opens with how to plan studies from conception to publication and what to do with your data, and follows with step-by-step instructions for biostatistical methods, from the simplest levels (averages, bar charts) progressively to the more sophisticated methods now being seen in medical articles (multiple regression, noninferiority testing). Examples are given from almost every medical specialty and from dentistry, nursing, pharmacy, and health care management. A preliminary guide is given to tailor sections of the text to various lengths of biostatistical courses.
Key Features:
* User-friendly format includes medical examples, step-by-step methods, and check-yourself exercises appealing to readers with little or no statistical background, across medical and biomedical disciplines.
* Facilitates stand-alone methods rather than a required sequence of reading and references to prior text.
* Covers trial randomization, treatment ethics in medical research, imputation of missing data, evidence-based medical decisions, how to interpret medical articles, noninferiority testing, meta-analysis, screening, number needed to treat, and epidemiology.
* Fills the gap left in other medical statistics books between the reader's knowledge of how to go about research and their coverage of how to analyze the results of that research.
New in this Edition:
* New chapters on planning research, managing data and analysis, Bayesian statistics, measuring association and agreement, and questionnaires and surveys.
* New sections on what tests and descriptive statistics to choose, false discovery rate, interim analysis, bootstrapping, Bland-Altman plots, Markov chain Monte Carlo (MCMC), and Deming regression.
* Expanded coverage of probability, statistical methods and tests relatively new to medical research, ROC curves, experimental design, and survival analysis.
* 35 databases in Excel format used in the book can be downloaded and transferred into whatever format is needed, along with PowerPoint slides of figures, tables, and graphs from the book, on the companion site, http://www.elsevierdirect.com/companion.jsp?ISBN=9780123848642.
* Medical subject index offers additional search capabilities.
Bayesian Models. Hobbs, N. Thompson; Hooten, Mevin B.
2015-08-04
eBook
Bayesian modeling has become an indispensable tool for ecological research because it is uniquely suited to deal with complexity in a statistically coherent way. This textbook provides a comprehensive and accessible introduction to the latest Bayesian methods, in language ecologists can understand. Unlike other books on the subject, this one emphasizes the principles behind the computations, giving ecologists a big-picture understanding of how to implement this powerful statistical approach. Bayesian Models is an essential primer for non-statisticians. It begins with a definition of probability and develops a step-by-step sequence of connected ideas, including basic distribution theory, network diagrams, hierarchical models, Markov chain Monte Carlo, and inference from single and multiple models. This unique book places less emphasis on computer coding, favoring instead a concise presentation of the mathematical statistics needed to understand how and why Bayesian analysis works. It also explains how to write out properly formulated hierarchical Bayesian models and use them in computing, research papers, and proposals. This primer enables ecologists to understand the statistical principles behind Bayesian modeling and apply them to research, teaching, policy, and management.
* Presents the mathematical and statistical foundations of Bayesian modeling in language accessible to non-statisticians
* Covers basic distribution theory, network diagrams, hierarchical models, Markov chain Monte Carlo, and more
* Deemphasizes computer coding in favor of basic principles
* Explains how to write out properly factored statistical expressions representing Bayesian models
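As a taste of the Markov chain Monte Carlo machinery the book covers, here is a minimal random-walk Metropolis sampler for the mean of a normal model. The model, prior, and tuning constants are illustrative choices made for this sketch, not examples taken from the book:

```python
import numpy as np

rng = np.random.default_rng(2)

def metropolis_normal_mean(y, n_iter=5000, prior_sd=10.0, prop_sd=0.5):
    """Random-walk Metropolis sampler for mu in the model
    y_j ~ Normal(mu, 1) with prior mu ~ Normal(0, prior_sd**2).
    Returns the chain of sampled mu values."""
    def log_post(mu):
        # log posterior up to an additive constant: log-likelihood + log-prior
        return -0.5 * np.sum((y - mu) ** 2) - 0.5 * (mu / prior_sd) ** 2

    mu, chain = 0.0, np.empty(n_iter)
    for t in range(n_iter):
        proposal = mu + prop_sd * rng.standard_normal()
        # Accept with probability min(1, posterior ratio).
        if np.log(rng.uniform()) < log_post(proposal) - log_post(mu):
            mu = proposal
        chain[t] = mu
    return chain

y = rng.normal(loc=3.0, scale=1.0, size=100)
chain = metropolis_normal_mean(y)
posterior_mean = chain[1000:].mean()  # discard burn-in
```

With a diffuse prior and 100 observations, the posterior mean essentially recovers the sample mean; the same accept/reject logic scales up to the hierarchical models the book emphasizes.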
This article introduces a new procedure for analyzing the quantile co-movement of a large number of financial time series based on a large-scale panel data model with factor structures. The proposed method attempts to capture the unobservable heterogeneity of each of the financial time series based on sensitivity to explanatory variables and to the unobservable factor structure. In our model, the dimension of the common factor structure varies across quantiles, and the explanatory variables are allowed to depend on the factor structure. The proposed method allows for both cross-sectional and serial dependence, and heteroscedasticity, which are common in financial markets.
We propose new estimation procedures for both frequentist and Bayesian frameworks. Consistency and asymptotic normality of the proposed estimator are established. We also propose a new model selection criterion for determining the number of common factors together with theoretical support.
We apply the method to analyze the returns for over 6000 international stocks from over 60 countries during the subprime crisis, European sovereign debt crisis, and subsequent period. The empirical analysis indicates that the common factor structure varies across quantiles. We find that the common factors for the quantiles and the common factors for the mean are different.
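The quantile objects underlying such an analysis are estimated by minimizing the check (pinball) loss. A minimal illustration for a single sample quantile; the paper's panel/factor estimation is far more elaborate, and this sketch only shows the loss function itself:

```python
import numpy as np

rng = np.random.default_rng(3)

def quantile_by_check_loss(x, tau):
    """Return the sample point minimizing the check (pinball) loss
    sum_i rho_tau(x_i - q), where rho_tau(u) = u * (tau - 1{u < 0})."""
    grid = np.sort(x)                              # candidate values of q
    u = x[None, :] - grid[:, None]                 # residuals for every candidate
    losses = np.where(u >= 0, tau * u, (tau - 1) * u).sum(axis=1)
    return grid[np.argmin(losses)]

x = rng.normal(size=2001)
q_hat = quantile_by_check_loss(x, 0.75)
```

Replacing the single location q with a factor structure that varies with tau is what lets the method above detect quantile-specific common factors.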
Supplementary materials for this article are available online.
The process of discovery in science and technology may require investigation of a large number of features, such as factors, genes, or molecules. In screening, statistically designed experiments and analyses of the resulting data sets are used to efficiently identify the few features that determine key properties of the system under study. This book brings together accounts by leading international experts that are essential reading for those working in fields such as industrial quality improvement, engineering research and development, genetic and medical screening, drug discovery, and computer simulation of manufacturing systems or economic models. Our aim is to promote cross-fertilization of ideas and methods through detailed explanations, a variety of examples, and extensive references. Topics cover both physical and computer-simulated experiments. They include screening methods for detecting factors that affect the value of a response or its variability, and for choosing between various different response models. Screening for disease in blood samples, for genes linked to a disease, and for new compounds in the search for effective drugs are also described. Statistical techniques include Bayesian and frequentist methods of data analysis, algorithmic methods for both the design and analysis of experiments, and the construction of fractional factorial designs and orthogonal arrays. The material is accessible to graduate and research statisticians, and to engineers and chemists with a working knowledge of statistical ideas and techniques. It will be of interest to practitioners and researchers who wish to learn about useful methodologies from within their own area as well as methodologies that can be translated from one area to another.
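One of the classical constructions mentioned here, the fractional factorial design, can be sketched directly: a 2^(4-1) half-fraction runs a full two-level factorial in three factors and sets the fourth through a generator. An illustrative construction (the design principle is standard textbook material, not specific to this volume):

```python
from itertools import product

def fractional_factorial_2_4_1():
    """Construct a 2^(4-1) fractional factorial design (8 runs).

    Run a full two-level factorial in factors A, B, C (coded -1/+1)
    and set the fourth factor by the generator D = A*B*C, so the
    defining relation is I = ABCD (resolution IV)."""
    return [(a, b, c, a * b * c) for a, b, c in product((-1, 1), repeat=3)]

design = fractional_factorial_2_4_1()
```

The resulting 8 runs (instead of 16) keep every column balanced, and main effects are aliased only with three-factor interactions, which is what makes such designs efficient for screening many factors.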