Clustered survival data are encountered in many scientific disciplines, including human and veterinary medicine, biology, epidemiology, public health, and demography. Frailty models provide a powerful tool to analyse clustered survival data. In contrast to the large number of research publications on frailty models, relatively few statistical software packages contain frailty models. It is difficult for statistical practitioners and graduate students to understand frailty models from the existing literature. This book provides an in-depth discussion and explanation of the basics of frailty model methodology for such readers. The discussion includes parametric and semiparametric frailty models and accelerated failure time models. Common techniques to fit frailty models include the EM algorithm, penalised likelihood techniques, Laplacian integration, and Bayesian techniques. More advanced frailty models for hierarchical data are also included. Real-life examples are used to demonstrate how particular frailty models can be fitted and how the results should be interpreted. The programs to fit all the worked-out examples in the book are available on the Springer website, with most of the programs developed in the freeware packages R and WinBUGS. The book starts with a brief overview of some basic concepts in classical survival analysis, collecting what is needed for reading about the more complex frailty models.
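As an illustration of the kind of clustered data these models target, the sketch below (our own, in Python with NumPy rather than the book's R/WinBUGS programs; all names are ours) simulates survival times under a shared gamma frailty and shows the within-cluster dependence the frailty induces:

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_clustered_survival(n_clusters=500, cluster_size=4,
                                theta=1.0, base_rate=0.1):
    """Simulate clustered survival times under a shared gamma frailty.

    Each cluster i gets a frailty Z_i ~ Gamma(1/theta, scale=theta)
    (mean 1, variance theta); subjects in cluster i then have hazard
    base_rate * Z_i, i.e. exponential lifetimes given the frailty.
    """
    z = rng.gamma(shape=1.0 / theta, scale=theta, size=n_clusters)
    times = rng.exponential(1.0 / (base_rate * z[:, None]),
                            size=(n_clusters, cluster_size))
    return times, z

times, z = simulate_clustered_survival()

# The shared frailty makes lifetimes within a cluster dependent, while
# lifetimes from different clusters stay independent.
log_t = np.log(times)
within_corr = np.corrcoef(log_t[:, 0], log_t[:, 1])[0, 1]   # clearly positive
between_corr = np.corrcoef(log_t[:-1, 0], log_t[1:, 1])[0, 1]  # near zero
print(within_corr, between_corr)
```

With `theta = 1` the correlation of log-lifetimes within a cluster is 0.5 in this setup, which is the dependence a frailty model is built to capture.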
In this paper, we provide efficient estimators and honest confidence bands for a variety of treatment effects including local average (LATE) and local quantile treatment effects (LQTE) in data-rich environments. We can handle very many control variables, endogenous receipt of treatment, heterogeneous treatment effects, and function-valued outcomes. Our framework covers the special case of exogenous receipt of treatment, either conditional on controls or unconditionally as in randomized controlled trials. In the latter case, our approach produces efficient estimators and honest bands for (functional) average treatment effects (ATE) and quantile treatment effects (QTE). To make informative inference possible, we assume that key reduced-form predictive relationships are approximately sparse. This assumption allows the use of regularization and selection methods to estimate those relations, and we provide methods for post-regularization and post-selection inference that are uniformly valid (honest) across a wide range of models. We show that a key ingredient enabling honest inference is the use of orthogonal or doubly robust moment conditions in estimating certain reduced-form functional parameters. We illustrate the use of the proposed methods with an application to estimating the effect of 401(k) eligibility and participation on accumulated assets. The results on program evaluation are obtained as a consequence of more general results on honest inference in a general moment-condition framework, which arises from structural equation models in econometrics. Here, too, the crucial ingredient is the use of orthogonal moment conditions, which can be constructed from the initial moment conditions.
We provide results on honest inference for (function-valued) parameters within this general framework where any high-quality machine learning methods (e.g., boosted trees, deep neural networks, random forests, and their aggregated and hybrid versions) can be used to learn the nonparametric/high-dimensional components of the model. These include a number of supporting auxiliary results that are of major independent interest: namely, we (1) prove uniform validity of a multiplier bootstrap, (2) offer a uniformly valid functional delta method, and (3) provide results for sparsity-based estimation of regression functions for function-valued outcomes.
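The Gaussian-multiplier bootstrap of result (1) can be illustrated in its simplest scalar form: instead of resampling observations, each centred observation is reweighted by an i.i.d. standard normal multiplier. The sketch below (our own setup and names, reduced to a confidence interval for a mean, not the paper's functional setting) shows the mechanics:

```python
import numpy as np

rng = np.random.default_rng(0)

def multiplier_bootstrap_ci(x, n_boot=2000, level=0.95):
    """Confidence interval for the mean via the Gaussian-multiplier bootstrap.

    Each draw reweights the centred observations by i.i.d. N(0,1)
    multipliers; the resulting statistics mimic the sampling
    distribution of sqrt(n) * (sample mean - true mean).
    """
    n = len(x)
    centred = x - x.mean()
    xi = rng.standard_normal((n_boot, n))            # multipliers
    draws = (xi * centred).sum(axis=1) / np.sqrt(n)  # bootstrap statistics
    q = np.quantile(np.abs(draws), level)            # symmetric critical value
    half_width = q / np.sqrt(n)
    return x.mean() - half_width, x.mean() + half_width

x = rng.normal(loc=2.0, scale=1.0, size=400)
lo, hi = multiplier_bootstrap_ci(x)
print(lo, hi)  # interval centred at the sample mean
```

For a scalar mean this reproduces the usual CLT interval; the paper's contribution is that the same device remains uniformly valid for function-valued parameters after regularized first-stage estimation.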
Neuropsychedelia Langlitz, Nicolas
2012-11-07
eBook
Neuropsychedelia examines the revival of psychedelic science since the "Decade of the Brain." After the breakdown of this previously prospering area of psychopharmacology, and in the wake of clashes between counterculture and establishment in the late 1960s, a new generation of hallucinogen researchers used the hype around the neurosciences in the 1990s to bring psychedelics back into the mainstream of science and society. This book is based on anthropological fieldwork and philosophical reflections on life and work in two laboratories that have played key roles in this development: a human lab in Switzerland and an animal lab in California. It sheds light on the central transnational axis of the resurgence connecting American psychedelic culture with the home country of LSD. In the borderland of science and religion, Neuropsychedelia explores the tensions between the use of hallucinogens to model psychoses and to evoke spiritual experiences in laboratory settings. Its protagonists, including the anthropologist himself, struggle to find a place for the mystical under conditions of late-modern materialism.
Bayesian Models Hobbs, N. Thompson; Hooten, Mevin B.
2015-08-04
eBook
Bayesian modeling has become an indispensable tool for ecological research because it is uniquely suited to deal with complexity in a statistically coherent way. This textbook provides a comprehensive and accessible introduction to the latest Bayesian methods—in language ecologists can understand. Unlike other books on the subject, this one emphasizes the principles behind the computations, giving ecologists a big-picture understanding of how to implement this powerful statistical approach. Bayesian Models is an essential primer for non-statisticians. It begins with a definition of probability and develops a step-by-step sequence of connected ideas, including basic distribution theory, network diagrams, hierarchical models, Markov chain Monte Carlo, and inference from single and multiple models. This unique book places less emphasis on computer coding, favoring instead a concise presentation of the mathematical statistics needed to understand how and why Bayesian analysis works. It also explains how to write out properly formulated hierarchical Bayesian models and use them in computing, research papers, and proposals. This primer enables ecologists to understand the statistical principles behind Bayesian modeling and apply them to research, teaching, policy, and management.
- Presents the mathematical and statistical foundations of Bayesian modeling in language accessible to non-statisticians
- Covers basic distribution theory, network diagrams, hierarchical models, Markov chain Monte Carlo, and more
- Deemphasizes computer coding in favor of basic principles
- Explains how to write out properly factored statistical expressions representing Bayesian models
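A toy illustration of the Markov chain Monte Carlo machinery such a primer covers: a random-walk Metropolis sampler for a Bernoulli success probability, which can be checked against the closed-form conjugate Beta posterior. The code is our own minimal sketch in Python, not taken from the book:

```python
import numpy as np

rng = np.random.default_rng(1)

# Data: 7 successes in 10 Bernoulli trials, with a flat Beta(1, 1) prior.
# The exact posterior is Beta(8, 4) with mean 8/12 = 2/3, so the sampler
# can be validated against a closed-form answer.
successes, trials = 7, 10

def log_posterior(p):
    """Log posterior density of p (up to a constant) under the flat prior."""
    if p <= 0.0 or p >= 1.0:
        return -np.inf
    return successes * np.log(p) + (trials - successes) * np.log(1.0 - p)

def metropolis(n_iter=20000, step=0.2):
    """Random-walk Metropolis sampler for the success probability p."""
    p, lp = 0.5, log_posterior(0.5)
    samples = np.empty(n_iter)
    for t in range(n_iter):
        prop = p + step * rng.standard_normal()      # propose a move
        lp_prop = log_posterior(prop)
        if np.log(rng.random()) < lp_prop - lp:      # accept/reject
            p, lp = prop, lp_prop
        samples[t] = p                               # keep current state
    return samples

samples = metropolis()
burned = samples[2000:]                              # discard burn-in
print(burned.mean())  # close to the exact posterior mean 2/3
```

The same accept/reject logic, applied to much larger hierarchical models, is what general-purpose MCMC software automates.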
This article introduces a new procedure for analyzing the quantile co-movement of a large number of financial time series based on a large-scale panel data model with factor structures. The proposed method attempts to capture the unobservable heterogeneity of each of the financial time series based on sensitivity to explanatory variables and to the unobservable factor structure. In our model, the dimension of the common factor structure varies across quantiles, and the explanatory variables are allowed to depend on the factor structure. The proposed method allows for both cross-sectional and serial dependence, and heteroscedasticity, which are common in financial markets.
We propose new estimation procedures for both frequentist and Bayesian frameworks. Consistency and asymptotic normality of the proposed estimator are established. We also propose a new model selection criterion for determining the number of common factors together with theoretical support.
We apply the method to analyze the returns for over 6000 international stocks from over 60 countries during the subprime crisis, European sovereign debt crisis, and subsequent period. The empirical analysis indicates that the common factor structure varies across quantiles. We find that the common factors for the quantiles and the common factors for the mean are different.
Supplementary materials for this article are available online.
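The building block of such quantile models is the check (pinball) loss. The sketch below (ours, in plain NumPy, not the authors' code) verifies numerically the population fact that underlies quantile regression and quantile factor models: minimizing the empirical check loss recovers the sample quantile.

```python
import numpy as np

rng = np.random.default_rng(2)

def check_loss(u, tau):
    """Koenker–Bassett check (pinball) loss rho_tau(u)."""
    return np.where(u >= 0, tau * u, (tau - 1.0) * u)

def quantile_by_check_loss(y, tau, grid_size=2001):
    """Minimize the empirical check loss over a grid of candidate values.

    The minimizer of E[rho_tau(y - q)] over q is the tau-th quantile;
    the grid search below makes that visible without any optimizer.
    """
    grid = np.linspace(y.min(), y.max(), grid_size)
    losses = [check_loss(y - q, tau).mean() for q in grid]
    return grid[int(np.argmin(losses))]

y = rng.standard_normal(5000)
q25 = quantile_by_check_loss(y, 0.25)
print(q25, np.quantile(y, 0.25))  # the two values agree closely
```

In the panel model above the same loss is minimized jointly over loadings and quantile-specific factors rather than over a single scalar.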
Many historical processes are dynamic. Populations grow and decline. Empires expand and collapse. Religions spread and wither. Natural scientists have made great strides in understanding dynamical processes in the physical and biological worlds using a synthetic approach that combines mathematical modeling with statistical analyses. Taking up the problem of territorial dynamics—why some polities at certain times expand and at other times contract—this book shows that a similar research program can advance our understanding of dynamical processes in history.
Peter Turchin develops hypotheses from a wide range of social, political, economic, and demographic factors: geopolitics, factors affecting collective solidarity, dynamics of ethnic assimilation/religious conversion, and the interaction between population dynamics and sociopolitical stability. He then translates these into a spectrum of mathematical models, investigates the dynamics predicted by the models, and contrasts model predictions with empirical patterns. Turchin's highly instructive empirical tests demonstrate that certain models predict empirical patterns with a very high degree of accuracy. For instance, one model accounts for the recurrent waves of state breakdown in medieval and early modern Europe. And historical data confirm that ethno-nationalist solidarity produces an aggressively expansive state under certain conditions (such as in locations where imperial frontiers coincide with religious divides). The strength of Turchin's results suggests that the synthetic approach he advocates can significantly improve our understanding of historical dynamics.
A review of the current theories of the visual cortex and the biological data on which they are based, this book presents a unified computational approach to understanding the structure, development, and function of the visual cortex.
Reconstructing phylogenies through Bayesian methods has many benefits, which include providing a mathematically sound framework, providing realistic estimates of uncertainty, and being able to incorporate different sources of information based on formal principles. Bayesian phylogenetic analyses are popular for interpreting nucleotide sequence data; however, for such studies one needs to specify a site model and associated substitution model. Often, the parameters of the site model are of no interest, and an ad hoc or additional likelihood-based analysis is used to select a single site model.
bModelTest allows for a Bayesian approach to inferring and marginalizing site models in a phylogenetic analysis. It is based on trans-dimensional Markov chain Monte Carlo (MCMC) proposals that allow switching between substitution models as well as estimating the posterior probability for gamma-distributed rate heterogeneity, a proportion of invariable sites and unequal base frequencies. The model can be used with the full set of time-reversible models on nucleotides, but we also introduce and demonstrate the use of two subsets of time-reversible substitution models.
With the new method the site model can be inferred (and marginalized) during the MCMC analysis and does not need to be pre-determined by likelihood-based methods, as is now often the case in practice. The method is implemented in the bModelTest package of the popular BEAST 2 software, which is open source, licensed under the GNU Lesser General Public License, and allows joint site model and tree inference under a wide range of models.
This paper shows consistency of a two-step estimation of the factors in a dynamic approximate factor model when the panel of time series is large (n large). In the first step, the parameters of the model are estimated from an OLS on principal components. In the second step, the factors are estimated via the Kalman smoother. The analysis develops the theory for the estimator considered in Giannone et al. (2004) and Giannone et al. (2008) and for the many empirical papers using this framework for nowcasting.
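A minimal NumPy sketch of the second step—Kalman filtering plus Rauch–Tung–Striebel smoothing—for a scalar local-level "factor" with known parameters; in the two-step estimator those parameters would instead come from the first-step OLS on principal components. The setup and names are ours:

```python
import numpy as np

rng = np.random.default_rng(3)

def kalman_smoother(y, q=0.1, r=1.0):
    """Kalman filter + RTS smoother for a scalar local-level model.

    State:       f_t = f_{t-1} + w_t,  w_t ~ N(0, q)
    Observation: y_t = f_t + v_t,      v_t ~ N(0, r)
    """
    n = len(y)
    m_f, p_f = np.zeros(n), np.zeros(n)   # filtered mean / variance
    m_p, p_p = np.zeros(n), np.zeros(n)   # predicted mean / variance
    m, p = 0.0, 10.0                      # vague initial state
    for t in range(n):
        m_p[t], p_p[t] = m, p + q                 # predict
        k = p_p[t] / (p_p[t] + r)                 # Kalman gain
        m = m_p[t] + k * (y[t] - m_p[t])          # update mean
        p = (1.0 - k) * p_p[t]                    # update variance
        m_f[t], p_f[t] = m, p
    m_s, p_s = m_f.copy(), p_f.copy()             # RTS backward pass
    for t in range(n - 2, -1, -1):
        c = p_f[t] / p_p[t + 1]                   # smoother gain
        m_s[t] = m_f[t] + c * (m_s[t + 1] - m_p[t + 1])
        p_s[t] = p_f[t] + c**2 * (p_s[t + 1] - p_p[t + 1])
    return m_f, p_f, m_s, p_s

# Simulate one factor path and noisy observations of it.
f = np.cumsum(rng.normal(0.0, np.sqrt(0.1), 200))
y = f + rng.normal(0.0, 1.0, 200)
m_f, p_f, m_s, p_s = kalman_smoother(y)
print(np.all(p_s <= p_f + 1e-12))  # smoothing never increases uncertainty
```

The smoother uses future observations as well as past ones, which is why its variances are never larger than the filtered ones.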
Poisson Autoregression Fokianos, Konstantinos; Rahbek, Anders; Tjøstheim, Dag
Journal of the American Statistical Association, 12/2009, Volume 104, Issue 488
Journal Article
Peer reviewed
Open access
In this article we consider geometric ergodicity and likelihood-based inference for linear and nonlinear Poisson autoregression. In the linear case, the conditional mean is linked linearly to its past values, as well as to the observed values of the Poisson process. This also applies to the conditional variance, making possible an interpretation as an integer-valued generalized autoregressive conditional heteroscedasticity process. In a nonlinear conditional Poisson model, the conditional mean is a nonlinear function of its past values and past observations. As a particular example, we consider an exponential autoregressive Poisson model for time series. Under geometric ergodicity, the maximum likelihood estimators are shown to be asymptotically Gaussian in the linear model. In addition, we provide a consistent estimator of their asymptotic covariance matrix. Our approach to verifying geometric ergodicity proceeds via Markov theory and irreducibility. Finding transparent conditions for proving ergodicity turns out to be a delicate problem in the original model formulation. This problem is circumvented by allowing a perturbation of the model. We show that as the perturbations can be chosen to be arbitrarily small, the differences between the perturbed and nonperturbed versions vanish as far as the asymptotic distribution of the parameter estimates is concerned. This article has supplementary material online.
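The linear model described above (often called INGARCH(1,1)) can be illustrated with a short simulation of its recursion; the code below is our own sketch, not the authors':

```python
import numpy as np

rng = np.random.default_rng(4)

def simulate_ingarch(n=20000, omega=1.0, alpha=0.3, beta=0.4):
    """Simulate the linear Poisson autoregression (INGARCH(1,1)):

        Y_t | past ~ Poisson(lam_t),
        lam_t = omega + alpha * Y_{t-1} + beta * lam_{t-1}.

    Stationarity requires alpha + beta < 1, in which case the
    stationary mean is omega / (1 - alpha - beta).
    """
    lam = omega / (1.0 - alpha - beta)   # start at the stationary mean
    y = rng.poisson(lam)
    ys = np.empty(n, dtype=int)
    for t in range(n):
        lam = omega + alpha * y + beta * lam   # conditional mean update
        y = rng.poisson(lam)                   # count draw given the past
        ys[t] = y
    return ys

ys = simulate_ingarch()
print(ys.mean())  # near the stationary mean 1 / (1 - 0.7) ≈ 3.33
```

Because the conditional mean feeds back through both the last count and the last intensity, the series shows the volatility clustering that motivates the GARCH analogy.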