Clustered survival data are encountered in many scientific disciplines, including human and veterinary medicine, biology, epidemiology, public health, and demography. Frailty models provide a powerful tool to analyse clustered survival data. In contrast to the large number of research publications on frailty models, relatively few statistical software packages contain frailty models. It is difficult for statistical practitioners and graduate students to understand frailty models from the existing literature. This book provides an in-depth discussion and explanation of the basics of frailty model methodology for such readers. The discussion includes parametric and semiparametric frailty models and accelerated failure time models. Common techniques to fit frailty models include the EM algorithm, penalised likelihood techniques, Laplacian integration, and Bayesian techniques. More advanced frailty models for hierarchical data are also included. Real-life examples are used to demonstrate how particular frailty models can be fitted and how the results should be interpreted. The programs to fit all the worked-out examples in the book are available on the Springer website, with most of the programs developed in the freeware packages R and WinBUGS. The book starts with a brief overview of some basic concepts in classical survival analysis, collecting what is needed for reading the chapters on the more complex frailty models.
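As a hedged illustration of the clustered-data setting this book addresses (not code from the book, and with all parameter values assumed for the example), the shared gamma frailty model multiplies each cluster's hazard by a common random effect with mean 1 and variance theta, which can be simulated directly:

```python
import math
import random

random.seed(0)

theta = 0.5          # frailty variance (assumed value for illustration)
lam = 0.1            # baseline exponential hazard
beta = 0.8           # covariate effect
n_clusters, per_cluster = 200, 4

data = []
for i in range(n_clusters):
    # shared gamma frailty: shape 1/theta, scale theta => mean 1, variance theta
    u = random.gammavariate(1.0 / theta, theta)
    for _ in range(per_cluster):
        x = random.choice([0, 1])            # binary covariate, e.g. treatment
        rate = u * lam * math.exp(beta * x)  # cluster-shared proportional hazard
        t = random.expovariate(rate)         # exponential event time
        data.append((i, x, t))

print(len(data))  # 800 simulated clustered survival times
```

Because all subjects in a cluster share the same frailty u, their event times are positively dependent, which is exactly what a marginal (independence) analysis would miss.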
In this paper, we provide efficient estimators and honest confidence bands for a variety of treatment effects, including local average (LATE) and local quantile treatment effects (LQTE), in data-rich environments. We can handle very many control variables, endogenous receipt of treatment, heterogeneous treatment effects, and function-valued outcomes. Our framework covers the special case of exogenous receipt of treatment, either conditional on controls or unconditionally as in randomized control trials. In the latter case, our approach produces efficient estimators and honest bands for (functional) average treatment effects (ATE) and quantile treatment effects (QTE). To make informative inference possible, we assume that key reduced-form predictive relationships are approximately sparse. This assumption allows the use of regularization and selection methods to estimate those relations, and we provide methods for post-regularization and post-selection inference that are uniformly valid (honest) across a wide range of models. We show that a key ingredient enabling honest inference is the use of orthogonal or doubly robust moment conditions in estimating certain reduced-form functional parameters. We illustrate the use of the proposed methods with an application to estimating the effect of 401(k) eligibility and participation on accumulated assets. The results on program evaluation are obtained as a consequence of more general results on honest inference in a general moment-condition framework, which arises from structural equation models in econometrics. Here, too, the crucial ingredient is the use of orthogonal moment conditions, which can be constructed from the initial moment conditions.
We provide results on honest inference for (function-valued) parameters within this general framework, where any high-quality machine learning methods (e.g., boosted trees, deep neural networks, random forests, and their aggregated and hybrid versions) can be used to learn the nonparametric/high-dimensional components of the model. These include a number of supporting auxiliary results that are of major independent interest: namely, we (1) prove uniform validity of a multiplier bootstrap, (2) offer a uniformly valid functional delta method, and (3) provide results for sparsity-based estimation of regression functions for function-valued outcomes.
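To make the "orthogonal or doubly robust moment conditions" concrete, a minimal sketch of the AIPW (augmented inverse-probability-weighted) score for the ATE follows, on simulated data with oracle nuisance functions (a simplification: in the papers above these would be learned by ML, and the true ATE of 2 is an assumed value for this toy design):

```python
import math
import random

random.seed(1)
n = 5000

def e_true(x):            # propensity score (oracle here, for illustration)
    return 1 / (1 + math.exp(-x))

def m_true(x, d):         # outcome regression (oracle); true ATE = 2
    return 1.0 + 2.0 * d + 0.5 * x

obs = []
for _ in range(n):
    x = random.gauss(0, 1)
    d = 1 if random.random() < e_true(x) else 0
    y = m_true(x, d) + random.gauss(0, 1)
    obs.append((y, d, x))

# doubly robust (AIPW) score: first-order insensitive (orthogonal) to
# small estimation errors in the propensity and outcome models
scores = [
    m_true(x, 1) - m_true(x, 0)
    + d * (y - m_true(x, 1)) / e_true(x)
    - (1 - d) * (y - m_true(x, 0)) / (1 - e_true(x))
    for y, d, x in obs
]
ate = sum(scores) / n
print(ate)  # close to the true ATE of 2
```

The orthogonality of this score is what lets regularized or post-selection nuisance estimates be plugged in without distorting the limiting distribution of the treatment-effect estimator.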
Neuropsychedelia Langlitz, Nicolas
2012
eBook
Neuropsychedelia examines the revival of psychedelic science since the "Decade of the Brain." After the breakdown of this previously prospering area of psychopharmacology, and in the wake of clashes between counterculture and establishment in the late 1960s, a new generation of hallucinogen researchers used the hype around the neurosciences in the 1990s to bring psychedelics back into the mainstream of science and society. This book is based on anthropological fieldwork and philosophical reflections on life and work in two laboratories that have played key roles in this development: a human lab in Switzerland and an animal lab in California. It sheds light on the central transnational axis of the resurgence connecting American psychedelic culture with the home country of LSD. In the borderland of science and religion, Neuropsychedelia explores the tensions between the use of hallucinogens to model psychoses and to evoke spiritual experiences in laboratory settings. Its protagonists, including the anthropologist himself, struggle to find a place for the mystical under conditions of late-modern materialism.
This article introduces a new procedure for analyzing the quantile co-movement of a large number of financial time series based on a large-scale panel data model with factor structures. The proposed method attempts to capture the unobservable heterogeneity of each of the financial time series based on sensitivity to explanatory variables and to the unobservable factor structure. In our model, the dimension of the common factor structure varies across quantiles, and the explanatory variables are allowed to depend on the factor structure. The proposed method allows for cross-sectional dependence, serial dependence, and heteroscedasticity, which are common in financial markets.
We propose new estimation procedures for both frequentist and Bayesian frameworks. Consistency and asymptotic normality of the proposed estimator are established. We also propose a new model selection criterion for determining the number of common factors together with theoretical support.
We apply the method to analyze the returns for over 6000 international stocks from over 60 countries during the subprime crisis, European sovereign debt crisis, and subsequent period. The empirical analysis indicates that the common factor structure varies across quantiles. We find that the common factors for the quantiles and the common factors for the mean are different.
Supplementary materials for this article are available online.
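The quantile objective underlying models like this one is the pinball (check) loss, whose empirical minimizer is the sample tau-quantile. A self-contained sketch (the data values are made up for illustration):

```python
# pinball (check) loss used in quantile regression and quantile factor models
def check_loss(u, tau):
    """rho_tau(u) = u * (tau - 1{u < 0})."""
    return u * (tau - (1.0 if u < 0 else 0.0))

data = [1.0, 2.0, 3.0, 10.0, 20.0]
tau = 0.5

def total_loss(q):
    return sum(check_loss(y - q, tau) for y in data)

# the empirical tau-quantile minimizes the total check loss over candidates
best = min(data, key=total_loss)
print(best)  # 3.0, the empirical median
```

Replacing tau = 0.5 with, say, 0.1 or 0.9 targets the tails, which is why the factor structure recovered at extreme quantiles can differ from the one recovered for the mean, as the abstract reports.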
Reconstructing phylogenies through Bayesian methods has many benefits, which include providing a mathematically sound framework, providing realistic estimates of uncertainty, and being able to incorporate different sources of information based on formal principles. Bayesian phylogenetic analyses are popular for interpreting nucleotide sequence data; however, such studies require specifying a site model and an associated substitution model. Often, the parameters of the site model are of no interest, and an ad hoc or additional likelihood-based analysis is used to select a single site model.
bModelTest allows for a Bayesian approach to inferring and marginalizing site models in a phylogenetic analysis. It is based on trans-dimensional Markov chain Monte Carlo (MCMC) proposals that allow switching between substitution models as well as estimating the posterior probability for gamma-distributed rate heterogeneity, a proportion of invariable sites and unequal base frequencies. The model can be used with the full set of time-reversible models on nucleotides, but we also introduce and demonstrate the use of two subsets of time-reversible substitution models.
With the new method the site model can be inferred (and marginalized) during the MCMC analysis and does not need to be pre-determined, as is now often the case in practice, by likelihood-based methods. The method is implemented in the bModelTest package of the popular BEAST 2 software, which is open source, licensed under the GNU Lesser General Public License and allows joint site model and tree inference under a wide range of models.
This paper shows consistency of a two-step estimation of the factors in a dynamic approximate factor model when the panel of time series is large (n large). In the first step, the parameters of the model are estimated from an OLS on principal components. In the second step, the factors are estimated via the Kalman smoother. The analysis develops the theory for the estimator considered in Giannone et al. (2004) and Giannone et al. (2008) and for the many empirical papers using this framework for nowcasting.
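A minimal sketch of the first (principal components) step on simulated data, with all dimensions and noise levels assumed for illustration; the Kalman-smoother second step, which re-estimates the factors from the fitted state-space form, is omitted here:

```python
import numpy as np

rng = np.random.default_rng(0)

# simulated panel: n series driven by r common factors plus idiosyncratic noise
T, n, r = 200, 50, 2
F = rng.standard_normal((T, r))          # latent factors
L = rng.standard_normal((n, r))          # factor loadings
X = F @ L.T + 0.5 * rng.standard_normal((T, n))

# step 1: principal components estimate of the factor space
X_std = (X - X.mean(0)) / X.std(0)       # standardize each series
eigval, eigvec = np.linalg.eigh(X_std.T @ X_std / T)
V = eigvec[:, -r:]                       # top-r eigenvectors (estimated loadings)
F_hat = X_std @ V                        # principal-component factor estimates

print(F_hat.shape)  # (200, 2)
```

The estimated factors are only identified up to rotation, so F_hat spans (approximately) the same space as F rather than matching it column by column.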
Poisson Autoregression Fokianos, Konstantinos; Rahbek, Anders; Tjøstheim, Dag
Journal of the American Statistical Association, 12/2009, Volume 104, Issue 488
Journal Article
Peer reviewed
Open access
In this article we consider geometric ergodicity and likelihood-based inference for linear and nonlinear Poisson autoregression. In the linear case, the conditional mean is linked linearly to its past values, as well as to the observed values of the Poisson process. This also applies to the conditional variance, making possible interpretation as an integer-valued generalized autoregressive conditional heteroscedasticity process. In a nonlinear conditional Poisson model, the conditional mean is a nonlinear function of its past values and past observations. As a particular example, we consider an exponential autoregressive Poisson model for time series. Under geometric ergodicity, the maximum likelihood estimators are shown to be asymptotically Gaussian in the linear model. In addition, we provide a consistent estimator of their asymptotic covariance matrix. Our approach to verifying geometric ergodicity proceeds via Markov theory and irreducibility. Finding transparent conditions for proving ergodicity turns out to be a delicate problem in the original model formulation. This problem is circumvented by allowing a perturbation of the model. We show that as the perturbations can be chosen to be arbitrarily small, the differences between the perturbed and nonperturbed versions vanish as far as the asymptotic distribution of the parameter estimates is concerned. This article has supplementary material online.
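The linear model described above is often written as an INGARCH(1,1) recursion. A hedged simulation sketch (parameter values are assumed for illustration, with a + b < 1 for stationarity):

```python
import numpy as np

rng = np.random.default_rng(42)

# linear Poisson autoregression (INGARCH(1,1)):
#   lambda_t = d + a * lambda_{t-1} + b * y_{t-1},  y_t | past ~ Poisson(lambda_t)
d, a, b = 0.5, 0.3, 0.4          # a + b < 1 ensures a stationary mean
T = 1000
lam = np.empty(T)
y = np.empty(T, dtype=int)
lam[0] = d / (1 - a - b)         # start at the stationary mean
y[0] = rng.poisson(lam[0])
for t in range(1, T):
    lam[t] = d + a * lam[t - 1] + b * y[t - 1]
    y[t] = rng.poisson(lam[t])

print(y.mean())  # near the stationary mean d / (1 - a - b) ~ 1.67
```

Note how both the conditional mean and the conditional variance equal lambda_t, which is the Poisson analogue of the GARCH feedback the abstract points to.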
In this paper, we focus on a model specification problem in spatial econometric models when an empiricist needs to choose from a pool of candidates for the spatial weights matrix. We propose a model selection (MS) procedure for the matrix exponential spatial specification (MESS), when the true spatial weights matrix may not be in the set of candidate spatial weights matrices. We show that the selection estimator is asymptotically optimal in the sense that asymptotically it is as efficient as the infeasible estimator that uses the best candidate spatial weights matrix. The proposed selection procedure is also consistent in the sense that when the data generating process involves spatial effects, it chooses the true spatial weights matrix with probability approaching one in large samples. We also propose a model averaging (MA) estimator that compromises across a set of candidate models. We show that it is asymptotically optimal. We further flesh out how to extend the proposed selection and averaging schemes to higher order specifications and to the MESS with heteroscedasticity. Our Monte Carlo simulation results indicate that the MS and MA estimators perform well in finite samples. We also illustrate the usefulness of the proposed MS and MA schemes in a spatially augmented economic growth model.
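The MESS replaces the usual spatial-autoregressive inverse (I - rho*W)^{-1} with a matrix exponential exp(alpha*W), which always exists. A minimal sketch (the 4-unit weights matrix and alpha are made-up toy values; the power-series implementation is one simple way to compute the exponential):

```python
import numpy as np

# MESS transform: exp(alpha * W) y = X beta + eps, W a spatial weights matrix
def mat_exp(A, terms=20):
    """Matrix exponential via its truncated power series (fine for small ||A||)."""
    out = np.eye(A.shape[0])
    P = np.eye(A.shape[0])
    for k in range(1, terms):
        P = P @ A / k          # accumulate A^k / k!
        out += P
    return out

# row-normalized weights for 4 units on a line (hypothetical example)
W = np.array([[0, 1, 0, 0],
              [0.5, 0, 0.5, 0],
              [0, 0.5, 0, 0.5],
              [0, 0, 1, 0]], dtype=float)
alpha = -0.5
S = mat_exp(alpha * W)

# unlike (I - rho W)^-1, the exponential is invertible for any alpha:
print(np.allclose(S @ mat_exp(-alpha * W), np.eye(4)))  # True
```

That unconditional invertibility, with exp(alpha*W)^{-1} = exp(-alpha*W), is one reason the MESS is computationally convenient when scanning many candidate weights matrices.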
Bayesian Analysis of DSGE Models An, Sungbae; Schorfheide, Frank
Econometric reviews,
01/2007, Volume 26, Issue 2-4
Journal Article
Peer reviewed
Open access
This paper reviews Bayesian methods that have been developed in recent years to estimate and evaluate dynamic stochastic general equilibrium (DSGE) models. We consider the estimation of linearized DSGE models, the evaluation of models based on Bayesian model checking, posterior odds comparisons, and comparisons to vector autoregressions, as well as the non-linear estimation based on a second-order accurate model solution. These methods are applied to data generated from correctly specified and misspecified linearized DSGE models and a DSGE model that was solved with a second-order perturbation method.
Large Bayesian vector auto regressions Bańbura, Marta; Giannone, Domenico; Reichlin, Lucrezia
Journal of applied econometrics (Chichester, England),
01/2010, Volume 25, Issue 1
Journal Article
Peer reviewed
This paper shows that vector auto regression (VAR) with Bayesian shrinkage is an appropriate tool for large dynamic models. We build on the results of De Mol and co-workers (2008) and show that, when the degree of shrinkage is set in relation to the cross-sectional dimension, the forecasting performance of small monetary VARs can be improved by adding additional macroeconomic variables and sectoral information. In addition, we show that large VARs with shrinkage produce credible impulse responses and are suitable for structural analysis.
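As a rough sketch of why shrinkage makes large VARs estimable (a simplification: a plain ridge penalty toward zero stands in here for the Minnesota-type prior used in this literature, and the simulated VAR(1) design is assumed for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)

# toy VAR(1) in n variables: y_t = A y_{t-1} + eps_t
n, T = 20, 100
A_true = 0.5 * np.eye(n)
Y = np.zeros((T, n))
for t in range(1, T):
    Y[t] = Y[t - 1] @ A_true.T + rng.standard_normal(n)

X, Z = Y[:-1], Y[1:]                 # regressors and left-hand side
lam = 10.0                           # shrinkage tightness (prior precision)

# Bayesian-shrinkage posterior mean = ridge solution (A' stacked column-wise):
A_hat = np.linalg.solve(X.T @ X + lam * np.eye(n), X.T @ Z).T

print(A_hat.shape)  # (20, 20)
```

With n^2 = 400 coefficients and only 100 observations, the unpenalized OLS estimates would be extremely noisy; tightening lam as the cross-sectional dimension grows is the mechanism the paper formalizes.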