Clustered survival data are encountered in many scientific disciplines including human and veterinary medicine, biology, epidemiology, public health, and demography. Frailty models provide a powerful tool to analyse clustered survival data. In contrast to the large number of research publications on frailty models, relatively few statistical software packages contain frailty models. It is difficult for statistical practitioners and graduate students to understand frailty models from the existing literature. This book provides an in-depth discussion and explanation of the basics of frailty model methodology for such readers. The discussion includes parametric and semiparametric frailty models and accelerated failure time models. Common techniques to fit frailty models include the EM algorithm, penalised likelihood techniques, Laplacian integration and Bayesian techniques. More advanced frailty models for hierarchical data are also included. Real-life examples are used to demonstrate how particular frailty models can be fitted and how the results should be interpreted. The programs to fit all the worked-out examples in the book are available on the Springer website, with most of the programs developed in the freeware packages R and WinBUGS. The book starts with a brief overview of some basic concepts in classical survival analysis, collecting what is needed for reading about the more complex frailty models.
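The shared-frailty idea the abstract describes can be illustrated with a short simulation, sketched here in Python rather than the book's own R/WinBUGS code; all parameter values below are assumptions chosen for illustration, not taken from the book:

```python
import numpy as np

rng = np.random.default_rng(42)

# Sketch of a shared gamma frailty model: every subject in a cluster
# shares one frailty Z ~ Gamma(shape=1/theta, scale=theta), so that
# E[Z] = 1 and Var[Z] = theta. The conditional hazard for a subject in
# cluster i is lambda0 * Z_i (constant baseline hazard lambda0), which
# induces within-cluster dependence of the survival times.
n_clusters, cluster_size = 200, 4
theta = 0.5      # frailty variance (assumed value)
lambda0 = 0.1    # baseline hazard (assumed value)

frailty = rng.gamma(shape=1 / theta, scale=theta, size=n_clusters)

# With a constant hazard lambda0 * Z_i, event times are exponential
# with rate lambda0 * Z_i, i.e. scale 1 / (lambda0 * Z_i).
rates = lambda0 * np.repeat(frailty, cluster_size)
times = rng.exponential(scale=1 / rates)

print(times.shape)  # (800,)
```

High-frailty clusters fail early as a group, which is exactly the clustering the book's fitting methods (EM, penalised likelihood, Bayesian MCMC) are designed to account for.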
How powerful new methods in nonlinear control engineering can be applied to neuroscience, from fundamental model formulation to advanced medical applications.
This handbook and ready reference presents a combination of statistical, information-theoretic, and data analysis methods to meet the challenge of designing empirical models involving molecular descriptors within bioinformatics. The topics range from investigating information processing in chemical and biological networks to studying statistical and information-theoretic techniques for analyzing chemical structures to employing data analysis and machine learning techniques for QSAR/QSPR. The high-profile international author and editor team ensures excellent coverage of the topic, making this a must-have for everyone working in chemoinformatics and structure-oriented drug design.
In this paper, we provide efficient estimators and honest confidence bands for a variety of treatment effects including local average (LATE) and local quantile treatment effects (LQTE) in data-rich environments. We can handle very many control variables, endogenous receipt of treatment, heterogeneous treatment effects, and function-valued outcomes. Our framework covers the special case of exogenous receipt of treatment, either conditional on controls or unconditionally as in randomized controlled trials. In the latter case, our approach produces efficient estimators and honest bands for (functional) average treatment effects (ATE) and quantile treatment effects (QTE). To make informative inference possible, we assume that key reduced-form predictive relationships are approximately sparse. This assumption allows the use of regularization and selection methods to estimate those relations, and we provide methods for post-regularization and post-selection inference that are uniformly valid (honest) across a wide range of models. We show that a key ingredient enabling honest inference is the use of orthogonal or doubly robust moment conditions in estimating certain reduced-form functional parameters. We illustrate the use of the proposed methods with an application to estimating the effect of 401(k) eligibility and participation on accumulated assets. The results on program evaluation are obtained as a consequence of more general results on honest inference in a general moment-condition framework, which arises from structural equation models in econometrics. Here, too, the crucial ingredient is the use of orthogonal moment conditions, which can be constructed from the initial moment conditions.
We provide results on honest inference for (function-valued) parameters within this general framework where any high-quality machine learning methods (e.g., boosted trees, deep neural networks, random forests, and their aggregated and hybrid versions) can be used to learn the nonparametric/high-dimensional components of the model. These include a number of supporting auxiliary results that are of major independent interest: namely, we (1) prove uniform validity of a multiplier bootstrap, (2) offer a uniformly valid functional delta method, and (3) provide results for sparsity-based estimation of regression functions for function-valued outcomes.
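A standard example of the orthogonal (doubly robust) moment conditions the abstract emphasises, written here in generic notation rather than the paper's own, is the augmented inverse-probability-weighted score for the ATE, with propensity score \(m(x)=\Pr(D=1\mid X=x)\) and outcome regressions \(g(d,x)=\mathrm{E}[Y\mid D=d,X=x]\):

```latex
\[
\psi(W;\theta,\eta) \;=\; g(1,X) - g(0,X)
  \;+\; \frac{D\,\{Y - g(1,X)\}}{m(X)}
  \;-\; \frac{(1-D)\,\{Y - g(0,X)\}}{1 - m(X)}
  \;-\; \theta .
\]
```

At the true \(\theta\) this score has mean zero, and small errors in the estimated nuisances \((m, g)\) have no first-order effect on it, which is what allows regularized or machine-learned nuisance estimates to be plugged in while keeping inference honest.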
Neuropsychedelia. Langlitz, Nicolas (2012, eBook)
Neuropsychedelia examines the revival of psychedelic science since the "Decade of the Brain." After the breakdown of this previously prospering area of psychopharmacology, and in the wake of clashes between counterculture and establishment in the late 1960s, a new generation of hallucinogen researchers used the hype around the neurosciences in the 1990s to bring psychedelics back into the mainstream of science and society. This book is based on anthropological fieldwork and philosophical reflections on life and work in two laboratories that have played key roles in this development: a human lab in Switzerland and an animal lab in California. It sheds light on the central transnational axis of the resurgence connecting American psychedelic culture with the home country of LSD. In the borderland of science and religion, Neuropsychedelia explores the tensions between the use of hallucinogens to model psychoses and to evoke spiritual experiences in laboratory settings. Its protagonists, including the anthropologist himself, struggle to find a place for the mystical under conditions of late-modern materialism.
Statistics in Medicine, Third Edition makes medical statistics easy for students, practicing physicians, and researchers to understand. The book begins with databases from clinical medicine and uses such data to give multiple worked-out illustrations of every method. The text opens with how to plan studies from conception to publication and what to do with your data, and follows with step-by-step instructions for biostatistical methods from the simplest levels (averages, bar charts) progressively to the more sophisticated methods now being seen in medical articles (multiple regression, noninferiority testing). Examples are given from almost every medical specialty and from dentistry, nursing, pharmacy, and health care management. A preliminary guide is given to tailor sections of the text to various lengths of biostatistical courses.
Key Features:
* User-friendly format includes medical examples, step-by-step methods, and check-yourself exercises appealing to readers with little or no statistical background, across medical and biomedical disciplines.
* Facilitates stand-alone methods rather than a required sequence of reading and references to prior text.
* Covers trial randomization, treatment ethics in medical research, imputation of missing data, evidence-based medical decisions, how to interpret medical articles, noninferiority testing, meta-analysis, screening, number needed to treat, and epidemiology.
* Fills the gap left in all other medical statistics books between the reader's knowledge of how to go about research and the book's coverage of how to analyze results of that research.
New in this Edition:
* New chapters on planning research, managing data and analysis, Bayesian statistics, measuring association and agreement, and questionnaires and surveys.
* New sections on what tests and descriptive statistics to choose, false discovery rate, interim analysis, bootstrapping, Bland-Altman plots, Markov chain Monte Carlo (MCMC), and Deming regression.
* Expanded coverage of probability, statistical methods and tests relatively new to medical research, ROC curves, experimental design, and survival analysis.
* 35 databases in Excel format used in the book can be downloaded and transferred into whatever format is needed, along with PowerPoint slides of figures, tables, and graphs from the book, on the companion site: http://www.elsevierdirect.com/companion.jsp?ISBN=9780123848642
* Medical subject index offers additional search capabilities.
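One of the listed topics, number needed to treat (NNT), reduces to a one-line calculation; the sketch below uses made-up event rates for illustration, not figures from the book:

```python
# Number needed to treat (NNT): how many patients must be treated for
# one additional patient to avoid the event, computed as the reciprocal
# of the absolute risk reduction (ARR). Event rates here are invented.
control_event_rate = 0.20    # 20% of controls experience the event
treatment_event_rate = 0.12  # 12% of treated patients do

arr = control_event_rate - treatment_event_rate  # absolute risk reduction
nnt = 1 / arr

print(round(nnt, 1))  # 12.5
```

In practice the NNT is usually rounded up to the next whole patient (here, 13).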
This article introduces a new procedure for analyzing the quantile co-movement of a large number of financial time series based on a large-scale panel data model with factor structures. The proposed method attempts to capture the unobservable heterogeneity of each of the financial time series based on sensitivity to explanatory variables and to the unobservable factor structure. In our model, the dimension of the common factor structure varies across quantiles, and the explanatory variables are allowed to depend on the factor structure. The proposed method allows for both cross-sectional and serial dependence, and heteroscedasticity, which are common in financial markets.
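Schematically, and in generic notation that need not match the paper's own, such a specification models the \(\tau\)-th conditional quantile of the return \(y_{it}\) as

```latex
\[
Q_{y_{it}}\!\left(\tau \mid x_{it}, f_t(\tau)\right)
  \;=\; x_{it}'\,\beta_i(\tau) \;+\; \lambda_i(\tau)'\,f_t(\tau),
\]
```

where the dimension of the common factor vector \(f_t(\tau)\) may differ across quantiles \(\tau\), and the loadings \(\lambda_i(\tau)\) carry the series-specific heterogeneity the abstract refers to.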
We propose new estimation procedures for both frequentist and Bayesian frameworks. Consistency and asymptotic normality of the proposed estimator are established. We also propose a new model selection criterion for determining the number of common factors together with theoretical support.
We apply the method to analyze the returns for over 6000 international stocks from over 60 countries during the subprime crisis, European sovereign debt crisis, and subsequent period. The empirical analysis indicates that the common factor structure varies across quantiles. We find that the common factors for the quantiles and the common factors for the mean are different.
Supplementary materials for this article are available online.
Many historical processes are dynamic. Populations grow and decline. Empires expand and collapse. Religions spread and wither. Natural scientists have made great strides in understanding dynamical processes in the physical and biological worlds using a synthetic approach that combines mathematical modeling with statistical analyses. Taking up the problem of territorial dynamics--why some polities at certain times expand and at other times contract--this book shows that a similar research program can advance our understanding of dynamical processes in history.
Peter Turchin develops hypotheses from a wide range of social, political, economic, and demographic factors: geopolitics, factors affecting collective solidarity, dynamics of ethnic assimilation/religious conversion, and the interaction between population dynamics and sociopolitical stability. He then translates these into a spectrum of mathematical models, investigates the dynamics predicted by the models, and contrasts model predictions with empirical patterns. Turchin's highly instructive empirical tests demonstrate that certain models predict empirical patterns with a very high degree of accuracy. For instance, one model accounts for the recurrent waves of state breakdown in medieval and early modern Europe. And historical data confirm that ethno-nationalist solidarity produces an aggressively expansive state under certain conditions (such as in locations where imperial frontiers coincide with religious divides). The strength of Turchin's results suggests that the synthetic approach he advocates can significantly improve our understanding of historical dynamics.
A review of the current theories of the visual cortex and the biological data on which they are based, this book presents a unified computational approach to understanding the structure, development, and function of the visual cortex.