ABSTRACT
The openness of the future is rightly considered one of the qualifying aspects of the temporality of modern society. The open future, which does not yet exist in the present, implies radical unpredictability. This article discusses how, in the last few centuries, the resulting uncertainty has been managed with probabilistic tools that compute present information about the future in a controlled way. The probabilistic approach has always been plagued by three fundamental problems: performativity, the need for individualization, and the opacity of predictions. We contrast this approach with recent forms of algorithmic forecasting, which seem to turn these problems into resources and produce an innovative form of prediction. But can a predicted future still be an open future? We explore this specific contemporary modality of historical futures by examining the recent debate about the notion of actionability in precision medicine, which focuses on a form of individualized prediction that enables direct intervention in the future it predicts.
The quest for a practical index of relative body weight, which began shortly after actuaries reported the increased mortality of their overweight policyholders, culminated after World War II, when the relationship between weight and cardiovascular disease became the subject of epidemiological studies. It became evident then that the best index was the ratio of the weight in kilograms to the square of the height in meters, the Quetelet Index described in 1832. Adolphe Quetelet (1796–1874) was a Belgian mathematician, astronomer and statistician who developed a passionate interest in probability calculus, which he applied to the study of human physical characteristics and social aptitudes. His pioneering cross-sectional studies of human growth led him to conclude that, other than the spurts of growth after birth and during puberty, ‘the weight increases as the square of the height’, a relationship known as the Quetelet Index until Ancel Keys (1904–2004) termed it the Body Mass Index in 1972. For his application of comparative statistics to social conditions and moral issues, Quetelet is considered a founder of the social sciences. His principal work, ‘A Treatise on Man and the Development of His Faculties’, published in 1835, is considered ‘one of the greatest books of the 19th century’. A tireless promoter of statistical data collection based on standard methods and definitions, Quetelet organized the first International Statistical Congress in 1853, which launched the development of ‘a uniform nomenclature of the causes of death applicable to all countries’, the progenitor of the current International Classification of Diseases.
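A minimal sketch of the index computation; the function name and example values are illustrative and are not drawn from Quetelet's or Keys's data:

```python
def quetelet_index(weight_kg: float, height_m: float) -> float:
    """Quetelet Index / Body Mass Index: weight (kg) divided by height (m) squared."""
    if height_m <= 0:
        raise ValueError("height must be positive")
    return weight_kg / height_m ** 2

# Hypothetical example: a 70 kg person who is 1.75 m tall
print(round(quetelet_index(70.0, 1.75), 1))  # 22.9
```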
Quantitative risk analysis (QRA) is a systematic approach for evaluating the likelihood, consequences, and risk of adverse events. QRA based on event tree analysis (ETA) and fault tree analysis (FTA) employs two basic assumptions. The first concerns the likelihood values of input events, and the second concerns the interdependence among the events (for ETA) or basic events (for FTA). Traditionally, FTA and ETA both use crisp probabilities; to deal with uncertainties, probability distributions of the input event likelihoods are assumed instead. These probability distributions are often hard to come by, and even when available they are subject to incompleteness (partial ignorance) and imprecision. Furthermore, both FTA and ETA assume that the events (or basic events) are independent. In practice, these two assumptions are often unrealistic. This article focuses on handling uncertainty in a QRA framework for a process system. Fuzzy set theory and evidence theory are used to describe the uncertainties in the input event likelihoods, and a method based on a dependency coefficient is used to express interdependencies of events (or basic events) in ETA and FTA. Two case studies are discussed to demonstrate the approach.
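To make the dependency-coefficient idea concrete, here is a minimal sketch of an AND-gate combination of two basic events whose likelihoods are triangular fuzzy numbers; the linear interpolation between independence and perfect dependence, and the vertex-wise fuzzy arithmetic, are simplifying assumptions for illustration, not necessarily the article's exact formulas:

```python
def and_gate(p_a, p_b, cd=0.0):
    """Combine two triangular fuzzy likelihoods (low, mode, high) under an AND gate.

    cd is a dependency coefficient: 0 = independence, 1 = perfect positive
    dependence.  The interpolation form and vertex-wise arithmetic are
    illustrative simplifications, not necessarily the article's method.
    """
    def combine(a, b):
        independent = a * b      # product rule under independence
        dependent = min(a, b)    # upper (Frechet) bound under perfect dependence
        return (1.0 - cd) * independent + cd * dependent
    return tuple(combine(a, b) for a, b in zip(p_a, p_b))

event_a = (1e-3, 5e-3, 1e-2)     # hypothetical basic-event likelihoods
event_b = (2e-3, 4e-3, 8e-3)
print(and_gate(event_a, event_b, cd=0.0))   # independent basic events
print(and_gate(event_a, event_b, cd=0.5))   # partially dependent basic events
```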
The current weather forecasting paradigm is deterministic, based on numerical models. Multiple estimates of the current state of the atmosphere are used to generate an ensemble of deterministic predictions. Ensemble forecasts, while providing information on forecast uncertainty, are often uncalibrated. Bayesian model averaging (BMA) is a statistical ensemble postprocessing method that creates calibrated predictive probability density functions (PDFs). Probabilistic wind forecasting poses two challenges: a skewed distribution, and observations that are coarsely discretized. We extend BMA to wind speed, taking account of these challenges. This method provides calibrated and sharp probabilistic forecasts. Comparisons are made between several formulations.
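A minimal sketch of a BMA-style predictive density for wind speed as a weighted mixture of skewed component densities, one per ensemble member; the gamma components, the linear bias correction, and all parameter values are illustrative assumptions rather than the article's fitted model:

```python
import numpy as np
from scipy import stats

# Hypothetical ensemble member forecasts (m/s), BMA weights, and parameters.
member_forecasts = np.array([4.2, 5.1, 3.8])
weights = np.array([0.5, 0.3, 0.2])          # BMA weights, sum to 1
a, b = 0.4, 1.0                              # bias-correction intercept and slope
shape = 8.0                                  # gamma shape (controls skewness)

def predictive_pdf(y):
    """Evaluate the BMA mixture density at wind speed y (m/s)."""
    means = a + b * member_forecasts         # bias-corrected member means
    scales = means / shape                   # gamma scale so that mean = shape * scale
    components = stats.gamma.pdf(y, shape, scale=scales)
    return float(np.dot(weights, components))

print(predictive_pdf(4.5))
```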
Mediation is one of the few mechanisms the international community can deploy to affect civil wars. This article introduces a dataset on mediation in civil wars, the Civil War Mediation (CWM) dataset, the first dataset to focus solely on civil war mediation. These data contribute to the present state of quantitative research on mediation in three important respects: they cover the period 1946–2004, are organized both by mediation case and by civil war episode, and provide detailed information about mediation incidences. The article first presents a few variables included in the dataset that are motivated by theoretical arguments from the literature. After a presentation of summary statistics, attention turns to using the CWM data to explore the determinants of mediation. Mediation is shown to be a function of war type (territorial and internationalized wars are more likely to be mediated), war duration (the longer the war, the higher the probability of mediation), supply-side factors (the number of democracies in the world and the global polity average), and stratum (subsequent wars are less likely to be mediated). Battle-related deaths also seem to increase the chances of mediation, though the relationship is only weakly significant. The article concludes with suggestions for future research that can benefit from the dataset.
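As a hypothetical illustration of the kind of model behind such determinants, here is a logit of mediation occurrence on episode-level covariates fit to simulated data; the variable names, coefficients, and data are invented and are not the CWM data or the article's specification:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 500
territorial = rng.integers(0, 2, n)         # war type (hypothetical)
duration = rng.exponential(5.0, n)          # war duration in years (hypothetical)
democracies = rng.normal(60, 10, n)         # supply-side: democracies in the system
subsequent = rng.integers(0, 2, n)          # stratum: subsequent war (hypothetical)

# Simulate mediation from an assumed logit model, then recover it by ML.
logit = -2.0 + 0.8 * territorial + 0.1 * duration + 0.02 * democracies - 0.5 * subsequent
mediated = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

X = sm.add_constant(np.column_stack([territorial, duration, democracies, subsequent]))
result = sm.Logit(mediated, X).fit(disp=0)
print(result.params.round(2))
```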
This paper proposes a new class of asymmetric Student-t (AST) distributions, investigates its properties, gives procedures for estimation, and indicates applications in financial econometrics. We derive analytical expressions for the cdf, quantile function, moments, and quantities useful in financial econometric applications such as the Expected Shortfall. A stochastic representation of the distribution is also given. Although the AST density does not satisfy the usual regularity conditions for maximum likelihood estimation, we establish consistency, asymptotic normality and efficiency of ML estimators and derive an explicit analytical expression for the asymptotic covariance matrix. A Monte Carlo study indicates generally good finite-sample conformity with these asymptotic properties.
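A minimal sketch of a generic two-piece Student-t density with distinct left- and right-tail degrees of freedom, with Expected Shortfall estimated by Monte Carlo; this construction is for illustration only and is not the paper's exact AST parameterization or its analytical ES expression:

```python
import numpy as np
from scipy import stats

# Hypothetical location, scales, and tail parameters.
mu, s_left, s_right = 0.0, 1.0, 1.5
nu_left, nu_right = 3.0, 8.0

K = lambda nu: stats.t.pdf(0.0, nu)          # Student-t density at zero
A = 2.0 / (s_left + s_right * K(nu_left) / K(nu_right))
B = A * K(nu_left) / K(nu_right)             # matches the two pieces at the mode
w_left = A * s_left / 2.0                    # probability mass left of the mode

def two_piece_pdf(y):
    """Two-piece Student-t density, continuous at the mode mu (illustrative)."""
    y = np.asarray(y, dtype=float)
    left = A * stats.t.pdf((y - mu) / s_left, nu_left)
    right = B * stats.t.pdf((y - mu) / s_right, nu_right)
    return np.where(y < mu, left, right)

def sample(n, rng):
    """Draw via a stochastic (mixture of half-t) representation."""
    go_left = rng.random(n) < w_left
    t_l = mu - np.abs(rng.standard_t(nu_left, n)) * s_left
    t_r = mu + np.abs(rng.standard_t(nu_right, n)) * s_right
    return np.where(go_left, t_l, t_r)

rng = np.random.default_rng(0)
draws = sample(200_000, rng)
var_5 = np.quantile(draws, 0.05)             # 5% quantile (Value-at-Risk)
es_5 = draws[draws <= var_5].mean()          # 5% Expected Shortfall by simulation
print(round(float(two_piece_pdf(-1.0)), 3), round(var_5, 3), round(es_5, 3))
```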
Bayesian hierarchical models are attractive structures for conducting regression analyses when the data are subject to missingness. However, the requisite probability calculus is challenging and Monte Carlo methods typically are employed. We develop an alternative approach based on deterministic variational Bayes approximations. Both parametric and nonparametric regression are considered. Attention is restricted to the more challenging case of missing predictor data. We demonstrate that variational Bayes can achieve good accuracy, but with considerably less computational overhead. The main ramification is fast approximate Bayesian inference in parametric and nonparametric regression models with missing data. Supplemental materials accompany the online version of this article.
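To convey the deterministic, iterative character of the approximation, here is a minimal mean-field variational Bayes (coordinate-ascent) sketch for a conjugate Gaussian linear regression with fully observed predictors, a far simpler setting than the missing-predictor models treated in the article; the priors, hyperparameters, and data are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 200, 3
X = rng.normal(size=(n, p))
beta_true = np.array([1.0, -2.0, 0.5])
y = X @ beta_true + rng.normal(scale=0.5, size=n)

alpha = 1.0                    # prior precision on the coefficients (assumed)
a0, b0 = 1e-2, 1e-2            # Gamma prior on the noise precision tau (assumed)
E_tau = 1.0                    # initial guess for E_q[tau]

XtX, Xty = X.T @ X, X.T @ y
for _ in range(50):            # coordinate-ascent variational updates
    S = np.linalg.inv(alpha * np.eye(p) + E_tau * XtX)    # q(beta) covariance
    m = E_tau * S @ Xty                                    # q(beta) mean
    resid = y - X @ m
    a_n = a0 + 0.5 * n
    b_n = b0 + 0.5 * (resid @ resid + np.trace(XtX @ S))   # q(tau) rate
    E_tau = a_n / b_n

print("approximate posterior mean of beta:", np.round(m, 2))
print("approximate posterior mean of tau :", round(E_tau, 2))
```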
This paper illustrates the development of an object-oriented Bayesian network (OOBN) to integrate the safety risks contributing to an in-flight loss-of-control aviation accident. With the creation of a probabilistic model, inferences about changes to the states of the accident shaping or causal factors can be drawn quantitatively. These predictive safety inferences derive from qualitative reasoning to conclusions based on data, assumptions, and/or premises, and they enable an analyst to identify the most prominent causal factors, leading to a risk factor prioritization. Such an approach facilitates a mitigation portfolio study and assessment. The model also facilitates the computation of sensitivity values based on perturbations to the estimates in the conditional probability tables; such computations identify the causal factors to which the accident probability is most sensitive. This approach may also lead to the discovery of vulnerabilities from emerging causal factors for which mitigations do not yet exist, which in turn informs possible future R&D efforts. To illustrate the benefits of an OOBN in a large and complex aviation accident model, the in-flight loss-of-control accident framework model is presented.
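A toy sketch of the kind of inference and sensitivity computation a Bayesian network supports, using a hypothetical two-parent structure and made-up conditional probability tables rather than the article's in-flight loss-of-control model:

```python
# Hypothetical two-parent network: SystemFailure, CrewError -> LossOfControl.
p_system_failure = 0.02
p_crew_error = 0.05

# P(LossOfControl = True | SystemFailure, CrewError) -- made-up CPT entries.
cpt_loc = {
    (True, True): 0.90,
    (True, False): 0.40,
    (False, True): 0.20,
    (False, False): 0.001,
}

def p_loss_of_control(cpt):
    """Marginal P(LossOfControl) by enumerating over the parent states."""
    total = 0.0
    for sf in (True, False):
        for ce in (True, False):
            p_sf = p_system_failure if sf else 1 - p_system_failure
            p_ce = p_crew_error if ce else 1 - p_crew_error
            total += p_sf * p_ce * cpt[(sf, ce)]
    return total

base = p_loss_of_control(cpt_loc)

# Crude sensitivity value: perturb one CPT entry and compare the marginals.
perturbed = dict(cpt_loc, **{(False, True): 0.25})
print(round(base, 5), round(p_loss_of_control(perturbed) - base, 5))
```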
This paper presents a new estimator for the mixed proportional hazard model that allows for a nonparametric baseline hazard and time-varying regressors. In particular, the paper allows for discrete measurement of the durations, as often happens in practice. The integrated baseline hazard and all parameters are estimated at the regular rate, √N, where N is the number of individuals. A hazard model is a natural framework for time-varying regressors. In particular, if a flow or a transition probability depends on a regressor that changes with time, a hazard model avoids the curse of dimensionality that would arise from interacting the regressors at each point in time with one another. This paper also presents a new test to detect unobserved heterogeneity.
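A minimal sketch of a discrete-time proportional-hazard log-likelihood with a piecewise-constant integrated baseline and time-varying regressors, using the complementary log-log form implied by grouping a continuous-time hazard into intervals; it illustrates the setting only and is not the paper's new estimator, and all names and data are hypothetical:

```python
import numpy as np

def hazard(gamma_t, x_t, beta):
    """Discrete-period hazard from integrated baseline gamma_t and index x_t'beta."""
    return 1.0 - np.exp(-gamma_t * np.exp(x_t @ beta))

def log_likelihood(durations, exited, X, gamma, beta):
    """durations[i]: observed periods for person i; exited[i]: 1 if the spell
    ended (not censored); X[i, t, :]: regressors for person i in period t."""
    ll = 0.0
    for i, T in enumerate(durations):
        for t in range(T):
            h = hazard(gamma[t], X[i, t], beta)
            if t == T - 1 and exited[i]:
                ll += np.log(h)          # exit in the final observed period
            else:
                ll += np.log(1.0 - h)    # survive the period
    return ll

# Tiny hypothetical example: 2 people, up to 3 periods, 2 regressors.
X = np.zeros((2, 3, 2))
X[0, :, 0] = [0.1, 0.2, 0.3]             # a time-varying covariate
X[1, :, 1] = 1.0                         # a fixed covariate
gamma = np.array([0.2, 0.3, 0.4])        # integrated baseline hazard per period
beta = np.array([0.5, -0.3])
print(log_likelihood([3, 2], [1, 0], X, gamma, beta))
```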