Stein's method offers a novel way of evaluating the quality of normal approximations. This volume provides thorough coverage of the method's fundamentals and includes a large number of recent developments in both theory and applications.
The problem of forecasting the appearance of new strong sectors in a region is considered. Using methods of probabilistic and statistical modeling, a model is built that makes it possible to assess the occurrence probability of a new strong sector in the region, taking into account the structure of its economy. The possibility of building such a model rests on the assumption that the appearance and development of sectors are largely driven by the evolution of past economic activity. The model uses indicators, introduced by the authors, of the nesting of structures of strong sectors of regional economies. These values are based on the probabilistic interpretation and properties of the elements of the matrix used to assess economic complexity, in accordance with the traditional approach. A condition is derived for the appearance of a certain strong sector in the structure of the economy of a particular region with a probability exceeding 0.5. This condition is used to form a list of sectors recommended for priority development in the region. For each region, the probability that a specific sector occurs as a strong one in its structure was estimated. By ordering the sectors by these probabilities and assessing their potential contribution to socio-economic development, an expert assessment of the feasibility of developing a new strong sector in the region can be given. The research contributes to the development of theories of localized specialization and economic diversification.
Extant research suggests that the desirability of an outcome influences the way an individual makes a prediction. The current research investigated how an outcome's desirability influences the extent to which an individual evaluates its probability when making a prediction. Two studies were conducted using a single binary prediction based on the urn model. Individuals predicted which color, red or blue, a ball drawn from a bag would be, while being aware of the proportion of each color in the bag. The results of the first study indicated that individuals predicted the more probable outcome regardless of the probabilities of the two outcomes. However, when the less probable outcome was more desirable, the proportion of predictions became significantly correlated with, and better calibrated to, the actual probability. This result was interpreted as showing that, when motivated to predict the more desirable but less probable outcome, individuals evaluate its probability more effortfully. This interpretation was tested in the second study. When the probability-matching motivation was implemented, the proportion of individuals who predicted the less probable outcome increased significantly. However, when the less probable outcome was more desirable, the same motivation did not significantly increase the proportion of such individuals. Taken together, these results imply that individuals likely process the same probability information differently based on whether this information is useful for predicting a desirable outcome. KEYWORDS: cognitive effort; choice making; binary choice; binary prediction; urn model; probability matching; optimistic bias
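The contrast between always predicting the majority color and probability matching in the urn task can be made concrete with a short sketch; the bag proportion p = 0.7 below is hypothetical:

```python
def maximizer_accuracy(p):
    """Always predict the more probable color: expected accuracy equals p."""
    return p

def matcher_accuracy(p):
    """Probability matching: predict each color with its own probability.
    Expected accuracy is p^2 + (1 - p)^2, which is below p for 0.5 < p < 1."""
    return p ** 2 + (1 - p) ** 2

p = 0.7  # hypothetical proportion of red balls in the bag
print(maximizer_accuracy(p))          # 0.7
print(round(matcher_accuracy(p), 2))  # 0.58
```

The gap (0.70 versus 0.58 here) is why predicting the more probable outcome is the normatively better strategy that the first study's participants followed.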
Combining probability forecasts Ranjan, Roopesh; Gneiting, Tilmann
Journal of the Royal Statistical Society. Series B, Statistical methodology,
January 2010, Volume: 72, Issue: 1
Journal Article
Peer-reviewed
Open access
Linear pooling is by far the most popular method for combining probability forecasts. However, any non-trivial weighted average of two or more distinct, calibrated probability forecasts is necessarily uncalibrated and lacks sharpness. In view of this, linear pooling requires recalibration, even in the ideal case in which the individual forecasts are calibrated. Towards this end, we propose a beta-transformed linear opinion pool for the aggregation of probability forecasts from distinct, calibrated or uncalibrated sources. The method fits an optimal non-linearly recalibrated forecast combination by compositing the traditional linear opinion pool with a beta transform. The technique is illustrated in a simulation example and in a case study on statistical and National Weather Service probability of precipitation forecasts.
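A minimal sketch of the idea, fixing the beta parameters at the illustrative choice α = β = 2 (the paper fits them optimally to data), so the beta CDF has the closed form I_x(2, 2) = 3x² − 2x³; the forecaster probabilities and weights are hypothetical:

```python
def linear_pool(probs, weights):
    """Traditional linear opinion pool: weighted average of forecast probabilities."""
    return sum(w * p for w, p in zip(weights, probs))

def beta_cdf_2_2(x):
    """Beta(2, 2) CDF in closed form: I_x(2, 2) = 3x^2 - 2x^3.
    A fixed illustrative choice; the method fits (alpha, beta) optimally."""
    return 3 * x ** 2 - 2 * x ** 3

def beta_transformed_pool(probs, weights):
    """Beta-transformed linear opinion pool: pass the pooled probability
    through a beta CDF to recalibrate it."""
    return beta_cdf_2_2(linear_pool(probs, weights))

# two forecasters with equal weights (hypothetical values)
pooled = linear_pool([0.9, 0.6], [0.5, 0.5])            # 0.75
recalibrated = beta_transformed_pool([0.9, 0.6], [0.5, 0.5])  # 0.84375
```

With α = β = 2 the transform pushes pooled probabilities away from 1/2, counteracting the loss of sharpness that plain averaging induces.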
We study the asymptotic behaviour of the maximum interpoint distance of random points in a d-dimensional ellipsoid with a unique major axis. Instead of investigating only a fixed number n of points as n tends to ∞, we consider the much more general setting in which the random points are the supports of appropriately defined Poisson processes. Our main result covers the case of uniformly distributed points.
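In the simplest setting, uniform points in a fixed two-dimensional ellipse, the convergence of the maximum interpoint distance toward the major-axis length can be observed by simulation; the axis lengths and sample size below are illustrative only:

```python
import math
import random

def uniform_in_ellipse(a, b):
    """Rejection-sample a point uniformly from the ellipse x^2/a^2 + y^2/b^2 <= 1."""
    while True:
        x, y = random.uniform(-a, a), random.uniform(-b, b)
        if (x / a) ** 2 + (y / b) ** 2 <= 1:
            return (x, y)

def max_interpoint_distance(points):
    """Largest pairwise Euclidean distance (the diameter of the point set)."""
    return max(math.dist(p, q)
               for i, p in enumerate(points) for q in points[i + 1:])

random.seed(0)
pts = [uniform_in_ellipse(3.0, 1.0) for _ in range(200)]  # unique major axis: a > b
d = max_interpoint_distance(pts)
# d is bounded by the major-axis length 2a = 6 and approaches it as n grows
```

The bound d ≤ 2a holds deterministically; the asymptotic fluctuations of 2a − d are what the paper's limit theorems describe.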
Scoring rules assess the quality of probabilistic forecasts, by assigning a numerical score based on the predictive distribution and on the event or value that materializes. A scoring rule is proper if the forecaster maximizes the expected score for an observation drawn from the distribution F if he or she issues the probabilistic forecast F, rather than G ≠ F. It is strictly proper if the maximum is unique. In prediction problems, proper scoring rules encourage the forecaster to make careful assessments and to be honest. In estimation problems, strictly proper scoring rules provide attractive loss and utility functions that can be tailored to the problem at hand. This article reviews and develops the theory of proper scoring rules on general probability spaces, and proposes and discusses examples thereof. Proper scoring rules derive from convex functions and relate to information measures, entropy functions, and Bregman divergences. In the case of categorical variables, we prove a rigorous version of the Savage representation. Examples of scoring rules for probabilistic forecasts in the form of predictive densities include the logarithmic, spherical, pseudospherical, and quadratic scores. The continuous ranked probability score applies to probabilistic forecasts that take the form of predictive cumulative distribution functions. It generalizes the absolute error and forms a special case of a new and very general type of score, the energy score. Like many other scoring rules, the energy score admits a kernel representation in terms of negative definite functions, with links to inequalities of Hoeffding type, in both univariate and multivariate settings. Proper scoring rules for quantile and interval forecasts are also discussed. We relate proper scoring rules to Bayes factors and to cross-validation, and propose a novel form of cross-validation known as random-fold cross-validation.
A case study on probabilistic weather forecasts in the North American Pacific Northwest illustrates the importance of propriety. We note optimum score approaches to point and quantile estimation, and propose the intuitively appealing interval score as a utility function in interval estimation that addresses width as well as coverage.
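As a minimal illustration of propriety for a binary event (phrased here with negatively oriented losses rather than the positively oriented scores of the abstract), the expected quadratic (Brier) loss is minimized exactly when the reported probability equals the true one; the true probability 0.3 below is hypothetical:

```python
import math

def brier_score(p, outcome):
    """Quadratic (Brier) loss for forecast probability p of the event {outcome == 1}.
    Negatively oriented: smaller is better."""
    return (p - outcome) ** 2

def log_score(p, outcome):
    """Logarithmic loss; like the Brier loss, it is strictly proper."""
    return -math.log(p if outcome == 1 else 1 - p)

def expected_brier(forecast, true_p):
    """Expected Brier loss when the event occurs with probability true_p."""
    return true_p * brier_score(forecast, 1) + (1 - true_p) * brier_score(forecast, 0)

# propriety: the expected loss is minimized by reporting the true probability
true_p = 0.3
losses = {q / 10: expected_brier(q / 10, true_p) for q in range(1, 10)}
best = min(losses, key=losses.get)
print(best)  # 0.3
```

A strictly proper rule makes honest reporting the unique optimum, which is exactly the incentive property the article develops in general probability spaces.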
Probabilistic forecasts, calibration and sharpness Gneiting, Tilmann; Balabdaoui, Fadoua; Raftery, Adrian E
Journal of the Royal Statistical Society. Series B, Statistical methodology,
April 2007, Volume: 69, Issue: 2
Journal Article
Peer-reviewed
Open access
Probabilistic forecasts of continuous variables take the form of predictive densities or predictive cumulative distribution functions. We propose a diagnostic approach to the evaluation of predictive performance that is based on the paradigm of maximizing the sharpness of the predictive distributions subject to calibration. Calibration refers to the statistical consistency between the distributional forecasts and the observations and is a joint property of the predictions and the events that materialize. Sharpness refers to the concentration of the predictive distributions and is a property of the forecasts only. A simple theoretical framework allows us to distinguish between probabilistic calibration, exceedance calibration and marginal calibration. We propose and study tools for checking calibration and sharpness, among them the probability integral transform histogram, marginal calibration plots, the sharpness diagram and proper scoring rules. The diagnostic approach is illustrated by an assessment and ranking of probabilistic forecasts of wind speed at the Stateline wind energy centre in the US Pacific Northwest. In combination with cross-validation or in the time series context, our proposal provides very general, nonparametric alternatives to the use of information criteria for model diagnostics and model selection.
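A minimal sketch of the probability integral transform check: for an ideal forecaster whose predictive CDF matches the data-generating distribution, the PIT values are approximately uniform on [0, 1]. The standard-normal setup below is an assumption for illustration:

```python
import math
import random
import statistics

def pit_values(forecast_cdf, observations):
    """Probability integral transform: evaluate each predictive CDF at the
    realized observation. Under probabilistic calibration the PIT values
    are approximately uniform on [0, 1]."""
    return [forecast_cdf(y) for y in observations]

def normal_cdf(y):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(y / math.sqrt(2)))

random.seed(1)
# hypothetical ideal forecaster: predictive distribution N(0, 1), data drawn from it
obs = [random.gauss(0, 1) for _ in range(2000)]
pit = pit_values(normal_cdf, obs)
# a uniform sample has mean 1/2 and variance 1/12 (about 0.083)
print(round(statistics.mean(pit), 2), round(statistics.variance(pit), 3))
```

A U-shaped PIT histogram would instead signal underdispersed forecasts, and a hump-shaped one overdispersed forecasts, which is how the diagnostic is read in practice.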
This study presents a novel algorithm for image classification based on a quasi-Bayesian approach and the extraction of probability density functions (PDFs). First, representative PDFs are extracted from each image using its features. Next, a measure is developed to evaluate the similarity between the extracted PDFs. Finally, an algorithm is established for determining prior probabilities using fuzzy clustering techniques. By combining these improvements, we develop a more efficient algorithm for classifying image data. An image is assigned to a specific group if it has the highest value of prior probability and a similar level to that group. We explain the proposed algorithm step-by-step with a numerical example and clearly demonstrate its convergence. When applied to multiple image datasets, the proposed algorithm has shown stability and efficiency, outperforming many other statistical and machine learning methods. Additionally, we have developed a Matlab procedure to apply the proposed algorithm to real image datasets. These applications demonstrate the potential of the research for various fields related to the digital revolution and artificial intelligence.
This paper proposes a unified and efficient direct probability integral method (DPIM) to calculate the probability density function (PDF) of responses for linear and nonlinear stochastic structures under static and dynamic loads. Firstly, based on the principle of probability conservation, the probability density integral equation (PDIE) equivalent to the probability density differential equation is derived for a stochastic system. We highlight that, for a time-dependent stochastic system, the PDIE is satisfied at each time instant. Secondly, the novel DPIM is proposed to solve the PDIE directly by means of the point selection technique based on generalized F discrepancy and the smoothing of the Dirac delta function. Moreover, the differences and connections among the DPIM, the existing probability density evolution method, and the probability transformation method are examined. Finally, four typical examples of stochastic response analysis, including linear and nonlinear systems subjected to static and dynamic loads, demonstrate the high computational efficiency and accuracy of the proposed DPIM.
• Direct probability integral method is proposed based on probability conservation.
• Probability density integral equation is satisfied at each instant of a dynamic system.
• Typical examples indicate high efficiency and accuracy of the proposed method.
• The present method facilitates stochastic response analysis of general systems.