We establish a general existence and uniqueness result for integrable adapted solutions to scalar backward stochastic differential equations with integrable parameters, where the generator g satisfies an iterated-logarithmic uniform continuity condition in the second unknown variable z. The result improves our previous one in [12].
Elicitation is a key task for subjectivist Bayesians. Although skeptics hold that elicitation cannot (or perhaps should not) be done, in practice it brings statisticians closer to their clients and subject-matter expert colleagues. This article reviews the state of the art, reflecting the experience of statisticians informed by the fruits of a long line of psychological research into how people represent uncertain information cognitively and how they respond to questions about that information. In a discussion of the elicitation process, the first issue to address is what it means for an elicitation to be successful; that is, what criteria should be used. Our answer is that a successful elicitation faithfully represents the opinion of the person being elicited. It is not necessarily "true" in some objectivistic sense, and cannot be judged that way. We see elicitation as simply part of the process of statistical modeling. Indeed, in a hierarchical model, the point at which the likelihood ends and the prior begins is ambiguous. Thus the same kinds of judgment that inform statistical modeling in general also inform the elicitation of prior distributions. The psychological literature suggests that people are prone to certain heuristics and biases in how they respond to situations involving uncertainty. As a result, some ways of asking questions about uncertain quantities are preferable to others, and appear to be more reliable. However, data are lacking on exactly how well the various methods work, because it is unclear, other than by asking via some elicitation method, just what the person actually believes. Consequently, one is reduced to indirect means of assessing elicitation methods. The tool chest of methods is growing. Historically, the first methods involved choosing hyperparameters of conjugate prior families, at a time when these were the only families for which posterior distributions could be computed.
Modern computational methods, such as Markov chain Monte Carlo, have freed elicitation from this constraint. As a result, now both parametric and nonparametric methods are available for low-dimensional problems. High-dimensional problems are probably best thought of as lacking another hierarchical level, which has the effect of reducing the as-yet-unelicited parameter space. Special considerations apply to the elicitation of group opinions. Informal methods, such as Delphi, encourage the participants to discuss the issue in the hope of reaching consensus. Formal methods, such as weighted averages or logarithmic opinion pools, each have mathematical characteristics that are uncomfortable. Finally, there is the question of what a group opinion even means, because it is not necessarily the opinion of any participant.
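The conjugate-prior approach mentioned above can be made concrete with a small sketch: eliciting a Beta prior for a proportion from a best guess (the prior mean) and an "equivalent sample size" expressing how many observations the expert's opinion is worth. The parameterization and function name here are illustrative choices, not a method prescribed by the article.

```python
def beta_from_mean_ess(mean, ess):
    """Beta(a, b) hyperparameters from an elicited mean and an
    'equivalent sample size' (roughly, how many observations the
    expert's opinion is worth). One common elicitation device."""
    if not (0.0 < mean < 1.0) or ess <= 0:
        raise ValueError("need 0 < mean < 1 and ess > 0")
    a = mean * ess
    b = (1.0 - mean) * ess
    return a, b

# expert: best guess 0.2 for the proportion, opinion worth ~10 observations
a, b = beta_from_mean_ess(0.2, 10.0)   # -> (2.0, 8.0)

# conjugate update with data: 7 successes in 30 trials
a_post, b_post = a + 7, b + (30 - 7)   # -> Beta(9, 31) posterior
```

The same mean/ESS device extends to other conjugate families (e.g. a Gamma prior for a Poisson rate), which is what made these families attractive before Markov chain Monte Carlo removed the computational constraint.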
We study the convergence of Langevin simulated-annealing type algorithms with multiplicative noise, i.e. for $V : \mathbb{R}^d \to \mathbb{R}$ a potential function to minimize, we consider the stochastic differential equation $dY_t = -\sigma\sigma^\top \nabla V(Y_t)\,dt + a(t)\sigma(Y_t)\,dW_t + a(t)^2\Upsilon(Y_t)\,dt$, where $(W_t)$ is a Brownian motion, $\sigma : \mathbb{R}^d \to \mathcal{M}_d(\mathbb{R})$ is an adaptive (multiplicative) noise, $a : \mathbb{R}^+ \to \mathbb{R}^+$ is a function decreasing to $0$, and $\Upsilon$ is a correction term. This setting can be applied to optimization problems arising in machine learning; allowing $\sigma$ to depend on the position brings faster convergence in comparison with the classical Langevin equation $dY_t = -\nabla V(Y_t)\,dt + \sigma\,dW_t$. The case where $\sigma$ is a constant matrix has been extensively studied; however, little attention has been paid to the general case. We prove the convergence in $L^1$-Wasserstein distance of $Y_t$ and of the associated Euler scheme $\bar{Y}_t$ to some measure $\nu^\star$ supported by $\operatorname{argmin}(V)$, and give rates of convergence to the instantaneous Gibbs measure $\nu_{a(t)}$ of density $\propto \exp(-2V(x)/a(t)^2)$. To do so, we first consider the case where $a$ is a piecewise constant function. We recover the classical schedule $a(t) = A\log^{-1/2}(t)$. We then prove the convergence for the general case by bounding the Wasserstein distance to the piecewise constant case using ergodicity properties.
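The scheme described in this abstract can be sketched in one dimension with a minimal Euler-Maruyama discretization using the schedule $a(t) = A\log^{-1/2}(t)$. The choice of correction term $\Upsilon(y) = \sigma(y)\sigma'(y)$ and the toy double-well potential are assumptions made for illustration; the paper's exact setting (matrix-valued $\sigma$, its precise $\Upsilon$) is more general.

```python
import numpy as np

def euler_langevin_sa(grad_V, sigma, d_sigma, y0, T, n_steps, A=1.0, t0=2.0, seed=0):
    """One-dimensional Euler-Maruyama scheme for
    dY = -sigma(Y)^2 V'(Y) dt + a(t) sigma(Y) dW + a(t)^2 Upsilon(Y) dt
    with schedule a(t) = A / sqrt(log t) and the correction term taken
    as Upsilon(y) = sigma(y) * sigma'(y) (one possible choice)."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    y = y0
    for k in range(n_steps):
        t = t0 + k * dt                       # start at t0 > 1 so log t > 0
        a = A / np.sqrt(np.log(t))
        drift = -sigma(y) ** 2 * grad_V(y) + a ** 2 * sigma(y) * d_sigma(y)
        y = y + drift * dt + a * sigma(y) * np.sqrt(dt) * rng.standard_normal()
    return y

# toy double-well potential V(y) = (y^2 - 1)^2 with minima at y = +/- 1
y_end = euler_langevin_sa(
    grad_V=lambda y: 4.0 * y * (y ** 2 - 1.0),
    sigma=lambda y: 1.0 + 0.1 * np.tanh(y),    # position-dependent noise
    d_sigma=lambda y: 0.1 / np.cosh(y) ** 2,
    y0=3.0, T=200.0, n_steps=20_000,
)
```

As the noise level $a(t)$ decays, the iterate settles near one of the two minima of $V$; with a constant $\sigma$ the same loop reduces to classical Langevin simulated annealing.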
Making and Evaluating Point Forecasts
Gneiting, Tilmann
Journal of the American Statistical Association, 06/2011, Volume 106, Issue 494
Journal Article
Peer-reviewed
Open access
Typically, point forecasting methods are compared and assessed by means of an error measure or scoring function, with the absolute error and the squared error being key examples. The individual scores are averaged over forecast cases to yield a summary measure of predictive performance, such as the mean absolute error or the mean squared error. I demonstrate that this common practice can lead to grossly misguided inferences, unless the scoring function and the forecasting task are carefully matched. Effective point forecasting requires that the scoring function be specified ex ante, or that the forecaster receives a directive in the form of a statistical functional, such as the mean or a quantile of the predictive distribution. If the scoring function is specified ex ante, the forecaster can issue the optimal point forecast, namely, the Bayes rule. If the forecaster receives a directive in the form of a functional, it is critical that the scoring function be consistent for it, in the sense that the expected score is minimized when following the directive. A functional is elicitable if there exists a scoring function that is strictly consistent for it. Expectations, ratios of expectations and quantiles are elicitable. For example, a scoring function is consistent for the mean functional if and only if it is a Bregman function. It is consistent for a quantile if and only if it is generalized piecewise linear. Similar characterizations apply to ratios of expectations and to expectiles. Weighted scoring functions are consistent for functionals that adapt to the weighting in peculiar ways. Not all functionals are elicitable; for instance, conditional value-at-risk is not, despite its popularity in quantitative finance.
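The central point of this abstract, that the scoring function must be matched to the target functional, can be illustrated numerically: under a skewed predictive distribution, the forecast that minimizes expected squared error is (approximately) the mean, while the one minimizing expected absolute error is the median. This is an illustrative sketch, not code from the article.

```python
import numpy as np

# skewed predictive distribution: lognormal with median 1 and mean e^{1/2} ~ 1.65
rng = np.random.default_rng(0)
sample = rng.lognormal(mean=0.0, sigma=1.0, size=200_000)

# grid search for the point forecast minimizing each expected score
grid = np.linspace(0.1, 3.0, 300)
best_sq = min(grid, key=lambda p: float(np.mean((p - sample) ** 2)))    # -> near the mean
best_abs = min(grid, key=lambda p: float(np.mean(np.abs(p - sample))))  # -> near the median
```

Because the distribution is skewed, the two optima differ by roughly 65%: reporting a squared-error-optimal forecast and then judging it by mean absolute error (or vice versa) is exactly the mismatch the paper warns against.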
The usual theory of asset pricing in finance assumes that the financial strategies, i.e. the quantities of risky assets to invest, are real-valued, and hence not integer-valued in general; see the Black and Scholes model for instance. This is clearly contrary to what is actually possible in the real world. Surprisingly, it seems that there is no contribution in that direction in the literature. In this paper, we show that, in discrete time, it is possible to evaluate the minimal super-hedging price when we restrict ourselves to integer-valued strategies. To do so, we only consider terminal claims that are continuous piecewise affine functions of the underlying asset. We formulate a dynamic programming principle that can be directly implemented on historical data and which also provides the optimal integer-valued strategy.
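A minimal sketch of the one-period version of this problem: the super-hedging price is the smallest p such that some integer position q satisfies p + q(s - s0) >= payoff(s) for every scenario s. The brute-force search over a bounded integer range is an assumption made for illustration; the paper's dynamic programming principle handles the multi-period case.

```python
def super_hedge_price(payoff, s0, scenarios, q_range=range(-50, 51)):
    """Minimal one-period super-hedging price with an integer position q:
    p(q) = max_s [payoff(s) - q * (s - s0)], minimized over integer q.
    Brute force over a bounded range (illustrative assumption)."""
    best_price, best_q = None, None
    for q in q_range:
        p = max(payoff(s) - q * (s - s0) for s in scenarios)
        if best_price is None or p < best_price:
            best_price, best_q = p, q
    return best_price, best_q

# call struck at 100 on an asset at s0 = 100; a real-valued hedge (q = 0.5)
# would give price 10, but restricting to integer positions raises the price
price, q = super_hedge_price(lambda s: max(s - 100.0, 0.0),
                             s0=100.0, scenarios=[80.0, 100.0, 120.0])
```

The gap between the integer-constrained price (20 here) and the unconstrained one (10) is the cost of the realism the paper studies: piecewise affine claims keep the inner maximum tractable scenario by scenario.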
Imaginary geometry I: interacting SLEs
Miller, Jason; Sheffield, Scott
Probability theory and related fields, 04/2016, Volume 164, Issue 3-4
Journal Article
Peer-reviewed
Open access
Fix constants $\chi > 0$ and $\theta \in [0, 2\pi)$, and let $h$ be an instance of the Gaussian free field on a planar domain. We study flow lines of the vector field $e^{i(h/\chi + \theta)}$ starting at a fixed boundary point of the domain. Letting $\theta$ vary, one obtains a family of curves that look locally like $\mathrm{SLE}_\kappa$ processes with $\kappa \in (0,4)$ (where $\chi = \tfrac{2}{\sqrt{\kappa}} - \tfrac{\sqrt{\kappa}}{2}$), which we interpret as the rays of a random geometry with purely imaginary curvature. We extend the fundamental existence and uniqueness results about these paths to the case that the paths intersect the boundary. We also show that flow lines of different angles cross each other at most once but (in contrast to what happens when $h$ is smooth) may bounce off of each other after crossing. Flow lines of the same angle started at different points merge into each other upon intersecting, forming a tree structure. We construct so-called counterflow lines ($\mathrm{SLE}_{16/\kappa}$) within the same geometry using ordered "light cones" of points accessible by angle-restricted trajectories and develop a robust theory of flow and counterflow line interaction. The theory leads to new results about SLE. For example, we prove that $\mathrm{SLE}_\kappa(\rho)$ processes are almost surely continuous random curves, even when they intersect the boundary, and establish Duplantier duality for general $\mathrm{SLE}_{16/\kappa}(\rho)$ processes.
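The relation $\chi = \tfrac{2}{\sqrt{\kappa}} - \tfrac{\sqrt{\kappa}}{2}$ interacts neatly with the Duplantier duality $\kappa \leftrightarrow 16/\kappa$: a direct substitution shows that passing to the dual parameter exactly flips the sign of $\chi$, consistent with the pairing of flow lines ($\mathrm{SLE}_\kappa$, $\kappa \in (0,4)$) with counterflow lines ($\mathrm{SLE}_{16/\kappa}$) in the same geometry.

```latex
\chi(\kappa) \;=\; \frac{2}{\sqrt{\kappa}} - \frac{\sqrt{\kappa}}{2},
\qquad
\chi\!\left(\frac{16}{\kappa}\right)
  \;=\; \frac{2}{\sqrt{16/\kappa}} - \frac{\sqrt{16/\kappa}}{2}
  \;=\; \frac{\sqrt{\kappa}}{2} - \frac{2}{\sqrt{\kappa}}
  \;=\; -\,\chi(\kappa).
```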
Imprecise probabilities in engineering analyses
Beer, Michael; Ferson, Scott; Kreinovich, Vladik
Mechanical systems and signal processing, May-June 2013, Volume 37, Issue 1-2
Journal Article
Peer-reviewed
Open access
Probabilistic uncertainty and imprecision in structural parameters and in environmental conditions and loads are challenging phenomena in engineering analyses. They require appropriate mathematical ...modeling and quantification to obtain realistic results when predicting the behavior and reliability of engineering structures and systems. But the modeling and quantification is complicated by the characteristics of the available information, which involves, for example, sparse data, poor measurements and subjective information. This raises the question whether the available information is sufficient for probabilistic modeling or rather suggests a set-theoretical approach. The framework of imprecise probabilities provides a mathematical basis to deal with these problems which involve both probabilistic and non-probabilistic information. A common feature of the various concepts of imprecise probabilities is the consideration of an entire set of probabilistic models in one analysis. The theoretical differences between the concepts mainly concern the mathematical description of the set of probabilistic models and the connection to the probabilistic models involved. This paper provides an overview on developments which involve imprecise probabilities for the solution of engineering problems. Evidence theory, probability bounds analysis with p-boxes, and fuzzy probabilities are discussed with emphasis on their key features and on their relationships to one another.
This paper was especially prepared for this special issue and reflects, in various ways, the thinking and presentation preferences of the authors, who are also the guest editors for this special issue.
► This is an introductory overview of our special issue on imprecise probabilities. ► It highlights selected IP approaches with engineering applications. ► It is suitable for engineering and also non-engineering readers.
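The probability-bounds (p-box) idea surveyed in this overview can be sketched in a simple special case: bounding the CDF of a normal variable whose mean is only known to lie in an interval, with the standard deviation assumed precisely known. This is an illustrative assumption for the sketch, not a construction from the paper.

```python
import math

def norm_cdf(x, mu, sigma):
    """Normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def pbox_normal_interval_mean(x, mu_lo, mu_hi, sigma):
    """Lower/upper CDF bounds (a p-box) at x for a normal variable whose
    mean is only known to lie in [mu_lo, mu_hi]. The normal CDF is
    decreasing in mu, so the envelope is attained at the endpoints."""
    return norm_cdf(x, mu_hi, sigma), norm_cdf(x, mu_lo, sigma)

# mean known only to lie in [-1, 1], sigma = 1: bounds on P(X <= 0)
lo, hi = pbox_normal_interval_mean(0.0, mu_lo=-1.0, mu_hi=1.0, sigma=1.0)
```

The pair (lo, hi) traced over all x is the p-box: an entire set of probabilistic models carried through one analysis, which is the common feature of the imprecise-probability concepts the paper compares.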
We present the first global analysis of parton distribution functions (PDFs) at approximate N$^3$LO in the strong coupling constant $\alpha_s$, extending beyond the current highest NNLO achieved in PDF fits. To achieve this, we present a general formalism for the inclusion of theoretical uncertainties associated with the perturbative expansion in the strong coupling. We demonstrate how using the currently available knowledge surrounding the next highest order (N$^3$LO) in $\alpha_s$ can provide consistent, justifiable and explainable approximate N$^3$LO (aN$^3$LO) PDFs. This includes estimates for uncertainties due to the currently unknown N$^3$LO ingredients, but also implicitly some missing higher-order uncertainties (MHOUs) beyond these. Specifically, we approximate the splitting functions, transition matrix elements, coefficient functions and K-factors for multiple processes to N$^3$LO. Crucially, these are constrained to be consistent with the wide range of already available information about N$^3$LO so as to match the complete result at this order as accurately as possible. Using this approach we perform a fully consistent approximate N$^3$LO global fit within the MSHT framework. This relies on an extension of the Hessian procedure used in previous MSHT fits to allow for sources of theoretical uncertainty. These are included as nuisance parameters in a global fit, controlled by prior distributions based on knowledge and intuition. We analyse the differences between our aN$^3$LO PDFs and the standard NNLO PDF set, and study the impact of using aN$^3$LO PDFs on the LHC production of a Higgs boson at this order. Finally, we provide guidelines on how these PDFs should be used in phenomenological investigations.
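The nuisance-parameter mechanism described in this abstract can be sketched generically: theory predictions are shifted linearly by nuisance parameters with standard-normal priors, and the penalized chi-square is minimized over them. The linear-shift model, function names and the toy numbers are assumptions for illustration, not the MSHT implementation.

```python
import numpy as np

def chi2_with_nuisance(data, theory, shifts, sigma, nu):
    """Penalized chi^2 with theory nuisance parameters nu: the prediction
    is t(nu) = theory + shifts @ nu, and a standard-normal prior on each
    nu_k contributes the penalty |nu|^2 (a generic sketch)."""
    resid = (data - theory - shifts @ nu) / sigma
    return float(resid @ resid + nu @ nu)

def profile_nu(data, theory, shifts, sigma):
    """For linear shifts the penalized chi^2 is quadratic in nu, so the
    profiled (chi^2-minimizing) values solve (S^T S + I) nu = S^T r."""
    S = shifts / sigma[:, None]          # shift vectors in units of sigma
    r = (data - theory) / sigma          # residuals in units of sigma
    return np.linalg.solve(S.T @ S + np.eye(S.shape[1]), S.T @ r)

# toy fit: two data points, one nuisance parameter shifting both predictions
data = np.array([1.0, 2.0])
theory = np.zeros(2)
shifts = np.array([[1.0], [1.0]])
sigma = np.ones(2)
nu_hat = profile_nu(data, theory, shifts, sigma)
```

The prior penalty keeps the nuisance parameters from absorbing arbitrary data-theory disagreement, which is how "knowledge and intuition based" priors control the theoretical-uncertainty directions in the fit.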