The elicitation of scientific and technical judgments from experts, in the form of subjective probability distributions, can be a valuable addition to other forms of evidence in support of public policy decision making. This paper explores when it is sensible to perform such elicitation and how that can best be done. A number of key issues are discussed, including topics on which there are, and are not, experts who have knowledge that provides a basis for making informed predictive judgments; the inadequacy of only using qualitative uncertainty language; the role of cognitive heuristics and of overconfidence; the choice of experts; the development, refinement, and iterative testing of elicitation protocols that are designed to help experts to consider systematically all relevant knowledge when they make their judgments; the treatment of uncertainty about model functional form; diversity of expert opinion; and when it does or does not make sense to combine judgments from different experts. Although it may be tempting to view expert elicitation as a low-cost, low-effort alternative to conducting serious research and analysis, it is neither. Rather, expert elicitation should build on and use the best available research and analysis and be undertaken only when, given those, the state of knowledge will remain insufficient to support timely informed assessment and decision making.
We develop several statistical tests of the determinant of the diffusion coefficient of a stochastic differential equation, based on discrete observations on a time interval $[0, T]$ sampled with a time step ∆. Our main contribution is to control the Type I and Type II errors of the tests in a non-asymptotic setting, i.e. when the number of observations and the time step are fixed. The test statistics are calculated from the process increments. In dimension 1, the density of the test statistic is explicit. In dimension 2, the test statistic has no explicit density, but upper and lower bounds are proved. We also propose a multiple testing procedure in dimension greater than 2. Every test is proved to be of a given non-asymptotic level, and separability conditions to control their power are also provided. A numerical study illustrates the properties of the tests for stochastic processes with known or estimated drifts.
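The increment-based construction can be illustrated in dimension 1 with a minimal sketch (our own, not the paper's actual procedure): for $dX_t = b\,dt + \sigma\,dW_t$ observed with step ∆, the normalized sum of squared increments is approximately $\chi^2$-distributed under $H_0\colon \sigma = \sigma_0$, which yields an explicit non-asymptotic test. The function name and the drift-neglecting approximation are illustrative assumptions.

```python
import numpy as np
from scipy.stats import chi2

def variance_test(path, dt, sigma0, alpha=0.05):
    """Illustrative two-sided test of H0: sigma == sigma0 from increments.
    Neglects the drift, whose O(dt^2) contribution to a squared increment
    is small next to the O(dt) Brownian variance for small dt; the paper's
    tests treat the drift rigorously."""
    inc = np.diff(path)
    n = inc.size
    stat = np.sum(inc ** 2) / (sigma0 ** 2 * dt)       # ~ chi2(n) under H0
    lo, hi = chi2.ppf(alpha / 2, n), chi2.ppf(1 - alpha / 2, n)
    return stat, not (lo <= stat <= hi)                # (statistic, reject?)

rng = np.random.default_rng(0)
n, dt, sigma = 1000, 1e-3, 0.5
path = np.concatenate([[0.0], np.cumsum(sigma * np.sqrt(dt) * rng.standard_normal(n))])
stat, reject = variance_test(path, dt, sigma0=0.5)     # correctly specified
_, reject_wrong = variance_test(path, dt, sigma0=1.0)  # variance overstated
```

Under $H_0$ the statistic concentrates around $n$; a misspecified $\sigma_0$ pushes it out of the acceptance region, which is the non-asymptotic separation the paper quantifies.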
This volume provides an introduction to stochastic differential equations with jumps, in both theory and application. The book is accessible and contains many new results on numerical methods as well as innovative methodologies in quantitative finance.
Receiver operating characteristic (ROC) analysis is performed via the ROC curve, a plot of detection probability, $P_{\text{D}}$, versus false alarm probability, $P_{\text{F}}$, and has been widely used as an evaluation tool for signal detection. Specifically, the area under an ROC curve (AUC) is calculated and used as a detection measure. Unfortunately, finding the distributions of $P_{\text{D}}$ and $P_{\text{F}}$ needed to generate a continuous ROC curve is practically infeasible. This article investigates approaches to generating a discrete 2D ROC curve of ($P_{\text{D}}$, $P_{\text{F}}$) without appealing to probability distributions. Since $P_{\text{D}}$ and $P_{\text{F}}$ are determined by the same threshold $\tau$ specifying a detector, an ROC curve of ($P_{\text{D}}$, $P_{\text{F}}$) can only be used to evaluate the effectiveness of a detector, not its target detectability (TD) or background suppressibility (BS).
To address this issue, a 3D ROC curve is generated as a function of ($P_{\text{D}}$, $P_{\text{F}}$, $\tau$) by introducing the threshold $\tau$ as a third independent variable. As a result, a 3D ROC curve, along with its three derived 2D ROC curves of ($P_{\text{D}}$, $P_{\text{F}}$), ($P_{\text{D}}$, $\tau$), and ($P_{\text{F}}$, $\tau$), can further be used to design new quantitative measures that evaluate the effectiveness of a detector together with its TD and BS. To demonstrate the full utility of 3D ROC analysis in target detection, extensive experiments are performed on two types of targets, prior target detection and anomaly detection, conducting a comprehensive analysis of 3D ROC curves with the newly designed detection measures to evaluate target/anomaly detection performance.
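The discrete ($P_{\text{D}}$, $P_{\text{F}}$) construction can be sketched directly from detector scores: sweeping the threshold $\tau$ over the observed scores yields one ($P_{\text{F}}$, $P_{\text{D}}$) pair per threshold, and the AUC follows by the trapezoidal rule, with no distributional assumptions. This is a generic illustration, not the article's specific measures; the names are ours.

```python
import numpy as np

def roc_points(scores, labels):
    """Discrete ROC curve: sweep tau over the observed scores and record
    (P_F, P_D) for the detector 'declare a target when score >= tau'."""
    labels = np.asarray(labels, dtype=bool)
    taus = np.concatenate(([np.inf], np.sort(np.unique(scores))[::-1], [-np.inf]))
    pd_ = np.array([(scores[labels] >= t).mean() for t in taus])   # detection prob.
    pf = np.array([(scores[~labels] >= t).mean() for t in taus])   # false-alarm prob.
    return pf, pd_, taus

def auc(pf, pd_):
    """Area under the discrete (P_F, P_D) curve by the trapezoidal rule."""
    return float(np.sum((pf[1:] - pf[:-1]) * (pd_[1:] + pd_[:-1]) / 2))

scores = np.array([0.9, 0.8, 0.7, 0.3, 0.2, 0.1])   # toy detector outputs
labels = np.array([1, 1, 0, 1, 0, 0])               # 1 = target, 0 = background
pf, pd_, taus = roc_points(scores, labels)
area = auc(pf, pd_)
```

Because $\tau$ is retained alongside each pair, the same sweep also produces the ($P_{\text{D}}$, $\tau$) and ($P_{\text{F}}$, $\tau$) curves that the 3D analysis uses to separate TD from BS.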
We are interested in the time discretization of stochastic differential equations with additive $d$-dimensional Brownian noise and $L^{q}$–$L^{\rho}$ drift coefficient when the condition $\frac{d}{\rho} + \frac{2}{q} < 1$, under which Krylov and Röckner [26] proved existence of a unique strong solution, is met. We show weak convergence with order $\frac{1}{2}\left(1 - \left(\frac{d}{\rho} + \frac{2}{q}\right)\right)$, which corresponds to half the distance to the threshold, for the Euler scheme with randomized time variable and cutoffed drift coefficient, the cutoff chosen so that the drift contribution on each time step does not dominate the Brownian contribution. More precisely, we prove that both the diffusion and this Euler scheme admit transition densities and that the difference between these densities is bounded from above by the time step to this order multiplied by some centered Gaussian density.
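A minimal sketch of such a scheme (our own illustration, with a smooth Ornstein-Uhlenbeck drift standing in for a genuinely irregular $L^{q}$–$L^{\rho}$ one): each step evaluates the drift at a uniformly randomized time within the step and clips it at a level of order $h^{-1/2}$, so its contribution $h\,b$ stays of order $\sqrt{h}$, comparable to the Brownian increment.

```python
import numpy as np

def randomized_euler(b, x0, T, n, cutoff, rng):
    """Euler scheme for dX = b(t, X) dt + dW with a randomized time variable
    and a cutoffed drift: b is evaluated at a uniform time in each step and
    clipped at `cutoff` (of order 1/sqrt(h)), so the drift term h*b is at
    most of order sqrt(h), like the Brownian increment."""
    h = T / n
    x = np.array(x0, dtype=float)
    t = 0.0
    for _ in range(n):
        u = rng.uniform()                                # randomized evaluation time
        drift = np.clip(b(t + u * h, x), -cutoff, cutoff)
        x = x + drift * h + np.sqrt(h) * rng.standard_normal(x.shape)
        t += h
    return x

rng = np.random.default_rng(1)
# Ornstein-Uhlenbeck test case dX = -X dt + dW, X_0 = 1, so E[X_1] = e^{-1}.
xT = randomized_euler(lambda t, x: -x, np.ones(2000), T=1.0, n=100, cutoff=10.0, rng=rng)
mean_T = xT.mean()
```

For this smooth drift the cutoff is essentially inactive; its role in the paper is precisely to tame the irregular drift without destroying the weak convergence rate.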
We study the convergence rate of the optimal quantization for a probability measure sequence $(\mu_{n})_{n\in\mathbb{N}^{*}}$ on $\mathbb{R}^{d}$ converging in the Wasserstein distance in two aspects: the first one is the convergence rate of the optimal quantizer $x^{(n)}\in(\mathbb{R}^{d})^{K}$ of $\mu_{n}$ at level $K$; the other one is the convergence rate of the distortion function valued at $x^{(n)}$, called the "performance" of $x^{(n)}$. Moreover, we also study the mean performance of the optimal quantization for the empirical measure of a distribution $\mu$ with finite second moment but possibly unbounded support. As an application, we show that the mean performance for the empirical measure of the multidimensional normal distribution $\mathcal{N}(m, \Sigma)$ and of distributions with hyper-exponential tails behaves like $\mathcal{O}(\frac{\log n}{\sqrt{n}})$. This extends the results from [BDL08] obtained for compactly supported distributions. We also derive an upper bound which is sharper in the quantization level $K$ but suboptimal in $n$ by applying results in [FG15].
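For intuition, the distortion function and a Lloyd-type fixed-point iteration toward an optimal quantizer of an empirical measure can be sketched as follows (a generic $K$-means-style illustration, not the paper's method; the names are ours):

```python
import numpy as np

def distortion(quantizer, sample):
    """Empirical quadratic distortion of a quantizer x = (x_1, ..., x_K):
    the mean over the sample of the squared distance to the nearest codepoint."""
    d2 = ((sample[:, None, :] - quantizer[None, :, :]) ** 2).sum(-1)
    return d2.min(axis=1).mean()

def lloyd(sample, K, iters, rng):
    """Lloyd's iteration toward an optimal quantizer of the empirical measure:
    assign each point to its nearest codepoint, then move each codepoint to
    the centroid of its cell; each sweep cannot increase the distortion."""
    x = sample[rng.choice(len(sample), K, replace=False)]
    for _ in range(iters):
        idx = ((sample[:, None, :] - x[None, :, :]) ** 2).sum(-1).argmin(1)
        for k in range(K):
            pts = sample[idx == k]
            if len(pts):
                x[k] = pts.mean(0)
    return x

rng = np.random.default_rng(2)
sample = rng.standard_normal((500, 2))            # empirical measure of N(0, I_2)
quant = lloyd(sample, K=8, iters=20, rng=rng)
perf = distortion(quant, sample)                   # the "performance" of quant
baseline = distortion(sample.mean(0, keepdims=True), sample)   # K = 1 baseline
```

The quantity `perf`, computed here for one empirical measure, is the object whose mean over samples the paper shows to decay like $\mathcal{O}(\frac{\log n}{\sqrt{n}})$ above the optimal distortion of $\mu$ itself.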
Benford's Law. Miller, Steven J. 2015. eBook.
Benford's law states that the leading digits of many data sets are not uniformly distributed from one through nine, but rather exhibit a profound bias. This bias is evident in everything from electricity bills and street addresses to stock prices, population numbers, mortality rates, and the lengths of rivers. This work demonstrates the many useful techniques that arise from the law, showing how truly multidisciplinary it is, and encouraging collaboration.
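The bias is easy to verify numerically. As a small illustration (our own, assuming only Benford's predicted frequencies $P(d) = \log_{10}(1 + 1/d)$), the leading digits of a geometric sequence such as the powers of 2 track the law closely:

```python
import numpy as np

def leading_digit(x):
    """First significant digit of a positive number, via scientific notation."""
    return int(f"{x:e}"[0])

def benford_frequencies(data):
    """Observed leading-digit frequencies of the data versus Benford's
    predicted frequencies P(d) = log10(1 + 1/d) for d = 1, ..., 9."""
    digits = np.array([leading_digit(v) for v in data if v > 0])
    observed = np.array([(digits == d).mean() for d in range(1, 10)])
    predicted = np.log10(1 + 1 / np.arange(1, 10))
    return observed, predicted

obs, pred = benford_frequencies([2 ** n for n in range(1, 501)])
```

About 30% of the powers lead with the digit 1 and under 5% with the digit 9, matching $\log_{10} 2 \approx 0.301$ and $\log_{10}(10/9) \approx 0.046$.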
We present a comprehensive analysis of the structural properties and luminosities of the 23 dwarf spheroidal galaxies that fall within the footprint of the Pan-Andromeda Archaeological Survey (PAndAS). These dwarf galaxies represent the large majority of Andromeda's known satellite dwarf galaxies and cover a wide range in luminosity ( or ) and surface brightness ( mag arcsec$^{-2}$). We confirm most previous measurements, but we find And XIX to be significantly larger than before ( , ) and cannot derive parameters for And XXVII as it is likely not a bound stellar system. We also significantly revise downward the luminosities of And XV and And XVI, which are now or . Finally, we provide the first detailed analysis of Cas II/And XXX, a fairly faint system ( ) of typical size ( ), located in close proximity to the two bright elliptical dwarf galaxies NGC 147 and NGC 185. Combined with the set of homogeneous distances published in an earlier contribution, our analysis dutifully tracks all relevant sources of uncertainty in the determination of the properties of the dwarf galaxies from the PAndAS photometric catalog. We further publish the posterior probability distribution functions of all the parameters we fit for in the form of MCMC chains available online; these inputs should be used in any analysis that aims to remain truthful to the data and properly account for covariance between parameters.