Human probability judgments are both variable and subject to systematic biases. Most probability judgment models treat variability and bias separately: a deterministic model explains the origin of bias, to which a noise process is added to generate variability. But these accounts do not explain the characteristic inverse U-shaped signature linking mean and variance in probability judgments. By contrast, models based on sampling generate the mean and variance of judgments in a unified way: the variability in the response is an inevitable consequence of basing probability judgments on a small sample of remembered or simulated instances of events. We consider two recent sampling models, in which biases are explained either by the sample accumulation being further corrupted by retrieval noise (the Probability Theory + Noise account) or as a Bayesian adjustment to the uncertainty implicit in small samples (the Bayesian sampler). While the mean predictions of these accounts closely mimic one another, they differ regarding the predicted relationship between mean and variance. We show that these models can be distinguished by a novel linear regression method that analyses this crucial mean-variance signature. First, the efficacy of the method is established using model recovery, demonstrating that it more accurately recovers parameters than more complex approaches. Second, the method is applied to the mean and variance of both existing and new probability judgment data, confirming that judgments are based on a small number of samples that are adjusted by a prior, as predicted by the Bayesian sampler.
Public Significance Statement
Human probability judgments play a crucial role in everyday reasoning and decision-making. But it can be difficult to distinguish between different theoretical models of the mental processes determining such judgments. This study introduces a new method that uses the relationship between the mean and the variance of probability judgments to discriminate between theoretical models. Applying the method provides new evidence for a theory based on mental sampling coupled with a Bayesian adjustment of the sampled proportions, as well as a simple and accurate way to estimate model parameters for individuals. This sheds new light on many of the recurring biases in human probability judgment.
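To make the mean-variance signature concrete, the sketch below simulates the two sampling accounts in the forms commonly used in this literature: the Bayesian sampler shrinks the sampled proportion towards 0.5 through a symmetric Beta(beta, beta) prior, giving (successes + beta)/(N + 2*beta), while Probability Theory + Noise reports the raw sample proportion after each of the N samples is misread with probability d. The values of N, beta, and d are illustrative choices, not estimates from the paper.

    import numpy as np

    rng = np.random.default_rng(0)

    def bayesian_sampler(p, N=10, beta=1.0, reps=10_000):
        # N mental samples, then shrinkage toward 0.5 via a Beta(beta, beta) prior
        s = rng.binomial(N, p, size=reps)
        return (s + beta) / (N + 2 * beta)

    def pt_plus_noise(p, N=10, d=0.1, reps=10_000):
        # each sample is misread with probability d; the judgment is the raw proportion
        p_eff = p * (1 - d) + (1 - p) * d
        return rng.binomial(N, p_eff, size=reps) / N

    for p in (0.1, 0.3, 0.5, 0.7, 0.9):
        bs, ptn = bayesian_sampler(p), pt_plus_noise(p)
        print(f"p={p:.1f}  BS mean={bs.mean():.3f} var={bs.var():.4f}  "
              f"PT+N mean={ptn.mean():.3f} var={ptn.var():.4f}")

Both accounts produce the inverse U-shaped variance across queries, but they tie variance to the mean differently: under this formalisation PT+N gives variance = mean*(1 - mean)/N exactly, whereas the Bayesian sampler's shrinkage changes the slope and intercept of variance regressed on mean*(1 - mean), which is the kind of difference a regression on the mean-variance signature can exploit.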
In a recent article, El Karoui et al. (Proc Natl Acad Sci 110(36):14557–14562, 2013) study the distribution of robust regression estimators in the regime in which the number of parameters p is of the same order as the number of samples n. Using numerical simulations and ‘highly plausible’ heuristic arguments, they unveil a striking new phenomenon. Namely, the regression coefficients contain an extra Gaussian noise component that is not explained by classical concepts such as the Fisher information matrix. We show here that this phenomenon can be characterized rigorously using techniques that were developed by the authors for analyzing the Lasso estimator under high-dimensional asymptotics. We introduce an approximate message passing (AMP) algorithm to compute M-estimators and deploy state evolution to evaluate the operating characteristics of AMP and so also of M-estimates. Our analysis clarifies that the ‘extra Gaussian noise’ encountered in this problem is fundamentally similar to phenomena already studied for regularized least squares in the setting n < p.
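The 'extra Gaussian noise' phenomenon can be illustrated with a small numpy experiment. The sketch below fits a Huber M-estimator with a standard iteratively reweighted least squares solver (not the AMP algorithm introduced in the paper) and compares the empirical variance of a single coefficient across repetitions with the classical fixed-p prediction (E[psi^2]/E[psi']^2) * (X'X)^{-1}; the dimensions, Huber constant, and Gaussian error distribution are illustrative assumptions. With p/n around 0.3 the empirical variance tends to exceed the classical value.

    import numpy as np

    rng = np.random.default_rng(1)

    def huber_psi(r, k=1.345):
        return np.clip(r, -k, k)

    def huber_irls(X, y, k=1.345, iters=30):
        # iteratively reweighted least squares for the Huber M-estimator
        theta = np.linalg.lstsq(X, y, rcond=None)[0]
        for _ in range(iters):
            res = y - X @ theta
            w = np.where(np.abs(res) <= k, 1.0, k / np.maximum(np.abs(res), 1e-12))
            theta = np.linalg.solve(X.T @ (X * w[:, None]), X.T @ (w * y))
        return theta

    n, p, k, reps = 400, 120, 1.345, 300
    X = rng.standard_normal((n, p)) / np.sqrt(n)   # fixed design, p/n = 0.3
    est = np.empty(reps)
    for rep in range(reps):
        eps = rng.standard_normal(n)               # true coefficients are all zero
        est[rep] = huber_irls(X, eps, k)[0]        # track the first coordinate

    # classical (fixed-p) asymptotic variance of the first coordinate
    z = rng.standard_normal(200_000)
    v_classical = (np.mean(huber_psi(z, k) ** 2) / np.mean(np.abs(z) <= k) ** 2) \
                  * np.linalg.inv(X.T @ X)[0, 0]
    print("empirical variance:", est.var(), " classical prediction:", v_classical)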
This letter presents a detailed performance analysis of intelligent reflecting surface (IRS) aided single-input single-output communication systems, taking into account the direct link between the transmitter and the receiver. A closed-form upper bound is derived for the ergodic capacity, and an accurate approximation is obtained for the outage probability. In addition, simplified expressions are presented in the asymptotic regime. Numerical results are provided to validate the correctness of the theoretical analysis. It is found that increasing the number of reflecting elements can significantly boost the ergodic capacity and outage probability performance, and a strong line-of-sight component is also beneficial. In addition, it is desirable to deploy the IRS close to the transmitter or receiver, rather than in the middle.
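As a rough cross-check of the scaling with the number of reflecting elements, the Monte Carlo sketch below evaluates the ergodic capacity of an IRS-aided SISO link with a direct link and ideal phase-aligned reflection, so the effective channel amplitude is |h_d| + sum_i |h_i||g_i|. Independent Rayleigh fading on every link and a 10 dB nominal SNR are illustrative assumptions rather than the letter's Rician setup.

    import numpy as np

    rng = np.random.default_rng(2)

    def ergodic_capacity(N, snr_db=10.0, trials=20_000):
        # effective amplitude with optimal IRS phases: |h_d| + sum_i |h_i| * |g_i|
        snr = 10 ** (snr_db / 10)
        cn = lambda *shape: (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)
        h_d, h, g = cn(trials), cn(trials, N), cn(trials, N)
        amp = np.abs(h_d) + np.sum(np.abs(h) * np.abs(g), axis=1)
        return np.mean(np.log2(1 + snr * amp ** 2))

    for N in (0, 8, 16, 32, 64):
        print(f"N={N:3d}  ergodic capacity ~ {ergodic_capacity(N):.2f} bit/s/Hz")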
Summary
1. Understanding the factors affecting species occurrence is a pre‐eminent focus of applied ecological research. However, direct information about species occurrence is lacking for many species. Instead, researchers sometimes have to rely on so‐called presence‐only data (i.e. when no direct information about absences is available), which often results from opportunistic, unstructured sampling. maxent is a widely used software program designed to model and map species distribution using presence‐only data.
2. We provide a critical review of maxent as applied to species distribution modelling and discuss how it can lead to inferential errors. A chief concern is that maxent produces a number of poorly defined indices that are not directly related to the actual parameter of interest – the probability of occurrence (ψ). This focus on an index was motivated by the belief that it is not possible to estimate ψ from presence‐only data; however, we demonstrate that ψ is identifiable using conventional likelihood methods under the assumptions of random sampling and constant probability of species detection (a minimal sketch of such a likelihood follows this summary).
3. The model is implemented in a convenient r package which we use to apply the model to simulated data and data from the North American Breeding Bird Survey. We demonstrate that maxent produces extreme under‐predictions when compared to estimates produced by logistic regression which uses the full (presence/absence) data set. We note that maxent predictions are extremely sensitive to specification of the background prevalence, which is not objectively estimated using the maxent method.
4. As with maxent, formal model‐based inference requires a random sample of presence locations. Many presence‐only data sets, such as those based on museum records and herbarium collections, may not satisfy this assumption. However, when sampling is random, we believe that inference should be based on formal methods that facilitate inference about interpretable ecological quantities instead of vaguely defined indices.
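For point 2, a minimal sketch of a presence-only likelihood of this general form is given below (in Python rather than the r package referenced in point 3, and possibly differing in detail from the paper's exact formulation). Under random sampling of presence locations and constant detection, covariates at presence points occur in proportion to ψ(x) times their landscape frequency, so the conditional log-likelihood is sum_i log ψ(x_i) − n·log(mean landscape ψ); with a logistic link the intercept, and hence ψ itself, is identifiable, although often only weakly so. The simulated landscape and sample sizes are illustrative.

    import numpy as np
    from scipy.optimize import minimize
    from scipy.special import expit

    rng = np.random.default_rng(3)

    # simulated landscape with one covariate; true model psi = expit(b0 + b1 * x)
    b0_true, b1_true = -1.0, 1.5
    x_land = rng.standard_normal(50_000)
    occupied = rng.random(x_land.size) < expit(b0_true + b1_true * x_land)

    # presence-only data: a simple random sample of occupied locations
    x_pres = rng.choice(x_land[occupied], size=500, replace=False)

    def neg_loglik(beta):
        # log L = sum_i log psi(x_i) - n * log(mean landscape psi)
        psi_pres = expit(beta[0] + beta[1] * x_pres)
        psi_back = expit(beta[0] + beta[1] * x_land)
        return -(np.sum(np.log(psi_pres)) - x_pres.size * np.log(np.mean(psi_back)))

    fit = minimize(neg_loglik, x0=np.zeros(2), method="Nelder-Mead")
    print("true (b0, b1):", (b0_true, b1_true), " estimated:", fit.x)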
Many probability forecasts are revised as new information becomes available, generating a time series of forecasts for a single event. Although methods for evaluating probability forecasts have been extensively studied, they apply to a single forecast per event. This paper is the first to evaluate probability forecasts that are made—and therefore revised—at many lead times for a single event. I postulate a norm for multi-period probability-forecasting systems and derive properties that should hold regardless of the forecasting process. I use these properties to develop methods for evaluating a forecasting system based on a sample. I apply these methods to the National Hurricane Center’s wind-speed probability forecasts and to statistical election forecasts, finding evidence that both can be improved using the current set of predictors.
This paper was accepted by Manel Baucells, decision analysis.
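The abstract does not spell out the postulated norm, but one property commonly required of multi-period probability-forecasting systems is that each forecast path behave as a martingale with respect to the forecaster's information: revisions then have mean zero, are uncorrelated with the current forecast, and the average forecast at every lead time matches the eventual outcome rate. The sketch below checks these implications on a simulated panel; the panel layout, update rule, and sample sizes are hypothetical and are not the paper's data or tests.

    import numpy as np

    rng = np.random.default_rng(4)

    # one row per event, one column per lead time; a well-behaved (martingale-like) system
    n_events, n_leads, p0 = 2_000, 6, 0.4
    paths = np.empty((n_events, n_leads))
    paths[:, 0] = p0
    for t in range(1, n_leads):
        prev = paths[:, t - 1]
        step = 0.15 * np.sqrt(prev * (1 - prev))          # mean-zero revision
        paths[:, t] = np.clip(prev + step * rng.standard_normal(n_events), 1e-3, 1 - 1e-3)
    outcomes = rng.random(n_events) < paths[:, -1]        # outcome drawn from the final forecast

    revisions = np.diff(paths, axis=1)
    print("mean revision per lead:", revisions.mean(axis=0).round(3))   # ~0 under the norm
    print("corr(revision, prior forecast):",
          round(np.corrcoef(revisions.ravel(), paths[:, :-1].ravel())[0, 1], 3))
    print("mean forecast per lead:", paths.mean(axis=0).round(3),
          " outcome rate:", round(outcomes.mean(), 3))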
Reichenbach’s principle asserts that if two observed variables are found to be correlated, then there should be a causal explanation of these correlations. Furthermore, if the explanation is in terms of a common cause, then the conditional probability distribution over the variables given the complete common cause should factorize. The principle is generalized by the formalism of causal models, in which the causal relationships among variables constrain the form of their joint probability distribution. In the quantum case, however, the observed correlations in Bell experiments cannot be explained in the manner Reichenbach’s principle would seem to demand. Motivated by this, we introduce a quantum counterpart to the principle. We demonstrate that under the assumption that quantum dynamics is fundamentally unitary, if a quantum channel with input A and outputs B and C is compatible with A being a complete common cause of B and C, then it must factorize in a particular way. Finally, we show how to generalize our quantum version of Reichenbach’s principle to a formalism for quantum causal models and provide examples of how the formalism works.
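For reference, the classical factorization condition invoked above, together with the quantum analogue in the schematic form used in this line of work (the precise statement is phrased in terms of the channel's Choi–Jamiołkowski operator), can be written as

    P(B, C \mid A) = P(B \mid A)\, P(C \mid A)   % classical complete common cause
    \rho_{BC|A} = \rho_{B|A}\, \rho_{C|A}, \qquad [\rho_{B|A}, \rho_{C|A}] = 0   % quantum analogue (schematic)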
In Dempster–Shafer evidence theory, how to use the basic probability assignment (BPA) in decision‐making is a significant issue. The transformation of BPA into a probability distribution function is one of the common and feasible schemes. To overcome the problems of the existing methods, we propose a marginal probability transformation method based on the Shapley value approach. The proposed method allocates BPA values in terms of how much an element contributes to a set, which is an equitable and effective distribution mechanism. Furthermore, we use probabilistic information content to evaluate the effect of each transformation method. Moreover, some numerical examples are used to demonstrate the efficiency and feasibility of the proposed method. Further, two applications, target recognition and fault diagnosis, are used to verify the superiority and effectiveness of the proposed method in practice.
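The abstract does not state the allocation rule explicitly; as a point of reference, the sketch below computes the classical Shapley value of the cooperative game defined by the belief function Bel of a BPA, which captures the "how much an element contributes to a set" idea (for belief functions this allocation amounts to spreading each focal mass equally over its elements; the paper's marginal transformation may differ in its details). The example frame and BPA are illustrative.

    from itertools import chain, combinations
    from math import factorial

    def powerset(elements):
        return chain.from_iterable(combinations(elements, r) for r in range(len(elements) + 1))

    def belief(subset, bpa):
        # Bel(S) = sum of the masses of focal sets contained in S
        s = frozenset(subset)
        return sum(m for focal, m in bpa.items() if focal <= s)

    def shapley_transform(bpa, frame):
        # Shapley value of the game v = Bel: an equitable split of each mass among elements
        n = len(frame)
        phi = {}
        for x in frame:
            rest = [e for e in frame if e != x]
            phi[x] = sum(
                factorial(len(s)) * factorial(n - len(s) - 1) / factorial(n)
                * (belief(set(s) | {x}, bpa) - belief(s, bpa))
                for s in powerset(rest)
            )
        return phi

    # example BPA on the frame {a, b, c}
    bpa = {frozenset("a"): 0.4, frozenset("ab"): 0.3, frozenset("abc"): 0.3}
    print(shapley_transform(bpa, ["a", "b", "c"]))   # approximately {'a': 0.65, 'b': 0.25, 'c': 0.10}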
In this paper, an exact expression for the first probability density function of the solution stochastic process to a randomized homogeneous linear second-order complex differential equation is determined. To complete the probabilistic analysis, the first probability density functions of the real and imaginary parts of the solution stochastic process are also calculated. To compute the densities, the random variable transformation method is applied under general hypotheses: all coefficients and initial conditions are absolutely continuous complex random variables. The capability of the theoretical results established is demonstrated by several numerical examples. Finally, we show the applicability of the method in engineering by analysing the solution of a randomized simple harmonic oscillator, defined on the complex domain, and comparing our results with those obtained by using Monte Carlo simulations.
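As a baseline of the kind the abstract compares against, the Monte Carlo sketch below approximates the first probability density function of a randomized simple harmonic oscillator at a fixed time by sampling the closed-form solution and smoothing with a kernel density estimate. The (real-valued) coefficient distributions are illustrative assumptions; the paper itself works on the complex domain and obtains the density exactly via the random variable transformation method.

    import numpy as np
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(5)

    # randomized oscillator x'' + w^2 x = 0, x(0) = x0, x'(0) = v0, with random coefficients
    n = 100_000
    w = rng.uniform(0.8, 1.2, n)         # random angular frequency
    x0 = rng.normal(1.0, 0.1, n)         # random initial position
    v0 = rng.normal(0.0, 0.1, n)         # random initial velocity

    t = 2.0
    x_t = x0 * np.cos(w * t) + (v0 / w) * np.sin(w * t)   # sampled closed-form solution

    pdf = gaussian_kde(x_t)              # Monte Carlo estimate of the 1-PDF of x(t)
    grid = np.linspace(x_t.min(), x_t.max(), 7)
    print(np.round(grid, 2))
    print(np.round(pdf(grid), 3))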