To understand decision-making behavior in simple, controlled environments, Bayesian models are often useful. First, optimal behavior is always Bayesian. Second, even when behavior deviates from optimality, the Bayesian approach offers candidate models to account for suboptimalities. Third, a realist interpretation of Bayesian models opens the door to studying the neural representation of uncertainty. In this tutorial, we review the principles of Bayesian models of decision making and then focus on five case studies with exercises. We conclude with reflections and future directions.
In this primer, Ma presents the basics of Bayesian models of decision making, with an emphasis on perception. The paper contains examples, exercises, and solutions to exercises.
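The core computation such tutorials build on can be sketched in a few lines: combining a Gaussian prior with a Gaussian likelihood yields a posterior whose mean is a reliability-weighted average of the prior mean and the measurement. The function below is an illustrative sketch, not code from the paper.

```python
import numpy as np

def posterior_mean(x, sigma, mu_prior, sigma_prior):
    """Posterior mean for a Gaussian prior combined with a Gaussian
    likelihood from a noisy measurement x. Each source is weighted by
    its reliability (inverse variance)."""
    w_lik = 1.0 / sigma**2
    w_prior = 1.0 / sigma_prior**2
    return (w_lik * x + w_prior * mu_prior) / (w_lik + w_prior)

# A noisy measurement x = 4 with equally reliable prior and likelihood
# is pulled halfway toward the prior mean 0.
print(posterior_mean(4.0, sigma=2.0, mu_prior=0.0, sigma_prior=2.0))  # 2.0
```

As the measurement noise sigma shrinks, the posterior mean approaches the measurement itself; as it grows, the prior dominates.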
Hybrid supercapacitors generally show high power and long life spans but inferior energy densities, mainly because their carbon negative electrodes have low specific capacitances. Traditional methods to improve the energy densities include optimizing pore structures and modifying pseudocapacitive groups on the carbon materials. Here, another promising route is suggested that has no adverse effects on the carbon materials: constructing electron-rich regions on the electrode surfaces to adsorb as many cations as possible. To this end, a series of hierarchical porous carbon materials with controllable surface states, including electron-rich regions, is produced by calcining carbon dots–hydrogel composites. The optimal sample is employed as the negative electrode to fabricate hybrid supercapacitors, which show remarkable specific energies (up to 62.8–90.1 Wh kg−1) in different systems.
Robust carbon negative electrodes for hybrid supercapacitors are fabricated by a promising new method: constructing electron-rich regions on the electrode surfaces to adsorb as many cations as possible. Correspondingly, hierarchical porous carbon materials with controllable surface states, including electron-rich regions, are produced by calcining carbon dots–hydrogel composites.
Organisms must act in the face of sensory, motor, and reward uncertainty stemming from a pandemonium of stochasticity and missing information. In many tasks, organisms can make better decisions if they have at their disposal a representation of the uncertainty associated with task-relevant variables. We formalize this problem using Bayesian decision theory and review recent behavioral and neural evidence that the brain may use knowledge of uncertainty, confidence, and probability.
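A standard Bayesian-decision-theory setup the abstract alludes to can be illustrated with a toy categorization problem: compute a posterior over categories from Gaussian likelihoods and prior probabilities, then choose the maximum-a-posteriori (MAP) category. This is a generic sketch with illustrative names, not the paper's own code.

```python
import numpy as np

def map_decision(x, mu_cats, sigma, priors):
    """Posterior over categories for a measurement x, assuming a Gaussian
    likelihood centered on each category mean; returns (posterior, MAP index)."""
    lik = np.exp(-(x - np.asarray(mu_cats)) ** 2 / (2 * sigma**2))
    post = lik * np.asarray(priors)
    post = post / post.sum()          # normalize to a probability distribution
    return post, int(np.argmax(post))

# A measurement slightly right of zero favors the category centered at +1.
post, choice = map_decision(0.2, mu_cats=[-1.0, 1.0], sigma=1.0, priors=[0.5, 0.5])
```

With asymmetric priors or an asymmetric loss function, the optimal decision boundary shifts away from the midpoint between the category means.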
Humans can meaningfully report their confidence in a perceptual or cognitive decision. It is widely believed that these reports reflect the Bayesian probability that the decision is correct, but this hypothesis has not been rigorously tested against non-Bayesian alternatives. We use two perceptual categorization tasks in which Bayesian confidence reporting requires subjects to take sensory uncertainty into account in a specific way. We find that subjects do take sensory uncertainty into account when reporting confidence, suggesting that brain areas involved in reporting confidence can access low-level representations of sensory uncertainty, a prerequisite of Bayesian inference. However, behavior is not fully consistent with the Bayesian hypothesis and is better described by simple heuristic models that use uncertainty in a non-Bayesian way. Both conclusions are robust to changes in the uncertainty manipulation, task, response modality, model comparison metric, and additional flexibility in the Bayesian model. Our results suggest that adhering to a rational account of confidence behavior may require incorporating implementational constraints.
Decision confidence reflects our ability to evaluate the quality of decisions and guides subsequent behavior. Experiments on confidence reports have almost exclusively focused on two-alternative decision-making. In this realm, the leading theory is that confidence reflects the probability that a decision is correct (the posterior probability of the chosen option). There is, however, another possibility, namely that people are less confident if the best two options are closer to each other in posterior probability, regardless of how probable they are in absolute terms. This possibility has not previously been considered because in two-alternative decisions, it reduces to the leading theory. Here, we test this alternative theory in a three-alternative visual categorization task. We found that confidence reports are best explained by the difference between the posterior probabilities of the best and the next-best options, rather than by the posterior probability of the chosen (best) option alone, or by the overall uncertainty (entropy) of the posterior distribution. Our results upend the leading notion of decision confidence and instead suggest that confidence reflects the observer's subjective probability that they made the best possible decision.
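The three candidate confidence variables compared in this abstract are easy to compute from a posterior over alternatives. The sketch below (illustrative, assuming strictly positive posterior probabilities) shows how the max rule and the difference rule can agree on the chosen option yet disagree on confidence.

```python
import numpy as np

def confidence_measures(posterior):
    """Three candidate confidence variables for an n-alternative decision."""
    p = np.sort(np.asarray(posterior, dtype=float))[::-1]  # descending
    max_rule = p[0]                   # posterior probability of the chosen option
    diff_rule = p[0] - p[1]           # best minus next-best (the favored model)
    entropy = -np.sum(p * np.log(p))  # overall uncertainty of the posterior
    return max_rule, diff_rule, entropy

# Two posteriors with the same maximum but different runner-up probabilities:
m1 = confidence_measures([0.5, 0.45, 0.05])  # close race between the top two
m2 = confidence_measures([0.5, 0.25, 0.25])  # clear gap between best and next-best
```

The max rule assigns both cases the same confidence (0.5), whereas the difference rule predicts lower confidence in the first case, which is the pattern the abstract reports.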
Probability has played a central role in models of perception for more than a century, but a look at probabilistic concepts in the literature raises many questions. Is being Bayesian the same as being optimal? Are recent Bayesian models fundamentally different from classic signal detection theory models? Do findings of near-optimal inference provide evidence that neurons compute with probability distributions? This review aims to disentangle these concepts and to classify empirical evidence accordingly.
Animals perform near-optimal probabilistic inference in a wide range of psychophysical tasks. Probabilistic inference requires trial-to-trial representation of the uncertainties associated with task variables and subsequent use of this representation. Previous work has implemented such computations using neural networks with hand-crafted and task-dependent operations. We show that generic neural networks trained with a simple error-based learning rule perform near-optimal probabilistic inference in nine common psychophysical tasks. In a probabilistic categorization task, error-based learning in a generic network simultaneously explains a monkey's learning curve and the evolution of qualitative aspects of its choice behavior. In all tasks, the number of neurons required for a given level of performance grows sublinearly with the input population size, a substantial improvement on previous implementations of probabilistic inference. The trained networks develop a novel sparsity-based probabilistic population code. Our results suggest that probabilistic inference emerges naturally in generic neural networks trained with error-based learning rules. Behavioural tasks often require probability distributions to be inferred about task-specific variables. Here, the authors demonstrate that generic neural networks can be trained using a simple error-based learning rule to perform such probabilistic computations efficiently without any need for task-specific operations.
Bayesian models of behavior suggest that organisms represent uncertainty associated with sensory variables. However, the neural code of uncertainty remains elusive. A central hypothesis is that uncertainty is encoded in the population activity of cortical neurons in the form of likelihood functions. We tested this hypothesis by simultaneously recording population activity from primate visual cortex during a visual categorization task in which trial-to-trial uncertainty about stimulus orientation was relevant for the decision. We decoded the likelihood function from the trial-to-trial population activity and found that it predicted decisions better than a point estimate of orientation. This remained true when we conditioned on the true orientation, suggesting that internal fluctuations in neural activity drive behaviorally meaningful variations in the likelihood function. Our results establish the role of population-encoded likelihood functions in mediating behavior and provide a neural underpinning for Bayesian models of perception.
The fate of scientific hypotheses often relies on the ability of a computational model to explain the data, quantified in modern statistical approaches by the likelihood function. The log-likelihood is the key element for parameter estimation and model evaluation. However, the log-likelihood of complex models in fields such as computational biology and neuroscience is often intractable to compute analytically or numerically. In those cases, researchers can often only estimate the log-likelihood by comparing observed data with synthetic observations generated by model simulations. Standard techniques to approximate the likelihood via simulation either use summary statistics of the data or are at risk of producing substantial biases in the estimate. Here, we explore another method, inverse binomial sampling (IBS), which can estimate the log-likelihood of an entire data set efficiently and without bias. For each observation, IBS draws samples from the simulator model until one matches the observation. The log-likelihood estimate is then a function of the number of samples drawn. The variance of this estimator is uniformly bounded, achieves the minimum variance for an unbiased estimator, and we can compute calibrated estimates of the variance. We provide theoretical arguments in favor of IBS and an empirical assessment of the method for maximum-likelihood estimation with simulation-based models. As case studies, we take three model-fitting problems of increasing complexity from computational and cognitive neuroscience. In all problems, IBS generally produces lower error in the estimated parameters and maximum log-likelihood values than alternative sampling methods with the same average number of samples. Our results demonstrate the potential of IBS as a practical, robust, and easy-to-implement method for log-likelihood evaluation when exact techniques are not available.
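The IBS procedure described above can be sketched in a few lines. The per-trial estimate below, minus the partial harmonic sum over the draws before the first match, is my understanding of the estimator's form; the simulator and function names are illustrative, not the paper's reference implementation (which also caps the number of draws per trial to guard against non-terminating loops).

```python
import numpy as np

rng = np.random.default_rng(0)

def ibs_loglik(simulate, stimuli, responses):
    """Inverse binomial sampling sketch: for each trial, draw simulated
    responses until one matches the observed response. The trial's
    log-likelihood estimate is -(1/1 + 1/2 + ... + 1/(K-1)), where K is
    the number of draws needed; contributions are summed over trials."""
    total = 0.0
    for s, r in zip(stimuli, responses):
        k = 1
        while simulate(s) != r:   # real implementations cap k per trial
            total -= 1.0 / k
            k += 1
    return total

# Hypothetical simulator: a biased coin with p(True) = 0.8, ignoring the stimulus.
sim = lambda s: rng.random() < 0.8
est = ibs_loglik(sim, stimuli=[None] * 200, responses=[True] * 200)
# est / 200 should be close to log(0.8) on average, since the true
# per-trial log-likelihood is log(0.8) ≈ -0.223.
```

Note that each trial with a first-draw match contributes zero, so rare responses, which need many draws, dominate both the estimate and its variance.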