Three questions have been prominent in the study of visual working memory limitations: (a) What is the nature of mnemonic precision (e.g., quantized or continuous)? (b) How many items are remembered? (c) To what extent do spatial binding errors account for working memory failures? Modeling studies have typically focused on comparing possible answers to a single one of these questions, even though the result of such a comparison might depend on the assumed answers to the other two. Here, we consider every possible combination of previously proposed answers to the individual questions. Each model is then a point in a 3-factor model space containing a total of 32 models, of which only 6 have been tested previously. We compare all models on data from 10 delayed-estimation experiments from 6 laboratories (for a total of 164 subjects and 131,452 trials). Consistently across experiments, we find that (a) mnemonic precision is not quantized but continuous and not equal but variable across items and trials; (b) the number of remembered items is likely to be variable across trials, with a mean of 6.4 in the best model (median across subjects); (c) spatial binding errors occur but explain only a small fraction of responses (16.5% at set size 8 in the best model). We find strong evidence against all 6 previously tested models. Our results demonstrate the value of factorial model comparison in working memory.
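As an illustration of the factorial approach, candidate answers to the three questions can be crossed into a model space and every combination scored on the same data. The sketch below is schematic: the factor levels listed and the fitting routine are placeholders, not the paper's actual model specifications.

```python
# Sketch of factorial model comparison: cross the factor levels, fit every
# combination, and rank the models by an information criterion (BIC here).
# Factor levels and the fitting routine are placeholders for illustration.
from itertools import product
import numpy as np

PRECISION = ["quantized", "continuous-fixed", "continuous-variable"]   # hypothetical levels
ITEM_LIMIT = ["no-limit", "fixed-limit", "variable-limit"]             # hypothetical levels
BINDING = ["no-binding-errors", "binding-errors"]                      # hypothetical levels

def fit_model(spec, data):
    """Placeholder fit: return (max log-likelihood, number of free parameters)."""
    rng = np.random.default_rng(abs(hash(spec)) % (2**32))
    return -rng.uniform(900, 1000), 3 + len(spec)

def compare_models(data):
    bic = {}
    for spec in product(PRECISION, ITEM_LIMIT, BINDING):
        logL, k = fit_model(spec, data)
        bic[spec] = -2 * logL + k * np.log(len(data))   # lower BIC = better
    best = min(bic, key=bic.get)
    return best, bic

best, bic = compare_models(data=list(range(500)))
print(len(bic), "models compared; best:", best)
```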
Change detection is a classic paradigm that has been used for decades to argue that working memory can hold no more than a fixed number of items ("item-limit models"). Recent findings force us to consider the alternative view that working memory is limited by the precision in stimulus encoding, with mean precision decreasing with increasing set size ("continuous-resource models"). Most previous studies that used the change detection paradigm have ignored effects of limited encoding precision by using highly discriminable stimuli and only large changes. We conducted two change detection experiments (orientation and color) in which change magnitudes were drawn from a wide range, including small changes. In a rigorous comparison of five models, we found no evidence of an item limit. Instead, human change detection performance was best explained by a continuous-resource model in which encoding precision is variable across items and trials even at a given set size. This model accounts for comparison errors in a principled, probabilistic manner. Our findings sharply challenge the theoretical basis for most neural studies of working memory capacity.
Decisions are accompanied by a degree of confidence that a selected option is correct. A sequential sampling framework explains the speed and accuracy of decisions and extends naturally to the confidence that the decision rendered is likely to be correct. However, discrepancies between confidence and accuracy suggest that confidence might be supported by mechanisms dissociated from the decision process. Here we show that this discrepancy can arise naturally because of simple processing delays. When participants were asked to report choice and confidence simultaneously, their confidence, reaction time and a perceptual decision about motion were explained by bounded evidence accumulation. However, we also observed revisions of the initial choice and/or confidence. These changes of mind were explained by a continuation of the mechanism that led to the initial choice. Our findings extend the sequential sampling framework to vacillation about confidence and invite caution in interpreting dissociations between confidence and accuracy.
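A minimal sketch of the proposed mechanism, assuming a one-dimensional accumulator with illustrative parameter values rather than the fitted model: evidence is accumulated to a bound for the initial choice, and evidence still in the processing pipeline continues to arrive afterwards, which can revise both the choice and the confidence.

```python
# Bounded evidence accumulation with post-decision processing (illustrative values).
import numpy as np

rng = np.random.default_rng(0)

def trial(drift=0.5, bound=1.0, dt=0.01, noise=1.0, post_decision_time=0.3):
    x, t = 0.0, 0.0
    # Accumulate to a bound -> initial choice and decision time.
    while abs(x) < bound:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    initial_choice = int(x > 0)
    # Evidence already in the processing pipeline keeps arriving after commitment
    # and can revise the choice and/or the confidence.
    for _ in range(int(post_decision_time / dt)):
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
    final_choice = int(x > 0)
    confidence = 1.0 / (1.0 + np.exp(-2.0 * abs(x)))   # toy mapping from evidence to confidence
    return initial_choice, final_choice, confidence, t

print(trial())
```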
Demanding tasks often require a series of decisions to reach a goal. Recent progress in perceptual decision-making has served to unite decision accuracy, speed, and confidence in a common framework of bounded evidence accumulation, furnishing a platform for the study of such multi-stage decisions. In many instances, the strategy applied to each decision, such as the speed-accuracy trade-off, ought to depend on the accuracy of the previous decisions. However, as the accuracy of each decision is often unknown to the decision maker, we hypothesized that subjects may carry forward a level of confidence in previous decisions to affect subsequent decisions. Subjects made two perceptual decisions sequentially and were rewarded only if they made both correctly. The speed and accuracy of individual decisions were explained by noisy evidence accumulation to a terminating bound. We found that subjects adjusted their speed-accuracy setting by elevating the termination bound on the second decision in proportion to their confidence in the first. The findings reveal a novel role for confidence and a degree of flexibility, hitherto unknown, in the brain’s ability to rapidly and precisely modify the mechanisms that control the termination of a decision.
• Many tasks require a series of correct decisions to reach a goal
• Confidence in a decision affects the termination criterion for the next decision
• Use of confidence to change the speed-accuracy trade-off can increase reward
van den Berg et al. show that when making a sequence of decisions to achieve a goal, the subjective confidence in the accuracy of the first decision precisely and rapidly alters the decision-making process of the second decision.
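The core idea can be sketched as a rule in which the termination bound for the second decision is elevated in proportion to confidence in the first; the linear form, base bound, and gain below are illustrative assumptions, not the fitted relationship reported in the paper.

```python
# Confidence-dependent bound setting for the second decision (illustrative form).
def second_bound(confidence_first, base_bound=1.0, gain=0.8):
    """Termination bound for decision 2, elevated in proportion to confidence in decision 1."""
    return base_bound + gain * confidence_first

# When both decisions must be correct to earn reward, a slow, high-accuracy second
# decision is less worthwhile after a low-confidence first decision, so the bound
# scales with confidence.
for conf in (0.5, 0.75, 0.95):
    print(conf, second_bound(conf))
```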
The ability to search efficiently for a target in a cluttered environment is one of the most remarkable functions of the nervous system. This task is difficult under natural circumstances, as the reliability of sensory information can vary greatly across space and time and is typically a priori unknown to the observer. In contrast, visual-search experiments commonly use stimuli of equal and known reliability. In a target detection task, we randomly assigned high or low reliability to each item on a trial-by-trial basis. An optimal observer would weight the observations by their trial-to-trial reliability and combine them using a specific nonlinear integration rule. We found that humans were near-optimal, regardless of whether distractors were homogeneous or heterogeneous and whether reliability was manipulated through contrast or shape. We present a neural-network implementation of near-optimal visual search based on probabilistic population coding. The network matched human performance.
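A sketch of the optimal rule under simplifying assumptions (a linear feature with Gaussian noise, a known target value, and homogeneous known distractors; the experiments also used heterogeneous distractors and circular feature dimensions): each item's likelihood ratio is computed with that item's own reliability, and the ratios are averaged across items rather than maxed or summed.

```python
# Reliability-weighted, nonlinear integration rule for target detection (simplified).
import numpy as np

def optimal_target_present(x, sigma, s_target, s_distractor, prior_present=0.5):
    """x: noisy measurements (N,); sigma: per-item noise SDs (N,), known to the observer."""
    # Local likelihood ratio of "item i is the target" vs "item i is a distractor".
    d = np.exp((-(x - s_target) ** 2 + (x - s_distractor) ** 2) / (2 * sigma ** 2))
    # Marginalize over which item is the target: average the ratios (not max, not sum).
    L = d.mean()
    posterior_present = prior_present * L / (prior_present * L + (1 - prior_present))
    return posterior_present > 0.5

rng = np.random.default_rng(1)
sigma = rng.choice([0.5, 2.0], size=4)            # high/low reliability assigned per item
stimuli = np.full(4, 0.0); stimuli[2] = 1.0       # distractors at 0, target at 1
x = stimuli + sigma * rng.standard_normal(4)
print(optimal_target_present(x, sigma, s_target=1.0, s_distractor=0.0))
```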
The Alabama parenting questionnaire (APQ) is a commonly used instrument for assessing parenting practices and evaluating treatment outcomes of parent-training interventions targeting child conduct problems. In the present study we developed a Swedish translation of the APQ parent version and tested it on a community sample of 799 parents of children between 6 and 15 years with diverse socioeconomic backgrounds. Data were collected through an online survey distributed through school newsletters and social media. Exploratory factor analysis (EFA) suggested a five-factor model with 23 items. Four of these factors correspond to the subscales suggested in the original version of the APQ: inconsistent discipline, poor monitoring, involvement, and positive parenting. The fifth subscale from the original APQ, corporal punishment, did not emerge as a factor in our sample. Instead, a new factor emerged, which we refer to as contingency management. A confirmatory factor analysis further suggested some misalignment between the original APQ subscale structure and our sample, which we interpret as a signal that the instrument may need refinement to better reflect contemporary parenting methods in diverse cultural contexts. Despite this limitation, and with the exclusion of the corporal punishment subscale, which should be employed judiciously, our results suggest that the Swedish version of the APQ can be a useful instrument for measuring parenting practices in Sweden. We present norm data stratified by child age, which practitioners and researchers can use as a reference for assessment of parenting practices in the Swedish population.
The taxonomy and systematic relationships among species of Solanum section Petota are complicated and the section seems overclassified. Many of the presumed (sub)species from South America are very similar and they are able to exchange genetic material. We applied a population genetic approach to evaluate support for subgroups within this material, using AFLP data. Our approach is based on the following assumptions: (i) accessions that may exchange genetic material can be analyzed as if they are part of one gene pool, and (ii) genetic differentiation among species is expected to be higher than within species.
A dataset of 566 South-American accessions (encompassing 89 species and subspecies) was analyzed in two steps. First, with the program STRUCTURE 2.2 in an 'unsupervised' procedure, individual accessions were assigned to inferred clusters based on genetic similarity. The results showed that the South American members of section Petota could be arranged in 16 clusters of various sizes and composition. Next, the accessions within the clusters were grouped by maximizing the partitioning of genetic diversity among subgroups (i.e., maximizing Fst values) for all available individuals of the accessions (2767 genotypes). This two-step approach produced an optimal partitioning into 44 groups. Some of the species clustered as genetically distinct groups, either on their own, or combined with one or more other species. However, accessions of other species were distributed over more than one cluster, and did not form genetically distinct units.
We could not find any support for 43 species (almost half of our dataset). For 28 species, some level of support could be found, ranging from good to weak. For 18 species no conclusions could be drawn, as the number of accessions included in our dataset was too low. These molecular data should be combined with data from morphological surveys, with geographical distribution data, and with information from crossing experiments to identify natural units at the species level. However, the data do indicate which taxa or combinations of taxa are clearly supported by a distinct set of molecular marker data, leaving other taxa unsupported. Therefore, the approach taken provides a general method to evaluate the taxonomic system in any species complex for which molecular data are available.
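The grouping criterion of the second step can be illustrated with an Fst-style statistic that measures how much marker diversity a candidate grouping partitions among groups. The sketch below uses simple band frequencies and expected-heterozygosity formulas for illustration only; the study uses estimators appropriate for AFLP data and searches over groupings of accessions within each STRUCTURE cluster.

```python
# Score a candidate grouping by an Fst-style partitioning of marker diversity (illustrative).
import numpy as np

def fst_like(markers, groups):
    """markers: (individuals x loci) 0/1 band matrix; groups: group label per individual."""
    p_total = markers.mean(axis=0)
    h_total = 2 * p_total * (1 - p_total)                   # total expected diversity per locus
    h_within = np.zeros_like(h_total)
    labels, counts = np.unique(groups, return_counts=True)
    for lab, n in zip(labels, counts):
        p = markers[groups == lab].mean(axis=0)
        h_within += (n / len(groups)) * 2 * p * (1 - p)     # weighted within-group diversity
    valid = h_total > 0
    return np.mean((h_total[valid] - h_within[valid]) / h_total[valid])

# A grouping that maximizes this statistic partitions the most diversity among groups.
rng = np.random.default_rng(4)
markers = rng.integers(0, 2, size=(20, 50))
print(fst_like(markers, groups=np.repeat([0, 1], 10)))
```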
It is commonly believed that visual short-term memory (VSTM) consists of a fixed number of "slots" in which items can be stored. An alternative theory in which memory resource is a continuous quantity distributed over all items seems to be refuted by the appearance of guessing in human responses. Here, we introduce a model in which resource is not only continuous but also variable across items and trials, causing random fluctuations in encoding precision. We tested this model against previous models using two VSTM paradigms and two feature dimensions. Our model accurately accounts for all aspects of the data, including apparent guessing, and outperforms slot models in formal model comparison. At the neural level, variability in precision might correspond to variability in neural population gain and doubly stochastic stimulus representation. Our results suggest that VSTM resource is continuous and variable rather than discrete and fixed and might explain why subjective experience of VSTM is not all or none.
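The key idea can be sketched in a few lines: precision fluctuates across items and trials (here gamma-distributed), and low-precision trials produce near-uniform errors that can look like guessing. For simplicity the sampled precision is used directly as the von Mises concentration; the model defines precision as Fisher information, which adds one further mapping.

```python
# Variable-precision encoding: gamma-distributed precision, circular (von Mises) noise.
import numpy as np

rng = np.random.default_rng(2)

def simulate_responses(stimuli, mean_precision=10.0, scale=5.0):
    """stimuli: feature values in radians on the circle [-pi, pi)."""
    precision = rng.gamma(shape=mean_precision / scale, scale=scale, size=len(stimuli))
    responses = rng.vonmises(mu=stimuli, kappa=precision)
    return responses, precision

stimuli = rng.uniform(-np.pi, np.pi, size=6)
responses, precision = simulate_responses(stimuli)
errors = np.angle(np.exp(1j * (responses - stimuli)))   # wrap errors to [-pi, pi)
# Items that drew low precision show large, near-uniform errors resembling guesses.
print(np.round(errors, 2), np.round(precision, 1))
```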
Encoding precision in visual working memory decreases with the number of encoded items. Here, we propose a normative theory for such set size effects: the brain minimizes a weighted sum of an error-based behavioral cost and a neural encoding cost. We construct a model from this theory and find that it predicts set size effects. Notably, these effects are mediated by probing probability, which aligns with previous empirical findings. The model accounts well for effects of both set size and probing probability on encoding precision in nine delayed-estimation experiments. Moreover, we find support for the prediction that the total amount of invested resource can vary non-monotonically with set size. Finally, we show that it is sometimes optimal to encode only a subset or even none of the relevant items in a task. Our findings raise the possibility that cognitive "limitations" arise from rational cost minimization rather than from constraints.
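A toy version of the cost trade-off, assuming a squared-error behavioral cost (expected error roughly 1/J at precision J), a neural cost linear in precision, and an arbitrary cost weight: minimizing the per-item expected cost yields an optimal precision that decreases with set size through the probing probability.

```python
# Optimal precision as a trade-off between behavioral and neural costs (illustrative).
import numpy as np

def optimal_precision(set_size, neural_cost_weight=0.05, grid=np.linspace(0.01, 100, 10000)):
    p_probe = 1.0 / set_size   # each item equally likely to be probed
    # Per-item expected cost: behavioral cost (~1/J) weighted by probing probability,
    # plus a neural cost that grows linearly with precision J.
    expected_cost = p_probe * (1.0 / grid) + neural_cost_weight * grid
    return grid[np.argmin(expected_cost)]

for N in (1, 2, 4, 8):
    print(N, round(optimal_precision(N), 2))   # optimal precision decreases with set size
```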
Optimal Bayesian models have been highly successful in describing human performance on perceptual decision-making tasks, such as cue combination and visual search. However, recent studies have argued that these models are often overly flexible and therefore lack explanatory power. Moreover, there are indications that neural computation is inherently imprecise, which makes it implausible that humans would perform optimally on any non-trivial task. Here, we reconsider human performance on a visual-search task by using an approach that constrains model flexibility and tests for computational imperfections. Subjects performed a target detection task in which targets and distractors were tilted ellipses with orientations drawn from Gaussian distributions with different means. We varied the amount of overlap between these distributions to create multiple levels of external uncertainty. We also varied the level of sensory noise, by testing subjects under both short and unlimited display times. On average, empirical performance, measured as d', fell 18.1% short of optimal performance. We found no evidence that the magnitude of this suboptimality was affected by the level of internal or external uncertainty. The data were well accounted for by a Bayesian model with imperfections in its computations. This "imperfect Bayesian" model convincingly outperformed the "flawless Bayesian" model as well as all ten heuristic models that we tested. These results suggest that perception is founded on Bayesian principles, but with suboptimalities in the implementation of these principles. The view of perception as imperfect Bayesian inference can provide a middle ground between traditional Bayesian and anti-Bayesian views.
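The "imperfect Bayesian" idea can be sketched as an observer who computes the optimal decision variable but corrupts it with late computational noise before applying the criterion; the Gaussian noise model and parameter values below are illustrative assumptions, not the fitted model.

```python
# Imperfect Bayesian observer: optimal decision variable plus late decision noise.
import numpy as np

rng = np.random.default_rng(3)

def respond(log_posterior_ratio, decision_noise_sd=0.5):
    """Report 'target present' when the noisy decision variable exceeds 0."""
    noisy_dv = log_posterior_ratio + decision_noise_sd * rng.standard_normal(np.shape(log_posterior_ratio))
    return noisy_dv > 0.0

# With decision_noise_sd = 0 this reduces to the flawless Bayesian observer; increasing
# the noise lowers d' below the optimal value without changing the underlying computation.
dv = rng.normal(loc=1.0, scale=1.0, size=10)   # hypothetical decision variables on target-present trials
print(respond(dv).mean())
```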