Younger and older adults often differ in their risky choices. Theoretical frameworks on human aging point to various cognitive and motivational factors that might underlie these differences. Using a novel computational model based on the framework of resource rationality, we find that the two age groups rely on different strategies. Importantly, older adults did not use simpler strategies than younger adults, did not select among fewer strategies, did not make more errors, and did not put more weight on cognitive costs. Instead, older adults selected strategies with different risk propensities than those selected by younger adults. Our modeling approach suggests that age differences in risky choice are not necessarily a consequence of cognitive decline; instead, they may reflect motivational differences between age groups.
To be useful, cognitive models with fitted parameters should show generalizability across time and allow accurate predictions of future observations. It has been proposed that hierarchical procedures yield better estimates of model parameters than nonhierarchical, independent approaches, because the former's estimates for individuals within a group can mutually inform each other. Here, we examine Bayesian hierarchical approaches to evaluating model generalizability in the context of two prominent models of risky choice: cumulative prospect theory (Tversky & Kahneman, 1992) and the transfer-of-attention-exchange model (Birnbaum & Chavez, 1997). Using empirical data of risky choices collected for each individual at two time points, we compared the use of hierarchical versus independent, nonhierarchical Bayesian estimation techniques to assess two aspects of model generalizability: parameter stability (across time) and predictive accuracy. The relative performance of hierarchical versus independent estimation varied across the different measures of generalizability. The hierarchical approach improved parameter stability (in terms of a lower absolute discrepancy of parameter values across time) and predictive accuracy (in terms of deviance; i.e., likelihood). With respect to test–retest correlations and posterior predictive accuracy, however, the hierarchical approach did not outperform the independent approach. Further analyses suggested that this was due to strong correlations between some parameters within both models. Such intercorrelations make it difficult to identify and interpret single parameters and can induce high degrees of shrinkage in hierarchical models. Similar findings may also occur in the context of other cognitive models of choice.
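The shrinkage mechanism described above can be illustrated with a minimal empirical-Bayes sketch (not the paper's actual MCMC procedure): each individual's estimate is pulled toward the group mean in proportion to how noisy it is relative to the between-person variance. The function name, the raw values, and the within-person variance below are all hypothetical.

```python
import statistics

def partial_pool(estimates, sigma2_within):
    # Schematic James-Stein-style partial pooling: individual estimates are
    # shrunk toward the group mean; the weight w reflects how reliable an
    # individual estimate is relative to true between-person variability.
    mu = statistics.mean(estimates)
    sigma2_between = statistics.variance(estimates)
    w = sigma2_between / (sigma2_between + sigma2_within)  # reliability weight
    return [mu + w * (e - mu) for e in estimates]

raw = [0.2, 0.5, 0.9, 1.6]  # hypothetical per-person parameter fits
shrunk = partial_pool(raw, sigma2_within=0.3)
```

Each shrunken estimate lies between its raw value and the group mean; this stabilizes estimates across sessions, but, as the abstract notes, strong parameter intercorrelations can push shrinkage so far that genuine individual differences are compressed.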
Online platforms’ data give advertisers the ability to “microtarget” recipients’ personal vulnerabilities by tailoring different messages for the same thing, such as a product or political candidate. One possible response is to raise awareness of and resilience against such manipulative strategies through psychological inoculation. Two online experiments (total N = 828) demonstrated that a short, simple intervention prompting participants to reflect on an attribute of their own personality (by completing a short personality questionnaire) boosted their ability to accurately identify ads that were targeted at them by up to 26 percentage points. Accuracy increased even without personalized feedback, but merely providing a description of the targeted personality dimension did not improve accuracy. We argue that such a “boosting approach,” which here aims to improve people’s competence to detect manipulative strategies themselves, should be part of a policy mix aiming to increase platforms’ transparency and user autonomy.
► Cognitive modeling with adjustable parameters allows capturing individual differences. ► Fitting parameters to data also bears the risk of fitting unsystematic noise. ► We test the parameter stability in cumulative prospect theory (CPT). ► An implementation of CPT with few parameters yielded the most robust parameter estimates. ► Even simpler models that ignore individual differences failed to predict risky choice.
In the behavioral sciences, a popular approach to describe and predict behavior is cognitive modeling with adjustable parameters (i.e., which can be fitted to data). Modeling with adjustable parameters allows, among other things, measuring differences between people. At the same time, parameter estimation also bears the risk of overfitting. Are individual differences as measured by model parameters stable enough to improve the ability to predict behavior as compared to modeling without adjustable parameters? We examined this issue in cumulative prospect theory (CPT), arguably the most widely used framework to model decisions under risk. Specifically, we examined (a) the temporal stability of CPT’s parameters; and (b) how well different implementations of CPT, varying in the number of adjustable parameters, predict individual choice relative to models with no adjustable parameters (such as CPT with fixed parameters, expected value theory, and various heuristics). We presented participants with risky choice problems and fitted CPT to each individual’s choices in two separate sessions (which were 1 week apart). All parameters were correlated across time, in particular when using a simple implementation of CPT. CPT allowing for individual variability in parameter values predicted individual choice better than CPT with fixed parameters, expected value theory, and the heuristics. CPT’s parameters thus seem to pick up stable individual differences that need to be considered when predicting risky choice.
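The adjustable parameters referred to above can be made concrete with a sketch of CPT's core functions, using the parameter values reported by Tversky and Kahneman (1992): outcome sensitivity alpha, probability sensitivity gamma, and loss aversion lambda. This is an illustration, not any study's fitted implementation; for brevity it applies the weighting function to each probability separately rather than using the full rank-dependent (cumulative) transformation.

```python
def value(x, alpha=0.88, lam=2.25):
    # Power value function with loss aversion: losses loom larger than gains
    return x ** alpha if x >= 0 else -lam * ((-x) ** alpha)

def weight(p, gamma=0.61):
    # Inverse-S-shaped probability weighting: overweights small probabilities,
    # underweights large ones
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

def cpt_value(outcomes_probs, alpha=0.88, gamma=0.61, lam=2.25):
    # Simplified CPT valuation of a gamble given as [(outcome, probability), ...]
    return sum(weight(p, gamma) * value(x, alpha, lam) for x, p in outcomes_probs)

# Example: a sure gain of 40 vs. an 80% chance of 55
sure = cpt_value([(40, 1.0)])
gamble = cpt_value([(55, 0.8), (0, 0.2)])
```

Fitting alpha, gamma, and lam to an individual's choices is what "adjustable parameters" means here; the parameter-stability question is whether the values recovered in session 1 resemble those recovered in session 2.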
For a long time, the dominant approach to studying decision making under risk has been to use psychoeconomic functions to account for how behavior deviates from the normative prescriptions of expected value maximization. While this neo-Bernoullian tradition has advanced the field in various ways—such as identifying seminal phenomena of risky choice (e.g., Allais paradox, fourfold pattern)—it contains a major shortcoming: Psychoeconomic curves are mute with regard to the cognitive mechanisms underlying risky choice. This neglect of the mechanisms both limits the explanatory value of neo-Bernoullian models and fails to provide guidance for designing effective interventions to improve decision making. Here we showcase a recent “attentional turn” in research on risky choice that elaborates how deviations from normative prescriptions can result from imbalances in attention allocation (rather than distortions in the representation or processing of probability and outcome information) and that thus promises to overcome the challenges of the neo-Bernoullian tradition. We argue that a comprehensive understanding of preference formation in risky choice must provide an account on a mechanistic level, and we delineate directions in which existing theories that rely on attentional processes may be extended to achieve this objective.
There is a disconnect in the literature between analyses of risky choice based on cumulative prospect theory (CPT) and work on predecisional information processing. One likely reason is that for expectation models (e.g., CPT), it is often assumed that people behaved only as if they conducted the computations leading to the predicted choice and that the models are thus mute regarding information processing. We suggest that key psychological constructs in CPT, such as loss aversion and outcome and probability sensitivity, can be interpreted in terms of attention allocation. In two experiments, we tested hypotheses about specific links between CPT parameters and attentional regularities. Experiment 1 used process tracing to monitor participants' predecisional attention allocation to outcome and probability information. As hypothesized, individual differences in CPT's loss-aversion, outcome-sensitivity, and probability-sensitivity parameters (estimated from participants' choices) were systematically associated with individual differences in attention allocation to outcome and probability information. For instance, loss aversion was associated with the relative attention allocated to loss and gain outcomes, and a more strongly curved weighting function was associated with less attention allocated to probabilities. Experiment 2 manipulated participants' attention to losses or gains, causing systematic differences in CPT's loss-aversion parameter. This result indicates that attention allocation can to some extent cause choice regularities that are captured by CPT. Our findings demonstrate an as-if model's capacity to reflect characteristics of information processing. We suggest that the observed CPT-attention links can be harnessed to inform the development of process models of risky choice.
Common wisdom tells us that more information can only help and never hurt. Goldstein and Gigerenzer (2002) highlighted an instance violating this intuition. Specifically, in an analysis of their recognition heuristic, they found a counterintuitive less-is-more effect in inference: An individual recognizing fewer objects than another individual can, nevertheless, make more accurate inferences. Goldstein and Gigerenzer emphasized that a sufficient condition for this effect is that the recognition validity be higher than the knowledge validity, assuming that the validities are uncorrelated with the number of recognized objects, n. But how is the occurrence of the less-is-more effect affected when this independence assumption is violated? I show that validity dependencies (i.e., correlations of the validities with n) abound in empirical data sets, and I demonstrate by computer simulations that these dependencies often have a strong limiting effect on the less-is-more effect. Moreover, I discuss what cognitive (e.g., memory) and ecological (e.g., distribution of the criterion variable, environmental frequencies) factors can give rise to a dependency of the recognition validity on the number of recognized objects. Supplemental materials may be downloaded from http://pbr.psychonomic-journals.org/content/supplemental.
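The less-is-more effect under the independence assumption can be reproduced with the accuracy equation from Goldstein and Gigerenzer (2002): among N objects, pairs where neither is recognized are guessed (accuracy .5), pairs with exactly one recognized object are decided by the recognition heuristic (validity alpha), and pairs with both recognized are decided by knowledge (validity beta). The specific values of N, alpha, and beta below are illustrative.

```python
def rh_accuracy(n, N, alpha, beta):
    # Expected proportion correct for all paired comparisons among N objects
    # when n of them are recognized and validities are constant in n
    pairs = N * (N - 1)
    p_guess = (N - n) * (N - n - 1) / pairs  # neither object recognized
    p_rh = 2 * n * (N - n) / pairs           # exactly one recognized
    p_know = n * (n - 1) / pairs             # both recognized
    return 0.5 * p_guess + alpha * p_rh + beta * p_know

N = 100
curve = [rh_accuracy(n, N, alpha=0.8, beta=0.6) for n in range(N + 1)]
best_n = max(range(N + 1), key=lambda n: curve[n])
# With alpha > beta, accuracy peaks at an intermediate n: recognizing
# everything (n = N) yields lower accuracy than partial recognition
```

The article's point is that this curve changes, and the peak can disappear, once alpha and beta are allowed to vary with n, as they do in empirical data sets.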
Almost 40% of global mortality is attributable to an unhealthy diet, and adolescents and young adults are particularly affected by growing obesity rates. How do (young) people conceptualize and judge the healthiness of foods and how are the judgments embedded in people's mental representations of the food ecology? We asked respondents to rate a large range of common food products on a diverse set of characteristics and then applied the psychometric paradigm to identify the dimensions structuring people's mental representations of the foods. Respondents were also asked to rate each food in terms of its healthiness, and we used the foods' scores on the extracted dimensions to predict the healthiness judgments. We compared three groups of respondents: adolescents, lay adults, and nutrition experts. Naturalness levels (e.g., processing, artificial additives) and cholesterol and protein content emerged as the two central dimensions structuring respondents' mental representations of the foods. Relative to the other two groups, the adolescents' representations were less differentiated. Judged food healthiness was determined by multiple factors, but naturalness was the strongest predictor across all groups. Overall, the adolescents' responses showed considerable heterogeneity, suggesting a lack of solid food knowledge and the need for tailored nutrition education on specific food products and content characteristics.
Public Significance Statement
This study examines mental representations of foods and how these guide how "healthy" people judge these foods to be, comparing adolescents, adults, and experts. The results show that for all respondents, perceived naturalness is a central dimension underlying mental representations of foods and the strongest predictor of healthiness judgments. Compared to experts and adults, adolescents exhibit the greatest variability in their food assessments, indicating a lack of solid food knowledge.
Uncertainty about the waiting time before obtaining an outcome is integral to intertemporal choice. Here, we showed that people express different time preferences depending on how they learn about this temporal uncertainty. In two studies, people chose between pairs of options: one with a single, sure delay and the other involving multiple, probabilistic delays (a lottery). The probability of each delay occurring either was explicitly described (timing risk) or could be learned through experiential sampling (timing uncertainty; the delay itself was not experienced). When the shorter delay was rare, people preferred the lottery more often when it was described than when it was experienced. When the longer delay was rare, this pattern was reversed. Modeling analyses suggested that underexperiencing rare delays and different patterns of probability weighting contribute to this description–experience gap. Our results challenge traditional models of intertemporal choice with temporal uncertainty as well as the generality of inverse-S-shaped probability weighting in such choice.