The proliferation of misinformation on social media platforms has given rise to growing demands for effective intervention strategies that increase sharing discernment (i.e., increase the difference between the probability of sharing true posts and the probability of sharing false posts). One suggested method is to encourage users to deliberate on the veracity of information before sharing it. However, this strategy is undermined by individuals’ propensity to share posts they acknowledge as false. In our study, across three experiments in a simulated social media environment, participants were shown social media posts and asked whether they wished to share them and, sometimes, whether they believed the posts to be truthful. We observe that requiring users to affirm their belief in a news post’s truthfulness before sharing it markedly curtails the dissemination of false information; that is, requiring self-certification increased sharing discernment. Importantly, requiring self-certification did not hinder users from sharing content they genuinely believed to be true, because participants were allowed to share any posts that they indicated were true. We propose self-certification as a method that substantially curbs the spread of misleading content on social media without infringing upon the principle of free speech.
Category Clustering and Morphological Learning
Mansfield, John; Saldana, Carmen; Hurst, Peter ...
Cognitive Science, February 2022, Volume 46, Issue 2
Journal Article
Peer reviewed
Open access
Inflectional affixes expressing the same grammatical category (e.g., subject agreement) tend to appear in the same morphological position in the word. We hypothesize that this cross‐linguistic tendency toward category clustering is at least partly the result of a learning bias, which facilitates the transmission of morphology from one generation to the next if each inflectional category has a consistent morphological position. We test this in an online artificial language experiment, teaching adult English speakers a miniature language consisting of noun stems representing shapes and suffixes representing the color and number features of each shape. In one experimental condition, each suffix category has a fixed position, with color in the first position and number in the second position. In a second condition, each specific combination of suffixes has a fixed order, but some combinations have color in the first position, and some have number in the first position. In a third condition, suffixes are randomly ordered on each presentation. While the language in the first condition is consistent with the category clustering principle, those in the other conditions are not. Our results indicate that category clustering of inflectional affixes facilitates morphological learning, at least in adult English speakers. Moreover, we found that languages that violate category clustering but still follow fixed affix ordering patterns are more learnable than languages with random ordering. Altogether, our results provide evidence for individual biases toward category clustering; we suggest that this bias may play a causal role in shaping the typological regularities in affix order we find in natural language.
Everyday reasoning requires more evidence than raw data alone can provide. We explore the idea that people can go beyond this data by reasoning about how the data was sampled. This idea is investigated through an examination of premise non‐monotonicity, in which adding premises to a category‐based argument weakens rather than strengthens it. Relevance theories explain this phenomenon in terms of people's sensitivity to the relationships among premise items. We show that a Bayesian model of category‐based induction taking premise sampling assumptions and category similarity into account complements such theories and yields two important predictions: First, that sensitivity to premise relationships can be violated by inducing a weak sampling assumption; and second, that premise monotonicity should be restored as a result. We test these predictions with an experiment that manipulates people's assumptions in this regard, showing that people draw qualitatively different conclusions in each case.
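The contrast between sampling assumptions at the heart of this kind of model can be illustrated with a toy computation. The sketch below is our own minimal illustration in the spirit of Bayesian category-based induction, not the paper's model; the hypothesis space and item names are invented:

```python
from fractions import Fraction

# Toy hypothesis space: candidate categories, each a set of items (invented).
hypotheses = {
    "small_birds": {"robin", "sparrow", "finch"},
    "all_birds":   {"robin", "sparrow", "finch", "eagle", "ostrich"},
}

def posterior(data, sampling):
    """Posterior over hypotheses given observed category members.

    sampling='strong': examples are drawn from the category itself, so
      P(data|h) = (1/|h|)^n -- tighter categories are favoured.
    sampling='weak': examples arrive independently of h and are merely
      checked for membership, so P(data|h) is constant for any consistent h.
    """
    scores = {}
    for name, h in hypotheses.items():
        if not all(x in h for x in data):
            scores[name] = Fraction(0)          # inconsistent hypothesis
        elif sampling == "strong":
            scores[name] = Fraction(1, len(h)) ** len(data)
        else:
            scores[name] = Fraction(1)
    z = sum(scores.values())
    return {name: s / z for name, s in scores.items()}

data = ["robin", "sparrow"]
strong = posterior(data, "strong")   # favours the tight small-bird category
weak = posterior(data, "weak")       # leaves the consistent hypotheses tied
```

Under strong sampling, two small-bird examples concentrate the posterior on the tighter category; under weak sampling the same data leave both consistent hypotheses tied, so the relationships among premise items carry no extra evidential force.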
An experiment examined decision-making processes among nonclinical participants with low or high levels of OCD symptomatology (N = 303). To better simulate the decision environments that are most likely to be problematic for clients with OCD, we employed decision tasks that incorporated “black swan” options that have a very low probability but involve substantial loss. When faced with a choice between a safer option that involved no risk of loss and a riskier alternative with a very low probability of substantial loss, most participants chose the safer option regardless of OCD symptom level. However, when faced with choices between options that had expected values similar to the previous choices, but where each option carried some low risk of a substantial loss, there was a significant shift towards riskier decisions. These effects were stronger when the task involved a contamination-based, health-relevant decision as compared to one with financial outcomes. The results suggest that both low- and high-symptom OC participants approach decisions involving risk-free options and decisions involving only risky alternatives in qualitatively different ways. There was some evidence that measures of impulsivity were better predictors of the shift to risky decision making than OCD symptomatology.
How does the process of information transmission affect the cultural or linguistic products that emerge? This question is often studied experimentally and computationally via iterated learning, a procedure in which participants learn from previous participants in a chain. Iterated learning is a powerful tool because, when all participants share the same priors, the stationary distributions of the iterated learning chains reveal those priors. In many situations, however, it is unreasonable to assume that all participants share the same prior beliefs. We present four simulation studies and one experiment demonstrating that when the population of learners is heterogeneous, the behavior of an iterated learning chain can be unpredictable and is often systematically distorted by the learners with the most extreme biases. This results in group‐level outcomes that reflect neither the behavior of any individuals within the population nor the overall population average. We discuss implications for the use of iterated learning as a methodological tool as well as for the processes that might have shaped cultural and linguistic evolution in the real world.
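The distortion described here can be reproduced in a few lines. The sketch below is our own illustration, not the authors' simulation code: it runs a chain of Beta-Binomial learners who each sample a hypothesis from their posterior. With a shared prior, the chain's long-run average tracks that prior; mixing in a minority of learners with an extreme prior pulls the whole chain away from it:

```python
import random

def iterated_learning(prior_params, n_generations=2000, n_data=5, seed=0):
    """Simulate a chain of Beta-Binomial learners.

    Each generation observes n_data binary tokens produced by the previous
    learner, forms the posterior Beta(a + k, b + n_data - k), samples a
    hypothesis theta from it, and generates data for the next learner.
    prior_params is a list of (a, b) Beta priors; each generation's learner
    is drawn at random from this population (heterogeneous if it varies).
    """
    rng = random.Random(seed)
    theta = 0.5
    trace = []
    for _ in range(n_generations):
        k = sum(rng.random() < theta for _ in range(n_data))
        a, b = rng.choice(prior_params)
        theta = rng.betavariate(a + k, b + n_data - k)
        trace.append(theta)
    return trace

# Homogeneous population: the stationary distribution mirrors the shared
# Beta(2, 2) prior, so the long-run mean sits near 0.5.
homog = iterated_learning([(2, 2)])

# Heterogeneous population: a one-in-four minority with an extreme prior
# (Beta(0.1, 10), strongly favouring low theta) distorts the whole chain.
heterog = iterated_learning([(2, 2), (2, 2), (2, 2), (0.1, 10)])
```

Note that the heterogeneous chain's average reflects neither the majority prior nor a simple population average, echoing the group-level distortion the abstract describes.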
The COVID-19 global pandemic has brought into sharp focus the urgency of tackling the question of how globalized humanity responds to a global societal threat, which can adversely affect a large portion of the human population. The changing geospatial distribution of COVID-19 morbidity paints a gloomy picture of cross-national differences in human vulnerabilities across the globe. We describe the dynamic nexus among societal – particularly pathogen – threat, social institutions, and culture, and discuss collectivism (ingroup favouritism and outgroup avoidance) and tightness (narrow prescription of behaviours and severe punishment of norm violations) as potential cultural adaptations to prevalent pathogen threats. We then sketch out a theoretical framework for cultural dynamics of collective adaptation to pathogen threats, outline a large number of theory- and policy-relevant research questions and what answers we have at present, and end with a call for renewed efforts to investigate collective human responses to societal threats.
The curse of dimensionality, which has been widely studied in statistics and machine learning, occurs when additional features cause the size of the feature space to grow so quickly that learning classification rules becomes increasingly difficult. How do people overcome the curse of dimensionality when acquiring real‐world categories that have many different features? Here we investigate the possibility that the structure of categories can help. We show that when categories follow a family resemblance structure, people are unaffected by the presence of additional features in learning. However, when categories are based on a single feature, they fall prey to the curse, and having additional irrelevant features hurts performance. We compare and contrast these results to three different computational models to show that a model with limited computational capacity best captures human performance across almost all of the conditions in both experiments.
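A minimal simulation conveys why family-resemblance structure blunts the curse. This is an invented illustration, not the study's stimuli or models: with a prototype-based classifier, every added feature is partially diagnostic, so accuracy does not fall as dimensionality grows, whereas a learner hunting for a single relevant feature among d candidates faces only a 1/d chance of guessing it:

```python
import random

def family_resemblance_accuracy(d, n_items=200, flip=0.2, seed=0):
    """Prototype classifier on family-resemblance categories in d dimensions.

    Category A's prototype is all ones and B's is all zeros; each exemplar
    independently flips each feature with probability `flip`. An item is
    classified by which prototype it is nearer to (majority of features).
    """
    rng = random.Random(seed)
    correct = 0
    for _ in range(n_items):
        is_a = rng.random() < 0.5
        proto = [1] * d if is_a else [0] * d
        item = [f if rng.random() > flip else 1 - f for f in proto]
        guess_a = sum(item) * 2 > d    # nearer the all-ones prototype
        correct += (guess_a == is_a)
    return correct / n_items

# More redundant features make the prototype classifier *more* reliable,
# whereas a single-feature rule hidden among d features gets harder to find
# (a hypothesis-testing learner guesses the relevant feature with prob 1/d).
acc_low = family_resemblance_accuracy(d=2)
acc_high = family_resemblance_accuracy(d=16)
```

The concentration of the feature-match count around its mean is what rescues the family-resemblance learner: each extra feature adds signal rather than search space.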
The study of semi-supervised category learning has generally focused on how additional unlabeled information might benefit category learning when combined with given labeled information. The literature is also somewhat contradictory, sometimes appearing to show a benefit of unlabeled information and sometimes not. In this paper, we frame the problem differently, focusing on when labels might be helpful to a learner who has access to lots of unlabeled information. Using an unconstrained free-sorting categorization experiment, we show that labels are useful to participants only when the category structure is ambiguous, and that people’s responses are driven by the specific set of labels they see. We present an extension of Anderson’s Rational Model of Categorization that captures this effect.
Bayesian statistics offers a normative description of how a person should update their original beliefs (i.e., their priors) in light of new evidence (i.e., the likelihood). Previous research suggests that people tend to under-weight both their prior (base rate neglect) and the likelihood (conservatism), although this varies by individual and situation. Yet this work generally elicits people's knowledge as single point estimates (e.g., x has a 5% probability of occurring) rather than as a full distribution. Here we demonstrate the utility of eliciting and fitting full distributions when studying these questions. Across three experiments, we found substantial variation in the extent to which people showed base rate neglect and conservatism, which our method allowed us to measure simultaneously at the level of the individual for the first time. While most people tended to disregard the base rate, they did so less when the prior was made explicit. Although many individuals were conservative, there was no apparent systematic relationship between base rate neglect and conservatism within individuals. We suggest that this method shows great potential for studying human probabilistic reasoning.
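One common descriptive way to express both biases at once is a log-odds model in which the prior's log-odds and the log-likelihood ratio each receive their own weight; weights of 1 recover exact Bayesian updating, while weights below 1 correspond to base rate neglect and conservatism, respectively. The sketch below is offered as an illustration of that idea, not as the elicitation or fitting procedure used in the paper, and the parameter values are invented:

```python
import math

def logit(p):
    return math.log(p / (1 - p))

def inv_logit(x):
    return 1 / (1 + math.exp(-x))

def judged_posterior(prior, llr, a=1.0, b=1.0):
    """Posterior probability under a log-odds distortion model.

    a scales the prior's log-odds (a < 1: base rate neglect),
    b scales the log-likelihood ratio (b < 1: conservatism).
    a = b = 1 recovers exact Bayesian updating.
    """
    return inv_logit(a * logit(prior) + b * llr)

prior = 0.05                   # low base rate of the hypothesis (invented)
llr = math.log(0.9 / 0.1)      # evidence favouring it 9:1 (invented)

bayes = judged_posterior(prior, llr)                  # ideal observer
biased = judged_posterior(prior, llr, a=0.3, b=0.6)   # neglects the base
                                                      # rate, conservative
```

With a low base rate, neglecting the prior inflates the judged posterior even though conservatism damps the evidence, which is why the two biases must be measured jointly rather than inferred from a single judgment.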
As Bayesian methods become more popular among behavioral scientists, they will inevitably be applied in situations that violate the assumptions underpinning typical models used to guide statistical inference. With this in mind, it is important to know something about how robust Bayesian methods are to the violation of those assumptions. In this paper, we focus on the problem of contaminated data (such as data with outliers or conflicts present), with specific application to the problem of estimating a credible interval for the population mean. We evaluate five Bayesian methods for constructing a credible interval, using toy examples to illustrate the qualitative behavior of different approaches in the presence of contaminants, and an extensive simulation study to quantify the robustness of each method. We find that the “default” normal model used in most Bayesian data analyses is not robust, and that approaches based on the Bayesian bootstrap are only robust in limited circumstances. A simple parametric model based on Tukey’s “contaminated normal model” and a model based on the t-distribution were markedly more robust. However, the contaminated normal model had the added benefit of estimating which data points were discounted as outliers and which were not.
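The qualitative behavior described here is easy to reproduce. The sketch below is an illustrative grid approximation, not the paper's actual models or simulation code: it compares the posterior mean for the location parameter under a default normal likelihood and under a Tukey-style contaminated normal mixture, on invented data containing one outlier:

```python
import math

def grid_posterior_mean(data, loglik, lo=-5.0, hi=5.0, n=2001):
    """Posterior mean of mu on a grid, with a flat prior over [lo, hi]."""
    mus = [lo + (hi - lo) * i / (n - 1) for i in range(n)]
    logp = [sum(loglik(x, mu) for x in data) for mu in mus]
    m = max(logp)                              # stabilize the exponentials
    w = [math.exp(lp - m) for lp in logp]
    return sum(mu * wi for mu, wi in zip(mus, w)) / sum(w)

def log_normal(x, mu, sd=1.0):
    # Normal log-density up to an additive constant (cancels in the posterior).
    return -0.5 * ((x - mu) / sd) ** 2 - math.log(sd)

def log_contaminated(x, mu, sd=1.0, eps=0.1, k=10.0):
    """Tukey-style mixture: (1 - eps) N(mu, sd^2) + eps N(mu, (k*sd)^2)."""
    narrow = (1 - eps) * math.exp(log_normal(x, mu, sd))
    wide = eps * math.exp(log_normal(x, mu, k * sd))
    return math.log(narrow + wide)

data = [-0.3, 0.1, 0.2, -0.1, 0.0, 8.0]   # bulk near 0 plus one outlier

naive = grid_posterior_mean(data, log_normal)        # dragged by the outlier
robust = grid_posterior_mean(data, log_contaminated) # stays near the bulk
```

The normal model's posterior mean is pulled toward the sample mean, outlier included, while the mixture's wide component absorbs the outlier and leaves the estimate near the bulk of the data; the mixture's per-point component responsibilities are also what lets that model flag which points were discounted.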