Research on processing fluency and instrumental goal activation suggests people often perceive complex information positively when effort in a task is valued. The current article evaluates this idea in five online petition samples (total N = 1,047,655 petitions and over 200 million words), assessing how the linguistic fluency of a petition associates with support. Consistent with prior work, petitions with lower lexical fluency (fewer common words) received more signatures and were more likely to make a concrete change than petitions with higher lexical fluency (more common words). Exploratory results suggest other forms of linguistic complexity were also associated with petition support: petitions with more analytic writing (e.g., more formal and complex writing patterns) and less structural fluency (less readable writing) received more signatures than those with less analytic writing and more structural fluency. The effects held after controlling for the political leaning of the petition writers as inferred from their language patterns. Crucially, the lexical fluency results were also maintained across eight languages. Various types of linguistic complexity are therefore instrumental in getting people to support online causes. Contributions to fluency theory and the psychology of giving are discussed.
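The lexical fluency measure described above can be operationalized as the share of a text's tokens that appear on a list of common words. A minimal sketch follows; the tokenizer and the tiny word list are hypothetical illustrations, not the authors' actual instrument:

```python
def lexical_fluency(text, common_words):
    """Proportion of tokens that appear in a set of common words.

    Higher scores mean more lexically fluent (more common words);
    the abstract reports that *lower* fluency petitions received
    more signatures.
    """
    tokens = [t.strip(".,!?;:").lower() for t in text.split()]
    tokens = [t for t in tokens if t]
    if not tokens:
        return 0.0
    hits = sum(1 for t in tokens if t in common_words)
    return hits / len(tokens)


# Hypothetical mini word list, for illustration only.
COMMON = {"the", "a", "and", "to", "of", "we", "is"}
score = lexical_fluency("We ask the government to act now", COMMON)
```

In practice such scores would be computed against a corpus-derived frequency list rather than a hand-picked set.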
Dehumanization is a topic of significant interest for academia and society at large. Empirical studies often have people rate the evolved nature of outgroups, and prior work suggests immigrants are common victims of less-than-human treatment. While existing work suggests who dehumanizes particular outgroups and who is often dehumanized, the extant literature knows less about why people dehumanize outgroups such as immigrants. The current work takes up this opportunity by examining why people dehumanize immigrants said to be illegal and how measurement format affects dehumanization ratings. Participants (N = 672) dehumanized such immigrants more if their ratings were made on a slider rather than by clicking images of hominids, an effect most pronounced for Republicans. Dehumanization was negatively associated with warmth toward illegal immigrants and with the perceived unhappiness that illegal immigrants feel as a result of U.S. immigration policies. Finally, most dehumanization was not entirely blatant but was instead also captured by virtuous violence and affect, suggesting the many ways that dehumanization can manifest, as predicted by theory. This work offers a mechanistic account of why people dehumanize immigrants and addresses how survey measurement artifacts (e.g., clicking on images of hominids vs. using a slider) affect dehumanization rates. We discuss how these data extend dehumanization theory and inform empirical research.
The current paper used a preregistered set of language dimensions to indicate how scientists psychologically managed the COVID-19 pandemic and its effects. Study 1 evaluated over 1.8 million preprints from arXiv.org and assessed how papers written during the COVID-19 pandemic reflected patterns of psychological trauma and emotional upheaval compared to those written before the pandemic. The data suggest papers written during the pandemic contained more affect terms and more cognitive processing terms, indicating writers working through a crisis, than papers written before the pandemic. Study 2 (N = 74,744 published PLoS One papers) observed consistent emotion results, though cognitive processing patterns were inconsistent. Papers written specifically about COVID-19 contained more emotion than those not written about COVID-19. Finally, Study 3 (N = 361,189 published papers) replicated the Study 2 emotion results across more diverse journals and observed that papers written during the pandemic contained a greater rate of cognitive processing terms, but a lower rate of analytic thinking, than papers written before the pandemic. These data suggest emotional upheavals are associated with psychological correlates reflected in the language of scientists at scale. Implications for psychology of language research and trauma are discussed.
Generative artificial intelligence (AI), a class of artificial intelligence systems, is not currently the choice technology for text analysis, but prior work suggests it may have some utility for assessing dynamics like emotion. The current work builds upon this empirical foundation to consider how analytic thinking scores from a large language model chatbot, ChatGPT, were linked to analytic thinking scores from dictionary-based tools like Linguistic Inquiry and Word Count (LIWC). Using over 16,000 texts from four samples, tested against three prompts and two large language models (GPT-3.5, GPT-4), the evidence suggests there were small associations between ChatGPT and LIWC analytic thinking scores (meta-analytic effect sizes: .058 < rs < .304; ps < .001). When given the formula to calculate the LIWC analytic thinking index, ChatGPT performed incorrect mathematical operations in 22% of cases, suggesting basic word and number processing may be unreliable with large language models. Researchers should be cautious when using AI for text analysis.
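The LIWC analytic thinking index referenced above is, in published work, the Categorical-Dynamic Index (CDI): a linear combination of eight LIWC category percentages. A sketch assuming that published formula follows; the input percentages are illustrative, and LIWC's own category definitions apply:

```python
def analytic_thinking(liwc):
    """Categorical-Dynamic-Index-style analytic thinking score,
    assuming the published CDI formula (higher = more analytic).
    All inputs are LIWC category percentages (0-100)."""
    return (30
            + liwc["article"] + liwc["prep"]      # categorical language
            - liwc["ppron"] - liwc["ipron"]       # pronouns
            - liwc["auxverb"] - liwc["conj"]      # dynamic/narrative style
            - liwc["adverb"] - liwc["negate"])


# Illustrative category percentages for one text.
sample = {"article": 8, "prep": 14, "ppron": 5, "ipron": 4,
          "auxverb": 7, "conj": 5, "adverb": 4, "negate": 1}
score = analytic_thinking(sample)
```

The arithmetic is a simple weighted sum, which is precisely the kind of operation the abstract reports ChatGPT miscomputed in 22% of cases.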
Understanding how people think is a key interest in psychology, and recent advances in automated text analysis have used a verbal analytic thinking index to approximate Kahneman's System 2 (e.g., deliberate, rational thinking). That is, prior work used a style-word index to assess university student admissions essays and observed that those who used more articles and prepositions relative to storytelling words (e.g., pronouns) had higher grades at the end of college. This work presumed that verbal analytic thinking represents one's cognitive ability or intellectual potential, but this presumption had remained untested. The current research evaluated whether verbal analytic thinking reflects cognitive ability or instead one's interest and motivation to engage in thinking, called need for cognition. Across 500 participants and two writing samples, the most reliable link to verbal analytic thinking was need for cognition, addressing an unexamined empirical question in psychology of language research.
Prior work suggests those who lie prolifically tend to be younger and self-identify as male compared to those who engage in everyday lying, but little research has developed an understanding of prolific lying beyond demographics. Study 1 (N = 775) replicated the prior demographic effects and assessed prolific lying through situation-level (e.g., opportunistic cheating) and individual-level characteristics (e.g., dispositional traits, general communication patterns) for white and big lies. For these two lie types, prolific lying was associated with more opportunistic cheating, the use of fewer adjectives, and being high on psychopathy compared to everyday lying. Study 2 (N = 1,022) replicated these results and observed a deception consensus effect reported in other studies: the more that people deceived, the more they believed that others deceived as well. This piece develops a deeper theoretical understanding of prolific lying for white and big lies, combining evidence of situational, dispositional, and communication characteristics.
Deceptive and truthful statements draw on a common pool of communication data, and they are typically embedded within false and truthful narratives. How often does embeddedness occur, who communicates embedded deceptions and truths, and what linguistic characteristics reveal embeddedness? In this study, nearly 800 participants deceived or told the truth about their friends and indicated the embedded deceptions (e.g., false statements told within entirely false or truthful messages) and truths (e.g., truthful statements told within entirely false or truthful messages). Embedded deceiving and truth-telling rates were only statistically different among those who were instructed to tell the truth. Therefore, the distributions of embedded deceptions and truths were similar for false statements but dissimilar for truthful statements. Embedded truths were also more likely to be written by women (vs. men) and liberals (vs. conservatives), and to be communicated in a formal rather than narrative style. Theoretical implications are discussed.
Across four studies, two controlled lab experiments and two field studies, we tested the efficacy of immersive Virtual Reality (VR) as an education medium for teaching the consequences of climate change, particularly ocean acidification. Over 270 participants from four different learning settings experienced an immersive underwater world designed to show the process and effects of rising seawater acidity. In all of our investigations, after experiencing immersive VR, people demonstrated knowledge gains or inquisitiveness about climate science and, in some cases, displayed more positive attitudes toward the environment on pre- versus post-test assessments. The analyses also revealed a potential mechanism for the learning effects: the more that people explored the spatial learning environment, the more they demonstrated a change in knowledge about ocean acidification. This work is unique in showing distinct learning gains or an interest in learning across a variety of participants (high school students, college students, adults), measures (learning gain scores, tracking data about movement in the virtual world, qualitative responses from classroom teachers), and content (multiple versions varying in length and content about climate change were tested). Our findings explicate the opportunity to use immersive VR for environmental education and to drive information-seeking about important social issues such as climate change.
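The exploration-learning mechanism described above is, at its simplest, a correlation between a movement-tracking measure and a knowledge change score. A self-contained Pearson r sketch follows; the exploration totals and gain scores are hypothetical placeholders, not the study's data:

```python
from math import sqrt


def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)


exploration = [1.0, 2.0, 3.0, 4.0]   # hypothetical movement totals
gain = [0.2, 0.5, 0.4, 0.9]          # hypothetical knowledge gains
r = pearson_r(exploration, gain)     # positive r = more exploration,
                                     # more knowledge change
```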
This paper evaluates persuasion dynamics of animal adoption using text data from a large archive of online pet advertisements. In Study 1, 184,805 adoption profiles from Petfinder indicated how long a pet would remain online and unadopted. Consistent with evidence from related persuasion settings such as peer-to-peer lending, pets spent less time online if profile writers had an analytic thinking style and advertisements contained few peripheral processing cues such as social words. Study 2 (N = 676,004 adoption profiles) replicated Study 1 patterns and found that adopted pet profiles contained more markers of analytic thinking and fewer social words than unadopted pet profiles. In an experiment (Study 3, N = 987), participants read an adoption advertisement typical of adopted or unadopted pets. Participants self-reported that they would be more likely to adopt a pet and visit its shelter after reading a more analytic and less social adoption profile (indicators of adopted pets) than a less analytic and more social profile (indicators of unadopted pets). Finally, Study 4 (N = 3,245 Tweets) demonstrated that more analytic and less social word patterns relate to increased engagement online, such as likes and retweets. These data suggest pet adoption that begins online is a social and psychological process, enhanced by messages with markers of complex thinking and few humanizing references. Advances to persuasion theory are discussed, underscored by the implications for pet adoption and how language patterns in online advertisements can reflect influence at scale.
When scientists report false data, does their writing style reflect their deception? In this study, we investigated the linguistic patterns of fraudulent (N = 24; 170,008 words) and genuine publications (N = 25; 189,705 words) first-authored by social psychologist Diederik Stapel. The analysis revealed that Stapel's fraudulent papers contained linguistic changes in science-related discourse dimensions, including more terms pertaining to methods, investigation, and certainty than his genuine papers. His writing style also matched patterns in other deceptive language, including fewer adjectives in fraudulent publications relative to genuine publications. Using differences in these language dimensions, we were able to classify Stapel's publications with above-chance accuracy. Beyond these discourse dimensions, Stapel included fewer co-authors when reporting fake data than genuine data, although other evidentiary claims (e.g., number of references and experiments) did not differ across the two article types. This research supports recent findings that language cues vary systematically with deception, and that deception can be revealed in fraudulent scientific discourse.
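The above-chance classification described above can be illustrated with the simplest possible decision rule: since fraudulent papers contained fewer adjectives, label a paper fraudulent when its adjective rate falls below a cutoff. The rates, labels, and threshold below are hypothetical, and the actual study's classifier was built on multiple language dimensions, not this toy rule:

```python
def classify_by_adjective_rate(rates, threshold):
    """Toy one-feature classifier: papers with adjective rates below
    the threshold are labeled 'fraudulent', others 'genuine'.
    Threshold and rates here are purely illustrative."""
    return ["fraudulent" if rate < threshold else "genuine"
            for rate in rates]


rates = [0.031, 0.052, 0.028, 0.049]                    # hypothetical
labels = ["fraudulent", "genuine", "fraudulent", "genuine"]
preds = classify_by_adjective_rate(rates, threshold=0.04)
accuracy = sum(p == y for p, y in zip(preds, labels)) / len(labels)
```

Comparing such an accuracy against the base rate of the majority class is what "above chance" means in this context.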