This article evaluates the psychological correlates of imperative speech through pronouns. We demonstrate that people communicate with more collective immediacy (“we” words) when using imperatives than nonimperatives in an experiment (Study 1: N = 828) and in field studies of American politicians (Study 2a: N = 123,678 speeches) and Joseph Stalin (Study 2b: N = 593 speeches). However, respondents experience a psychological distancing effect after receiving an imperative (fewer “I” words). This experimental pattern (Study 3: N = 852) also holds in the field, in U.S. Supreme Court dissents from the Roberts Court (Study 4: N = 644). Exploratory findings suggest that third-person plural pronouns (“they” words) are used more in imperative than in nonimperative speech. Our evidence supports an interpersonal imperatives asymmetry: imperatives demand psychological support when communicating how the world must be, but they undermine the autonomy of respondents. Social and psychological implications of these data are discussed.
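The pronoun measures here are rates of first-person plural, first-person singular, and third-person plural words, typically computed with a validated dictionary such as LIWC. A minimal sketch of that kind of rate measure, with illustrative word lists rather than the study’s actual dictionary:

```python
import re

# Illustrative (non-exhaustive) pronoun categories; published work in this
# area typically uses validated LIWC dictionaries rather than ad hoc lists.
PRONOUNS = {
    "we": {"we", "us", "our", "ours", "ourselves"},
    "i": {"i", "me", "my", "mine", "myself"},
    "they": {"they", "them", "their", "theirs", "themselves"},
}

def pronoun_rates(text: str) -> dict:
    """Return each pronoun category as a percentage of total words."""
    words = re.findall(r"[a-z']+", text.lower())
    total = len(words) or 1  # avoid division by zero on empty input
    return {cat: 100 * sum(w in vocab for w in words) / total
            for cat, vocab in PRONOUNS.items()}

print(pronoun_rates("We must act now. I worry they will not follow us."))
```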
Humans often display a truth-bias—the perception that others are honest independent of message veracity—but does this phenomenon extend to generative artificial intelligence (AI)? We had humans and large language models make nearly 1,000 veracity judgments across different prompts. Human detection accuracies were near chance (50%–53%) with notable truth-biases (59%–64%); AI had a substantially greater truth-bias than humans (67%–99%). GPT-4 was also truth-default, not suspecting deception when veracity assessments were unprompted. Together, people and AI judge most information to be true.
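Accuracy and truth-bias are distinct statistics computed from the same judgments, which is how near-chance accuracy can coexist with a strong truth-bias. A sketch on toy data (the variable names and data are hypothetical, not the study’s):

```python
def detection_stats(judgments, truths):
    """judgments/truths: parallel lists of booleans (True = 'truthful').

    Accuracy   = share of veracity judgments that match ground truth.
    Truth-bias = share of messages judged truthful, regardless of truth.
    """
    n = len(judgments)
    accuracy = sum(j == t for j, t in zip(judgments, truths)) / n
    truth_bias = sum(judgments) / n
    return accuracy, truth_bias

# Toy data: 6 of 8 messages judged true, but only 4 judgments correct.
judged = [True, True, True, True, True, True, False, False]
actual = [True, False, True, False, True, False, True, False]
print(detection_stats(judged, actual))  # (0.5, 0.75): chance accuracy, strong truth-bias
```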
Research has documented substantial individual differences in the proclivity for honesty or dishonesty, and personality traits meaningfully account for variation in honesty–dishonesty. Research also shows important situational variation related to deception, as situations can motivate or discourage dishonest behavior. The current experiment examines personality and situational influences on honesty–dishonesty in tandem, arguing that their effects may not be additive. Participants (N = 114) engaged in an experimental task providing the opportunity to cheat for tangible gain. The situation varied to encourage or discourage cheating. Participants completed the HEXACO-100 and the Dark Triad of Personality scales. Both situational variation and personality dimensions predicted honesty–dishonesty, but the effects of personality were not uniform across situations. These results were also supported using public data from an independent, multilab sample (N = 5,757). We outline how these results inform our understanding of deception, situational influences, and the role of disposition in honesty.
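The claim that personality effects were not uniform across situations is a statistical interaction claim. A minimal sketch of such a personality × situation model on simulated data, assuming hypothetical variable names rather than the study’s actual analysis:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
honesty = rng.normal(3.5, 0.8, n)    # hypothetical HEXACO honesty-humility scores
encourage = rng.integers(0, 2, n)    # 1 = situation encourages cheating

# Simulate a personality effect that is stronger when cheating is encouraged,
# i.e., a non-additive combination of disposition and situation.
p = 1 / (1 + np.exp(-(-1 + encourage * (4 - honesty))))
cheated = rng.binomial(1, p)

df = pd.DataFrame({"honesty": honesty, "encourage": encourage, "cheated": cheated})
model = smf.logit("cheated ~ honesty * encourage", data=df).fit(disp=0)
print(model.params)  # the honesty:encourage term captures the non-additive effect
```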
Most deception scholars agree that deception production and deception detection effects often display mixed results across settings. For example, some liars use more emotion than truth-tellers when discussing fake opinions on abortion, but not when communicating fake distress. Similarly, verbal and nonverbal cues are often inconsistent predictors of deception, leading to mixed accuracies and detection rates. Why are lie production and detection effects typically inconsistent? In this piece, we argue that aspects of the context often go unconsidered in how lies are produced and detected, and that greater theory-building around the contextual constraints of deception is therefore required. We reintroduce and extend the Contextual Organization of Language and Deception (COLD) model, a framework that outlines how psychological dynamics, pragmatic goals, and genre conventions are aspects of the context that moderate the relationship between deception and communication behavior such as language. We extend this foundation by proposing three additional contextual aspects for the COLD model (individual differences, situational opportunities for deception, and interpersonal characteristics) that can specifically inform and potentially improve forensic interviewing. We conclude with a forward-looking perspective for deception researchers and practitioners on the need for more theoretical explication of deception and its detection in context.
In this preregistered experiment, we address an understudied question in the deception and language literature: What is the impact of context on false and truthful language patterns? Drawing on two theories, Truth-Default Theory and the Contextual Organization of Language and Deception model, we instructed participants (N = 639) to lie, to tell the truth, or to write within a genre without explicit lying or truth-telling instructions, across different topics (e.g., their friends, attitudes on abortion). The results successfully replicate several cue-based models of deception for self-references and negative affect, such as the Newman–Pennebaker model. Participants without lying or truth-telling instructions, but who wrote within genre conventions, showed markedly similar patterns to truth-tellers, though indicators of analytic thinking, adjectives, and auxiliary verbs were distinct. The data were also evaluated with a topic modeling approach, which suggests that the abortion process was construed negatively when people lied about the topic; truth-tellers construed abortion in objective terms, and genre-related speech highlighted key role-players (e.g., the government, men, women, the baby). We discuss how these data advance deception and language theory.
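As one illustration of the topic modeling approach, a short latent Dirichlet allocation sketch with scikit-learn; the documents and topic count here are toy assumptions, not the study’s corpus or specification:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy documents standing in for participant writing samples.
docs = [
    "the government should not regulate this decision",
    "men and women disagree about the role of government",
    "the process felt wrong and I regret the decision",
    "my friend supported me through the whole process",
]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(docs)

# Fit a 2-topic model and print the highest-weighted terms per topic.
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
terms = vec.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[-5:][::-1]]
    print(f"Topic {k}: {', '.join(top)}")
```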
Researchers and practitioners have used virtual reality (VR) as a tool to understand attitudes and behaviors around climate change for decades. As VR has become more immersive, mainstream, and commercially available, it has also become a medium for education about climate issues, a way to indirectly expose users to novel stimuli, and a tool to tell stories about antienvironmental activity. This review explicates the relationship between VR and climate change from a psychological perspective and offers recommendations to make virtual experiences engaging, available, and impactful for users. Climate change is perhaps the most urgent global issue of our lifetime, with irreversible consequences. It therefore requires innovative experiential approaches to teach its effects and modify attitudes in support of proenvironmental actions.
This article examines how verbal authenticity influences person perception. Our work combines human judgments and natural language processing to suggest that verbal authenticity is a positive predictor of interpersonal interest (Study 1: 294 dyadic conversations), engagement with speeches (Study 2: 2,655 TED talks), entrepreneurial success (Study 3: 478 Shark Tank pitches), and social media engagement (Studies 4a–c: 387,039 tweets). We find that communicating authenticity is associated with increased interest in and perceived connection to another person, more comments and views for TED talks, receiving a financial investment from investors, and more social media likes and retweets. Our work is among the first to evaluate how authenticity relates to person perception and how it manifests naturally in verbal data.
The comments teachers write when sending students to the office have the potential to increase our understanding of how bias may contribute to longstanding racial disparities in school discipline. However, large-scale analysis of open text has traditionally been prohibitively costly. Using natural language processing techniques, we examined over 3.5 million office discipline records from national samples of more than 4,000 schools to determine whether teachers’ linguistic patterns differed when describing incidents depending on the race/ethnicity and gender of the students. These analyses consistently showed that teachers wrote longer descriptions and included more negative emotion when disciplining Black compared to White students, especially for Black girls. In conjunction with psychology of language theory, the patterns suggest that teachers may perceive and process student behavior differently depending on student identities. Implications of the findings and the potential of research on naturally occurring language data in education are discussed.
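A hedged sketch of the kind of group comparison described, on hypothetical records and with a toy negative-emotion lexicon; the study presumably used a validated dictionary and millions of real records:

```python
import re
import pandas as pd

# Illustrative negative-emotion lexicon; a real analysis would use a
# validated dictionary (e.g., a LIWC-style negative-emotion category).
NEG = {"angry", "defiant", "disrespectful", "hostile", "refused", "yelled"}

# Hypothetical discipline records with placeholder group labels.
records = pd.DataFrame({
    "group": ["A", "A", "B", "B"],
    "text": [
        "Student was extremely defiant and yelled at staff repeatedly",
        "Student refused directions and was hostile toward peers",
        "Student talked during the lesson",
        "Student was late to class",
    ],
})

def describe(text):
    """Description length and negative-emotion rate (% of words) per record."""
    words = re.findall(r"[a-z]+", text.lower())
    neg_rate = 100 * sum(w in NEG for w in words) / len(words)
    return pd.Series({"length": len(words), "neg_pct": neg_rate})

features = records["text"].apply(describe)
print(records.join(features).groupby("group")[["length", "neg_pct"]].mean())
```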
To the human eye, AI-generated outputs of large language models have increasingly become indistinguishable from human-generated outputs. Therefore, to determine the linguistic properties that separate AI-generated text from human-generated text, we used a state-of-the-art chatbot, ChatGPT, and compared the hotel reviews it wrote to human-generated counterparts across content (emotion), style (analytic writing, adjectives), and structural features (readability). Results suggested AI-generated text had a more analytic style and was more affective, more descriptive, and less readable than human-generated text. Classification accuracies for AI-generated versus human-generated texts were over 80%, far exceeding chance (∼50%). Here, we argue that AI-generated text is inherently false when communicating about personal experiences that are typical of humans and that it differs from intentionally false human-generated text at the language level. Implications for AI-mediated communication and deception research are discussed.
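A minimal sketch of the classification setup described, using a bag-of-words classifier on hypothetical reviews; the study’s actual features and classifier may differ, and real evaluation would use non-duplicated, held-out texts:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Hypothetical corpus: reviews paired with labels (1 = AI-generated, 0 = human).
texts = [
    "The room was impeccably clean and the staff were consistently attentive.",
    "Honestly the shower was weird but breakfast was great, would stay again.",
    "The location offers convenient access to numerous local attractions.",
    "We got upgraded for free!! kids loved the pool, wifi kinda spotty tho.",
] * 10  # repeated to give cross-validation something to work with
labels = [1, 0, 1, 0] * 10

# Tf-idf unigrams/bigrams feeding a logistic regression classifier.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
scores = cross_val_score(clf, texts, labels, cv=5)
print(f"Mean cross-validated accuracy: {scores.mean():.2f} (chance = 0.50)")
```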