How We Reason Johnson-Laird, Philip
2006, 2008-10-23
eBook
Good reasoning can lead to success; bad reasoning can lead to catastrophe. Yet it is not obvious how we reason, or why we make mistakes. This book by one of the pioneers of the field, Philip Johnson-Laird, looks at the mental processes that underlie our reasoning. It provides the most accessible account yet of the science of reasoning.
Mental models and human reasoning Johnson-Laird, Philip N.
Proceedings of the National Academy of Sciences,
10/2010, Volume: 107, Issue: 43
Journal Article
Peer-reviewed
Open access
To be rational is to be able to reason. Thirty years ago psychologists believed that human reasoning depended on formal rules of inference akin to those of a logical calculus. This hypothesis ran into difficulties, which led to an alternative view: reasoning depends on envisaging the possibilities consistent with the starting point—a perception of the world, a set of assertions, a memory, or some mixture of them. We construct mental models of each distinct possibility and derive a conclusion from them. The theory predicts systematic errors in our reasoning, and the evidence corroborates this prediction. Yet, our ability to use counterexamples to refute invalid inferences provides a foundation for rationality. On this account, reasoning is a simulation of the world fleshed out with our knowledge, not a formal rearrangement of the logical skeletons of sentences.
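The abstract's central claim — an inference is valid only if no envisaged possibility is a counterexample, i.e. a model in which the premises hold but the conclusion fails — can be sketched in a few lines of Python. This is an illustration of the idea, not the author's implementation, and the propositions are hypothetical examples:

```python
from itertools import product

def models(props, holds):
    """All assignments over `props` in which the condition `holds` is true:
    the distinct possibilities consistent with a starting point."""
    return [dict(zip(props, vals))
            for vals in product([True, False], repeat=len(props))
            if holds(dict(zip(props, vals)))]

def valid(props, premises, conclusion):
    """Valid iff no model of the premises is a counterexample,
    i.e. no possibility makes the premises true and the conclusion false."""
    return all(conclusion(m) for m in models(props, premises))

# Modus ponens: 'if rain then wet; rain' entails 'wet' — valid.
print(valid(["rain", "wet"],
            lambda m: (not m["rain"] or m["wet"]) and m["rain"],
            lambda m: m["wet"]))     # True

# Affirming the consequent: 'if rain then wet; wet' does not entail 'rain'.
print(valid(["rain", "wet"],
            lambda m: (not m["rain"] or m["wet"]) and m["wet"],
            lambda m: m["rain"]))    # False: counterexample is not-rain & wet
```

The second call fails precisely because a counterexample model exists, which is the theory's account of how people refute invalid inferences.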
This article presents a fundamental advance in the theory of mental models as an explanation of reasoning about facts, possibilities, and probabilities. It postulates that the meanings of compound assertions, such as conditionals (if) and disjunctions (or), unlike those in logic, refer to conjunctions of epistemic possibilities that hold in default of information to the contrary. Various factors such as general knowledge can modulate these interpretations. New information can always override sentential inferences; that is, reasoning in daily life is defeasible (or nonmonotonic). The theory is a dual process one: It distinguishes between intuitive inferences (based on system 1) and deliberative inferences (based on system 2). The article describes a computer implementation of the theory, including its two systems of reasoning, and it shows how the program simulates crucial predictions that evidence corroborates. It concludes with a discussion of how the theory contrasts with those based on logic or on probabilities.
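The contrast between a connective's full set of epistemic possibilities (available to deliberative system 2) and the single possibility an intuitive system 1 represents can be sketched as follows. This is our own illustration under the theory's standard truth-pair notation, not the article's program:

```python
def possibilities(connective):
    """Fully explicit models (system 2): each (A, C) truth pair is one
    epistemic possibility that holds by default."""
    table = {
        "if A then C":    {(True, True), (False, True), (False, False)},
        "A or C (incl.)": {(True, True), (True, False), (False, True)},
        "A or C (excl.)": {(True, False), (False, True)},
    }
    return table[connective]

def mental_models(connective):
    """System 1 represents only the possibility mentioned explicitly:
    a conditional's lone mental model is 'A and C'."""
    return {"if A then C": {(True, True)}}.get(
        connective, possibilities(connective))

print(sorted(possibilities("if A then C")))  # three possibilities, not a truth table
print(mental_models("if A then C"))          # {(True, True)}
```

Modulation by knowledge would add or remove pairs from these default sets, which is why new information can override sentential inferences.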
How poetry evokes emotions Johnson-Laird, Philip N.; Oatley, Keith
Acta psychologica,
April 2022, Volume: 224
Journal Article
Peer-reviewed
Open access
Poetry evokes emotions. It does so, according to the theory we present, from three sorts of simulation. They each can prompt emotions, which are communications both within the brain and among people. First, models of a poem's semantic contents can evoke emotions as do models that occur in depictions of all kinds, from novels to perceptions. Second, mimetic simulations of prosodic cues, such as meter, rhythm, and rhyme, yield particular emotional states. Third, people's simulations of themselves enable them to know that they are engaged with a poem, and an aesthetic emotion can occur as a result. The three simulations predict certain sorts of emotion, e.g., prosodic cues can evoke basic emotions of happiness, sadness, anger, and anxiety. Empirical evidence corroborates the theory, which we relate to other accounts of poetic emotions.
How individuals choose evidence to test hypotheses is a long-standing puzzle. According to an algorithmic theory that we present, it is based on dual processes: individuals' intuitions depending on mental models of the hypothesis yield selections of evidence matching instances of the hypothesis, but their deliberations yield selections of potential counterexamples to the hypothesis. The results of 228 experiments using Wason's selection task corroborated the theory's predictions. Participants made dependent choices of items of evidence: the selections in 99 experiments were significantly more redundant (using Shannon's measure) than those of 10,000 simulations of each experiment based on independent selections. Participants tended to select evidence corresponding to instances of hypotheses, or to its counterexamples, or to both. Given certain contents, instructions, or framings of the task, they were more likely to select potential counterexamples to the hypothesis. When participants received feedback about their selections in the "repeated" selection task, they switched from selections of instances of the hypothesis to selection of potential counterexamples. These results eliminated most of the 15 alternative theories of selecting evidence. In a meta-analysis, the model theory yielded a better fit of the results of 228 experiments than the one remaining theory based on reasoning rather than meaning. We discuss the implications of the model theory for hypothesis testing and for a well-known paradox of confirmation.
Public Significance Statement
Our research shows that, in testing hypotheses, individuals almost always select evidence seeking an instance of the hypothesis to corroborate it, and less often seek potential counterexamples to refute it. The data indicate that individuals do not reason independently about the evidence, a result that helped to eliminate most of the 16 existing cognitive theories.
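Shannon's measure of redundancy, which the abstract uses to test whether selections are mutually dependent, can be sketched in a few lines. The pattern counts below are invented for illustration; this is not the authors' analysis code:

```python
from math import log2

def redundancy(pattern_counts):
    """Shannon redundancy, 1 - H/H_max, of a distribution over selection
    patterns (e.g. subsets of the four Wason cards). Higher redundancy
    means choices cluster on fewer patterns than independence would allow."""
    total = sum(pattern_counts.values())
    probs = [c / total for c in pattern_counts.values() if c > 0]
    h = -sum(p * log2(p) for p in probs)        # observed entropy
    h_max = log2(len(pattern_counts))           # uniform over the patterns
    return 1 - h / h_max

# Hypothetical counts over some canonical Wason selections (illustrative only)
counts = {"p": 40, "p,q": 35, "p,not-q": 20, "other": 5}
print(round(redundancy(counts), 3))
```

In the paper's analysis, observed redundancy was compared against simulations in which each item of evidence was selected independently; a significantly higher observed value indicates dependent choices.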
A Priori True and False Conditionals Quelhas, Ana Cristina; Rasga, Célia; Johnson‐Laird, Philip N.
Cognitive science,
05/2017, Volume: 41, Issue: S5
Journal Article
Peer-reviewed
Open access
The theory of mental models postulates that meaning and knowledge can modulate the interpretation of conditionals. The theory's computer implementation implied that certain conditionals should be true or false without the need for evidence. Three experiments corroborated this prediction. In Experiment 1, nearly 500 participants evaluated 24 conditionals as true or false, and they justified their judgments by completing sentences of the form, It is impossible that A and ___ appropriately. In Experiment 2, participants evaluated 16 conditionals and provided their own justifications, which tended to be explanations rather than logical justifications. In Experiment 3, the participants also evaluated as possible or impossible each of the four cases in the partitions of 16 conditionals: A and C, A and not‐C, not‐A and C, not‐A and not‐C. These evaluations corroborated the model theory. We consider the implications of these results for theories of reasoning based on logic, probabilistic logic, and suppositions.
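The four-case partition that participants evaluated in Experiment 3 can be made concrete in a short sketch. We assume the core (unmodulated) interpretation of a conditional, under which only the case 'A and not-C' is impossible; this is an illustration, not the theory's implementation:

```python
CASES = [(True, True), (True, False), (False, True), (False, False)]

def core_possible(a, c):
    """Core meaning of 'if A then C': the only impossible case is A and not-C."""
    return not (a and not c)

for a, c in CASES:
    label = f"{'A' if a else 'not-A'} and {'C' if c else 'not-C'}"
    print(label, "->", "possible" if core_possible(a, c) else "impossible")
```

Modulation by meaning and knowledge can rule out further cases, which is how the implementation predicts that some conditionals are a priori true or false.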
We describe a dual‐process theory of how individuals estimate the probabilities of unique events, such as Hillary Clinton becoming U.S. President. It postulates that uncertainty is a guide to improbability. In its computer implementation, an intuitive system 1 simulates evidence in mental models and forms analog non‐numerical representations of the magnitude of degrees of belief. This system has minimal computational power and combines evidence using a small repertoire of primitive operations. It resolves the uncertainty of divergent evidence for single events, for conjunctions of events, and for inclusive disjunctions of events, by taking a primitive average of non‐numerical probabilities. It computes conditional probabilities in a tractable way, treating the given event as evidence that may be relevant to the probability of the dependent event. A deliberative system 2 maps the resulting representations into numerical probabilities. With access to working memory, it carries out arithmetical operations in combining numerical estimates. Experiments corroborated the theory's predictions. Participants concurred in estimates of real possibilities. They violated the complete joint probability distribution in the predicted ways, when they made estimates about conjunctions: P(A), P(B), P(A and B), disjunctions: P(A), P(B), P(A or B or both), and conditional probabilities P(A), P(B), P(B|A). They were faster to estimate the probabilities of compound propositions when they had already estimated the probabilities of each of their components. We discuss the implications of these results for theories of probabilistic reasoning.
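Why a primitive average violates the joint probability distribution is easy to see in a sketch. Averaging P(A) and P(B) can exceed the smaller marginal, yet P(A and B) can never do so; the numbers below are invented for illustration, and this is not the authors' implementation:

```python
def system1_conjunction(p_a, p_b):
    """System 1's primitive average as an intuitive estimate of P(A and B)."""
    return (p_a + p_b) / 2

def violates_joint_distribution(p_a, p_b, p_ab):
    """The probability calculus requires P(A and B) <= min(P(A), P(B))."""
    return p_ab > min(p_a, p_b)

est = system1_conjunction(0.8, 0.4)
print(round(est, 2))                                     # 0.6
print(violates_joint_distribution(0.8, 0.4, est))        # True: 0.6 > 0.4
```

The predicted violations in participants' conjunction estimates follow the same pattern: the intuitive estimate drifts toward the mean of the components rather than staying below both.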
Kinematic mental simulations in abduction and deduction Khemlani, Sangeet Suresh; Mackiewicz, Robert; Bucciarelli, Monica, et al.
Proceedings of the National Academy of Sciences,
10/2013, Volume: 110, Issue: 42
Journal Article
Peer-reviewed
Open access
We present a theory, and its computer implementation, of how mental simulations underlie the abductions of informal algorithms and deductions from these algorithms. Three experiments tested the theory’s predictions, using an environment of a single railway track and a siding. This environment is akin to a universal Turing machine, but it is simple enough for nonprogrammers to use. Participants solved problems that required use of the siding to rearrange the order of cars in a train (experiment 1). Participants abduced and described in their own words algorithms that solved such problems for trains of any length, and, as the use of simulation predicts, they favored “while-loops” over “for-loops” in their descriptions (experiment 2). Given descriptions of loops of procedures, participants deduced the consequences for given trains of six cars, doing so without access to the railway environment (experiment 3). As the theory predicts, difficulty in rearranging trains depends on the numbers of moves and cars to be moved, whereas in formulating an algorithm and deducing its consequences, it depends on the Kolmogorov complexity of the algorithm. Overall, the results corroborated the use of a kinematic mental model in creating and testing informal algorithms and showed that individuals differ reliably in the ability to carry out these tasks.
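The railway environment can be modeled minimally as three lists, with a single move operation shifting the cars nearest the junction. This is a sketch of the environment's mechanics under our own assumptions about its layout, not the authors' program:

```python
# Left track, siding, and right track, each held as a list whose last
# element is the car nearest the junction.

def move(state, src, dst, n=1):
    """Move the n cars nearest the junction from track src to track dst."""
    state = {k: list(v) for k, v in state.items()}   # leave the input intact
    cars, state[src] = state[src][-n:], state[src][:-n]
    state[dst] += cars
    return state

s = {"left": ["A", "B", "C"], "siding": [], "right": []}
s = move(s, "left", "siding", 1)   # park C on the siding
s = move(s, "left", "right", 2)    # run A and B straight through
s = move(s, "siding", "right", 1)  # C rejoins at the rear
print(s["right"])                  # ['A', 'B', 'C']
```

A "while-loop" description of a general algorithm would repeat such moves until the left track is empty, which is the form participants tended to abduce.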
Cognitive scientists treat verification as a computation in which descriptions that match the relevant situation are true, but otherwise false. The claim is controversial: the logician Gödel and the physicist Penrose have argued that human verifications are not computable. In contrast, the theory of mental models treats verification as computable, but treats the two truth values of standard logics, true and false, as insufficient. Three online experiments (n = 208) examined participants' verifications of disjunctive assertions about a location of an individual or a journey, such as: 'You arrived at Exeter or Perth'. The results showed that their verifications depended on observation of a match with one of the locations but also on the status of other locations (Experiment 1). Likewise, when participants reached one destination and the alternative one was impossible, their use of the counterfactual truth value, 'true, and it could not have been false', increased (Experiment 2). And when they reached one destination and the only alternative one was possible, they used the truth value 'true, but it could have been false', whereas when the alternative one was impossible, they used the truth value 'true, and it could not have been false' (Experiment 3). These truth values, and those for falsity, embody counterfactuals. We implemented a computer program that constructs models of disjunctions, represents possible destinations, and verifies the disjunctions using the truth values in our experiments. Whether an awareness of a verification's outcome is computable remains an open question.
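A verification procedure with counterfactual truth values of the sort the abstract describes can be sketched as follows. The truth-value labels here are our own glosses rather than the paper's exact terms, and the code is an illustration, not the authors' program:

```python
def verify(disjuncts, observed, alternative_possible):
    """Verify 'You arrived at X or Y' against an observed arrival.
    Beyond plain truth and falsity, the value records whether the
    outcome could have been otherwise — a counterfactual component."""
    if observed not in disjuncts:
        return "false"
    if alternative_possible:
        return "true, but it could have been false"
    return "true, and it could not have been false"

print(verify({"Exeter", "Perth"}, "Exeter", alternative_possible=True))
print(verify({"Exeter", "Perth"}, "Exeter", alternative_possible=False))
print(verify({"Exeter", "Perth"}, "London", alternative_possible=True))
```

The point of the sketch is that verification remains computable even once the status of the unobserved alternative enters into the truth value.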
Causal reasoning with mental models Khemlani, Sangeet S; Barbey, Aron K; Johnson-Laird, Philip N
Frontiers in human neuroscience,
10/2014, Volume: 8
Journal Article
Peer-reviewed
Open access
This paper outlines the model-based theory of causal reasoning. It postulates that the core meanings of causal assertions are deterministic and refer to temporally-ordered sets of possibilities: A causes B to occur means that given A, B occurs, whereas A enables B to occur means that given A, it is possible for B to occur. The paper shows how mental models represent such assertions, and how these models underlie deductive, inductive, and abductive reasoning yielding explanations. It reviews evidence both to corroborate the theory and to account for phenomena sometimes taken to be incompatible with it. Finally, it reviews neuroscience evidence indicating that mental models for causal inference are implemented within lateral prefrontal cortex.
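The contrast between 'causes' and 'enables' as sets of temporally ordered possibilities can be sketched directly; each tuple is one possibility, with A's occurrence preceding B's. This is an illustration of the stated core meanings, not the paper's implementation:

```python
# 'A causes B': given A, B occurs — so A without B is impossible.
CAUSES = {("A", "B"), ("not-A", "B"), ("not-A", "not-B")}

# 'A enables B': given A, B is possible — so B without A is impossible.
ENABLES = {("A", "B"), ("A", "not-B"), ("not-A", "not-B")}

def consistent(assertion_models, observation):
    """An observation refutes a causal assertion unless it is one of
    the assertion's possibilities."""
    return observation in assertion_models

print(consistent(CAUSES, ("A", "not-B")))    # False: A without B refutes 'causes'
print(consistent(ENABLES, ("A", "not-B")))   # True: 'enables' allows it
```

The single differing possibility is what makes the two assertions deterministic yet distinct, and it drives the theory's predictions about which observations count as refutations.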