The Unreliability of Naive Introspection
Schwitzgebel, Eric
The Philosophical Review, 2008-04-01, Volume 117, Issue 2
Journal Article
Peer-reviewed
Open access
We are prone to gross error, even in favorable circumstances of extended reflection, about our own ongoing conscious experience, our current phenomenology. Even in this apparently privileged domain, our self-knowledge is faulty and untrustworthy. We are not simply fallible at the margins but broadly inept. Examples highlighted in this essay include: emotional experience (for example, is it entirely bodily; does joy have a common, distinctive phenomenological core?), peripheral vision (how broad and stable is the region of visual clarity?), and the phenomenology of thought (does it have a distinctive phenomenology, beyond just imagery and feelings?). Cartesian skeptical scenarios undermine knowledge of ongoing conscious experience as well as knowledge of the outside world. Infallible judgments about ongoing mental states are simply banal cases of self-fulfillment. Philosophical foundationalism supposing that we infer an external world from secure knowledge of our own consciousness is almost exactly backward.
This paper describes and defends in detail a novel account of belief, an account inspired by Ryle's dispositional characterization of belief, but emphasizing irreducibly phenomenal and cognitive dispositions as well as behavioral dispositions. Potential externalist and functionalist objections are considered, as well as concerns motivated by the inevitably ceteris paribus nature of the relevant dispositional attributions. It is argued that a dispositional account of belief is particularly well-suited to handle what might be called "in-between" cases of believing: cases in which it is neither quite right to describe a person as having a particular belief nor quite right to describe her as lacking it.
We examined the effects of framing and order of presentation on professional philosophers’ judgments about a moral puzzle case (the “trolley problem”) and a version of the Tversky & Kahneman “Asian disease” scenario. Professional philosophers exhibited substantial framing effects and order effects, and were no less subject to such effects than was a comparison group of non-philosopher academic participants. Framing and order effects were not reduced by a forced delay during which participants were encouraged to consider “different variants of the scenario or different ways of describing the case”. Nor were framing and order effects lower among participants reporting familiarity with the trolley problem or with loss-aversion framing effects, nor among those reporting having had a stable opinion on the issues before participating in the experiment, nor among those reporting expertise on the very issues in question. Thus, for these scenario types, neither framing effects nor order effects appear to be reduced even by high levels of academic expertise.
This article defends the existence of borderline consciousness. In borderline consciousness, conscious experience is neither determinately present nor determinately absent, but rather somewhere between. The argument in brief is this. In considering what types of systems are conscious, we face a quadrilemma. Either nothing is conscious, or everything is conscious, or there’s a sharp boundary across the apparent continuum between conscious systems and nonconscious ones, or consciousness is a vague property admitting indeterminate cases. Assuming mainstream naturalism about consciousness, we ought to reject the first three options, which forces us to the fourth, indeterminacy. Standard objections to the existence of borderline consciousness turn on the inconceivability of borderline cases. However, borderline cases are only inconceivable by an inappropriately demanding standard of conceivability. I conclude with some plausible cases and applications.
If you're a materialist, you probably think that rabbits are conscious. And you ought to think that. After all, rabbits are a lot like us, biologically and neurophysiologically. If you're a materialist, you probably also think that conscious experience would be present in a wide range of naturally-evolved alien beings behaviorally very similar to us even if they are physiologically very different. And you ought to think that. After all, to deny it seems insupportable Earthly chauvinism. But a materialist who accepts consciousness in weirdly formed aliens ought also to accept consciousness in spatially distributed group entities. If she then also accepts rabbit consciousness, she ought to accept the possibility of consciousness even in rather dumb group entities. Finally, the United States would seem to be a rather dumb group entity of the relevant sort. If we set aside our morphological prejudices against spatially distributed group entities, we can see that the United States has all the types of properties that materialists tend to regard as characteristic of conscious beings.
People often sincerely assert or judge one thing (for example, that all the races are intellectually equal) while at the same time being disposed to act in a way evidently quite contrary to the espoused attitude (for example, in a way that seems to suggest an implicit assumption of the intellectual superiority of their own race). Such cases should be regarded as ‘in‐between’ cases of believing, in which it's neither quite right to ascribe the belief in question nor quite right to say that the person lacks the belief.
We examined the effects of order of presentation on the moral judgments of professional philosophers and two comparison groups. All groups showed similar‐sized order effects on their judgments about hypothetical moral scenarios targeting the doctrine of the double effect, the action‐omission distinction, and the principle of moral luck. Philosophers' endorsements of related general moral principles were also substantially influenced by the order in which the hypothetical scenarios had previously been presented. Thus, philosophical expertise does not appear to enhance the stability of moral judgments against this presumably unwanted source of bias, even given familiar types of cases and principles.
One relatively neglected challenge in ethical artificial intelligence (AI) design is ensuring that AI systems invite a degree of emotional and moral concern appropriate to their moral standing. Although experts generally agree that current AI chatbots are not sentient to any meaningful degree, these systems can already provoke substantial attachment and sometimes intense emotional responses in users. Furthermore, rapid advances in AI technology could soon create AIs of plausibly debatable sentience and moral standing, at least by some relevant definitions. Morally confusing AI systems create unfortunate ethical dilemmas for the owners and users of those systems, since it is unclear how those systems ethically should be treated. I argue here that, to the extent possible, we should avoid creating AI systems whose sentience or moral standing is unclear and that AI systems should be designed so as to invite appropriate emotional responses in ordinary users.
The draft European Union Artificial Intelligence Act highlights the seriousness with which policymakers and the public have begun to take issues in the ethics of artificial intelligence (AI). Scientists and engineers have been developing increasingly more sophisticated AI systems, with recent breakthroughs especially in large language models such as ChatGPT. Some scientists and engineers argue, or at least hope, that we are on the cusp of creating genuinely sentient AI systems, that is, systems capable of feeling genuine pain and pleasure. Ordinary users are increasingly growing attached to AI companions and might soon do so in much greater numbers. Before long, substantial numbers of people might come to regard some AI systems as deserving of at least some limited rights or moral standing, being targets of ethical concern for their own sake. Given high uncertainty both about the conditions under which an entity can be sentient and about the proper grounds of moral standing, we should expect to enter a period of dispute and confusion about the moral status of our most advanced and socially attractive machines.
Recent advances in AI have led to a resurgence of interest in ethical AI design. One relatively neglected challenge is ensuring that AI systems invite a degree of emotional and moral concern appropriate to their moral standing. Non-sentient machines should be readily recognizable as such. If sentient machines someday become possible, their sentience should also be readily recognizable. This perspective explores the importance of this issue and proposes policy guidelines aimed at avoiding the creation of morally confusing machines.
Carrie Figdor's Pieces of Mind lays the groundwork for critiquing the mind package view of minds. According to the mind package view, psychological properties travel in groups, such that an entity either has the whole mind package or lacks mentality altogether. Implicit commitment to the mind package view makes it seem absurd to attribute some psychological properties (e.g., preferences) to entities that lack other psychological properties (e.g., feelings). Contra the mind package view, we are psychologically continuous with plants, worms, and bacteria: Our patterns of mindedness resemble theirs, even if such entities do not have the whole mind package.