The term “epistemic shifts” refers to a widely recognized phenomenon whereby knowledge ascribers ascribe different epistemic statuses to the same belief under different internal/external conditions. Mainstream theories explaining shifts (including contextualism, contrastivism, and intellectual invariantism) can all be assimilated into a probabilistic framework, according to which the epistemic status of a belief P can be at least partially evaluated in terms of the strength of the link between this belief and its normal truth-maker, namely, a P-corresponding fact; the strength of this link can in turn be probabilistically measured in terms of the knowledge-undermining force of an “abnormal” truth-maker of P, namely, a P-inducing fact which is itself not a P-corresponding fact simpliciter. I will further claim that by accepting my framework, a theorist of shifts has to tolerate a pluralist view of the etiology of shifts at the philosophical level, and that mainstream theories explaining shifts are flawed in the sense that each exaggerates one single factor underpinning shifts while ignoring the others.
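One way to make this probabilistic framework concrete is the following minimal formalization sketch (my own illustration, not notation from the paper; the symbols C for a P-corresponding fact and I for a P-inducing fact are assumptions):

$$
S(P) \;=\; \Pr(C \mid I) \;=\; 1 - \Pr(\lnot C \mid I)
$$

Here S(P) measures the strength of the link between the belief P and its normal truth-maker, and Pr(¬C | I), the probability that a P-inducing fact fails to be a P-corresponding fact, quantifies the knowledge-undermining force of an abnormal truth-maker.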
The EPR effect of 41 renal tumors collected from clinical patients was analyzed via a perfusion strategy, correlating the EPR effect in human tumors with that in animal models and confirming that more than 87% of the examined renal tumors possess a considerable EPR effect, which nevertheless showed significant diversity and heterogeneity across patients.
•An ex vivo perfusion model was developed for real-time investigation of the EPR effect in human renal tumors via X-ray computed tomography (CT).
•The EPR in human solid tumors was positively correlated with that in animal models.
•Considerable EPR effect was observed in more than 87% of human renal tumors, which showed significant diversity and heterogeneity.
The enhanced permeability and retention (EPR) effect in human solid tumors is being increasingly questioned due to the failure of many nanomedicines in clinical translation. Herein, we developed an ex vivo perfusion model for real-time investigation of the EPR effect in human renal tumors via X-ray computed tomography (CT), demonstrating the EPR effect in human solid tumors and correlating it with the EPR effect in animal models. Unexpectedly, more than 87% of human renal tumors displayed a considerable EPR effect, which nevertheless showed significant diversity and heterogeneity across patients. For the first time, we found that the EPR effect in renal tumors was positively correlated with tumor size, and that tumors from male patients exhibited a significantly higher EPR effect. This ex vivo model provides an efficient strategy for investigating the EPR effect in human tumors. Our results may provide a theoretical basis for the development of anticancer nanomedicines in the future.
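To illustrate the kind of size-correlation analysis reported above, here is a minimal sketch with invented placeholder values; the variable names tumor_size_mm and epr_signal, and all numbers, are hypothetical and not taken from the study:

```python
# Minimal sketch: testing whether a CT-based EPR readout correlates with tumor size.
# All data below are invented placeholders, not measurements from the study.
from scipy.stats import pearsonr

tumor_size_mm = [22, 35, 41, 48, 55, 63, 70, 82]       # hypothetical tumor diameters
epr_signal = [1.1, 1.4, 1.3, 1.9, 2.2, 2.1, 2.6, 3.0]  # hypothetical EPR readouts

r, p_value = pearsonr(tumor_size_mm, epr_signal)
print(f"Pearson r = {r:.2f}, p = {p_value:.4f}")        # positive r suggests size-dependent EPR
```

A positive, significant r in such a test would correspond to the reported size dependence of the EPR effect.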
The “Grotto-Heavens and Blissful Lands” (dongtian and fudi, 洞天福地) is a unique concept of sacred space in China and East Asia more broadly, combining beautiful natural scenery, rich historical heritage, and diverse cultural heritage. This paper seeks to explain Mount Jingfu’s (jingfu shan, 靜福山) aesthetic representations. The results show that the landscape’s physical environment projects the spatio-temporal system and the concept of the universe found in Daoist aesthetic ideals. With the spatial evolution of divine immortals’ abodes from imagination to reality, people’s yearning for divine cave palaces was transformed into a connection with, and expression of, those palaces through the exploration of spatial interests and aesthetic trends, which were then integrated into the secular life of thousands of households through living religious rituals. Preserved by local religious believers, these ritual activities incorporated geographic, familial, and divine interactions, and were characterised by essential social aesthetics. By exploring a typical case from the Lingnan Region (lingnan, 嶺南, an old term for South China), this paper aims to elucidate the significance of the Grotto-Heavens and Blissful Lands as living heritage in contemporary society across multiple dimensions, and to provide a theoretical basis for the protection of the system.
Although atherosclerosis has been widely investigated at the carotid artery bifurcation, there is a lack of morphometric and hemodynamic data at different stages of the disease. The purpose of this study was to determine the lesion difference in patients with carotid artery disease compared with healthy control subjects. The three-dimensional (3D) geometry of the carotid artery bifurcation was reconstructed from computed tomography angiography (CTA) images of Chinese control subjects (n = 30) and patients with carotid artery disease (n = 30). We defined two novel vector angles (i.e., angles 1 and 2) that were tangential to the reconstructed contour of the 3D vessel. The best-fit diameter was computed along the internal carotid artery (ICA) centerline. Hemodynamic analysis was performed at various bifurcations. Patients with stenotic vessels have larger angles 1 and 2 (151 ± 11° and 42 ± 20°) and smaller external carotid artery (ECA) diameters (4.6 ± 0.85 mm) compared with control subjects (144 ± 13° and 36 ± 16°, 5.2 ± 0.57 mm), although there is no significant difference in the common carotid artery (CCA) (7.1 ± 1.2 vs. 7.5 ± 1.0 mm, P = 0.18). In particular, all patients with carotid artery disease have a stenosis at the proximal ICA (including both the sinus and carina regions), while 20% of patients have stenosis at the middle ICA and 20% have stenosis extending over the entire cervical ICA. Morphometric and hemodynamic analyses suggest that atherosclerotic plaques initiate at both the sinus and carina regions of the ICA and progress downstream.
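For readers unfamiliar with how such vector angles are typically computed, the sketch below shows one standard way to obtain the angle between two 3D tangent vectors; this is my own illustration under the assumption that angles 1 and 2 reduce to an arccosine of a dot product, not the authors' code:

```python
# Minimal sketch: angle between two 3D tangent directions, e.g. vectors fitted
# along two branches of a reconstructed carotid bifurcation (hypothetical inputs).
import numpy as np

def vector_angle_deg(u, v):
    """Return the angle between vectors u and v in degrees."""
    u, v = np.asarray(u, dtype=float), np.asarray(v, dtype=float)
    cos_theta = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))  # clip guards rounding

# Hypothetical tangent directions near the bifurcation apex:
print(vector_angle_deg([0, 0, 1], [0.45, 0, 0.89]))  # ~26.8 degrees
```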
AI has a long tradition of borrowing insights from psychology. There have also been calls to embrace ontogenetic elements in AI, since ontogenetically earlier-developing subsystems appear to be easier targets for computational modeling. But due to the fundamental difference between natural organisms and digital computers at the hardware level, this analogy does not always hold. For instance, Carey's account of the ontogenetic development of cognitive mechanisms (Carey, The Origin of Concepts, Oxford University Press, Oxford, 2009a; Carey, JP 106:220–254, 2009b) cannot be smoothly mapped onto an AI context, although many of her psychological/philosophical insights, especially the indispensability of a quasi-phenomenological interface for manipulating numerical concepts, can still be retained.
It is difficult to form a coherent picture of Wang Chong's Lunheng. Some of Lunheng's chapters plainly show Wang's hostility to a large part of folklore (including the social institutions based on it) and to traditional philosophical texts. In some other chapters, however, Wang appears to be more sympathetic to the social institutions related to folk religious beliefs. Esther Sunkyung Klein & Colin Klein attempt to explain this prima facie inconsistency in terms of 'piecemeal non-reductionism', which roughly means that Wang would take any testimonial belief for granted until he could find a defeater of it. But this explanation merely depicts Wang as a defeater-seeker rather than as a thinker looking for philosophical grounds for his claims in a more positive manner. In contrast, in this paper I attribute the following epistemological thesis to Wang: a testimonial belief taken from the classics or folklore will be judged unjustified if the knowledge attributor finds a non-negligible defeater of it, and such an attributor will be more sympathetic to the target belief if it can be at least prima facie justified in the light of analogical reasoning.
Meta-philosophically speaking, the philosophy of artificial intelligence (AI) is intended not only to explore the theoretical possibility of building “thinking machines,” but also to reveal the philosophical implications of specific AI approaches. Wittgenstein’s comments on the analytic/empirical dichotomy may offer inspiration for AI in the second sense. According to his “river metaphor” in On Certainty, the analytic/empirical boundary should be delimited in a way that is sensitive to specific contexts of practical reasoning. His proposal seems to suggest that any cognitive-modeling project needs to render the system context-sensitive by avoiding the representation of large amounts of truisms in its cognitive processes; otherwise, neither representational compactness nor computational efficiency can be achieved. In this article, different AI approaches (such as the Common Sense Law of Inertia approach, the Bayesian approach, and the connectionist approach) will be critically evaluated against the aforementioned Wittgensteinian criteria, followed by the author’s own constructive suggestion on what AI should try to do in the near future.
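To give a flavor of the “truism representation” issue at stake in the Common Sense Law of Inertia approach, consider a standard successor-state axiom from the situation-calculus literature (my own illustration, not an example taken from the article):

$$
\mathit{Holds}(f, \mathit{do}(a,s)) \;\leftrightarrow\; \gamma_f^{+}(a,s) \,\lor\, \bigl(\mathit{Holds}(f,s) \land \lnot \gamma_f^{-}(a,s)\bigr)
$$

The axiom compactly encodes inertia: a fluent f holds after action a either because a makes it true (γ⁺) or because it already held and a does not make it false (γ⁻). Whether such axioms can avoid representing vast numbers of truisms is exactly the kind of question the Wittgensteinian criteria above bear on.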
As many philosophers agree, the frame problem concerns how an agent may efficiently filter out irrelevant information in the process of problem-solving. Hence, solving this problem hinges on properly handling semantic relevance in cognitive modeling, the area of cognitive science that deals with simulating human cognitive processes in a computerized model. By "semantic relevance", we mean certain inferential relations among acquired beliefs which may facilitate information retrieval and practical reasoning under certain epistemic constraints, e.g., the insufficiency of knowledge, the limitation of the time budget, etc. However, traditional approaches to relevance (for example, relevance logic, the Bayesian approach, and Description Logic) have failed to do justice to the foregoing constraints, and in this sense they are not proper tools for solving the frame problem/relevance problem. As we argue in this paper, the Non-Axiomatic Reasoning System (NARS) can handle the frame problem in a more proper manner, because the resulting solution takes epistemic constraints on cognition seriously as a fundamental theoretical principle.
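As a concrete glimpse of how NARS represents beliefs under insufficient knowledge, here is a minimal sketch based on the published NARS truth-value definitions, frequency f = w+/w and confidence c = w/(w + k); the function names and example numbers are my own:

```python
# Minimal sketch of NARS-style experience-grounded truth values.
# A belief carries w_plus units of positive evidence out of w total;
# K is the "evidential horizon" constant (commonly 1).
K = 1.0

def truth_value(w_plus, w):
    """Return (frequency, confidence) for the given evidence counts."""
    frequency = w_plus / w        # proportion of positive evidence
    confidence = w / (w + K)      # grows with total evidence, never reaches 1
    return frequency, confidence

def revise(w_plus1, w1, w_plus2, w2):
    """Pool evidence from two independent sources (the NARS revision rule)."""
    return truth_value(w_plus1 + w_plus2, w1 + w2)

print(truth_value(3, 4))    # (0.75, 0.8): mostly positive evidence, moderate confidence
print(revise(3, 4, 1, 2))   # (~0.67, ~0.86): pooled evidence raises confidence
```

That confidence never reaches 1 mirrors the "insufficiency of knowledge" constraint the abstract emphasizes.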
Wittgenstein is widely viewed as a potential critic of a key philosophical assumption of the Strong Artificial Intelligence (AI) thesis, namely, that it is in principle possible to build a programmed machine which can achieve real intelligence. Stuart Shanker has provided the most systematic reconstruction of the Wittgensteinian argument against AI, building on Wittgenstein’s own statements, the “rule-following” feature of language-games, and the putative alliance between AI and psychologism. This article will attempt to refute this reconstruction and its constituent arguments, thereby paving the way for a new and amicable rather than agonistic conception of the Wittgensteinian position on AI.
According to the standard interpretation of Gettier cases, they pose a significant challenge to the tradition of defining knowledge as justified true belief (hereafter “JTB”). This position naturally assumes that the target beliefs involved in typical Gettier cases are all JTBs, namely, unified beliefs which are simultaneously justified and true. But I do not think this is true. Rather, every target belief in typical Gettier cases should be cashed out as one or more beliefs, none of which is a genuine JTB. In short, there is no JTB in Gettier’s JTB-hostile vignettes at all. Hence, whether or not the JTB account of knowledge is correct, Gettier cases are not really relevant to it. In addition, although I agree with Mizrahi’s (Logos Episteme 7(1):31–44, 2016) general observation that the face values of target beliefs cannot be taken for granted in all Gettier cases, I make the further claim that there is no across-the-board methodology for disambiguating the target beliefs in all Gettier cases.