• Interoception refers to the signalling and perception of internal bodily sensations.
• We validate a three-dimensional construct of interoception.
• This comprises: interoceptive accuracy, sensibility and awareness (metacognition).
• These interoceptive dimensions represent dissociable interoceptive processes.
• Interoceptive accuracy serves as the core (central) construct.
Interoception refers to the sensing of internal bodily changes. Interoception interacts with cognition and emotion, making measurement of individual differences in interoceptive ability broadly relevant to neuropsychology. However, inconsistency in how interoception is defined and quantified motivates a three-dimensional model. Here, we provide empirical support for dissociation between the dimensions of: (1) interoceptive accuracy (performance on objective behavioural tests of heartbeat detection), (2) interoceptive sensibility (self-evaluated assessment of subjective interoception, gauged using interviews/questionnaires) and (3) interoceptive awareness (metacognitive awareness of interoceptive accuracy, e.g. confidence-accuracy correspondence). In a normative sample (N=80), all three dimensions were distinct and dissociable. Interoceptive accuracy was only partly predicted by interoceptive awareness and interoceptive sensibility. Significant correspondence between dimensions emerged only within the sub-group of individuals with the greatest interoceptive accuracy. These findings set the context for defining how the relative balance of accuracy, sensibility and awareness dimensions explains the cognitive, emotional and clinical associations of interoceptive ability.
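The accuracy and awareness dimensions lend themselves to simple operationalisation. As a minimal sketch, assuming a Schandry-style heartbeat-counting task (the function names and data below are illustrative, not the study's actual scoring code):

```python
import numpy as np

def heartbeat_counting_accuracy(recorded, counted):
    """Schandry-style interoceptive accuracy:
    1 - mean(|recorded - counted| / recorded) across trials.
    Higher scores indicate better accuracy."""
    recorded = np.asarray(recorded, dtype=float)
    counted = np.asarray(counted, dtype=float)
    return 1.0 - np.mean(np.abs(recorded - counted) / recorded)

def confidence_accuracy_correspondence(trial_accuracy, confidence):
    """Interoceptive awareness proxy: Pearson correlation between
    per-trial accuracy and per-trial confidence ratings."""
    return np.corrcoef(trial_accuracy, confidence)[0, 1]

# Illustrative data: actual heartbeats, reported counts, confidence (0-10)
recorded = np.array([30, 45, 60, 35, 50], dtype=float)
counted = np.array([28, 40, 55, 30, 49], dtype=float)
confidence = np.array([7, 6, 7, 5, 9], dtype=float)

accuracy = heartbeat_counting_accuracy(recorded, counted)
trial_accuracy = 1.0 - np.abs(recorded - counted) / recorded
awareness = confidence_accuracy_correspondence(trial_accuracy, confidence)
```

The point of the three-dimensional model is that these two quantities, together with questionnaire-based sensibility, need not agree for a given individual.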
A key challenge in neuroscience and, in particular, neuroimaging, is to move beyond identification of regional activations toward the characterization of functional circuits underpinning perception, cognition, behavior, and consciousness. Granger causality (G-causality) analysis provides a powerful method for achieving this, by identifying directed functional ("causal") interactions from time-series data. G-causality implements a statistical, predictive notion of causality whereby causes precede, and help predict, their effects. It is defined in both the time and frequency domains, and it allows for the conditioning out of common causal influences. In this paper we explain the theoretical basis and computational implementation of G-causality analysis in neuroimaging and, more broadly, in neurophysiology, noting both its exciting potential and the assumptions that govern its application and interpretation.
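The predictive notion underlying G-causality can be stated compactly: y G-causes x if including y's past reduces the prediction error of x beyond what x's own past achieves. A minimal unconditional, time-domain sketch on toy data (not a substitute for a full toolbox with model-order selection and significance testing):

```python
import numpy as np

def granger_causality(x, y, p=2):
    """Unconditional time-domain Granger causality from y to x, model order p:
    F = ln( var(x_t | own past) / var(x_t | own past + past of y) ).
    F > 0 means the past of y improves prediction of x."""
    n = len(x)
    own = np.column_stack([x[p - k - 1:n - k - 1] for k in range(p)])
    full = np.column_stack([own] + [y[p - k - 1:n - k - 1] for k in range(p)])
    target = x[p:]
    res_own = target - own @ np.linalg.lstsq(own, target, rcond=None)[0]
    res_full = target - full @ np.linalg.lstsq(full, target, rcond=None)[0]
    return np.log(np.var(res_own) / np.var(res_full))

# Toy system in which y drives x but not vice versa
rng = np.random.default_rng(0)
n = 2000
y = rng.standard_normal(n)
x = np.zeros(n)
for t in range(1, n):
    x[t] = 0.5 * x[t - 1] + 0.8 * y[t - 1] + 0.1 * rng.standard_normal()

f_y_to_x = granger_causality(x, y)  # large: y's past predicts x
f_x_to_y = granger_causality(y, x)  # near zero
```

The asymmetry between the two F values recovers the directed interaction built into the simulation.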
A recent measure of 'integrated information', Φ(DM), quantifies the extent to which a system generates more information than the sum of its parts as it transitions between states, possibly reflecting levels of consciousness generated by neural systems. However, Φ(DM) is defined only for discrete Markov systems, which are unusual in biology; as a result, Φ(DM) can rarely be measured in practice. Here, we describe two new measures, Φ(E) and Φ(AR), that overcome these limitations and are easy to apply to time-series data. We use simulations to demonstrate the in-practice applicability of our measures, and to explore their properties. Our results provide new opportunities for examining information integration in real and model systems and carry implications for relations between integrated information, consciousness, and other neurocognitive processes. However, our findings pose challenges for theories that ascribe physical meaning to the measured quantities.
Granger causality is a statistical notion of causal influence based on prediction via vector autoregression. Developed originally in the field of econometrics, it has since found application in a broader arena, particularly in neuroscience. More recently transfer entropy, an information-theoretic measure of time-directed information transfer between jointly dependent processes, has gained traction in a similarly wide field. While it has been recognized that the two concepts must be related, the exact relationship has until now not been formally described. Here we show that for Gaussian variables, Granger causality and transfer entropy are entirely equivalent, thus bridging autoregressive and information-theoretic approaches to data-driven causal inference.
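For Gaussian variables the equivalence takes a simple form: transfer entropy TE(Y→X) equals one half of the Granger causality F(Y→X). The identity can be checked numerically with naive plug-in estimators on toy data (everything below is illustrative):

```python
import numpy as np

def gaussian_entropy(cov):
    """Differential entropy of a Gaussian with covariance matrix `cov`."""
    cov = np.atleast_2d(cov)
    d = cov.shape[0]
    return 0.5 * np.log((2 * np.pi * np.e) ** d * np.linalg.det(cov))

def transfer_entropy(x, y, p=1):
    """TE(y -> x) = H(x_t | x_past) - H(x_t | x_past, y_past), Gaussian estimator."""
    n = len(x)
    xt = x[p:, None]
    xp = np.column_stack([x[p - k - 1:n - k - 1] for k in range(p)])
    yp = np.column_stack([y[p - k - 1:n - k - 1] for k in range(p)])
    H = lambda *cols: gaussian_entropy(np.cov(np.column_stack(cols), rowvar=False))
    return (H(xt, xp) - H(xp)) - (H(xt, xp, yp) - H(xp, yp))

def granger(x, y, p=1):
    """F(y -> x) = ln( var(x_t | x_past) / var(x_t | x_past, y_past) )."""
    n = len(x)
    xt = x[p:]
    xp = np.column_stack([x[p - k - 1:n - k - 1] for k in range(p)])
    xyp = np.column_stack([xp] + [y[p - k - 1:n - k - 1] for k in range(p)])
    resid = lambda Z: xt - Z @ np.linalg.lstsq(Z, xt, rcond=None)[0]
    return np.log(np.var(resid(xp)) / np.var(resid(xyp)))

# Toy coupled system: y drives x
rng = np.random.default_rng(1)
n = 5000
y = rng.standard_normal(n)
x = np.zeros(n)
for t in range(1, n):
    x[t] = 0.5 * x[t - 1] + 0.8 * y[t - 1] + 0.5 * rng.standard_normal()

te = transfer_entropy(x, y)
f = granger(x, y)
# For Gaussians, F = 2 * TE (up to estimator noise)
```

The autoregressive route (residual variances) and the information-theoretic route (Gaussian conditional entropies) give the same answer because, for Gaussians, conditional entropy is a monotone function of residual variance.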
The broad concept of emergence is instrumental in several of the most challenging open scientific questions; yet, few quantitative theories of what constitutes emergent phenomena have been proposed. This article introduces a formal theory of causal emergence in multivariate systems, which studies the relationship between the dynamics of parts of a system and macroscopic features of interest. Our theory provides a quantitative definition of downward causation, and introduces a complementary modality of emergent behaviour, which we refer to as causal decoupling. Moreover, the theory yields practical criteria that can be efficiently calculated in large systems, making our framework applicable in a range of scenarios of practical interest. We illustrate our findings in a number of case studies, including Conway's Game of Life, Reynolds' flocking model, and neural activity as measured by electrocorticography.
Integrated Information Theory (IIT) is a prominent theory of consciousness that has at its centre measures that quantify the extent to which a system generates more information than the sum of its parts. While several candidate measures of integrated information (“Φ”) now exist, little is known about how they compare, especially in terms of their behaviour on non-trivial network models. In this article, we provide clear and intuitive descriptions of six distinct candidate measures. We then explore the properties of each of these measures in simulation on networks consisting of eight interacting nodes, animated with Gaussian linear autoregressive dynamics. We find a striking diversity in the behaviour of these measures: no two measures show consistent agreement across all analyses. A subset of the measures appears to reflect some form of dynamical complexity, in the sense of simultaneous segregation and integration between system components. Our results help guide the operationalisation of IIT and advance the development of measures of integrated information and dynamical complexity that may have more general applicability.
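To make the general idea concrete, here is a deliberately simplified "whole-minus-sum" computation on a two-node Gaussian linear autoregressive system, using time-delayed mutual information. It is in the spirit of the candidate measures but is not any one of the six, which differ precisely in how the parts' contributions are defined and normalised:

```python
import numpy as np

def gaussian_mi(A, B):
    """I(A; B) for jointly Gaussian samples (rows = observations):
    I = 0.5 * ln( det(S_A) * det(S_B) / det(S_AB) )."""
    covdet = lambda M: np.linalg.det(np.atleast_2d(np.cov(M, rowvar=False)))
    return 0.5 * np.log(covdet(A) * covdet(B) / covdet(np.column_stack([A, B])))

# Two-node Gaussian linear AR system: X_t = C @ X_{t-1} + noise
rng = np.random.default_rng(0)
C = np.array([[0.4, 0.3],
              [0.3, 0.4]])
n = 20000
X = np.zeros((n, 2))
for t in range(1, n):
    X[t] = C @ X[t - 1] + rng.standard_normal(2)

# Whole-minus-sum: system-level time-delayed MI minus the parts' own
whole = gaussian_mi(X[1:], X[:-1])
parts = sum(gaussian_mi(X[1:, [i]], X[:-1, [i]]) for i in range(2))
phi_like = whole - parts
```

On this symmetrically cross-coupled system the whole carries more temporal information than the sum of its parts; with other noise or coupling structures the same quantity can go negative, one illustration of why the candidate measures disagree.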
As humanity is becoming increasingly confronted by Earth's finite biophysical limits, there is increasing interest in questions about the stability and equitability of a zero-growth capitalist economy, most notably: if one maintains a positive interest rate for loans, can a zero-growth economy be stable? This question has been explored with a few different macroeconomic models, and both ‘yes’ and ‘no’ answers have been obtained. However, economies can become unstable whether or not there is ongoing underlying growth in productivity with which to sustain growth in output. Here we attempt, for the first time, to assess via a model the relative stability of growth versus no-growth scenarios. The model employed draws from Keen's model of the Minsky financial instability hypothesis. The analysis focuses on dynamics as opposed to equilibrium, and scenarios of growth and no-growth of output (GDP) are obtained by tweaking a productivity growth input parameter. We confirm that, with or without growth, there can be both stable and unstable scenarios. To maintain stability, firms must not change their debt levels or target debt levels too quickly. Further, according to the model, the wage share is higher for zero-growth scenarios, although there are more frequent substantial drops in employment.
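The growth/no-growth comparison can be illustrated in a much-reduced form with a Goodwin-style wage-share/employment model, a precursor of Keen's model without the debt dynamics. All parameter values below are illustrative only; α is the productivity-growth input, set to zero for the no-growth scenario:

```python
import numpy as np

def simulate(alpha, years=100.0, dt=0.01):
    """Goodwin-style sketch: wage share w and employment rate lam.
    dw/dt   = w * (phillips(lam) - alpha)                  # real-wage vs productivity growth
    dlam/dt = lam * ((1 - w)/nu - alpha - beta - delta)    # accumulation-driven employment
    Forward-Euler integration; returns the wage-share trajectory."""
    nu, beta, delta = 3.0, 0.01, 0.05  # capital-output ratio, pop. growth, depreciation
    phillips = lambda lam: 1.0 * lam - 0.94  # illustrative linear Phillips curve
    w, lam = 0.8, 0.95
    ws = []
    for _ in range(int(years / dt)):
        dw = w * (phillips(lam) - alpha)
        dlam = lam * ((1.0 - w) / nu - alpha - beta - delta)
        w, lam = w + dt * dw, lam + dt * dlam
        ws.append(w)
    return np.array(ws)

w_growth = simulate(alpha=0.02)  # positive productivity growth
w_zero = simulate(alpha=0.0)     # zero-growth scenario
```

Even this stripped-down sketch reproduces one qualitative result stated above: the average wage share is higher in the zero-growth scenario, since the equilibrium wage share 1 - ν(α + β + δ) rises as α falls. Keen's full model adds a debt-ratio state driven by firms' investment behaviour, which is where the stability results about debt-adjustment speed arise.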
Granger-Geweke causality (GGC) is a powerful and popular method for identifying directed functional (‘causal’) connectivity in neuroscience. In a recent paper, Stokes and Purdon (2017b) raise several concerns about its use. They make two primary claims: (1) that GGC estimates may be severely biased or of high variance, and (2) that GGC fails to reveal the full structural/causal mechanisms of a system. However, these claims rest, respectively, on an incomplete evaluation of the literature, and a misconception about what GGC can be said to measure. Here we explain how existing approaches resolve the first issue, and discuss the frequently-misunderstood distinction between functional and effective neural connectivity which underlies Stokes and Purdon's second claim.
To truly eliminate Cartesian ghosts from the science of consciousness, we must describe consciousness as an aspect of the physical. Integrated Information Theory states that consciousness arises from intrinsic information generated by dynamical systems; however, existing formulations of this theory are not applicable to standard models of fundamental physical entities. Modern physics has shown that fields are fundamental entities, and in particular that the electromagnetic field is fundamental. Here I hypothesize that consciousness arises from information intrinsic to fundamental fields. This hypothesis unites fundamental physics with what we know empirically about the neuroscience underlying consciousness, and it bypasses the need to consider quantum effects.