A free-energy principle has been proposed recently that accounts for action, perception and learning. This Review looks at some key brain theories in the biological (for example, neural Darwinism) and physical (for example, information theory and optimal control theory) sciences from the free-energy perspective. Crucially, one key theme runs through each of these theories: optimization. Furthermore, if we look closely at what is optimized, the same quantity keeps emerging, namely value (expected reward, expected utility) or its complement, surprise (prediction error, expected cost). This is the quantity that is optimized under the free-energy principle, which suggests that several global brain theories might be unified within a free-energy framework.
This article reviews a free-energy formulation that advances Helmholtz's agenda to find principles of brain function based on conservation laws and neuronal energy. It rests on advances in statistical physics, theoretical biology and machine learning to explain a remarkable range of facts about brain structure and function. We have only scratched the surface of what this formulation offers; for example, it is becoming clear that the Bayesian brain is just one facet of the free-energy principle and that perception is an inevitable consequence of active exchange with the environment. Furthermore, one can easily see how constructs like memory, attention, value, reinforcement and salience might disclose their simple relationships within this framework.
Is self-consciousness necessary for consciousness? The answer is yes. So there you have it: the answer is yes. This was my response to a question I was asked to address in a recent AEON piece (https://aeon.co/essays/consciousness-is-not-a-thing-but-a-process-of-inference). What follows is based upon the notes for that essay, with a special focus on self-organization, self-evidencing and self-modeling. I will try to substantiate my (polemic) answer from the perspective of a physicist. In brief, the argument goes as follows: if we want to talk about creatures, like ourselves, then we have to identify the characteristic behaviors they must exhibit. This is fairly easy to do by noting that living systems return to a set of attracting states time and time again. Mathematically, this implies the existence of a Lyapunov function that turns out to be model evidence (i.e., self-evidence) in Bayesian statistics or surprise (i.e., self-information) in information theory. This means that all biological processes can be construed as performing some form of inference, from evolution through to conscious processing. If this is the case, at what point do we invoke consciousness? The proposal on offer here is that the mind comes into being when self-evidencing has a temporal thickness or counterfactual depth, which grounds inferences about the consequences of action. On this view, consciousness is nothing more than inference about the future; namely, the self-evidencing consequences of what I could do.
The slight perversion of the original title of this piece (The Future of the Bayesian Brain) reflects my attempt to write prospectively about ‘Science and Stories’ over the past 20 years. I will meet this challenge by dealing with the future and then turning to its history. The future of the Bayesian brain (in neuroimaging) is clear: it is the application of dynamic causal modeling to understand how the brain conforms to the free energy principle. In this context, the Bayesian brain is a corollary of the free energy principle, which says that any self-organizing system (like a brain or neuroimaging community) must maximize the evidence for its own existence, which means it must minimize its free energy using a model of its world. Dynamic causal modeling involves finding models of the brain that have the greatest evidence or the lowest free energy. In short, the future of imaging neuroscience is to refine models of the brain to minimize free energy, where the brain refines models of the world to minimize free energy. This endeavor itself minimizes free energy because our community is itself a self-organizing system. I cannot imagine an alternative future that has the same beautiful self-consistency as mine. Having dispensed with the future, we can now focus on the past, which is much more interesting:
This paper describes a free energy principle that tries to explain the ability of biological systems to resist a natural tendency to disorder. It appeals to circular causality of the sort found in synergetic formulations of self-organization (e.g., the slaving principle) and models of coupled dynamical systems, using nonlinear Fokker-Planck equations. Here, circular causality is induced by separating the states of a random dynamical system into external and internal states, where external states are subject to random fluctuations and internal states are not. This reduces the problem to finding some (deterministic) dynamics of the internal states that ensure the system visits a limited number of external states; in other words, the measure of its (random) attracting set, or the Shannon entropy of the external states, is small. We motivate a solution using a principle of least action based on variational free energy (from statistical physics) and establish the conditions under which it is formally equivalent to the information bottleneck method. This approach has proved useful in understanding the functional architecture of the brain. The generality of variational free energy minimisation and corresponding information theoretic formulations may speak to interesting applications beyond the neurosciences; e.g., in molecular or evolutionary biology.
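The bound at the heart of variational free energy can be illustrated in a few lines. Below is a minimal toy sketch (my own illustrative numbers and model, not taken from the paper) showing that, for a discrete generative model p(o, s) and an approximate posterior q(s), the free energy F = E_q[ln q(s) - ln p(o, s)] upper-bounds surprise -ln p(o), with equality exactly when q is the true posterior.

```python
import numpy as np

# Hypothetical discrete generative model (illustrative numbers only):
# a prior over 2 hidden states and a likelihood over 2 observations.
prior = np.array([0.7, 0.3])                 # p(s)
likelihood = np.array([[0.9, 0.2],           # p(o|s): rows = observations,
                       [0.1, 0.8]])          # columns = hidden states

def free_energy(q, o):
    """Variational free energy F = E_q[ln q(s) - ln p(o, s)]."""
    joint = likelihood[o] * prior            # p(o, s) for the observed o
    return float(np.sum(q * (np.log(q) - np.log(joint))))

o = 0                                        # observe outcome 0
surprise = -np.log(likelihood[o] @ prior)    # -ln p(o): the quantity F bounds

# The exact posterior minimises F, at which point F equals surprise.
posterior = likelihood[o] * prior
posterior /= posterior.sum()

assert abs(free_energy(posterior, o) - surprise) < 1e-9   # bound is tight
assert free_energy(np.array([0.5, 0.5]), o) > surprise    # other q: F > surprise
```

The gap between F and surprise is the KL divergence between q(s) and the true posterior p(s|o), which is why minimising F with respect to q performs approximate Bayesian inference.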
Recent advances in data analysis and modeling allow the use of fMRI data to ask not just which brain regions are involved in various cognitive and perceptual tasks, but also how they communicate with each other. Karl Friston examines two different state-of-the-art approaches to modeling brain connectivity using neuroimaging.
Prediction, perception and agency
Friston, Karl
International Journal of Psychophysiology, February 2012, Volume 83, Issue 2
Journal Article
Peer-reviewed
Open access
The articles in this special issue provide a rich and thoughtful perspective on the brain as an inference machine. They illuminate key aspects of the internal or generative models the brain might use for perception. Furthermore, they explore the implications for a sense of agency and the nature of false inference in neuropsychiatric syndromes. In this review, I try to gather together some of the themes that emerge in this special issue and use them to illustrate how far one can take the notion of predictive coding in understanding behaviour and agency.
Waves of prediction
Friston, Karl J
PLoS Biology, October 2019, Volume 17, Issue 10
Journal Article
Peer-reviewed
Open access
Predictive processing (e.g., predictive coding) is a predominant paradigm in cognitive neuroscience. This Primer considers the various levels of commitment neuroscientists have to the neuronal process theories that accompany the principles of predictive processing. Specifically, it reviews and contextualises a recent PLOS Biology study of alpha oscillations and travelling waves. We will see that alpha oscillations emerge naturally under the computational architectures implied by predictive coding, and may tell us something profound about recurrent message passing in brain hierarchies. Specifically, the bidirectional nature of forward and backward waves speaks to opportunities to understand attention and how it nuances bottom-up and top-down influences.
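The recurrent message passing referred to above can be caricatured in a few lines. The sketch below is a deliberately simplified two-level loop of my own devising (not the model in the PLOS Biology study): a higher level sends a top-down prediction, the lower level returns a bottom-up prediction error, and the higher-level representation descends the error gradient until the data are explained away.

```python
# Toy two-level predictive coding loop (illustrative assumptions throughout).
# A higher level holds a representation mu that predicts a sensory sample x
# through a fixed generative weight w; the prediction error drives a
# gradient descent on mu that minimises 0.5 * error**2.
w = 2.0                    # generative (top-down) weight, assumed known
x = 3.0                    # observed sensory sample
mu = 0.0                   # higher-level representation (initial guess)
lr = 0.05                  # step size of the gradient descent

for _ in range(500):
    prediction = w * mu            # top-down (backward) message
    error = x - prediction         # bottom-up (forward) prediction error
    mu += lr * w * error           # gradient step on 0.5 * error**2

assert abs(w * mu - x) < 1e-6      # prediction error has been explained away
```

In richer hierarchical versions, the same alternation of descending predictions and ascending errors repeats at every level, which is the structural motif the travelling-wave results speak to.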
Mean-fields and neural masses
Friston, Karl
PLoS Computational Biology, August 2008, Volume 4, Issue 8
Journal Article
Peer-reviewed
Open access
Rolf challenged us to define and synthesise these perspectives in a coherent and pragmatic way; the response to that challenge is the article in this issue of PLoS Computational Biology by Deco, Jirsa, Robinson, Breakspear, and Friston, which took more than two years to prepare. The applications of these models are essentially twofold: some authors use them to understand the basic principles of neuronal dynamics and implicit computations; for example, understanding dynamics in terms of nonlinear mechanisms such as bifurcations, understanding perceptual categorisation in terms of multistability, or identifying the domains of parameter-space that support commonly observed spatiotemporal patterns of activity.
How rich functionality emerges from the invariant structural architecture of the brain remains a major mystery in neuroscience. Recent applications of network theory and theoretical neuroscience to large-scale brain networks have started to dissolve this mystery. Network analyses suggest that hierarchical modular brain networks are particularly suited to facilitate local (segregated) neuronal operations and the global integration of segregated functions. Although functional networks are constrained by structural connections, context-sensitive integration during cognitive tasks necessarily entails a divergence between structural and functional networks. This degenerate (many-to-one) function-structure mapping is crucial for understanding the nature of brain networks. The emergence of dynamic functional networks from static structural connections calls for a formal (computational) approach to neuronal information processing that may resolve this dialectic between structure and function.