In the idling brain, neuronal circuits transition between periods of sustained firing (UP state) and quiescence (DOWN state), a pattern whose mechanisms remain unclear. Here we analyzed spontaneous cortical population activity from anesthetized rats and found that UP and DOWN durations were highly variable and that population rates showed no significant decay during UP periods. We built a network rate model with excitatory (E) and inhibitory (I) populations exhibiting a novel bistable regime between a quiescent and an inhibition-stabilized state of arbitrarily low rate. Fluctuations triggered state transitions, while adaptation in E cells paradoxically caused a marginal decay of E-rate but a marked decay of I-rate during UP periods, a prediction that we validated experimentally. A spiking network implementation further predicted that DOWN-to-UP transitions must be caused by synchronous high-amplitude events. Our findings provide evidence of bistable cortical networks that exhibit non-rhythmic state transitions when the brain rests.
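The kind of E-I rate model with adaptation described above can be sketched in a few lines. This is a generic Wilson-Cowan-style formulation with threshold-linear units, not the paper's actual model; all parameter values and the noise scheme are illustrative placeholders and are not tuned to the bistable regime.

```python
import numpy as np

def f(x):
    """Threshold-linear (rectified) transfer function."""
    return np.maximum(x, 0.0)

def simulate(T=2000, dt=1.0, seed=0):
    """Euler integration of a noise-driven E-I rate model with slow
    spike-frequency adaptation on the E population (illustrative only)."""
    rng = np.random.default_rng(seed)
    rE = rI = a = 0.0
    tauE, tauI, tauA = 10.0, 2.0, 500.0    # time constants (ms)
    wEE, wEI, wIE, wII = 5.0, 4.0, 6.0, 1.0  # placeholder coupling weights
    beta, sigma = 0.5, 2.0                  # adaptation gain, noise amplitude
    trace = np.empty((T, 2))
    for t in range(T):
        noise = sigma * rng.standard_normal() * np.sqrt(dt / tauE)
        rE += dt / tauE * (-rE + f(wEE * rE - wEI * rI - a + noise))
        rI += dt / tauI * (-rI + f(wIE * rE - wII * rI))
        a  += dt / tauA * (-a + beta * rE)   # adaptation acts on E cells only
        trace[t] = rE, rI
    return trace
```

With adaptation restricted to E cells, the adaptation variable enters the E-rate equation but drives I only indirectly through the E-to-I coupling, which is the structure behind the paper's prediction of a marked I-rate decay during UP periods.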
A prevalent model holds that sharp-wave ripples (SWRs) arise 'spontaneously' in CA3 and propagate recent memory traces outward to the neocortex to facilitate memory consolidation there. Using voltage and extracellular glutamate transient recordings over widespread regions of the dorsal neocortex of mice, in relation to CA1 multiunit activity (MUA) and SWRs, we find that the largest SWR-related modulation occurs in retrosplenial cortex. However, contrary to the unidirectional hypothesis, neocortical activation exhibited a continuum of activation timings relative to SWRs, varying from leading to lagging. Thus, contrary to the model in which SWRs arise 'spontaneously' in the hippocampus, neocortical activation often precedes SWRs and may constitute a trigger event in which neocortical information seeds associative reactivation of hippocampal 'indices'. This timing continuum is consistent with dynamics in which older, more consolidated memories may in fact initiate the hippocampal-neocortical dialog, whereas reactivation of newer memories may be initiated predominantly in the hippocampus.
The activity of neural populations is determined not only by sensory inputs but also by internally generated patterns. During quiet wakefulness, the brain produces spontaneous firing events that can spread over large areas of cortex and have been suggested to underlie processes such as memory recall and consolidation. Here we demonstrate a different role for spontaneous activity in sensory cortex: gating of sensory inputs. We show that population activity in rat auditory cortex is composed of transient 50-100 ms packets of spiking activity that occur irregularly during silence and sustained tone stimuli, but reliably at tone onset. Population activity within these packets had broadly consistent spatiotemporal structure, but both the rate and the precise relative timing of action potentials varied between stimuli. Packet frequency varied with cortical state, with desynchronized-state activity consistent with a superposition of multiple overlapping packets. We suggest that such packets reflect the sporadic opening of a "gate" that allows auditory cortex to broadcast a representation of external sounds to other brain regions.
Being able to correctly predict the future and to adjust one's actions accordingly offers a great survival advantage; indeed, this may be the main reason why brains evolved. Consciousness, the most mysterious feature of brain activity, also seems to be related to predicting the future and detecting surprise: a mismatch between the actual and the predicted situation. Similarly, at the single-neuron level, predicting future activity and adapting synaptic inputs accordingly was shown to be the best strategy for maximizing a neuron's metabolic energy. Following these ideas, here we examined whether surprise minimization by single neurons could be a basis for consciousness. First, we showed in simulations that as a neural network learns a new task, the surprise within neurons (defined as the difference between actual and expected activity) changes similarly to the consciousness of skills in humans. Moreover, implementing adaptation of neuronal activity to minimize surprise at fast time scales (tens of milliseconds) improved network performance. This improvement likely arises because adapting activity based on the internal predictive model allows each neuron to make a more "educated" response to stimuli. Based on these results, we propose that predictive neuronal adaptation to minimize surprise could be a basic building block of conscious processing. Such adaptation allows neurons to exchange information about their own predictions and thus to build more complex predictive models. To be precise, we provide an equation quantifying consciousness as the amount of surprise minus the size of the adaptation error. Since neuronal adaptation can be studied experimentally, our hypothesis can be tested directly. Specifically, we postulate that any substance affecting neuronal adaptation will also affect consciousness.
Interestingly, our predictive adaptation hypothesis is consistent with multiple ideas presented previously in diverse theories of consciousness, such as global workspace theory, integrated information theory, attention schema theory, and the predictive processing framework. In summary, we present theoretical, computational, and experimental support for the hypothesis that neuronal adaptation is a possible biological mechanism of conscious processing, and we discuss how this could provide a step toward a unified theory of consciousness.
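The verbal equation above, consciousness as the amount of surprise minus the size of the adaptation error, can be written out directly. The abstract states the relation only in words, so the function names, the use of absolute values, and the reading of "adaptation error" as the gap between adapted and expected activity are all our assumptions.

```python
import numpy as np

def surprise(actual, expected):
    """Per-neuron surprise: difference between actual and expected activity
    (absolute value is our assumption; the abstract defines it verbally)."""
    return np.abs(np.asarray(actual, float) - np.asarray(expected, float))

def consciousness_index(actual, expected, adapted):
    """Verbal equation from the abstract: total surprise minus the size of
    the adaptation error, summed over neurons (interpretation assumed)."""
    s = surprise(actual, expected)
    adaptation_error = np.abs(np.asarray(adapted, float) -
                              np.asarray(expected, float))
    return float(np.sum(s - adaptation_error))
```

Under this reading, perfect adaptation (adapted activity equal to expected activity) leaves the full surprise as the index, while failed adaptation (adapted activity stuck at the actual value) cancels it to zero.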
Foreign body airway obstruction (FBAO), commonly known as choking, is an extremely dangerous event. The European Resuscitation Council recommends that back blows and abdominal thrusts be performed to relieve FBAO in conscious adults. The evidence reviewed here suggests that applying a prone or head-down position increases the effectiveness of these standard approaches to relieving obstruction, owing to the assistance of gravity.
Understanding how the brain learns may lead to machines with human-like intellectual capacities. It was previously proposed that the brain may operate on the principle of predictive coding. However, it is still not well understood how a predictive system could be implemented in the brain. Here we demonstrate that the ability of a single neuron to predict its future activity may provide an effective learning mechanism. Interestingly, this predictive learning rule can be derived from a metabolic principle, whereby neurons need to minimize their own synaptic activity (cost) while maximizing their impact on local blood supply by recruiting other neurons. We show how this mathematically derived learning rule can provide a theoretical connection between diverse types of brain-inspired algorithms, thus offering a step towards the development of a general theory of neuronal learning. We tested this predictive learning rule in neural network simulations and in data recorded from awake animals. Our results also suggest that spontaneous brain activity provides 'training data' for neurons to learn to predict cortical dynamics. Thus, the ability of a single neuron to minimize surprise, that is, the difference between actual and expected activity, could be an important missing element to understand computation in the brain.
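The core idea, a neuron shrinking the gap between its actual and its expected activity, can be illustrated with a toy update rule. This sketch is not the rule derived from the metabolic principle in the paper; the linear activity, the learning rates, and the update form are all illustrative assumptions.

```python
import numpy as np

def predictive_update(w, x, pred, lr=0.01, lr_pred=0.1):
    """One step of a toy surprise-minimizing rule: the neuron compares its
    actual activity to its own running prediction, then nudges both its
    weights and its prediction to close the gap (illustration only)."""
    actual = float(np.dot(w, x))       # linear activity of the neuron
    surprise = actual - pred           # actual minus expected activity
    w = w - lr * surprise * x          # pull activity toward the prediction
    pred = pred + lr_pred * surprise   # pull the prediction toward activity
    return w, pred, surprise
```

For a fixed input, each step multiplies the surprise by a factor below one, so the neuron's "expected activity" and its actual activity converge, which is the sense in which the rule minimizes surprise.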
Epileptogenesis is a complex and not well understood phenomenon. Here, we explore the hypothesis that epileptogenesis could be "hijacking" normal memory processes, and how this hypothesis may provide new directions for epilepsy treatment. First, we review similarities between the hypersynchronous circuits observed in epilepsy and the memory consolidation processes involved in strengthening neuronal connections. Next, we describe the kindling model of seizures and its relation to the long-term potentiation model of synaptic plasticity. We also examine how the strengthening of epileptic circuits is facilitated during physiological slow wave sleep, much as episodic memories are. Furthermore, we present studies showing that specific memories can directly trigger reflex seizures. The neuronal hypersynchrony seen in early stages of Alzheimer's disease, and the use of anti-epileptic drugs to improve the cognitive symptoms of this disease, also suggest a connection between memory systems and epilepsy. Given the commonalities between memory processes and epilepsy, we propose that therapies for memory disorders might provide new avenues for the treatment of epileptic patients.
Backpropagation (BP) has been used to train neural networks for many years, allowing them to solve a wide variety of tasks such as image classification, speech recognition, and reinforcement learning. However, the biological plausibility of BP as a mechanism of neural learning has been questioned. Equilibrium Propagation (EP) has been proposed as a more biologically plausible alternative and achieves comparable accuracy on the CIFAR-10 image classification task. This study proposes the first EP-based reinforcement learning architecture: an Actor-Critic architecture with the actor network trained by EP. We show that this model can solve the basic control tasks often used as benchmarks for BP-based models. Interestingly, our trained model demonstrates more consistent high-reward behavior than a comparable model trained exclusively by BP.
Even in the absence of sensory stimulation, the neocortex shows complex spontaneous activity patterns, often consisting of alternating "DOWN" states of generalized neural silence and "UP" states of massive, persistent network activity. To investigate how this spontaneous activity propagates through neuronal assemblies in vivo, we simultaneously recorded populations of 50-200 cortical neurons in layer V of anesthetized and awake rats. Each neuron displayed a virtually unique spike pattern during UP states, with diversity seen amongst both putative pyramidal cells and interneurons, reflecting a complex but stereotypically organized sequential spread of activation through local cortical networks. Spike timing was most precise during the first ≈100 ms after UP state onset, and precision decayed as UP states progressed. A subset of UP states propagated as traveling waves, but waves passing a given point in either direction initiated similar local sequences, suggesting local networks as the substrate of sequential firing patterns. A search for repeating motifs indicated that their occurrence and structure were predictable from neurons' individual latencies to UP state onset. We suggest that these stereotyped patterns arise from the interplay of intrinsic cellular conductances and local circuit properties.
Since humans still outperform artificial neural networks on many tasks, drawing inspiration from the brain may help to improve current machine learning algorithms. Contrastive Hebbian learning (CHL) and equilibrium propagation (EP) are biologically plausible algorithms that update weights using only local information (without explicitly calculating gradients) and still achieve performance comparable to conventional backpropagation. In this study, we augmented CHL and EP with Adjusted Adaptation, inspired by the adaptation effect observed in neurons, in which a neuron's response to a given stimulus is adjusted after a short time. We added this adaptation feature to multilayer perceptrons and convolutional neural networks trained on MNIST and CIFAR-10. Surprisingly, adaptation improved the performance of these networks. We discuss the biological inspiration for this idea and investigate why neuronal adaptation could be an important brain mechanism for improving the stability and accuracy of learning.
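The adaptation effect that motivates this work, a neuron's response to a sustained stimulus being reduced after a short time, can be sketched as a unit whose output is diminished by a slow state tracking its recent activity. This is a generic adaptation model for illustration, not the paper's Adjusted Adaptation rule; the rate constant and update form are assumptions.

```python
class AdaptiveUnit:
    """Toy neuronal adaptation: repeated presentations of the same input
    produce progressively weaker responses, via a slow state variable that
    is subtracted from the unit's drive (illustrative sketch only)."""

    def __init__(self, alpha=0.2, gain=1.0):
        self.alpha = alpha   # adaptation rate (how fast the state tracks output)
        self.gain = gain
        self.state = 0.0     # slow variable tracking recent output

    def respond(self, x):
        out = max(self.gain * x - self.state, 0.0)  # rectified drive minus adaptation
        self.state += self.alpha * (out - self.state)
        return out
```

Presenting the same stimulus repeatedly yields a monotonically decreasing response, mirroring the short-timescale adjustment the abstract describes.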