Neural responses are modulated by brain state, which varies with arousal, attention, and behavior. In mice, running and whisking desynchronize the cortex and enhance sensory responses, but the quiescent periods between bouts of exploratory behaviors have not been well studied. We found that these periods of “quiet wakefulness” were characterized by state fluctuations on a timescale of 1–2 s. Small fluctuations in pupil diameter tracked these state transitions in multiple cortical areas. During dilation, the intracellular membrane potential was desynchronized, sensory responses were enhanced, and population activity was less correlated. In contrast, constriction was characterized by increased low-frequency oscillations and higher ensemble correlations. Specific subtypes of cortical interneurons were differentially activated during dilation and constriction, consistent with their participation in the observed state changes. Pupillometry has been used to index attention and mental effort in humans, but the intracellular dynamics and differences in population activity underlying this phenomenon were previously unknown.
• Cortical state fluctuates during quiet wakefulness in S1 and V1 of awake mice
• Pupil fluctuations track variations in state in whole-cell in vivo patch recordings
• Visual responses are enhanced during desynchronized states indexed by dilation
• Ensemble correlations are increased during pupil constriction
Reimer et al. describe fast state changes in the cortex of quietly awake mice that are tracked by small fluctuations in pupil size. The cortical membrane potential is desynchronized and sensory responses are enhanced during the state indexed by pupil dilation.
Despite the importance of the mammalian neocortex for complex cognitive processes, we still lack a comprehensive description of its cellular components. To improve the classification of neuronal cell types and the functional characterization of single neurons, we present Patch-seq, a method that combines whole-cell electrophysiological patch-clamp recordings, single-cell RNA-sequencing and morphological characterization. Following electrophysiological characterization, cell contents are aspirated through the patch-clamp pipette and prepared for RNA-sequencing. Using this approach, we generate electrophysiological and molecular profiles of 58 neocortical cells and show that gene expression patterns can be used to infer morphological and physiological properties of individual neurons, such as axonal arborization and action potential amplitude. Our results shed light on the molecular underpinnings of neuronal diversity and suggest that Patch-seq can facilitate the classification of cell types in the nervous system.
Artificial neural networks suffer from catastrophic forgetting. Unlike humans, when these networks are trained on something new, they rapidly forget what was learned before. In the brain, a mechanism thought to be important for protecting memories is the reactivation of neuronal activity patterns representing those memories. In artificial neural networks, such memory replay can be implemented as 'generative replay', which can successfully - and surprisingly efficiently - prevent catastrophic forgetting on toy examples even in a class-incremental learning scenario. However, scaling up generative replay to complicated problems with many tasks or complex inputs is challenging. We propose a new, brain-inspired variant of replay in which internal or hidden representations are replayed that are generated by the network's own, context-modulated feedback connections. Our method achieves state-of-the-art performance on challenging continual learning benchmarks (e.g., class-incremental learning on CIFAR-100) without storing data, and it provides a novel model for replay in the brain.
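The core idea of generative replay described above can be illustrated with a minimal, self-contained sketch: a classifier is trained on a first task, a generative model of that task's data is fitted, and samples from the generator are then interleaved with the second task's data so the old classes are not forgotten. Here a class-conditional Gaussian stands in for the learned generative model, and the toy data and function names are illustrative, not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Four well-separated 2-D Gaussian classes, presented as two tasks
# (class-incremental): task 1 = classes {0, 1}, task 2 = classes {2, 3}.
MEANS = np.array([[-4., -4.], [-4., 4.], [4., -4.], [4., 4.]])

def sample(classes, n=200):
    y = rng.choice(classes, size=n)
    x = MEANS[y] + rng.normal(scale=0.5, size=(n, 2))
    return x, y

def train(W, b, x, y, lr=0.1, epochs=300):
    """In-place gradient descent on softmax cross-entropy."""
    for _ in range(epochs):
        logits = x @ W + b
        p = np.exp(logits - logits.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        p[np.arange(len(y)), y] -= 1.0          # dLoss/dlogits
        W -= lr * x.T @ p / len(y)
        b -= lr * p.sum(axis=0) / len(y)

# Task 1: train on real data, then fit a crude generative model
# (class-conditional Gaussian) as a stand-in for a learned generator.
W, b = np.zeros((2, 4)), np.zeros(4)
x1, y1 = sample([0, 1])
train(W, b, x1, y1)
gen_means = {c: x1[y1 == c].mean(axis=0) for c in (0, 1)}

# Task 2: interleave real task-2 data with replayed task-1 samples,
# so no task-1 data needs to be stored.
x2, y2 = sample([2, 3])
y_re = rng.choice([0, 1], size=200)
x_re = np.stack([gen_means[c] for c in y_re]) + rng.normal(scale=0.5, size=(200, 2))
train(W, b, np.vstack([x2, x_re]), np.concatenate([y2, y_re]))

# Accuracy on task 1 after training on task 2 stays high thanks to replay.
xt, yt = sample([0, 1])
acc_old = (np.argmax(xt @ W + b, axis=1) == yt).mean()
print(acc_old)
```

Without the replayed samples, the same second round of training would overwrite the task-1 decision boundary; the replay batch is what keeps the old classes represented in the gradient.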
Shared, trial-to-trial variability in neuronal populations has a strong impact on the accuracy of information processing in the brain. Estimates of the level of such noise correlations are diverse, ranging from 0.01 to 0.4, with little consensus on which factors account for these differences. Here we addressed one important factor that varied across studies, asking how anesthesia affects the population activity structure in macaque primary visual cortex. We found that under opioid anesthesia, activity was dominated by strong coordinated fluctuations on a timescale of 1-2 Hz, which were mostly absent in awake, fixating monkeys. Accounting for these global fluctuations markedly reduced correlations under anesthesia, matching those observed during wakefulness and reconciling earlier studies conducted under anesthesia and in awake animals. Our results show that internal signals, such as brain state transitions under anesthesia, can induce noise correlations but can also be estimated and accounted for based on neuronal population activity.
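The mechanism in the abstract above, a global fluctuation inflating pairwise noise correlations until it is estimated and regressed out, can be demonstrated in a few lines of NumPy. This is a minimal sketch on simulated counts: the across-neuron mean is used as a simple stand-in for a proper latent-state model, and all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate spike counts for 50 neurons over 1000 trials with a shared
# global "state" fluctuation (as under anesthesia) added to private noise.
n_neurons, n_trials = 50, 1000
state = rng.normal(size=n_trials)                       # global fluctuation
private = rng.normal(size=(n_trials, n_neurons))        # per-neuron noise
counts = private + 1.5 * state[:, None]                 # shared component

def mean_noise_correlation(x):
    """Average pairwise correlation across neurons (off-diagonal mean)."""
    c = np.corrcoef(x.T)
    return c[~np.eye(len(c), dtype=bool)].mean()

raw = mean_noise_correlation(counts)

# Estimate the global fluctuation from the population itself (here simply
# the across-neuron mean on each trial) and regress it out of every neuron.
est_state = counts.mean(axis=1, keepdims=True)
beta = (counts * est_state).sum(axis=0) / (est_state ** 2).sum()
residual = counts - est_state * beta

corrected = mean_noise_correlation(residual)
print(raw, corrected)   # corrected correlations are far smaller than raw
```

With the shared component removed, the residual correlations drop from the high values typical of anesthetized recordings toward the near-zero values reported in awake animals, which is the reconciliation the study describes.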
Despite great efforts over several decades, our best models of primary visual cortex (V1) still predict spiking activity quite poorly when probed with natural stimuli, highlighting our limited understanding of the nonlinear computations in V1. Recently, two approaches based on deep learning have emerged for modeling these nonlinear computations: transfer learning from artificial neural networks trained on object recognition and data-driven convolutional neural network models trained end-to-end on large populations of neurons. Here, we test the ability of both approaches to predict spiking activity in response to natural images in V1 of awake monkeys. We found that the transfer learning approach performed similarly well to the data-driven approach and both outperformed classical linear-nonlinear and wavelet-based feature representations that build on existing theories of V1. Notably, transfer learning using a pre-trained feature space required substantially less experimental time to achieve the same performance. In conclusion, multi-layer convolutional neural networks (CNNs) set the new state of the art for predicting neural responses to natural images in primate V1, and deep features learned for object recognition are better explanations for V1 computation than all previous filter bank theories. This finding underscores the need for V1 models that are multiple nonlinearities away from the image domain and supports the idea of explaining early visual cortex based on high-level functional goals.
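The transfer-learning recipe described above (freeze a feature space pre-trained for object recognition and fit only a simple per-neuron readout) can be sketched compactly. In this toy version a fixed random ReLU projection stands in for the frozen CNN layer, simulated "neurons" are linear in those features, and the readout is fit by ridge regression in closed form; all names and sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for a frozen, pre-trained feature extractor (e.g., a CNN layer):
# a fixed random projection followed by a ReLU.
W_feat = rng.normal(size=(64, 256)) / 16.0
def features(images):
    return np.maximum(images @ W_feat, 0.0)

# Simulated neurons whose responses are linear in those features plus noise,
# so a linear readout on frozen features is the right model class.
true_readout = rng.normal(size=(256, 10))               # 10 neurons
def responses(images):
    signal = features(images) @ true_readout
    return signal + rng.normal(scale=0.5, size=signal.shape)

images_train = rng.normal(size=(500, 64))
images_test = rng.normal(size=(100, 64))

# "Transfer learning" step: fit ONLY a ridge readout, features stay frozen.
X, Y = features(images_train), responses(images_train)
lam = 1.0
readout = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y)

# Evaluate against the noise-free signal on held-out images.
pred = features(images_test) @ readout
truth = features(images_test) @ true_readout
r = np.corrcoef(pred.ravel(), truth.ravel())[0, 1]
print(f"test correlation: {r:.2f}")
```

Because only the small readout is estimated from neural data, far fewer training stimuli are needed than for fitting the whole network end-to-end, which mirrors the experimental-time saving reported in the abstract.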
Rapid variations in cortical state during wakefulness have a strong influence on neural and behavioural responses and are tightly coupled to changes in pupil size across species. However, the physiological processes linking cortical state and pupil variations are largely unknown. Here we demonstrate that these rapid variations, during both quiet waking and locomotion, are highly correlated with fluctuations in the activity of corticopetal noradrenergic and cholinergic projections. Rapid dilations of the pupil are tightly associated with phasic activity in noradrenergic axons, whereas longer-lasting dilations of the pupil, such as during locomotion, are accompanied by sustained activity in cholinergic axons. Thus, the pupil can be used to sensitively track the activity in multiple neuromodulatory transmitter systems as they control the state of the waking brain.
The state of the brain and body constantly varies on rapid and slow timescales. These variations contribute to the apparent noisiness of sensory responses at both the neural and the behavioral level. Recent investigations of rapid state changes in awake, behaving animals have provided insight into the mechanisms by which optimal sensory encoding and behavioral performance are achieved. Fluctuations in state, as indexed by pupillometry, impact both the “signal” (sensory evoked response) and the “noise” (spontaneous activity) of cortical responses. By taking these fluctuations into account, neural response (co)variability is significantly reduced, revealing the brain to be more reliable and predictable than previously thought.
The waking brain appears to be noisy, giving rise to variable responses. McGinley et al. review literature showing that careful monitoring of waking state can account for these variations, revealing a brain that is both reliable and predictable.
Despite enormous progress in machine learning, artificial neural networks still lag behind brains in their ability to generalize to new situations. Given identical training data, differences in generalization are caused by many defining features of a learning algorithm, such as network architecture and learning rule. Their joint effect, called “inductive bias,” determines how well any learning algorithm—or brain—generalizes: robust generalization needs good inductive biases. Artificial networks use rather nonspecific biases and often latch onto patterns that are only informative about the statistics of the training data but may not generalize to different scenarios. Brains, on the other hand, generalize across comparatively drastic changes in the sensory input all the time. We highlight some shortcomings of state-of-the-art learning algorithms compared to biological brains and discuss several ideas about how neuroscience can guide the quest for better inductive biases by providing useful constraints on representations and network architecture.
Artificial neural networks still lag behind brains in their ability to generalize beyond their training conditions. In this review, Sinz et al. discuss several ideas for how neuroscience can guide the search for better inductive biases by providing useful constraints on representations and network architecture.
High-resolution optical imaging is critical to understanding brain function. We demonstrate that three-photon microscopy at 1,300-nm excitation enables functional imaging of GCaMP6s-labeled neurons beyond the depth limit of two-photon microscopy. We record spontaneous activity from up to 150 neurons in the hippocampal stratum pyramidale at ∼1-mm depth within an intact mouse brain. Our method creates opportunities for noninvasive recording of neuronal activity with high spatial and temporal resolution deep within scattering brain tissues.
Three types of incremental learning
van de Ven, Gido M; Tuytelaars, Tinne; Tolias, Andreas S
Nature Machine Intelligence, 12/2022, Volume 4, Issue 12
Journal Article · Peer reviewed · Open access
Incrementally learning new information from a non-stationary stream of data, referred to as 'continual learning', is a key feature of natural intelligence, but a challenging problem for deep neural networks. In recent years, numerous deep learning methods for continual learning have been proposed, but comparing their performances is difficult due to the lack of a common framework. To help address this, we describe three fundamental types, or 'scenarios', of continual learning: task-incremental, domain-incremental and class-incremental learning. Each of these scenarios has its own set of challenges. To illustrate this, we provide a comprehensive empirical comparison of currently used continual learning strategies, by performing the Split MNIST and Split CIFAR-100 protocols according to each scenario. We demonstrate substantial differences between the three scenarios in terms of difficulty and in terms of the effectiveness of different strategies. The proposed categorization aims to structure the continual learning field, by forming a key foundation for clearly defining benchmark problems.