Memory formation is hypothesized to involve the generation of event-specific neural activity patterns during learning and the subsequent spontaneous reactivation of these patterns. Here, we present evidence that these processes can also be observed in urethane-anesthetized rats and are enhanced by a desynchronized brain state evoked by tail pinch, subcortical carbachol infusion, or systemic amphetamine administration. During desynchronization, we found that repeated tactile or auditory stimulation evoked unique sequential patterns of neural firing in somatosensory and auditory cortex and that these patterns then reoccurred during subsequent spontaneous activity, similar to what we have observed in awake animals. Furthermore, the formation of these patterns was blocked by an NMDA receptor antagonist, suggesting that the phenomenon depends on synaptic plasticity. These results suggest that anesthetized animals with a desynchronized brain state could serve as a convenient model for studying stimulus-induced plasticity to improve our understanding of memory formation and replay in the brain.
•Sensory stimulation evokes sequential activity patterns in sensory cortex
•These patterns reoccur during subsequent spontaneous activity
•Formation of new spiking patterns is facilitated in desynchronized brain state
•In anesthetized rats, desynchronized brain state can be induced by amphetamine
Sensory experience evokes stimulus-specific patterns of neuronal activity. These patterns are later spontaneously replayed, which is believed to be an important part of memory formation and learning. Bermudez Contreras et al. show that these processes are enhanced in an attentive-like brain state.
Sleep consists of two basic stages: non-rapid eye movement (NREM) and rapid eye movement (REM) sleep. NREM sleep is characterized by slow, high-amplitude cortical electroencephalogram (EEG) signals, while REM sleep is characterized by desynchronized cortical rhythms. Despite this, recent electrophysiological studies have suggested the presence of slow waves (SWs) in local cortical areas during REM sleep. Electrophysiological techniques, however, have been unable to resolve the regional structure of this activity because of relatively sparse sampling. Here, we map functional gradients in cortical activity during REM sleep using mesoscale imaging in mice and show local SW patterns occurring mainly in somatomotor and auditory cortical regions, with minimal presence within the default mode network. We also explore the role of the cholinergic system in local desynchronization during REM sleep by calcium imaging of cortical cholinergic activity and analysis of structural data, demonstrating weaker cholinergic projections and terminal activity in regions that exhibit frequent SWs during REM sleep.
•There is a gradient of slow-wave activity across cortex during REM sleep
•Synchronized and desynchronized states co-exist within single cortical hemisphere
•Slow-wave epicenters are orthogonal to the default mode network
•Slow-wave epicenters have weaker cholinergic projections and terminal activity
Nazari et al. investigate the functional gradient of cortical activity during REM sleep and its relationship with the cholinergic system in the mouse brain. Their results suggest that slow waves occur in local cortical areas during REM sleep and that their spatial distribution is determined by regional variation of cholinergic activity.
Documenting a mouse’s “real world” behavior in the “small world” of a laboratory cage with continuous video recordings offers insights into the phenotypical expression of mouse genotypes, development and aging, and neurological disease. Nevertheless, there are challenges in the design of a small world, the behavior selected for analysis, and the form of the analysis used. Here we offer insights into small world analyses by describing how acute behavioral procedures can guide continuous behavioral methodology. We show how algorithms can identify behavioral acts including walking and rearing, circadian patterns of action including sleep duration and waking activity, and the organization of patterns of movement into home base activity and excursions, and how these are altered with aging. We additionally describe how specific tests can be incorporated within a mouse’s living arrangement. We emphasize how machine learning can condense and organize continuous activity that extends over long periods of time.
•Description of the challenges of continuous and extensive small world mouse behavior analysis.
•Description of how to uncover insights into phenotypical expression, development and aging, and neurological disease.
•Discussion of data collection and role of machine learning in uncovering the behavioral organization over long time periods.
•Advantages and limitations of extending acute behavioral testing methodology to the small world in the laboratory.
•Description of a behavioral methodology for assessing organized behavior across the mouse lifespan.
The reinforcement learning (RL) paradigm allows agents to solve tasks through trial-and-error learning. To be capable of efficient, long-term learning, RL agents should be able to apply knowledge gained in the past to new tasks they may encounter in the future. The ability to predict actions' consequences may facilitate such knowledge transfer. We consider here domains where an RL agent has access to two kinds of information: agent-centric information with constant semantics across tasks, and environment-centric information, which is necessary to solve the task, but with semantics that differ between tasks. For example, in robot navigation, environment-centric information may include the robot's geographic location, while agent-centric information may include sensor readings of various nearby obstacles. We propose that these situations provide an opportunity for a very natural style of knowledge transfer, in which the agent learns to predict actions' environmental consequences using agent-centric information. These predictions contain important information about the affordances and dangers present in a novel environment, and can effectively transfer knowledge from agent-centric to environment-centric learning systems. Using several example problems including spatial navigation and network routing, we show that our knowledge transfer approach can allow faster and lower cost learning than existing alternatives.
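The agent-centric transfer idea above can be illustrated with a toy gridworld sketch. Everything here is hypothetical and not the paper's implementation: the maze layouts, the `sensors` observation (which neighboring cells are blocked), and the collision-counting model are invented for illustration. The point is only that a model keyed on agent-centric observations, trained in one maze, still makes meaningful safety predictions in a maze it has never seen.

```python
import random
from collections import defaultdict

# Hypothetical sketch: learn an agent-centric model that predicts whether an
# action will cause a collision, from local "sensor" readings (is the adjacent
# cell in each direction blocked?). Because the sensor semantics are the same
# in every maze, the model transfers to new tasks and can pre-prune dangerous
# actions for an environment-centric learner.

ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # right, left, down, up (row, col)

def sensors(pos, walls):
    """Agent-centric observation: which neighboring cells are blocked."""
    return tuple((pos[0] + dr, pos[1] + dc) in walls for dr, dc in ACTIONS)

def learn_collision_model(walls, size, n_samples=2000, seed=0):
    """Count collision outcomes per (sensor reading, action) pair."""
    rng = random.Random(seed)
    counts = defaultdict(lambda: [0, 0])  # key -> [collisions, trials]
    for _ in range(n_samples):
        pos = (rng.randrange(size), rng.randrange(size))
        if pos in walls:
            continue
        a = rng.randrange(len(ACTIONS))
        nxt = (pos[0] + ACTIONS[a][0], pos[1] + ACTIONS[a][1])
        c = counts[(sensors(pos, walls), a)]
        c[1] += 1
        c[0] += nxt in walls
    return counts

def predicted_safe(model, obs, a):
    """Transferred prediction: is action `a` safe given agent-centric obs?"""
    coll, trials = model.get((obs, a), (0, 0))
    return trials == 0 or coll / trials < 0.5

# Train in one maze, then reuse the predictions in a different maze:
train_walls = {(1, 1), (2, 3), (0, 2)}
model = learn_collision_model(train_walls, size=4)
test_walls = {(2, 2)}
obs = sensors((2, 1), test_walls)  # in the new maze: obstacle to the right
# The transferred model flags "move right" (action 0) as unsafe,
# while "move down" (action 2) is predicted safe.
```

The environment-centric learner (e.g. tabular Q-learning over geographic position) could then skip or penalize actions the transferred model flags, which is the flavor of speed-up the abstract describes.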
In current research on measuring complex behaviours/phenotyping in rodents, most experimental designs require the experimenter to remove the animal from its home-cage environment and place it in an unfamiliar apparatus (novel environment). This interaction may influence behaviour, general well-being, and the metabolism of the animal, affecting the phenotypic outcome even if the data collection method is automated. Most commercially available solutions for home-cage monitoring are expensive and usually lack the flexibility to be incorporated with existing home-cages. Here we present a low-cost solution for monitoring home-cage behaviour of rodents that can be easily incorporated into practically any available rodent home-cage. To demonstrate the use of our system, we reliably predict the sleep/wake state of mice in their home-cage using only video. We validate these results using hippocampal local field potential (LFP) and electromyography (EMG) data. Our approach provides a low-cost, flexible methodology for high-throughput studies of sleep, circadian rhythm, and rodent behaviour with minimal experimenter interference.
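One simple way a video-only sleep/wake prediction could work, shown here purely as an illustrative sketch and not necessarily the pipeline used in the paper, is to threshold smoothed frame-to-frame motion energy: epochs with prolonged immobility are scored as sleep. The frame sizes, window length, and threshold below are arbitrary choices for the synthetic demo.

```python
import numpy as np

# Illustrative sketch (assumed approach, not the paper's method): score
# sleep vs. wake from video by thresholding smoothed motion energy.

def motion_energy(frames):
    """Mean absolute pixel change between consecutive frames."""
    diffs = np.abs(np.diff(frames.astype(float), axis=0))
    return diffs.mean(axis=(1, 2))

def score_sleep_wake(frames, smooth_frames=40, threshold=5.0):
    """Boolean array per frame pair: True = wake, False = sleep."""
    energy = motion_energy(frames)
    kernel = np.ones(smooth_frames) / smooth_frames  # moving-average smoothing
    smoothed = np.convolve(energy, kernel, mode="same")
    return smoothed > threshold

# Synthetic demo: 100 still frames ("sleep") then 100 noisy frames ("wake").
rng = np.random.default_rng(0)
still = np.zeros((100, 32, 32))
moving = rng.integers(0, 255, size=(100, 32, 32)).astype(float)
labels = score_sleep_wake(np.concatenate([still, moving]))
# labels is False in the still epoch and True in the moving epoch
# (frames near the transition are blurred by the smoothing window).
```

A real system would of course need background subtraction, robustness to lighting, and validation against LFP/EMG ground truth, exactly the validation step the abstract describes.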
Recent advances in artificial intelligence (AI) and neuroscience are impressive. In AI, these include the development of computer programs that can beat a grandmaster at Go or outperform human radiologists at cancer detection. A great deal of these technological developments are directly related to progress in artificial neural networks, initially inspired by our knowledge about how the brain carries out computation. In parallel, neuroscience has also experienced significant advances in understanding the brain. For example, in the field of spatial navigation, the discovery of the mechanisms and brain regions involved in neural computations of cognitive maps (an internal representation of space) was recently recognized with the Nobel Prize in Physiology or Medicine. Much of the recent progress in neuroscience has been due to the development of technology for recording from very large populations of neurons in multiple regions of the brain with exquisite temporal and spatial resolution in behaving animals. With the vast quantities of data that these techniques allow us to collect, there has been an increased interest in the intersection between AI and neuroscience; many of these intersections involve using AI as a novel tool to explore and analyze these large data sets. However, given their common initial motivation, to understand the brain, these disciplines could be more strongly linked. Currently, much of this potential synergy is not being realized. We propose that spatial navigation is an excellent area in which these two disciplines can converge to help advance what we know about the brain. In this review, we first summarize progress in the neuroscience of spatial navigation and reinforcement learning. We then turn our attention to how spatial navigation has been modeled using descriptive, mechanistic, and normative approaches, and to the use of AI in such models.
Next, we discuss how AI can advance neuroscience, how neuroscience can advance AI, and the limitations of these approaches. We conclude by highlighting promising lines of research in which spatial navigation can serve as the point of intersection between neuroscience and AI and thereby advance our understanding of intelligent behavior.
Despite the recent advances in deep learning and the popularity it has gained through numerous industrial applications, artificial neural networks (ANNs) still lack crucial features of their biological counterparts that could improve their performance and their potential to advance our understanding of how the brain works. One avenue that has been proposed to change this is to strengthen the interaction between artificial intelligence (AI) research and neuroscience. Since their historical beginnings, ANNs and AI in general have developed in close alignment with both neuroscience and psychology. In addition to deep learning, reinforcement learning (RL) is another approach strongly linked to both AI and neuroscience as a way to understand how learning is implemented in the brain. In a recently published article, Botvinick et al. (Neuron, 107:603–616, 2020) explain why deep reinforcement learning (DRL) is important for neuroscience as a framework to study learning, representations, and decision making. Here, I summarise Botvinick et al.’s main arguments and frame them in the context of the study of learning, memory, and spatial navigation. I believe that applying this approach to the study of spatial navigation can provide useful insights into how the brain builds, processes, and stores representations of the outside world to extract knowledge.
Behavior provides important insights into neuronal processes. For example, analysis of reaching movements can give a reliable indication of the degree of impairment in neurological disorders such as stroke, Parkinson disease, or Huntington disease. The analysis of such movement abnormalities is notoriously difficult and requires a trained evaluator. Here, we show that a deep neural network is able to score behavioral impairments with expert accuracy in rodent models of stroke. The same network was also trained to successfully score movements in a variety of other behavioral tasks. The neural network also uncovered novel movement alterations related to stroke, which had higher predictive power for stroke volume than the movement components defined by human experts. Moreover, when the regression network was trained only on categorical information (control = 0; stroke = 1), it generated predictions with intermediate values between 0 and 1 that matched the human expert scores of stroke severity. The network thus offers a new data-driven approach to automatically derive ratings of motor impairments. Altogether, this network can provide a reliable neurological assessment and can assist the design of behavioral indices to diagnose and monitor neurological disorders.
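The observation that a regressor trained only on binary labels can still output graded severity scores has a simple intuition, sketched below with a toy one-dimensional example (this is an invented illustration, not the paper's network or data): because the model's output varies continuously with a severity-correlated feature, inputs between the two training clusters receive intermediate predictions.

```python
import numpy as np

# Toy illustration (not the paper's model): a regressor fit only on binary
# labels (control = 0, stroke = 1) produces graded scores for intermediate
# inputs, because its output is a continuous function of the feature.

rng = np.random.default_rng(1)
# Invented 1-D "movement feature" that scales with impairment severity.
controls = rng.normal(0.0, 0.1, size=50)  # severity ~ 0
strokes = rng.normal(1.0, 0.1, size=50)   # severity ~ 1
x = np.concatenate([controls, strokes])
y = np.concatenate([np.zeros(50), np.ones(50)])

# Least-squares linear fit y ~ a*x + b, trained on the binary labels only.
a, b = np.polyfit(x, y, 1)

mild = a * 0.3 + b      # a mildly impaired animal gets an intermediate score
healthy = a * 0.0 + b   # a control-like animal scores near 0
```

The same logic applies to a deep regression network: nothing constrains its outputs to the training label values, so a monotone mapping from impairment-related features to score emerges naturally.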
In response to sensory stimulation, the cortex exhibits an early transient response followed by late and slower activation. Recent studies suggest that the early component represents features of the stimulus while the late component is associated with stimulus perception. Although very informative, these studies focus only on the amplitude of the evoked responses to study their relationship with sensory perception. In this work, we expand upon the study of how patterns of evoked and spontaneous activity are modified by experience at the mesoscale level, using voltage and extracellular glutamate transient recordings over widespread regions of mouse dorsal neocortex. We find that repeated tactile or auditory stimulation selectively modifies the spatiotemporal patterns of cortical activity, mainly of the late evoked response, in anesthetized mice injected with amphetamine and also in awake mice. This modification lasts up to 60 min and results in an increase in the amplitude of the late response after repeated stimulation and an increase in the similarity between the spatiotemporal patterns of the late evoked response. This similarity increase occurs only for the evoked responses of the sensory modality that received the repeated stimulation. Thus, this selective, long-lasting spatiotemporal modification of cortical activity patterns might provide evidence that evoked responses are a cortex-wide phenomenon. This work opens new questions about how perception-related cortical activity changes with sensory experience across the cortex.