The construction of biologically plausible models of neural circuits is crucial for understanding the computational properties of the nervous system. Constructing functional networks composed of separate excitatory and inhibitory neurons obeying Dale's law presents a number of challenges. We show how a target-based approach, when combined with a fast online constrained optimization technique, is capable of building functional models of rate and spiking recurrent neural networks in which excitation and inhibition are balanced. Balanced networks can be trained to produce complicated temporal patterns and to solve input-output tasks while retaining biologically desirable features such as Dale's law and response variability.
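The sign constraint at the heart of Dale's law can be illustrated with a projection step. The sketch below is a minimal toy (hypothetical sizes and a generic projected update, not the constrained optimizer of the paper): each presynaptic neuron is assigned a fixed excitatory or inhibitory sign, and after any weight update, weights that crossed zero are clipped back so every outgoing connection keeps its neuron's sign.

```python
import numpy as np

# Toy Dale's-law projection (illustrative sizes): 80% excitatory, 20%
# inhibitory presynaptic neurons; column j of W carries the sign of neuron j.
rng = np.random.default_rng(3)
N, n_exc = 20, 16
sign = np.where(np.arange(N) < n_exc, 1.0, -1.0)   # fixed sign per column
W = np.abs(rng.standard_normal((N, N))) * sign     # sign-consistent start

def project_dale(W):
    # Clip any weight that crossed zero back to zero, preserving Dale's law.
    return np.where(W * sign >= 0, W, 0.0)

# After a generic (here random, stand-in) gradient step, re-project:
W = project_dale(W - 0.1 * rng.standard_normal((N, N)))
```

A projection of this kind can follow each optimization step, so training never produces a neuron with mixed-sign outputs.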
Neural circuits display complex activity patterns both spontaneously and when responding to a stimulus or generating a motor output. How are these two forms of activity related? We develop a procedure called FORCE learning for modifying synaptic strengths either external to or within a model neural network to change chaotic spontaneous activity into a wide variety of desired activity patterns. FORCE learning works even though the networks we train are spontaneously chaotic and we leave feedback loops intact and unclamped during learning. Using this approach, we construct networks that produce a wide variety of complex output patterns, input-output transformations that require memory, multiple outputs that can be switched by control inputs, and motor patterns matching human motion capture data. Our results reproduce data on premovement activity in motor and premotor cortex, and suggest that synaptic plasticity may be a more rapid and powerful modulator of network activity than generally appreciated.
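The core of FORCE learning is a recursive least squares (RLS) update that keeps the output error small at every step while the readout continues to feed back into the network. The following is a minimal sketch with hypothetical parameters (network size, time constants, target signal are all illustrative), training a single readout of a chaotic rate network:

```python
import numpy as np

# Minimal FORCE sketch (hypothetical parameters, single readout): a chaotic
# rate network whose readout weights are trained online by recursive least
# squares while the feedback loop from the readout stays intact.
rng = np.random.default_rng(0)
N, dt, tau, g = 300, 1e-3, 10e-3, 1.5
J = g * rng.standard_normal((N, N)) / np.sqrt(N)  # strong recurrence (chaotic)
w_fb = 2.0 * rng.random(N) - 1.0                  # fixed feedback weights
w = np.zeros(N)                                   # readout, trained online
P = np.eye(N)                                     # running inverse correlation
x = 0.5 * rng.standard_normal(N)

steps = 2000
target = np.sin(2 * np.pi * 5 * np.arange(steps) * dt)  # desired output

for i in range(steps):
    r = np.tanh(x)
    z = w @ r                                # network output
    x += dt / tau * (-x + J @ r + w_fb * z)  # feedback left unclamped
    Pr = P @ r
    k = Pr / (1.0 + r @ Pr)
    P -= np.outer(k, Pr)                     # RLS covariance update
    w -= (z - target[i]) * k                 # shrink the output error
```

Each RLS step multiplies the instantaneous error by 1/(1 + rᵀPr) < 1, which is what lets learning proceed without clamping the feedback.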
Most of the networks used by computer scientists and many of those studied by modelers in neuroscience represent unit activities as continuous variables. Neurons, however, communicate primarily through discontinuous spiking. We review methods for transferring our ability to construct interesting networks that perform relevant tasks from the artificial continuous domain to more realistic spiking network models. These methods raise a number of issues that warrant further theoretical and experimental study.
Many animals rely on an internal heading representation when navigating in varied environments. How this representation is linked to the sensory cues that define different surroundings is unclear. In the fly brain, heading is represented by 'compass' neurons that innervate a ring-shaped structure known as the ellipsoid body. Each compass neuron receives inputs from 'ring' neurons that are selective for particular visual features; this combination provides an ideal substrate for the extraction of directional information from a visual scene. Here we combine two-photon calcium imaging and optogenetics in tethered flying flies with circuit modelling, and show how the correlated activity of compass and visual neurons drives plasticity, which flexibly transforms two-dimensional visual cues into a stable heading representation. We also describe how this plasticity enables the fly to convert a partial heading representation, established from orienting within part of a novel setting, into a complete heading representation. Our results provide mechanistic insight into the memory-related computations that are essential for flexible navigation in varied surroundings.
Many behavioural tasks require the manipulation of mathematical vectors, but, outside of computational models, it is not known how brains perform vector operations. Here we show how the Drosophila central complex, a region implicated in goal-directed navigation, performs vector arithmetic. First, we describe a neural signal in the fan-shaped body that explicitly tracks the allocentric travelling angle of a fly, that is, the travelling angle in reference to external cues. Past work has identified neurons in Drosophila and mammals that track the heading angle of an animal referenced to external cues (for example, head direction cells), but this new signal illuminates how the sense of space is properly updated when travelling and heading angles differ (for example, when walking sideways). We then characterize a neuronal circuit that performs an egocentric-to-allocentric (that is, body-centred to world-centred) coordinate transformation and vector addition to compute the allocentric travelling direction. This circuit operates by mapping two-dimensional vectors onto sinusoidal patterns of activity across distinct neuronal populations, with the amplitude of the sinusoid representing the length of the vector and its phase representing the angle of the vector. The principles of this circuit may generalize to other brains and to domains beyond navigation where vector operations or reference-frame transformations are required.
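The sinusoidal vector code described above can be captured in a few lines. In this sketch (illustrative population size and vectors, not fitted to the fly data), a 2-D vector of length a and angle θ is encoded as the activity profile a·cos(φ − θ) over neurons with preferred angles φ; adding two profiles neuron by neuron yields the sinusoid encoding the vector sum:

```python
import numpy as np

# Vector addition with sinusoidal population codes (illustrative parameters):
# amplitude encodes vector length, phase encodes vector angle.
phi = np.linspace(0, 2 * np.pi, 16, endpoint=False)  # preferred angles

def encode(length, angle):
    # Population activity profile for a vector (length, angle).
    return length * np.cos(phi - angle)

def decode(activity):
    # Recover amplitude and phase of the population sinusoid.
    c = activity @ np.cos(phi) * 2 / len(phi)
    s = activity @ np.sin(phi) * 2 / len(phi)
    return np.hypot(c, s), np.arctan2(s, c)

# Adding the activity profiles implements vector addition:
a = encode(1.0, np.pi / 4)   # e.g., a heading-related vector
b = encode(0.5, np.pi)       # e.g., a sideways offset
length, angle = decode(a + b)
```

Because the sum of two sinusoids of the same spatial frequency is again a sinusoid, elementwise addition of activity profiles is exactly vector addition of the encoded quantities.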
Advances in experimental neuroscience have transformed our ability to explore the structure and function of neural circuits. At the same time, advances in machine learning have unleashed the remarkable computational power of artificial neural networks (ANNs). While these two fields have different tools and applications, they present a similar challenge: namely, understanding how information is embedded and processed through high-dimensional representations to solve complex tasks. One approach to addressing this challenge is to utilize mathematical and computational tools to analyze the geometry of these high-dimensional representations, i.e., neural population geometry. We review examples of geometrical approaches providing insight into the function of biological and artificial neural networks: representation untangling in perception, a geometric theory of classification capacity, disentanglement, and abstraction in cognitive systems, topological representations underlying cognitive maps, dynamic untangling in motor systems, and a dynamical approach to cognition. Together, these findings illustrate an exciting trend at the intersection of machine learning, neuroscience, and geometry, in which neural population geometry provides a useful population-level mechanistic descriptor underlying task implementation. Importantly, geometric descriptions are applicable across sensory modalities, brain regions, network architectures, and timescales. Thus, neural population geometry has the potential to unify our understanding of structure and function in biological and artificial neural networks, bridging the gap between single neurons, population activities, and behavior.
•Manifold-like representations arise when a set of neurons in a biological or artificial neural network exhibits variability in response to stimuli or through internal recurrent dynamics.
•Approaches focused on analyzing geometric properties of neural populations, i.e. neural population geometry, have emerged as a promising population-level analysis technique connecting neural responses and task implementation.
•We highlight recent studies of neural population geometry: untangling in perception, classification theory of manifolds, abstraction in cognitive systems, topology underlying cognitive maps, dynamic untangling in motor systems, and a dynamic approach to cognition.
•Future directions include developing geometric measures as a population-level hypothesis, connecting representational geometry to biophysical properties of neurons, and developing theories of neural population geometry for a larger array of tasks.
Trained recurrent networks are powerful tools for modeling dynamic neural computations. We present a target-based method for modifying the full connectivity matrix of a recurrent network to train it to perform tasks involving temporally complex input/output transformations. The method introduces a second network during training to provide suitable "target" dynamics useful for performing the task. Because it exploits the full recurrent connectivity, the method produces networks that perform tasks with fewer neurons and greater noise robustness than traditional least-squares (FORCE) approaches. In addition, we show how introducing additional input signals into the target-generating network, which act as task hints, greatly extends the range of tasks that can be learned and provides control over the complexity and nature of the dynamics of the trained, task-performing network.
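The target-based idea can be sketched compactly. In this toy version (illustrative sizes, and a batch least-squares fit rather than the online learning used in practice), a driven "target" network receives the desired output f(t) as input, and its recurrent currents become the targets that the learner's full connectivity matrix is fit to reproduce:

```python
import numpy as np

# Toy target-based training sketch (illustrative parameters): the driven
# target network's currents J_D r(t) + u f(t) serve as targets for the
# learner's full recurrent matrix J, fit here by batch least squares.
rng = np.random.default_rng(2)
N, T, dt, tau, g = 100, 1000, 1e-3, 10e-3, 1.2
JD = g * rng.standard_normal((N, N)) / np.sqrt(N)  # target-network recurrence
u = rng.standard_normal(N)                          # input weights for f(t)
f = np.sin(2 * np.pi * 3 * np.arange(T) * dt)       # desired output

# Run the driven target network; record rates and target currents.
xD = np.zeros(N)
R, C = np.zeros((T, N)), np.zeros((T, N))
for i in range(T):
    r = np.tanh(xD)
    R[i], C[i] = r, JD @ r + u * f[i]   # current the learner should produce
    xD += dt / tau * (-xD + C[i])

# Learner's full recurrent matrix: J r(t) ≈ J_D r(t) + u f(t).
J = np.linalg.lstsq(R, C, rcond=None)[0].T
# Readout weights: w · r(t) ≈ f(t).
w = np.linalg.lstsq(R, f, rcond=None)[0]
```

After fitting, the learner network, run autonomously with recurrence J, should internally regenerate the dynamics that the input f(t) imposed on the target network, with w reading out the task output.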
The mushroom body in the fruitfly Drosophila melanogaster is an associative brain centre that translates odour representations into learned behavioural responses. Kenyon cells, the intrinsic neurons of the mushroom body, integrate input from olfactory glomeruli to encode odours as sparse distributed patterns of neural activity. We have developed anatomic tracing techniques to identify the glomerular origin of the inputs that converge onto 200 individual Kenyon cells. Here we show that each Kenyon cell integrates input from a different and apparently random combination of glomeruli. The glomerular inputs to individual Kenyon cells show no discernible organization with respect to their odour tuning, anatomic features or developmental origins. Moreover, different classes of Kenyon cells do not seem to preferentially integrate inputs from specific combinations of glomeruli. This organization of glomerular connections to the mushroom body could allow the fly to contextualize novel sensory experiences, a feature consistent with the role of this brain centre in mediating learned olfactory associations and behaviours.
Over the course of a lifetime, we process a continual stream of information. Extracted from this stream, memories must be efficiently encoded and stored in an addressable manner for retrieval. To explore potential mechanisms, we consider a familiarity detection task in which a subject reports whether an image has been previously encountered. We design a feedforward network endowed with synaptic plasticity and an addressing matrix, meta-learned to optimize familiarity detection over long intervals. We find that anti-Hebbian plasticity leads to better performance than Hebbian plasticity and replicates experimental results such as repetition suppression. A combinatorial addressing function emerges, selecting a unique neuron as an index into the synaptic memory matrix for storage or retrieval. Unlike previous models, this network operates continuously and generalizes to intervals it has not been trained on. Our work suggests a biologically plausible mechanism for continual learning and demonstrates an effective application of machine learning for neuroscience discovery.
•Meta-learning is used to discover network architectures and plasticity rules
•Anti-Hebbian plasticity emerges as the mechanism for encoding familiarity
•Strong feedforward synapses emerge as an addressing function for storage and retrieval
•Experimental features such as repetition suppression are reproduced
Tyulmankov et al. use meta-learning to build neural network models for continual familiarity detection. They show that anti-Hebbian plasticity is the preferred mechanism for optimizing memory capacity and propose strong feedforward weights as an explicit addressing mechanism for selecting memory locations during storage and retrieval.
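The repetition-suppression signature of anti-Hebbian plasticity can be shown in a few lines. This is an illustrative toy (hypothetical dimensions, learning rate, and initial weights, not the meta-learned network of the paper): a readout neuron's weights are depressed in proportion to its response, so a repeated stimulus evokes a weaker response than its first, novel presentation:

```python
import numpy as np

# Toy anti-Hebbian repetition suppression (illustrative parameters): the
# weight change is -eta * response * input, the opposite sign of Hebbian
# potentiation, so active synapses are weakened after each presentation.
rng = np.random.default_rng(1)
d, eta = 50, 0.01
w = np.ones(d)              # hypothetical initial weights
x_old = rng.random(d)       # stimulus that will be repeated

first = w @ x_old           # response to the novel presentation
w -= eta * first * x_old    # anti-Hebbian depression of active synapses
repeat = w @ x_old          # suppressed response on repetition
```

Familiarity can then be read out by comparing the response to a threshold: a suppressed response signals a previously encountered stimulus.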
Associating stimuli with positive or negative reinforcement is essential for survival, but a complete wiring diagram of a higher-order circuit supporting associative memory has not been previously available. Here we reconstruct one such circuit at synaptic resolution, the Drosophila larval mushroom body. We find that most Kenyon cells integrate random combinations of inputs but that a subset receives stereotyped inputs from single projection neurons. This organization maximizes performance of a model output neuron on a stimulus discrimination task. We also report a novel canonical circuit in each mushroom body compartment with previously unidentified connections: reciprocal Kenyon cell to modulatory neuron connections, modulatory neuron to output neuron connections, and a surprisingly high number of recurrent connections between Kenyon cells. Stereotyped connections found between output neurons could enhance the selection of learned behaviours. The complete circuit map of the mushroom body should guide future functional studies of this learning and memory centre.