Spatial navigation can serve as a model system in cognitive neuroscience, in which specific neural representations, learning rules, and control strategies can be inferred from the vast experimental literature that exists across many species, including humans. Here, we review this literature, focusing on the contributions of hippocampal and striatal systems, and attempt to outline a minimal cognitive architecture that is consistent with the experimental literature and that synthesizes previous related computational modeling. The resulting architecture includes striatal reinforcement learning based on egocentric representations of sensory states and actions, incidental Hebbian association of sensory information with allocentric state representations in the hippocampus, and arbitration of the outputs of both systems based on confidence/uncertainty in medial prefrontal cortex. We discuss the relationship between this architecture and learning in model-free and model-based systems, episodic memory, imagery, and planning, including some open questions and directions for further experiments.
Chersi and Burgess review the neural mechanisms of spatial navigation in rodents and humans to extract a common “cognitive architecture,” identifying the learning rules and representations at work in hippocampal, striatal, and parietal systems and discussing remaining open questions.
Humans and other animals use multiple strategies for making decisions. Reinforcement-learning theory distinguishes between stimulus–response (model-free; MF) learning and deliberative (model-based; MB) planning. The spatial-navigation literature presents a parallel dichotomy between navigation strategies. In “response learning,” associated with the dorsolateral striatum (DLS), decisions are anchored to an egocentric reference frame. In “place learning,” associated with the hippocampus, decisions are anchored to an allocentric reference frame. Emerging evidence suggests that the contribution of hippocampus to place learning may also underlie its contribution to MB learning by representing relational structure in a cognitive map. Here, we introduce a computational model in which hippocampus subserves place and MB learning by learning a “successor representation” of relational structure between states; DLS implements model-free response learning by learning associations between actions and egocentric representations of landmarks; and action values from either system are weighted by the reliability of its predictions. We show that this model reproduces a range of seemingly disparate behavioral findings in spatial and nonspatial decision tasks and explains the effects of lesions to DLS and hippocampus on these tasks. Furthermore, modeling place cells as driven by boundaries explains the observation that, unlike navigation guided by landmarks, navigation guided by boundaries is robust to “blocking” by prior state–reward associations due to learned associations between place cells. Our model, originally shaped by detailed constraints in the spatial literature, successfully characterizes the hippocampal–striatal system as a general system for decision making via adaptive combination of stimulus–response learning and the use of a cognitive map.
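The hippocampal component described above can be illustrated with a minimal sketch of a successor-representation (SR) learner. This is one assumed reading of "learning a successor representation of relational structure between states"; the state space, learning rate, and discount factor below are illustrative choices, not parameters from the paper.

```python
import numpy as np

n_states = 5
gamma, alpha = 0.95, 0.1
M = np.eye(n_states)      # SR matrix: expected discounted future state occupancies
w = np.zeros(n_states)    # separately learned per-state reward weights

def sr_update(s, s_next, r):
    """TD update of the SR row for state s, plus a reward-weight update."""
    onehot = np.zeros(n_states)
    onehot[s] = 1.0
    # SR prediction error: I(s) + gamma * M[s'] - M[s]
    M[s] += alpha * (onehot + gamma * M[s_next] - M[s])
    w[s_next] += alpha * (r - w[s_next])

def state_value(s):
    # Value factorizes as V(s) = M(s, :) . w — the map/reward split that
    # lets reward revaluation transfer without relearning the map
    return M[s] @ w

# Repeatedly walk a simple chain 0 -> 1 -> ... -> 4 with reward at the end
for _ in range(200):
    for s in range(n_states - 1):
        sr_update(s, s + 1, r=1.0 if s + 1 == n_states - 1 else 0.0)
```

Because the transition structure (`M`) and the reward weights (`w`) are stored separately, moving the reward only requires updating `w`, which is one way to capture the flexibility attributed to the hippocampal map.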
Inferior parietal lobule (IPL) neurons were studied when monkeys performed motor acts embedded in different actions and when they observed similar acts done by an experimenter. Most motor IPL neurons coding a specific act (e.g., grasping) showed markedly different activations when this act was part of different actions (e.g., for eating or for placing). Many motor IPL neurons also discharged during the observation of acts done by others. Most responded differentially when the same observed act was embedded in a specific action. These neurons fired during the observation of an act, before the beginning of the subsequent acts specifying the action. Thus, these neurons not only code the observed motor act but also allow the observer to understand the agent's intentions.
Instrumental behavior depends on both goal-directed and habitual mechanisms of choice. Normative views cast these mechanisms in terms of model-free and model-based methods of reinforcement learning, respectively. An influential proposal hypothesizes that model-free and model-based mechanisms coexist and compete in the brain according to their relative uncertainty. In this paper we propose a novel view in which a single Mixed Instrumental Controller produces both goal-directed and habitual behavior by flexibly balancing and combining model-based and model-free computations. The Mixed Instrumental Controller performs a cost–benefit analysis to decide whether to choose an action immediately based on the available "cached" value of actions (linked to model-free mechanisms) or to improve value estimation by mentally simulating the expected outcome values (linked to model-based mechanisms). Since mental simulation entails cognitive effort and increases the reward delay, it is activated only when the associated "Value of Information" exceeds its costs. The model proposes a method to compute the Value of Information, based on the uncertainty of action values and on the distance of alternative cached action values. Overall, the model by default chooses on the basis of cheaper model-free estimates, and integrates them with costly model-based predictions only when useful. Mental simulation uses a sampling method to produce reward expectancies, which are used to update the cached value of one or more actions; in turn, this updated value is used for the choice. The key predictions of the model are tested in different settings of a double T-maze scenario. Results are discussed in relation to neurobiological evidence on the hippocampus–ventral striatum circuit in rodents, which has been linked to goal-directed spatial navigation.
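The controller's gating rule can be sketched as follows. The Value-of-Information proxy below (high uncertainty and close cached values raise VoI) is an illustrative stand-in for the paper's computation, not its exact formula; function names and the cost threshold are assumptions.

```python
# Hedged sketch of a Mixed-Instrumental-Controller-style gate: mentally
# simulate (model-based) only when the Value of Information from reducing
# uncertainty exceeds the simulation cost.

def value_of_information(cached_values, uncertainties):
    ranked = sorted(cached_values, reverse=True)
    gap = ranked[0] - ranked[1]       # distance between the best cached options
    spread = max(uncertainties)       # uncertainty of the action values
    return spread / (gap + 1e-6)      # close values + high noise -> high VoI

def choose(cached_values, uncertainties, simulate, cost=1.0):
    """Pick an action, refining values by mental simulation only when worthwhile."""
    if value_of_information(cached_values, uncertainties) > cost:
        cached_values = [simulate(a) for a in range(len(cached_values))]
    return max(range(len(cached_values)), key=lambda a: cached_values[a])
```

When cached values are well separated and certain, the gate stays closed and the choice is model-free; when the options are close and uncertain, the costly simulation step is paid for by the expected improvement in the decision.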
The inferior part of the parietal lobe (IPL) is known to play a very important role in sensorimotor integration. Neurons in this region code goal-related motor acts performed with the mouth, the hand, and the arm. It has been demonstrated that most IPL motor neurons coding a specific motor act (e.g., grasping) show markedly different activation patterns according to the final goal of the action sequence in which the act is embedded (grasping for eating or grasping for placing). Some of these neurons (parietal mirror neurons) also show similar selectivity during the observation of the same action sequences when executed by others. Thus, it appears that the neuronal response occurring during the execution and the observation of a specific grasping act codes not only the executed motor act, but also the agent's final goal (intention). In this work we present a biologically inspired neural network architecture that models mechanisms of motor sequence execution and recognition. In this network, pools composed of motor and mirror neurons that encode motor acts of a sequence are arranged in the form of action-goal-specific neuronal chains. The execution and the recognition of actions are achieved through the propagation of activity bursts along specific chains modulated by visual and somatosensory inputs. The implemented spiking neuron network is able to reproduce the results found in neurophysiological recordings of parietal neurons during task performance and provides a biologically plausible implementation of the action selection and recognition process. Finally, the present paper proposes a mechanism for the formation of new neural chains by linking together in a sequential manner neurons that represent subsequent motor acts, thus producing goal-directed sequences.
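The chain idea can be caricatured in a few lines: an action sequence is a chain of motor-act pools, and execution/recognition is propagation of activity along one chain, gated by sensory support. The act names, the evidence dictionary, and the 0.5 gating threshold are illustrative assumptions, not elements of the spiking model itself.

```python
# Toy sketch of goal-specific neuronal chains: activity advances along a
# chain only while each successive motor act receives sensory support.

def propagate(chain, evidence):
    """Advance along a chain; insufficient input halts the propagation."""
    executed = []
    for act in chain:
        if evidence.get(act, 0.0) < 0.5:   # gating by visual/somatosensory input
            break
        executed.append(act)
    return executed

grasp_to_eat = ["reach", "grasp", "bring_to_mouth"]
# Observing reach and grasp in a food context supports the "eating" chain:
seen = {"reach": 0.9, "grasp": 0.8, "bring_to_mouth": 0.6}
```

In this caricature, recognizing an agent's intention corresponds to identifying which goal-specific chain the observed evidence propagates furthest along.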
Dual-system theories postulate that actions are supported either by a goal-directed or by a habit-driven response system. Neuroimaging and anatomo-functional studies have provided evidence that the prefrontal cortex plays a fundamental role in the first type of action control, while subcortical structures such as the basal ganglia are more active during habitual and overtrained responses. Additionally, it has been shown that areas of the cortex and the basal ganglia are connected through multiple parallel “channels”, which are thought to function as an action selection mechanism resolving competition between alternative options available in a given context.
In this paper we propose a multi-layer network of spiking neurons that implements in detail the thalamo-cortical circuits that are believed to be involved in action learning and execution. A key feature of this model is that neurons are organized in small pools in the motor cortex and form independent loops with specific pools of the basal ganglia where inhibitory circuits implement a multistep selection mechanism.
The described model has been validated by using it to control the actions of a virtual monkey that has to learn to turn on briefly flashing lights by pressing corresponding buttons on a board. When the animal is able to fluently execute the task, the button–light associations are remapped so that it has to suppress its habitual behavior in order to execute goal-directed actions.
The model shows how sensory–motor associations for action sequences are formed at the cortico-basal ganglia level and how goal-directed decisions can override automatic motor responses.
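The selection mechanism described above can be sketched abstractly: each parallel channel carries a habit strength, and a goal-directed bias can tip the competition so the winner's thalamic gate is disinhibited while the others stay suppressed. Channel counts, the additive combination rule, and the goal-bias signal are illustrative assumptions, not details of the spiking implementation.

```python
# Toy sketch of action selection through parallel cortico-basal ganglia
# "channels": winner-take-all over combined habitual and goal-directed drive.

def select_action(habit_strengths, goal_bias):
    """Disinhibit the channel with the highest combined drive."""
    drives = [h + g for h, g in zip(habit_strengths, goal_bias)]
    winner = max(range(len(drives)), key=lambda i: drives[i])
    # Off-channel inhibition: only the winner's thalamic gate opens
    gates = [1.0 if i == winner else 0.0 for i in range(len(drives))]
    return winner, gates

# After remapping, the old habit (channel 0) is still strong, but a
# goal-directed bias toward channel 1 overrides it:
action, _ = select_action([0.8, 0.3], goal_bias=[0.0, 0.7])
```

Without the bias the habitual channel wins, mirroring the suppression of overtrained responses the virtual-monkey task is designed to probe.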
► Modulation of the motor system due to noun processing occurs within 150 ms. ► Tool nouns are a special class because they imply both manipulation and use. ► Graspable natural nouns lose effectiveness in modulating the motor system with repetition.
While increasing evidence points to a critical role for the motor system in language processing, the focus of previous work has been on the linguistic category of verbs. Here we tested whether nouns are effective in modulating the motor system and, further, whether different kinds of nouns – those referring to artifacts or natural items, and items that are graspable or ungraspable – would differentially modulate the system. A Transcranial Magnetic Stimulation (TMS) study was carried out to compare modulation of the motor system when subjects read nouns referring to objects which are Artificial or Natural and which are Graspable or Ungraspable. TMS was applied to the primary motor cortex representation of the first dorsal interosseous (FDI) muscle of the right hand at 150 ms after noun presentation. Analyses of Motor Evoked Potentials (MEPs) revealed that across the duration of the task, nouns referring to graspable artifacts (tools) were associated with significantly greater MEP areas. Analyses of the initial presentation of items revealed a main effect of graspability. The findings are in line with an embodied view of nouns, with MEP measures modulated according to whether nouns referred to natural objects or artifacts (tools), confirming tools as a special class of items in motor terms. Additionally, our data support a difference for graspable versus non-graspable objects, an effect which for natural objects is restricted to the initial presentation of items.
Neuronflow is a neuromorphic, many-core, dataflow architecture that exploits brain-inspired concepts to deliver a scalable event-based processing engine for neural networks in Live AI applications. Its design is inspired by brain biology, but is not necessarily biologically plausible. The main design goal is the exploitation of sparsity to dramatically reduce latency and power consumption, as required by sensor processing at the Edge.
A growing body of evidence in cognitive science and neuroscience points towards the existence of a deep interconnection between cognition, perception and action. According to this embodied perspective, language is grounded in the sensorimotor system and language understanding is based on a mental simulation process (Jeannerod, 2007; Gallese, 2008; Barsalou, 2009). This means that during the comprehension of action words and sentences, the same perception, action, and emotion mechanisms engaged during interaction with objects are recruited. Among the neural underpinnings of this simulation process an important role is played by a sensorimotor matching system known as the mirror neuron system (Rizzolatti and Craighero, 2004). Despite a growing number of studies, the precise dynamics underlying the relation between language and action are not yet well understood. In fact, experimental studies are not always coherent, as some report that language processing interferes with action execution while others find facilitation. In this work we present a detailed neural network model capable of reproducing experimentally observed influences of the processing of action-related sentences on the execution of motor sequences. The proposed model is based on three main points. The first is that the processing of action-related sentences causes the resonance of motor and mirror neurons encoding the corresponding actions. The second is that there exists a varying degree of crosstalk between neuronal populations depending on whether they encode the same motor act, the same effector or the same action-goal. The third is the fact that neuronal populations' internal dynamics, which result from the combination of multiple processes taking place at different time scales, can facilitate or interfere with successive activations of the same or of partially overlapping pools.
Humans in particular, and to a lesser extent other animal species, possess the impressive capability of smoothly coordinating their actions with those of others. The great amount of work done in recent years in neuroscience has provided new insights into the processes involved in joint action, intention understanding, and task sharing. In particular, the discovery of mirror neurons, which fire both when animals execute actions and when they observe the same actions done by other individuals, has shed light on the intimate relationship between perception and action, elucidating the direct contribution of motor knowledge to action understanding. To date, however, a detailed description of the neural processes involved in these phenomena is still mostly lacking. Building upon data from single neuron recordings in monkeys observing the actions of a demonstrator and then executing the same or a complementary action, this paper describes the functioning of a biologically constrained neural network model of the motor and mirror systems during joint action. In this model, motor sequences are encoded as independent neuronal chains that represent concatenations of elementary motor acts leading to a specific goal. Action execution and recognition are achieved through the propagation of activity within specific chains. Due to the dual property of mirror neurons, the same architecture is capable of smoothly integrating and switching between observed and self-generated action sequences, thus allowing it to evaluate multiple hypotheses simultaneously, understand actions done by others, and respond in an appropriate way.