The hippocampus plays a central role in spatial representation and in declarative and episodic memory. In this area, so-called place cells possess high spatial selectivity, firing preferentially when the individual is within a small area of the environment. Interestingly, it has been found in rats that these cells can also be active when the animal is outside the location or context of their corresponding place field, producing so-called “forward sweeps”. These typically occur at decision points during task execution and seem to be used, among other things, to evaluate potential alternative paths. Anticipatory firing is also found in the ventral striatum, a brain area that is strongly interconnected with the hippocampus and is known to encode value and reward. In this paper, we describe a biologically based computational model of the hippocampal-ventral striatum circuit that implements a goal-directed mechanism of choice, with the hippocampus primarily involved in the mental simulation of possible navigation paths and the ventral striatum in the evaluation of the associated reward expectancies. The model is validated in a navigation task in which a rat is placed in a complex maze with multiple rewarding sites. We show that the rat mentally activates place cells to simulate paths, estimate their value, and make decisions, implementing two essential processes of model-based reinforcement learning algorithms of choice: look-ahead prediction and the evaluation of predicted states.
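The two processes named above can be illustrated with a minimal sketch, not the paper's actual code: candidate paths are "swept" forward from a decision point and the discounted value of each path's endpoint is compared. The maze layout, reward values, and discount factor below are toy assumptions.

```python
# Illustrative sketch of model-based look-ahead choice (assumed values).
GAMMA = 0.9  # discount factor (assumption)

# Hypothetical maze: each candidate path is a sequence of place-cell
# "states" ending at a (possibly rewarded) site.
paths = {
    "left":  ["p1", "p2", "reward_A"],
    "right": ["p3", "p4", "p5", "reward_B"],
}

# Learned reward expectancies (a ventral-striatum-like value table).
values = {"reward_A": 1.0, "reward_B": 2.0}

def sweep_value(path):
    """Mentally sweep along a path and return the discounted value
    of the state reached at its end (0 for unrewarded sites)."""
    return GAMMA ** (len(path) - 1) * values.get(path[-1], 0.0)

def choose(paths):
    """Evaluate every candidate path and pick the most valuable one."""
    return max(paths, key=lambda name: sweep_value(paths[name]))

print(choose(paths))  # "right": longer path, but larger discounted reward
```

Note that the longer path wins here because its larger reward outweighs the extra discounting; with a smaller discount factor the choice would flip, which is the trade-off the look-ahead evaluation resolves.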
Humans are very efficient in learning new skills through imitation and social interaction with other individuals. Recent experimental findings on the functioning of the mirror neuron system in humans and animals, and on the coding of intentions, have led to the development of more realistic and powerful models of action understanding and imitation. This paper describes the implementation on a humanoid robot of a spiking neuron model of the mirror system. The proposed architecture is validated in an imitation task in which the robot has to observe and understand manipulative action sequences executed by a human demonstrator and reproduce them on demand using its own motor repertoire. To instruct the robot on what to observe and learn, and when to imitate, the demonstrator uses a simple form of sign language. Two basic principles underlie the functioning of the system: 1) imitation is primarily directed toward reproducing the goals of observed actions rather than the exact hand trajectories; and 2) the capacity to understand the motor intentions of another individual is based on the resonance of the same neural populations that are active during action execution. Experimental findings show that the use of even a very simple form of gesture-based communication makes it possible to develop robotic architectures that are efficient, simple, and user-friendly.
Recent experimental evidence indicates that animals can use mental simulation to make decisions about the actions to take during goal-directed navigation. The principal brain areas found to be active during this process are the hippocampus, the ventral striatum and the sensory-motor cortex. In this paper, we present a computational model that includes biological aspects of this circuit and explains mechanistically how it may be used to imagine and evaluate future events. Its most salient characteristic is that choices about actions are made by simulating movements and their sensory effects using the same brain areas that are active during overt execution. More precisely, the simulation of an action (e.g., walking) creates a new sensory pattern that is evaluated in the same way as real inputs. The model is validated in a navigation task in which a simulated rat is placed in a complex maze. We show that hippocampal and striatal cells are activated to simulate paths, to retrieve their estimated value and to make decisions. We link these results with a general framework that sees the brain as a predictive device that can ‘detach’ itself from the here-and-now of current perception using mechanisms such as episodic memories, motor and visual imagery.
Recent theories of mindreading explain the recognition of action, intention, and belief of other agents in terms of generative architectures that model the causal relations between observables (e.g., observed movements) and their hidden causes (e.g., action goals and beliefs). Two kinds of probabilistic generative schemes have been proposed in cognitive science and robotics that link to a “theory theory” and “simulation theory” of mindreading, respectively. The former compares perceived actions to optimal plans derived from rationality principles and conceptual theories of others’ minds. The latter reuses one’s own internal (inverse and forward) models for action execution to perform a look-ahead mental simulation of perceived actions. Both theories, however, leave one question unanswered: how are the generative models – including task structure and parameters – learned in the first place? We start from Dennett’s “intentional stance” proposal and characterize it within generative theories of action and intention recognition. We propose that humans use an intentional stance as a learning bias that sidesteps the (hard) structure learning problem and bootstraps the acquisition of generative models for others’ actions. The intentional stance corresponds to a candidate structure in the generative scheme, which encodes a simplified belief-desire folk psychology and a hierarchical intention-to-action organization of behavior. This simple structure can be used as a proxy for the “true” generative structure of others’ actions and intentions and is continuously grown and refined – via state and parameter learning – during interactions. In turn – as our computational simulations show – this can help solve mindreading problems and bootstrap the acquisition of useful causal models of both one’s own and others’ goal-directed actions.
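The core inference step of such a generative scheme can be sketched, under toy assumptions, as Bayesian inversion: a hidden goal is inferred from observed movements by combining a belief-desire prior with forward-model likelihoods. The goals, priors, and likelihood values below are illustrative stand-ins for learned structure and parameters, not the model described in the paper.

```python
# Minimal sketch of generative intention recognition via Bayes' rule.
# All numerical values are toy assumptions.

priors = {"grasp": 0.5, "point": 0.5}  # prior belief over hidden goals

# Likelihood of each observed movement under each goal (forward model).
likelihood = {
    "grasp": {"reach": 0.6, "close_hand": 0.9},
    "point": {"reach": 0.7, "close_hand": 0.1},
}

def infer_goal(observations, priors, likelihood):
    """Return the posterior over goals after a sequence of movements."""
    post = dict(priors)
    for obs in observations:
        for g in post:
            post[g] *= likelihood[g][obs]  # Bayesian update
        z = sum(post.values())
        post = {g: p / z for g, p in post.items()}  # renormalize
    return post

posterior = infer_goal(["reach", "close_hand"], priors, likelihood)
print(max(posterior, key=posterior.get))  # "grasp"
```

After the ambiguous "reach" the posterior slightly favors "point", but the diagnostic "close_hand" observation reverses it; this incremental sharpening is what the paper's state and parameter learning operates on at a much larger scale.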
There is wide consensus that the prefrontal cortex (PFC) is able to exert cognitive control on behavior by biasing processing toward task-relevant information and by modulating response selection. This idea is typically framed in terms of top-down influences within a cortical control hierarchy, where prefrontal-basal ganglia loops gate multiple input-output channels, which in turn can activate or sequence motor primitives expressed in (pre-)motor cortices. Here we advance a new hypothesis, based on the notion of programmability and an interpreter-programmer computational scheme, on how the PFC can flexibly bias the selection of sensorimotor patterns depending on internal goals and task contexts. In this approach, multiple elementary behaviors representing motor primitives are expressed by a single multi-purpose neural network, which is seen as a reusable area of "recycled" neurons (the interpreter). The PFC thus acts as a "programmer" that, without modifying the network connectivity, feeds the interpreter network with specific input parameters encoding the programs (corresponding to network structures) to be interpreted by the (pre-)motor areas. Our architecture is validated in a standard test for executive function: the 1-2-AX task. Our results show that this computational framework provides a robust, scalable and flexible scheme that can be iterated at different hierarchical layers, supporting the realization of multiple goals. We discuss the plausibility of the "programmer-interpreter" scheme to explain the functioning of prefrontal-(pre)motor cortical hierarchies.
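For readers unfamiliar with the benchmark, the 1-2-AX task itself can be stated compactly: a digit (1 or 2) sets an outer context that must be held in working memory; in context 1 the target inner pair is A followed by X, in context 2 it is B followed by Y, and the correct response is "R" (right) on a target and "L" (left) otherwise. The sketch below encodes only the task's correct-response rule, not the paper's neural architecture, and the sample stimulus stream is an assumption.

```python
# Reference implementation of the 1-2-AX task's correct-response rule.
def run_12ax(stream):
    """Return the correct response ("R" or "L") for each stimulus."""
    context, prev, out = None, None, []
    for s in stream:
        if s in "12":
            context = s          # digit updates the outer context
            out.append("L")
        else:
            target = (context == "1" and prev == "A" and s == "X") or \
                     (context == "2" and prev == "B" and s == "Y")
            out.append("R" if target else "L")
        prev = s
    return out

print(run_12ax(["1", "A", "X", "2", "B", "Y", "A", "X"]))
# → ['L', 'L', 'R', 'L', 'L', 'R', 'L', 'L']
```

The final A-X pair is correctly rejected because the outer context is still 2; solving this nesting of slow (digit) and fast (letter) dependencies is what makes the task a standard probe of hierarchical cognitive control.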
A growing body of evidence in cognitive psychology and neuroscience suggests a deep interconnection between sensory-motor and language systems in the brain. Based on recent neurophysiological findings on the anatomo-functional organization of the fronto-parietal network, we present a computational model showing that language processing may have reused or co-developed organizing principles, functionality, and learning mechanisms typical of the premotor circuit. The proposed model combines principles of Hebbian topological self-organization and prediction learning. Trained on sequences of either motor or linguistic units, the network develops independent neuronal chains, formed by dedicated nodes encoding only context-specific stimuli. Moreover, neurons responding to the same stimulus or class of stimuli tend to cluster together to form topologically connected areas similar to those observed in the cerebral cortex. Simulations support a unitary explanatory framework reconciling neurophysiological motor data with established behavioral evidence on lexical acquisition, access, and recall.
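The notion of "dedicated nodes encoding only context-specific stimuli" can be illustrated with a toy sketch, which stands in for the Hebbian/prediction network rather than reproducing it: a node is allocated for each (preceding-node, stimulus) transition, so a stimulus shared by two sequences is represented by distinct nodes in each chain.

```python
# Toy sketch of context-specific sequence chains (an assumption-level
# stand-in for the Hebbian self-organizing network in the paper).
def grow_chains(sequences):
    """Allocate one node per (previous-node, stimulus) transition."""
    nodes = {}   # (prev_node, stimulus) -> node id
    chains = []
    for seq in sequences:
        prev, chain = None, []
        for s in seq:
            key = (prev, s)
            if key not in nodes:
                nodes[key] = len(nodes)  # dedicated node for this context
            prev = nodes[key]
            chain.append(prev)
        chains.append(chain)
    return nodes, chains

# "a" and "t" occur in two sequential contexts, so each gets two nodes.
nodes, chains = grow_chains([["b", "a", "t"], ["c", "a", "t"]])
print(len(nodes))  # 6 distinct context-specific nodes
```

In the full model such context-specific nodes additionally cluster topologically by the stimulus they respond to, which the sketch omits.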