The absorption of impacts resulting from contact with a landing surface during gait, running and drop landings has received considerable attention in the literature. This research has important clinical relevance, as failure to appropriately plan and control impact absorption may lead to injuries to the musculoskeletal system. This review attempts to summarize evidence gathered by studies on the motor control aspects of impact absorption during landing movements. Although this review focuses primarily on the control of landings from self-initiated falls or ‘drop landings’, an understanding of the motor control mechanisms underlying impact absorption is essential to understand common anticipatory and reflex mechanisms involved in a broader variety of movements such as running and jumping. The review is structured in three parts: the first two parts examine the preparatory muscle activity occurring during the fall (Part I) and after touchdown (Part II). Part III explores the proposed sensorimotor mechanisms underlying the control of landing. The review concludes with as-yet-unresolved questions and directions for future research.
Dexterous manipulation relies on the ability to simultaneously attain two goals: controlling object position and orientation (pose) and preventing object slip. Although object manipulation has been extensively studied, most previous work has focused only on the control of digit forces for slip prevention. How humans coordinate digit forces to prevent object slip while simultaneously controlling object pose therefore remains underexplored. We developed a dexterous manipulation task requiring subjects to grasp and lift a sensorized object using different grasp configurations while preventing it from tilting. We decomposed digit forces into manipulation and grasp forces for pose control and slip prevention, respectively. By separating biomechanically obligatory from non-obligatory effects of grasp configuration, we found that subjects prioritized grasp stability over efficiency in grasp force control. Furthermore, grasp force was controlled in an anticipatory fashion at object lift onset, whereas manipulation force was modulated following acquisition of somatosensory and visual feedback of the object's dynamics throughout object lift. Mathematical modeling of feasible manipulation forces further confirmed that subjects could not accurately anticipate the required manipulation force prior to acquisition of sensory feedback. Our experimental approach and findings open new research avenues for investigating neural mechanisms underlying dexterous manipulation and biomedical applications.
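The decomposition described above can be illustrated with a minimal sketch for a two-digit precision grasp: the net (manipulation) force is the vector sum of the digit forces and acts on the object's pose, while the internal (grasp) force is the equal-and-opposite squeezing component that prevents slip. The function name and the simplification to two opposing digits are assumptions for illustration, not the study's actual decomposition pipeline.

```python
import numpy as np

def decompose_digit_forces(f_thumb, f_finger):
    """Split two opposing 3D digit force vectors (object frame) into a
    net 'manipulation' component and an internal 'grasp' component.
    Hypothetical two-digit simplification for illustration only."""
    manipulation = f_thumb + f_finger      # net force acting on the object
    internal = 0.5 * (f_thumb - f_finger)  # equal-and-opposite squeezing part
    return manipulation, internal

# Thumb pushes +x with 5 N, finger pushes -x with 4 N; both lift 1 N in z.
m, g_int = decompose_digit_forces(np.array([5.0, 0.0, 1.0]),
                                  np.array([-4.0, 0.0, 1.0]))
```

Here the 1 N mismatch along x shows up entirely in the manipulation component (it would tilt or translate the object), while the shared opposing force shows up in the internal component.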
Despite longstanding evidence suggesting a relation between action and perception, the mechanisms underlying their integration are still unclear. It has been proposed that, to simplify the sensorimotor integration processes underlying active perception, the central nervous system (CNS) selects patterns of movements aimed at maximizing sampling of task-related sensory input. While previous studies investigated the action-perception loop focusing on the role of higher-level features of motor behavior (e.g., kinematic invariants, effort), the present study explored and quantified the contribution of lower-level organization of motor control. We tested the hypothesis that the coordinated recruitment of groups of muscles (i.e., motor modules) engaged to counteract an external force contributes to participants' perception of that force. We found that: 1) a model describing the modulation of a subset of motor modules involved in the motor task accounted for about 70% of participants' perceptual variance; 2) an alternative model, incompatible with the motor modules hypothesis, accounted for significantly lower variance of participants' detection performance. Our results provide empirical evidence of the potential role played by muscle activation patterns in active perception of force. They also suggest that a modular organization of motor control may mediate not only coordination of multiple muscles, but also perceptual inference.
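Motor modules of the kind invoked above are commonly extracted by factorizing a non-negative EMG matrix into a small set of muscle weighting vectors and their activation time courses, typically via non-negative matrix factorization. The sketch below implements the standard Lee-Seung multiplicative updates from scratch; the function name and parameters are illustrative assumptions, not the study's actual analysis.

```python
import numpy as np

def extract_motor_modules(emg, n_modules, n_iter=2000, seed=0):
    """Factorize a non-negative EMG matrix (muscles x samples) into
    modules W (muscles x n_modules) and activations H (n_modules x samples)
    using multiplicative-update NMF. Illustrative sketch only."""
    rng = np.random.default_rng(seed)
    n_muscles, n_samples = emg.shape
    W = rng.random((n_muscles, n_modules)) + 1e-6
    H = rng.random((n_modules, n_samples)) + 1e-6
    for _ in range(n_iter):
        H *= (W.T @ emg) / (W.T @ W @ H + 1e-12)  # update activations
        W *= (emg @ H.T) / (W @ H @ H.T + 1e-12)  # update module weights
    return W, H

# Synthetic example: 8 muscles generated by 2 underlying modules.
rng = np.random.default_rng(1)
emg = rng.random((8, 2)) @ rng.random((2, 50))
W, H = extract_motor_modules(emg, n_modules=2)
```

With data that truly lie in a low-dimensional non-negative subspace, the reconstruction W @ H closely approximates the original EMG matrix.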
An object can be used in multiple contexts, each requiring different hand actions. How the central nervous system builds and maintains memory of such dexterous manipulations remains unclear. We conducted experiments in which human subjects had to learn and recall manipulations performed in two contexts, A and B. Both contexts involved lifting the same L-shaped object whose geometry cued its asymmetrical mass distribution. Correct performance required producing a torque on the vertical handle at object lift onset to prevent it from tilting. The torque direction depended on the context, i.e., object orientation, which was changed by 180° object rotation about a vertical axis. With an A1B1A2 context switching paradigm, subjects learned A1 in the first block of eight trials as indicated by a torque approaching the required one. However, subjects made large errors in anticipating the required torque when switching to B1 immediately after A1 (negative transfer), as well as when they had to recall A1 when switching to A2 after learning B through another block of eight lifts (retrieval interference). Classic sensorimotor learning theories attribute such interferences to multi-rate, multi-state error-driven updates of internal models. However, by systematically changing the interblock break duration and within-block number of trials, our results suggest an alternative explanation underlying interference and retention of dexterous manipulation. Specifically, we identified and quantified through a novel computational model the nonlinear interaction between two sensorimotor mechanisms: a short-lived, context-independent, use-dependent sensorimotor memory and a context-sensitive, error-based learning process.
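The "multi-rate, multi-state error-driven" account mentioned above is typically formalized as a two-state model in which a fast and a slow process each learn from the same trial error at different rates. The sketch below reproduces that classic structure to show how negative transfer arises at a context switch; the retention/learning parameter values are illustrative assumptions, not fitted values from this study.

```python
def two_state_adaptation(targets, Af=0.59, Bf=0.21, As=0.992, Bs=0.02):
    """Classic two-state (fast/slow) error-driven learning model.
    Af, As: retention factors; Bf, Bs: learning rates (illustrative values).
    Returns the motor output predicted on each trial."""
    xf = xs = 0.0
    outputs = []
    for target in targets:
        x = xf + xs          # total output is the sum of both states
        outputs.append(x)
        error = target - x   # trial error drives both updates
        xf = Af * xf + Bf * error
        xs = As * xs + Bs * error
    return outputs

# Eight trials with required torque +1 (context A), then a switch to -1 (B):
# the adapted output persists at the switch, producing negative transfer.
out = two_state_adaptation([+1.0] * 8 + [-1.0] * 8)
```

At trial 9 (first B trial) the model still produces a positive torque learned in A, which is the signature of negative transfer that error-driven accounts predict.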
The term ‘synergy’ – from the Greek synergia – means ‘working together’. The concept of multiple elements working together towards a common goal has been extensively used in neuroscience to develop theoretical frameworks, experimental approaches, and analytical techniques to understand neural control of movement, and for applications in neurorehabilitation. In the past decade, roboticists have successfully applied the framework of synergies to create novel design and control concepts for artificial hands, i.e., robotic hands and prostheses. At the same time, robotic research on the sensorimotor integration underlying the control and sensing of artificial hands has inspired new research approaches in neuroscience, and has provided useful instruments for novel experiments.
The ambitious goal of integrating expertise and research approaches in robotics and neuroscience to study the properties and applications of the concept of synergies is generating a number of multidisciplinary cooperative projects, among which is the recently concluded four-year European project “The Hand Embodied” (THE). This paper reviews the main insights provided by this framework. Specifically, we provide an overview of the neuroscientific bases of hand synergies and introduce how robotics has leveraged the insights from neuroscience for innovative design in hardware and controllers for biomedical engineering applications, including myoelectric hand prostheses, devices for haptics research, and wearable sensing of human hand kinematics. The review also emphasizes how this multidisciplinary collaboration has generated new ways to conceptualize a synergy-based approach for robotics, and provides guidelines and principles for analyzing human behavior and synthesizing artificial robotic systems based on a theory of synergies.
• Synergies have been extensively studied by neuroscientists to understand the control of multi-joint movements.
• Synergies are thought to emerge from the interaction of neural and biomechanical factors.
• The field of robotics has recently exploited the concept of synergies to design and build artificial hands.
• Neuroscience has leveraged recent advances in robotics research by using novel tools and approaches to study hand control.
The hand is one of the most fascinating and sophisticated biological motor systems. The complex biomechanical and neural architecture of the hand poses challenging questions for understanding the control strategies that underlie the coordination of finger movements and forces required for a wide variety of behavioral tasks, ranging from multidigit grasping to the individuated movements of single digits. Hence, a number of experimental approaches, from studies of finger movement kinematics to the recording of electromyographic and cortical activities, have been used to extend our knowledge of neural control of the hand. Experimental evidence indicates that the simultaneous motion and force of the fingers are characterized by coordination patterns that reduce the number of independent degrees of freedom to be controlled. Peripheral and central constraints in the neuromuscular apparatus have been identified that may in part underlie these coordination patterns, simplifying the control of multi-digit grasping while placing certain limitations on individuation of finger movements. We review this evidence, with a particular emphasis on how these constraints extend through the neuromuscular system from the behavioral aspects of finger movements and forces to the control of the hand from the motor cortex.
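The dimensionality reduction described above is often quantified by applying principal component analysis to hand joint-angle data: a few "postural synergies" typically capture most of the variance across grasps. A minimal numpy-only PCA sketch, with hypothetical function and variable names:

```python
import numpy as np

def postural_synergies(angles, n_pc=2):
    """PCA on a trials x joints matrix of hand joint angles.
    Returns the fraction of variance captured by the first n_pc
    components and the components themselves. Illustrative sketch."""
    X = angles - angles.mean(axis=0)          # center each joint angle
    _, s, Vt = np.linalg.svd(X, full_matrices=False)
    var = s ** 2                              # variance per component
    return var[:n_pc].sum() / var.sum(), Vt[:n_pc]

# Synthetic example: 15 joint angles driven by only 2 latent postures.
rng = np.random.default_rng(1)
latent = rng.normal(size=(100, 2))            # 100 grasps, 2 latent dims
angles = latent @ rng.normal(size=(2, 15)) + 0.01 * rng.normal(size=(100, 15))
frac, comps = postural_synergies(angles, n_pc=2)
```

When the data truly occupy a low-dimensional subspace, as in this synthetic example, the first two components account for nearly all of the variance, mirroring the coordination patterns reported for real grasping data.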
During manipulation, object slipping is prevented by modulating the grip force (GF) in synchrony with motion-related inertial forces, i.e., the load force (LF). However, due to conduction delays of the sensory system, GF must be modulated in advance based on predictions of LF changes. It has been proposed that such predictive force control relies on internal representations, i.e., internal models, of the relation between the dynamics of the environment and movement kinematics. Somatosensory and visual feedback play a primary role in building these internal representations. For instance, it has been shown that manipulation-dependent somatosensory signals contribute to building internal representations of gravity in normal and altered gravitational contexts. Furthermore, delaying the timing of visual feedback of object displacement has been shown to affect GF. Here, we explored whether, and to what extent, spatial features of the visual feedback of object motion, such as motion direction, may contribute to GF control. If this were the case, a spatial mismatch between actual (somatosensory) and visual feedback of object motion would elicit changes in GF modulation. We tested this hypothesis by asking participants to generate vertical object movements while visual feedback of object position was congruent (0° rotation) or incongruent (180° or 90°) with the actual object displacement. The role of vision in GF control was quantified by the temporal shift of grip force modulation as a function of visual feedback orientation and actual object motion direction. GF control was affected by visual feedback when it was incongruent in the vertical (180°), but not the horizontal, dimension. Importantly, 180° visual feedback rotation delayed and advanced GF modulation during upward and downward actual movements, respectively. Our findings suggest that during manipulation, spatial features of visual feedback motion are used to predict upcoming LF changes. Furthermore, the present study provides evidence that an internal model of gravity contributes to GF control by influencing sensory reweighting processes during object manipulation.
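The GF-LF coupling at the core of this abstract can be made concrete with a simple slip-limit calculation: for a vertically accelerated object held in a two-digit precision grip, LF = m(g + a), and the grip force must exceed LF divided by twice the friction coefficient, plus a safety margin. The function name, the two-digit simplification, and the margin value are illustrative assumptions.

```python
import numpy as np

def required_grip_force(mass, accel, mu, safety_margin=0.3, g=9.81):
    """Minimum grip (normal) force to prevent slip of an object held
    between two opposing digits during vertical movement.
    LF = m*(g + a); slip limit GF >= |LF| / (2*mu); a safety margin is
    added, mirroring the anticipatory GF-LF coupling described above."""
    load = mass * (g + np.asarray(accel))        # load force in newtons
    return (1 + safety_margin) * np.abs(load) / (2 * mu)

# 0.3 kg object, sinusoidal vertical acceleration, skin-surface mu ~ 0.8.
t = np.linspace(0, 1, 100)
a = 2.0 * np.sin(2 * np.pi * t)
gf = required_grip_force(0.3, a, mu=0.8)
```

Because LF peaks when upward acceleration peaks, the required GF peaks at the same moment; predictive control means the CNS produces this GF peak without waiting for delayed sensory confirmation of the LF change.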
Roughly one quarter of active upper limb prosthetic technology is rejected by users, and user surveys have identified key areas requiring improvement: function, comfort, cost, durability, and appearance. Here we present the first systematic, clinical assessment of a novel prosthetic hand, the SoftHand Pro (SHP), in participants with transradial amputation and age-matched, limb-intact participants. The SHP is a robust and functional prosthetic hand that minimizes cost and weight using an underactuated design with a single motor. Participants with limb loss were evaluated on functional clinical measures before and after a 6-8 hour training period with the SHP as well as with their own prosthesis; limb-intact participants were tested only before and after SHP training. Participants with limb loss also evaluated their own prosthesis and the SHP (following training) using subjective questionnaires. Both objective and subjective results were positive and illuminated the strengths and weaknesses of the SHP. In particular, pre-training results show the SHP is easy to use, and significant improvement in the Activities Measure for Upper Limb Amputees in both groups following the 6-8 hour training highlights the ease of learning the unique features of the SHP (median improvement: 4.71 and 3.26, p = 0.009 and 0.036, for the limb loss and limb-intact groups, respectively). Further, we found no difference in performance compared to participants' own commercial devices in several clinical measures, and found performance surpassing these devices on two functional tasks, buttoning a shirt and using a cell phone, suggesting a functional prosthetic design. Finally, improvements are needed in the SHP design and/or training in light of poor results in small object manipulation. Taken together, these results show the promise of the SHP, a flexible and adaptive prosthetic hand, and pave a path toward ensuring higher functionality in future designs.
Options currently available to individuals with upper limb loss range from prosthetic hands that can perform many movements, but require more cognitive effort to control, to simpler terminal devices with limited functional abilities. We attempted to address this issue by designing a myoelectric control system to modulate prosthetic hand posture and digit force distribution.
We recorded surface electromyographic (EMG) signals from five forearm muscles in eight able-bodied subjects while they modulated hand posture and the flexion force distribution of individual fingers. We used a support vector machine (SVM) and a random forest regression (RFR) to map EMG signal features to hand posture and individual digit forces, respectively. After training, subjects performed grasping tasks and hand gestures while a computer program computed and displayed online feedback of which digits were flexed and the magnitude of their contact forces. We also used a commercially available prosthetic hand, the i-Limb (Touch Bionics), to provide a practical demonstration of the proposed approach's ability to control hand posture and finger forces.
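The two-stage decoding pipeline described above — an SVM classifying hand posture and a random forest regressing digit forces from the same EMG features — can be sketched with scikit-learn on synthetic data. The feature construction and channel activation patterns below are made up for illustration; only the classifier/regressor pairing follows the abstract.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Hypothetical EMG features: one amplitude feature per each of 5 channels,
# with two "postures" showing distinct channel activation patterns.
X0 = rng.normal([1, 0, 0, 0, 1], 0.1, size=(100, 5))  # posture 0 trials
X1 = rng.normal([0, 1, 1, 0, 0], 0.1, size=(100, 5))  # posture 1 trials
X = np.vstack([X0, X1])
y_pose = np.array([0] * 100 + [1] * 100)

# Stage 1: SVM maps EMG features to a discrete hand posture.
clf = SVC(kernel="rbf").fit(X, y_pose)

# Stage 2: random forest regresses a continuous digit force
# (here a made-up linear function of the features plus noise).
y_force = X @ np.array([2.0, 1.0, 0.5, 0.0, 1.5]) + rng.normal(0, 0.05, 200)
reg = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y_force)
```

Splitting posture (discrete) from force (continuous) lets each decoder use the model class suited to its output, which is the design choice the abstract's SVM/RFR pairing reflects.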
Subjects could control hand pose and force distribution across the fingers during online testing. Decoding success rates ranged from 60% for index-finger pointing to 83-99% for two-digit grasp and the resting state. Subjects could also modulate finger force distribution.
This work provides a proof of concept for the application of SVM and RFR for online control of hand posture and finger force distribution, respectively. Our approach has potential applications for enabling in-hand manipulation with a prosthetic hand.