Structured piezoresistive membranes are compelling building blocks for wearable bioelectronics. However, the poor structural compressibility of conventional microstructures leads to rapid saturation of the detection range and low sensitivity of piezoresistive devices, limiting their commercial applications. Herein, a bioinspired MXene‐based piezoresistive device is reported, which effectively boosts sensitivity while broadening the response range by architecting intermittent villus‐like microstructures. Benefitting from the two‐stage amplification effect of this intermittent architecture, the developed MXene‐based piezoresistive bioelectronics exhibit a high sensitivity of 461 kPa⁻¹ and a broad pressure detection range of up to 311 kPa, which are about 20 and 5 times higher, respectively, than those of homogeneous microstructures. Combined with a deep‐learning algorithm, the designed bioelectronics can effectively capture complex human movements and precisely identify human motion with a high recognition accuracy of 99%. Evidently, this biomimetic intermittent‐architecture strategy may pave a promising avenue to overcoming the limitations of rapid saturation and low sensitivity in piezoresistive bioelectronics, and provides a general way to promote their large‐scale application.
A villus‐inspired MXene‐based pressure sensor is developed for motion capture. Utilizing the two‐stage enhancement of the intermittent architecture and a large‐scale fabrication process, the sensor offers piezoresistive bioelectronics a promising way to overcome the rapid detection‐range saturation and low sensitivity of conventional microstructure‐based sensors, marking a solid advance toward commercial bioelectronics.
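For reference, the sensitivity figure quoted above (461 kPa⁻¹) follows the convention standard in the piezoresistive-sensor literature: sensitivity is the slope of the relative electrical response versus applied pressure. A sketch of that standard definition (symbols here are the generic ones, not taken from this paper):

```latex
% Conventional sensitivity of a piezoresistive pressure sensor:
% relative current (or resistance) change per unit pressure.
S = \frac{\delta\left(\Delta I / I_0\right)}{\delta P}
```

where $I_0$ is the baseline current, $\Delta I$ the current change under load, and $P$ the applied pressure; the units of $S$ are kPa⁻¹, matching the figure reported above.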
(1) Background: Marker-based 3D motion capture systems (MBS) are considered the gold standard in gait analysis. However, they have limitations for which markerless camera-based 3D motion capture systems (MCBS) could provide a solution. The aim of this systematic review and meta-analysis is to compare the accuracy, validity, and reliability of MCBS and MBS. (2) Methods: A total of 2047 papers were systematically searched according to PRISMA guidelines on 7 February 2024 in two databases: PubMed (1339) and WoS (708). The COSMIN tool and EBRO guidelines were used to assess risk of bias and level of evidence. (3) Results: After full-text screening, 22 papers were included. Spatiotemporal parameters showed overall good to excellent accuracy, validity, and reliability. For kinematic variables, the hip and knee showed moderate to excellent agreement between the systems, while for the ankle joint, poor concurrent validity and reliability were measured. The accuracy and concurrent validity of walking speed were considered excellent in all cases, with only a small bias. The meta-analysis of the inter-rater reliability and concurrent validity of walking speed, step time, and step length resulted in a good-to-excellent intraclass correlation coefficient (ICC) (0.81; 0.98). (4) Discussion and conclusions: MCBS are comparable to MBS in terms of accuracy, concurrent validity, and reliability for spatiotemporal parameters. Additionally, kinematic parameters for the hip and knee in the sagittal plane are considered most valid and reliable, but measurement outcomes in the transverse and frontal planes lack validity and accuracy. Customization and standardization of methodological procedures are necessary for future research to adequately compare protocols in clinical settings, with more attention to patient populations.
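The inter-rater reliability statistic used above, the intraclass correlation coefficient, can be computed from a two-way ANOVA decomposition. The sketch below implements the two-way random-effects, absolute-agreement, single-measure form ICC(2,1); it is an illustration of the statistic, not the review's own analysis pipeline, and established packages (e.g. pingouin) would normally be used instead.

```python
import numpy as np

def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single measure.

    ratings: array of shape (n_subjects, k_raters).
    """
    Y = np.asarray(ratings, dtype=float)
    n, k = Y.shape
    grand = Y.mean()
    row_means = Y.mean(axis=1)   # per-subject means
    col_means = Y.mean(axis=0)   # per-rater means

    # Two-way ANOVA sums of squares.
    ss_rows = k * ((row_means - grand) ** 2).sum()   # between subjects
    ss_cols = n * ((col_means - grand) ** 2).sum()   # between raters
    ss_total = ((Y - grand) ** 2).sum()
    ss_err = ss_total - ss_rows - ss_cols            # residual

    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))

    # Shrout & Fleiss ICC(2,1).
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Perfect agreement between two raters yields ICC = 1.
print(icc_2_1([[1, 1], [2, 2], [3, 3]]))  # -> 1.0
```

Values above roughly 0.9 are conventionally read as "excellent" and 0.75–0.9 as "good", which is the scale the (0.81; 0.98) range above is judged against.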
Embodied hands. Romero, Javier; Tzionas, Dimitrios; Black, Michael J. ACM Transactions on Graphics, 12/2017, Volume 36, Issue 6. Journal Article; Peer reviewed; Open access.
Humans move their hands and bodies together to communicate and solve tasks. Capturing and replicating such coordinated activity is critical for virtual characters that behave realistically. Surprisingly, most methods treat the 3D modeling and tracking of bodies and hands separately. Here we formulate a model of hands and bodies interacting together and fit it to full-body 4D sequences. When scanning or capturing the full body in 3D, hands are small and often partially occluded, making their shape and pose hard to recover. To cope with low resolution, occlusion, and noise, we develop a new model called MANO (hand Model with Articulated and Non-rigid defOrmations). MANO is learned from around 1000 high-resolution 3D scans of hands of 31 subjects in a wide variety of hand poses. The model is realistic, low-dimensional, captures non-rigid shape changes with pose, is compatible with standard graphics packages, and can fit any human hand. MANO provides a compact mapping from hand poses to pose blend shape corrections and a linear manifold of pose synergies. We attach MANO to a standard parameterized 3D body shape model (SMPL), resulting in a fully articulated body and hand model (SMPL+H). We illustrate SMPL+H by fitting complex, natural activities of subjects captured with a 4D scanner. The fitting is fully automatic and results in full-body models that move naturally, with detailed hand motions and a realism not seen before in full-body performance capture. The models and data are freely available for research purposes at http://mano.is.tue.mpg.de.
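The linear structure this abstract describes (a template mesh corrected by shape blend shapes and pose-dependent blend shapes) can be sketched in a few lines of numpy. Everything below is illustrative: the array names, the random stand-in parameters, and the simplified pose feature are assumptions, not the released MANO code, and the articulation/skinning of the corrected mesh is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes; the real model's dimensions come from its release files.
n_verts, n_shape, n_pose = 778, 10, 45

# Random stand-ins for the learned model arrays:
template = rng.standard_normal((n_verts, 3))             # mean hand mesh
shape_dirs = rng.standard_normal((n_verts, 3, n_shape))  # shape blend shapes
pose_dirs = rng.standard_normal((n_verts, 3, n_pose))    # pose blend shapes

def hand_vertices(beta, theta):
    """Rest-pose vertices = template + shape offsets + pose-dependent
    corrective offsets -- the linear mapping from pose to blend-shape
    corrections described in the abstract. (Real models derive the pose
    feature from joint rotation matrices; theta is used directly here.)"""
    return (template
            + shape_dirs @ beta      # (V,3,S) @ (S,) -> (V,3)
            + pose_dirs @ theta)     # (V,3,P) @ (P,) -> (V,3)

# With zero shape and pose parameters, both corrections vanish:
verts = hand_vertices(np.zeros(n_shape), np.zeros(n_pose))
assert np.allclose(verts, template)
```

The point of this structure is compactness: a full hand mesh is driven by a low-dimensional parameter vector, which is what makes fitting to noisy, low-resolution 4D scans tractable.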
We present an approach to capture the 3D motion of a group of people engaged in a social interaction. The core challenges in capturing social interactions are: (1) occlusion is functional and frequent; (2) subtle motion needs to be measured over a space large enough to host a social group; (3) human appearance and configuration variation is immense; and (4) attaching markers to the body may prime the nature of interactions. The Panoptic Studio is a system organized around the thesis that social interactions should be measured through the integration of perceptual analyses over a large variety of viewpoints. We present a modularized system designed around this principle, consisting of integrated structural, hardware, and software innovations. The system takes, as input, 480 synchronized video streams of multiple people engaged in social activities, and produces, as output, the labeled time-varying 3D structure of anatomical landmarks on individuals in the space. Our algorithm is designed to fuse the "weak" perceptual processes in the large number of views by progressively generating skeletal proposals from low-level appearance cues, and a framework for temporal refinement is also presented by associating body parts with a reconstructed dense 3D trajectory stream. Our system and method are the first to reconstruct the full-body motion of more than five people engaged in social interactions without using markers. We also empirically demonstrate the impact of the number of views in achieving this goal.
Hand motion capture is a popular research field, recently gaining more attention due to the ubiquity of RGB-D sensors. However, even the most recent approaches focus on the case of a single isolated hand. In this work, we focus on hands that interact with other hands or objects and present a framework that successfully captures motion in such interaction scenarios for both rigid and articulated objects. Our framework combines a generative model with discriminatively trained salient points to achieve a low tracking error, and with collision detection and physics simulation to achieve physically plausible estimates even in case of occlusions and missing visual data. Since all components are unified in a single objective function which is almost everywhere differentiable, it can be optimized with standard optimization techniques. Our approach works for monocular RGB-D sequences as well as setups with multiple synchronized RGB cameras. For a qualitative and quantitative evaluation, we captured 29 sequences with a large variety of interactions and up to 150 degrees of freedom.
In the 21st century, much progress has followed Industry 4.0, and the field of animation has likewise absorbed these technological advances. Information is often difficult to convey to users accurately and interactively in an attractive, convenient way; animation and motion capture are therefore among the best options available today. The results of this project can have a substantial impact on the animation industry by making 2D animated video more engaging.
Ground reaction forces and moments (GRF&M) are important measures used as input in biomechanical analysis to estimate joint kinetics, which are often used to infer information for many musculoskeletal diseases. Their assessment is conventionally achieved using laboratory-based equipment that cannot be applied in daily-life monitoring. In this study, we propose a method to predict GRF&M during walking, using exclusively kinematic information from fully ambulatory inertial motion capture (IMC). From the equations of motion, we derive the total external forces and moments. Then, we solve the indeterminacy problem during double stance using a distribution algorithm based on a smooth transition assumption. The agreement between the IMC-predicted and reference GRF&M was categorized over normal walking speed as excellent for the vertical (ρ = 0.992, rRMSE = 5.3%), anterior (ρ = 0.965, rRMSE = 9.4%), and sagittal (ρ = 0.933, rRMSE = 12.4%) GRF&M components, and as strong for the lateral (ρ = 0.862, rRMSE = 13.1%), frontal (ρ = 0.710, rRMSE = 29.6%), and transverse (ρ = 0.826, rRMSE = 18.2%) components. A sensitivity analysis was performed on the effect of the cut-off frequency used in filtering the input kinematics, as well as the threshold velocities for the gait event detection algorithm. This study was the first to use only inertial motion capture to estimate 3D GRF&M during gait, providing accuracy comparable with optical motion capture prediction. This approach enables applications that require estimation of kinetics during walking outside the gait laboratory.
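The two core steps described above can be sketched as follows: the total external force follows from Newton's second law applied to the whole body, and during double stance the single total is split between the two feet by a smooth transition weight. This is a hedged illustration only; the function names, the cosine transition shape, and all constants are assumptions, not the paper's exact algorithm.

```python
import numpy as np

G = np.array([0.0, 0.0, -9.81])  # gravity, m/s^2 (z up)

def total_external_force(mass, com_accel):
    """Whole-body Newton's second law: the sum of external (ground
    reaction) forces satisfies F = m * (a_com - g)."""
    return mass * (np.asarray(com_accel, dtype=float) - G)

def smooth_transition(t, t0, t1):
    """Monotone 1 -> 0 weight over the double-stance interval [t0, t1].
    A cosine ramp is assumed here; the paper's transition function
    may differ."""
    s = np.clip((t - t0) / (t1 - t0), 0.0, 1.0)
    return 0.5 * (1.0 + np.cos(np.pi * s))

def distribute_grf(F_total, t, t0, t1):
    """Resolve the double-stance indeterminacy: split the total GRF
    between trailing and leading foot. (During single stance the whole
    total is trivially assigned to the stance foot.)"""
    w = smooth_transition(t, t0, t1)
    return w * F_total, (1.0 - w) * F_total

# Example: standing still (a_com = 0) at the midpoint of double stance,
# so each foot carries half of body weight.
F = total_external_force(70.0, [0.0, 0.0, 0.0])
trail, lead = distribute_grf(F, t=0.5, t0=0.0, t1=1.0)
assert np.allclose(trail, lead)
```

By construction the two foot forces always sum to the total, so the distribution step never violates the whole-body equations of motion; it only decides how the known total is shared.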
Freezing of gait (FOG) in Parkinson’s disease (PD) is described as a short-term episode of absence or considerable decrease of movement despite the intention of moving forward. FOG is related to risk of falls and low quality of life for individuals with PD. FOG has been studied and analyzed through different techniques, including inertial measurement units (IMUs) and motion capture systems (MOCAP), both along with robust algorithms. Still, there is no standardized methodology to identify or quantify freezing episodes (FEs). In a previous work from our group, a new methodology was developed to differentiate FEs from normal movement using position data obtained from a motion capture system. The purpose of this study is to determine whether this methodology is equally effective at identifying FEs when using IMUs. Twenty subjects with PD will perform two different gait-related tasks. Trials will be tracked by IMUs and filmed by a video camera; data from the IMUs will be compared to the time occurrence of FEs obtained from the videos. We expect this methodology will successfully detect FEs from the IMU data. The results would allow the development of a wearable device able to detect and monitor FOG. It is expected that the use of this type of device would allow clinicians to better understand FOG and improve patient care.
We present the results of an evaluation of the performance of the Leap Motion Controller with the aid of a professional, high-precision, fast motion tracking system. A set of static and dynamic measurements was performed with different numbers of tracking objects and configurations. For the static measurements, a plastic arm model simulating a human arm was used. A set of 37 reference locations was selected to cover the controller's sensory space. For the dynamic measurements, a special V-shaped tool, consisting of two tracking objects maintaining a constant distance between them, was created to simulate two human fingers. In the static scenario, the standard deviation was less than 0.5 mm. The linear correlation revealed a significant increase in the standard deviation when moving away from the controller. The results of the dynamic scenario revealed the inconsistent performance of the controller, with a significant drop in accuracy for samples taken more than 250 mm above the controller's surface. The Leap Motion Controller undoubtedly represents a revolutionary input device for gesture-based human-computer interaction; however, due to its rather limited sensory space and inconsistent sampling frequency, in its current configuration it cannot be used as a professional tracking system.
In the 21st century, much progress has followed Industry 4.0, and the field of animation has likewise absorbed these technological advances. Information is often difficult to convey to users accurately and interactively in an attractive, convenient way; animation and motion capture are therefore among the best options available today. The results of this project can have a substantial impact on the animation industry by making 3D animated video more engaging.