Freezing of gait (FOG) in Parkinson's disease (PD) is described as a short-term episode of absence or considerable decrease of movement despite the intention of moving forward. FOG is related to a risk of falls and a low quality of life for individuals with PD. FOG has been studied and analyzed through different techniques, including inertial measurement units (IMUs) and motion capture (MOCAP) systems, both along with robust algorithms. Still, there is no standardized methodology to identify or quantify freezing episodes (FEs). In a previous work from our group, a new methodology was developed to differentiate FEs from normal movement using position data obtained from a motion capture system. The purpose of this study is to determine whether this methodology is equally effective at identifying FEs when using IMUs. Twenty subjects with PD will perform two different gait-related tasks. Trials will be tracked by IMUs and filmed by a video camera; data from the IMUs will be compared to the time occurrence of FEs obtained from the videos. We expect this methodology will successfully detect FEs from IMU data. The results would allow the development of a wearable device able to detect and monitor FOG. It is expected that the use of this type of device would allow clinicians to better understand FOG and improve patients' care.
In the 21st century, much progress has followed the Fourth Industrial Revolution (Industry 4.0), and these technological advances have also reached the field of animation. Information is often difficult to convey to users accurately and interactively, and animation offers a more attractive and convenient way to give users that exposure. Animation and motion capture are therefore among the best options available today. As a result, the outcomes of this project can have a substantial impact on the animation industry by making video and 3D animation more engaging.
Spatiotemporal parameters can characterize the gait patterns of individuals, allowing assessment of their health status and detection of clinically meaningful changes in their gait. Video-based markerless motion capture is a user-friendly, inexpensive, and widely applicable technology that could reduce the barriers to measuring spatiotemporal gait parameters in clinical and more diverse settings. Two studies were performed to determine whether gait parameters measured using markerless motion capture demonstrate concurrent validity with those measured using marker-based motion capture and a pressure-sensitive gait mat. For the first study, thirty healthy young adults performed treadmill gait at self-selected speeds while marker-based motion capture and synchronized video data were recorded simultaneously. For the second study, twenty-five healthy young adults performed over-ground gait at self-selected speeds while footfalls were recorded using a gait mat and synchronized video data were recorded simultaneously. Kinematic heel-strike and toe-off gait events were used to identify the same gait cycles between systems. Nine spatiotemporal gait parameters were measured by each system and directly compared between systems. Measurements were compared using Bland-Altman methods, mean differences, Pearson correlation coefficients, and intraclass correlation coefficients. The results indicate that markerless measurements of spatiotemporal gait parameters have good to excellent agreement with marker-based motion capture and gait mat systems, except for stance time and double limb support time relative to both systems and stride width relative to the gait mat. These findings indicate that markerless motion capture can adequately measure spatiotemporal gait parameters of healthy young adults during treadmill and over-ground gait.
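The Bland-Altman comparison described above has a simple computational core: the bias is the mean of the pairwise differences between the two systems, and the 95% limits of agreement are that bias ± 1.96 standard deviations of the differences. A minimal sketch in Python (the step-length values are invented for illustration, not taken from the study):

```python
import numpy as np

def bland_altman(a, b):
    """Bias and 95% limits of agreement between two measurement systems.

    a, b: paired measurements of the same gait parameter (e.g. step
    length in cm) from two systems, one value per gait cycle.
    """
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b                       # per-cycle differences
    bias = diff.mean()                 # systematic offset between systems
    sd = diff.std(ddof=1)              # sample SD of the differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Invented step lengths (cm): markerless vs. marker-based
markerless  = [62.0, 64.5, 61.2, 63.8, 65.1]
markerbased = [61.5, 64.9, 61.0, 64.2, 64.8]
bias, (lo, hi) = bland_altman(markerless, markerbased)
print(f"bias={bias:.2f} cm, limits of agreement=({lo:.2f}, {hi:.2f})")
```

In practice these values are plotted (difference against mean of the two systems) so that any proportional bias is visible as a trend.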
The clinical uptake and influence of gait analysis has been hindered by inherent limitations of marker-based motion capture systems, which have long been the standard method for the collection of gait data including kinematics. Markerless motion capture offers an alternative method for the collection of gait kinematics that presents several practical benefits over marker-based systems. This work aimed to determine the reliability of lower limb gait kinematics from video-based markerless motion capture using an established experimental protocol for testing reliability. Eight healthy adult participants performed three sessions of five over-ground walking trials in their own self-selected clothing, separated by an average of 8.5 days, while eight synchronized and calibrated cameras recorded video. Three-dimensional pose estimates from the video data were used to compute lower limb joint angles. Inter-session variability, inter-trial variability, and the variability ratio were used to assess the reliability of the gait kinematics. Compared to repeatability studies based on marker-based motion capture, inter-trial variability was slightly greater than previously reported for some angles, with an average across all joint angles of 2.5°. Inter-session variability was smaller on average than all previously reported values, with an average across all joint angles of 2.8°. Variability ratios were all smaller than those previously reported, with an average of 1.1, indicating that the multi-session protocol increased the total variability of joint angles by 10% of the inter-trial variability. These results indicate that gait kinematics can be reliably measured using markerless motion capture.
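The variability ratio reported above compares the variability added by repeating sessions to the trial-to-trial variability within a session. One simplified way to compute it is sketched below; this is an illustration only, not the exact protocol of the study (it treats one scalar angle per trial rather than a full gait-cycle waveform, and it defines inter-trial variability as the mean within-session standard deviation and total variability as the standard deviation pooled across all sessions):

```python
import numpy as np

def variability_ratio(sessions):
    """Ratio of total (multi-session) variability to inter-trial variability.

    sessions: list of per-session trial values for one joint angle
    (degrees), e.g. [[trial1, trial2, ...], [trial1, ...], ...].
    A ratio of 1.1 would mean the multi-session protocol added 10%
    of the inter-trial variability.
    """
    sessions = [np.asarray(s, float) for s in sessions]
    # inter-trial: average trial-to-trial SD within each session
    inter_trial = np.mean([s.std(ddof=1) for s in sessions])
    # total: SD pooled over every trial of every session
    total = np.concatenate(sessions).std(ddof=1)
    return total / inter_trial

print(variability_ratio([[10.0, 11.0, 12.0],
                         [13.0, 14.0, 15.0],
                         [11.0, 12.0, 13.0]]))  # ≈ 1.58: sessions add spread
```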
DReCon. Bergamin, Kevin; Clavet, Simon; Holden, Daniel; et al. ACM Transactions on Graphics, vol. 38, issue 6, 11/2019. Journal article, peer reviewed.
Interactive control of self-balancing, physically simulated humanoids is a long-standing problem in the field of real-time character animation. While physical simulation guarantees realistic interactions in the virtual world, simulated characters can appear unnatural if they perform unusual movements in order to maintain balance. As a result, responsiveness to user control, runtime performance, and diversity have often been sacrificed in exchange for motion quality. Recent work in the field of deep reinforcement learning has shown that training physically simulated characters to follow motion capture clips can yield high-quality tracking results. We propose a two-step approach for building responsive simulated character controllers from unstructured motion capture data. First, meaningful features from the data, such as movement direction, heading direction, speed, and locomotion style, are interactively specified and drive a kinematic character controller implemented using motion matching. Second, reinforcement learning is used to train a simulated character controller that is general enough to track the entire distribution of motion that can be generated by the kinematic controller. Our design emphasizes responsiveness to user input, visual quality, and low runtime cost for application in video games.
Computer-vision-based frameworks enable markerless human motion capture on consumer-grade devices in real time. They open up new possibilities for application, such as in the health and medical sector. So far, research on mobile solutions has focused on 2-dimensional motion capture frameworks. 2D motion analysis is limited by the viewing angle of the positioned camera. New frameworks enable 3-dimensional human motion capture and can be supported by additional smartphone sensors such as LiDAR. 3D motion capture promises to overcome the limitations of 2D frameworks by considering all three movement planes independently of the camera angle. In this study, we performed a laboratory experiment with ten subjects, comparing the joint angles in eight different body-weight exercises tracked by Apple ARKit, a mobile 3D motion capture framework, against a gold-standard system for motion capture: the Vicon system. The 3D motion capture framework showed a weighted Mean Absolute Error of 18.80° ± 12.12° (ranging from 3.75° ± 0.99° to 47.06° ± 5.11° per tracked joint angle and exercise) and a Mean Spearman Rank Correlation Coefficient of 0.76 for the whole data set. The data set shows a high variance of those two metrics between the observed angles and performed exercises. The observed accuracy is influenced by the visibility of the joints and the observed motion. While the 3D motion capture framework is a promising technology that could enable several use cases in the entertainment, health, and medical areas, its limitations should be considered for each potential application area.
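The two summary metrics reported above are straightforward to compute. A hedged sketch follows; the joint names and angle values are invented, and weighting the per-joint MAE by the number of samples per joint is one plausible reading of "weighted", not necessarily the study's exact scheme (the Spearman helper below also skips tie handling for brevity):

```python
import numpy as np

def spearman(x, y):
    """Spearman rank correlation (no tie handling): Pearson on the ranks."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    return np.corrcoef(rx, ry)[0, 1]

def weighted_mae(per_joint):
    """MAE over joints, weighted by the number of samples per joint.

    per_joint: dict mapping joint name -> (framework_angles, reference_angles).
    """
    errs, weights = [], []
    for est, ref in per_joint.values():
        est, ref = np.asarray(est, float), np.asarray(ref, float)
        errs.append(np.abs(est - ref).mean())  # per-joint MAE (degrees)
        weights.append(len(est))               # weight by sample count
    return np.average(errs, weights=weights)

# Invented joint angles (degrees): mobile framework vs. reference system
data = {
    "knee":  ([92.0, 95.5, 101.0], [90.0, 96.0, 100.0]),
    "elbow": ([45.0, 50.0, 55.0, 60.0], [44.0, 52.0, 54.0, 61.0]),
}
print(weighted_mae(data))
print(spearman([1.0, 2.0, 4.0, 3.0], [1.5, 2.5, 4.5, 3.5]))
```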
We present a modular differentiable renderer design that yields performance superior to previous methods by leveraging existing, highly optimized hardware graphics pipelines. Our design supports all crucial operations in a modern graphics pipeline: rasterizing large numbers of triangles, attribute interpolation, filtered texture lookups, as well as user-programmable shading and geometry processing, all in high resolutions. Our modular primitives allow custom, high-performance graphics pipelines to be built directly within automatic differentiation frameworks such as PyTorch or TensorFlow. As a motivating application, we formulate facial performance capture as an inverse rendering problem and show that it can be solved efficiently using our tools. Our results indicate that this simple and straightforward approach achieves excellent geometric correspondence between rendered results and reference imagery.
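The inverse-rendering formulation mentioned above boils down to a loop: render with differentiable operations, compare to reference imagery, and descend the gradient with respect to the scene parameters. A toy sketch of that loop, using only the attribute-interpolation stage of a rasterizer (one triangle with fixed barycentric weights, vertex colours as the unknowns; all numbers are invented and no real renderer is involved):

```python
import numpy as np

# Fixed barycentric weights of 4 covered pixels inside one triangle
W = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1/3, 1/3, 1/3]])

def render(colors):
    """Attribute interpolation: pixel value = barycentric mix of vertex colors."""
    return W @ colors

target = render(np.array([0.2, 0.5, 0.9]))  # stands in for reference imagery

colors = np.zeros(3)          # unknown scene parameters, start from black
lr = 0.5
for _ in range(300):
    residual = render(colors) - target          # compare to the reference
    grad = 2.0 * W.T @ residual / len(target)   # analytic gradient of MSE
    colors -= lr * grad                         # gradient descent step

print(colors)  # recovers approximately [0.2, 0.5, 0.9]
```

In a real system the gradient is produced by the autodiff framework rather than written by hand, and the parameters are meshes, textures, and poses instead of three scalars, but the optimization structure is the same.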
Gait analysis is necessary to diagnose movement disorders. In order to reduce the costs of three-dimensional motion capture systems, new low-cost methods of motion analysis have been developed. The purpose of this study was to evaluate the inter- and intra-rater reliability of Kinovea and its agreement with a three-dimensional motion system for detecting the joint angles of the hip, knee and ankle during the initial contact phase of walking. Fifty healthy subjects participated in this study. All participants were examined twice with a one-week interval between the two appointments. The motion data were recorded using the VICON Motion System and digital video cameras. The intra-rater reliability showed a good correlation for the hip, the knee and the ankle joints (Intraclass Correlation Coefficient, ICC > 0.85) for both observers. The ICC for the inter-rater reliability was >0.90 for the hip, the knee and the ankle joints. The Bland-Altman plots showed that the magnitude of disagreement was approximately ±5° for intra-rater reliability, ±2.5° for inter-rater reliability and around ±2.5° to ±5° for Kinovea versus Vicon. The ICC was good for the hip, knee and ankle angles registered with Kinovea during the initial contact of walking for both observers (intra-rater reliability) and higher for the agreement between observers (inter-rater reliability). However, the Bland-Altman plots showed disagreement between observers, measurements and systems (Kinovea vs. three-dimensional motion system) that should be considered in the interpretation of clinical evaluations.
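The intraclass correlation coefficient used throughout this comparison can be computed from a simple two-way ANOVA decomposition. A sketch of ICC(2,1), the two-way random-effects, absolute-agreement, single-rater form (the subject-by-observer angle values are invented; the study does not state which ICC form it used, so treat this as one common choice):

```python
import numpy as np

def icc_2_1(y):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    y: array of shape (n_subjects, k_raters).
    """
    y = np.asarray(y, float)
    n, k = y.shape
    grand = y.mean()
    # Mean squares for subjects (rows), raters (columns), and residual
    msr = k * ((y.mean(axis=1) - grand) ** 2).sum() / (n - 1)
    msc = n * ((y.mean(axis=0) - grand) ** 2).sum() / (k - 1)
    sse = ((y - grand) ** 2).sum() - msr * (n - 1) - msc * (k - 1)
    mse = sse / ((n - 1) * (k - 1))
    # Shrout & Fleiss ICC(2,1)
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Invented hip angles (degrees) scored by two observers
angles = [[1.0, 1.1], [2.0, 2.0], [3.0, 3.1], [4.0, 4.0], [5.0, 5.1]]
print(icc_2_1(angles))  # close to 1: near-perfect agreement
```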
Marker-less 3D human motion capture from a single colour camera has seen significant progress. However, it is a very challenging and severely ill-posed problem. In consequence, even the most accurate state-of-the-art approaches have significant limitations. Purely kinematic formulations on the basis of individual joints or skeletons, and the frequent frame-wise reconstruction in state-of-the-art methods greatly limit 3D accuracy and temporal stability compared to multi-view or marker-based motion capture. Further, captured 3D poses are often physically incorrect and biomechanically implausible, or exhibit implausible environment interactions (floor penetration, foot skating, unnatural body leaning and strong shifting in depth), which is problematic for any use case in computer graphics. We, therefore, present PhysCap, the first algorithm for physically plausible, real-time and marker-less human 3D motion capture with a single colour camera at 25 fps. Our algorithm first captures 3D human poses purely kinematically. To this end, a CNN infers 2D and 3D joint positions, and subsequently, an inverse kinematics step finds space-time coherent joint angles and global 3D pose. Next, these kinematic reconstructions are used as constraints in a real-time physics-based pose optimiser that accounts for environment constraints (e.g., collision handling and floor placement), gravity, and biophysical plausibility of human postures. Our approach employs a combination of ground reaction force and residual force for plausible root control, and uses a trained neural network to detect foot contact events in images. Our method captures physically plausible and temporally stable global 3D human motion, without physically implausible postures, floor penetrations or foot skating, from video in real time and in general scenes. PhysCap achieves state-of-the-art accuracy on established pose benchmarks, and we propose new metrics to demonstrate the improved physical plausibility and temporal stability.