Sepsis is the third leading cause of death worldwide and the main cause of mortality in hospitals, but the best treatment strategy remains uncertain. In particular, evidence suggests that current practices in the administration of intravenous fluids and vasopressors are suboptimal and likely induce harm in a proportion of patients. To tackle this sequential decision-making problem, we developed a reinforcement learning agent, the Artificial Intelligence (AI) Clinician, which extracted implicit knowledge from an amount of patient data that exceeds by many-fold the lifetime experience of human clinicians and learned optimal treatment by analyzing a myriad of (mostly suboptimal) treatment decisions. We demonstrate that the value of the AI Clinician's selected treatment is on average reliably higher than that of human clinicians. In a large validation cohort independent of the training data, mortality was lowest in patients for whom clinicians' actual doses matched the AI decisions. Our model provides individualized and clinically interpretable treatment decisions for sepsis that could improve patient outcomes.
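The core of such an approach can be sketched as off-policy tabular Q-learning over discretized patient states and dosing actions. The state/action sizes, rewards, and transitions below are toy assumptions for illustration, not the published model:

```python
import numpy as np

# Minimal sketch of learning a treatment policy from a fixed batch of
# retrospective transitions (off-policy tabular Q-learning). All numbers
# here are illustrative assumptions, not the AI Clinician's actual setup.
rng = np.random.default_rng(0)
n_states, n_actions = 5, 3          # toy sizes; a real model would use many more
gamma = 0.99                        # discount factor

# Fake retrospective data: (state, action, reward, next_state, done)
transitions = [(rng.integers(n_states), rng.integers(n_actions),
                0.0, rng.integers(n_states), False) for _ in range(500)]
transitions += [(rng.integers(n_states), rng.integers(n_actions),
                 1.0, 0, True) for _ in range(50)]   # terminal good outcomes

Q = np.zeros((n_states, n_actions))
alpha = 0.1                          # learning rate
for _ in range(200):                 # sweep the fixed batch repeatedly
    for s, a, r, s2, done in transitions:
        target = r if done else r + gamma * Q[s2].max()
        Q[s, a] += alpha * (target - Q[s, a])

policy = Q.argmax(axis=1)            # recommended action per discrete state
```

The learned `policy` maps each discretized patient state to the dosing action with the highest estimated long-term value, which can then be compared against clinicians' actual decisions.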
Motor-learning literature focuses on simple laboratory tasks due to their controlled nature and the ease of applying manipulations to induce learning and adaptation. Recently, we introduced a billiards paradigm and demonstrated the feasibility of real-world neuroscience using wearables for naturalistic full-body motion tracking and mobile brain imaging. Here we developed an embodied virtual-reality (VR) environment for our real-world billiards paradigm, which allows us to control the visual feedback of this complex real-world task while maintaining the sense of embodiment. The setup was validated by comparing real-world ball trajectories with the trajectories of the virtual balls calculated by the physics engine. We then ran our short-term motor learning protocol in the embodied VR. Subjects played billiard shots while holding the physical cue and hitting a physical ball on the table, seeing it all in VR. We found short-term motor learning trends in the embodied VR comparable to those we previously reported in the physical real-world task. Embodied VR can thus be used to study learning of real-world tasks in a highly controlled environment, enabling the visual manipulations common in laboratory tasks and rehabilitation to be applied to a real-world full-body task. Embodied VR makes it possible to manipulate feedback and apply perturbations to isolate and assess interactions between specific motor-learning components, and thereby to address current questions of motor learning in real-world tasks. Such a setup could also be used for rehabilitation, where VR is gaining popularity but transfer to the real world is currently limited, presumably due to the lack of embodiment.
Many recent studies have found signatures of motor learning in neural beta oscillations (13–30 Hz), specifically in the post-movement beta rebound (PMBR). All of these studies used controlled laboratory tasks in which the task was designed to induce the studied learning mechanism. Interestingly, they reported opposing dynamics of PMBR magnitude over learning for error-based and reward-based tasks (an increase versus a decrease, respectively). Here we explored PMBR dynamics during real-world motor skill learning in a billiards task using mobile brain imaging. Our EEG recordings highlight opposing dynamics of PMBR magnitude (increase versus decrease) between different subjects performing the same task. The groups of subjects, defined by their neural dynamics, also showed the behavioural differences expected for different learning mechanisms. Our results suggest that, when faced with the complexity of the real world, different subjects might use different learning mechanisms for the same complex task. We speculate that all subjects combine multi-modal mechanisms of learning, but that different subjects have different predominant learning mechanisms.
Human behaviors from toolmaking to language are thought to rely on a uniquely evolved capacity for hierarchical action sequencing. Testing this idea will require objective, generalizable methods for measuring the structural complexity of real-world behavior. Here we present a data-driven approach for extracting action grammars from basic ethograms, exemplified with respect to the evolutionarily relevant behavior of stone toolmaking. We analyzed sequences from the experimental replication of ~2.5 Mya Oldowan vs. ~0.5 Mya Acheulean tools, finding that, while using the same "alphabet" of elementary actions, Acheulean sequences are quantifiably more complex and Oldowan grammars are a subset of Acheulean grammars. We illustrate the utility of our complexity measures by re-analyzing data from an fMRI study of stone toolmaking to identify brain responses to structural complexity. Beyond specific implications regarding the co-evolution of language and technology, this exercise illustrates the general applicability of our method to investigate naturalistic human behavior and cognition.
Brain-machine interfacing (BMI) has greatly benefited from adopting machine learning methods for feature learning, which require extensive data for training that are often unavailable from a single dataset. Yet it is difficult to combine data across labs, or even data collected within the same lab over the years, due to variation in recording equipment and electrode layouts, resulting in shifts in data distribution, changes in data dimensionality, and altered identity of the data dimensions. Our objective is to overcome this limitation and learn from many different and diverse datasets across labs with different experimental protocols.
To tackle this domain adaptation problem, we developed a novel machine learning framework combining graph neural networks (GNNs) and transfer learning methodologies for non-invasive motor imagery (MI) EEG decoding, as an example of BMI. Empirically, we focus on the challenges of learning from EEG data with different electrode layouts and varying numbers of electrodes. We utilize three MI EEG databases collected with very different numbers of EEG sensors (from 22 to 64 channels) and layouts (from custom layouts to the standard 10-20 system).
Our model achieved the highest accuracy, with lower standard deviations, on the testing datasets. This indicates that the GNN-based transfer learning framework can effectively aggregate knowledge from multiple datasets with different electrode layouts, leading to improved generalization in subject-independent MI EEG classification.
The findings of this study have important implications for brain-computer interface research, as they highlight a promising method for overcoming the limitations posed by non-unified experimental setups. By enabling the integration of diverse datasets with varying electrode layouts, our proposed approach can help advance the development and application of BMI technologies.
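The key property that lets a graph-based decoder span different electrode layouts can be sketched as follows: each electrode is a graph node, so a shared graph convolution followed by global pooling over nodes yields a fixed-size representation regardless of channel count. Weights, adjacency, and feature sizes here are illustrative assumptions, not the published model:

```python
import numpy as np

def gcn_embed(X, A, W):
    """One graph-convolution layer + global mean pooling.
    X: (n_channels, n_features) per-electrode features (e.g. band powers)
    A: (n_channels, n_channels) adjacency from electrode proximity
    W: (n_features, n_hidden) weights shared across all layouts
    """
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    A_norm = A_hat / A_hat.sum(axis=1)[:, None]  # row-normalized propagation
    H = np.maximum(A_norm @ X @ W, 0.0)       # graph convolution + ReLU
    return H.mean(axis=0)                     # layout-independent embedding

rng = np.random.default_rng(1)
W = rng.standard_normal((8, 16))              # one weight matrix for all datasets
X22 = rng.standard_normal((22, 8))            # 22-channel recording
A22 = (rng.random((22, 22)) > 0.7).astype(float)
X64 = rng.standard_normal((64, 8))            # 64-channel recording
A64 = (rng.random((64, 64)) > 0.7).astype(float)

# Both layouts map to the same 16-d space, so one classifier serves all datasets.
assert gcn_embed(X22, A22, W).shape == gcn_embed(X64, A64, W).shape == (16,)
```

Because the pooled embedding has a fixed dimensionality, a single downstream classifier can be trained jointly on recordings from all three databases despite their differing sensor counts.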
Artificial intelligence has the potential to revolutionize healthcare, yet clinical trials in neurological diseases continue to rely on subjective, semiquantitative, and motivation-dependent endpoints for drug development. To overcome this limitation, we collected a digital readout of the whole-body movement behavior of patients with Duchenne muscular dystrophy (DMD) (n = 21) and age-matched controls (n = 17). Movement behavior was assessed while participants engaged in everyday activities, using a 17-sensor bodysuit, during three clinical visits over the course of 12 months. We first defined new movement behavioral fingerprints capable of distinguishing DMD from controls. Then, we used machine learning algorithms that combined the behavioral fingerprints to make cross-sectional and longitudinal disease-course predictions, which outperformed predictions derived from currently used clinical assessments. Finally, using Bayesian optimization, we constructed a behavioral biomarker, termed the KineDMD ethomic biomarker, which is derived from daily-life behavioral data and whose value progresses with age following an S-shaped sigmoid curve. The biomarker developed in this study, derived from digital readouts of daily-life movement behavior, can predict disease progression in patients with muscular dystrophy and can potentially track the response to therapy.
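An S-shaped biomarker-versus-age relationship of the kind described can be captured by fitting a four-parameter sigmoid. The data, parameter names, and least-squares fitting method below are illustrative assumptions, not the study's Bayesian-optimized pipeline:

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(age, lower, upper, midpoint, slope):
    """Four-parameter logistic: plateaus at `lower` and `upper`,
    steepest change of rate `slope` at age `midpoint`."""
    return lower + (upper - lower) / (1.0 + np.exp(-slope * (age - midpoint)))

# Synthetic stand-in data: biomarker values rising with age plus noise.
rng = np.random.default_rng(3)
ages = np.linspace(4, 16, 40)                       # hypothetical age range, years
values = sigmoid(ages, 0.1, 0.9, 10.0, 0.8) + 0.02 * rng.standard_normal(40)

params, _ = curve_fit(sigmoid, ages, values, p0=[0.0, 1.0, 9.0, 1.0])
lower, upper, midpoint, slope = params              # fitted progression curve
```

The fitted `midpoint` and `slope` summarize when and how quickly the biomarker progresses, which is the kind of compact disease-course description an S-shaped model provides.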
The neurobehavioral mechanisms of human motor control and learning evolved in free-behaving, real-life settings, yet they are studied mostly in reductionistic lab-based experiments. Here we take a step towards a more real-world motor neuroscience, using wearables for naturalistic full-body motion tracking and the sport of pool billiards to frame a real-world skill-learning experiment. First, we asked whether well-known features of motor learning in lab-based experiments generalize to a real-world task. We found similarities in many features, such as multiple learning rates and the relationship between task-related variability and motor learning. Our data-driven approach reveals the structure and complexity of movement, variability, and motor learning, enabling an in-depth understanding of the structure of motor learning in three ways. First, while one might expect most of the movement learning to occur in the cue-wielding arm, we find that motor learning affects the whole body, changing motor control from head to toe. Second, during learning, all subjects decreased their movement variability and their variability in the outcome; subjects who were initially more variable were also more variable after learning. Lastly, when screening the link across subjects between initial variability in individual joints and learning, we found that only the initial variability in right forearm supination shows a significant correlation with subjects' learning rates. This is in line with the relationship between learning and variability: while learning leads to an overall reduction in movement variability, only initial variability in specific task-relevant dimensions can facilitate faster learning.
Contemporary robotics gives us the mechatronic capabilities to augment human bodies with extra limbs. However, how our motor-control capabilities limit such augmentation is an open question. We developed a Supernumerary Robotic 3rd Thumb (SR3T) with two degrees of freedom, controlled by the user's body, to endow them with an extra contralateral thumb on the hand. We demonstrate that a pianist can learn to play the piano with 11 fingers within an hour. We then evaluated six naïve and six experienced piano players on their prior motor coordination and their capability in piano playing with the robotic augmentation. We show that individuals' augmented performance with the SR3T could be explained by our new custom motor coordination assessment, the Human Augmentation Motor Coordination Assessment (HAMCA), performed pre-augmentation. Our work demonstrates how supernumerary robotics can augment humans in skilled tasks, and that individual differences in augmentation capability are explainable by individual motor coordination abilities.
Inertial Measurement Units (IMUs) within an everyday consumer smartwatch offer a convenient and low-cost method to monitor the natural behaviour of hospital patients. However, their accuracy at quantifying limb motion, and their clinical acceptability, have not yet been demonstrated. To this end, we conducted a two-stage study. First, we compared the inertial accuracy of wrist-worn IMUs, both research-grade (Xsens MTw Awinda and Axivity AX3) and consumer-grade (Apple Watch Series 3 and 5), against optical motion tracking (OptiTrack). Given the moderate to strong performance of the consumer-grade sensors, we then evaluated the smartwatch in a clinical setting and surveyed the experiences and attitudes of hospital patients (N = 44) and staff (N = 15) following a clinical test in which patients wore smartwatches for 1.5–24 h. Results indicate that for acceleration, the Xsens is more accurate than the Apple Watch Series 5 and Series 3 and the Axivity AX3 (RMSE 1.66 ± 0.12 m·s⁻², R² 0.78 ± 0.02; RMSE 2.29 ± 0.09 m·s⁻², R² 0.56 ± 0.01; RMSE 2.14 ± 0.09 m·s⁻², R² 0.49 ± 0.02; and RMSE 4.12 ± 0.18 m·s⁻², R² 0.34 ± 0.01, respectively). For angular velocity, the Series 5 and Series 3 smartwatches achieved performance similar to the Xsens, with RMSE 0.22 ± 0.02 rad·s⁻¹, R² 0.99 ± 0.00, and RMSE 0.18 ± 0.01 rad·s⁻¹, R² 1.00 ± 0.00, respectively. Surveys indicated that in-patients and healthcare professionals strongly agreed that wearable motion sensors are easy to use, comfortable, unobtrusive, suitable for long-term use, and do not cause anxiety or limit daily activities. Our results suggest that consumer smartwatches achieve moderate to strong accuracy compared with the laboratory gold standard and are acceptable for pervasive monitoring of motion and behaviour within hospital settings.
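The agreement metrics used in such sensor validations, RMSE in the signal's units and the coefficient of determination R² against the gold-standard trace, can be computed as below. The signals are synthetic stand-ins for watch-derived versus OptiTrack-derived acceleration:

```python
import numpy as np

def rmse(ref, est):
    """Root-mean-square error, in the units of the signal (e.g. m/s^2)."""
    return float(np.sqrt(np.mean((ref - est) ** 2)))

def r_squared(ref, est):
    """Coefficient of determination of `est` against reference `ref`."""
    ss_res = np.sum((ref - est) ** 2)
    ss_tot = np.sum((ref - ref.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)

# Synthetic example: a reference trace and a noisy sensor estimate of it.
t = np.linspace(0, 10, 1000)
ref = np.sin(2 * np.pi * t)                                # "gold standard"
est = ref + 0.1 * np.random.default_rng(2).standard_normal(t.size)

acc_rmse = rmse(ref, est)        # small, in the signal's units
acc_r2 = r_squared(ref, est)     # close to 1 for a faithful sensor
```

A lower RMSE and an R² closer to 1 both indicate closer agreement with the optical gold standard, which is how the devices above are ranked.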