Abstract In recent years, smartwatches have emerged as a viable platform for a variety of medical and health-related applications. In addition to the benefits of a stable hardware platform, these devices have a significant advantage over other wrist-worn devices, in that user acceptance of watches is higher than of other custom hardware solutions. In this paper, we describe signal-processing techniques for identification of chews and swallows using a smartwatch device's built-in microphone. Moreover, we conduct a survey to evaluate the potential of the smartwatch as a platform for monitoring nutrition. The focus of this paper is to analyze the overall applicability of a smartwatch-based system for food-intake monitoring. Evaluation results confirm the efficacy of our technique; classification was performed between apple and potato chip bites, water swallows, talking, and ambient noise, with an F-measure of 94.5% based on 250 collected samples.
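As a rough sketch of the kind of signal processing involved, short-time energy can separate candidate chew/swallow events from the ambient-noise floor before any finer classification. The frame length, hop, and threshold below are illustrative assumptions, not the parameters of the system described above.

```python
import numpy as np

def frame_energies(signal, frame_len=256, hop=128):
    """Short-time energy per frame -- a basic feature for separating
    chew/swallow events from ambient noise in a microphone stream."""
    return [float(np.sum(signal[s:s + frame_len] ** 2))
            for s in range(0, len(signal) - frame_len + 1, hop)]

def detect_events(signal, threshold):
    """Flag frames whose energy exceeds a noise-floor threshold."""
    return [e > threshold for e in frame_energies(signal)]

# Toy stream: quiet ambient noise with a louder burst (a "chew") in the middle.
rng = np.random.default_rng(0)
audio = rng.normal(0.0, 0.01, 4096)
audio[1500:2000] += rng.normal(0.0, 0.5, 500)
flags = detect_events(audio, threshold=1.0)
```

A real pipeline would feed spectral features of each flagged frame to a classifier rather than thresholding raw energy.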
The standard approaches to diagnosing autism spectrum disorder (ASD) evaluate between 20 and 100 behaviors and take several hours to complete. This has in part contributed to long wait times for a diagnosis and subsequent delays in access to therapy. We hypothesize that the use of machine learning analysis on home video can speed the diagnosis without compromising accuracy. We have analyzed item-level records from 2 standard diagnostic instruments to construct machine learning classifiers optimized for sparsity, interpretability, and accuracy. In the present study, we prospectively test whether the features from these optimized models can be extracted by blinded nonexpert raters from 3-minute home videos of children with and without ASD to arrive at a rapid and accurate machine learning autism classification.
We created a mobile web portal for video raters to assess 30 behavioral features (e.g., eye contact, social smile) that are used by 8 independent machine learning models for identifying ASD, each with >94% accuracy in cross-validation testing and subsequent independent validation from previous work. We then collected 116 short home videos of children with autism (mean age = 4 years 10 months, SD = 2 years 3 months) and 46 videos of typically developing children (mean age = 2 years 11 months, SD = 1 year 2 months). Three raters blind to the diagnosis independently measured each of the 30 features from the 8 models, with a median time to completion of 4 minutes. Although several models (alternating decision trees, radial-kernel and linear support vector machines (SVMs), and logistic regression (LR)) performed well, a sparse 5-feature LR classifier (LR5) yielded the highest accuracy (area under the curve [AUC] 92%, 95% CI 88%-97%) across all ages tested. We used a prospectively collected independent validation set of 66 videos (33 ASD and 33 non-ASD) and 3 independent rater measurements to validate the outcome, achieving lower but comparable accuracy (AUC 89%, 95% CI 81%-95%). Finally, we applied LR to the 162-video-feature matrix to construct an 8-feature model, which achieved 0.93 AUC (95% CI 0.90-0.97) on the held-out test set and 0.86 on the validation set of 66 videos. Validation on children with an existing diagnosis limited the ability to generalize the performance to undiagnosed populations.
These results support the hypothesis that feature tagging of home videos for machine learning classification of autism can yield accurate outcomes in short time frames, using mobile devices. Further work will be needed to confirm that this approach can accelerate autism diagnosis at scale.
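A sparse logistic-regression classifier of the kind described reduces to a weighted sum of a handful of rater-scored behavioral features passed through a sigmoid. The weights, bias, and 0-3 feature scale below are hypothetical placeholders for illustration, not the published LR5 model.

```python
import numpy as np

def lr_score(features, weights, bias):
    """Probability of the positive class under a logistic regression model."""
    z = float(np.dot(weights, features)) + bias
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical weights for 5 rater-scored behaviors (0 = typical, 3 = atypical).
weights = np.array([0.9, 0.7, 0.8, 0.6, 1.1])
bias = -4.0

prob_high = lr_score(np.array([3, 2, 3, 2, 3]), weights, bias)  # atypical profile
prob_low = lr_score(np.zeros(5), weights, bias)                 # typical profile
```

With only 5 features, each rater judgment contributes a visible, interpretable amount to the final probability, which is the point of optimizing for sparsity.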
Standard medical diagnosis of mental health conditions requires licensed experts who are increasingly outnumbered by those at risk, limiting reach. We test the hypothesis that a trustworthy crowd of non-experts can efficiently annotate behavioral features needed for accurate machine learning detection of the common childhood developmental disorder Autism Spectrum Disorder (ASD) for children under 8 years old. We implement a novel process for identifying and certifying a trustworthy distributed workforce for video feature extraction, selecting a workforce of 102 workers from a pool of 1,107. Two previously validated ASD logistic regression classifiers, evaluated against parent-reported diagnoses, were used to assess the accuracy of the trusted crowd's ratings of unstructured home videos. A representative, balanced sample of videos (N = 50) was evaluated with and without face box and pitch shift privacy alterations, with AUROC and AUPRC scores > 0.98. With both privacy-preserving modifications, sensitivity is preserved (96.0%) while maintaining specificity (80.0%) and accuracy (88.0%) at levels comparable to prior classification methods without alterations. We find that machine learning classification from features extracted by a certified nonexpert crowd achieves high performance for ASD detection from natural home videos of the child at risk and maintains high sensitivity when privacy-preserving mechanisms are applied. These results suggest that privacy-safeguarded crowdsourced analysis of short home videos can help enable rapid and mobile machine-learning detection of developmental delays in children.
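The AUROC figures reported here can be computed directly from per-video scores with the rank-sum identity; a minimal version (which ignores tied scores) might look like:

```python
import numpy as np

def auroc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney rank-sum identity.
    Assumes binary 0/1 labels and no tied scores."""
    labels = np.asarray(labels)
    scores = np.asarray(scores, dtype=float)
    ranks = np.empty(len(scores))
    ranks[scores.argsort()] = np.arange(1, len(scores) + 1)
    n_pos = labels.sum()
    n_neg = len(labels) - n_pos
    return float((ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2)
                 / (n_pos * n_neg))

# Toy example: 2 non-ASD (label 0) and 2 ASD (label 1) videos with classifier scores.
auc = auroc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])
```

The identity says AUROC equals the probability that a randomly chosen positive outranks a randomly chosen negative, which is why it is a natural metric for a balanced evaluation set.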
Full text
Available for:
IZUM, KILJ, NUK, PILJ, PNG, SAZU, UL, UM, UPUK
Food intake levels, hydration, ingestion rate, and dietary choices are all factors known to impact the risk of obesity. This paper presents a novel wearable system in the form of a necklace, which aggregates data from an embedded piezoelectric sensor capable of detecting skin motion in the lower trachea during ingestion. The skin motion produces an output voltage with varying frequencies over time. As a result, we propose an algorithm based on time-frequency decomposition (spectrogram analysis of the piezoelectric sensor signals) to accurately distinguish between food types, such as liquid and solid, hot and cold drinks, and hard and soft foods. The necklace transmits data to a smartphone, which performs the processing of the signals, classifies the food type, and provides visual feedback to the user to assist the user in monitoring their eating habits over time. We compare our spectrogram analysis with other time-frequency features, such as matching pursuit and wavelets. Experimental results demonstrate promise in using time-frequency features, with high accuracy in distinguishing between food categories using spectrogram analysis and extracting key features representative of the unique swallow patterns of various foods.
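The spectrogram features at the core of this approach are FFT magnitudes over overlapping windowed frames. A minimal sketch, assuming an 8 kHz sample rate chosen purely for illustration:

```python
import numpy as np

def spectrogram(signal, frame_len=64, hop=32):
    """Magnitude spectrogram: |FFT| of overlapping Hann-windowed frames.
    Rows are time frames, columns are frequency bins."""
    window = np.hanning(frame_len)
    frames = [signal[s:s + frame_len] * window
              for s in range(0, len(signal) - frame_len + 1, hop)]
    return np.abs(np.fft.rfft(np.array(frames), axis=1))

# A pure 1 kHz tone at the assumed 8 kHz rate concentrates its energy in
# one bin (bin 8 = 1000 Hz at 125 Hz/bin), unlike broadband swallow noise.
t = np.arange(512) / 8000.0
spec = spectrogram(np.sin(2 * np.pi * 1000 * t))
peak_bin = int(spec.mean(axis=0).argmax())
```

How energy spreads across bins over time is what lets the classifier tell, say, a quick liquid swallow from a prolonged solid-food swallow.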
Data science and digital technologies have the potential to transform diagnostic classification. Digital technologies enable the collection of big data, and advances in machine learning and artificial intelligence enable scalable, rapid, and automated classification of medical conditions. In this review, we summarize and categorize various data-driven methods for diagnostic classification. In particular, we focus on autism as an example of a challenging disorder due to its highly heterogeneous nature. We begin by describing the frontier of data science methods for the neuropsychiatry of autism. We discuss early signs of autism as defined by existing pen-and-paper–based diagnostic instruments and describe data-driven feature selection techniques for determining the behaviors that are most salient for distinguishing children with autism from neurologically typical children. We then describe data-driven detection techniques, particularly computer vision and eye tracking, that provide a means of quantifying behavioral differences between cases and controls. We also describe methods of preserving the privacy of collected videos and prior efforts of incorporating humans in the diagnostic loop. Finally, we summarize existing digital therapeutic interventions that allow for data capture and longitudinal outcome tracking as the diagnosis moves along a positive trajectory. Digital phenotyping of autism is paving the way for quantitative psychiatry more broadly and will set the stage for more scalable, accessible, and precise diagnostic techniques in the field.
To address the need for asthma self-management in pediatrics, the authors present the feasibility of a mobile health (mHealth) platform built on their prior work in an asthmatic adult and child. Real-time asthma attack risk was assessed through physiological and environmental sensors. Data were sent to a cloud via a smartwatch application (app) using Health Insurance Portability and Accountability Act (HIPAA)-compliant cryptography and combined with online source data. A risk level (high, medium, or low) was determined using a random forest classifier and then sent to the app to be visualized as animated dragon graphics for easy interpretation by children. The feasibility of the system was first tested on an adult with moderate asthma, then usability was examined on a child with mild asthma over several weeks. It was found during feasibility testing that the system is able to assess asthma risk with 80.10 ± 14.13% accuracy. During usability testing, it was able to continuously collect sensor data, and the child was able to wear, easily understand and enjoy the use of the system. If tested in more individuals, this system may lead to an effective self-management program that can reduce hospitalization in those who suffer from asthma.
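A random forest produces its label by majority vote over many decision trees. The toy stand-in below uses three hand-written one-level "trees" over hypothetical sensor inputs and thresholds; the real classifier's trees are learned from data, and the feature set here is illustrative only.

```python
from collections import Counter

def risk_vote(heart_rate, aqi, humidity):
    """Illustrative stand-in for a random-forest risk classifier: three
    hand-written one-level 'trees' each vote high/medium/low, and the
    majority label wins. All thresholds are hypothetical."""
    votes = [
        "high" if heart_rate > 120 else "medium" if heart_rate > 100 else "low",
        "high" if aqi > 150 else "medium" if aqi > 100 else "low",
        "high" if humidity > 80 else "medium" if humidity > 60 else "low",
    ]
    return Counter(votes).most_common(1)[0][0]

# Elevated heart rate and poor air quality outvote the benign humidity reading.
level = risk_vote(heart_rate=130, aqi=160, humidity=50)
```

Collapsing many noisy sensor readings into one of three labels is what makes the result renderable as a simple graphic a child can interpret.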
Automated emotion classification could aid those who struggle to recognize emotions, including children with developmental behavioral conditions such as autism. However, most computer vision emotion recognition models are trained on adult emotion and therefore underperform when applied to child faces.
We designed a strategy to gamify the collection and labeling of child emotion-enriched images to boost the performance of automatic child emotion recognition models to a level closer to what will be needed for digital health care approaches.
We leveraged our prototype therapeutic smartphone game, GuessWhat, which was designed in large part for children with developmental and behavioral conditions, to gamify the secure collection of video data of children expressing a variety of emotions prompted by the game. Independently, we created a secure web interface to gamify the human labeling effort, called HollywoodSquares, tailored for use by any qualified labeler. We gathered and labeled 2155 videos, 39,968 emotion frames, and 106,001 labels on all images. With this drastically expanded pediatric emotion-centric database (>30 times larger than existing public pediatric emotion data sets), we trained a convolutional neural network (CNN) computer vision classifier of happy, sad, surprised, fearful, angry, disgust, and neutral expressions evoked by children.
The classifier achieved a 66.9% balanced accuracy and 67.4% F1-score on the entirety of the Child Affective Facial Expression (CAFE) as well as a 79.1% balanced accuracy and 78% F1-score on CAFE Subset A, a subset containing at least 60% human agreement on emotion labels. This performance is at least 10% higher than all previously developed classifiers evaluated against CAFE, the best of which reached a 56% balanced accuracy even when combining "anger" and "disgust" into a single class.
This work validates that mobile games designed for pediatric therapies can generate high volumes of domain-relevant data sets to train state-of-the-art classifiers to perform tasks helpful to precision health efforts.
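Balanced accuracy, the headline metric above, is the mean of per-class recalls, which keeps an over-represented class from dominating the score the way plain accuracy would. A minimal implementation with a toy two-class example:

```python
import numpy as np

def balanced_accuracy(y_true, y_pred, classes):
    """Mean of per-class recalls -- robust to class imbalance, which is
    why it is reported alongside F1 for emotion benchmarks like CAFE."""
    recalls = []
    for c in classes:
        mask = np.asarray(y_true) == c
        recalls.append((np.asarray(y_pred)[mask] == c).mean())
    return float(np.mean(recalls))

# Toy labels: recall is 1/2 for "happy" and 2/3 for "sad".
ba = balanced_accuracy(["happy", "happy", "sad", "sad", "sad"],
                       ["happy", "sad", "sad", "sad", "happy"],
                       ["happy", "sad"])
```

With seven emotion classes, a classifier that only ever predicted the most common expression would score near 1/7 here, not near its raw accuracy.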
The demand for thin, light wearables with long battery lifetimes motivates novel energy optimization techniques for a wide range of devices with applications ranging from telemedicine in clinical devices to activity recognition in consumer applications. In this paper, we present a simple activity monitoring system based on a piezoelectric energy-harvesting platform. The device requires no battery, no recharging, and no complex algorithms, instead leveraging the relationship between energy harvesting efficiency and user activity to create an effective wearable device with an indefinite lifetime. Experimental results confirm that the system can be used for coarse-grained activity monitoring with no external power source.
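The core idea, sketched below with purely illustrative power cutoffs, is that the harvested energy itself is the activity signal: more wearer motion yields more piezoelectric output, so a coarse activity class falls out of a simple comparison with no other sensing needed.

```python
def activity_level(harvested_mw, rest_cut=0.05, walk_cut=0.5):
    """Coarse activity class from mean harvested power (milliwatts).
    The cutoffs are hypothetical; a real device would calibrate them
    against its harvester's output at known activity levels."""
    mean = sum(harvested_mw) / len(harvested_mw)
    if mean < rest_cut:
        return "rest"
    return "walk" if mean < walk_cut else "run"

# A window of near-zero harvested power implies the wearer is at rest.
state = activity_level([0.01] * 10)
```

Because the harvester both powers the device and serves as its sensor, there is no battery to size or recharge, at the cost of only coarse-grained resolution.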
Abstract Maintaining appropriate levels of food intake and developing regularity in eating habits is crucial to weight loss and the preservation of a healthy lifestyle. Moreover, awareness of eating habits is an important step towards portion control and weight loss. In this paper, we introduce a novel food-intake monitoring system based around a wearable wireless-enabled necklace. The proposed necklace includes an embedded piezoelectric sensor, small Arduino-compatible microcontroller, Bluetooth LE transceiver, and Lithium-Polymer battery. Motion in the throat is captured and transmitted to a mobile application for processing and user guidance. Results from data collected from 30 subjects indicate that it is possible to detect solid and liquid foods, with an F-measure of 0.837 and 0.864, respectively, using a naive Bayes classifier. Furthermore, identification of extraneous motions such as head turns and walking is shown to significantly reduce the false positive rate of swallow detection.
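A Gaussian naive Bayes classifier of the kind used here scores each class as a product of per-feature likelihoods and picks the highest. The two features (swallow duration and peak sensor voltage) and all means and variances below are made-up stand-ins for statistics that would be fit to the necklace's training data.

```python
import math

def gaussian_pdf(x, mean, var):
    """Likelihood of x under a 1-D Gaussian."""
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def classify_swallow(duration_s, peak_v):
    """Two-feature Gaussian naive Bayes sketch: solid swallows are assumed
    longer with higher peak voltage than liquid swallows. All parameters
    are hypothetical, not fitted values from the paper."""
    params = {  # per class: (mean, var) for duration, then for peak voltage
        "solid":  [(1.2, 0.09), (0.8, 0.04)],
        "liquid": [(0.6, 0.04), (0.4, 0.02)],
    }
    scores = {}
    for label, ((dm, dv), (pm, pv)) in params.items():
        scores[label] = gaussian_pdf(duration_s, dm, dv) * gaussian_pdf(peak_v, pm, pv)
    return max(scores, key=scores.get)
```

The "naive" independence assumption keeps the model small enough to run comfortably on a phone paired with the necklace.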
Abstract Background Prior research has shown a correlation between poor dietary habits and countless negative health outcomes such as heart disease, diabetes, and certain cancers. Automatic monitoring of food intake in an unobtrusive, wearable form-factor can encourage healthy dietary choices by enabling individuals to regulate their eating habits. Methods This paper presents an objective comparison of two of the most promising methods for digital dietary intake monitoring: piezoelectric swallow sensing by means of a smart necklace which monitors vibrations in the neck, and audio-based detection using a throat microphone. Results Data were collected from twenty subjects with ages ranging from 22 to 40 as they consumed a variety of foods using both devices. In Experiment I, we distinguished sandwich, chips, and water. In Experiment II, we distinguished nuts, chocolate, and a meat patty. F-measures for the audio-based approach were 91.3% and 88.5% for the first and second experiments, respectively. In the piezo-based approach, F-measures were 75.3% and 79.4%. Conclusion The accuracy of the audio-based approach was significantly higher for classifying between different foods. However, this accuracy comes at the expense of computational overhead and increased power dissipation due to the higher sample rates required to process audio signals compared to inertial sensor data.
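The F-measures compared above are harmonic means of precision and recall, computed per class from detection counts:

```python
def f_measure(tp, fp, fn):
    """F1 score: harmonic mean of precision and recall. This is the metric
    used to compare the audio- and piezo-based pipelines."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Toy counts: 8 true detections, 2 false alarms, 2 missed events.
score = f_measure(tp=8, fp=2, fn=2)
```

Because the harmonic mean punishes whichever of precision or recall is lower, a detector cannot inflate its F-measure by flagging every swallow-like vibration.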