Although awareness of sleep disorders is increasing, limited information is available on whole-night detection of snoring. Our study aimed to develop and validate a robust, high-performance, and sensitive whole-night snore detector based on non-contact technology.
Sounds during polysomnography (PSG) were recorded using a directional condenser microphone placed 1 m above the bed. An AdaBoost classifier was trained and validated on manually labeled snoring and non-snoring acoustic events.
Sixty-seven subjects (age 52.5 ± 13.5 years, BMI 30.8 ± 4.7 kg/m², m/f 40/27) referred for PSG for obstructive sleep apnea diagnosis were prospectively and consecutively recruited. Twenty-five subjects were used for the design study; the validation study was blindly performed on the remaining forty-two subjects.
To train the proposed sound detector, >76,600 acoustic episodes collected in the design study were manually classified by three scorers into snore and non-snore episodes (e.g., bedding noise, coughing, environmental). A feature selection process was applied to select the most discriminative features extracted from time and spectral domains. The average snore/non-snore detection rate (accuracy) for the design group was 98.4% based on a ten-fold cross-validation technique. When tested on the validation group, the average detection rate was 98.2% with sensitivity of 98.0% (snore as a snore) and specificity of 98.3% (noise as noise).
Audio-based features extracted from time and spectral domains can accurately discriminate between snore and non-snore acoustic events. This audio analysis approach enables detection and analysis of snoring sounds from a full night in order to produce quantified measures for objective follow-up of patients.
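For illustration, the type of classifier used in this study, AdaBoost over simple weak learners, can be sketched in plain Python with decision stumps. This is a minimal, generic sketch, not the study's implementation, and the toy feature vectors below are hypothetical:

```python
import math

def train_stump(X, y, w):
    """Find the threshold stump (feature, threshold, polarity) minimizing weighted error."""
    n, d = len(X), len(X[0])
    best = (0, 0.0, 1, float("inf"))  # feature index, threshold, polarity, weighted error
    for j in range(d):
        for t in sorted(set(x[j] for x in X)):
            for pol in (1, -1):
                err = sum(w[i] for i in range(n)
                          if (pol if X[i][j] >= t else -pol) != y[i])
                if err < best[3]:
                    best = (j, t, pol, err)
    return best

def adaboost(X, y, rounds=10):
    """Labels y are in {-1, +1}. Returns a list of (alpha, stump) pairs."""
    n = len(X)
    w = [1.0 / n] * n
    ensemble = []
    for _ in range(rounds):
        j, t, pol, err = train_stump(X, y, w)
        err = max(err, 1e-10)  # avoid division by zero on a perfect stump
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, (j, t, pol)))
        # Reweight: misclassified episodes gain weight for the next round
        for i in range(n):
            pred = pol if X[i][j] >= t else -pol
            w[i] *= math.exp(-alpha * y[i] * pred)
        s = sum(w)
        w = [wi / s for wi in w]
    return ensemble

def predict(ensemble, x):
    """Weighted vote of all stumps."""
    score = sum(alpha * (pol if x[j] >= t else -pol)
                for alpha, (j, t, pol) in ensemble)
    return 1 if score >= 0 else -1
```

Each boosting round reweights the training episodes so that subsequent stumps focus on previously misclassified events; the final snore/non-snore label is a weighted vote over all stumps.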
Falls are a major risk for elderly people living independently. Rapid detection of fall events can reduce the rate of mortality and raise the chances of surviving the event and returning to independent living. In the last two decades, several technological solutions for the detection of falls have been published, but most of them suffer from critical limitations. In this paper, we present a proof of concept for an automatic fall detection system for elderly people. The system is based on floor vibration and sound sensing, and uses signal processing and pattern recognition algorithms to discriminate between fall events and other events. The classification is based on features such as the shock response spectrum and mel-frequency cepstral coefficients. To simulate human falls, we used a human-mimicking doll, "Rescue Randy." The proposed solution is unique, reliable, and does not require the person to wear anything. It is designed to detect fall events in critical cases in which the person is unconscious or under stress. In preliminary experiments, the proposed system detected falls of the human-mimicking doll with a sensitivity of 97.5% and a specificity of 98.6%.
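The mel-frequency cepstral coefficients mentioned above rest on the standard mel scale, which spaces filter bands to approximate human pitch perception. A minimal sketch of the frequency mapping (illustrative only, not the paper's code):

```python
import math

def hz_to_mel(f):
    """Standard (HTK-style) mel-scale formula."""
    return 2595.0 * math.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    """Inverse mapping back to Hz."""
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_band_edges(f_low, f_high, n_bands):
    """Band edge frequencies (Hz) equally spaced on the mel scale,
    as used to place the triangular filters of an MFCC filterbank."""
    m_low, m_high = hz_to_mel(f_low), hz_to_mel(f_high)
    step = (m_high - m_low) / (n_bands + 1)
    return [mel_to_hz(m_low + i * step) for i in range(n_bands + 2)]
```

Equal spacing on the mel axis yields filterbank bands that are narrow at low frequencies and progressively wider at high frequencies.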
An automatic non-contact cough detector designed especially for night audio recordings that can distinguish coughs from snores and other sounds is presented. Two different classifiers were implemented and tested: a Gaussian Mixture Model (GMM) and a Deep Neural Network (DNN). The detected coughs were analyzed and compared in different sleep stages and in terms of severity of Obstructive Sleep Apnea (OSA), along with age, Body Mass Index (BMI), and gender. The database was composed of nocturnal audio signals from 89 subjects recorded during a polysomnography study. The DNN-based system outperformed the GMM-based system, at 99.8% accuracy, with a sensitivity and specificity of 86.1% and 99.9%, respectively (Positive Predictive Value (PPV) of 78.4%). Cough events were significantly more frequent during wakefulness than in the sleep stages (p < 0.0001) and were significantly less frequent during deep sleep than in other sleep stages (p < 0.0001). A positive correlation was found between BMI and the number of nocturnal coughs (R = 0.232, p < 0.05), and between the number of nocturnal coughs and OSA severity in men (R = 0.278, p < 0.05). This non-contact cough detection system may thus be implemented to track the progression of respiratory illnesses and test reactions to different medications even at night when a contact sensor is uncomfortable or infeasible.
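The accuracy, sensitivity, specificity, and PPV figures reported above all follow directly from the detector's confusion matrix. As a minimal illustration (the labels below are hypothetical, not the study's data):

```python
def detection_metrics(y_true, y_pred, positive=1):
    """Confusion-matrix metrics for a binary event detector."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p != positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    return {
        "accuracy": (tp + tn) / len(y_true),          # all correct / all events
        "sensitivity": tp / (tp + fn) if tp + fn else 0.0,  # cough detected as cough
        "specificity": tn / (tn + fp) if tn + fp else 0.0,  # non-cough kept as non-cough
        "ppv": tp / (tp + fp) if tp + fp else 0.0,          # detections that are real coughs
    }
```

Note how a rare positive class (coughs amid a whole night of audio) can yield very high accuracy and specificity alongside a noticeably lower PPV, exactly the pattern in the reported results.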
To develop and validate a novel non-contact system for whole-night sleep evaluation using breathing sounds analysis (BSA).
Whole-night breathing sounds (using an ambient microphone) and polysomnography (PSG) were simultaneously collected at a sleep laboratory (mean recording time 7.1 hours). A set of acoustic features quantifying breathing patterns was developed to distinguish between sleep and wake epochs (30-sec segments). Epochs (n = 59,108 design study and n = 68,560 validation study) were classified using an AdaBoost classifier and validated epoch-by-epoch for sensitivity, specificity, positive and negative predictive values, accuracy, and Cohen's kappa. Sleep quality parameters were calculated based on the sleep/wake classifications and compared with PSG for validity.
University affiliated sleep-wake disorder center and biomedical signal processing laboratory.
One hundred and fifty patients (age 54.0 ± 14.8 years, BMI 31.6 ± 5.5 kg/m², m/f 97/53) referred for PSG were prospectively and consecutively recruited. The system was trained (design study) on 80 subjects; the validation study was blindly performed on the remaining 70 subjects.
Epoch-by-epoch accuracy for the validation study was 83.3%, with sensitivity of 92.2% (sleep as sleep), specificity of 56.6% (awake as awake), and Cohen's kappa of 0.508. Comparing sleep quality parameters of BSA and PSG demonstrated average errors of 16.6 min (sleep latency), 35.8 min (total sleep time), 29.6 min (wake after sleep onset), and 8% (sleep efficiency).
This study provides evidence that sleep-wake activity and sleep quality parameters can be reliably estimated solely using breathing sound analysis. This study highlights the potential of this innovative approach to measure sleep in research and clinical circumstances.
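Cohen's kappa, used above for epoch-by-epoch validation, corrects raw agreement for the agreement expected by chance given each rater's label frequencies. A minimal sketch (the example sleep/wake sequences are hypothetical):

```python
def cohens_kappa(a, b):
    """Chance-corrected agreement between two label sequences of equal length."""
    n = len(a)
    labels = set(a) | set(b)
    # Observed agreement: fraction of epochs labeled identically
    po = sum(1 for x, y in zip(a, b) if x == y) / n
    # Chance agreement: product of each rater's marginal label frequencies
    pe = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)
    return (po - pe) / (1 - pe) if pe != 1 else 1.0
```

Because most of the night is sleep, a naive sleep-everywhere classifier would score high raw accuracy; kappa (0.508 here) is the more honest summary of sleep/wake agreement.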
Autism Spectrum Disorder (ASD) is a neurodevelopmental disorder that involves difficulties in social communication. Previous research has demonstrated that these difficulties are apparent in the way children with ASD speak, indicating that it may be possible to estimate ASD severity using quantitative features of speech. Here, we extracted a variety of prosodic, acoustic, and conversational features from speech recordings of Hebrew-speaking children who completed an Autism Diagnostic Observation Schedule (ADOS) assessment. Sixty features were extracted from the recordings of 72 children, and 21 of the features were significantly correlated with the children's ADOS scores. Positive correlations were found with pitch variability and Zero Crossing Rate (ZCR), while negative correlations were found with the speed and number of vocal responses to the clinician, and the overall number of vocalizations. Using these features, we built several Deep Neural Network (DNN) algorithms to estimate ADOS scores and compared their performance with Linear Regression and Support Vector Regression (SVR) models. We found that a Convolutional Neural Network (CNN) yielded the best results. This algorithm predicted ADOS scores with a mean RMSE of 4.65 and a mean correlation of 0.72 with the true ADOS scores when trained and tested on different sub-samples of the available data. Automated algorithms with the ability to predict ASD severity in a reliable and sensitive manner have the potential of revolutionizing early ASD identification, quantification of symptom severity, and assessment of treatment efficacy.
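Two of the quantities above, the Zero Crossing Rate feature and the RMSE performance criterion, have simple precise definitions. A minimal sketch (illustrative only, with hypothetical sample values):

```python
def zero_crossing_rate(frame):
    """Fraction of consecutive sample pairs whose sign differs;
    higher for noisy or high-frequency speech content."""
    crossings = sum(1 for a, b in zip(frame, frame[1:]) if (a >= 0) != (b >= 0))
    return crossings / (len(frame) - 1)

def rmse(predicted, true):
    """Root mean squared error between predicted and true scores."""
    n = len(predicted)
    return (sum((p - t) ** 2 for p, t in zip(predicted, true)) / n) ** 0.5
```

A rapidly alternating signal has ZCR near 1, while a slowly varying one has ZCR near 0; RMSE of 4.65 means the CNN's ADOS predictions deviate from the true scores by roughly 4.65 points on average (in the quadratic-mean sense).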
Sleep staging is essential for evaluating sleep and its disorders. Most sleep studies today incorporate contact sensors that may interfere with natural sleep and may bias results. Moreover, the availability of sleep studies is limited, and many people with sleep disorders remain undiagnosed. Here, we present a pioneering approach for rapid eye movement (REM), non-REM, and wake staging (macro-sleep stages, MSS) estimation based on sleep sounds analysis. Our working hypothesis is that the properties of sleep sounds, such as breathing and movement, within each MSS are different. We recorded audio signals, using non-contact microphones, of 250 patients referred to a polysomnography (PSG) study in a sleep laboratory. We trained an ensemble of one-layer, feedforward neural network classifiers fed by time-series of sleep sounds to produce real-time and offline analyses. The audio-based system was validated and produced an epoch-by-epoch (standard 30-sec segments) agreement with PSG of 87% with Cohen's kappa of 0.7. This study shows the potential of audio signal analysis as a simple, convenient, and reliable MSS estimation without contact sensors.
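An ensemble's per-epoch stage outputs can be fused in several ways; a simple majority vote across classifiers (a generic sketch, not necessarily the fusion rule used in the study) looks like:

```python
from collections import Counter

def majority_vote(epoch_predictions):
    """Fuse one epoch's labels from several classifiers; ties go to the
    first-seen label (Counter preserves insertion order)."""
    return Counter(epoch_predictions).most_common(1)[0][0]

def fuse_hypnogram(all_predictions):
    """all_predictions: one label sequence per classifier (e.g., 'W'/'N'/'R'
    per 30-sec epoch). Returns the fused per-epoch sequence."""
    return [majority_vote(epoch) for epoch in zip(*all_predictions)]
```

Fusing several weak stage estimators this way tends to smooth out uncorrelated per-classifier errors on individual epochs.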
Audio analysis of cough sounds can provide objective measures of respiratory clinical features such as cough frequency. Audio-based 24-hour ambulatory cough monitoring systems currently lead the way in providing these objective measures across a range of respiratory diseases. However, to preserve data privacy in cough audio recordings, there is interest in removing any identifiable information contained within patient and third-party speech. In this study we employed real-life patient audio recordings from the VitaloJAK 24-hour ambulatory cough monitoring device. We developed an audio-based speech obfuscation system that specifically detects and obfuscates intelligible speech while retaining cough events. An algorithm was developed to detect vowel sounds, since these carry most of the intelligible information. The detection algorithm employed audio features including energy, spectral centroid, and an adaptive voiced speech feature. The detected vowel sounds were obfuscated by replacing the original audio signal with a synthetic version generated using the original energy and pitch but without formant information. The system was designed using seven hours of audio recordings from seven different patients with respiratory disease. The system was then evaluated on five 24-hour real-life patient audio recordings (120 hours in total), which consisted of 21.6 hours of intelligible speech along with 3,376 coughs. The system obfuscated 99.3% (21.5 hours) of intelligible speech while retaining 99.6% (3,362) of coughs. This speech obfuscation system can preserve data privacy while using 24-hour ambulatory cough monitors. Furthermore, it can retain cough events and other aspects of 24-hour cough recordings which may be of clinical interest.
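The obfuscation step can be pictured as replacing each detected vowel segment with a synthetic tone at the estimated pitch, scaled to the segment's RMS energy, so loudness and intonation survive while formant (identity-bearing) information is discarded. This is an illustrative simplification of that idea, with hypothetical parameter values, not the system's actual synthesis code:

```python
import math

def obfuscate_segment(samples, pitch_hz, sr):
    """Replace a detected vowel segment with a sine at the estimated pitch,
    scaled so the synthetic segment has the same RMS energy as the original."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    amp = rms * math.sqrt(2.0)  # a sine of this amplitude has RMS equal to `rms`
    return [amp * math.sin(2.0 * math.pi * pitch_hz * n / sr)
            for n in range(len(samples))]
```

Because only energy and pitch are carried over, the replaced audio preserves prosody-level cues but no longer contains the spectral envelope that makes speech intelligible.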
A sound level meter is the gold-standard approach for snoring evaluation. Using this approach, it was established that snoring intensity (in dB) is higher in men and is associated with an increased apnea-hypopnea index (AHI). In this study, we performed a systematic analysis of breathing and snoring sound characteristics using an algorithm designed to detect and analyze breathing and snoring sounds. The effect of sex, sleep stages, and AHI on snoring characteristics was explored.
We consecutively recruited 121 subjects referred for diagnosis of obstructive sleep apnea. A whole night audio signal was recorded using noncontact ambient microphone during polysomnography. A large number (> 290,000) of breathing and snoring (> 50 dB) events were analyzed. Breathing sound events were detected using a signal-processing algorithm that discriminates between breathing and nonbreathing (noise events) sounds.
The snoring index (SI, events/h) was 23% higher in men (p = 0.04), and in both sexes SI gradually declined by 50% across sleep time (p < 0.01), independent of AHI. SI was higher in slow wave sleep (p < 0.03) than in S2 and rapid eye movement sleep, and men had higher SI than women in all sleep stages (p < 0.05). Snoring intensity was similar in both sexes in all sleep stages and independent of AHI. For both sexes, no correlation was found between AHI and snoring intensity (r = 0.1, p = 0.291).
This audio analysis approach enables systematic detection and analysis of breathing and snoring sounds from a full night recording. Snoring intensity is similar in both sexes and was not affected by AHI.
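The snoring index and its decline across the night reduce to counting above-threshold events per unit time. A minimal sketch with hypothetical event levels and times (not the study's data):

```python
def snoring_index(event_db, recording_hours, threshold_db=50.0):
    """SI: events above the intensity threshold per hour of recording."""
    return sum(1 for db in event_db if db > threshold_db) / recording_hours

def si_by_interval(event_times_s, total_hours, n_intervals=3):
    """SI computed separately over equal portions of the night,
    e.g., thirds, to examine the decline across sleep time."""
    edges = [total_hours * 3600.0 * i / n_intervals for i in range(n_intervals + 1)]
    interval_h = total_hours / n_intervals
    return [sum(1 for t in event_times_s if edges[i] <= t < edges[i + 1]) / interval_h
            for i in range(n_intervals)]
```

Splitting the night into equal intervals is one straightforward way to quantify the gradual SI decline reported above.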
Purpose
This study aims to develop and test a new computer‐aided detection (CAD) approach and scheme, assessing the likelihood of a subject harboring breast abnormalities.
Methods
The proposed scheme is based on the analysis of both local and global bilateral mammographic feature asymmetries. The level of local or global asymmetry is assessed by analyzing mammographic features extracted from the bilaterally matched regions of interest (ROIs), or from the entire breast, respectively. The selected local and global feature vectors are combined and classified using a maximum likelihood obtained from a naïve Bayes classifier. This scheme was evaluated using a leave-one-case-out cross-validation method applied to 243 subjects from the mini-MIAS and INbreast databases. In addition, the result is compared with a conventional unilateral (single-image-based) CAD scheme.
Results
Using a case‐based evaluation approach and an area under curve (AUC) of the receiver operating characteristic (ROC) as a performance index, the new scheme yielded AUC = 0.79 ± 0.07, an 8.2% increase compared with AUC = 0.73 ± 0.08 obtained using the unilateral image‐based CAD scheme.
Conclusions
This work demonstrates that applying bilateral asymmetry analysis increases the discriminatory power of CAD schemes while improving the likelihood assessment of the presence of breast abnormalities. Therefore, the proposed CAD approach provides the radiologist with beneficial supplementary information and can flag high-risk cases.
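The AUC used above admits a simple rank-based (Mann-Whitney) reading: it is the probability that a randomly chosen abnormal case receives a higher classifier score than a randomly chosen normal case. A minimal sketch with hypothetical scores:

```python
def auc_from_scores(pos_scores, neg_scores):
    """ROC AUC via the Mann-Whitney formulation: the fraction of
    (positive, negative) pairs where the positive outscores the negative;
    ties count half."""
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))
```

Under this reading, AUC = 0.79 means the bilateral scheme ranks an abnormal case above a normal one in 79% of such pairs, versus 73% for the unilateral scheme.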