Wearable devices have evolved as screening tools for atrial fibrillation (AF). A photoplethysmographic (PPG) AF detection algorithm was developed and applied to a convenient smartphone-based device with good accuracy. However, patients with paroxysmal AF frequently exhibit premature atrial complexes (PACs), which result in poor unmanned AF detection, mainly because rule-based or handcrafted machine learning techniques are limited in diagnostic accuracy and reliability.
This study aimed to develop deep learning (DL) classifiers using PPG data to detect AF from the sinus rhythm (SR) in the presence of PACs after successful cardioversion.
We examined 75 patients with AF who underwent successful elective direct-current cardioversion (DCC). Electrocardiogram and pulse oximetry data over a 15-min period were obtained before and after DCC and labeled as AF or SR. A 1-dimensional convolutional neural network (1D-CNN) and a recurrent neural network (RNN) were chosen as the 2 DL architectures. A PAC indicator estimated the burden of PACs in the PPG dataset. We defined a metric called the confidence level (CL) of AF or SR diagnosis and compared the CLs of true and false diagnoses. We also compared the diagnostic performance of the 1D-CNN and RNN with that of previously developed AF detectors (a support vector machine using the root-mean-square of successive differences (RMSSD) of RR intervals and Shannon entropy; autocorrelation; and an ensemble combining the 2 previous methods) using 10 repetitions of 5-fold cross-validation.
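The classical AF detectors mentioned above rely on handcrafted descriptors of RR-interval irregularity. As a minimal sketch (the interval values and equal-width binning choice are illustrative assumptions, not the study's exact implementation), the two features can be computed as:

```python
import math
from collections import Counter

def rmssd(rr_ms):
    """Root-mean-square of successive differences of RR intervals (ms)."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

def shannon_entropy(rr_ms, n_bins=16):
    """Shannon entropy (bits) of RR intervals over equal-width bins."""
    lo, hi = min(rr_ms), max(rr_ms)
    width = (hi - lo) / n_bins or 1.0          # guard against zero-width bins
    bins = Counter(min(int((x - lo) / width), n_bins - 1) for x in rr_ms)
    n = len(rr_ms)
    return -sum((c / n) * math.log2(c / n) for c in bins.values())

# An irregular (AF-like) rhythm yields higher RMSSD and entropy than regular SR.
regular   = [800, 802, 801, 799, 800, 801, 800, 799]
irregular = [720, 910, 640, 1050, 780, 560, 990, 700]
print(rmssd(regular), rmssd(irregular))
```

Rule-based detectors threshold such features, which is precisely where frequent PACs cause trouble: PACs raise RMSSD and entropy during SR, pushing samples toward the AF side of the decision boundary.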
Among the 14,298 training samples containing PPG data, 7157 samples were obtained during the post-DCC period. The PAC indicator estimated that 29.79% (2132/7157) of post-DCC samples had PACs. The diagnostic accuracy for AF versus SR was 99.32% (70,925/71,410) versus 95.85% (68,602/71,570) with the 1D-CNN and 98.27% (70,176/71,410) versus 96.04% (68,736/71,570) with the RNN. The areas under the receiver operating characteristic curves of the 2 DL classifiers were 0.998 (95% CI 0.995-1.000) for the 1D-CNN and 0.996 (95% CI 0.993-0.998) for the RNN, significantly higher than those of the other AF detectors (P<.001). When the dataset was assumed to emulate a sufficient number of patients in training, both DL classifiers improved their diagnostic performance even further, especially for samples with a high burden of PACs. The average CLs for true versus false classification were 98.56% versus 78.75% for the 1D-CNN and 98.37% versus 82.57% for the RNN (P<.001 for all cases).
New DL classifiers could detect AF using PPG monitoring signals with high diagnostic accuracy even with frequent PACs and could outperform previously developed AF detectors. Although diagnostic performance decreased as the burden of PACs increased, performance improved when samples from more patients were trained. Moreover, the reliability of the diagnosis could be indicated by the CL. Wearable devices sensing PPG signals with DL classifiers should be validated as tools to screen for AF.
The growing public interest in and awareness of the significance of sleep is driving demand for sleep monitoring at home. In addition to various commercially available wearable and nearable devices, sound-based sleep staging via deep learning is emerging as a viable alternative because of its convenience and potential accuracy. However, sound-based sleep staging has only been studied using in-laboratory sound data. In real-world sleep environments (homes), there is abundant background noise, in contrast to quiet, controlled environments such as laboratories. The use of sound-based sleep staging at home has not been investigated, although it is essential for practical daily use. The challenges are the lack of home data annotated with sleep stages and the expected huge expense of acquiring enough such data to train a large-scale neural network.
This study aims to develop and validate a deep learning method that performs sound-based sleep staging using audio recordings acquired from various uncontrolled home environments.
To overcome the lack of home data with known sleep stages, we adopted advanced training techniques and combined home data with hospital data. The training of the model consisted of 3 components: (1) the original supervised learning using 812 pairs of hospital polysomnography (PSG) and audio recordings, plus the 2 newly adopted components; (2) transfer learning from hospital to home sounds by adding 829 smartphone audio recordings made at home; and (3) consistency training using augmented hospital sound data. Augmented data were created by adding 8255 home noise recordings to the hospital audio recordings. In addition, an independent test set was built by collecting 45 pairs of overnight PSG and smartphone audio recordings at home to examine the performance of the trained model.
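The augmentation step, adding home noise to hospital recordings, can be sketched as mixing a noise waveform into a signal at a chosen signal-to-noise ratio. The signals, seed, and SNR below are illustrative assumptions; only the RMS-based scaling idea is the point:

```python
import math, random

def rms(x):
    """Root-mean-square amplitude of a waveform."""
    return math.sqrt(sum(v * v for v in x) / len(x))

def mix_at_snr(signal, noise, snr_db):
    """Scale `noise` so the mixture has the requested signal-to-noise ratio,
    then add it to `signal` sample-by-sample."""
    gain = rms(signal) / (rms(noise) * 10 ** (snr_db / 20))
    return [s + gain * n for s, n in zip(signal, noise)]

random.seed(0)
breath = [math.sin(2 * math.pi * 0.3 * t / 100) for t in range(1000)]  # toy breathing tone
noise  = [random.gauss(0, 1) for _ in range(1000)]                     # toy home noise
augmented = mix_at_snr(breath, noise, snr_db=5)  # hospital sound + scaled home noise
```

Training the model on both `breath` and `augmented` versions of the same labeled epoch is what lets consistency training enforce noise-invariant predictions.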
The accuracy of the model was 76.2% (63.4% for wake, 64.9% for rapid eye movement [REM], and 83.6% for non-REM) on our test set. The macro F1-score and mean per-class sensitivity were 0.714 and 0.706, respectively. The performance was robust across subgroups defined by age, gender, BMI, and sleep apnea severity (accuracy 73.4%-79.4%). In an ablation study, we evaluated the contribution of each component. While supervised learning alone achieved an accuracy of 69.2% on home sound data, adding consistency training to supervised learning increased accuracy to a larger degree (+4.3%) than adding transfer learning (+0.1%). The best performance was achieved when both transfer learning and consistency training were adopted (+7.0%).
This study shows that sound-based sleep staging is feasible for home use. By adopting 2 advanced techniques (transfer learning and consistency training), the deep learning model robustly predicts sleep stages from sounds recorded in various uncontrolled home environments, using only a smartphone and no special equipment.
Multinight monitoring can be helpful for the diagnosis and management of obstructive sleep apnea (OSA). For this purpose, it is necessary to be able to detect OSA in real time in a noisy home environment. Sound-based OSA assessment holds great potential since it can be integrated with smartphones to provide fully noncontact monitoring of OSA at home.
The purpose of this study is to develop a predictive model that can detect OSA in real time, even in a home environment where various noises exist.
This study included 1018 polysomnography (PSG) audio data sets, 297 smartphone audio data sets synced with PSG, and a home noise data set containing 22,500 noises to train the model to predict breathing events, such as apneas and hypopneas, based on breathing sounds that occur during sleep. The whole breathing sound of each night was divided into 30-second epochs and labeled as "apnea," "hypopnea," or "no-event," and the home noises were used to make the model robust to a noisy home environment. The performance of the prediction model was assessed using epoch-by-epoch prediction accuracy and OSA severity classification based on the apnea-hypopnea index (AHI).
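The epoch labels map directly onto the AHI-based severity classification used for evaluation. A minimal sketch (counting each "apnea"/"hypopnea" epoch as one respiratory event is a simplifying assumption; the conventional AHI cutoffs are standard clinical practice):

```python
def ahi_from_epochs(labels, epoch_sec=30):
    """Estimate the apnea-hypopnea index from per-epoch labels, counting each
    'apnea'/'hypopnea' epoch as one respiratory event (a simplification)."""
    hours = len(labels) * epoch_sec / 3600
    events = sum(1 for l in labels if l in ("apnea", "hypopnea"))
    return events / hours

def osa_severity(ahi):
    """Conventional cutoffs: <5 normal, 5-15 mild, 15-30 moderate, >=30 severe."""
    if ahi < 5:
        return "normal"
    if ahi < 15:
        return "mild"
    if ahi < 30:
        return "moderate"
    return "severe"

# Hypothetical 8-hour night: 960 thirty-second epochs, 200 of them events.
night = ["no-event"] * 760 + ["apnea"] * 120 + ["hypopnea"] * 80
print(ahi_from_epochs(night))          # 200 events / 8 h = 25.0
print(osa_severity(ahi_from_epochs(night)))
```

The study's screening threshold (AHI ≥ 15) corresponds to the moderate-or-worse boundary above.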
Epoch-by-epoch OSA event detection showed an accuracy of 86% and a macro F1-score of 0.75 for the 3-class OSA event detection task. The model had an accuracy of 92% for "no-event," 84% for "apnea," and 51% for "hypopnea." Most misclassifications were made for "hypopnea," with 15% and 34% of "hypopnea" epochs being wrongly predicted as "apnea" and "no-event," respectively. The sensitivity and specificity of the OSA severity classification (AHI≥15) were 0.85 and 0.84, respectively.
Our study presents a real-time epoch-by-epoch OSA detector that works in a variety of noisy home environments. Based on this, additional research is needed to verify the usefulness of various multinight monitoring and real-time diagnostic technologies in the home environment.
Consumer sleep trackers (CSTs) have gained significant popularity because they enable individuals to conveniently monitor and analyze their sleep. However, limited studies have comprehensively validated the performance of widely used CSTs. Our study therefore investigated popular CSTs based on various biosignals and algorithms by assessing the agreement with polysomnography.
This study aimed to validate the accuracy of various types of CSTs through a comparison with in-lab polysomnography. Additionally, by including widely used CSTs and conducting a multicenter study with a large sample size, this study seeks to provide comprehensive insights into the performance and applicability of these CSTs for sleep monitoring in a hospital environment.
The study analyzed 11 commercially available CSTs, including 5 wearables (Google Pixel Watch, Galaxy Watch 5, Fitbit Sense 2, Apple Watch 8, and Oura Ring 3), 3 nearables (Withings Sleep Tracking Mat, Google Nest Hub 2, and Amazon Halo Rise), and 3 airables (SleepRoutine, SleepScore, and Pillow). The 11 CSTs were divided into 2 groups, ensuring maximum inclusion while avoiding interference between the CSTs within each group. Each group (comprising 8 CSTs) was compared against polysomnography.
The study enrolled 75 participants from a tertiary hospital and a primary sleep-specialized clinic in Korea. Across the 2 centers, we collected a total of 3890 hours of sleep sessions based on 11 CSTs, along with 543 hours of polysomnography recordings. Each CST sleep recording covered an average of 353 hours. We analyzed a total of 349,114 epochs from the 11 CSTs compared with polysomnography, where epoch-by-epoch agreement in sleep stage classification showed substantial performance variation. More specifically, the highest macro F1 score was 0.69, while the lowest macro F1 score was 0.26. The sleep trackers exhibited diverse performances across sleep stages, with SleepRoutine excelling in the wake and rapid eye movement stages, and wearables like Google Pixel Watch and Fitbit Sense 2 showing superiority in the deep stage. There was a distinct trend in sleep measure estimation according to the type of device. Wearables showed high proportional bias in sleep efficiency, while nearables exhibited high proportional bias in sleep latency. Subgroup analyses of the sleep trackers revealed variations in macro F1 scores based on factors such as BMI, sleep efficiency, and apnea-hypopnea index, while the differences between male and female subgroups were minimal.
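Proportional bias, the kind reported here for sleep efficiency and sleep latency, is commonly quantified in Bland-Altman style as the slope of the device-minus-PSG difference regressed on the pairwise mean. The formulation and latency values below are illustrative assumptions, not the paper's stated procedure:

```python
def proportional_bias(reference, device):
    """Slope of (device - reference) regressed on the pairwise mean:
    a nonzero slope indicates error that grows with the measured value."""
    means = [(r + d) / 2 for r, d in zip(reference, device)]
    diffs = [d - r for r, d in zip(reference, device)]
    mbar = sum(means) / len(means)
    dbar = sum(diffs) / len(diffs)
    cov = sum((m - mbar) * (x - dbar) for m, x in zip(means, diffs))
    var = sum((m - mbar) ** 2 for m in means)
    return cov / var

# A tracker that overestimates sleep latency by 20% shows a positive slope.
psg_latency = [5, 10, 20, 40, 60]                        # minutes (hypothetical)
cst_latency = [round(1.2 * v, 1) for v in psg_latency]
print(round(proportional_bias(psg_latency, cst_latency), 3))
```

A slope near zero would indicate only a constant offset, which is easier to correct for than bias that scales with the quantity being measured.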
Our study showed that among the 11 CSTs examined, specific CSTs showed substantial agreement with polysomnography, indicating their potential application in sleep monitoring, while other CSTs were partially consistent with polysomnography. This study offers insights into the strengths of CSTs within the 3 different classes for individuals interested in wellness who wish to understand and proactively manage their own sleep.
This paper presents the first full end-to-end deep learning framework for the swift prediction of lithium-ion battery remaining useful life (RUL). While lithium-ion batteries offer the advantages of high efficiency and low cost, their instability and varying lifetimes remain challenges. To prevent the sudden failure of lithium-ion batteries, researchers have worked to develop ways of predicting their remaining useful life, especially using data-driven approaches. In this study, we sought a higher resolution of inter-cycle aging for faster and more accurate predictions by considering temporal patterns and cross-data correlations in the raw data, specifically terminal voltage, current, and cell temperature. We performed an in-depth analysis of the deep learning models using an uncertainty metric, t-SNE of features, and various battery-related tasks. The proposed framework significantly accelerated remaining useful life prediction (25X faster) and achieved a 10.6% mean absolute error rate.
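A "mean absolute error rate" for RUL prediction is commonly computed as the mean absolute percentage error between predicted and true remaining cycles; whether the paper uses exactly this normalization is an assumption here, and the cycle counts below are hypothetical:

```python
def mape(true_rul, pred_rul):
    """Mean absolute percentage error between true and predicted RUL, in %."""
    return 100 * sum(abs(t - p) / t for t, p in zip(true_rul, pred_rul)) / len(true_rul)

# Hypothetical cells: true vs predicted remaining cycles.
true_cycles = [1000, 800, 500]
pred_cycles = [900, 840, 525]
print(round(mape(true_cycles, pred_cycles), 2))   # (10% + 5% + 5%) / 3 = 6.67
```

Normalizing by the true RUL makes errors comparable across batteries with very different lifetimes, which matters when only four cycles of the target battery are observed.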
• The first full end-to-end deep learning framework for battery RUL prediction.
• Swift RUL prediction with only four cycles of the target battery (25X faster).
• Analyzing temporal patterns of terminal voltage, current, and cell temperature.
• Reliable use of deep learning-based RUL prediction via uncertainty estimates.
• Interpretable analysis of RUL prediction by deep neural networks.
Abstract
Introduction
Convenient sleep tracking with mobile devices such as smartphones is desirable for people who want to easily objectify their sleep. The objective of this study was to introduce a deep learning model for sound-based sleep staging using audio data recorded with smartphones during sleep.
Methods
Two different audio datasets were used. One (N = 1,154) was extracted from polysomnography (PSG) data and the other (N = 327) was recorded using a smartphone during PSG from independent subjects. The performance of sound-based sleep staging always depends on the quality of the audio. In practical conditions (noncontact recording with smartphone microphones), breathing and body movement sounds during the night are so weak that their energy is sometimes smaller than that of ambient noise. The audio was converted into Mel spectrograms to detect latent temporal frequency patterns of breathing and body movement sounds amid ambient noise. The proposed neural network model consisted of two sub-models: the first extracted features from each 30-second epoch Mel spectrogram, and the second classified sleep stages through inter-epoch analysis of the extracted features.
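The Mel spectrogram's usefulness here comes from the Mel frequency scale, which compresses high frequencies so that equal Mel bands give finer resolution to the low-frequency range where breathing energy lies. One common variant of the Hz-to-Mel mapping (the HTK formula; libraries differ slightly in their constants) is:

```python
import math

def hz_to_mel(f_hz):
    """HTK-style Mel scale: mel = 2595 * log10(1 + f / 700)."""
    return 2595 * math.log10(1 + f_hz / 700)

def mel_to_hz(mel):
    """Inverse of the mapping above."""
    return 700 * (10 ** (mel / 2595) - 1)

# The constant 2595 is chosen so that 1 kHz maps to approximately 1000 mel;
# above that, equal Hz steps map to progressively smaller Mel steps.
print(round(hz_to_mel(1000), 1), round(hz_to_mel(8000), 1))
```

A Mel spectrogram applies a bank of triangular filters spaced uniformly on this scale to a short-time Fourier transform, then typically takes the log of the filter energies.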
Results
Our model achieved 70% epoch-by-epoch agreement for 4-class (wake, light, deep, rapid eye movement) stage classification and robust performance across various signal-to-noise conditions. More precisely, the model was correct in 77% of wake, 73% of light, 46% of deep, and 66% of REM epochs. Model performance was not considerably affected by the presence of sleep apnea, but degradation was observed with severe periodic limb movement. External validation with the smartphone dataset also showed 68% epoch-by-epoch agreement. Compared with some commercially available sleep trackers such as Fitbit Alta HR (0.6325 in mean per-class sensitivity) and SleepScore Max (0.565 in mean per-class sensitivity), our model showed superior performance on both PSG audio (0.655 in mean per-class sensitivity) and smartphone audio (0.6525 in mean per-class sensitivity).
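Mean per-class sensitivity, the comparison metric here, is simply the average recall across stages, which avoids letting the majority class dominate. A sketch using a hypothetical confusion matrix constructed to match the per-class recalls reported above (77%, 73%, 46%, 66%; off-diagonal counts are invented for illustration):

```python
def mean_per_class_sensitivity(confusion):
    """Average recall across classes; rows = true class, cols = predicted."""
    recalls = [row[i] / sum(row) for i, row in enumerate(confusion)]
    return sum(recalls) / len(recalls)

# Hypothetical 4-class (wake/light/deep/REM) matrix, 100 true epochs per class.
cm = [
    [77, 13,  2,  8],
    [10, 73, 10,  7],
    [ 2, 30, 46, 22],
    [12, 18,  4, 66],
]
print(mean_per_class_sensitivity(cm))   # (0.77 + 0.73 + 0.46 + 0.66) / 4 = 0.655
```

Because every class contributes equally, a poor deep-sleep recall (46%) pulls the metric down even though deep sleep occupies a minority of the night.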
Conclusion
To the best of our knowledge, this is the first end (Mel spectrogram-based feature extraction)-to-end (sleep staging) deep learning model that can work with audio data in practical conditions. Our proposed deep learning model for sound-based sleep staging has the potential to be integrated into a smartphone application for reliable at-home sleep tracking.
Abstract
Introduction
Daily sleep tracking at home is growing in demand as more and more people become aware of the significance of sleep. The objective of this study is to propose a sound-based sleep staging model based on deep learning that works well in home environments with audio recorded by ordinary smartphones.
Methods
Three different audio datasets were used. A labeled hospital dataset (PSG and audio, N=812) and an unlabeled home dataset (audio only, N=829) were used for training. A limited number of labeled sound recordings from homes (PSG and audio, N=45) were used for testing. Our proposed HomeSleepNet has three components: (1) supervised learning using the labeled hospital data, which trains the model to make correct predictions in hospital environments; (2) unsupervised domain adaptation (UDA), which uses both the labeled hospital data and the unlabeled home data and transfers the sleep staging capability from the hospital domain to the home domain by adversarial training; and (3) unsupervised data augmentation for consistency training (UDC), which augments hospital data by adding home noise and trains the model to make consistent predictions on original and augmented data. After training, HomeSleepNet is expected to perform robust sleep staging despite the presence of home noise.
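The UDC component penalizes disagreement between predictions on an original epoch and its noise-augmented version. One common way to measure that disagreement is the KL divergence between the two predicted stage distributions; the probabilities below are illustrative, and the exact consistency loss used in the paper is not specified here:

```python
import math

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) in nats between two discrete distributions."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def consistency_loss(pred_original, pred_augmented):
    """UDC-style penalty: predictions on noise-augmented audio should match
    predictions on the clean audio (measured here with KL divergence)."""
    return kl_divergence(pred_original, pred_augmented)

clean = [0.10, 0.80, 0.10]   # softmax over (wake, NREM, REM) for clean audio
noisy = [0.15, 0.70, 0.15]   # same epoch after adding home noise
print(round(consistency_loss(clean, noisy), 4))
```

Minimizing this term alongside the supervised loss pushes the network toward features that ignore the injected noise, which is what makes the hospital-trained model transferable to noisy homes.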
Results
HomeSleepNet achieved 76.2% accuracy on the sleep staging task in home environments for the 3-stage classification case (Wake, NREM, REM). Specifically, it correctly predicted 63.4% of wake, 83.6% of NREM sleep, and 64.9% of REM sleep. The contributions of UDA and UDC were demonstrated by the following results. The accuracy of the model trained without both was 69.2%. Either addition of UDA or UDC training to the model improved the performance, with increased accuracy of 69.3% for UDA and 73.5% for UDC. As expected, using both UDA and UDC (i.e., HomeSleepNet) achieved the best performance, with a 7% increase in accuracy compared to the model trained without both components.
Conclusion
To the best of our knowledge, this is the first sound-based sleep staging study conducted in home environments. Moreover, the sounds were recorded by commercial smartphones and not through professional devices. Our proposed model introduced a reliable and convenient method for daily sleep tracking at home.
Continuous photoplethysmography (PPG) monitoring with a wearable device may aid the early detection of atrial fibrillation (AF).
We aimed to evaluate the diagnostic performance of a ring-type wearable device (CardioTracker, CART), which can detect AF using deep learning analysis of PPG signals.
Patients with persistent AF who underwent cardioversion were recruited prospectively. We recorded PPG signals at the finger with CART and a conventional pulse oximeter before and after cardioversion over a period of 15 min (each instrument). Cardiologists validated the PPG rhythms with simultaneous single-lead electrocardiography. The PPG data were transmitted to a smartphone wirelessly and analyzed with a deep learning algorithm. We also validated the deep learning algorithm in 20 healthy subjects with sinus rhythm (SR).
In 100 study participants, CART generated a total of 13,038 30-s PPG samples (5850 for SR and 7188 for AF). Using the deep learning algorithm, the diagnostic accuracy, sensitivity, specificity, positive-predictive value, and negative-predictive value were 96.9%, 99.0%, 94.3%, 95.6%, and 98.7%, respectively. Although the diagnostic accuracy decreased with shorter sample lengths, the accuracy was maintained at 94.7% with 10-s measurements. For SR, the specificity decreased with higher variability of peak-to-peak intervals. However, for AF, CART maintained consistent sensitivity regardless of variability. Pulse rates had a lower impact on sensitivity than on specificity. The performance of CART was comparable to that of the conventional device when using a proper threshold. External validation showed that 94.99% (16,529/17,400) of the PPG samples from the control group were correctly identified with SR.
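The five reported metrics all derive from a single 2x2 confusion table of AF-positive and SR-negative samples. A sketch with hypothetical counts (not the study's raw table) showing how each is computed:

```python
def diagnostic_metrics(tp, fn, tn, fp):
    """Standard screening metrics from a 2x2 confusion table
    (tp/fn count the AF samples, tn/fp count the SR samples)."""
    return {
        "accuracy":    (tp + tn) / (tp + fn + tn + fp),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv":         tp / (tp + fp),
        "npv":         tn / (tn + fn),
    }

# Hypothetical counts: 99 of 100 AF samples and 94 of 100 SR samples correct.
m = diagnostic_metrics(tp=99, fn=1, tn=94, fp=6)
print({k: round(v, 3) for k, v in m.items()})
```

Note how PPV and NPV, unlike sensitivity and specificity, depend on the AF/SR mix in the sample set, so they would shift in a screening population with lower AF prevalence.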
A ring-type wearable device with deep learning analysis of PPG signals could accurately diagnose AF without relying on electrocardiography. With this device, continuous monitoring for AF may be promising in high-risk populations.
ClinicalTrials.gov NCT04023188; https://clinicaltrials.gov/ct2/show/NCT04023188.
Abstract
Introduction
For the diagnosis and management of obstructive sleep apnea (OSA), long-term multi-night monitoring is crucial, and convenient detection of OSA at home is required for this purpose. Sound recorded by a smartphone can provide a convenient way to detect OSA. In this study, we present a sound-based OSA detection deep learning model that can detect OSA in real time even in a home environment where various noises exist. The model is trained with sound simulating home noise so that it is robust to home noises.
Methods
Two types of data were used for training and testing. The first type was sleep breathing sound data collected at the hospital while patients underwent a PSG. It included 1,154 and 297 nights recorded by a PSG microphone and a smartphone, respectively. We split them into 150 nights of smartphone data for testing and the rest for training. The second type was home noise data, which included 22,500 noises that might occur in a residential environment. The proposed acoustic apnea event detector inputs Mel spectrograms of sleep breathing sounds and outputs OSA event classes for each epoch (APNEA, HYPOPNEA, or NO-EVENT). The home noises were used to make the model robust to a noisy home environment. The performance of the prediction model was assessed by epoch-by-epoch prediction accuracy and OSA severity classification based on the apnea-hypopnea index (AHI).
Results
Our model achieved 86% epoch-by-epoch agreement (0.75 in macro F1) for the 3-class event detection task. The model had an accuracy of 92% for NO-EVENT, 84% for APNEA, and 51% for HYPOPNEA. Most misclassifications were made for HYPOPNEA, with 15% and 34% of HYPOPNEA epochs being wrongly predicted as APNEA and NO-EVENT, respectively. The sensitivity and specificity of OSA severity classification (AHI ≥ 15) were 0.85 and 0.84, respectively.
Conclusion
Our study presents a real-time epoch-by-epoch OSA detector that works in a variety of noisy home environments. Based on this, additional research is needed to verify the usefulness of various multi-night monitoring and real-time diagnostic technologies in the home environment.
EVs (electric vehicles) generally have only around 22% of the driving range of ICEVs (internal combustion engine vehicles) in a similar price range. Running out of EV battery SoC (state of charge) while driving causes the same inconvenience as a vehicle breakdown. In this paper, we emphasize that an accurate remaining range estimation can efficiently mitigate the range anxiety of EV drivers. Most EV drivers hold back 30% of the on-dash estimated remaining range of their EV because they do not trust the remaining range estimation accuracy of current production EVs. In other words, an accurate remaining range estimation is equivalent to increasing the EV battery capacity by up to 30%. Analogous to the concepts used in the power estimation of digital circuits, model-based remaining range estimation consists of two consecutive steps: a driving profile estimation and a power consumption estimation using the power model. In this paper, we focus on increasing the accuracy of the power model. We propose a hybrid modeling methodology that combines a physics equation-based model with empirical data. We validate the accuracy of the hybrid model for remaining range estimation on the target EV. We collect the power consumption, velocity, road inclination, etc. of the EV every half second with an onboard monitoring system, perform multivariable linear regression, and create an accurate EV power model. The proposed remaining range estimation yields only 2.52% error, while the state-of-the-art model-based EV remaining range estimation shows 9.33% error when the same future route and speed estimation are given.
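The physics component of such a hybrid power model can be sketched from the standard road-load equation, with the empirical regression then correcting its coefficients against logged data. All parameters below (mass, drag area, rolling resistance, efficiency, auxiliary load) are illustrative assumptions, not the paper's fitted values:

```python
import math

# Illustrative vehicle parameters (assumed values, not the paper's fitted ones)
MASS_KG = 1600.0   # vehicle mass including payload
G       = 9.81     # gravitational acceleration, m/s^2
C_RR    = 0.010    # rolling-resistance coefficient
RHO_AIR = 1.2      # air density, kg/m^3
CDA_M2  = 0.70     # drag coefficient x frontal area, m^2
ETA     = 0.85     # battery-to-wheel efficiency
P_AUX_W = 500.0    # auxiliary load (HVAC, electronics), W

def power_w(v_mps, accel_mps2=0.0, grade_rad=0.0):
    """Battery power demand from the standard road-load equation:
    rolling + grade + aerodynamic + inertial forces, times speed,
    divided by drivetrain efficiency, plus the auxiliary load."""
    f_roll  = C_RR * MASS_KG * G * math.cos(grade_rad)
    f_grade = MASS_KG * G * math.sin(grade_rad)
    f_aero  = 0.5 * RHO_AIR * CDA_M2 * v_mps ** 2
    f_inert = MASS_KG * accel_mps2
    return (f_roll + f_grade + f_aero + f_inert) * v_mps / ETA + P_AUX_W

def remaining_range_km(battery_wh, v_mps):
    """Remaining range at a constant cruise: energy left / energy per meter."""
    joules_left = battery_wh * 3600
    return joules_left * v_mps / power_w(v_mps) / 1000

# e.g., 60 kWh remaining at a steady 100 km/h (27.8 m/s)
print(round(remaining_range_km(60000, 27.8)))
```

In the hybrid scheme described above, terms like these become regressors: logging power, velocity, and inclination every half second and fitting the coefficients by multivariable linear regression replaces the assumed constants with values matched to the actual vehicle.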