Wearable technologies have added new and fast-emerging tools to the popular field of personal gadgets. Besides being fashionable and equipped with advanced hardware such as communication and networking modules, wearable devices have the potential to fuel artificial intelligence (AI) methods with a wide range of valuable data.
AI techniques such as supervised, unsupervised, semi-supervised, and reinforcement learning (RL) have already been used to carry out a wide range of tasks. This paper reviews recent applications of wearables that have leveraged AI to achieve their objectives.
Particular example applications of supervised and unsupervised learning for medical diagnosis are reviewed. Moreover, examples combining the internet of things, wearables, and RL are reviewed. Application examples of wearables are also presented for specific domains such as medicine, industry, and sport. Medical applications include fitness, movement disorders, mental health, etc. Industrial applications include improving employee performance with the aid of wearables. Sport applications focus on providing a better user experience during workout sessions or professional gameplay.
The most important challenges regarding the design and development of wearable devices, along with the computational burden of using AI methods, are discussed. Finally, future challenges and opportunities for wearable devices are outlined.
•Applications of artificial intelligence methods such as supervised and unsupervised learning in wearable devices are reviewed.
•Different types of wearable devices are introduced.
•The body parts on which wearable devices are installed are investigated.
•The distribution of existing wearable devices based on target body parts and their applications is presented.
•A comprehensive treatment of domain-specific applications of wearable devices, such as sports, healthcare, and industrial and manufacturing settings, is provided.
•Wearable technology challenges such as data collection and data transmission are pointed out.
•Future directions regarding the application of artificial intelligence methods in wearable devices are discussed.
Deep neural networks (DNNs) have been widely applied for detecting COVID-19 in medical images. Existing studies mainly apply transfer learning and other data representation strategies to generate accurate point estimates. The generalization power of these networks is always questionable, as they are developed using small datasets and fail to report their predictive confidence. Quantifying the uncertainties associated with DNN predictions is a prerequisite for their trusted deployment in medical settings. Here we apply and evaluate three uncertainty quantification techniques for COVID-19 detection using chest X-ray (CXR) images. The novel concept of the uncertainty confusion matrix is proposed, and new performance metrics for the objective evaluation of uncertainty estimates are introduced. Through comprehensive experiments, it is shown that networks pretrained on CXR images outperform networks pretrained on natural image datasets such as ImageNet. Qualitative and quantitative evaluations also reveal that predictive uncertainty estimates are statistically higher for erroneous predictions than for correct ones. Accordingly, uncertainty quantification methods are capable of flagging risky predictions with high uncertainty estimates. We also observe that ensemble methods more reliably capture uncertainties during inference. DNN-based solutions for COVID-19 detection have so far been proposed without any principled mechanism for risk mitigation, with previous studies mainly focused on generating single-valued predictions using pretrained DNNs.
Using these new uncertainty performance metrics, we quantitatively demonstrate when DNN predictions for COVID-19 detection from chest X-rays can be trusted. Notably, the proposed uncertainty evaluation metrics are generic and can be applied to evaluate probabilistic forecasts in any classification problem.
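The abstract does not give the exact definitions of the uncertainty confusion matrix or its derived metrics, but the idea can be sketched as follows: cross-tabulate prediction correctness against an uncertainty flag, then read off sensitivity/specificity-style ratios. The cell and metric names below are our assumptions, not the paper's.

```python
import numpy as np

def uncertainty_confusion_matrix(correct, uncertain):
    """Cross-tabulate prediction correctness against an uncertainty flag.

    correct   : boolean array, True where the class prediction was right
    uncertain : boolean array, True where predictive uncertainty exceeds
                a chosen threshold
    Returns counts (cc, cu, ic, iu): correct-certain, correct-uncertain,
    incorrect-certain, incorrect-uncertain.
    """
    correct = np.asarray(correct, bool)
    uncertain = np.asarray(uncertain, bool)
    cc = np.sum(correct & ~uncertain)   # desirable: right and confident
    cu = np.sum(correct & uncertain)    # overly cautious
    ic = np.sum(~correct & ~uncertain)  # risky: wrong but confident
    iu = np.sum(~correct & uncertain)   # desirable: wrong but flagged
    return cc, cu, ic, iu

def uncertainty_sensitivity(cc, cu, ic, iu):
    # fraction of erroneous predictions that were flagged as uncertain
    return iu / max(ic + iu, 1)

def uncertainty_specificity(cc, cu, ic, iu):
    # fraction of correct predictions made with low uncertainty
    return cc / max(cc + cu, 1)
```

A model whose errors mostly land in the incorrect-uncertain cell is one whose high-uncertainty predictions can be safely deferred to a clinician.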
Epilepsy is a brain disorder that affects people’s quality of life. Electroencephalography (EEG) signals are used to diagnose epileptic seizures. This paper provides a computer-aided diagnosis system (CADS) for the automatic diagnosis of epileptic seizures in EEG signals. The proposed method consists of three steps: preprocessing, feature extraction, and classification. The Bonn and Freiburg datasets are used to perform the simulations. Firstly, a band-pass filter with a 0.5–40 Hz cut-off frequency is used to remove artifacts from the EEG recordings. The Tunable-Q Wavelet Transform (TQWT) is used for EEG signal decomposition. In the second step, various linear and nonlinear features are extracted from the TQWT sub-bands, including statistical, frequency, and nonlinear features; the nonlinear features are based on fractal dimensions (FDs) and entropy theories. In the classification step, different approaches based on conventional machine learning (ML) and deep learning (DL) are discussed, and a CNN–RNN-based DL architecture with a proposed layer configuration is applied. The extracted features are fed to the input of the proposed CNN–RNN model, with satisfactory results. K-fold cross-validation with k = 10 is employed to demonstrate the effectiveness of the proposed CNN–RNN classification procedure. The results revealed that the proposed CNN–RNN method achieved accuracies of 99.71% and 99.13% on the Bonn and Freiburg datasets, respectively.
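The per-sub-band feature-extraction step can be illustrated with a minimal sketch. The abstract does not list the exact statistical, entropy, and fractal-dimension features used, so the subset below (mean, standard deviation, Shannon entropy of an amplitude histogram, and the Petrosian fractal dimension) is an illustrative assumption, applied to each TQWT sub-band and concatenated into one classifier input row.

```python
import numpy as np

def subband_features(subband):
    """A small illustrative subset of linear and nonlinear descriptors
    per sub-band (the paper's exact feature set is not given here)."""
    x = np.asarray(subband, float)
    mean, std = x.mean(), x.std()
    # Shannon entropy of a normalized 16-bin amplitude histogram
    hist, _ = np.histogram(x, bins=16)
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    shannon = -np.sum(p * np.log2(p))
    # Petrosian fractal dimension: one simple FD estimate based on
    # the number of sign changes in the first difference
    diff = np.diff(x)
    n_delta = np.sum(diff[:-1] * diff[1:] < 0)
    n = len(x)
    pfd = np.log10(n) / (np.log10(n) + np.log10(n / (n + 0.4 * n_delta)))
    return np.array([mean, std, shannon, pfd])

def feature_vector(subbands):
    """Concatenate per-sub-band features into one row for the classifier."""
    return np.concatenate([subband_features(sb) for sb in subbands])
```

With four sub-bands this yields a 16-dimensional vector per EEG segment, which would then be fed to the CNN–RNN model.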
Multiple Sclerosis (MS) is a brain disease that causes visual, sensory, and motor problems and has a detrimental effect on the functioning of the nervous system. Multiple screening methods have been proposed to diagnose MS; among them, magnetic resonance imaging (MRI) has received considerable attention from physicians. MRI modalities provide physicians with fundamental information about the structure and function of the brain, which is crucial for the rapid diagnosis of MS lesions. Diagnosing MS from MRI is, however, time-consuming, tedious, and prone to manual errors. Research on computer-aided diagnosis systems (CADS) based on artificial intelligence (AI) for diagnosing MS involves both conventional machine learning and deep learning (DL) methods. In conventional machine learning, the feature extraction, feature selection, and classification steps are carried out by trial and error; in DL, by contrast, these steps rely on deep layers whose values are learned automatically. This paper provides a complete review of automated MS diagnosis methods based on DL techniques and MRI neuroimaging modalities. Initially, the steps involved in the various CADS proposed for MS diagnosis using MRI modalities and DL techniques are investigated, and the important preprocessing techniques employed in various works are analyzed. Most of the published papers on MS diagnosis using MRI modalities and DL are presented. The most significant challenges and future directions of automated MS diagnosis using MRI modalities and DL techniques are also provided.
•A thorough review of MS detection with deep learning techniques is presented.
•Various neuroimaging modalities and their pros and cons for the task at hand are discussed.
•Papers published from 2016 onward are reviewed and structured in tabular form.
•All main datasets with their specificities are listed and analyzed.
•Challenges and possible future directions are discussed comprehensively.
Schizophrenia (SZ) is a mental disorder in which, due to the secretion of specific chemicals in the brain, the function of some brain regions falls out of balance, leading to a lack of coordination between thoughts, actions, and emotions. This study provides various intelligent deep learning (DL)-based methods for automated SZ diagnosis using electroencephalography (EEG) signals. The obtained results are compared with those of conventional intelligent methods. To implement the proposed methods, the dataset of the Institute of Psychiatry and Neurology in Warsaw, Poland, has been used. First, EEG signals were divided into 25 s time frames and then normalized by z-score or L2 norm. In the classification step, two different approaches were considered for SZ diagnosis from EEG signals. First, the EEG signals were classified with conventional machine learning methods, e.g., support vector machine, k-nearest neighbors, decision tree, naïve Bayes, random forest, extremely randomized trees, and bagging. Then, various proposed DL models, namely long short-term memories (LSTMs), one-dimensional convolutional networks (1D-CNNs), and 1D-CNN-LSTMs, were used; the DL models were implemented and compared with different activation functions. Among the proposed DL models, the CNN-LSTM architecture achieved the best performance. In this architecture, the ReLU activation function with combined z-score and L2 normalization was used. The proposed CNN-LSTM model achieved an accuracy of 99.25%, better than the results of most former studies in this field. It is worth mentioning that all simulations were performed using k-fold cross-validation with k = 5.
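The two frame-normalization schemes compared in this study are standard transforms and can be sketched directly (the combination used for the best model applies both in sequence; the order here is our assumption):

```python
import numpy as np

def z_score(frame):
    # zero-mean, unit-variance scaling of one 25 s EEG frame
    frame = np.asarray(frame, float)
    return (frame - frame.mean()) / frame.std()

def l2_norm(frame):
    # scale the frame to unit Euclidean length
    frame = np.asarray(frame, float)
    return frame / np.linalg.norm(frame)

def combined(frame):
    # z-score followed by L2 normalization (assumed order)
    return l2_norm(z_score(frame))
```

z-scoring removes per-frame offset and scale differences across channels and sessions, while L2 normalization bounds the overall energy seen by the network.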
In this paper, a novel medical image encryption method based on multi-mode synchronization of hyper-chaotic systems is presented. The synchronization of hyper-chaotic systems is of great significance in secure communication tasks such as image encryption. Multi-mode synchronization is a novel and highly complex problem, especially in the presence of uncertainty and disturbance. In this work, an adaptive-robust controller is designed for multi-mode synchronization of chaotic systems with variable and unknown parameters, despite bounded disturbance and uncertainty with a known bounding function, in two modes: in the first, a main system is synchronized with several response systems; in the second, the synchronization is circular. It is proven that the two synchronization modes are equivalent. Using Lyapunov's method, we show that both the synchronization error and the parameter estimation error converge to zero. New update laws for the time-varying parameters and for estimating the disturbance and uncertainty bounds are proposed such that the stability of the system is guaranteed. To assess the performance of the proposed synchronization method, various statistical analyses were carried out on encrypted medical images and standard benchmark images. The results show the effective performance of the proposed synchronization technique in medical image encryption for telemedicine applications.
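The general chaos-based encryption idea can be illustrated with a deliberately simplified stand-in: a one-dimensional logistic map (not the hyper-chaotic systems of the paper) generates a keystream that is XORed with the image bytes; synchronization is what lets sender and receiver reproduce the same chaotic state and thus the same keystream. Everything below, including the seed and map parameter, is a toy assumption.

```python
import numpy as np

def logistic_keystream(n, x0=0.7, r=3.99):
    """Byte keystream from a logistic map, a toy stand-in for the
    hyper-chaotic systems used in the paper."""
    x, out = x0, np.empty(n, np.uint8)
    for i in range(n):
        x = r * x * (1.0 - x)
        out[i] = int(x * 256) & 0xFF
    return out

def xor_cipher(image, x0=0.7):
    """Encrypt/decrypt an 8-bit image by XOR with the chaotic keystream.
    Applying it twice with the same seed (the shared synchronized state)
    restores the original image."""
    data = np.asarray(image, np.uint8)
    return data ^ logistic_keystream(data.size, x0).reshape(data.shape)
```

The statistical analyses mentioned in the abstract (histogram flattening, correlation, entropy) would be run on the output of such a cipher to verify that the ciphertext leaks no structure of the plaintext image.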
Chronic tinnitus is a debilitating condition which affects 10-20% of adults and can severely impact their quality of life. Currently there is no objective measure of tinnitus that can be used clinically; clinical assessment relies on subjective feedback from individuals, which is not always reliable. We investigated the sensitivity of functional near-infrared spectroscopy (fNIRS) for differentiating individuals with and without tinnitus and for identifying fNIRS features associated with subjective ratings of tinnitus severity. We recorded fNIRS signals in the resting state and in response to auditory or visual stimuli from 25 individuals with chronic tinnitus and 21 controls matched for age and hearing loss. Severity of tinnitus was rated using the Tinnitus Handicap Inventory, and subjective ratings of tinnitus loudness and annoyance were measured on a visual analogue scale. Following statistical group comparisons, machine learning methods including feature extraction and classification were applied to the fNIRS features to classify patients with tinnitus versus controls and to differentiate tinnitus at different severity levels. Resting-state measures of connectivity between temporal regions and frontal and occipital regions were significantly higher in patients with tinnitus compared to controls. In the tinnitus group, temporal-occipital connectivity showed a significant increase with subjective ratings of loudness. Also in this group, both visual and auditory evoked responses were significantly reduced in the visual and auditory regions of interest, respectively. Naïve Bayes classifiers were able to distinguish patients with tinnitus from controls with an accuracy of 78.3%. An accuracy of 87.32% was achieved using neural networks to differentiate patients with slight/mild versus moderate/severe tinnitus. Our findings show the feasibility of using fNIRS and machine learning to develop an objective measure of tinnitus.
Such a measure would greatly benefit clinicians and patients by providing a tool to objectively assess new treatments and patients' treatment progress.
The first known case of Coronavirus disease 2019 (COVID-19) was identified in December 2019. It has spread worldwide, leading to an ongoing pandemic and imposing restrictions and costs on many countries. Predicting the number of new cases and deaths during this period can be a useful step in estimating the costs and facilities required in the future. The purpose of this study is to predict new case and death rates one, three, and seven days ahead over the next 100 days. The motivation for predicting every n days (instead of just every day) is to investigate whether computational cost can be reduced while still achieving reasonable performance, a scenario that may be encountered in real-time forecasting of time series. Six different deep learning methods are examined on data adopted from the WHO website: LSTM, Convolutional LSTM, and GRU, plus the bidirectional extension of each, used to forecast the rates of new cases and new deaths in Australia and Iran.
This study is novel in that it carries out a comprehensive evaluation of the aforementioned three deep learning methods and their bidirectional extensions on COVID-19 new-case and new-death time series. To the best of our knowledge, this is the first time that Bi-GRU and Bi-Conv-LSTM models have been used for prediction on COVID-19 new-case and new-death time series. The evaluation of the methods is presented in the form of graphs and the Friedman statistical test. The results show that the bidirectional models have lower errors than the other models. Several error evaluation metrics are presented to compare all models, and the superiority of the bidirectional methods is established. This research could be useful for organisations working against COVID-19 when determining their long-term plans.
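The n-day-ahead framing behind the one-, three-, and seven-day forecasts can be sketched as a sliding-window supervision scheme: each training pair maps a recent window of daily counts to the value `horizon` days past the window's end. The window length of 14 below is an illustrative assumption, not the paper's setting.

```python
import numpy as np

def make_supervised(series, window=14, horizon=7):
    """Turn a daily count series into (input window, target) pairs where
    the target lies `horizon` days beyond the end of the window."""
    series = np.asarray(series, float)
    X, y = [], []
    for t in range(len(series) - window - horizon + 1):
        X.append(series[t:t + window])          # model input
        y.append(series[t + window + horizon - 1])  # n-day-ahead target
    return np.array(X), np.array(y)
```

Larger horizons shrink the usable training set slightly but mean the model only needs to be queried every n days, which is the computational saving the study investigates.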
Coronary artery disease (CAD) is a prevalent disease with high morbidity and mortality rates. Invasive coronary angiography is the reference standard for diagnosing CAD but is costly and associated with risks. Noninvasive imaging such as cardiac magnetic resonance (CMR) facilitates CAD assessment and can serve as a gatekeeper to downstream invasive testing. Machine learning methods are increasingly applied for automated interpretation of imaging and other clinical results for medical diagnosis. In this study, we propose a novel CAD detection method based on CMR images that, for the first time, utilizes the feature extraction ability of deep neural networks and combines the resulting features with the aid of a random forest. Image data must be converted to numeric features before they can be used in the nodes of the decision trees; to this end, the predictions of multiple stand-alone convolutional neural networks (CNNs) are used as input features for the trees. The capability of CNNs to represent image data renders our method a generic classification approach applicable to any image dataset. We name our method RF-CNN-F, which stands for Random Forest with CNN Features. We conducted experiments on a large CMR dataset that we collected and have made publicly accessible. Our method achieved excellent accuracy (99.18%) using the Adam optimizer, compared to a stand-alone CNN trained using five-fold cross-validation (93.92%) tested on the same dataset.
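The feature-construction step of RF-CNN-F, turning image data into the numeric inputs a decision tree can split on, can be sketched as follows. The CNNs themselves are stubbed: each is assumed to emit a per-image class-probability vector, and those vectors are concatenated into one feature row per image, which would then be fed to a random forest.

```python
import numpy as np

def cnn_feature_matrix(softmax_outputs):
    """Stack the class-probability outputs of several stand-alone CNNs
    into one numeric feature row per image (the RF-CNN-F feature step;
    the trained CNNs producing these outputs are assumed).

    softmax_outputs : list of arrays, each of shape (n_images, n_classes)
    Returns an array of shape (n_images, n_cnns * n_classes).
    """
    return np.concatenate([np.asarray(p, float) for p in softmax_outputs],
                          axis=1)
```

Because the trees only ever see probability vectors, the same construction works for any image dataset, which is what makes the approach generic.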
Budget allocation across multiple advertising channels involves periodically dividing a fixed total budget among various channels. Yet the challenge of making sequential decisions that optimize long-term benefits rather than short-term gains is often overlooked. Additionally, the connections between actions taken on one advertising channel and the outcomes on others are rarely made explicit, and budget limitations narrow the range of potential optimal strategies that can be pursued. In response to these challenges, this study unveils a multi-channel advertising budget allocation approach that leverages a reinforcement learning (RL) Q-learning framework enriched with an advanced Differential Evolution (DE) algorithm that refines the Q-learning methodology. The RL element makes informed sequential decisions, adeptly adjusting strategies to favor long-term rewards by assimilating environmental feedback. Complementing this, the enhanced DE algorithm introduces an inventive clustering-based mutation technique, exploiting key groupings within the DE population to generate novel and practical solutions. The model is further bolstered by a discretization tactic aimed at simplifying the model by streamlining costs. The proposed methodology is rigorously validated on two extensive datasets, the Chinese Internet Company Advertising Dataset (CICAD) and CRITEO-UPLIFT v2, employing Area Under the Cost Curve (AUCC) and Expected Outcome Metric (EOM) as measures of performance. The empirical results affirm the superiority of the model, with significant scores (AUCC = 0.750 and EOM = 0.736 for CICAD; AUCC = 0.813 and EOM = 0.829 for CRITEO-UPLIFT v2), illustrating the model's proficiency in navigating the multifaceted challenges of multi-channel budget allocation and establishing a new benchmark in the field.
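The RL core of this approach, favoring long-term rewards by assimilating environmental feedback, is the standard tabular Q-learning update; a minimal sketch of that update follows. The DE-based refinement and the clustering-based mutation described in the abstract are not reproduced here, and the state/action encoding of budget splits is assumed, not taken from the paper.

```python
import numpy as np

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.95):
    """One tabular Q-learning step: move Q(s, a) toward the observed
    reward plus the discounted best value of the next state.

    Q : (n_states, n_actions) table; s, a, s_next : integer indices;
    r : reward for allocating the budget as action `a` in state `s`.
    """
    target = r + gamma * Q[s_next].max()   # bootstrap long-term value
    Q[s, a] += alpha * (target - Q[s, a])  # temporal-difference update
    return Q
```

In the budget-allocation setting, a state would encode the current spend distribution across channels and an action a discretized reallocation; the discount factor gamma is what makes the agent trade short-term gains for long-term return.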