Fibrillations and flutters are serious diseases that disturb the normal functioning of the heart. Among the most frequently occurring heart disorders are atrial fibrillation (Afib), atrial flutter (Afl), and ventricular fibrillation (Vfib). Nowadays, heart disorders are mostly detected with an electrocardiogram (ECG) device by examining the signal transferred from electrodes placed on the human body to the output display. The signal is examined by professional health personnel, who look for patterns representing a normal or abnormal heart rhythm. Nevertheless, the information in an ECG can be distorted by noise during data transmission. Moreover, a problematic pattern may differ only subtly from a normal one, making it difficult to recognize by eye, even for an expert in the field. Automated computer-aided diagnosis (CAD) is an approach to providing decision support that addresses these shortcomings. For early diagnosis, a CAD tool should work like a real-time system, without heavy computation or dependence on the data and measurement differences of each device. This paper proposes a novel CAD approach to the detection of fibrillations and flutters using our 8-layer deep convolutional neural network. The proposed model requires only basic data normalization, without pre-processing or feature extraction from the raw ECG samples. We have achieved an accuracy, specificity, and sensitivity of 98.45%, 99.27%, and 99.87%, respectively. The designed system can be implemented directly as a decision support system in a clinical environment.
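The abstract states that the model needs only "basic data normalization" of raw ECG samples. A minimal sketch of one common choice, per-segment z-score normalization, is shown below; the paper does not specify its exact scheme, so this is purely illustrative:

```python
import numpy as np

def normalize_beats(beats):
    """Z-score normalize each ECG segment independently (illustrative;
    the paper only states that 'basic data normalization' is applied)."""
    beats = np.asarray(beats, dtype=float)
    mean = beats.mean(axis=1, keepdims=True)
    std = beats.std(axis=1, keepdims=True)
    std[std == 0] = 1.0  # guard against flat segments
    return (beats - mean) / std

# Example: two raw segments of 5 samples each
raw = [[1.0, 2.0, 3.0, 4.0, 5.0],
       [10.0, 10.0, 10.0, 10.0, 10.0]]
norm = normalize_beats(raw)
```

Each normalized segment then has zero mean and unit standard deviation, so the CNN sees inputs on a comparable scale regardless of the recording device's gain.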
This paper studies clustering of multi-view data, known as multi-view clustering. Among existing multi-view clustering methods, one representative category is the graph-based approach. Despite its elegant and simple formulation, the graph-based approach has not been studied in terms of (a) the generalization of the approach or (b) the impact of different graph metrics on the clustering results. This paper extends this important approach by first proposing a general Graph-Based System (GBS) for multi-view clustering, and then discussing and evaluating the impact of different graph metrics on multi-view clustering performance within the proposed framework. GBS works by extracting the data feature matrix of each view, constructing the graph matrices of all views, and fusing the constructed graph matrices to generate a unified graph matrix, which yields the final clusters. A novel multi-view clustering method that works in the GBS framework is also proposed, which can (1) construct data graph matrices effectively, (2) weight each graph matrix automatically, and (3) produce clustering results directly. Experimental results on benchmark datasets show that the proposed method significantly outperforms state-of-the-art baselines.
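The GBS pipeline described above (per-view graph construction, then fusion into a unified graph) can be sketched with a simple Gaussian k-nearest-neighbor graph per view and uniform-weight averaging. GBS learns the view weights automatically; uniform weights and the Gaussian kernel are simplifying assumptions here:

```python
import numpy as np

def knn_graph(X, k=2):
    """Symmetric k-nearest-neighbor affinity graph for one view
    (Gaussian weights; a common choice, not necessarily the paper's)."""
    n = X.shape[0]
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    sigma = d.mean() + 1e-12
    W = np.exp(-(d ** 2) / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    # keep only each point's k strongest links
    keep = np.argsort(-W, axis=1)[:, :k]
    mask = np.zeros_like(W, dtype=bool)
    mask[np.arange(n)[:, None], keep] = True
    W = np.where(mask, W, 0.0)
    return (W + W.T) / 2  # symmetrize

def fuse_views(views, k=2):
    """Fuse per-view graphs by simple averaging (GBS weights each graph
    automatically; uniform weights are used here for illustration)."""
    graphs = [knn_graph(X, k) for X in views]
    return sum(graphs) / len(graphs)

rng = np.random.default_rng(0)
view1 = rng.normal(size=(6, 3))  # 6 samples, 3 features in view 1
view2 = rng.normal(size=(6, 4))  # same 6 samples, 4 features in view 2
U = fuse_views([view1, view2])   # unified graph matrix
```

The unified matrix `U` would then be fed to a standard graph-clustering step (e.g. spectral clustering) to obtain the final clusters.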
COVID-2019 is a global threat; for this reason, researchers around the world have focused on topics such as detecting it, preventing it, curing it, and predicting it. Different analyses propose models to predict the evolution of this epidemic, either for specific geographical areas or specific countries, or as a single global model. These models make it possible to predict the virus's behavior and could be used to draw up future response plans. This work presents an analysis of the spread of COVID-19 from a different angle, covering the whole world through six geographic regions (continents). We propose to create a relationship between the countries in the same geographical area to predict the advance of the virus. Countries in the same geographic region share variables with similar values (quantifiable and non-quantifiable) that affect the spread of the virus. We propose an algorithm to fit and evaluate an ARIMA model for each of 145 countries, distributed into six regions. Then, we construct a model for these regions using the ARIMA parameters, the population per 1M people, the number of cases, and polynomial functions. The proposal is able to predict COVID-19 cases with an average RMSE of 144.81. The main outcome of this paper is showing a relation between COVID-19 behavior and the population of a region; these results point to the opportunity to build further models for predicting COVID-19 behavior using variables such as humidity, climate, and culture, among others.
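The core of the per-country step is fitting an ARIMA model to a cumulative-case series. A bare-bones sketch of the idea behind an ARIMA(1,1,0), i.e. first-difference the series and fit an autoregression to the differences, is shown below; the paper selects full ARIMA orders per country, so this is only an illustration of the differencing-plus-autoregression mechanism:

```python
import numpy as np

def fit_ar1_on_diff(series):
    """Fit an AR(1) coefficient to the first-differenced series --
    a bare-bones ARIMA(1,1,0) (illustrative; the paper's per-country
    models use full ARIMA order selection)."""
    y = np.diff(np.asarray(series, dtype=float))
    X, target = y[:-1], y[1:]
    return float(np.dot(X, target) / np.dot(X, X))  # least-squares slope

def forecast_next(series, phi):
    """One-step-ahead forecast: last level plus predicted next difference."""
    y = np.diff(np.asarray(series, dtype=float))
    return float(series[-1] + phi * y[-1])

# Synthetic cumulative-case curve growing by ~100 cases per day
cases = [1000 + 100 * t for t in range(10)]
phi = fit_ar1_on_diff(cases)
pred = forecast_next(cases, phi)   # -> 2000.0 for this linear toy series
```

In practice a library such as statsmodels would handle order selection and estimation; the regional model then combines the fitted parameters with population and polynomial terms as described in the abstract.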
Picture fuzzy sets (PFSs) state or model voting information accurately, without information loss. However, their existing operational laws often generate unreasonable computing results, especially when the agreement degree (AD), neutrality degree (ND), or opposition degree (OD) is zero. To tackle this issue, we propose interactional operational laws (IOLs) for computing with picture fuzzy numbers (PFNs), which capture the interaction between the ADs and NDs of two PFNs, as well as the interaction between their ADs and ODs. Based on the proposed IOLs, the partitioned Heronian mean (PHM) operator, and the partitioned geometric Heronian mean (PGHM) operator, this paper proposes the picture fuzzy interactional PHM (PFIPHM), weighted PFIPHM (PFIWPHM), geometric PFIPHM (PFIPGHM), and weighted PFIPGHM (PFIWPGHM) operators. Afterwards, we investigate the properties of these operators. Using the PFIWPHM and PFIWPGHM operators, a novel multiple attribute decision-making (MADM) method with PFNs is elaborated. Finally, an illustrative example involving the service-quality ranking of nursing facilities is provided to demonstrate the decision procedure of the proposed MADM method, and a comparative analysis between the proposed operators and existing aggregation operators for PFNs is given.
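The degenerate behavior at zero degrees can be seen in the widely used algebraic sum for PFNs (stated here as commonly defined in the PFS literature; the paper's own notation may differ):

```latex
% Standard algebraic sum of two picture fuzzy numbers
% p_i = (\mu_i, \eta_i, \nu_i), with \mu_i + \eta_i + \nu_i \le 1:
p_1 \oplus p_2 = \bigl(\mu_1 + \mu_2 - \mu_1\mu_2,\; \eta_1\eta_2,\; \nu_1\nu_2\bigr)
% If \eta_1 = 0 (or \nu_1 = 0), the fused neutrality (or opposition)
% degree is 0 regardless of \eta_2 (or \nu_2) -- the degenerate
% behavior that the interactional operational laws are designed to avoid.
```

Because the ND and OD components are pure products, a single zero annihilates the other party's degree entirely, which is the "unreasonable computing result" motivating the IOLs.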
•Sentiment analysis of tweets by a state-of-the-art classification model (BERT).•Evaluation of tweet pre-processing, to avoid noise and exploit hidden information.•Available data in two languages are considered, i.e., English and Italian.•The most convenient strategy to pre-process tweets is identified.•The state of the art is improved in both languages for tweet sentiment analysis.
Social media offer a large amount of information to exploit in many fields of research. However, while methods for Natural Language Processing achieve good results when applied to well-formed datasets of written text with clear syntax, these sources present text written in informal language, with unstructured syntax and peculiar symbols; particular approaches are therefore required for text processing in this case. In this paper, the task of sentiment analysis of tweets is addressed. In particular, the pre-processing of tweets is analyzed, in order to avoid the noise constituted by web constructs like URLs and mentions and by other text fragments, and to exploit the information hidden in symbols like emoticons, emojis, and hashtags. More in detail, a number of experiments, performed with a state-of-the-art classification model (BERT), are designed to evaluate many currently available operations for pre-processing tweets, in terms of the statistical significance of their influence on sentiment analysis performance. Moreover, available data in two languages, English and Italian, are considered, in order to also evaluate dependence on the language. The results allow us to identify the most convenient strategy to pre-process tweets, and thus to improve the state of the art in both languages for the considered sentiment analysis task.
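A few of the pre-processing operations named above (URL removal, mention normalization, hashtag unpacking, emoticon mapping) can be sketched with plain regular expressions. The specific tokens (`<user>`, `<smile>`, etc.) and the tiny emoticon table are illustrative assumptions, not the paper's actual configuration:

```python
import re

# Tiny emoticon lexicon (illustrative; the paper evaluates many such operations)
EMOTICONS = {":)": "<smile>", ":(": "<sad>", ":D": "<laugh>"}

def preprocess_tweet(text):
    """Apply a few of the pre-processing operations evaluated in the paper:
    strip URLs, normalize mentions, unpack hashtags, map emoticons to tokens."""
    text = re.sub(r"https?://\S+", "", text)      # remove URLs (pure noise)
    text = re.sub(r"@\w+", "<user>", text)        # anonymize mentions
    text = re.sub(r"#(\w+)", r"\1", text)         # keep the hashtag word, drop '#'
    for emo, token in EMOTICONS.items():
        text = text.replace(emo, token)           # emoticons carry sentiment
    return " ".join(text.split())                 # collapse whitespace

tweet = "@alice I love this! :) #awesome http://t.co/xyz"
clean = preprocess_tweet(tweet)
# -> "<user> I love this! <smile> awesome"
```

The cleaned string is then tokenized and fed to the classifier (BERT in the paper); the point of the experiments is precisely which of these operations help or hurt.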
The objective of this article is to present a hybrid approach to the sentiment analysis problem at the sentence level. This new method uses essential natural language processing (NLP) techniques, a sentiment lexicon enhanced with the assistance of SentiWordNet, and fuzzy sets to estimate the semantic orientation polarity and its intensity for sentences, which provides a foundation for computing with sentiments. The proposed hybrid method is applied to three different datasets, and the results achieved are compared to those obtained using Naïve Bayes and Maximum Entropy techniques. It is demonstrated that the presented hybrid approach is more accurate and precise than both the Naïve Bayes and Maximum Entropy techniques when the latter are used in isolation. In addition, it is shown that, when applied to datasets containing snippets, the proposed method performs similarly to state-of-the-art techniques.
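The fuzzy-set step, mapping a lexicon-derived polarity score to graded sentiment classes with an intensity, can be sketched with triangular membership functions. The membership shapes and breakpoints below are invented for illustration; the paper's fuzzy sets and its SentiWordNet-based scoring are more elaborate:

```python
def triangular(x, a, b, c):
    """Triangular membership function peaking at b on support (a, c)."""
    if x <= a or x >= c:
        return 0.0
    if x == b:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def fuzzy_polarity(score):
    """Map a lexicon polarity score in [-1, 1] to fuzzy memberships
    (illustrative shapes, not the paper's actual fuzzy sets)."""
    return {
        "negative": triangular(score, -1.5, -1.0, 0.0),
        "neutral":  triangular(score, -0.5,  0.0, 0.5),
        "positive": triangular(score,  0.0,  1.0, 1.5),
    }

m = fuzzy_polarity(0.75)
# -> positive membership 0.75, negative and neutral 0.0
```

The membership degree doubles as the intensity estimate: a score of 0.75 is "positive" with strength 0.75 rather than a hard binary label, which is what "computing with sentiments" refers to.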
•Classification of normal and MI ECG beats.•ECG beats with and without noise are considered.•A convolutional neural network is employed.•R-peak detection is not performed.•Accuracies of 93.53% and 95.22% are obtained for beats with and without noise, respectively.
The electrocardiogram (ECG) is a useful tool to diagnose various cardiovascular diseases (CVDs) such as myocardial infarction (MI). The ECG records the heart's electrical activity, and these signals can reflect abnormal activity of the heart. However, it is challenging to interpret ECG signals visually due to their small amplitude and short duration. Therefore, we propose a novel approach to detect MI automatically using ECG signals. In this study, we implemented a convolutional neural network (CNN) algorithm for the automated detection of normal and MI ECG beats (with and without noise). We achieved average accuracies of 93.53% and 95.22% using ECG beats with noise and with noise removed, respectively. Further, no feature extraction or selection is performed in this work. Hence, our proposed algorithm can accurately classify unknown ECG signals, even in the presence of noise. This system can therefore be introduced in clinical settings to aid clinicians in the diagnosis of MI.
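Since the highlights state that R-peak detection is not performed, beat-level inputs can be prepared by simple fixed-length windowing of the raw record. The window length and stride below are arbitrary illustrative values, not the paper's settings:

```python
import numpy as np

def segment_fixed_windows(signal, win_len, stride):
    """Cut a raw ECG record into fixed-length, possibly overlapping windows.
    Illustrates input preparation without R-peak detection; win_len and
    stride here are arbitrary illustrative values."""
    signal = np.asarray(signal, dtype=float)
    starts = range(0, len(signal) - win_len + 1, stride)
    return np.stack([signal[s:s + win_len] for s in starts])

record = np.sin(np.linspace(0, 20 * np.pi, 1000))  # synthetic ECG-like trace
beats = segment_fixed_windows(record, win_len=200, stride=100)
# -> 9 overlapping windows of 200 samples each
```

Each window is then fed directly to the CNN, which learns its own features from the raw samples, consistent with "no feature extraction or selection" above.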
One-class classification is a machine learning problem in which the training data contain only one class. The objective is to determine whether an input belongs to the seen class or an unseen class. Traditional deep learning algorithms are not suitable for this task, since they can predict only the classes present in the training data. In this paper, a one-class classification algorithm using the construction error of an image transformation network (OCITN) is proposed. In particular, an image transformation network (ITN) is introduced as a subtask, which transforms an input image into a single target image, namely the goal image. The error of the ITN, namely the construction error (CE), is computed as a distance metric between the goal image and the model output. The ITN model is trained using only one-class images and is then applied to test images. Since the model is trained with only one-class images, the CE for the seen class is small relative to other classes. Thus, one-class classification is performed by determining whether the CE is large or small. The proposed method is evaluated on the MNIST, Fashion MNIST, CIFAR10, CIFAR100, and Cat-vs-Dog datasets. OCITN shows good results when the goal image has high entropy. Additionally, an extension of OCITN, namely OCITNE, is implemented. This method shows state-of-the-art performance on MNIST (98.0) and Fashion MNIST (95.6), and acceptable performance on CIFAR10 (78.4). Furthermore, these methods provide high-speed processing: OCITN processes 5291 images per second and OCITNE 1261 images per second, 137 times and 33 times faster than the state of the art, respectively. The source code used in this paper can be downloaded from: https://github.com/ToshiHayashi/OCITN.
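The CE-based decision rule can be sketched independently of the network itself. Below, `output` stands in for the trained ITN's prediction on a test image, and mean squared error and the fixed threshold are illustrative choices (see the repository above for the actual implementation):

```python
import numpy as np

def construction_error(output, goal):
    """Construction error (CE): distance between the network output and the
    fixed goal image; mean squared error is one common distance choice."""
    return float(np.mean((output - goal) ** 2))

def classify_one_class(output, goal, threshold):
    """OCITN-style decision rule (simplified): small CE -> seen class,
    large CE -> unseen class. 'output' stands in for the trained ITN's
    prediction on a test image."""
    return "seen" if construction_error(output, goal) < threshold else "unseen"

goal = np.ones((4, 4))     # toy goal image
near = goal + 0.01         # output close to the goal (in-class behavior)
far = np.zeros((4, 4))     # output far from the goal (out-of-class behavior)
label_near = classify_one_class(near, goal, threshold=0.1)  # -> "seen"
label_far = classify_one_class(far, goal, threshold=0.1)    # -> "unseen"
```

Because the ITN is trained only on in-class images, only those images are mapped close to the goal image, which is why a single scalar threshold on the CE separates seen from unseen inputs.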
The amount of data collected from different real-world applications is increasing rapidly. When the volume of data is too large to be loaded into memory, it may be impossible to analyze it using a single computer. Although efforts have been made to manage big data using a single computer, the problem may not be solvable in an acceptable time frame, making parallel computing an indispensable way to handle big data. In this paper, we investigate approaches to parallel attribute reduction using dominance-based neighborhood rough sets (DNRS), which take into consideration the partial orders among numerical and categorical attribute values and can be utilized in a multicriteria decision-making method. We first present some properties of attribute reduction in DNRS, and then investigate principles of parallel attribute reduction in DNRS. Parallelization of the different components of attribute reduction is explored in detail. Furthermore, parallel attribute reduction algorithms in DNRS are proposed. Experimental results on UCI data and big data show that the proposed parallel algorithm is both effective and efficient.
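The central primitive behind DNRS is the dominance-based neighborhood of an object. A much-simplified reading for purely numerical criteria, a delta-tolerant "dominates on every attribute" relation, is sketched below; the paper's actual DNRS definition also handles categorical attributes and different criterion directions, so treat this only as an intuition aid:

```python
import numpy as np

def dominance_neighborhood(X, i, delta):
    """Indices of objects that delta-dominate object i on every attribute,
    i.e. X[j, a] >= X[i, a] - delta for all attributes a. A simplified
    illustration of the dominance-based neighborhood relation; the paper's
    DNRS definition is more general (categorical attributes, criteria
    directions)."""
    return np.where(np.all(X >= X[i] - delta, axis=1))[0]

X = np.array([[0.2, 0.3],
              [0.5, 0.6],
              [0.1, 0.9]])
nbr = dominance_neighborhood(X, i=0, delta=0.05)
# object 2 fails on the first attribute (0.1 < 0.2 - 0.05), so nbr is [0, 1]
```

Computing such neighborhoods is embarrassingly parallel over objects, which is what makes the per-component parallelization discussed above natural: different workers can evaluate disjoint blocks of rows independently.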
•A review of the consensus processes in social network group decision making is presented.•Two approaches are identified: consensus based on trust relationships and consensus based on opinion evolution.•Challenges and future research directions are presented.
In social network group decision making (SNGDM), the consensus reaching process (CRP) is used to help decision makers with social relationships reach consensus. Many CRP studies have been conducted in SNGDM to date. This paper provides a review of CRPs in SNGDM and, as a result, classifies them into two paradigms: (i) the CRP paradigm based on trust relationships, and (ii) the CRP paradigm based on opinion evolution. Furthermore, identified research challenges are put forward to advance this area of research.