Although biometric systems using the electrocardiogram (ECG) have been actively researched, the morphological features of the ECG signal are measured differently depending on the measurement environment. In general, the post-exercise ECG does not match the morphological features of the pre-exercise ECG because of temporary tachycardia, which can degrade user recognition performance. Although normalization studies have been conducted to match the post- and pre-exercise ECG, limitations related to the distortion of the morphological features, namely the P wave, QRS complex, and T wave, often arise. In this paper, we propose a method for matching pre- and post-exercise ECG cycles based on time and frequency fusion normalization that takes the morphological features into account, and for classifying users with high performance by an optimized system. One cycle of the post-exercise ECG is expanded by linear interpolation and filtered at an optimized frequency through the fusion normalization method, which aims to match one post-exercise ECG cycle to one pre-exercise ECG cycle. The experimental results show that the average similarity between the pre- and post-exercise states improves by 25.6% after normalization for 30 ECG cycles. Additionally, the normalization algorithm improves the maximum user recognition performance from 96.4% to 98%.
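A minimal sketch of the time-domain half of such a normalization is given below, assuming NumPy/SciPy; the function name, sampling rate, and low-pass cutoff are illustrative, and the paper's optimized frequency selection is not reproduced here.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def normalize_cycle(post_cycle, target_len, fs=250.0, cutoff=40.0):
    """Stretch one post-exercise ECG cycle to the pre-exercise cycle
    length by linear interpolation, then low-pass filter the result.
    fs and cutoff are illustrative values, not the paper's."""
    # Linear interpolation: resample the cycle onto target_len points.
    x_old = np.linspace(0.0, 1.0, num=len(post_cycle))
    x_new = np.linspace(0.0, 1.0, num=target_len)
    expanded = np.interp(x_new, x_old, post_cycle)
    # Zero-phase low-pass filtering at the (assumed) cutoff frequency.
    b, a = butter(4, cutoff / (fs / 2.0), btype="low")
    return filtfilt(b, a, expanded)
```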
Saaty’s analytic hierarchy process (AHP) is widely used in many decision-making problems such as the choice of alternatives, prioritization, or ranking. Despite being a valuable tool based on pairwise comparisons of a set of alternatives, the method is strongly tied to numeric or linguistic descriptors of the preferences. This can be a limitation for users who are not comfortable with numbers or words strictly bound to a predefined preference scale. Therefore, in this study, we develop a comprehensive approach based on a simple graphic interface. The results and their consistency, as well as the stability of the method, are examined. Moreover, through a suite of experiments we observe how the method behaves when a group of experts does not provide answers to all questions. Finally, we analyze four variants of non-linear transforms used to minimize the inconsistency ratio of the AHP (fuzzy AHP) process.
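As a reference point for the consistency analysis mentioned above, the sketch below computes standard Saaty priorities and the consistency ratio from a pairwise comparison matrix via the principal-eigenvector method; it does not implement the graphic interface or the non-linear transforms themselves.

```python
import numpy as np

# Saaty's Random Index values for matrix orders 1..10.
RI = [0.0, 0.0, 0.58, 0.90, 1.12, 1.24, 1.32, 1.41, 1.45, 1.49]

def ahp_priorities(A):
    """Priority vector and consistency ratio of a reciprocal
    pairwise comparison matrix A (valid for 3 <= n <= 10)."""
    n = A.shape[0]
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()                          # normalized priorities
    ci = (eigvals[k].real - n) / (n - 1)  # consistency index
    return w, ci / RI[n - 1]              # priorities, consistency ratio

A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 3.0],
              [1/5, 1/3, 1.0]])
w, cr = ahp_priorities(A)  # cr < 0.1 is conventionally acceptable
```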
Missing values are a common phenomenon when dealing with real-world data sets, and the analysis of incomplete data sets has become an active area of research. In this paper, we focus on the problem of clustering incomplete data by introducing prior distribution information about the missing values into the fuzzy clustering algorithm. First, non-parametric hypothesis testing is employed to describe the missing values, which adhere to a certain Gaussian distribution, as probabilistic information granules based on the nearest neighbors of the incomplete data. Second, we propose a novel clustering model in which these probabilistic information granules are incorporated into the Fuzzy C-Means clustering of incomplete data through the maximum likelihood criterion. Third, the clustering model is optimized by a tri-level alternating optimization that uses the method of Lagrange multipliers. The convergence and the time complexity of the clustering algorithm are also discussed. The experiments reported on both synthetic and real-world data sets demonstrate that the proposed approach effectively realizes clustering of incomplete data.
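For orientation, the sketch below shows the baseline complete-data Fuzzy C-Means alternating optimization that the proposed model extends; the probabilistic information granules and the tri-level scheme of the paper are not reproduced.

```python
import numpy as np

def fcm(X, c, m=2.0, iters=100, tol=1e-5, seed=None):
    """Baseline Fuzzy C-Means: alternate between updating the
    partition matrix U (c x n) and the prototypes V (c x d)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((c, n))
    U /= U.sum(axis=0)
    for _ in range(iters):
        Um = U ** m
        V = (Um @ X) / Um.sum(axis=1, keepdims=True)      # prototypes
        D = np.linalg.norm(X[None, :, :] - V[:, None, :], axis=2)
        D = np.fmax(D, 1e-12)                             # avoid /0
        W = D ** (-2.0 / (m - 1.0))
        U_new = W / W.sum(axis=0)                         # memberships
        if np.abs(U_new - U).max() < tol:
            U = U_new
            break
        U = U_new
    return U, V
```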
The aim of image fusion is to obtain a clear image by combining useful information coming from multiple images, so effectively extracting the salient features of the source images as the activity-level measurement is crucial. In this paper, a novel algorithm called fractional-order differentiation based sparse representation (FD-SR) is presented for multi-focus image fusion. In this algorithm, the source images are first convolved with fractional-order differentiation masks to acquire the feature maps, from which histograms of oriented gradients (HOG) are computed to capture salient information related to human vision. Next, to construct a representative dictionary for sparse representation, the HOG patterns are partitioned into many patches, which are clustered to retain the structural information. From these clusters, compact sub-dictionaries are learned using orthogonal matching pursuit (OMP) and then combined to form the overcomplete dictionary. Finally, the fused sub-images are reconstructed with the dictionary based on the max-L1 rule, and all these sub-images constitute the whole fused image. Experimental results on multi-focus image datasets and a medical image dataset validate the effectiveness of the proposed method for the image fusion task.
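The sketch below illustrates two of the ingredients named above under stated assumptions: a toy vertical fractional-order differentiation mask built from the first three Grünwald–Letnikov series coefficients, and the max-L1 selection of sparse coefficient vectors. The paper's actual masks, HOG pipeline, and dictionary learning are not reproduced.

```python
import numpy as np
from scipy.signal import convolve2d

def frac_diff_mask(v):
    """Vertical fractional-differentiation mask of order v using the
    first three Grünwald–Letnikov coefficients (illustrative only)."""
    return np.array([[(v * v - v) / 2.0], [-v], [1.0]])

def fuse_max_l1(alpha_a, alpha_b):
    """Max-L1 rule: per patch (column), keep the sparse coefficient
    vector whose l1-norm (activity level) is larger."""
    pick_a = np.abs(alpha_a).sum(axis=0) >= np.abs(alpha_b).sum(axis=0)
    return np.where(pick_a, alpha_a, alpha_b)

# Feature map of a source image under one directional mask:
# feat = convolve2d(img, frac_diff_mask(0.5), mode="same")
```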
In this paper, we consider a generic class of adaptive optimization problems under uncertainty and develop a data-driven paradigm of adaptive probabilistic robust optimization (APRO) in a robust and computationally tractable manner. The paradigm comprises two phases: 1) bilayer information granulation (IG), which involves data-mining techniques and a nested decomposition of convex sets that establish and restructure the knowledge extracted from data, and 2) robustization and optimization over the knowledge restructured by the IG, which forms the APRO model. The tradeoff between solution optimality and the robustness of the resulting data-driven APRO model can be adjusted through the number of clusters and the number of nested decomposition units of the IG process. We draw connections between the APRO model and the stochastic programming and regular robust optimization models, and show that the APRO model can be regarded as a generalized version of both. We show that the APRO model can be transformed into a second-order cone program, which is computationally tractable and can be solved efficiently by off-the-shelf solvers. Furthermore, the model can be extended by robustizing the probability parameters. Finally, an application to two-stage facility location planning is presented, and the computational results demonstrate the performance and the insights of using the data-driven APRO models.
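To make the tractability claim concrete, the toy below solves a small second-order cone program of the general form such models reduce to, using cvxpy as an off-the-shelf modeling layer; the data are random and unrelated to the paper's actual formulation.

```python
import cvxpy as cp
import numpy as np

# Toy SOCP: minimize c^T x  s.t.  ||A x + b||_2 <= d^T x + e.
rng = np.random.default_rng(0)
n = 4
A, b = rng.standard_normal((3, n)), rng.standard_normal(3)
c, d, e = rng.standard_normal(n), rng.standard_normal(n), 5.0

x = cp.Variable(n)
prob = cp.Problem(cp.Minimize(c @ x),
                  [cp.SOC(d @ x + e, A @ x + b)])  # second-order cone
prob.solve()  # dispatched to an off-the-shelf conic solver
print(prob.value, x.value)
```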
Inspired by the collaborative mechanism among the biological nervous, endocrine, and immune systems, this paper proposes an adaptive evolutionary algorithm based on biological cooperation (BCAE). This method can solve the dynamic multi-objective optimization problem of Industrial Internet of Things (IIoT) services so as to reduce the total service cost and service time. The BCAE algorithm consists of two parts: a bottom level and a top level. In the bottom level, different Pareto fronts are obtained by the coevolution of multiple subpopulations. In the top level, a connection weight sequence is designed according to the distance between the service request and the service provider and the unit energy consumption of the service provider, and an affinity matrix is then constructed from this sequence. Finally, the multifactorial evolutionary algorithm (MFEA-II) is used to mate and imitate the service providers with different affinities, and the total service cost and total service time of the optimal solution are obtained and recorded in the top-level optimal antigen solution set. On the basis of a single service strategy and a collaborative service strategy, IIoT services with dynamic requests are studied under different distributions. The simulation results show that BCAE outperforms four existing algorithms, especially when solving high-dimensional problems.
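The abstract does not specify the connection weight sequence, so the sketch below is a hypothetical construction of an affinity matrix from provider distances and unit energy consumption, with illustrative weights and normalization.

```python
import numpy as np

def affinity_matrix(dist, unit_energy, w_d=0.5, w_e=0.5):
    """Hypothetical affinity between service requests (rows) and
    service providers (columns): closer, cheaper providers receive
    higher affinity. dist: (R, P); unit_energy: (P,)."""
    d = (dist - dist.min()) / (np.ptp(dist) + 1e-12)
    e = (unit_energy - unit_energy.min()) / (np.ptp(unit_energy) + 1e-12)
    weight = w_d * d + w_e * e[None, :]   # connection weights
    return 1.0 / (1.0 + weight)           # affinity in (0, 1]
```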
In the era of advanced methodologies and practices of system modeling, we are faced with ever-growing challenges of building models of complex systems that are in full rapport with reality. These challenges are multifaceted. Human centricity becomes of paramount relevance in system modeling, and because of this, models need to be customized and easily interpretable. More and more visibly, experimental data and knowledge of varying quality acquired directly from experts have to be utilized efficiently in the construction of models. The quality of data and the ensuing quality of models have to be prudently quantified. There are ongoing and even exacerbated challenges to build intelligent systems, model multifaceted phenomena, and deliver efficient models that help users describe and understand systems and support processes of decision-making. We have to become fully cognizant that processing and modeling have to be realized with the use of entities endowed with well-defined semantics, namely information granules. Humans do not perceive reality and reason in terms of numbers but rather utilize more abstract constructs (information granules), which are helpful in setting up a certain cognitive perspective and ignoring irrelevant details when dealing with the complexity of systems.
To make this study self-contained, we briefly recall the key concepts of granular computing and demonstrate how this conceptual framework and its algorithmic fundamentals give rise to granular models. We discuss several representative formal setups used in describing and processing information granules, including fuzzy sets, rough sets, and interval calculus. Key architectures of models dwell upon relationships among information granules. We demonstrate how information granularity and its optimization can be regarded as an important design asset to be exploited in system modeling, giving rise to granular models. In this regard, an important category of rule-based models, along with their granular enrichments, is studied in detail.
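As a small illustration of one of the formal setups mentioned, interval calculus, the sketch below implements addition and multiplication of interval information granules; the class is ad hoc and not tied to any particular library.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    """An interval information granule [lo, hi] with basic calculus."""
    lo: float
    hi: float

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))

# Granular arithmetic propagates uncertainty: [1,2] * [3,4] = [3,8].
print(Interval(1, 2) * Interval(3, 4))
```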
Deoxyribonucleic acid (DNA) microarrays are an important technology that supports the simultaneous measurement of thousands of genes for biological analysis. With the rapid growth of gene expression data, which is high-dimensional and characterized by uncertainty, there is a genuine need for advanced processing techniques. In this regard, Fuzzy Possibilistic C-Means clustering (FPCM) and Granular Computing (GrC) are introduced with the aim of solving the problems of feature selection and outlier detection. In this study, by taking advantage of FPCM and GrC, an Advanced Fuzzy Possibilistic C-Means Clustering based on Granular Computing (GrFPCM) is proposed to select features as a preprocessing phase for clustering problems, while the developed granular space is used to cope with uncertainty. Experiments were completed for various gene expression datasets, and a comparative analysis is reported.
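For reference, the sketch below shows one round of the classical FPCM updates (fuzzy memberships plus possibilistic typicalities) that GrFPCM builds on; the granular feature selection itself is not reproduced.

```python
import numpy as np

def fpcm_updates(X, V, m=2.0, eta=2.0):
    """One classical FPCM update step: memberships U sum to 1 over
    clusters (columns), typicalities T sum to 1 over samples (rows)."""
    D = np.linalg.norm(X[None, :, :] - V[:, None, :], axis=2)
    D = np.fmax(D, 1e-12)
    W = D ** (-2.0 / (m - 1.0))
    U = W / W.sum(axis=0)                     # fuzzy memberships
    Z = D ** (-2.0 / (eta - 1.0))
    T = Z / Z.sum(axis=1, keepdims=True)      # possibilistic typicalities
    return U, T
```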
Constructing information granules (IGs) has been of significant interest to the discipline of granular computing. The principle of justifiable granularity has been proposed to guide the design of IGs, opening an avenue for building IGs on the basis of well-defined and intuitively appealing principles. However, how to improve the efficiency and accuracy of the resulting constructs remains an open issue. In this paper, we present a local-density-based optimal granulation model (LoDOG) exhibiting two evident advantages: 1) it can detect arbitrarily shaped IGs, and 2) it finds the optimal granulation solutions with O(N) complexity once the leading tree structure has been constructed. We describe IGs of arbitrary shapes using a small collection of landmark points positioned on the skeleton of the underlying manifold, which contribute to the approximate reconstruction of the original dataset. A dissimilarity metric is developed to evaluate the quality of the obtained reconstruction. The interpretability of LoDOG IGs is discussed. Theoretical analysis and empirical evaluations demonstrate the effectiveness of LoDOG and the manifold description.
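LoDOG presupposes a leading tree; the sketch below builds a leading-tree-like structure in the style of density-peaks clustering (each point is led by its nearest denser neighbor), with dc a user-chosen kernel cutoff. Note that constructing the tree itself costs O(N^2) distance computations; the O(N) claim above concerns the granulation performed on the finished tree.

```python
import numpy as np

def leading_tree(X, dc):
    """Local densities and leading nodes: lead[i] is the nearest
    point with higher density than i (-1 for density peaks)."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    rho = np.exp(-(D / dc) ** 2).sum(axis=1)       # Gaussian local density
    lead = np.full(len(X), -1)
    for i in range(len(X)):
        denser = np.where(rho > rho[i])[0]
        if denser.size:
            lead[i] = denser[np.argmin(D[i, denser])]
    return rho, lead
```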
Numeric models (including fuzzy models) produce numeric results, and there are no ideal models that deliver a complete match with the data. In this study, we advocate that the quality of models can be evaluated at a higher level of abstraction by developing a concept of granular prediction. In this way, modeling results are expressed in the form of information granules, in particular as intervals or fuzzy sets. The study formulates a general, conceptually and algorithmically supported statement: a meaningful framework for assessing the quality of numeric models is one engaging information granules. This general observation comprises a special case commonly investigated in regression analysis, where the quality of numeric results is expressed via granular constructs, namely confidence or prediction intervals. The original design of prediction information granules is formulated as an optimization problem in which the criteria of coverage of data and specificity of granular results are considered. In the optimization process, we also engage a nonlinear transformation of the level of information granularity that depends upon the value of the numeric result. The proposed development is model-agnostic and can support a variety of modeling architectures; the experimental part of the study focuses on rule-based models. Further generalizations of prediction information granules are covered by involving granular parameters in the design process.
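The two design criteria can be stated in a few lines; the sketch below evaluates coverage and specificity for interval-valued predictions, using a width-based definition of specificity that is an assumption here rather than the paper's exact formulation.

```python
import numpy as np

def coverage_specificity(y, lower, upper, y_range):
    """Coverage: fraction of targets inside their prediction granule.
    Specificity: average of 1 minus the normalized granule width."""
    coverage = ((y >= lower) & (y <= upper)).mean()
    specificity = np.mean(1.0 - (upper - lower) / y_range)
    return coverage, specificity

# Granules are then designed by maximizing, e.g., the product
# coverage * specificity over the level of information granularity
# (an illustrative objective consistent with the criteria above).
```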