The interval type-2 fuzzy set (IT2FS) offers an interesting avenue for handling high-order information and uncertainty in decision support systems (DSS), as it addresses both the extrinsic and intrinsic aspects of uncertainty. Recently, multiple attribute decision making (MADM) problems with interval type-2 fuzzy information have received increasing attention from both researchers and practitioners. As a result, a number of interval type-2 fuzzy MADM methods have been developed. In this paper, we extend the VIKOR (VlseKriterijumska Optimizacija I Kompromisno Resenje, in Serbian) method based on prospect theory to accommodate interval type-2 fuzzy circumstances. First, we propose a new distance measure for IT2FS, which comes as a sound alternative to the existing interval type-2 fuzzy distance measures. Then, a decision model integrating the VIKOR method and prospect theory is proposed. A case study concerning high-tech risk evaluation is provided to illustrate the applicability of the proposed method. In addition, a comparative analysis with the interval type-2 fuzzy TOPSIS method is presented.
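As a rough illustration of the classical (crisp) VIKOR aggregation that the paper extends, the following sketch computes the group utility S, individual regret R, and compromise index Q for a small hypothetical decision matrix. All numbers are invented, and the paper's interval type-2 fuzzy distance measure and prospect-theoretic value function are not modeled here:

```python
import numpy as np

# Hypothetical decision matrix: 3 alternatives x 2 benefit criteria.
scores = np.array([[0.7, 0.5],
                   [0.9, 0.3],
                   [0.6, 0.8]])
weights = np.array([0.6, 0.4])  # assumed criterion weights
v = 0.5                         # weight of the "majority rule" strategy

f_best = scores.max(axis=0)   # ideal value per criterion
f_worst = scores.min(axis=0)  # anti-ideal value per criterion

norm = (f_best - scores) / (f_best - f_worst)  # normalized regret per criterion
S = (weights * norm).sum(axis=1)               # group utility
R = (weights * norm).max(axis=1)               # individual regret
Q = (v * (S - S.min()) / (S.max() - S.min())
     + (1 - v) * (R - R.min()) / (R.max() - R.min()))

ranking = np.argsort(Q)  # smaller Q is better; alternative 1 wins here
```

In the fuzzy extension, the crisp differences above would be replaced by a distance measure between interval type-2 fuzzy numbers.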
•The performance of the fuzzy model is influenced by the fraction of the original input space.
•The allocation of the orders of polynomials dominates over the reduction of the input space.
•Optimizing the condition and conclusion parts of rules is helpful for both accuracy and complexity.
The primary aim of this study is the structural optimization of data-driven fuzzy rule-based systems (FRBS), with the intent of managing their complexity. This is accomplished in two ways: the first involves a structuralization of the antecedents of the fuzzy rules, and the second deals with a structuralization of their consequents. More specifically, this study contributes to the complexity management of fuzzy models by focusing on (i) the efficient arrangement (reduction) of the input spaces over which the antecedents of the rules are formed and (ii) the allocation of the orders of local polynomial functions across the consequents of the rules. The originality of the study comes from the flexibility of the FRBS, which admits variability of the input spaces standing in the antecedents of different rules as well as variability of the orders of the polynomials (local functions) forming the consequents of the rules.
Particle swarm optimization (PSO), guided by the root mean squared error (RMSE) accuracy criterion, is used to realize the efficient arrangement of the input spaces and the allocation of the orders of the individual polynomials. In this optimization process, the Fuzzy C-Means (FCM) algorithm is employed to create the fuzzy sets in the antecedents of the rules, while the standard Least Square Error (LSE) criterion is minimized to determine the coefficients of the polynomials in the consequents. The performance of the proposed model is quantified on numeric data, including both synthetic and machine learning datasets.
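The FCM-antecedent / LSE-consequent pipeline named above can be sketched in miniature on a 1-D toy problem. This is only an illustration of the generic two-rule Takagi-Sugeno construction with first-order consequents; the PSO-driven arrangement of input spaces and polynomial orders from the paper is not reproduced, and all data and settings are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D regression data: y = x^2 plus a little noise.
x = np.linspace(-1, 1, 50)
y = x ** 2 + 0.05 * rng.standard_normal(50)

c, m = 2, 2.0                    # number of rules (clusters) and fuzzifier
centers = np.array([-0.5, 0.5])  # initial cluster prototypes

# A few FCM iterations on the input space to place the rule antecedents.
for _ in range(20):
    d = np.abs(x[:, None] - centers[None, :]) + 1e-12  # distances to prototypes
    u = 1.0 / d ** (2 / (m - 1))                       # unnormalized memberships
    u /= u.sum(axis=1, keepdims=True)                  # rows sum to 1
    centers = (u ** m * x[:, None]).sum(axis=0) / (u ** m).sum(axis=0)

# Weighted LSE per rule: first-order (linear) consequents y = a*x + b.
y_hat = np.zeros_like(y)
A = np.stack([x, np.ones_like(x)], axis=1)
for j in range(c):
    W = np.diag(np.sqrt(u[:, j]))               # sqrt weights -> weighted LSE
    coeffs, *_ = np.linalg.lstsq(W @ A, W @ y, rcond=None)
    y_hat += u[:, j] * (A @ coeffs)             # membership-weighted blend

rmse = np.sqrt(np.mean((y - y_hat) ** 2))
```

In the paper's setting, PSO would additionally decide which input variables enter each rule's antecedent and which polynomial order each consequent receives, with this RMSE as the fitness.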
In the consensus reaching processes developed for group decision making problems, we need to measure the closeness among experts' opinions in order to obtain a consensus degree. As is well known, a full and unanimous consensus is often not reachable in practice. An alternative approach is to use softer consensus measures, which better reflect all possible partial agreements and guide the consensus process until high agreement is achieved among the individuals. Consensus models based on soft consensus measures have been widely used because these measures better represent the human perception of the essence of consensus. This paper presents an overview of consensus models based on soft consensus measures, covering the pioneering and prominent papers, the main existing approaches, and the new trends and challenges.
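One common way to obtain a soft consensus degree is to average pairwise similarities between experts' opinion vectors; a minimal sketch under that assumption (the opinion values and the specific similarity function are invented, and real consensus models use richer preference structures) follows:

```python
import numpy as np

# Hypothetical preferences of 4 experts over 3 alternatives, on a [0, 1] scale.
opinions = np.array([
    [0.8, 0.4, 0.6],
    [0.7, 0.5, 0.6],
    [0.9, 0.3, 0.5],
    [0.2, 0.9, 0.1],   # an outlying expert lowers the group consensus
])

n_experts = opinions.shape[0]

# Pairwise similarity: 1 - mean absolute difference between two experts.
sims = []
for i in range(n_experts):
    for j in range(i + 1, n_experts):
        sims.append(1.0 - np.mean(np.abs(opinions[i] - opinions[j])))

consensus_degree = float(np.mean(sims))  # soft consensus degree in [0, 1]
```

A consensus process would iterate, asking the most discordant expert to revise their opinions until this degree exceeds a threshold.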
In this article, a K-means-clustering-based kernel canonical correlation analysis algorithm is proposed for multimodal emotion recognition in human-robot interaction (HRI). The multimodal features (gray pixels; time- and frequency-domain features) extracted from facial expressions and speech are fused based on kernel canonical correlation analysis. K-means clustering is used to select features from the multiple modalities and reduce dimensionality. The proposed approach can improve the heterogeneity among the different modalities and make the modalities complementary, thus promoting multimodal emotion recognition. Experiments on two datasets, namely SAVEE and eNTERFACE'05, are conducted to evaluate the accuracy of the proposed method. The results show that the proposed method produces recognition rates higher than those of the methods without K-means clustering: 2.77% higher on SAVEE and 4.7% higher on eNTERFACE'05.
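The idea of using K-means to select features and reduce dimensionality can be sketched as clustering redundant feature columns and keeping one representative per cluster. This is a generic illustration with synthetic data, not the paper's pipeline (kernel CCA fusion is omitted, and the data and cluster count are invented):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical fused feature matrix: 100 samples x 12 features,
# built so that the 12 columns are 4 noisy copies of 3 base features.
base = rng.standard_normal((100, 3))
X = np.hstack([base + 0.01 * rng.standard_normal((100, 3)) for _ in range(4)])

k = 3                              # target number of feature groups
cols = X.T                         # each feature column is a point in sample space
centers = cols[[0, 1, 2]].copy()   # simple deterministic initialization

# Plain K-means over the feature columns.
for _ in range(20):
    d = np.linalg.norm(cols[:, None, :] - centers[None, :, :], axis=2)
    labels = d.argmin(axis=1)
    for j in range(k):
        if np.any(labels == j):
            centers[j] = cols[labels == j].mean(axis=0)

# Keep, per cluster, the column closest to its centroid.
selected = [int(np.argmin(np.linalg.norm(cols - centers[j], axis=1)
                          + np.where(labels == j, 0.0, np.inf)))
            for j in range(k)]
X_reduced = X[:, selected]  # dimensionality reduced from 12 to 3
```

The reduced matrix would then feed the fusion and classification stages.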
•Morphology and shape features are taken into account for the granularity representation.
•Interval information granules are used to achieve system modelling.
•A granular model is proposed and used in the field of anomaly detection.
•Evaluation indexes are established to quantify the performance.
•The data volume is greatly reduced and the detection performance is improved.
Since time series are characterized by a substantial volume of data, high noise levels, and correlations among the data in the series' attributes, it is challenging to mine crucial information from a series and apply it to anomaly detection. In this study, inspired by the application of information granularity to system modelling, a granular Markov model is proposed for time series anomaly detection. Anomalies are generally caused by changes in amplitude and shape; in this study we take both the original time series data and their amplitude-change data into consideration. First, we utilize an interval information granularity representation based on the principle of justifiable granularity to represent the original time series data in an abstract manner, arriving at the corresponding representation results, that is, interval information granules. Then, based on the results of this representation and the Fuzzy C-Means (FCM) clustering algorithm, a granular Markov model is developed to produce anomaly scores that quantify possible anomalies. Experimental studies on a large number of datasets demonstrate that, compared with state-of-the-art methods, the proposed method significantly improves the anomaly detection process with higher data anomaly resolution. The obtained results are consistent across all datasets.
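The principle of justifiable granularity is commonly formalized as choosing interval bounds that maximize a product of coverage (how much data the interval captures) and specificity (how narrow it stays). A minimal sketch under that common formulation, on an invented data window and with an invented specificity function, is:

```python
import numpy as np

rng = np.random.default_rng(2)
window = rng.normal(5.0, 1.0, 200)  # hypothetical time-series window

med = np.median(window)             # numeric representative of the window
span = window.max() - window.min()

def best_bound(candidates, side):
    """Pick the bound maximizing coverage * specificity on one side of the median."""
    best, best_score = med, -1.0
    for b in candidates:
        if side == 'upper':
            coverage = np.mean((window >= med) & (window <= b))
            specificity = 1.0 - (b - med) / span
        else:
            coverage = np.mean((window >= b) & (window <= med))
            specificity = 1.0 - (med - b) / span
        score = coverage * specificity
        if score > best_score:
            best, best_score = b, score
    return best

upper = best_bound(np.sort(window[window >= med]), 'upper')
lower = best_bound(np.sort(window[window <= med]), 'lower')
granule = (float(lower), float(upper))  # interval information granule
```

In the paper's setting, a sequence of such granules (for raw and amplitude-change data) would be clustered with FCM and fed into the granular Markov model.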
In this paper we present a comparative analysis of the predictive power of two different sets of metrics for defect prediction. We choose one set of product-related and one set of process-related software metrics and use them to classify Java files of the Eclipse project as defective or defect-free. Classification models are built using three common machine learners: logistic regression, Naïve Bayes, and decision trees. To allow different costs for prediction errors, we perform cost-sensitive classification, which proves very successful: more than 75% of files are correctly classified, with a recall above 80% and a false positive rate below 30%. The results indicate that, for the Eclipse data, process metrics are more efficient defect predictors than code metrics.
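Cost-sensitive classification can be realized by weighting the loss so that missing a defective file costs more than a false alarm. A minimal self-contained sketch with weighted logistic regression fit by gradient descent follows; the data, the 5:1 cost ratio, and the learner setup are invented, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical file-level metrics: 200 files, 2 metrics; roughly 25% defective.
X = rng.standard_normal((200, 2))
y = (X[:, 0] + 0.5 * X[:, 1] + 0.3 * rng.standard_normal(200) > 0.8).astype(float)

# Cost weights: missing a defect costs 5x more than a false alarm.
w = np.where(y == 1, 5.0, 1.0)

Xb = np.hstack([X, np.ones((200, 1))])  # add bias column
beta = np.zeros(3)
for _ in range(2000):                   # weighted logistic regression by GD
    p = 1.0 / (1.0 + np.exp(-Xb @ beta))
    grad = Xb.T @ (w * (p - y)) / len(y)
    beta -= 0.3 * grad

pred = (1.0 / (1.0 + np.exp(-Xb @ beta)) >= 0.5).astype(float)
recall = (pred[y == 1] == 1).mean()     # weighting pushes recall up
```

The same effect can be obtained in standard libraries via per-class weights (e.g., a `class_weight` option), trading false positives for fewer missed defects.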
Feature selection is a challenging problem in areas such as pattern recognition, machine learning and data mining. Based on a consistency measure introduced in rough set theory, the problem of feature selection, also called attribute reduction, aims to retain the discriminatory power of the original features. Many heuristic attribute reduction algorithms have been proposed; however, these methods are often computationally time-consuming. To overcome this shortcoming, we introduce a theoretic framework based on rough set theory, called positive approximation, which can be used to accelerate a heuristic process of attribute reduction. Based on the proposed accelerator, a general attribute reduction algorithm is designed. Through the use of the accelerator, several representative heuristic attribute reduction algorithms in rough set theory have been enhanced. Note that each of the modified algorithms can choose the same attribute reduct as its original version and hence possesses the same classification accuracy. Experiments show that these modified algorithms outperform their original counterparts, and that their performance gain becomes more visible when dealing with larger data sets.
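The consistency notion underlying rough-set attribute reduction is the positive region: the set of objects whose equivalence class under the chosen attributes is pure in the decision. A small illustrative sketch on an invented decision table (this shows the classical positive region only, not the paper's positive approximation accelerator):

```python
# Hypothetical decision table: rows are objects,
# columns are two condition attributes plus a decision.
table = [
    (0, 1, 'yes'),
    (0, 1, 'yes'),
    (1, 0, 'no'),
    (1, 0, 'yes'),   # inconsistent with the previous object
    (1, 1, 'no'),
]

def positive_region(table, attrs):
    """Objects whose equivalence class (on attrs) is pure in the decision."""
    classes = {}
    for i, row in enumerate(table):
        key = tuple(row[a] for a in attrs)
        classes.setdefault(key, []).append(i)
    pos = set()
    for members in classes.values():
        decisions = {table[i][-1] for i in members}
        if len(decisions) == 1:      # consistent class -> in positive region
            pos.update(members)
    return pos

full = positive_region(table, [0, 1])   # all condition attributes
reduced = positive_region(table, [0])   # candidate reduct {attribute 0}
```

Since dropping attribute 1 shrinks the positive region here, attribute 1 is indispensable; a reduct is a minimal attribute subset that preserves the full positive region, and the accelerator of the paper speeds up the search for one.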
Feature selection (attribute reduction) from large-scale incomplete data is a challenging problem in areas such as pattern recognition, machine learning and data mining. In rough set theory, feature selection from incomplete data aims to retain the discriminatory power of the original features. To address this issue, many feature selection algorithms have been proposed; however, these algorithms are often computationally time-consuming. To overcome this shortcoming, we introduce in this paper a theoretic framework based on rough set theory, called positive approximation, which can be used to accelerate a heuristic process of feature selection from incomplete data. As an application of the proposed accelerator, a general feature selection algorithm is designed. By integrating the accelerator into a heuristic algorithm, we obtain several modified representative heuristic feature selection algorithms in rough set theory. Experiments show that these modified algorithms outperform their original counterparts, and that their performance gain becomes more visible when dealing with larger data sets.
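For incomplete data, a standard move in rough set theory is to replace equivalence classes with tolerance classes, where a missing value (often written '*') is indiscernible from anything. A minimal sketch of that tolerance relation on an invented incomplete table (again, only the classical building block, not the paper's accelerator):

```python
# Hypothetical incomplete decision table; '*' denotes a missing value.
table = [
    (0, 1, 'yes'),
    (0, '*', 'yes'),
    (1, 0, 'no'),
    ('*', 0, 'no'),
]

def tolerant(a, b):
    """Two attribute values are indiscernible if equal or either is missing."""
    return a == b or a == '*' or b == '*'

def tolerance_class(table, i, attrs):
    """All objects indiscernible from object i on attrs under tolerance."""
    return {j for j, row in enumerate(table)
            if all(tolerant(table[i][a], row[a]) for a in attrs)}

tc0 = tolerance_class(table, 0, [0, 1])  # object 0 tolerates object 1
tc2 = tolerance_class(table, 2, [0, 1])  # object 2 tolerates object 3
```

Feature selection from incomplete data then asks for a minimal attribute subset whose tolerance classes still discriminate the decisions as well as the full set.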
We investigate the essential relationships between the generalization capability and the fuzziness of fuzzy classifiers (viz., classifiers whose outputs are vectors of membership grades of a pattern to the individual classes). The study makes a claim, and offers sound evidence for the observation, that higher fuzziness of a fuzzy classifier may imply better generalization, especially for classification data exhibiting complex boundaries. This observation runs counter to a commonly accepted position in "traditional" pattern recognition. The relationship, which obeys the conditional maximum entropy principle, is experimentally confirmed. Furthermore, it can be explained by the fact that samples located close to classification boundaries are more difficult to classify correctly than samples positioned far from the boundaries. The relationship is expected to provide some guidelines for improving the generalization of fuzzy classifiers.
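The fuzziness of a classifier's output is typically quantified with an entropy-style measure applied to the membership vectors; outputs near the class boundary (memberships near 0.5) score high, confident outputs score near zero. A minimal sketch under that common definition (the specific formula and example outputs are illustrative, not necessarily the paper's exact measure):

```python
import numpy as np

def fuzziness(U):
    """Average fuzziness of membership vectors U, using the entropy-style
    measure -[u*log2(u) + (1-u)*log2(1-u)] per membership grade."""
    u = np.clip(U, 1e-12, 1 - 1e-12)   # guard log(0)
    f = -(u * np.log2(u) + (1 - u) * np.log2(1 - u))
    return float(f.mean())

crisp = np.array([[1.0, 0.0], [0.0, 1.0]])  # confident outputs -> low fuzziness
vague = np.array([[0.5, 0.5], [0.6, 0.4]])  # boundary outputs -> high fuzziness
```

Under the paper's observation, a classifier whose outputs on boundary samples look like `vague` rather than `crisp` may generalize better on data with complex class boundaries.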