In spatiotemporal data commonly encountered in geographical systems, biomedical signals, and the like, each datum is composed of a spatial component and a temporal component. Clustering data of this nature poses challenges, especially in terms of a suitable treatment of the spatial and temporal components. In this study, proceeding with objective function-based clustering (e.g., fuzzy C-means), we revisit and augment the algorithm to make it applicable to spatiotemporal data. An augmented distance function is discussed, and the resulting clustering algorithm is presented. Two optimization criteria, namely a reconstruction error and a prediction error, are introduced and used as a vehicle to optimize the performance of the clustering method. Experimental results obtained for synthetic and real-world data are reported.
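The augmented distance can be sketched as a weighted combination of spatial and temporal squared distances plugged into a standard fuzzy C-means iteration. This is a minimal illustrative sketch, not the paper's exact formulation; the weighting coefficient `lam` and the split of each datum into arrays `X_s` (spatial features) and `X_t` (temporal features) are assumptions made here for clarity:

```python
import numpy as np

def augmented_dist2(X_s, X_t, V_s, V_t, lam):
    """Squared augmented distance: spatial part plus lam-weighted temporal part."""
    ds = ((X_s[:, None, :] - V_s[None, :, :]) ** 2).sum(-1)
    dt = ((X_t[:, None, :] - V_t[None, :, :]) ** 2).sum(-1)
    return ds + lam * dt

def fcm_spatiotemporal(X_s, X_t, c=3, m=2.0, lam=1.0, iters=100, seed=0):
    """Fuzzy C-means where each datum has spatial features X_s and temporal features X_t."""
    rng = np.random.default_rng(seed)
    n = X_s.shape[0]
    U = rng.random((n, c))
    U /= U.sum(1, keepdims=True)              # random fuzzy partition to start
    for _ in range(iters):
        W = U ** m                            # fuzzified membership weights
        V_s = (W.T @ X_s) / W.sum(0)[:, None] # spatial prototypes
        V_t = (W.T @ X_t) / W.sum(0)[:, None] # temporal prototypes
        D = augmented_dist2(X_s, X_t, V_s, V_t, lam) + 1e-12
        # standard FCM membership update on the squared augmented distance
        U = 1.0 / (D ** (1 / (m - 1)) * (1.0 / D ** (1 / (m - 1))).sum(1, keepdims=True))
    return U, V_s, V_t
```

Tuning `lam` shifts the clustering's emphasis between the spatial and temporal structure of the data, which is where criteria such as the reconstruction and prediction errors would come into play.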
From Fuzzy Models to Granular Fuzzy Models
Pedrycz, Witold
International Journal of Computational Intelligence Systems, 01/2016, Volume 9, Issue Suppl 1
Journal Article · Peer reviewed · Open access
In this study, we offer a general view of the area of fuzzy modeling and elaborate on a new direction of system modeling by introducing the concept of granular models. These models constitute a generalization of existing fuzzy models and, in contrast to them, generate results in the form of information granules (such as intervals, fuzzy sets, rough sets, and others). We present a rationale and some key motivating arguments behind the emergence of granular models and discuss their underlying design process. Central to the development of granular models are granular spaces, namely a granular space of parameters of the models and a granular input space. The development of the granular model is completed through an optimal allocation of information granularity, which optimizes the criteria of coverage and specificity of granular information. The emergence of granular models of type-2 and, in general, type-n is discussed along with an elaboration on their formation. It is shown that achieving a sound coverage-specificity tradeoff (compromise) is of essential relevance in the realization of granular models.
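The coverage and specificity criteria mentioned above can be illustrated for the simplest kind of information granule, an interval. The sketch below assumes the common definitions (coverage as the fraction of data falling inside the interval, specificity decreasing linearly with interval width relative to the data range); the paper's exact criteria may differ:

```python
import numpy as np

def coverage(data, a, b):
    """Fraction of data points falling inside the interval [a, b]."""
    return np.mean((data >= a) & (data <= b))

def specificity(a, b, lo, hi):
    """Specificity decreases linearly with interval width relative to the data range."""
    return 1.0 - (b - a) / (hi - lo)

def granule_quality(data, a, b):
    """Coverage-specificity product: a wide interval covers more but says less."""
    return coverage(data, a, b) * specificity(a, b, data.min(), data.max())
```

The tradeoff is visible directly: widening `[a, b]` raises coverage toward 1 but drives specificity toward 0, so maximizing their product yields the compromise the abstract refers to.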
In the process of modeling and forecasting fuzzy time series, how the universe of discourse is partitioned impacts the forecasting performance of the constructed fuzzy time series model. In this paper, a novel method of partitioning the universe of discourse of a time series based on interval information granules is proposed for improving the forecasting accuracy of the model. In the method, the universe of discourse of the time series is first pre-divided into intervals according to a predefined number of intervals, and information granules are then constructed in the amplitude-change space on the basis of the time series data belonging to each interval and their corresponding changes (trends). Subsequently, optimal intervals are formed by continually adjusting the widths of these intervals so that the information granules associated with the corresponding intervals become maximally "informative". Three benchmark time series are used in experiments to validate the feasibility and effectiveness of the proposed method. The experimental results clearly show that the proposed method produces more reasonable intervals exhibiting sound semantics. When the proposed partitioning method is used to determine intervals for modeling fuzzy time series, the forecasting accuracy of the constructed model is markedly enhanced.
•A method of partitioning the universe of discourse based on interval information granules is proposed.
•The method fully exploits the amplitude and trend information of the time series.
•The method produces more reasonable intervals with sound semantics.
•The method markedly improves the performance of fuzzy time series models.
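The pre-division step and the amplitude-change space described above can be sketched as follows. The representation of each sample as an (amplitude, change) pair and the equal-width initial split are illustrative assumptions based on the abstract, not the paper's exact procedure:

```python
import numpy as np

def amplitude_change_pairs(x):
    """Pair each sample with its first difference, i.e., the local trend."""
    return np.column_stack([x[:-1], np.diff(x)])

def initial_partition(x, k):
    """Pre-divide the universe of discourse into k equal-width intervals."""
    return np.linspace(x.min(), x.max(), k + 1)

def granule_members(pairs, lo, hi):
    """Amplitude-change pairs whose amplitude falls in [lo, hi); these are the
    data from which the interval's information granule would be built."""
    mask = (pairs[:, 0] >= lo) & (pairs[:, 0] < hi)
    return pairs[mask]
```

Interval edges would then be adjusted iteratively so that the granule built from each interval's members becomes maximally informative, e.g., by a coverage-specificity criterion.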
A timely and effective scheduling of the byproduct gas system plays a pivotal role in realizing intelligent manufacturing and energy conservation in the steel industry. In order to realize real-time dynamic scheduling of the blast furnace gas (BFG) system, a granular prediction and dynamic scheduling process based on adaptive dynamic programming is proposed in this paper. To reflect the specificity of production reflected in the fluctuation of data, a series of information granules is constructed and described. In the dynamic scheduling phase, based on the granular feature description, a scheduling action network is established and further updates of the information granules are realized. Considering the slow adjustment process and delay characteristics of the BFG system, the cumulative reward of the critic network is calculated on the basis of the data partition to construct a tendency attenuation-based cost function. In order to determine the future trends of the gas tank level, targeting real-time determination of the scheduling moment, a reinforcement learning-based granulation and prediction process is also proposed. To demonstrate the performance of the proposed method, a number of comparative experiments are presented using practical industrial data. The results indicate that the proposed method exhibits high accuracy and can deliver an effective solution to justified scheduling of the BFG system.
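The "tendency attenuation-based cost function" evaluated by the critic can be read as a geometrically attenuated cumulative cost over future stages, which de-emphasizes distant stages in line with the system's slow adjustment and delay characteristics. This is a guess at the general shape only; the paper's actual cost function is more elaborate:

```python
def attenuated_cumulative_cost(stage_costs, gamma=0.9):
    """Geometrically attenuated sum of per-stage costs, as a critic might
    use to score a scheduling action; gamma < 1 attenuates later stages."""
    total, weight = 0.0, 1.0
    for c in stage_costs:
        total += weight * c
        weight *= gamma
    return total
```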
In this paper, time-series clustering is discussed. First, the ℓ1 trend filtering method is used to produce an optimal segmentation of the time series. Next, optimized fuzzy information granulation is carried out for each segment to form a linear fuzzy information granule, which includes both average and trend information. Once the optimal segmentation and granulation have been completed, the original time series is transformed into a granular time series. To finalize time-series clustering, a distance measure for granular time series is established, and a linear fuzzy information granule-based dynamic time warping (LFIG_DTW) algorithm is developed for calculating the distance between two equal-length or unequal-length granular time series. Furthermore, the distance realized by the LFIG_DTW algorithm can detect not only increasing or decreasing trends, but also the changing periods and rates of change. After calculating all the distances between pairs of granular time series, an LFIG_DTW distance-based hierarchical clustering method is designed for time-series clustering. Experimental results involving several real datasets show the effectiveness of the proposed method.
•Propose an optimal method for time series granulation based on ℓ1 trend filtering and LFIGs.
•Give a new distance for LFIGs that generalizes the equal-size case.
•Propose a distance measure for granular time series composed of unequal-size LFIGs.
•Propose a clustering method based on the proposed distance measure.
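The dynamic-time-warping backbone of LFIG_DTW can be sketched with a pluggable granule distance. Representing each granule as an (average, trend) pair and using an absolute-difference distance are simplifications for illustration; the paper's LFIG distance is richer:

```python
import numpy as np

def granule_dist(g1, g2):
    """Illustrative distance between two granules, each given as (average, trend)."""
    return abs(g1[0] - g2[0]) + abs(g1[1] - g2[1])

def dtw(seq_a, seq_b, dist=granule_dist):
    """Classic dynamic-time-warping cost; handles unequal-length sequences,
    which is the key property the LFIG_DTW algorithm relies on."""
    n, m = len(seq_a), len(seq_b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = dist(seq_a[i - 1], seq_b[j - 1])
            # best of insertion, deletion, and match moves
            D[i, j] = d + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

With all pairwise DTW distances computed, a standard hierarchical (agglomerative) clustering can be run directly on the resulting distance matrix.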
In artificial intelligence systems, the question of how to express uncertainty in knowledge remains open. The negation scheme provides a new perspective on this issue. In this paper, we study quantum decisions from the negation perspective. Specifically, complex evidence theory (CET) is considered effective for expressing and handling uncertain information in a complex plane. Therefore, we first express CET in the quantum framework of Hilbert space. On this basis, a generalized negation method is proposed for quantum basic belief assignment (QBBA), called QBBA negation. In addition, a QBBA entropy is revisited to study the QBBA negation process and reveal the variation tendency of negation iteration. Meanwhile, the properties of the QBBA negation function are analyzed and discussed along with special cases. Then, several multisource quantum information fusion (MSQIF) algorithms are designed to support decision making. Finally, these MSQIF algorithms are applied to pattern classification to demonstrate their effectiveness. This is the first work to design MSQIF algorithms supporting quantum decision making from the new perspective of "negation", which provides promising solutions to knowledge representation, uncertainty measurement, and fusion of quantum information.
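The classical (non-quantum) negation of a basic belief assignment, which the QBBA negation generalizes, replaces each focal element's mass with its normalized complement. The sketch below shows that classical operation only, as a conceptual anchor; the paper's QBBA negation operates on complex-valued masses in Hilbert space:

```python
def negate_bba(m):
    """Negation of a classical basic belief assignment given as a dict
    mapping focal elements to masses that sum to 1. Each mass is replaced
    by its complement, normalized so the result again sums to 1."""
    k = len(m)
    assert k > 1, "negation needs at least two focal elements"
    return {A: (1.0 - mass) / (k - 1) for A, mass in m.items()}
```

Iterating this operation drives the assignment toward the uniform (maximum-entropy) distribution, which is the kind of variation tendency the abstract's entropy analysis tracks across negation iterations.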
This comprehensive textbook on data mining details the unique steps of the knowledge discovery process - an industry standard that prescribes the sequence in which projects should be performed, from data understanding and preprocessing to deployment of the results.
Federated learning addresses the issue of machine learning realized under constraints of privacy and security. While there have been intensive studies on building and analyzing federated regression models, this topic has not so far been analyzed in the area of fuzzy systems. To narrow this gap, in this study we formulate and solve a problem of unsupervised federated learning by designing an original federated FCM (F-FCM) clustering, which could serve as a basis for building a spectrum of fuzzy set constructs including rule-based models. Following a general client-server structure, where the local data residing with each client are not available globally and cannot be centralized (as commonly encountered in learning scenarios), the aim is to discover an overall structure across all data. A federated gradient-based optimization realized in the horizontal mode is developed. An overall learning process is derived, composed of communicating gradients coming from the clients, updating the prototypes at the server side, and passing them back to the clients. It is also shown that the relevance of the globally constructed structure is conveniently assessed in terms of granular footprints of the prototypes constructed by the F-FCM. Several illustrative examples demonstrate the efficiency of the developed federated algorithm.
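The client-server structure can be sketched as follows. Note the paper develops a gradient-based federated optimization; the sketch instead aggregates per-client FCM sufficient statistics, a simpler variant that preserves the same privacy structure (raw data never leave the clients, only aggregate quantities are communicated):

```python
import numpy as np

def client_stats(X, V, m=2.0):
    """One client: compute membership-weighted sums for its private data X
    against the current global prototypes V. Only these aggregates are sent."""
    D = ((X[:, None, :] - V[None, :, :]) ** 2).sum(-1) + 1e-12
    U = 1.0 / (D ** (1 / (m - 1)) * (1.0 / D ** (1 / (m - 1))).sum(1, keepdims=True))
    W = U ** m
    return W.T @ X, W.sum(0)   # numerator and denominator of the prototype update

def server_update(stats):
    """Server: pool client statistics and recompute the global prototypes."""
    num = sum(s[0] for s in stats)
    den = sum(s[1] for s in stats)
    return num / den[:, None]
```

Alternating these two steps (clients compute statistics against the current prototypes, the server aggregates and broadcasts updated prototypes) yields a global cluster structure without ever centralizing the data.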