► A fundamental concept of information granularity is introduced.
► Information granularity is regarded as a design asset in system design.
► Discussed is a problem of optimal allocation of information granularity.
► Various allocation protocols are studied.
► Main categories of granular models are given.
The highly diversified conceptual and algorithmic landscape of Granular Computing calls for the formation of sound fundamentals of the discipline, which cut across the diversity of formal frameworks (fuzzy sets, sets, rough sets) in which information granules are formed and processed. The study addresses this quest by introducing the idea of granular models: generalizations of numeric models formed as a result of an optimal allocation (distribution) of information granularity. Information granularity is regarded as a crucial design asset, which helps establish a better rapport between the resulting granular model and the system being modeled. A suite of modeling situations is elaborated on; these offer convincing examples motivating the emergence of granular models. Pertinent problems showing how information granularity is distributed across the parameters of numeric functions (resulting in granular mappings) are formulated as optimization tasks. A set of associated information granularity distribution protocols is discussed, and a number of illustrative examples are provided.
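To make the allocation idea concrete, here is a minimal Python sketch of the simplest (uniform) allocation protocol for a two-parameter linear model: each parameter receives the same level of granularity eps, realized here as multiplicative intervals around the numeric parameters, and the quality of the allocation is assessed through the coverage of the data by the resulting granular output. The linear model, the interval construction, and the coverage criterion are illustrative assumptions rather than the exact constructs of the study.

```python
import numpy as np

def granular_output_range(x, a_low, a_high):
    """Interval output of a linear model y = a0 + a1*x when each parameter
    varies in [a_low_i, a_high_i]; for a linear map the extremes occur at
    interval endpoints, so we enumerate the 4 corner combinations."""
    corners = [(p, q) for p in (a_low[0], a_high[0]) for q in (a_low[1], a_high[1])]
    ys = [p + q * x for p, q in corners]
    return min(ys), max(ys)

def coverage_under_uniform_allocation(X, Y, a, eps):
    """Uniform protocol: every parameter a_i receives the same level of
    granularity eps, producing intervals [a_i - eps*|a_i|, a_i + eps*|a_i|].
    Coverage = fraction of targets falling inside the granular output."""
    a = np.asarray(a, float)
    a_low, a_high = a - eps * np.abs(a), a + eps * np.abs(a)
    hits = 0
    for x, y in zip(X, Y):
        lo, hi = granular_output_range(x, a_low, a_high)
        hits += (lo <= y <= hi)
    return hits / len(X)

# Toy numeric model fitted beforehand, say y ~ 1.0 + 2.0 x
X = np.linspace(0, 1, 50)
Y = 1.0 + 2.0 * X + np.random.default_rng(1).normal(0, 0.1, 50)
for eps in (0.02, 0.05, 0.1):
    print(eps, coverage_under_uniform_allocation(X, Y, [1.0, 2.0], eps))
```

Non-uniform protocols would distribute different eps_i across the parameters under a fixed overall budget, which turns the allocation into a genuine optimization task.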
In the plethora of conceptual and algorithmic developments supporting data analytics and system modeling, human-centric pursuits assume a particular position owing to the ways they emphasize and realize interaction between users and the data. We advocate that the level of abstraction, which can be flexibly adjusted, is conveniently realized through Granular Computing. Granular Computing is concerned with the development and processing of information granules, formal entities which facilitate a way of organizing knowledge about the available data and the relationships existing among them. This study identifies the principles of Granular Computing and shows how information granules are constructed and subsequently used in describing relationships present among the data.
• Introduction of the principle of justifiable granularity.
• Comprehensive algorithmic framework supporting the design of information granules.
• Discussed are two essential requirements of information granularity.
• Provided is a series of experimental studies.
The study introduces and discusses the principle of justifiable granularity, which supports a coherent way of designing information granules in the presence of experimental evidence (of either numerical or granular character). The term “justifiable” pertains to the construction of the information granule, which is formed in such a way that it is (a) highly legitimate (justified) in light of the experimental evidence, and (b) specific enough, meaning that it comes with well-articulated semantics. The design process is associated with a well-defined optimization problem balancing the two requirements of experimental justification and specificity. A series of experiments is provided, and constructs realized for various formalisms of information granules (intervals, fuzzy sets, rough sets, and shadowed sets) are discussed.
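As an illustration, the sketch below implements one common formulation of the principle for interval granules built over one-dimensional numeric evidence: each bound is chosen to maximize the product of coverage and a linearly decreasing specificity on its side of the median. The product criterion and the linear specificity are assumptions of this sketch, not the only constructs considered in the study.

```python
import numpy as np

def justifiable_interval(data):
    """Build an interval granule [a, b] around the median by maximizing
    coverage * specificity on each side (one common formulation of the
    principle of justifiable granularity)."""
    data = np.asarray(data, dtype=float)
    med = np.median(data)
    lo, hi = data.min(), data.max()

    def best_bound(candidates, side):
        best_v, best_c = 0.0, med
        for c in candidates:
            if side == "upper":
                cov = np.mean((data >= med) & (data <= c))              # coverage
                sp = 1.0 - (c - med) / (hi - med) if hi > med else 1.0  # specificity
            else:
                cov = np.mean((data >= c) & (data <= med))
                sp = 1.0 - (med - c) / (med - lo) if med > lo else 1.0
            v = cov * sp
            if v > best_v:
                best_v, best_c = v, c
        return best_c

    b = best_bound(data[data >= med], "upper")
    a = best_bound(data[data <= med], "lower")
    return a, med, b

# Example: a granule formed over noisy one-dimensional evidence
rng = np.random.default_rng(0)
print(justifiable_interval(rng.normal(5.0, 1.0, 200)))
```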
This article investigates group decision-making (GDM) problems in which the decision makers' (DMs') preference information is represented by incomplete interval-valued intuitionistic fuzzy preference relations (IVIFPRs). First, a multiplicative consistency property and an acceptable multiplicative consistency property for IVIFPRs are introduced. Then, an optimization model to estimate the missing values in an incomplete IVIFPR is constructed. Subsequently, two optimization models are established to derive, respectively, a perfectly consistent IVIFPR and an acceptably consistent IVIFPR from a given inconsistent IVIFPR. Furthermore, a model is offered to derive the DMs' weights. Afterward, a consensus index is defined; when the consensus for IVIFPRs is unacceptable, a model is presented to reach the consensus requirement. Moreover, a novel GDM method for incomplete IVIFPRs is presented. Finally, the method is applied to an illustrative example that demonstrates its feasibility.
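For intuition, the snippet below shows the consistency-based completion idea in its simplest setting, an ordinary (single-valued) fuzzy preference relation under Tanino's multiplicative consistency; the interval-valued intuitionistic case handled by the article's optimization models is substantially richer, so this is only a simplified analogue.

```python
def estimate_missing(r_ij, r_jk):
    """Multiplicative-consistency estimate of r_ik from r_ij and r_jk for an
    ordinary fuzzy preference relation (Tanino's condition). This is a
    simplified analogue of completing a relation from consistency, not the
    IVIFPR optimization model of the article."""
    num = r_ij * r_jk
    den = num + (1.0 - r_ij) * (1.0 - r_jk)
    return 0.5 if den == 0 else num / den

# If alternative 1 is preferred to 2 (0.7) and 2 to 3 (0.6),
# consistency implies a definite degree of preference of 1 over 3:
print(estimate_missing(0.7, 0.6))  # ~0.778
```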
Attacks over the Internet are becoming more and more complex and sophisticated, and how to detect security threats and measure the security of the Internet has become a significant research topic. Detecting Internet attacks and measuring Internet security require collecting different categories of data and employing methods of data analytics. However, the literature still lacks a thorough review of security-related data collection and analytics on the Internet. It therefore becomes necessary to review the current state of the art in order to gain a deep insight into which categories of data should be collected and which methods should be used to detect Internet attacks and to measure Internet security. In this paper, we survey existing studies on security-related data collection and analytics for the purpose of measuring Internet security. We first divide the data related to network security measurement into four categories: 1) packet-level data; 2) flow-level data; 3) connection-level data; and 4) host-level data. For each category of data, we provide a specific classification and discuss its advantages and disadvantages with regard to Internet security threat detection. We also propose several additional requirements for security-related data analytics in order to make the analytics flexible and scalable. Based on the usage of data categories and the types of data analytic methods, we review current detection methods for distributed denial-of-service flooding and worm attacks, applying the proposed requirements to evaluate their performance. Finally, based on the completed review, a list of open issues is outlined and future research directions are identified.
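As a toy illustration of how the first two data categories relate, the sketch below aggregates packet-level records into flow-level records keyed by the usual 5-tuple; the record fields are assumptions made for the example rather than a standard schema.

```python
from collections import defaultdict

def packets_to_flows(packets):
    """Aggregate packet-level records into flow-level records keyed by the
    5-tuple (src, dst, sport, dport, proto); field names are illustrative."""
    flows = defaultdict(lambda: {"packets": 0, "bytes": 0})
    for p in packets:
        key = (p["src"], p["dst"], p["sport"], p["dport"], p["proto"])
        flows[key]["packets"] += 1
        flows[key]["bytes"] += p["len"]
    return dict(flows)

pkts = [
    {"src": "10.0.0.1", "dst": "10.0.0.2", "sport": 1234, "dport": 80, "proto": "tcp", "len": 60},
    {"src": "10.0.0.1", "dst": "10.0.0.2", "sport": 1234, "dport": 80, "proto": "tcp", "len": 1500},
]
print(packets_to_flows(pkts))
```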
Perception techniques for autonomous driving should be adaptive to various environments. In essential perception modules such as traffic line detection, many conditions must be considered, such as the number of traffic lines and the computing power of the target system. To address these problems, in this paper we propose a traffic line detection method called Point Instance Network (PINet), based on key point estimation and instance segmentation. PINet includes several hourglass models that are trained simultaneously with the same loss function, so the size of the trained model can be chosen according to the target environment's computing power. We cast the clustering of the predicted key points as an instance segmentation problem, allowing PINet to be trained regardless of the number of traffic lines. PINet achieves competitive accuracy and false positive rates on CULane and TuSimple, popular public datasets for lane detection. Our code is available at https://github.com/koyeongmin/PINet_new
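For intuition, the sketch below performs a greedy embedding-based grouping of predicted key points into lane instances, in the spirit of such post-processing; it is a simplified stand-in rather than PINet's actual procedure (see the repository above for that).

```python
import numpy as np

def cluster_keypoints(points, feats, thr=1.0):
    """Greedy instance grouping: a point joins an existing lane if its
    embedding feature is within `thr` of that lane's mean feature;
    otherwise it starts a new lane."""
    lanes, lane_feats = [], []
    for p, f in zip(points, feats):
        assigned = False
        for k, mean_f in enumerate(lane_feats):
            if np.linalg.norm(f - mean_f) < thr:
                lanes[k].append(p)
                # running mean of the lane's embedding
                lane_feats[k] = mean_f + (f - mean_f) / len(lanes[k])
                assigned = True
                break
        if not assigned:
            lanes.append([p])
            lane_feats.append(f.copy())
    return lanes

# Hypothetical predicted key points (x, y) and 4-D embedding features
pts = np.array([[10, 5], [12, 9], [40, 5], [41, 10]])
fts = np.array([[0.1, 0, 0, 0], [0.12, 0, 0, 0], [2.0, 0, 0, 0], [2.05, 0, 0, 0]])
print(len(cluster_keypoints(pts, fts)))  # -> 2 lane instances
```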
Clustering is a powerful vehicle to reveal and visualize the structure of data. When dealing with time series, selecting a suitable measure to evaluate the similarities/dissimilarities within the data becomes necessary, and this choice subsequently exhibits a significant impact on the results of clustering. The selection should be based upon the nature of the time series and the application itself. When grouping time series based on their shape information is of interest (shape-based clustering), the Dynamic Time Warping (DTW) distance is a desirable choice. By stretching or compressing segments of temporal data, DTW determines an optimal match between any two time series; in this way, time series exhibiting similar patterns occurring at different time periods are considered as being similar. Although DTW is a suitable choice for comparing data with respect to their shape information, calculating the average of a collection of time series (which is required in clustering methods) based on this distance becomes a challenging problem. As a result, employing clustering techniques like K-Means and Fuzzy C-Means (where the cluster centers, i.e., prototypes, are calculated through averaging the data) along with the DTW distance is a challenging task and may produce unsatisfactory results. In this study, three alternatives for fuzzy clustering of time series using the DTW distance are proposed. In the first method, a DTW-based averaging technique proposed in the literature is applied to Fuzzy C-Means clustering. The second method considers Fuzzy C-Medoids clustering, while the third alternative comes as a hybrid technique exploiting the advantages of both Fuzzy C-Means and Fuzzy C-Medoids when clustering time series. Experimental studies are reported over a set of time series coming from the UCR time series database.
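For reference, here is a minimal textbook DTW implementation with an absolute-difference local cost and no warping window; both are common defaults rather than choices made in this study.

```python
import numpy as np

def dtw_distance(s, t):
    """Classic dynamic-programming DTW between two univariate time series:
    cell (i, j) holds the cost of the best warping path aligning s[:i] with t[:j]."""
    n, m = len(s), len(t)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(s[i - 1] - t[j - 1])
            D[i, j] = cost + min(D[i - 1, j],      # insertion
                                 D[i, j - 1],      # deletion
                                 D[i - 1, j - 1])  # match
    return D[n, m]

# Two series with the same shape shifted in time are close under DTW
a = np.sin(np.linspace(0, 2 * np.pi, 60))
b = np.sin(np.linspace(0, 2 * np.pi, 60) - 0.5)
print(dtw_distance(a, b))
```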
• Information Sciences becomes fifty years old.
• A bibliometric overview of the journal between 1968 and 2016.
• Identification of the leading topics, authors, universities, and countries.
• A graphical visualization by using the VOS viewer software.
Information Sciences is a leading international journal in computer science launched in 1968, thus turning fifty years old in 2018. In order to celebrate its anniversary, this study presents a bibliometric overview of the leading publication and citation trends occurring in the journal. The aim of the work is to identify the most relevant authors, institutions, and countries, and to analyze their evolution over time. The paper uses the Web of Science Core Collection to retrieve the bibliographic information. Our study also develops a graphical mapping of the bibliometric material by using the visualization of similarities (VOS) viewer. With this software, the work analyzes bibliographic coupling, citation and co-citation patterns, co-authorship, and co-occurrence of keywords. The results underline the significant growth of the journal over time and its international diversity, with publications from countries all over the world.
Interval type-2 fuzzy sets (IT2FSs) offer an interesting avenue for handling high-order information and uncertainty in decision support systems (DSSs) when dealing with both extrinsic and intrinsic aspects of uncertainty. Recently, multiple attribute decision making (MADM) problems with interval type-2 fuzzy information have received increasing attention from both researchers and practitioners, and a number of interval type-2 fuzzy MADM methods have been developed. In this paper, we extend the VIKOR (VlseKriterijumska Optimizacija I Kompromisno Resenje, in Serbian) method based on prospect theory to accommodate interval type-2 fuzzy circumstances. First, we propose a new distance measure for IT2FSs, which comes as a sound alternative when compared with the existing interval type-2 fuzzy distance measures. Then, a decision model integrating the VIKOR method and prospect theory is proposed. A case study concerning high-tech risk evaluation is provided to illustrate the applicability of the proposed method. In addition, a comparative analysis with the interval type-2 fuzzy TOPSIS method is also presented.
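Since the abstract does not reproduce the proposed measure, the sketch below shows only a standard baseline of the kind such proposals are typically compared against: a Hamming-style distance between two IT2FSs represented by lower and upper membership grades sampled over a shared domain.

```python
import numpy as np

def it2fs_distance(A_low, A_up, B_low, B_up):
    """Generic Hamming-style distance between two interval type-2 fuzzy sets
    given their lower/upper membership grades sampled over a shared domain.
    This is NOT the new measure proposed in the paper; it is a standard
    baseline of the kind such measures are compared against."""
    A_low, A_up = np.asarray(A_low), np.asarray(A_up)
    B_low, B_up = np.asarray(B_low), np.asarray(B_up)
    return 0.5 * np.mean(np.abs(A_low - B_low) + np.abs(A_up - B_up))

# Two triangular-ish IT2FSs sampled at five domain points
print(it2fs_distance([0.0, 0.3, 0.6, 0.3, 0.0],
                     [0.1, 0.5, 0.9, 0.5, 0.1],
                     [0.0, 0.2, 0.7, 0.2, 0.0],
                     [0.1, 0.4, 1.0, 0.4, 0.1]))
```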