•Formulate the problem of incremental three-way concept lattice construction.
•Develop incremental construction algorithms for AE/OE concept lattices.
•Use 3WCA to explore knowledge discovery in social networks.
Three-way concept analysis (3WCA), a combination of three-way decision and formal concept analysis, is widely used in the field of knowledge discovery. Generally, constructing three-way concept lattices requires the original formal context and its complement context simultaneously. Additionally, the existing three-way concept lattice construction algorithms focus on static formal contexts and cannot cope with the dynamic formal contexts that are an essential representation in social networks. To this end, this paper pioneers a novel problem and method for the incremental construction of three-way concept lattices for knowledge discovery in social networks. To improve construction efficiency, this paper first investigates three-way concept lattice construction for attribute-incremental and object-incremental formal contexts, respectively. The dynamic formal context of a social network can then be viewed as a special formal context with both attribute increments and object increments. On this basis, we develop incremental construction algorithms for AE/OE concept lattices, called SNS-AE and SNS-OE. Extensive experiments are conducted on various formal contexts to evaluate the effectiveness of our incremental algorithms. The experimental results demonstrate that the proposed incremental algorithms significantly decrease the construction time of the three-way concept lattice compared to the non-incremental algorithm.
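As background for the constructions above, the following is a minimal sketch, in plain Python sets, of a common formulation of the object-induced (OE) three-way derivation operators, which combine derivation in the original context with derivation in its complement. The toy context and function names are illustrative; this is not the paper's SNS-AE/SNS-OE algorithm.

```python
# Sketch of OE three-way derivation operators on a toy formal context,
# represented as a dict mapping each object to its set of attributes.
def positive_derivation(objects, context, attributes):
    """Attributes shared by every object in the set (derivation in K)."""
    return {m for m in attributes if all(m in context[g] for g in objects)}

def negative_derivation(objects, context, attributes):
    """Attributes held by no object in the set (derivation in the
    complement context K^c)."""
    return {m for m in attributes if all(m not in context[g] for g in objects)}

def oe_extent(pos, neg, context):
    """Objects having every attribute in `pos` and none in `neg`."""
    return {g for g, attrs in context.items()
            if pos <= attrs and not (neg & attrs)}

# Toy formal context: three objects, three attributes.
K = {"a": {"1", "2"}, "b": {"2"}, "c": {"3"}}
M = {"1", "2", "3"}

X = {"a", "b"}
A = positive_derivation(X, K, M)   # {'2'}
B = negative_derivation(X, K, M)   # {'3'}
# (X, (A, B)) is an OE concept iff the round trip recovers X itself.
print(A, B, oe_extent(A, B, K) == X)   # {'2'} {'3'} True
```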
Nowadays, Artificial Intelligence (AI) is widely applied in every area of daily life. Despite its benefits, AI suffers from the opacity of complex internal mechanisms and does not, by design, satisfy the principles of Explainable Artificial Intelligence (XAI). This lack of transparency is especially problematic in CyberSecurity, because entrusting crucial decisions to a system that cannot explain itself presents obvious dangers. Several methods in the literature can provide explainability of AI results. However, applying XAI in CyberSecurity can be a double-edged sword: it substantially improves CyberSecurity practices but simultaneously leaves the system vulnerable to adversarial attacks. Therefore, there is a need to analyze the state of the art of XAI methods in CyberSecurity to provide a clear vision for future research. This study presents an in-depth examination of the application of XAI in CyberSecurity. It considers more than 300 papers to comprehensively analyze the main CyberSecurity application fields, such as Intrusion Detection Systems, Malware detection, Phishing and Spam detection, BotNet detection, Fraud detection, Zero-Day vulnerabilities, Digital Forensics, and Crypto-Jacking. Specifically, this study focuses on the explainability methods adopted or proposed in these fields, pointing out promising works and new challenges.
Energy efficiency and sustainability are important factors to address in the context of smart cities. In this sense, smart metering and nonintrusive load monitoring play a crucial role in fighting energy theft and in optimizing the energy consumption of homes, buildings, cities, and so forth. The estimated number of smart meters will exceed 800 million by 2020. By providing near real-time data about power consumption, smart meters can be used to analyze electricity usage trends and to point out anomalies, guaranteeing companies' safety and avoiding energy waste. In the literature, there are many proposals approaching the problem of anomaly detection. Most of them are limited because they lack context and time awareness, and their false positive rate is affected by changes in consumer habits. This research work focuses on the need for an anomaly detection method capable of facing concept drift, for instance, when the family structure changes or a house becomes a second residence. The proposed methodology adopts a long short-term memory (LSTM) network to profile and forecast consumers' behavior based on their recent past consumption. Continuous monitoring of the consumption prediction errors makes it possible to distinguish between possible anomalies and changes (drifts) in normal behavior, which correspond to different error motifs. The experimental results demonstrate the suitability of the proposed framework by pointing out an anomaly in near real-time after a training period of one week.
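The following is a minimal sketch of the forecast-and-monitor idea described above: an LSTM predicts the next consumption reading from a recent window, and a point is flagged only when the prediction error stays high for several consecutive steps, so isolated spikes and gradual drifts can be treated differently. The hidden size, error threshold, and patience value are illustrative assumptions, not the paper's settings.

```python
# Sketch: LSTM consumption forecaster plus a persistence-based error check.
import torch
import torch.nn as nn

class ConsumptionForecaster(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                    # x: (batch, window, 1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])         # next-reading forecast

def flag_anomalies(model, windows, targets, threshold=0.5, patience=3):
    """Flag a reading only after `patience` consecutive large errors."""
    with torch.no_grad():
        errors = (model(windows).squeeze(-1) - targets).abs()
    flags, run = [], 0
    for e in errors.tolist():
        run = run + 1 if e > threshold else 0
        flags.append(run >= patience)
    return flags

# Toy usage: 8 windows of 24 readings each (e.g. one day at hourly rate).
model = ConsumptionForecaster()
windows = torch.rand(8, 24, 1)
targets = torch.rand(8)
print(flag_anomalies(model, windows, targets))
```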
•Security issues and challenges in social network services are studied.
•We discuss different security and privacy threats in social network services.
•This paper presents several possible defense solutions to secure social network services.
•A novel research direction for the security of social network services is presented.
Social networks are very popular in today's world. Millions of people use various forms of social networks, as they allow individuals to connect with friends and family and to share private information. However, issues related to maintaining the privacy and security of a user's information can occur, especially when the user's uploaded content is multimedia, such as photos, videos, and audio. Uploaded multimedia content carries information that can be transmitted virally and almost instantaneously within a social networking site and beyond. In this paper, we present a comprehensive survey of different security and privacy threats that target every user of social networking sites. In addition, we separately focus on various threats that arise due to the sharing of multimedia content within a social networking site. We also discuss current state-of-the-art defense solutions that can protect social network users from these threats. We then present future directions and discuss some easy-to-apply response techniques to achieve the goal of a trustworthy and secure social network ecosystem.
•Spam activities on Facebook are studied.
•A novel feature set is introduced for the task of spammer detection on Facebook.
•A baseline dataset of Facebook user profiles is constructed.
•We propose the SpamSpotter framework based on an intelligent decision support system.
Facebook is one of the most popular and leading online social network services. With the increasing number of users on Facebook, the probability of spam content being broadcast on it is also escalating day by day. A few techniques exist to combat spam on Facebook. However, due to the public unavailability of critical pieces of Facebook information, such as profiles, network information, and an unlimited number of posts, the existing techniques do not work efficiently for detecting many spammers. In this paper, we propose an efficient spammer detection framework (which we call SpamSpotter) that distinguishes spammers from legitimate users on Facebook. Based on Facebook's recent characteristics, the framework introduces a novel feature set to facilitate spammer detection. We use a baseline dataset from Facebook that includes 300 spammer and 700 legitimate user profiles. The baseline dataset contains a set of features for each profile, extracted using a novel dataset construction mechanism. In addition, an intelligent decision support system that applies eight different machine learning classifiers to the baseline dataset is designed to distinguish spammers from legitimate users. To evaluate the efficiency and accuracy of our proposed framework, we implemented it and compared it with existing frameworks. The evaluation results demonstrate that our proposed framework is accurate and efficient, delivering first-rate performance: it attains an accuracy of 0.984 and a Matthews correlation coefficient of 0.977.
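To make the decision-support step concrete, here is a minimal sketch of training and comparing classifiers on per-profile features. The synthetic features, stand-in labels, and two-classifier shortlist are illustrative assumptions; the paper's framework evaluates eight classifiers on its real Facebook dataset.

```python
# Sketch: compare classifiers on profile features by accuracy and MCC.
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, matthews_corrcoef
from sklearn.model_selection import train_test_split
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((1000, 5))          # e.g. posts/day, friend count, URL ratio...
y = (X[:, 2] > 0.7).astype(int)    # stand-in labels: 1 = spammer

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
for clf in (RandomForestClassifier(random_state=0), LogisticRegression()):
    pred = clf.fit(X_tr, y_tr).predict(X_te)
    print(type(clf).__name__,
          f"acc={accuracy_score(y_te, pred):.3f}",
          f"mcc={matthews_corrcoef(y_te, pred):.3f}")
```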
This paper presents a comprehensive model for representing and reasoning on situations to support decision makers in Intelligence analysis activities. The main result presented in the paper stems from a work of refinement and abstraction of the authors' previous results on the use of Situation Awareness and Granular Computing for the development of analysis methods and techniques to support Intelligence. This work made it possible to derive the characteristics of the model from previous case studies and applications with real data, and to link the reasoning techniques to concrete approaches used by intelligence analysts, such as Structured Analytic Techniques. The model makes it possible to represent an operational situation according to three complementary perspectives: descriptive, relational, and behavioral. These three perspectives are instantiated on the basis of the principles and methods of Granular Computing, mainly based on the theories of fuzzy and rough sets, with the help of further structures such as graphs. As regards reasoning on the situations thus represented, the paper presents four methods with related case studies and applications validated on real data.
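Since the model builds on rough-set-based granulation, the following is a minimal illustration of that underlying idea: a target set of situations is approximated from below and above by the granules (equivalence classes) of an attribute-based indiscernibility relation. The toy attributes and labels are illustrative assumptions, not the paper's model.

```python
# Sketch: rough-set lower/upper approximation of a set of situations.
from collections import defaultdict

# Situations described by (location, activity) attributes.
situations = {"s1": ("port", "loading"), "s2": ("port", "loading"),
              "s3": ("port", "idle"),    "s4": ("sea",  "idle")}
target = {"s1", "s3"}                    # situations judged anomalous

granules = defaultdict(set)              # indiscernibility classes
for s, desc in situations.items():
    granules[desc].add(s)

lower = {s for g in granules.values() if g <= target for s in g}
upper = {s for g in granules.values() if g & target for s in g}
print(lower)   # certainly anomalous: {'s3'}
print(upper)   # possibly anomalous: {'s1', 's2', 's3'}
```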
•We model an ontology alignment process based on memetic algorithms.
•We use memetic algorithms to suitably aggregate semantic similarity measures.
•The performance of our approach is evaluated with respect to the top performers of OAEI campaigns.
Modern infrastructures for information and communication technologies aim to provide enhanced services by integrating the knowledge spread over the web through an ontological representation of information. However, the usefulness of ontologies in managing different knowledge sources is limited by the so-called semantic heterogeneity problem, which arises when several interacting software components use different ontologies to represent the same information. In order to bridge this gap and, consequently, enable full interoperability across software components, it is necessary to bring the corresponding ontologies into mutual agreement by identifying a set of semantic relationships among their entities. This result is achieved by means of a so-called ontology alignment process that, for each pair of entities belonging to the ontologies under alignment, computes their semantic closeness through an optimized aggregation of different similarity measures. Unfortunately, this similarity aggregation is a hard optimization problem, above all when no information is known about the features of the ontologies. The aim of this paper is to define an ontology alignment process based on a memetic algorithm able to efficiently aggregate similarity measures without using a priori knowledge about the ontologies under alignment. As shown by a statistical multiple comparison procedure, our approach yields high performance in terms of alignment quality with respect to the top performers of the well-known Ontology Alignment Evaluation Initiative (OAEI) campaigns.
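The following is a minimal sketch of the memetic idea: evolve a weight vector that aggregates several similarity measures, refining each offspring with a short local search. The fitness function here is a stand-in assumption; in the paper's setting it would score the alignment produced by the weighted aggregation.

```python
# Sketch: memetic optimization (evolution + local search) of aggregation weights.
import random

N_MEASURES = 3

def fitness(w):
    # Stand-in objective: prefer a particular mix of measures.
    target = [0.5, 0.3, 0.2]
    return -sum((a - b) ** 2 for a, b in zip(w, target))

def normalize(w):
    w = [max(x, 0.0) for x in w]
    s = sum(w) or 1.0
    return [x / s for x in w]

def local_search(w, step=0.05, iters=10):
    """Memetic refinement: small random moves kept only if they improve."""
    best = w
    for _ in range(iters):
        cand = normalize([x + random.uniform(-step, step) for x in best])
        if fitness(cand) > fitness(best):
            best = cand
    return best

pop = [normalize([random.random() for _ in range(N_MEASURES)]) for _ in range(20)]
for _ in range(30):                                        # generations
    a, b = random.sample(pop, 2)
    child = normalize([(x + y) / 2 for x, y in zip(a, b)])  # crossover
    child = local_search(child)                             # refinement
    pop.sort(key=fitness)
    pop[0] = child                                          # replace the worst
print(max(pop, key=fitness))
```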
The technologies of Industry 4.0 provide an opportunity to improve the effectiveness of Visual Management in manufacturing. The opportunity for improvement is twofold: on one side, Visual Management theory and practice can inspire the design of new software tools suitable for Industry 4.0; on the other, the technology of Industry 4.0 can be used to increase the effectiveness of visual software tools. The paper first explores how theoretical results on Visual Management can be used as guidelines to improve human-computer interaction; then a methodology is proposed for the design of visual patterns for manufacturing. Four visual patterns are presented that contribute to the solution of problems frequently encountered in discrete manufacturing industries; these patterns help to solve planning and control problems, thus providing support to various management functions. Positive implications of this research concern people engagement and empowerment as well as improved problem solving, decision-making, and management of manufacturing processes.
We present a computing scheme as a variant of a recently proposed granular recurrent neural network. Being deduced from a generic system of partial differential equations, this variant is able to capture the spatiotemporal variability of certain datasets and problems. The convergence of the computing scheme is formally discussed. Some preliminary numerical experiments were first performed using synthetic datasets, inferring some particular partial differential equations. Then two application examples were considered (using publicly available datasets): the prediction of dissolved oxygen in surface water simultaneously at different depths (unlike the current literature), and the prediction of the concentration of particulate matter less than 2.5 μm in diameter at different sites. The numerical results show the potential of the approach for forecasting against well-known techniques such as Long Short-Term Memory networks.
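To illustrate how a PDE induces a recurrent update rule of the kind the scheme above builds on, here is a minimal sketch: discretizing the diffusion equation u_t = D u_xx with an explicit finite-difference scheme yields a per-step state update. This only illustrates the general PDE-to-recurrence idea; the paper's granular recurrent network is richer, and D, dx, dt here are assumptions.

```python
# Sketch: explicit finite-difference diffusion as a recurrent state update.
import numpy as np

D, dx, dt = 0.1, 1.0, 0.5
r = D * dt / dx**2            # must satisfy r <= 0.5 for stability

u = np.zeros(20)
u[10] = 1.0                   # initial spike of "concentration"
for _ in range(50):           # one recurrent update per time step
    u[1:-1] = u[1:-1] + r * (u[2:] - 2 * u[1:-1] + u[:-2])
print(u.round(3))             # the spike spreads out over time
```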
Nowadays, Massive Open Online Courses (MOOCs) are adopted by students worldwide. One of the main critical issues often associated with MOOCs is the dropout phenomenon: the percentage of students abandoning a MOOC-based study path is still considered too high. Therefore, an increasing number of scientific works, coming from several heterogeneous communities (e.g., computer science, data science, statistics, education), propose approaches to mitigate this problem. The majority of these works focus on machine learning methods that train classifiers to predict which students are going to abandon a course before it ends. Among such approaches, the ones achieving the best performance use enriched sets of features to train their models and produce results that cannot easily characterize the different behaviors of dropping-out and non-dropping-out students. The present work proposes the design of a novel process to train a set of dropout predictors leveraging a reduced set of features. The underlying idea is to exploit weekly data to classify, with acceptable levels of precision, students who are likely heading toward dropout. In cases of uncertainty, the classification decision is deferred to the next week, when new data is available. Such an approach, which is aware of the course timeline, offers several advantages. The first is the chance to build a real-time educational decision support system able to support decisions as soon as sufficient information is available. The second is to preserve resources, avoiding wasting them on students erroneously classified as at risk of dropout. The third is to allow an explicit characterization of dropout-conducive behavior through a rule mining approach.
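The following is a minimal sketch of the deferred weekly classification idea: one classifier per week is trained on the activity observed up to that week, a decision is taken only when the model's confidence clears a threshold, and uncertain students are deferred to the next week. The threshold, the synthetic activity features, and the stand-in labels are illustrative assumptions, not the paper's setup.

```python
# Sketch: week-by-week dropout triage with decision deferral under uncertainty.
from sklearn.linear_model import LogisticRegression
import numpy as np

rng = np.random.default_rng(0)
THRESHOLD = 0.8
N_WEEKS = 4

# One toy model per week, trained on features observed so far
# (e.g. logins, videos watched, quizzes attempted in each week).
models = []
for w in range(1, N_WEEKS + 1):
    X = rng.random((200, w))
    y = (X.mean(axis=1) < 0.4).astype(int)   # stand-in labels: 1 = dropout
    models.append(LogisticRegression().fit(X, y))

def triage(student_weeks):
    """Decide as soon as one weekly model is confident; defer otherwise."""
    for w, model in enumerate(models, start=1):
        proba = model.predict_proba([student_weeks[:w]])[0]
        if proba.max() >= THRESHOLD:
            return w, ("dropout" if proba[1] >= proba[0] else "on track")
    return N_WEEKS, "undecided"

print(triage([0.1, 0.2, 0.1, 0.3]))   # low activity: likely flagged early
```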