In this paper, we study the community deception problem. Tackling this problem consists in developing techniques to hide a target community (C) from community detection algorithms. This need emerges whenever a group (e.g., activists, law enforcement, or network participants in general) wants to observe and cooperate in a social network while avoiding detection. We introduce and formalize the community deception problem and devise an efficient algorithm that achieves deception by identifying a certain number (b) of C's member connections to be rewired. Deception can be practically achieved in social networks like Facebook by friending or unfriending network members as indicated by our algorithm. We compare our approach with another technique based on modularity. By considering a variety of (large) real networks, we provide a systematic evaluation of the robustness of community detection algorithms to deception techniques. Finally, we raise some challenging research questions about the design of detection algorithms robust to deception techniques.
Linear temporal logic (LTL) is a modal logic whose formulas are built over temporal operators relating events happening at different time instants. According to the standard semantics, LTL formulas are interpreted on traces spanning an infinite timeline. However, applications related to the specification and verification of business processes have recently pointed out the need to define and reason about a variant of LTL, which we name LTLp, whose semantics is defined over process traces, that is, over finite traces such that, at each time instant, precisely one propositional variable (standing for the execution of some given activity) evaluates to true.
The paper investigates the theoretical underpinnings of LTLp and of a related logic formalism, named LTLf, which had already attracted attention in the literature: LTLf formulas have the same syntax as LTLp formulas and are evaluated over finite traces, but without any constraint on the number of variables simultaneously evaluating to true. The two formalisms are comparatively analyzed by pointing out similarities and differences. In addition, a thorough complexity analysis is conducted for reasoning problems over LTLp and LTLf, considering arbitrary formulas as well as classes of formulas defined by restricting the temporal operators that are allowed. Finally, based on the theoretical findings of the paper, a practical reasoner specifically tailored to LTLp and LTLf has been developed by leveraging state-of-the-art SAT solvers. The behavior of the reasoner has been experimentally compared with other systems available in the literature.
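The finite-trace semantics described above can be made concrete with a small evaluator. The following is a minimal sketch, not the paper's reasoner: the tuple-based formula encoding and operator names are illustrative assumptions. A trace is a list of sets of propositions; an LTLp process trace is the special case where each set is a singleton (one activity per instant).

```python
# Minimal LTLf evaluator over finite traces (illustrative sketch).
# Formulas are nested tuples, e.g. ("U", ("atom", "a"), ("atom", "b")).

def holds(formula, trace, i=0):
    """Check whether `formula` holds on finite trace `trace` at position i."""
    if i >= len(trace):
        return False  # nothing holds past the end of a finite trace
    op = formula[0]
    if op == "atom":
        return formula[1] in trace[i]
    if op == "not":
        return not holds(formula[1], trace, i)
    if op == "and":
        return holds(formula[1], trace, i) and holds(formula[2], trace, i)
    if op == "or":
        return holds(formula[1], trace, i) or holds(formula[2], trace, i)
    if op == "X":  # strong next: requires a successor instant to exist
        return i + 1 < len(trace) and holds(formula[1], trace, i + 1)
    if op == "F":  # eventually, within the remaining finite suffix
        return any(holds(formula[1], trace, j) for j in range(i, len(trace)))
    if op == "G":  # globally, over the remaining finite suffix
        return all(holds(formula[1], trace, j) for j in range(i, len(trace)))
    if op == "U":  # until: formula[2] eventually holds, formula[1] until then
        return any(holds(formula[2], trace, j)
                   and all(holds(formula[1], trace, k) for k in range(i, j))
                   for j in range(i, len(trace)))
    raise ValueError(f"unknown operator {op!r}")

# A process trace executing activities a, b, c in sequence:
trace = [{"a"}, {"b"}, {"c"}]
print(holds(("F", ("atom", "c")), trace))  # True
print(holds(("G", ("atom", "a")), trace))  # False
```

Note how the finite timeline changes the operators' meaning: `X` fails at the last instant, and `G` only quantifies over the trace's remaining positions.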
The mechanisms underlying the machinery of life are still not completely understood. Some things are known, some are “probably” known, and others are still unknown. Scientists all over the world are working hard to clarify the processes regulating the cell life cycle, and bioinformaticians try to support them by developing specialized automated tools. Within the plethora of applications devoted to the study of life mechanisms, tools for the analysis and comparison of biological networks are catching the attention of many researchers. It is interesting to investigate why.
Community Deception in Attributed Networks Fionda, Valeria; Pirro, Giuseppe
IEEE transactions on computational social systems,
02/2024, Volume: 11, Issue: 1
Journal Article
Peer reviewed
Community detection algorithms that analyze networks to identify communities of nodes are an essential part of the network analysis toolkit used daily by different analysts (e.g., data scientists and law enforcement). However, there is not enough awareness that members of a community C (whether revealed or not) inside a network G could act strategically to evade such tools, either for legitimate (e.g., activist groups in authoritarian regimes) or malicious (e.g., terrorists) purposes. Community deception offers this possibility. By identifying a certain number of C's member connections to be rewired, community deception algorithms can successfully hide a community that wants to stay below the radar of detection techniques. However, state-of-the-art deception approaches have focused on networks without attributes, although real-world networks (e.g., Facebook) include attributes (e.g., age and sex) that play a central role in detecting more accurate communities. This article faces three novel challenges introduced when designing deception techniques for networks with attributes. The first concerns how to model and encode attributes most flexibly. The second is about framing attribute-aware community deception as an optimization problem. The third is solving the optimization problem by leveraging both network topology and attributes. To address these challenges, we leverage a simple way to model network attributes as edge weights, a novel optimization function called community diffusion, and a greedy algorithm to optimize diffusion. We evaluated our approach against several community detection algorithms and compared it with state-of-the-art deception approaches on various real-world networks.
From the evaluation, we draw two main observations. First, adopting attribute-oblivious deception techniques leads to unsatisfactory results. Second, community diffusion, as an optimization function specific to attributed networks, is preferable to community safeness, the state-of-the-art deception optimization function, even when the latter is recast as an attribute-aware function.
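The modelling step the abstract mentions, turning node attributes into edge weights, can be sketched very simply. The similarity function and attribute values below are illustrative assumptions, not the paper's exact formulation:

```python
# Sketch: encode node attributes as edge weights via a simple matching similarity.

def attribute_similarity(attrs_u, attrs_v):
    """Fraction of shared attribute keys on which two nodes agree."""
    keys = set(attrs_u) & set(attrs_v)
    if not keys:
        return 0.0
    return sum(attrs_u[k] == attrs_v[k] for k in keys) / len(keys)

def weight_edges(edges, attributes):
    """Turn an attributed graph into a weighted one: each edge (u, v) gets a
    weight reflecting how similar its endpoints' attributes are."""
    return {(u, v): attribute_similarity(attributes[u], attributes[v])
            for u, v in edges}

attributes = {
    1: {"age": "18-25", "sex": "F"},
    2: {"age": "18-25", "sex": "M"},
    3: {"age": "50+",   "sex": "M"},
}
weights = weight_edges([(1, 2), (2, 3)], attributes)
print(weights)  # {(1, 2): 0.5, (2, 3): 0.5}
```

Once attributes are folded into weights like this, any weighted-graph machinery (detection or deception) can consume them without further changes.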
Mixed multi-unit combinatorial auctions (MMUCAs) are extensions of classical combinatorial auctions (CAs) where bidders trade transformations of goods rather than just sets of goods. Solving MMUCAs, i.e., determining the sequences of bids to be accepted by the auctioneer, is computationally intractable in general. However, unlike for classical combinatorial auctions, little was known about whether polynomial-time solvable classes of MMUCAs can be singled out on the basis of their characteristics. The paper fills this gap by studying the computational complexity of MMUCA instances under structural and qualitative restrictions, which characterize interactions among bidders and types of bids involved in the various transformations, respectively.
Community deception is the problem of hiding a community from community detection algorithms. This is an important task whenever a group (e.g., activists, law enforcement) wants to observe and cooperate in a social network while avoiding detection. We formalize the community deception problem and propose an efficient algorithm, based on the concept of community safeness, which achieves deception by carefully identifying and rewiring a certain number of the community members' connections. Deception can be practically achieved in social networks like Facebook or Twitter by friending (following) or unfriending (unfollowing) network members. We validated our approach and compared it with related research on a variety of (large) real networks, with encouraging results.
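The rewiring loop described above can be sketched as a greedy budget-constrained search. This is a minimal illustration, not the paper's algorithm: `hiding_score` is a toy stand-in for the community-safeness objective, which is not reproduced here.

```python
# Sketch: greedy community deception by spending a budget of edge rewirings,
# each step applying the single addition or deletion that most improves a
# (toy) hiding objective.
import itertools

def hiding_score(edges, community):
    """Toy objective (assumption): fewer intra-community edges and more edges
    leaving the community make the community harder to detect."""
    intra = sum(1 for u, v in edges if u in community and v in community)
    inter = sum(1 for u, v in edges if (u in community) != (v in community))
    return inter - intra

def greedy_deception(nodes, edges, community, budget):
    edges = set(edges)
    for _ in range(budget):
        candidates = []
        # candidate deletions: existing intra-community edges
        for e in edges:
            if e[0] in community and e[1] in community:
                candidates.append(("del", e))
        # candidate additions: missing community-to-outside edges
        for u, v in itertools.product(community, nodes - community):
            if (u, v) not in edges and (v, u) not in edges:
                candidates.append(("add", (u, v)))
        if not candidates:
            break
        def gain(move):
            kind, e = move
            trial = edges - {e} if kind == "del" else edges | {e}
            return hiding_score(trial, community)
        kind, e = max(candidates, key=gain)  # best single rewiring
        edges = edges - {e} if kind == "del" else edges | {e}
    return edges

nodes, community = {1, 2, 3, 4, 5}, {1, 2, 3}
edges = [(1, 2), (2, 3), (1, 3), (4, 5)]
result = greedy_deception(nodes, edges, community, budget=2)
print(hiding_score(set(edges), community), hiding_score(result, community))  # -3 -1
```

Each "del" corresponds to unfriending (unfollowing) a fellow member, and each "add" to friending (following) someone outside the community, matching the practical interpretation given in the abstract.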
Building knowledge maps of Web graphs Fionda, Valeria; Gutierrez, Claudio; Pirrò, Giuseppe
Artificial intelligence,
October 2016, Volume: 239
Journal Article
Peer reviewed
We study the problem of building knowledge maps of graph-like information. There exist well-consolidated cartographic principles and techniques for mapping physical landscapes. However, we live in the digital era and, like the Earth, the Web is simply too large and its interrelations too complex for anyone to grasp much of it through direct observation. Thus, the problem of applying cartographic principles also to digital landscapes is intriguing. We introduce a mathematical formalism that captures the general notion of a map of a graph and enables its development and manipulation in a semi-automated way. We present an implementation of our formalism on the Web of Linked Data graph and discuss algorithms that efficiently generate and combine (via an algebra) regions and maps. We present the MaGe tool, implementing the map framework, and discuss examples of knowledge maps.
Community deception is about hiding a target community that wants to remain below the radar of community detection algorithms. The goal is to devise algorithms that, given a maximum number of updates (e.g., edge additions and removals), strive to find the best way to perform such updates in order to hide the target community inside the community structure found by a detection algorithm. So far, community deception has only been studied for undirected networks, although many real-world networks (e.g., Twitter) are directed. One way to overcome this problem would be to treat the network as undirected. However, this approach discards potentially helpful information in the edge directions (e.g., A follows B does not imply that B follows A). The aim of this paper is threefold. First, to give an account of state-of-the-art community deception techniques in undirected networks, underlining their peculiarities. Second, to investigate the community deception problem in directed networks and to show how deception techniques proposed for undirected networks should be modified and adapted to work on directed networks. Third, to evaluate deception techniques on both undirected and directed networks. Our experimental evaluation on a variety of (large) directed networks shows that techniques that work well for undirected networks fall short when directly applied to directed networks, thus underlining the need for specific approaches.
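The information loss the abstract warns about is easy to see on a toy example (an illustration, not from the paper): a reciprocal follow and a one-way follow collapse to the same undirected edge.

```python
# Sketch: converting a directed edge set to undirected discards direction.
directed = {("A", "B"), ("B", "A"), ("C", "D")}  # A and B follow each other; C follows D

# Dropping orientation merges (A, B) and (B, A) into one edge.
undirected = {frozenset(e) for e in directed}

print(len(directed))    # 3 directed edges
print(len(undirected))  # 2 undirected edges: the one-way nature of C -> D is lost
```

After the conversion, an algorithm can no longer distinguish a mutual relationship from a one-way one, which is precisely why direction-aware deception techniques are needed.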
We demonstrate RECAP, a tool that explains relatedness between entities in Knowledge Graphs (KGs) and implements a query-by-relatedness paradigm that allows retrieving entities related to those in input. One of the peculiarities of RECAP is that it does not require any data preprocessing and can combine knowledge from multiple KGs. The underlying algorithmic techniques reduce to the execution of SPARQL queries plus some local refinement. This makes the tool readily available on a large variety of KGs accessible via SPARQL endpoints. To show the general applicability of the tool, we will cover a set of use cases drawn from a variety of knowledge domains (e.g., biology, movies, co-authorship networks) and report on the concrete usage of RECAP in the SENSE4US FP7 project. We will underline the technical aspects of the system and give details on its implementation. The target audience of the demo includes both researchers and practitioners, and we aim at reporting on the benefits of RECAP in practical knowledge discovery applications.