Vulnerable node detection in uncertain graphs is a typical graph mining problem that seeks to identify nodes at high risk of breakdown under the joint effect of both self and contagion risk probability. This NP-hard problem is crucial for risk management in many real-world applications such as networked loans and smart grids. Monte Carlo (MC) simulation and its optimized variants are commonly used to approximate the breakdown probability, but these methods require a large number of samples to ensure accuracy, which is computationally expensive for large-scale networks. Although recent studies employ Graph Neural Networks (GNNs) to model the contagion process and accelerate inference, many of these methods suffer from the over-smoothing problem, leading to suboptimal performance under long-distance risk contagion. To this end, we propose AdaRisk, a novel risk-adaptive deep reinforcement learning framework for vulnerable node detection in uncertain graphs. In particular, we formulate the vulnerability estimation process as a Markov Decision Process (MDP) in which the agent adaptively approaches the risk based on the contagion probability accumulated in prior iterations. To encode state embeddings that incorporate multi-hop contagion information, the agent uses a long-distance-adaptable policy network that processes the input graph and outputs actions as the vulnerability probability of each node. We conducted extensive experiments on four benchmark networks and three real-world financial networks to evaluate the proposed framework. The results demonstrate that AdaRisk outperforms state-of-the-art baselines in detection performance while offering significant running-time reductions compared to MC simulation.
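As a point of reference for the MC baseline this abstract mentions, the following sketch estimates per-node breakdown probabilities by sampling contagion cascades. The independent-cascade-style model (a per-node `self_risk` probability and a per-edge `edge_prob` contagion probability) and all names are illustrative assumptions, not the paper's exact formulation.

```python
import random

def mc_breakdown_probability(nodes, self_risk, edge_prob, n_samples=1000, seed=0):
    """Estimate each node's breakdown probability by Monte Carlo cascade
    simulation. Hypothetical model: node v breaks down on its own with
    probability self_risk[v], and breakdown spreads across edge (u, v)
    with probability edge_prob[(u, v)]."""
    rng = random.Random(seed)
    counts = {v: 0 for v in nodes}
    adj = {v: [] for v in nodes}
    for (u, v), p in edge_prob.items():       # undirected contagion graph
        adj[u].append((v, p))
        adj[v].append((u, p))
    for _ in range(n_samples):
        # seeds: nodes that break down by themselves in this sample
        broken = {v for v in nodes if rng.random() < self_risk[v]}
        frontier = list(broken)
        while frontier:                       # BFS-style contagion cascade
            u = frontier.pop()
            for v, p in adj[u]:
                if v not in broken and rng.random() < p:
                    broken.add(v)
                    frontier.append(v)
        for v in broken:
            counts[v] += 1
    return {v: counts[v] / n_samples for v in nodes}
```

The sample count governs the accuracy/cost trade-off the abstract refers to: halving the standard error requires roughly four times as many samples.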
Marine environmental issues have become increasingly prominent in recent years, and ocean transportation has received extensive attention. Practitioners are concerned with minimum fuel consumption and shortest sailing time, which are two conflicting goals. This paper formulates ship path planning as a multi-objective optimization model that considers both fuel consumption and navigation time. Waves, wind speed, and wind direction in the marine meteorological environment significantly affect a ship's actual navigation, making it challenging to design a reasonable path; ship path planning in the marine environment is an NP-hard problem. The paper applies an improved whale optimization algorithm to find Pareto solutions of the multi-objective model under marine meteorological conditions, and discusses the performance of the enhanced algorithm by comparing it against other advanced algorithms on five benchmark multi-objective optimization functions. The test results show that the scheme can balance a ship's fuel consumption and sailing time to provide reasonable candidate paths for decision-makers.
•This paper formulates the ship path problem as a multi-objective ship path optimization model considering fuel consumption and navigation time.
•The improved whale optimization algorithm solves the multi-objective optimization problem.
•Non-dominated relations determine the solutions to the multi-objective problem.
•Ship path planning considers the influence of ship parameters and marine meteorological data.
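The non-dominance relation mentioned in the highlights above can be made concrete with a short sketch: a candidate path dominates another if it is no worse in both objectives and strictly better in at least one. The two-objective (fuel, time) tuple representation and the name `pareto_front` are illustrative assumptions.

```python
def pareto_front(candidates):
    """Return the non-dominated (fuel, time) pairs among the candidates.
    Lower is better in both objectives."""
    def dominates(a, b):
        # a is no worse than b in both objectives, strictly better in one
        return a[0] <= b[0] and a[1] <= b[1] and (a[0] < b[0] or a[1] < b[1])
    return [c for c in candidates if not any(dominates(o, c) for o in candidates)]
```

The surviving set is what a decision-maker would choose from: no member can be improved in one objective without worsening the other.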
Data association is the backbone of many multiple object tracking (MOT) methods. In this paper we formulate data association as a Generalized Maximum Multi Clique problem (GMMCP). We show that this is the ideal way to model tracking in real-world scenarios, where all pairwise relationships between targets in a batch of frames are taken into account. Previous works assume simplified versions of our tracker, either in the problem formulation or in the optimization; in contrast, we propose a GMMCP-based solution with no simplification in either step. We show that the NP-hard GMMCP problem can be formulated as a Binary Integer Program for which small and medium-sized MOT instances can be solved efficiently. We further propose a speed-up method, employing Aggregated Dummy Nodes to model occlusion and miss-detection, which reduces the size of the input graph without using any heuristics. Using this speed-up, our tracker lends itself to the real-time implementation required by many applications. We evaluated our tracker on six challenging sequences (Town Center, TUD-Crossing, TUD-Stadtmitte, Parking-lot 1, Parking-lot 2, and Parking-lot pizza) and show favorable improvements over the state of the art.
The Hafnian Master Theorem. Kocharovsky, Vitaly V.; Kocharovsky, Vladimir V.; Tarasov, Sergey V. Linear Algebra and its Applications, Volume 651, 10/2022. Journal Article.
This paper considers unimodular sequence synthesis under a similarity constraint for both the continuous- and discrete-phase cases. A computationally efficient iterative algorithm for the continuous phase case (IA-CPC) is proposed to sequentially optimize the quadratic objective function: the quadratic optimization problem is decomposed into multiple one-dimensional optimization problems with closed-form solutions. For the discrete phase case, we present an iterative block optimization algorithm. Specifically, we partition the design variables into K blocks and then sequentially optimize each block via exhaustive search while fixing the remaining K-1 blocks. Finally, we evaluate the computational costs and performance gains of the proposed algorithms in comparison with power-method-like and semidefinite-relaxation-based techniques.
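The closed-form one-dimensional updates can be illustrated on a hypothetical instance of this problem class: minimizing x^H R x over unit-modulus entries with R Hermitian. Fixing all entries but the k-th, the k-th term contributes 2·Re(conj(x_k)·c) with c = Σ_{m≠k} R[k][m]·x[m], so the minimizer is x_k = -c/|c|. This is a generic coordinate-descent sketch under those assumptions, not the paper's IA-CPC itself (which handles the similarity constraint as well).

```python
def coordinate_descent_unimodular(R, x, sweeps=20):
    """Sequentially update each unit-modulus entry of x to its closed-form
    one-dimensional minimizer of x^H R x (R Hermitian, given as a list of
    lists; x as a list of complex numbers with |x[k]| = 1)."""
    n = len(x)
    for _ in range(sweeps):
        for k in range(n):
            c = sum(R[k][m] * x[m] for m in range(n) if m != k)
            if abs(c) > 1e-12:        # if c == 0, any phase is optimal
                x[k] = -c / abs(c)
    return x

def quad_form(R, x):
    """Evaluate x^H R x (real-valued for Hermitian R)."""
    n = len(x)
    return sum((x[i].conjugate() * R[i][j] * x[j]).real
               for i in range(n) for j in range(n))
```

Each update can only decrease the objective, so the sweep sequence is monotonically non-increasing, mirroring the sequential optimization described in the abstract.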
In a graph, a matching cut is an edge cut that is a matching. Matching Cut, which is known to be NP-complete, is the problem of deciding whether or not a given graph G has a matching cut. In this paper we show that Matching Cut admits a quadratic-vertex kernel for the parameter distance to cluster and a linear-vertex kernel for the parameter distance to clique. We further provide an O∗(2^dc(G))-time and an O∗(2^dc̄(G))-time FPT algorithm for Matching Cut, where dc(G) and dc̄(G) are the distance to cluster and the distance to co-cluster, respectively. We also improve the running time of the best known branching algorithm for Matching Cut from O∗(1.4143^n) to O∗(1.3071^n), where n is the number of vertices in G. Moreover, we point out that, unless NP ⊆ coNP/poly, Matching Cut does not admit a polynomial kernel when parameterized simultaneously by treewidth, the number of edges crossing the cut, and the maximum degree of G.
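While finding a matching cut is NP-complete, verifying a candidate one is easy, which makes the definition concrete. The sketch below checks whether a given bipartition induces a matching cut; the representation (edge list plus a `side` dict mapping each vertex to 0 or 1, both parts assumed nonempty) is an illustrative assumption.

```python
def is_matching_cut(edges, side):
    """The cut of the bipartition given by side is a matching iff no vertex
    is incident to two crossing edges. (An empty crossing set also counts:
    a disconnected graph trivially has a matching cut.)"""
    deg = {}
    for u, v in edges:
        if side[u] != side[v]:            # edge crosses the cut
            deg[u] = deg.get(u, 0) + 1
            deg[v] = deg.get(v, 0) + 1
    return all(d <= 1 for d in deg.values())
```

For example, splitting a 4-cycle into two adjacent pairs yields two vertex-disjoint crossing edges (a matching cut), whereas any bipartition of a triangle forces two crossing edges to share a vertex.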
Attribute reduction is one of the most meaningful research topics in fuzzy rough set theory, and the discernibility matrix approach is the mathematical foundation of computing reducts. When computing reducts with a discernibility matrix, we find that only the minimal elements of the matrix are necessary and sufficient. This fact motivates a novel algorithm, developed in this paper, that finds reducts based on the minimal elements of the discernibility matrix. Relative discernibility relations of conditional attributes are defined, and the minimal elements of the fuzzy discernibility matrix are characterized by these relations. The algorithms to compute minimal elements and reducts are then developed in the framework of fuzzy rough sets. Experimental comparison shows that the proposed algorithms are effective.
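The observation that only inclusion-minimal matrix entries matter can be sketched directly. Representing each discernibility-matrix entry as a plain set of attribute names is an illustrative simplification of the fuzzy setting described in the abstract.

```python
def minimal_elements(entries):
    """Keep only the inclusion-minimal attribute sets from a discernibility
    matrix. Any non-minimal entry is a superset of some minimal one, so it
    is automatically satisfied by any reduct covering the minimal entries."""
    sets = {frozenset(e) for e in entries if e}   # drop empty entries and duplicates
    return {s for s in sets if not any(t < s for t in sets)}
```

Pruning to minimal elements can shrink the matrix substantially before any reduct search begins, which is the efficiency the abstract exploits.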
GSI: GPU-friendly Subgraph Isomorphism. Zeng, Li; Zou, Lei; Ozsu, M. Tamer; ... 2020 IEEE 36th International Conference on Data Engineering (ICDE), April 2020. Conference Proceeding (open access).
Subgraph isomorphism is a well-known NP-hard problem that is widely used in many applications, such as social network analysis and querying over knowledge graphs. Due to its inherent hardness, its performance is often a bottleneck in various real-world applications. We address this by designing an efficient subgraph isomorphism algorithm that leverages features of the GPU architecture, such as massive parallelism and the memory hierarchy. Existing GPU-based solutions adopt a two-step output scheme, performing the same join twice in order to write intermediate results concurrently; they also lack GPU architecture-aware optimizations that would allow scaling to large graphs. In this paper, we propose a GPU-friendly subgraph isomorphism algorithm, GSI. Different from existing edge-join-based GPU solutions, we propose a Prealloc-Combine strategy based on a vertex-oriented framework, which avoids the joining-twice of existing solutions. We also propose a GPU-friendly data structure (called PCSR) to represent edge-labeled graphs. Extensive experiments on both synthetic and real graphs show that GSI outperforms state-of-the-art algorithms by up to several orders of magnitude and scales well to graphs with hundreds of millions of edges.
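For orientation, the problem itself can be stated as code: a naive CPU backtracking enumerator, the kind of sequential baseline that GPU approaches accelerate, not GSI itself. The adjacency-dict representation (vertex → set of neighbours, unlabeled edges) is an illustrative simplification of the edge-labeled graphs GSI targets.

```python
def subgraph_isomorphisms(q_adj, g_adj):
    """Enumerate all (non-induced) subgraph isomorphisms that embed query
    graph q into data graph g, by extending a partial mapping one query
    vertex at a time with backtracking."""
    q_nodes = list(q_adj)

    def extend(mapping, used):
        if len(mapping) == len(q_nodes):
            yield dict(mapping)
            return
        u = q_nodes[len(mapping)]             # next query vertex to map
        for v in g_adj:
            if v in used:
                continue                      # keep the mapping injective
            # every already-mapped query neighbour of u must map to a g-neighbour of v
            if all(mapping[w] in g_adj[v] for w in q_adj[u] if w in mapping):
                mapping[u] = v
                used.add(v)
                yield from extend(mapping, used)
                del mapping[u]
                used.discard(v)

    yield from extend({}, set())
```

The candidate loop over all of `g_adj` at every level is exactly the kind of work GPU methods parallelize and prune with architecture-aware data structures.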
Clustering sensor nodes is an effective topology control method to reduce the energy consumption of sensor nodes and maximize the lifetime of Wireless Sensor Networks (WSNs). However, in a cluster-based WSN, the leaders (cluster heads) bear extra load for various activities such as data collection, data aggregation, and communication of the aggregated data to the base station. Balancing the load of the cluster heads is therefore a challenging issue for the long-term operation of WSNs. Load-balanced clustering is known to be NP-hard for a WSN with unequal sensor-node loads. The Genetic Algorithm (GA) is one of the most popular evolutionary approaches for finding fast and efficient solutions to such problems. In this paper, we propose a novel GA-based load-balanced clustering algorithm for WSNs. The proposed algorithm is shown to perform well for both equal and unequal loads of the sensor nodes. We perform extensive simulations of the proposed method and compare the results with evolutionary approaches and other related clustering algorithms. The results demonstrate that the proposed algorithm performs better than all such algorithms in terms of various performance metrics, including load balancing, execution time, energy consumption, number of active sensor nodes, number of active cluster heads, and rate of convergence.
•The proposed work is a novel GA-based load-balanced clustering scheme for WSNs.
•The algorithm works for both equal and unequal loads of the sensor nodes.
•Initial population generation and mutation are designed strategically to make the algorithm faster.
•Experimental results demonstrate superiority over existing algorithms.
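The load-balancing objective such a GA might optimize can be sketched as a simple fitness function. The standard-deviation-of-head-loads metric and all names below are illustrative assumptions, not the paper's exact fitness or chromosome encoding.

```python
def load_imbalance(assignment, load, heads):
    """Fitness sketch for load-balanced clustering: assignment maps each
    sensor to its cluster head; the score is the standard deviation of the
    total load per head (lower is more balanced, 0 is perfect balance)."""
    totals = {h: 0.0 for h in heads}
    for sensor, head in assignment.items():
        totals[head] += load[sensor]
    vals = list(totals.values())
    mean = sum(vals) / len(vals)
    return (sum((x - mean) ** 2 for x in vals) / len(vals)) ** 0.5
```

In a GA, a chromosome would encode one such assignment, and selection would favor chromosomes with lower imbalance scores.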
Recently, the problem of scheduling mobile sensors to cover all targets and maintain network connectivity such that the total movement distance is minimized, termed the mobile sensor deployment (MSD) problem, has received a great deal of attention. However, the complexity of the MSD problem has remained unknown because no exact proof had been provided before. In this paper, we show that not only the MSD problem but also its special case, termed the target coverage problem, is NP-hard.