Influence maximization has recently received significant attention for scheduling online campaigns and advertisements on social network platforms. However, most studies focus only on user influence via cyber interactions while ignoring physical interactions, which are also essential to gauge influence propagation. Additionally, targeted campaigns and advertisements have not received sufficient attention. To address these issues, we first devise a novel holistic influence diffusion model that accounts for both cyber and physical user interactions in an effective and practical way. Based on the new diffusion model, we formulate a new problem of holistic influence maximization, denoted as the HIM query, for targeted advertisements in a spatial social network. The HIM query problem aims to find a minimum set of users whose holistic influence covers all target users in the network; it is a set covering problem. Since the HIM query problem is NP-hard, we develop a greedy baseline algorithm and then improve on it to reduce the computational cost. To handle large networks, we also design a spatial-social index that maintains the social, spatial, and textual information of users, and develop an index-based efficient solution. Finally, we conduct extensive experiments on one synthetic and three real-world datasets to validate the efficiency and effectiveness of the proposed holistic influence diffusion model and the developed algorithms.
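The greedy baseline for a set-cover formulation like the HIM query can be sketched as follows. This is a minimal illustration of the classic greedy set-cover heuristic, assuming each user's holistic influence has already been materialised as a set of covered targets; the function name and data layout are illustrative, not the paper's actual interface:

```python
def greedy_seed_selection(influence_sets, targets):
    """Pick a small set of users whose combined influence covers all targets.

    influence_sets: dict mapping user -> set of targets that user influences
    targets: set of target users that must be covered
    Returns (chosen seeds, targets left uncovered).
    """
    uncovered = set(targets)
    seeds = []
    while uncovered:
        # greedily choose the user covering the most still-uncovered targets
        best = max(influence_sets, key=lambda u: len(influence_sets[u] & uncovered))
        gain = influence_sets[best] & uncovered
        if not gain:  # remaining targets are unreachable by any user
            break
        seeds.append(best)
        uncovered -= gain
    return seeds, uncovered
```

The greedy rule gives the standard ln(n)-factor approximation guarantee for set cover; the paper's refinements then target the cost of evaluating each user's influence set.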
Graph partitioning, a classical NP-hard combinatorial optimization problem, is widely applied to industrial and management problems. In this study, an approximate solution of the graph partitioning problem is obtained using a deterministic annealing neural network algorithm. The algorithm is a continuation method that attempts to obtain a high-quality solution by following a path of minimum points of a barrier problem as the barrier parameter is reduced from a sufficiently large positive number to 0. For any positive value of the barrier parameter, a minimum solution of the barrier problem can be found by the algorithm along a feasible descent direction, which is obtained by updating the Lagrange multipliers in a globally convergent iterative procedure. A distinctive feature of the algorithm is that the upper and lower bounds on the variables are automatically satisfied provided the step length lies between 0 and 1. Four well-known algorithms are compared with the proposed one on 100 test samples, and simulation results show the effectiveness of the proposed algorithm.
Radar signal design in spectrally dense environments is a challenging and topical problem. This letter deals with the synthesis of waveforms that optimize radar performance while satisfying multiple spectral compatibility constraints. Unlike some counterparts available in the open literature, a specific control on the interference energy radiated on each shared bandwidth is enforced. To tackle the resulting NP-hard optimization problem, a polynomial-complexity procedure based on semidefinite relaxation (SDR) and randomization is developed. Finally, numerical results highlight the effectiveness of the new technique in devising high-performance radar waveforms that comply with the spectral compatibility requirements.
Practical Annealing-Based Quantum Computing. McGeoch, Catherine C.; Harris, Richard; Reinhardt, Steven P. Computer (Long Beach, Calif.), 06/2019, Volume 52, Issue 6. Journal Article, peer reviewed.
An overview of quantum computers based on the annealing paradigm and manufactured by D-Wave is given. An introductory survey of this approach to quantum computing (QC) is presented, together with a snapshot of what is known about performance and evidence-based predictions about developments in this region of the QC space.
The NP-hard graphical traveling salesman problem (GTSP) is to find a closed walk of minimum total weight that visits each vertex in an undirected, edge-weighted, and not necessarily complete graph. We present a problem kernel with τ² + τ vertices for GTSP, where τ is the vertex cover number of the input graph. Any α-approximate solution for the problem kernel also gives an α-approximate solution for the original instance, for any α ≥ 1.
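The kernel size is stated in terms of the vertex cover number τ. As a worked illustration of what that bound means, the sketch below computes τ by brute force (feasible only for tiny toy graphs, and in no way the paper's method) and evaluates the τ² + τ kernel-size guarantee:

```python
from itertools import combinations

def vertex_cover_number(n, edges):
    """Smallest tau such that some tau vertices touch every edge.

    Brute force over all vertex subsets; only for tiny illustrative graphs.
    """
    for tau in range(n + 1):
        for cover in combinations(range(n), tau):
            chosen = set(cover)
            if all(u in chosen or v in chosen for u, v in edges):
                return tau
    return n

def gtsp_kernel_bound(n, edges):
    """Upper bound on the kernel's vertex count: tau^2 + tau."""
    tau = vertex_cover_number(n, edges)
    return tau * tau + tau
```

For a triangle, τ = 2 and the kernel has at most 6 vertices; for a star with three leaves, τ = 1 and the kernel has at most 2 vertices, regardless of how many degree-one vertices the original instance had.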
We consider a variant of vertex cover on temporal graphs that has recently been defined for the summarization of timeline activities in temporal graphs. The problem has been proved NP-hard, even under several restrictions of the time domain and vertex degree. We present novel algorithmic contributions for the problem, giving an approximation algorithm of factor O(T log n) on a temporal graph with T timestamps and n vertices. We then focus on the NP-hard restriction of the problem where at most one temporal edge is defined in each timestamp. For this restriction we present a 4(T − 1)-approximation algorithm and a parameterized algorithm (a kernelization) parameterized by the cost of the solution, called the span.
Clustering is a fundamental topic in machine learning, and various methods have been proposed, among which K-Means (KM) and min cut clustering are typical. However, they may produce empty or skewed clusters, which is undesirable. Constrained clustering has been studied thoroughly for KM, but for min cut clustering it remains underdeveloped. In this paper, we propose a parameter-insensitive min cut clustering method with flexible size constraints. Specifically, we add lower limits on the number of samples in each cluster, which avoids the trivial solution in min cut clustering. To the best of our knowledge, this is the first attempt to directly incorporate size constraints into min cut. However, the resulting problem is NP-hard and difficult to solve; adding upper limits as well still leaves it difficult. Therefore, an additional variable equivalent to the label matrix is introduced, and the augmented Lagrangian multiplier (ALM) method is used to decouple the constraints. In the experiments, we find that our algorithm is insensitive to the lower bound and is practical for image segmentation. A large number of experiments demonstrate the effectiveness of the proposed algorithm.
Given an undirected graph with edge weights and a subset R of its edges, the Rural Postman Problem (RPP) is to find a closed walk of minimum total weight containing all edges of R. We prove that RPP is WK[1]-complete parameterized by the number and weight d of edges traversed in addition to the required ones. Thus, RPP instances cannot be polynomial-time compressed to instances of size polynomial in d unless the polynomial-time hierarchy collapses. In contrast, denoting by b ≤ 2d the number of vertices incident to an odd number of edges of R and by c ≤ d the number of connected components formed by the edges in R, we show how to reduce any RPP instance I to an RPP instance I′ with 2b + O(c/ϵ) vertices in O(n³) time so that any α-approximate solution for I′ gives an α(1 + ϵ)-approximate solution for I, for any α ≥ 1 and ϵ > 0. That is, we provide a polynomial-size approximate kernelization scheme (PSAKS). We evaluate it experimentally on widespread benchmark data sets as well as on two real snow plowing instances from Berlin. We also make first steps toward a PSAKS for the parameter c.
This article introduces a new deep learning approach to approximately solve the covering salesman problem (CSP). In this approach, given the city locations of a CSP as input, a deep neural network model is designed to directly output the solution. It is trained with deep reinforcement learning, without supervision. Specifically, the model applies multihead attention (MHA) to capture the structural patterns of the problem, and a dynamic embedding is designed to handle its dynamic patterns. Once trained, the model generalizes to various types of CSP tasks (different sizes and topologies) without retraining. In controlled experiments, the proposed approach shows desirable time complexity: it runs more than 20 times faster than traditional heuristic solvers with only a tiny optimality gap. Moreover, it significantly outperforms current state-of-the-art deep learning approaches for combinatorial optimization in both training and inference. In comparison with traditional solvers, this approach is highly desirable for the many challenging practical tasks that are large scale and require quick decisions.
Vulnerable node detection in uncertain graphs is a typical graph mining problem that seeks to identify nodes at high risk of breakdown under the joint effect of their self-risk and contagion risk probabilities. This NP-hard problem is crucial for risk management in many real-world applications, such as networked loans and smart grids. Monte Carlo (MC) simulation and its optimized variants are commonly used to approximate the breakdown probability, but these methods require a large number of samples to ensure accuracy, which is computationally expensive for large-scale networks. Although recent studies employ Graph Neural Networks (GNNs) to model the contagion process and accelerate inference, many of these methods suffer from the over-smoothing problem, leading to suboptimal performance under long-distance risk contagion. To this end, we propose AdaRisk, a novel risk-adaptive deep reinforcement learning framework for vulnerable node detection in uncertain graphs. In particular, we design a Markov Decision Process (MDP) for the vulnerability estimation process in which the agent adaptively approaches the risk based on the contagion probability accumulated in prior iterations. To encode state embeddings that incorporate multi-hop contagion information, the agent uses a long-distance-adaptable policy network that processes the input graph and outputs, as actions, the vulnerability probabilities of nodes. We conducted extensive experiments on four benchmark networks and three real-world financial networks to evaluate the proposed framework. The results demonstrate that AdaRisk outperforms state-of-the-art baselines in detection performance while significantly reducing running time compared to MC simulation.
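The MC baseline that this abstract contrasts against can be sketched as below. The graph encoding (edges carrying existence probabilities, nodes carrying self-failure probabilities) and the fixpoint propagation rule are illustrative assumptions about a generic uncertain-graph contagion model, not AdaRisk itself:

```python
import random

def mc_breakdown_probability(nodes, edges, self_risk, n_samples=20000, seed=0):
    """Estimate each node's breakdown probability by Monte Carlo simulation.

    nodes: iterable of node ids
    edges: list of (u, v, p) tuples, p = probability the edge exists
    self_risk: dict node -> probability the node fails on its own
    In each sample, node failures and edge existence are drawn independently,
    then breakdown spreads along realised edges until a fixpoint.
    """
    rng = random.Random(seed)
    counts = {v: 0 for v in nodes}
    for _ in range(n_samples):
        broken = {v for v in nodes if rng.random() < self_risk[v]}
        live = [(u, v) for (u, v, p) in edges if rng.random() < p]
        changed = True
        while changed:  # propagate contagion to a fixpoint
            changed = False
            for u, v in live:
                if u in broken and v not in broken:
                    broken.add(v)
                    changed = True
                if v in broken and u not in broken:
                    broken.add(u)
                    changed = True
        for v in broken:
            counts[v] += 1
    return {v: counts[v] / n_samples for v in nodes}
```

The sample count needed for a given accuracy grows quickly with network size and with how small the target probabilities are, which is exactly the cost that learned estimators such as AdaRisk aim to avoid.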