Graph matching involves combinatorial optimization based on an edge-to-edge affinity matrix, which can be generally formulated as Lawler's quadratic assignment problem (QAP). This paper presents a QAP network that learns directly with the affinity matrix (equivalently, the association graph), whereby the matching problem is translated into a constrained vertex classification task. The association graph is learned by an embedding network for vertex classification, followed by Sinkhorn normalization and a cross-entropy loss for end-to-end learning. We further improve the embedding model on the association graph by introducing a Sinkhorn-based matching-aware constraint, as well as dummy nodes to handle graphs of unequal size. To our best knowledge, this is one of the first networks to learn directly with the general Lawler's QAP. In contrast, recent deep matching methods focus on learning node/edge features of the two graphs separately. We also show how to extend our network to hypergraph matching and to matching of multiple graphs. Experimental results on both synthetic graphs and real-world images show its effectiveness. For pure QAP tasks on synthetic data and the QAPLIB benchmark, our method performs competitively and can even surpass state-of-the-art graph matching and QAP solvers at notably lower time cost. We provide a project homepage at http://thinklab.sjtu.edu.cn/project/NGM/index.html .
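The Sinkhorn normalization step mentioned above can be illustrated with a short, self-contained sketch. This is not the authors' implementation; the function name, iteration count, and random data are illustrative assumptions.

import numpy as np

def sinkhorn(scores, n_iters=20, eps=1e-8):
    """Alternately normalize rows and columns of a non-negative score
    matrix so it approaches a doubly-stochastic (soft assignment) matrix."""
    S = np.exp(scores)  # ensure strictly positive entries
    for _ in range(n_iters):
        S = S / (S.sum(axis=1, keepdims=True) + eps)  # row normalization
        S = S / (S.sum(axis=0, keepdims=True) + eps)  # column normalization
    return S

# Example: 3x3 matching scores between the vertices of two small graphs
S = sinkhorn(np.random.rand(3, 3))
print(S.sum(axis=0), S.sum(axis=1))  # both close to all-ones vectors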
The Quadratic Assignment Problem (QAP) is one of the most studied classical combinatorial optimization problems and has many practical applications. Designing enhanced metaheuristic approaches for the QAP is an active research area. In this work, we propose a hybrid algorithm (EGATS) that combines an elite genetic algorithm and tabu search to solve the QAP. In the optimization process, EGATS employs two kinds of elite crossovers, repeated 2-exchange mutation, and tabu search to strike a balance between exploitation and exploration. We evaluated the performance of EGATS through computational experiments on 135 well-known benchmark instances from the quadratic assignment problem library, QAPLIB. EGATS obtained the best-known solution for 131 instances. Compared to other popular metaheuristic algorithms in the literature, EGATS is a competitive method for the QAP.
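To make the underlying problem concrete, the following minimal sketch evaluates the Koopmans-Beckmann QAP cost of a permutation and applies a 2-exchange (swap) move of the kind used as a mutation above. The function names and random data are illustrative assumptions, not the EGATS code.

import numpy as np

def qap_cost(perm, F, D):
    """Koopmans-Beckmann cost: sum over i, j of F[i, j] * D[perm[i], perm[j]]."""
    return float((F * D[np.ix_(perm, perm)]).sum())

def two_exchange(perm, i, j):
    """Return a neighbor obtained by swapping the assignments of facilities i and j."""
    p = perm.copy()
    p[i], p[j] = p[j], p[i]
    return p

n = 6
F = np.random.rand(n, n)          # flow matrix
D = np.random.rand(n, n)          # distance matrix
perm = np.random.permutation(n)   # candidate assignment
print(qap_cost(perm, F, D), qap_cost(two_exchange(perm, 0, 1), F, D))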
Although metaheuristics are used for solving complex, real-world problems, they do not provide an exact solution, only an approximate solution in feasible time. However, for very large instances, even a metaheuristic may take a considerable amount of time. Therefore, we utilize a highly parallel metaheuristic running on a modern massively parallel graphics processing unit (GPU) to further reduce the execution time. Additionally, because metaheuristics are not problem-specific, it is interesting to determine which metaheuristic is best for each problem type. In this study, using a massively parallel machine such as a GPU, we evaluate the performance of different metaheuristics, namely iterated local search, simulated annealing, the genetic algorithm, tabu search, particle swarm optimization, and the crow search algorithm, with respect to the quadratic assignment problem.
•Determined the metaheuristic that is most suitable for solving the QAP.
•Compared the sequential and parallel execution of metaheuristic algorithms.
•Analyzed the performance of metaheuristics by finding the most suitable parallel section.
•Among the six studied metaheuristics, PSO shows the highest speedup on GPU.
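As a rough illustration of the kind of data-parallel step that maps well to a GPU, the sketch below evaluates the QAP cost of an entire population of candidate permutations in one vectorized pass. It uses NumPy for portability (a drop-in array library such as CuPy would run the same code on a GPU); all names and sizes are illustrative assumptions and not taken from the study above.

import numpy as np

def batch_qap_cost(perms, F, D):
    """Evaluate the QAP cost of every permutation in a (pop, n) array at once."""
    # D_p[k, i, j] = D[perms[k, i], perms[k, j]] for each candidate k
    D_p = D[perms[:, :, None], perms[:, None, :]]
    return (F[None, :, :] * D_p).sum(axis=(1, 2))

n, pop = 8, 1024
F, D = np.random.rand(n, n), np.random.rand(n, n)
perms = np.stack([np.random.permutation(n) for _ in range(pop)])
costs = batch_qap_cost(perms, F, D)   # one cost per candidate, computed in a single pass
print(costs.shape, costs.min())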
Given two graphs, the graph matching problem is to align the two vertex sets so as to minimize the number of adjacency disagreements between the two graphs. The seeded graph matching problem is the graph matching problem in which we are first given a partial alignment that we are tasked with completing. In this article, we modify the state-of-the-art approximate graph matching algorithm “FAQ” of Vogelstein et al. (2015) to make it a fast approximate seeded graph matching algorithm, adapt its applicability to include graphs with differently sized vertex sets, and extend the algorithm so as to provide, for each individual vertex, a nomination list of likely matches. We demonstrate the effectiveness of our algorithm via simulation and real data experiments; indeed, knowledge of even a few seeds can be extremely effective when our seeded graph matching algorithm is used to recover a naturally existing alignment that is only partially observed.
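For reference, one standard way to write the graph matching objective described above is the following; the notation is an assumption chosen for illustration rather than taken from the cited paper.

\[
\min_{P \in \Pi_n} \ \lVert A_1 - P A_2 P^{\top} \rVert_F^2 ,
\]

where $A_1, A_2$ are the adjacency matrices, $\Pi_n$ is the set of $n \times n$ permutation matrices, and the squared Frobenius norm counts adjacency disagreements. In the seeded variant, the rows of $P$ corresponding to the given seed pairs are fixed in advance, and the minimization runs only over the unseeded vertices.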
Factorized Graph Matching. Feng Zhou and Fernando De la Torre. IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 38, no. 9, September 2016. Journal article.
Graph matching (GM) is a fundamental problem in computer science, and it plays a central role in solving correspondence problems in computer vision. GM problems that incorporate pairwise constraints can be formulated as a quadratic assignment problem (QAP). Although widely used, solving the correspondence problem through GM has two main limitations: (1) the QAP is NP-hard and difficult to approximate; (2) GM algorithms do not incorporate geometric constraints between nodes that are natural in computer vision problems. To address the aforementioned problems, this paper proposes factorized graph matching (FGM). FGM factorizes the large pairwise affinity matrix into smaller matrices that encode the local structure of each graph and the pairwise affinity between edges. Four benefits follow from this factorization: (1) there is no need to compute the costly (in space and time) pairwise affinity matrix; (2) the factorization allows the use of a path-following optimization algorithm that leads to improved optimization strategies and matching performance; (3) given the factorization, it becomes straightforward to incorporate geometric transformations (rigid and non-rigid) into the GM problem; (4) using a matrix formulation for the GM problem and the factorization, it is easy to reveal commonalities and differences between different GM methods. The factorization also provides a clean connection with other matching algorithms such as iterative closest point. Experimental results on synthetic and real databases illustrate how FGM outperforms state-of-the-art algorithms for GM. The code is available at http://humansensing.cs.cmu.edu/fgm.
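To make the factorization concrete, a common way to write Lawler's QAP and a factorized affinity of the kind described above is sketched below. The symbols (node affinities $K_p$, edge affinities $K_q$, node-edge incidence matrices $G_i$, $H_i$) are notational assumptions for illustration, not necessarily the paper's exact notation.

\[
\max_{X \in \Pi} \ \operatorname{vec}(X)^{\top} K \operatorname{vec}(X),
\qquad
K = \operatorname{diag}\!\big(\operatorname{vec}(K_p)\big)
  + (G_2 \otimes G_1)\,\operatorname{diag}\!\big(\operatorname{vec}(K_q)\big)\,(H_2 \otimes H_1)^{\top},
\]

where $K_p$ holds node-to-node affinities, $K_q$ holds edge-to-edge affinities, and $G_i, H_i$ encode the node-edge incidence structure of graph $i$, so the full $n_1 n_2 \times n_1 n_2$ affinity matrix $K$ never needs to be formed explicitly.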
Operations Research (OR) analytics play a crucial role in optimizing decision-making processes for real-world problems. The assignment problem, with applications in supply chains, healthcare logistics, and production scheduling, represents a prominent optimization challenge. This paper focuses on addressing the Generalized Quadratic Assignment Problem (GQAP), a well-known NP-hard combinatorial optimization problem. To tackle the GQAP, we propose an OR analytical approach that incorporates efficient relaxations, reformulations, heuristics, and a metaheuristic algorithm. Initially, we employ the Reformulation Linearization Technique (RLT) to generate various linear relaxation models, carefully selecting the most efficient ones. Building upon this foundation, we introduce a robust reformulation based on Benders Decomposition (BD), which serves as the basis for an iterative optimization algorithm applied to the GQAP. Furthermore, we develop a constructive heuristic algorithm to identify near-optimal solutions, followed by an enhancement using an Adaptive Large Neighborhood Search (ALNS) metaheuristic algorithm. This ALNS algorithm is strengthened through the integration of a tabu list derived from Tabu Search (TS) and a decision rule inspired by Simulated Annealing (SA). To validate our approach and evaluate its performance, we conduct a comparative analysis against state-of-the-art algorithms documented in the literature. This comparison confirms significant improvements in solution quality and computational efficiency over existing approaches. These advancements contribute to the state of the art in solving the GQAP and hold the potential to enhance decision-making processes across a wide range of domains.
•Analyzing different reformulation linearization inequalities for the GQAP.
•Introducing a novel Benders decomposition reformulation method for the GQAP.
•Developing a new constructive heuristic algorithm for the initial solution of the GQAP.
•Proposing different removal–insertion heuristics and a local search using a tabu list.
•Introducing an efficient adaptive large neighborhood search algorithm for solving the GQAP.
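The following minimal sketch shows the general shape of an ALNS loop with removal–insertion (destroy/repair) moves, a tabu list, and a simulated-annealing acceptance rule, as described above. The operators, data structures, and parameters are simplified illustrative assumptions and not the paper's algorithm.

import math
import random

def alns(initial, cost, destroy_ops, repair_ops,
         iters=1000, temp=1.0, cooling=0.999, tabu_size=50):
    """Generic ALNS skeleton: destroy/repair moves, tabu list, SA acceptance."""
    current = best = initial
    tabu = []
    for _ in range(iters):
        destroy = random.choice(destroy_ops)   # removal heuristic
        repair = random.choice(repair_ops)     # insertion heuristic
        candidate = repair(destroy(current))
        key = tuple(candidate)
        if key in tabu:                        # skip recently visited solutions
            continue
        delta = cost(candidate) - cost(current)
        if delta < 0 or random.random() < math.exp(-delta / temp):  # SA acceptance rule
            current = candidate
            tabu.append(key)
            tabu = tabu[-tabu_size:]           # keep only the most recent entries
            if cost(current) < cost(best):
                best = current
        temp *= cooling                        # gradually reduce acceptance of worse moves
    return best

In practice, the destroy operators would remove a subset of assignments from a GQAP solution and the repair operators would reinsert them (e.g., greedily or by regret), with operator selection weights adapted over time.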
A knowledge graph (KG) is a structured form of human knowledge that models the relationships between real-world entities. High-quality KGs are of crucial importance for many knowledge-based applications, e.g., question answering and recommender systems. This paper studies the problem of entity alignment in KGs to promote knowledge fusion. Existing methods model the semantic representation of entities by using graph structural information or attribute information of the KG and then align the entities across different domains by calculating the distances between entities’ embeddings. However, these methods only consider node-to-node similarity in the alignment procedure, while edge-to-edge similarity is ignored. Our research hypothesis is that graph edge alignment information is critical in entity alignment. We reformulate knowledge entity alignment as a quadratic assignment problem (QAP) by adding relation alignment under the one-to-one mapping constraint. To solve the notorious QAP in a large-scale heterogeneous graph such as a KG, we propose a model, the dual neighborhood consensus network (DNCN), which approximately decomposes the QAP into two small-scale linear assignment problems, i.e., entity alignment and relation alignment. After that, an edge-coloring propagation method is proposed to refine the coarse entity alignment result using the relation correspondence. A theoretical proof shows that this method guarantees isomorphism between local sub-graphs. The performance of DNCN is evaluated using the DBP15K and DWY100K benchmarks. Experimental results show that DNCN achieves the best performance on the DBP15K benchmark and is computationally efficient. Ablation studies verify the importance of graph edge alignment information.
•Novel DNCN model incorporates graph edge alignment into entity alignment.
•Proof shows the method ensures entity and relation neighborhood consistency.
•DNCN performs well on DBP15K and DWY100K datasets.
•Ablation studies confirm the model’s effectiveness. Code available.
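As a concrete picture of the decomposition described above, the sketch below solves two independent linear assignment problems (one over entity similarity scores and one over relation similarity scores) using SciPy's Hungarian solver. The similarity matrices are random placeholders; this is an illustration of the decomposition idea, not the DNCN model.

import numpy as np
from scipy.optimize import linear_sum_assignment

# Placeholder similarity scores between embeddings of the two KGs
entity_sim = np.random.rand(100, 100)     # entities of KG1 vs entities of KG2
relation_sim = np.random.rand(20, 20)     # relations of KG1 vs relations of KG2

# Each linear assignment problem yields a one-to-one correspondence;
# negating the scores turns maximization of similarity into minimization of cost
ent_rows, ent_cols = linear_sum_assignment(-entity_sim)
rel_rows, rel_cols = linear_sum_assignment(-relation_sim)

entity_alignment = dict(zip(ent_rows, ent_cols))
relation_alignment = dict(zip(rel_rows, rel_cols))
print(len(entity_alignment), len(relation_alignment))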
We consider three known bounds for the quadratic assignment problem (QAP): an eigenvalue bound, a convex quadratic programming (CQP) bound, and a semidefinite programming (SDP) bound. Since the last two bounds had not been compared directly before, we prove that the SDP bound is stronger than the CQP bound. We then apply these bounds to improve known bounds on a discrete energy minimization problem, reformulated as a QAP, which aims to minimize the potential energy between repulsive particles on a toric grid. We are thus able to prove optimality for several configurations of particles and grid sizes, complementing earlier results by Bouman et al. (2013). The semidefinite programs in question are too large to solve without pre-processing, and we use a symmetry reduction method by Permenter and Parrilo (2020) to make computation of the SDP bounds possible.
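For context, one classical eigenvalue bound of the type mentioned above, stated here for the trace form of the QAP with symmetric matrices; the notation is an illustrative assumption, and the cited paper may use a sharper variant.

\[
\min_{X \in \Pi_n} \operatorname{tr}\!\left(A X B X^{\top}\right) \;\ge\; \sum_{i=1}^{n} \lambda_i(A)\,\mu_{n-i+1}(B),
\]

where $\lambda_1(A) \le \dots \le \lambda_n(A)$ and $\mu_1(B) \le \dots \le \mu_n(B)$ are the eigenvalues of $A$ and $B$ in nondecreasing order. Pairing the eigenvalues in opposite order gives the minimal scalar product, and the inequality holds because permutation matrices form a subset of the orthogonal matrices, over which this minimum is attained.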
Minimization with orthogonality constraints (e.g., $X^{\top} X = I$) and/or spherical constraints (e.g., $\lVert x \rVert_2 = 1$) has wide applications in polynomial optimization, combinatorial optimization, eigenvalue problems, sparse PCA, p-harmonic flows, 1-bit compressive sensing, matrix rank minimization, etc. These problems are difficult because the constraints are not only non-convex but also numerically expensive to preserve during iterations. To deal with these difficulties, we apply the Cayley transform (a Crank–Nicolson-like update scheme) to preserve the constraints and, based on it, develop curvilinear search algorithms with lower flops than those based on projections and geodesics. The efficiency of the proposed algorithms is demonstrated on a variety of test problems. In particular, for the maxcut problem, it exactly solves a decomposition formulation for the SDP relaxation. For polynomial optimization, nearest correlation matrix estimation, and extreme eigenvalue problems, the proposed algorithms run very fast and return solutions no worse than those from state-of-the-art algorithms. For the quadratic assignment problem, a gap of 0.842% to the best-known solution on the largest problem “tai256c” in QAPLIB can be reached in 5 minutes on a typical laptop.
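A minimal sketch of a Cayley-type feasible update of the kind described above, assuming a matrix variable X with orthonormal columns and a gradient G at X; the step size, dimensions, and function name are illustrative assumptions, and this is not the authors' implementation.

import numpy as np

def cayley_step(X, G, tau=0.1):
    """One curvilinear-search step that keeps X^T X = I.

    W is a skew-symmetric matrix built from the gradient; the update
    Y(tau) = (I + tau/2 W)^{-1} (I - tau/2 W) X stays on the manifold
    because the Cayley transform of a skew-symmetric matrix is orthogonal."""
    n = X.shape[0]
    W = G @ X.T - X @ G.T                      # skew-symmetric search direction
    I = np.eye(n)
    return np.linalg.solve(I + 0.5 * tau * W, (I - 0.5 * tau * W) @ X)

# Example: X with orthonormal columns, arbitrary gradient G
X, _ = np.linalg.qr(np.random.randn(5, 2))
G = np.random.randn(5, 2)
Y = cayley_step(X, G)
print(np.allclose(Y.T @ Y, np.eye(2)))         # True: orthogonality preserved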