We establish two versions of Vizing's theorem for Borel multi-graphs whose vertex degrees and edge multiplicities are uniformly bounded by Δ and π, respectively. The “approximate” version states that, for any Borel probability measure on the edge set and any ϵ>0, we can properly colour all but an ϵ-fraction of the edges with Δ+π colours in a Borel way. The “measurable” version, which is our main result, states that if, additionally, the measure is invariant, then there is a measurable proper edge colouring of the whole edge set with at most Δ+π colours.
In the online bipartite matching problem with replacements, all the vertices on one side of the bipartition are given, and the vertices on the other side arrive one-by-one with all their incident edges. The goal is to maintain a maximum matching while minimizing the number of changes (replacements) to the matching. We show that the greedy algorithm that always takes the shortest augmenting path from the newly inserted vertex (denoted the SAP protocol) uses at most amortized O(log² n) replacements per insertion, where n is the total number of vertices inserted. This is the first analysis to achieve a polylogarithmic number of replacements for any replacement strategy, almost matching the Ω(log n) lower bound. The previous best strategy known achieved amortized O(√n) replacements (Bosek, Leniowski, Sankowski, Zych, FOCS 2014). For the SAP protocol in particular, nothing better than the trivial O(n) bound was known except in special cases. Our analysis immediately implies the same upper bound of O(log² n) reassignments for the capacitated assignment problem, where each vertex on the static side of the bipartition is initialized with the capacity to serve a number of vertices.
We also analyze the problem of minimizing the maximum server load. We show that if the final graph has maximum server load L, then the SAP protocol makes amortized O(min{L log² n, √n log n}) reassignments. We also show that this is close to tight, because Ω(min{L, √n}) reassignments can be necessary.
In this paper, we propose a general unsupervised feature selection method named unsupervised feature selection using principal component analysis (UFSPCA). Repetitive information causes redundancy, and when features are correlated it is difficult to recognize that their information is repetitive. Accordingly, we first use PCA to create uncorrelated, orthogonal features, and then calculate the similarities between the original and uncorrelated features. Next, we model the two sets of original and orthogonal features, together with the similarities between them, as a weighted bipartite graph. Finally, we obtain a maximum-weight matching using the Hungarian algorithm. The original features whose vertices appear in this matching are the selected features. To illustrate the optimality and efficiency of the proposed method, we evaluated its performance on five datasets using the KNN classifier and compared it with seven well-known unsupervised feature selection algorithms. The evaluation results show that the UFSPCA method is superior to the other seven algorithms.
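The pipeline can be sketched in a few lines. This is our own minimal reconstruction under stated assumptions: the similarity measure (absolute correlation between each original feature and each principal-component score) and the function name `ufspca_select` are ours, and the paper's exact choices may differ. The Hungarian step uses `scipy.optimize.linear_sum_assignment`.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def ufspca_select(X, k):
    """Sketch of a UFSPCA-style pipeline: PCA -> feature/component
    similarity -> maximum-weight bipartite matching -> select the matched
    original features.  Similarity choice is an assumption, not the paper's.
    """
    Xc = X - X.mean(axis=0)
    # PCA via SVD: the projected scores are the uncorrelated,
    # orthogonal features.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt.T[:, :k]          # n_samples x k orthogonal features
    # Edge weights of the bipartite graph: |correlation| between each
    # original feature and each orthogonal feature.
    d = X.shape[1]
    W = np.zeros((d, k))
    for i in range(d):
        for j in range(k):
            W[i, j] = abs(np.corrcoef(Xc[:, i], scores[:, j])[0, 1])
    # Hungarian algorithm maximizes total similarity (negate for min-cost).
    rows, _ = linear_sum_assignment(-W)
    return sorted(rows.tolist())       # indices of selected original features
```

Because the matching is one-to-one, exactly k distinct original features are returned, each paired with the component it best represents under the assumed similarity.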
This article proposes a hybrid method to find the optimal placement of dc power flow controllers (DC-PFCs) in HVdc grids. The hybrid method combines max-flow/min-cut theory with Ford-Fulkerson augmenting paths and sensitivity analysis to identify the most important HVdc lines and the best placement. In this article, a series DC-PFC is modeled and placed. A significant advantage of the method is that it reduces the computational burden by identifying the most important HVdc lines, those most susceptible to overload, without examining other, unnecessary HVdc lines. The static security of the power system strongly depends on the placement of DC-PFCs, which has become an important issue because of the rapid development of HVdc grids. Static security studies have been carried out on an eight-terminal HVdc grid under two scenarios.
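The max-flow/min-cut computation at the core of such a method can be illustrated with a small, grid-agnostic sketch (the toy edge dictionary and function name are ours; the article's HVdc model and sensitivity analysis are not reproduced). Edmonds-Karp, i.e. Ford-Fulkerson with BFS (shortest) augmenting paths, yields both the max-flow value and a min cut; the saturated cut edges are the bottleneck lines.

```python
from collections import deque

def max_flow_min_cut(edges, s, t):
    """Edmonds-Karp max-flow plus the resulting min cut.

    edges : dict (u, v) -> capacity (a toy stand-in for a line network)
    Returns (max_flow_value, list of cut edges (u, v)).
    """
    # Residual graph: forward arcs at given capacity, reverse arcs at 0.
    res, adj = {}, {}
    for (u, v), c in edges.items():
        res[(u, v)] = res.get((u, v), 0) + c
        res.setdefault((v, u), 0)
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    total = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph.
        parent, q = {s: None}, deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v in adj.get(u, ()):
                if v not in parent and res[(u, v)] > 0:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            break
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        push = min(res[e] for e in path)      # bottleneck capacity
        for (u, v) in path:
            res[(u, v)] -= push
            res[(v, u)] += push
        total += push
    # Min cut: original arcs crossing from the residual-reachable side.
    reach, q = {s}, deque([s])
    while q:
        u = q.popleft()
        for v in adj.get(u, ()):
            if v not in reach and res[(u, v)] > 0:
                reach.add(v)
                q.append(v)
    cut = [(u, v) for (u, v) in edges if u in reach and v not in reach]
    return total, cut
```

On a small network the cut edges are exactly the saturated arcs separating source from sink, which is the kind of "critical line" information the placement step would consume.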
The purpose of this work is to develop models and algorithms for optimizing matchings in dynamically generated graphs of asymmetric relations in coordinated open systems of interacting agents with centralized and collective control. The dynamic asymmetric matching optimization problem arises here as a compromise approximation that maps the dynamic programming method onto a stream of known open assignment problems or several traveling-salesman problems. However, the branching alternatives presented in this way for independent tasks do not take into account the interdependence of the real relationships between agents and their tasks, including their relationship to time. Ignoring this dependence of the branching alternatives either delays the moment of assignment or degrades the quality of assigning tasks to coordinated agents. The main idea of the proposed implementation of this well-known control principle is to postpone the final decision to the latest possible moment, taking into account the susceptibility of the system to local changes in its state variables. The interdependence of states is revealed by analysing how the graph of the current matching corresponds to the optimal solution on the subgraph of a perfect matching. The transition between states is implemented by an incremental version of the reoptimization algorithm for linear assignment problems based on the shortest augmenting path method. The search-state space is a dynamically generated bipartite sparse graph of alternatives for combinations of agents and tasks, represented by a list of arcs. To highlight the sets of changed arcs, it is proposed to supplement the arc weights with the boundaries of the stability intervals of the solution, optionally computed in the background. By default, the weight of a modified arc matches the boundary of its stability interval.
On each correction cycle of the lists of agents, tasks, and their associations, subsets of elements are selected for which the matching must be reconsidered. An enhanced condition for selecting such elements is crossing the boundary of the stability interval. In this case, the asymmetry of the assignment problem is taken into account by choosing the adjacency structure for the part of the graph with the minimum number of vertices. As a result, the reaction time of the procedures for solving the assignment problem is reduced by an order of magnitude.
We consider the maximum vertex-weighted matching problem (MVM) for non-bipartite graphs, in which non-negative weights are assigned to the vertices of a graph and a matching that maximizes the sum of the weights of the matched vertices is desired. In earlier work we described a 2/3-approximation algorithm for MVM on bipartite graphs (Florin Dobrian et al., 2019). Here we show that a 2/3-approximation algorithm for MVM on non-bipartite graphs can be obtained by restricting the length of augmenting paths to at most three. The algorithm has time complexity O(m log Δ + n log n), where n is the number of vertices, m is the number of edges, and Δ is the maximum degree of a vertex.
The approximation ratio of the algorithm is obtained by considering failed vertices, i.e., vertices that the approximation algorithm fails to match but the exact algorithm does. We show that there are two distinct heavier matched vertices that we can charge each failed vertex to. Our proof techniques characterize the structure of augmenting paths in a novel way.
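The length-restricted augmenting search can be sketched as follows. This is our own simplified reconstruction, assuming vertices are processed in decreasing weight order; the paper's exact data structures (which give the O(m log Δ + n log n) bound) and tie-breaking rules are not reproduced.

```python
def mvm_two_thirds(n, edges, weight):
    """Sketch: vertex-weighted matching via augmenting paths of length <= 3.

    For each unmatched vertex v (heaviest first), look for an augmenting
    path of length 1 (a free neighbour) or length 3 (v - u - mate[u] - x
    with x free).  The length restriction is what underlies the 2/3
    guarantee; this sketch is not the paper's implementation.
    """
    adj = {v: [] for v in range(n)}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    mate = [None] * n
    for v in sorted(range(n), key=lambda x: -weight[x]):
        if mate[v] is not None:
            continue
        # Length-1 augmenting path: match v to its heaviest free neighbour.
        free = [u for u in adj[v] if mate[u] is None]
        if free:
            u = max(free, key=lambda x: weight[x])
            mate[v], mate[u] = u, v
            continue
        # Length-3 path: v - u (unmatched edge), u - mate[u] (matched
        # edge), mate[u] - x (unmatched edge) with x free.
        best = None
        for u in adj[v]:
            w = mate[u]                      # u is matched here
            for x in adj[w]:
                if x != u and x != v and mate[x] is None:
                    if best is None or weight[x] > weight[best[2]]:
                        best = (u, w, x)
        if best:
            u, w, x = best
            mate[v], mate[u] = u, v          # flip: add v-u and w-x,
            mate[w], mate[x] = x, w          # drop the matched edge u-w
    return mate
```

On the path 0-1-2-3 with weights [5, 10, 10, 5], vertex 1 first grabs vertex 2; vertex 0 then uses the length-3 path 0-1-2-3 to rematch everything, so all four vertices end up matched.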
We have implemented the 2/3-approximation algorithm and show that it runs in under a minute on graphs with tens of millions of vertices and hundreds of millions of edges. We compare its performance with five other algorithms: an exact algorithm for MVM, an exact algorithm for the maximum edge-weighted matching (MEM) problem, as well as three approximation algorithms. The approximation algorithms include a 1/2-approximation algorithm for MVM, and (2/3−ε)- and (1−ε)-approximation algorithms for the MEM. In our test set of nineteen problems, there are graphs on which the exact algorithms fail to terminate in 100 hours. In addition, the new 2/3-approximation algorithm for MVM outperforms the other approximation algorithms by either being faster (often by orders of magnitude) or obtaining better weights.
Let S be a subset of V(G). The vertices of S are colored black and the vertices of V(G) − S are colored white. The color-change rule states that if a black vertex u has a unique white neighbor v, then the color of v is changed to black. The zero forcing number of a graph, denoted by Z(G), is the minimum cardinality of a set S such that all vertices of V(G) become black by repeatedly applying the color-change rule. It was proposed by the AIM minimum rank group in 2008 to bound the maximum nullity of G, and has since received much attention. Our interest is to study the zero forcing number of graphs in terms of the matching number α′(G) and the cyclomatic number c(G). In this paper, utilizing alternating paths and an operation associated with a vertex partition, we verify the sharp bounds n(G) − 2α′(G) ≤ Z(G) ≤ n(G) − α′(G) + c(G) − 1 for n(G) ≥ 3. The extremal graphs attaining the upper bound are completely characterized.
An efficient polynomial-time algorithm for solving maximum flow problems in directed networks is proposed in this paper. The algorithm is based on successive divisions of the arc capacities by powers of two; it solves the maximum flow problem as a sequence of O(m) shortest path problems on residual networks with n nodes and m arcs. It runs in O(m²r) time, where r is the smallest integer greater than or equal to log B, and B is the largest arc capacity of the network. A numerical example illustrates the proposed algorithm.
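The divide-by-powers-of-two idea is the classical capacity-scaling scheme: at scale D, only residual arcs of capacity at least D are used, and D is halved until it reaches 1. A minimal sketch, assuming a dict-of-dicts capacity representation (BFS stands in for the paper's shortest-path subroutine, and the function name is ours):

```python
from collections import deque

def scaling_max_flow(cap, s, t):
    """Capacity-scaling max-flow sketch.

    cap : dict u -> {v: capacity}.  At scale D, augment only along paths
    whose every residual arc has capacity >= D; halve D down to 1.
    """
    nodes = set(cap) | {v for u in cap for v in cap[u]}
    res = {u: {} for u in nodes}
    for u in cap:
        for v, c in cap[u].items():
            res[u][v] = res[u].get(v, 0) + c
            res[v].setdefault(u, 0)          # reverse arc starts at 0
    B = max((c for u in cap for c in cap[u].values()), default=0)
    D = 1
    while D * 2 <= B:                        # largest power of two <= B
        D *= 2
    total = 0
    while D >= 1:
        while True:
            # BFS restricted to residual arcs of capacity >= D.
            parent, q = {s: None}, deque([s])
            while q and t not in parent:
                u = q.popleft()
                for v, r in res[u].items():
                    if v not in parent and r >= D:
                        parent[v] = u
                        q.append(v)
            if t not in parent:
                break                        # no path at this scale
            path, v = [], t
            while parent[v] is not None:
                path.append((parent[v], v))
                v = parent[v]
            push = min(res[u][v] for u, v in path)
            for u, v in path:
                res[u][v] -= push
                res[v][u] += push
            total += push
        D //= 2                              # refine the scale
    return total
```

Each scale needs few augmentations because every push moves at least D units, which is the source of the log B factor r in the stated O(m²r) bound.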
Max-flow has been adopted for semi-supervised data modelling, yet existing algorithms were derived only for learning from static data. This paper proposes an online max-flow algorithm for semi-supervised learning from data streams. Consider a graph learned from labelled and unlabelled data, updated dynamically to accommodate the online addition and retirement of data. To learn from the resulting non-stationary graph, we augment and de-augment paths to update the max-flow, with a theoretical guarantee that the updated max-flow equals that obtained from batch retraining. For classification, we compute a min-cut over the current max-flow, so that the number of similar sample pairs classified into distinct classes is minimized. Empirical evaluation on real-world data reveals that our algorithm outperforms state-of-the-art stream classification algorithms.
In this paper, the binary representation of arc capacities is used to develop an efficient polynomial-time algorithm for the constrained maximum flow problem in directed networks. The algorithm is based on solving the maximum flow problem as a sequence of O(n²) shortest path problems on the residual directed networks with n nodes generated during the iterations. The complexity of the algorithm is at most O(n²mr) arithmetic operations, where m denotes the number of arcs in the network and r is the smallest integer greater than or equal to log B (B denotes the largest arc capacity in the directed network). The algorithm is also generalized to solve the maximum flow problem in a directed network subject to non-negative lower bounds on the flow vector. A formulation of the simple transportation problem as a maximal network flow problem is also given. A numerical example illustrates the use of the proposed algorithm.