Extending the well‐known star–comb lemma for infinite graphs, we characterise the graphs that do not contain an infinite comb or an infinite star, respectively, attached to a given set of vertices. We offer several characterisations: in terms of normal trees, tree‐decompositions, ranks of rayless graphs and tangle‐distinguishing separators.
While distribution grids are often operated radially, they are typically designed to be more redundant, so that each load has multiple connections to the main grid. For complex networks like these, the notion of treewidth can be used to quantify their complexity. In this paper, we propose a new conceptual framework and derive an exact formula for computing treewidth with the help of our constructs. We argue that our framework effectively captures complexities in the structure of distribution grids and has the potential to simplify the calculation of treewidth. After analysing our findings, we hypothesise that the treewidth of distribution grids will typically be low, implying that some difficult power system problems can be solved on them in parameterised polynomial time with dynamic programming. We demonstrate this with an example problem of dividing a distribution grid into tree-like operational subgraphs around the primary substations so that no voltage violations occur.
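The treewidth the abstract reasons about can be upper-bounded with the standard min-degree elimination heuristic. The sketch below is illustrative only: the toy "feeder with redundant ties" graph and the pure-Python implementation are our own stand-ins, not the paper's DG-kernel construction or exact formula.

```python
def min_degree_treewidth_ub(adj):
    """Upper-bound treewidth via the min-degree elimination heuristic.

    adj: dict mapping vertex -> set of neighbours (undirected).
    Returns (width, bags): the width of the produced elimination
    ordering and the bags of the induced tree decomposition.
    """
    adj = {v: set(ns) for v, ns in adj.items()}   # work on a copy
    bags, width = [], 0
    while adj:
        v = min(adj, key=lambda u: len(adj[u]))   # lowest-degree vertex
        bag = {v} | adj[v]                        # v plus its neighbours
        bags.append(bag)
        width = max(width, len(bag) - 1)
        for a in adj[v]:                          # make neighbours a clique
            for b in adj[v]:
                if a != b:
                    adj[a].add(b)
        for n in adj[v]:                          # remove v from the graph
            adj[n].discard(v)
        del adj[v]
    return width, bags

# Toy grid: substation "s", main line 1-2-3-4, and two redundant
# ties (s-3 and 2-4) mimicking a lightly meshed feeder.
grid = {
    "s": {1, 3}, 1: {"s", 2}, 2: {1, 3, 4},
    3: {2, 4, "s"}, 4: {3, 2},
}
width, bags = min_degree_treewidth_ub(grid)
print(width)  # 2: the grid is sparse, close to a tree
```

Low width is exactly what makes the tree-decomposition dynamic programming mentioned in the abstract run in parameterised polynomial time.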
•The novel concept of a DG-kernel captures important structural properties of distribution grids.
•DG-kernels are useful for determining the treewidth and constructing tree decompositions of distribution grids.
•Several examples of real distribution grids have low treewidth.
•An example distribution grid problem is solved efficiently with dynamic programming based on tree decompositions.
In a series of four papers we determine structures whose existence is dual, in the sense of complementary, to the existence of stars or combs. Here, in the second paper of the series, we present duality theorems for combinations of stars and combs: dominating stars and dominated combs. As dominating stars exist if and only if dominated combs do, the structures complementary to them coincide. As with arbitrary stars and combs, our duality theorems for dominated combs (and dominating stars) are phrased in terms of normal trees or tree‐decompositions. The complementary structures we provide for dominated combs unify those for stars and combs and allow us to derive our duality theorems for stars and combs from those for dominated combs. This is surprising given that our complementary structures for stars and combs are quite different: those for stars are locally finite, whereas those for combs are rayless.
•We study the existence of two packings of a tree into a bipartite graph with constrained maximum degree.
•We present a constructive proof, and our proof is algorithmic.
•Our proof may be applied to the study of two packings of a tree into a balanced bipartite graph.
We say that τ is an embedding of a bipartite graph G(X0,X1) in the complete bipartite graph Bn(Y0,Y1) provided τ:V(G)→V(Bn) with τ(Xi)⊆Yi (i=0,1). If there are two embeddings of G in Bn whose images are edge-disjoint, we say that there is a 2-packing of G in Bn. Let G(X1,X2) be a bipartite graph. For i=1,2, we use Δi to denote the maximum degree among the vertices in Xi. Let T(V1,V2) be a tree of order n with |V1|=a and |V2|=b. We demonstrate that if b≥a−1, there exists a 2-packing (σ,τ) of T in some Bn+1 such that Δ2(σ(T)∪τ(T))≤Δ2(T)+2. In general, Δ2(T)+2 cannot be reduced to Δ2(T)+1, making this result sharp.
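The two defining conditions above, edge-disjoint images and the degree bound on the second side, can be checked mechanically. The sketch below uses a hypothetical example (a path of order 4 as T, invented host vertex names a0…b2); it only verifies the conditions for given maps and is not the paper's construction of σ and τ.

```python
from collections import Counter

def is_2_packing(tree_edges, sigma, tau):
    """Edge-disjointness check: the images of the tree's edges under
    sigma and tau must share no edge of the host graph."""
    e1 = {frozenset((sigma[u], sigma[v])) for u, v in tree_edges}
    e2 = {frozenset((tau[u], tau[v])) for u, v in tree_edges}
    return e1.isdisjoint(e2)

def max_degree_on_side(tree_edges, maps, side):
    """Maximum degree, over vertices of `side`, in the union of images."""
    edges = {frozenset((m[u], m[v])) for m in maps for (u, v) in tree_edges}
    deg = Counter(v for e in edges for v in e if v in side)
    return max(deg.values(), default=0)

# T: path 0-1-2-3 with bipartition V1 = {0, 2}, V2 = {1, 3},
# so Delta_2(T) = 2 (vertex 1) and the bound is Delta_2(T) + 2 = 4.
T = [(0, 1), (1, 2), (2, 3)]
Y0, Y1 = {"a0", "a1", "a2"}, {"b0", "b1", "b2"}   # hypothetical host sides
sigma = {0: "a0", 1: "b0", 2: "a1", 3: "b1"}
tau   = {0: "a2", 1: "b1", 2: "a0", 3: "b2"}

print(is_2_packing(T, sigma, tau))                # True: images edge-disjoint
print(max_degree_on_side(T, [sigma, tau], Y1))    # 3, within the bound of 4
```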
•A haze removal optimization algorithm based on region decomposition and feature fusion.
•The image is decomposed with a quad-tree method based on gradient and grayscale information to obtain the sky sub-region.
•The smoothed image is then used as a weight map to optimize the transmission image.
•Haze-free images are obtained based on an atmospheric scattering model and color compensation.
This paper introduces a haze removal algorithm based on region decomposition and feature fusion to overcome the challenges of dark channel prior-based algorithms, such as block effects and color distortion. In our proposed method, an image is decomposed with the quad-tree method based on gradient and grayscale information to obtain the sky regions. These sky regions are used as seed points for region growing, which segments the image into sky and non-sky regions. A Gaussian filter is applied to smooth the segmented image, which is then used as a weight map to optimize the transmission image in the dark channel prior algorithm. Finally, the haze-free images are obtained based on an atmospheric scattering model and color compensation. Our experimental results demonstrate that images restored using this algorithm are generally clear and natural, and that the algorithm is especially suitable for hazy images with large sky regions.
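The gradient-and-grayscale quad-tree step can be sketched as a recursive split that keeps blocks which are both smooth (low mean gradient) and bright. The thresholds and the smooth-and-bright criterion below are illustrative assumptions, not the paper's exact rule, and the synthetic image stands in for a real hazy photo.

```python
import numpy as np

def quadtree_sky_blocks(gray, min_size=8, grad_thresh=4.0, bright_thresh=180):
    """Recursively split a 2-D grayscale array into blocks and keep
    blocks that are smooth and bright, as candidate sky regions."""
    blocks = []

    def split(y, x, h, w):
        patch = gray[y:y + h, x:x + w].astype(float)
        gy, gx = np.gradient(patch)
        smooth = np.hypot(gy, gx).mean() < grad_thresh
        bright = patch.mean() > bright_thresh
        if smooth and bright:
            blocks.append((y, x, h, w))          # candidate sky block
        elif h > min_size and w > min_size:
            h2, w2 = h // 2, w // 2              # quad split
            split(y, x, h2, w2)
            split(y, x + w2, h2, w - w2)
            split(y + h2, x, h - h2, w2)
            split(y + h2, x + w2, h - h2, w - w2)

    split(0, 0, *gray.shape)
    return blocks

# Synthetic frame: bright flat "sky" on top, textured ground below.
rng = np.random.default_rng(0)
img = np.full((64, 64), 220, dtype=np.uint8)
img[32:, :] = rng.integers(0, 120, (32, 64))
sky = quadtree_sky_blocks(img)
print(sky)  # only the two 32x32 quadrants of the flat top half survive
```

In the full method these blocks would seed the region-growing stage that separates sky from non-sky.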
•Parameterized analysis of a general notion of diversity of solutions that suits a large class of combinatorial problems.
•Introduction of the notion of a dynamic programming core.
•Efficient dynamic cores for computing one solution yield efficient dynamic cores for computing a diverse set of solutions.
•The notion of diversity of solutions is also compatible with certain notions of kernel.
When modeling an application of practical relevance as an instance of a combinatorial problem X, we are often interested not merely in finding one optimal solution for that instance, but in finding a sufficiently diverse collection of good solutions. In this work we initiate a systematic study of diversity from the point of view of fixed-parameter tractability theory. First, we consider an intuitive notion of diversity of a collection of solutions which suits a large variety of combinatorial problems of practical interest. We then present an algorithmic framework which automatically converts a tree-decomposition-based dynamic programming algorithm for a given combinatorial problem X into a dynamic programming algorithm for the diverse version of X. Surprisingly, our algorithm has a polynomial dependence on the diversity parameter.
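The core idea, running the single-solution DP on tuples of states so that a diversity term can be accumulated per vertex, can be illustrated on the simplest case: two independent sets in a tree, maximising total size plus their Hamming (symmetric-difference) distance. This toy is our own sketch, not the paper's dynamic-programming-core framework.

```python
def diverse_pair_value(tree, root, lam=1):
    """DP for two independent sets S1, S2 in a tree maximising
    |S1| + |S2| + lam * |S1 symmetric-difference S2|.
    Each vertex carries a joint state (a, b) = (in S1?, in S2?),
    mirroring how a single-solution DP is lifted to the diverse setting."""
    def dp(v, parent):
        kids = [dp(c, v) for c in tree[v] if c != parent]
        best = {}
        for a in (0, 1):
            for b in (0, 1):
                # per-vertex contribution: sizes plus diversity term
                val = a + b + lam * (a != b)
                for child in kids:
                    # independence: a chosen vertex forbids chosen children
                    val += max(w for (ca, cb), w in child.items()
                               if not (a and ca) and not (b and cb))
                best[(a, b)] = val
        return best

    return max(dp(root, None).values())

# Path 0-1-2-3-4: the optimum is the complementary pair
# S1 = {0, 2, 4}, S2 = {1, 3}, with value 3 + 2 + 5 = 10.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(diverse_pair_value(path, 0))  # 10
```

Note the table size grows with the number of joint states per bag, which is where the polynomial dependence on the diversity parameter claimed in the abstract becomes the interesting point.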
•Analyze and address the shortcomings of current multi-focus image fusion methods.
•A novel focus measure, the sum of edge-weighted modified Laplacian, is proposed.
•A new multi-focus image fusion method based on quad-tree decomposition is proposed.
Multi-focus image fusion is an important way to obtain all-in-focus images. The goal is to reconstruct all the focused pixels in the source images into the fused image. Generally, in block-based fusion methods, the source image is decomposed into fixed-size blocks. However, the size of the blocks affects the fusion quality, and problems such as blockiness are prone to occur in the fusion results. To this end, we propose a novel multi-focus image fusion method based on optimal block decomposition. Unlike conventional fixed-block-based methods, our method adopts optimal quad-tree decomposition to process the source images. First, a new sum of edge-weighted modified Laplacian (SEWML) is proposed based on the sum of modified Laplacian (SML); used as a focus measure to detect the focus information of the source image, this improved measure is more robust than SML. Then, an efficient quad-tree decomposition strategy is proposed to decompose the source images into optimally sized blocks. At the same time, SEWML is used to detect the focused blocks in the quad-tree structure of the source images; the focused blocks are combined to form the initial decision map, which is then optimized to obtain the final decision map. Finally, the fused image is obtained by the weighted-average rule according to the final decision map. Experimental results show that the proposed method achieves better performance than 14 other state-of-the-art multi-focus image fusion methods.
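The SML base measure that SEWML builds on takes only a few lines: the modified Laplacian sums absolute second differences in each axis, so sharp blocks score higher than blurred ones. The edge weighting that distinguishes SEWML is the paper's contribution and is only gestured at in the docstring; this is a plain-SML sketch.

```python
import numpy as np

def sum_modified_laplacian(img):
    """Sum of modified Laplacian (SML) focus measure for a grayscale
    block.  The paper's SEWML additionally weights each response by
    local edge strength; plain SML is shown here for illustration."""
    f = img.astype(float)
    # Modified Laplacian: |2f - left - right| + |2f - up - down|,
    # evaluated on the interior so all four neighbours exist.
    ml = (np.abs(2 * f[1:-1, 1:-1] - f[1:-1, :-2] - f[1:-1, 2:])
          + np.abs(2 * f[1:-1, 1:-1] - f[:-2, 1:-1] - f[2:, 1:-1]))
    return ml.sum()

# A sharp block scores higher than a smoothed copy of itself.
sharp = np.zeros((16, 16)); sharp[:, 8:] = 255.0          # hard edge
blurred = (sharp + np.roll(sharp, 1, axis=1)
           + np.roll(sharp, -1, axis=1)) / 3              # 1x3 box blur
print(sum_modified_laplacian(sharp) > sum_modified_laplacian(blurred))  # True
```

Comparing this score between the two source images' co-located quad-tree blocks is what drives the focused-block detection and the decision map.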