Given a graph G = (V, E), two vertices s, t ∈ V, and two integers k and ℓ, the Short Secluded Path problem asks for a simple s–t path with at most k vertices and at most ℓ neighbors. We study the parameterized complexity of the problem with respect to four structural graph parameters: the vertex cover number, treewidth, feedback vertex number, and feedback edge number. In particular, we completely settle the question of the existence of problem kernels with size polynomial in these parameters and their combinations with k and ℓ. We also obtain a 2^O(tw) · ℓ² · n-time algorithm for n-vertex graphs of treewidth tw, which yields subexponential-time algorithms in several graph classes.
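The problem definition can be made concrete with a brute-force sketch. The function below (a hypothetical helper, not the paper's 2^O(tw)-time algorithm; it runs in exponential time and is for illustration only) enumerates simple s–t paths with at most k vertices and accepts one whose open neighborhood outside the path has at most ℓ vertices:

```python
def short_secluded_path(adj, s, t, k, l):
    """Brute-force search for a simple s-t path with at most k vertices
    whose neighborhood outside the path has at most l vertices.
    adj: dict mapping each vertex to the set of its neighbors.
    Exponential time; illustrates the problem, not the paper's algorithm."""
    def neighbors_of(path):
        on_path = set(path)
        return {w for v in path for w in adj[v]} - on_path

    def dfs(path):
        v = path[-1]
        if v == t:
            # Path complete: check the secludedness condition.
            return list(path) if len(neighbors_of(path)) <= l else None
        if len(path) == k:
            return None          # vertex budget exhausted
        for w in adj[v]:
            if w not in path:    # keep the path simple
                path.append(w)
                found = dfs(path)
                path.pop()
                if found:
                    return found
        return None

    return dfs([s])
```

For example, in a path 1–2–3 with an extra pendant vertex 4 attached to 2, the path (1, 2, 3) has the single outside neighbor 4, so it is a solution for ℓ = 1 but not for ℓ = 0.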
The known linear-time kernelizations for d-Hitting Set guarantee linear worst-case running times using a quadratic-size data structure (that is not fully initialized). Getting rid of this data structure, we show that problem kernels of asymptotically optimal size O(k^d) for d-Hitting Set are computable in linear time and space. Additionally, we experimentally compare the linear-time kernelizations for d-Hitting Set to each other and to a classical data reduction algorithm due to Weihe.
•Linear-time and space kernelizations for d-Hitting Set.
•Experimental comparison to the well-known data reduction algorithm of Weihe.
•For low parameter values, kernelization is superior.
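The Weihe-style data reduction mentioned above can be sketched with its two classical rules, applied exhaustively; the implementation below is a simple quadratic-time illustration, not the linear-time kernelization of the paper:

```python
def weihe_reduce(sets):
    """Weihe-style data reduction for Hitting Set (sketch).
    Rule 1: if B is a proper subset of A, drop A (hitting B hits A).
    Rule 2: if every set containing element x also contains y, drop x
            (y can always replace x in a hitting set). The tie-break
            for identical occurrence sets assumes orderable labels."""
    sets = [frozenset(s) for s in sets]
    changed = True
    while changed:
        changed = False
        # Rule 1: drop sets that strictly contain another set.
        kept = []
        for a in sets:
            if any(b < a for b in sets):
                changed = True
            else:
                kept.append(a)
        sets = list(dict.fromkeys(kept))  # also dedupe, keeping order
        # Rule 2: drop dominated elements.
        elems = set().union(*sets) if sets else set()
        occ = {x: frozenset(i for i, s in enumerate(sets) if x in s)
               for x in elems}
        dominated = {x for x in elems
                     if any(y != x and occ[x] <= occ[y]
                            and (occ[x] < occ[y] or x > y)
                            for y in elems)}
        if dominated:
            changed = True
            sets = [s - dominated for s in sets]
    return [set(s) for s in sets]
```

For instance, the instance {1,2,3}, {2,3}, {2,4} reduces to the single set {2}: the first set is a superset of the second, and elements 3 and 4 are dominated by 2.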
Combinatorial filters are formal structures for filtering and reasoning over discrete sensor data. This article presents a series of results on whether the filter minimization (FM) problem, an NP-hard state-space reduction problem on such filters, and a variant of it, the filter partitioning minimization (FPM) problem, which requires the reduced filter to partition the state space of the original filter, can be solved via quotient operations under equivalence relations on the state space. We first consider the well-known notion of bisimulation and show that, although bisimulation always yields feasible solutions to the FM and FPM problems, it does not necessarily induce optimal solutions. We also establish a connection between filter reduction and the notion of simulation; specifically, we show that the FM problem is equivalent to the problem of inducing a minimal filter that simulates a given filter. We then introduce a variant of bisimulation, which we call compatibility, and prove that the FPM problem can always be solved by computing the quotient of the input filter under a compatibility equivalence relation with a minimum number of equivalence classes. Computing optimal solutions to the FM problem, on the other hand, requires looking beyond equivalence relations: the FM problem can be solved by computing the quotient of the original filter under a closed covering of the state space with the minimum number of compatibility classes. Subsequently, we introduce two special relations, the union of all compatibility relations and the mergeability relation, which are both computable in polynomial time. By analyzing when these two relations become equivalence relations, we identify several classes of filters for which the FM and FPM problems are solvable in polynomial time.
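The bisimulation quotient that gives a feasible (but not necessarily minimum) reduced filter can be sketched as a standard partition-refinement computation. The code below assumes a simplified deterministic filter model (states, an observation alphabet, a partial transition map, and a per-state output), which may differ in detail from the article's filter definition:

```python
def bisimulation_partition(states, labels, delta, out):
    """Coarsest bisimulation on a deterministic filter (sketch).
    states: iterable of states; labels: observation alphabet;
    delta: dict (state, label) -> state (partial); out: state -> output.
    Two states end up in the same block iff they have the same output
    and, for every label, transitions into the same block."""
    states = list(states)
    block = {s: out[s] for s in states}   # initial split by output
    nblocks = len(set(block.values()))
    while True:
        # Signature: own block plus the block of each labeled successor.
        sig = {s: (block[s],
                   tuple(block.get(delta.get((s, a))) for a in labels))
               for s in states}
        ids = {}
        block = {s: ids.setdefault(sig[s], len(ids)) for s in states}
        if len(ids) == nblocks:           # no block was split: stable
            return block
        nblocks = len(ids)
```

Quotienting the filter by this partition merges exactly the bisimilar states; per the article's first result, this is always feasible for FM/FPM but can miss the optimum.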
We prove that pseudorandom sets in the Grassmann graph have near-perfect expansion. This completes the last missing piece of the proof of the 2-to-2-Games Conjecture (albeit with imperfect completeness). The Grassmann graph has induced subgraphs that are themselves isomorphic to Grassmann graphs of lower orders. A set of vertices is called pseudorandom if its density within all such subgraphs (of constant order) is at most slightly higher than its density in the entire graph. We prove that pseudorandom sets have almost no edges within them. Namely, their edge-expansion is very close to 1.
We encounter optimization problems in our daily lives and in various research domains. Some of them are so hard that, at best, we can only approximate the best solutions with (meta-)heuristic methods. However, the huge number of optimization problems and the small number of generally acknowledged methods mean that more metaheuristics are needed to fill the gap. We propose a new metaheuristic, called chemical reaction optimization (CRO), to solve optimization problems. It mimics the interactions of molecules in a chemical reaction as they reach a low-energy stable state. We tested the performance of CRO on three NP-hard combinatorial optimization problems: two traditional benchmark problems and one real-world problem. Simulation results showed that CRO is very competitive with the few existing successful metaheuristics, outperforming them in some cases, and CRO achieved the best performance on the real-world problem. Moreover, by the No-Free-Lunch theorem, CRO must have the same performance as the others when averaged over all problems, but it can outperform other metaheuristics when matched to the right problem type. Therefore, it provides a new approach for solving optimization problems, and CRO may solve problems that are not solvable with the few generally acknowledged approaches.
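The energy-conservation idea behind CRO can be sketched with a stripped-down, single-molecule variant that uses only on-wall ineffective collisions (the full metaheuristic also has decomposition, inter-molecular collision, and synthesis operators, omitted here). A molecule carries a solution with potential energy f(x) and a kinetic-energy buffer; a move is allowed only if the total energy covers the new potential energy, and each collision loses a fraction of the surplus:

```python
import random

def cro_minimize(f, neighbor, x0, iters=10000, ke0=100.0, loss=0.1, seed=0):
    """Minimal single-molecule CRO sketch (on-wall collisions only).
    f: objective (potential energy); neighbor(x, rng): random move;
    loss: upper bound on the fraction of surplus energy lost per hit.
    Names and parameters are illustrative assumptions, not the paper's."""
    rng = random.Random(seed)
    x, ke = x0, ke0
    best, best_pe = x0, f(x0)
    for _ in range(iters):
        y = neighbor(x, rng)
        pe, pe_new = f(x), f(y)
        if pe + ke >= pe_new:              # energy-conservation check
            q = rng.uniform(1.0 - loss, 1.0)
            ke = (pe + ke - pe_new) * q    # keep a fraction of surplus
            x = y
            if pe_new < best_pe:
                best, best_pe = y, pe_new
    return best, best_pe
```

Early on, the large kinetic energy lets the molecule climb out of poor regions; as collisions drain energy, the search settles into a low-energy state, e.g. minimizing f(x) = x² over the integers with ±1 moves converges to 0.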
•A two-stage solution approach is proposed for a multi-floor facility layout problem.
•The Dantzig-Wolfe decomposition algorithm is applied to the multi-floor facility layout problem for the first time.
•84 new test problems with different properties were produced for the MFLP.
•Better results were found for all test problems from the literature.
The multi-floor facility layout problem (MFLP) is one of the most important and complex facility layout problems, with many applications in designing facilities for the manufacturing and service sectors. In this study, a hybrid version of the Dantzig-Wolfe decomposition algorithm is proposed to solve the MFLP for the first time. The proposed solution approach is performed in two steps. In the first step, a mathematical formulation assigns the departments to the floors so that departments with high material flow between them are located on the same or nearby floors. In the second step, given the output of the first step, the MFLP is decomposed into a master problem and several sub-problems to form the Dantzig-Wolfe decomposition algorithm and find the optimal layout of each floor separately. The integrated layout of the multiple floors is then formed easily. The proposed algorithm is evaluated on sample problems from the literature and on newly generated test problems. The obtained results show the superiority of the proposed algorithm over the existing approaches from the literature.
We consider minimal controllability problems (MCPs) on linear structural descriptor systems, addressing two problems, MCP0 and MCP1, of determining the minimum number of input nodes needed to make a descriptor system structurally controllable. We show that MCP0 for structural descriptor systems can be solved in polynomial time, matching the existing results for typical structural linear time-invariant (LTI) systems. However, the derivation is considerably different, because the technique behind the existing result cannot be used for descriptor systems; instead, we use the Dulmage-Mendelsohn decomposition. Moreover, we prove that the results for MCP1 differ from those for usual LTI systems: MCP1 for descriptor systems is NP-hard, while MCP1 for LTI systems can be solved in polynomial time.
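For contrast, the classical polynomial-time result for ordinary structural LTI systems that this abstract refers to rests on a maximum-matching characterization: the minimum number of driver (input) nodes equals n minus the size of a maximum matching in the bipartite influence graph (at least one driver is always needed). A minimal sketch, using a simple augmenting-path matching; this illustrates the LTI case only, not the descriptor-system analysis of the paper:

```python
def min_driver_nodes(n, edges):
    """Minimum number of input (driver) nodes for structural
    controllability of an ordinary structural LTI system x' = Ax.
    edges: list of (u, v) meaning state u influences state v
    (A[v][u] != 0); nodes are 0..n-1. Uses Kuhn's augmenting-path
    maximum bipartite matching between out-copies and in-copies."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)        # left copy u -> right copy v
    match = [-1] * n            # match[v] = left node matched to right v

    def augment(u, seen):
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                if match[v] == -1 or augment(match[v], seen):
                    match[v] = u
                    return True
        return False

    msize = sum(augment(u, set()) for u in range(n))
    return max(n - msize, 1)    # unmatched nodes need direct inputs
```

A directed path needs a single input at its head, while a star with one hub pointing at three leaves needs three inputs, since the hub can drive only one leaf independently.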
The Hopfield neural network, a recurrent neural network, has been widely used to solve NP-hard problems. However, the network tends to get trapped in local minima and thus converge to sub-optimal solutions. In this work, the intrinsic read noise of a memristive Hopfield network was harnessed as a random perturbation source to mitigate this problem. First, the read noise of devices (TiN/TaO_x/HfO_x/TiN) at different resistance levels in a 1-kb array was statistically measured and its distribution extracted. Then, the effect of the read-noise level on the performance of a 64 × 64 network solver was quantitatively evaluated by assigning all devices the same noise distribution. Based on this strategy, the success probability, the distribution of distances, the energy consumption, and the time to solution at different noise levels and iteration counts were investigated. The simulation results demonstrate that the intrinsic read noise in the resistive weight matrix indeed helps the network escape from local minima, serving as a useful computing resource.
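The mechanism can be illustrated with a small software sketch: a ±1 Hopfield network whose every synaptic read is perturbed by Gaussian noise, standing in for the memristor read noise (the parameters, noise model, and 4-node example below are illustrative assumptions, not the paper's 64 × 64 memristive setup):

```python
import random

def noisy_hopfield(W, iters=2000, sigma=0.2, seed=1):
    """Hopfield solver sketch with simulated read noise. W is a
    symmetric weight matrix with zero diagonal; states are +/-1 and
    the network minimizes E = -1/2 * s^T W s. Each synaptic read is
    perturbed by Gaussian noise of std sigma, mimicking memristor
    read noise that lets the network escape local minima."""
    rng = random.Random(seed)
    n = len(W)
    s = [rng.choice([-1, 1]) for _ in range(n)]

    def energy(state):
        return -0.5 * sum(W[i][j] * state[i] * state[j]
                          for i in range(n) for j in range(n))

    best, best_e = list(s), energy(s)
    for _ in range(iters):
        i = rng.randrange(n)                      # asynchronous update
        h = sum((W[i][j] + rng.gauss(0.0, sigma)) * s[j]
                for j in range(n) if j != i)      # noisy local field
        s[i] = 1 if h >= 0 else -1
        e = energy(s)
        if e < best_e:                            # track best state seen
            best, best_e = list(s), e
    return best, best_e
```

Mapping max-cut of a 4-cycle onto W (weight −1 on each edge), the noisy dynamics reach the optimal alternating state with energy −4; with larger sigma the state keeps fluctuating, which is exactly what allows escapes from local minima on harder instances.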