We prove that unless the Exponential Time Hypothesis (ETH) fails, deciding if there is a homomorphism from graph G to graph H cannot be done in time |V(H)|^{o(|V(G)|)}. We also show an exponential-time reduction from Graph Homomorphism to Subgraph Isomorphism. This rules out (subject to ETH) the possibility of a |V(H)|^{o(|V(H)|)}-time algorithm deciding if graph G is a subgraph of H. For both problems our lower bounds asymptotically match the running time of brute-force algorithms trying all possible mappings of one graph into another. Thus, our work closes the gap in the known complexity of these fundamental problems.
Moreover, as a consequence of our reductions, conditional lower bounds follow for other related problems such as Locally Injective Homomorphism, Graph Minors, Topological Graph Minors, Minimum Distortion Embedding and Quadratic Assignment Problem.
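The brute-force baseline that the lower bound above matches can be sketched in a few lines. This is a toy illustration under my own naming (not code from the paper): it simply tries all |V(H)|^|V(G)| mappings of V(G) into V(H) and checks that every edge of G lands on an edge of H.

```python
from itertools import product

def has_homomorphism(G_vertices, G_edges, H_vertices, H_edges):
    """Brute force: try all |V(H)|^|V(G)| mappings from G to H.

    A mapping f is a homomorphism if every edge (u, v) of G
    is mapped to an edge (f(u), f(v)) of H (undirected here).
    """
    H_edge_set = set(H_edges) | {(v, u) for (u, v) in H_edges}
    for images in product(H_vertices, repeat=len(G_vertices)):
        f = dict(zip(G_vertices, images))
        if all((f[u], f[v]) in H_edge_set for (u, v) in G_edges):
            return True
    return False

# A triangle has no homomorphism to a single edge (that would 2-color
# an odd cycle), while an edge maps into a triangle trivially.
triangle = ([0, 1, 2], [(0, 1), (1, 2), (2, 0)])
k2 = ([0, 1], [(0, 1)])
print(has_homomorphism(*triangle, *k2))  # False
print(has_homomorphism(*k2, *triangle))  # True
```

The lower bound says that, under ETH, no algorithm can beat this exhaustive search by more than lower-order terms in the exponent.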
Improving 3N Circuit Complexity Lower Bounds
Find, Magnus Gausdal; Golovnev, Alexander; Hirsch, Edward A. ...
Computational Complexity, 2023/12, Volume 32, Issue 2
Journal Article
Peer reviewed
While it can be easily shown by counting that almost all Boolean predicates of n variables have circuit size Ω(2^n/n), we have no example of an NP function requiring even a superlinear number of gates. Moreover, only modest linear lower bounds are known. Until recently, the strongest known lower bound was 3n − o(n), presented by Blum in 1984. In 2011, Demenkov and Kulikov presented a much simpler proof of the same lower bound, but for a more complicated function: an affine disperser for sublinear dimension. Informally, this is a function that is resistant to any n − o(n) affine substitutions. In 2011, Ben-Sasson and Kopparty gave an explicit construction of such a function. The proof of the lower bound basically goes by showing that for any circuit there exists an affine hyperplane where the function complexity decreases at least by three gates. In this paper, we prove the following two extensions.
1. A (3 + 1/86)n − o(n) lower bound for the circuit size of an affine disperser for sublinear dimension. The proof is based on the gate elimination technique extended with the following three ideas: (i) generalizing the computational model by allowing circuits to contain cycles, which in turn allows us to perform affine substitutions, (ii) a carefully chosen circuit complexity measure to track the progress of the gate elimination process, and (iii) quadratic substitutions that may be viewed as delayed affine substitutions.
2. A much simpler proof of a stronger lower bound of 3.11n for a quadratic disperser. Informally, a quadratic disperser is resistant to sufficiently many substitutions of the form x ← p, where p is a polynomial of degree at most two. Currently, there are no constructions of quadratic dispersers in NP (although there are constructions over large fields, and constructions with weaker parameters over GF(2)). The key ingredient of this proof is the induction on the size of the underlying quadratic variety instead of the number of variables as in the previously known proofs.
Most of the known lower bounds for binary Boolean circuits with unrestricted depth are proved by the gate elimination method. The most efficient known algorithms for the #SAT problem on binary Boolean circuits use similar case analyses to the ones in gate elimination. Chen and Kabanets recently showed that the known case analyses can also be used to prove average-case circuit lower bounds, that is, lower bounds on the size of approximations of an explicit function.
In this paper, we provide a general framework for proving worst/average case lower bounds for circuits and upper bounds for #SAT that is built on ideas of Chen and Kabanets. A proof in such a framework goes as follows. One starts by fixing three parameters: a class of circuits, a circuit complexity measure, and a set of allowed substitutions. The main ingredient of a proof goes as follows: by going through a number of cases, one shows that for any circuit from the given class, one can find an allowed substitution such that the given measure of the circuit reduces by a sufficient amount. This case analysis immediately implies an upper bound for #SAT. To obtain worst/average case circuit complexity lower bounds one needs to present an explicit construction of a function that is a disperser/extractor for the class of sources defined by the set of substitutions under consideration.
We show that many known proofs (of circuit size lower bounds and upper bounds for #SAT) fall into this framework. Using this framework, we prove the following new bounds: average-case lower bounds of 3.24n and 2.59n for circuits over U_2 and B_2, respectively (though the lower bound for the basis B_2 is given for a quadratic disperser whose explicit construction is not currently known), and faster-than-2^n #SAT algorithms for circuits over U_2 and B_2 of size at most 3.24n and 2.99n, respectively. Here by B_2 we mean the set of all bivariate Boolean functions, and by U_2 the set of all bivariate Boolean functions except for parity and its complement.
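For context, the #SAT baseline that these size-bounded algorithms improve upon is plain 2^n enumeration over all assignments. A minimal sketch (the helper name and interface are mine, purely for illustration):

```python
from itertools import product

def count_sat(formula, n):
    """Trivial 2^n-time #SAT: count the satisfying assignments of a
    Boolean function on n inputs by full enumeration.

    The framework discussed above beats this bound for circuits of
    bounded size by repeatedly finding a substitution that reduces a
    complexity measure of the circuit.
    """
    return sum(1 for bits in product([False, True], repeat=n) if formula(bits))

# Example: x0 XOR x1 has 2 satisfying assignments out of 4.
print(count_sat(lambda b: b[0] != b[1], 2))  # 2
```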
We present a new framework for designing worst-case to average-case reductions. For a large class of problems, it provides an explicit transformation of algorithms running in time T that are only correct on a small (subconstant) fraction of their inputs into algorithms running in time O(T) that are correct on all inputs.
Using our framework, we obtain such efficient worst-case to average-case reductions for fundamental problems in a variety of computational models; namely, algorithms for matrix multiplication, streaming algorithms for the online matrix-vector multiplication problem, and static data structures for all linear problems as well as for the multivariate polynomial evaluation problem.
Our techniques crucially rely on additive combinatorics. In particular, we show a local correction lemma that relies on a new probabilistic version of the quasi-polynomial Bogolyubov-Ruzsa lemma.
Many optimization problems can be phrased in terms of constraint satisfaction. In particular, MAX-2-SAT and MAX-2-CSP are known to generalize many hard combinatorial problems on graphs. Algorithms solving the problem exactly have been designed, but the running time improves over trivial brute-force solutions only for very sparse instances. Despite many efforts, the only known algorithm [28] solving MAX-2-CSP over n variables in less than O*(2^n) steps uses exponential space.
Several authors have designed algorithms with running time O*(2^{nf(d)}), where f: R+ → (0,1) is a slowly growing function and d is the average variable degree of the input formula. The current best known algorithm for MAX-2-CSP [25] runs in time O*(2^{n(1−2/(d+1))}) and polynomial space. In this paper we continue this line of research and design new algorithms for the MAX-2-SAT and MAX-2-CSP problems.
First, we present a general technique for obtaining new bounds on the running time of a simple algorithm for MAX-2-CSP analyzed with respect to the number of vertices from algorithms that are analyzed with respect to the number of constraints. The best known bound for the problem is improved to O*(2^{n(1−3/(d+1))}) for d ≥ 3. We further improve the bound for MAX-2-SAT; in particular, for d ≥ 6 we achieve O*(2^{n(1−3.677/(d+1))}).
As a second result, we present an algorithm with asymptotically better running time for the case when the input instance is not very sparse. Building on recent work of Feige and Kogan, we derive an upper bound on the size of a vertex separator for graphs in terms of the average degree of the graph. We then design a simple algorithm solving MAX-2-CSP in time O*(2^{c_d n}), where c_d = 1 − 2α(ln d)/d for some α < 1 and d = o(n).
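The trivial O*(2^n) brute force that these bounds are measured against can be sketched directly. This is an illustrative toy (names and the literal-encoding convention are mine, not the paper's):

```python
from itertools import product

def max2sat_brute(clauses, n):
    """O*(2^n) brute force for MAX-2-SAT over n variables.

    Each clause is a pair of literals; literal i > 0 means x_{i-1},
    and i < 0 means NOT x_{i-1}. Returns the maximum number of
    simultaneously satisfiable clauses.
    """
    def lit(bits, l):
        v = bits[abs(l) - 1]
        return v if l > 0 else not v

    best = 0
    for bits in product([False, True], repeat=n):
        sat = sum(1 for (a, b) in clauses if lit(bits, a) or lit(bits, b))
        best = max(best, sat)
    return best

# All four 2-clauses on x0, x1: every assignment falsifies exactly one.
clauses = [(1, 2), (-1, 2), (1, -2), (-1, -2)]
print(max2sat_brute(clauses, 2))  # 3
```

The algorithms in the abstract shave the exponent from n to n(1 − Θ(1/d)) or better by exploiting low average degree, rather than enumerating all assignments.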
The Minrank of Random Graphs
Golovnev, Alexander; Regev, Oded; Weinstein, Omri
IEEE Transactions on Information Theory, 11/2018, Volume 64, Issue 11
Journal Article
Peer reviewed
Open access
The minrank of a directed graph G is the minimum rank of a matrix M that can be obtained from the adjacency matrix of G by switching some ones to zeros (i.e., deleting edges) and then setting all diagonal entries to one. This quantity is closely related to the fundamental information-theoretic problems of (linear) index coding (Bar-Yossef et al.), network coding (Effros et al.), and distributed storage (Mazumdar, ISIT, 2014). We prove tight bounds on the minrank of directed Erdős-Rényi random graphs G(n,p) for all regimes of p ∈ (0,1). In particular, for any constant p, we show that minrk(G) = Θ(n/log n) with high probability, where G is chosen from G(n,p). This bound gives a near-quadratic improvement over the previous best lower bound of Ω(√n) (Haviv and Langberg), and partially settles an open problem raised by Lubetzky and Stav. Our lower bound matches the well-known upper bound obtained by the "clique covering" solution and settles the linear index coding problem for random knowledge graphs.
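The definition above can be made concrete with a brute force over GF(2), feasible only for tiny graphs: enumerate every way of deleting edges, force the diagonal to one, and take the minimum rank. This is a hypothetical sketch of the definition (my own code, and the paper's bounds also cover ranks over other fields):

```python
from itertools import product

def gf2_rank(rows):
    """Rank over GF(2) of a matrix whose rows are given as bitmask ints."""
    pivots = {}  # lowest set bit -> stored row having that lowest bit
    for row in rows:
        while row:
            b = row & -row          # lowest set bit of the current row
            if b in pivots:
                row ^= pivots[b]    # clears bit b, strictly raising the lowest bit
            else:
                pivots[b] = row
                break
    return len(pivots)

def minrank(n, adj):
    """Brute-force minrank of a directed graph on n vertices (tiny n only).

    adj[i][j] == 1 iff there is an edge i -> j. Try every subset of edges
    to keep (switching the rest of the ones to zeros), set all diagonal
    entries to one, and minimize the GF(2) rank.
    """
    edges = [(i, j) for i in range(n) for j in range(n) if i != j and adj[i][j]]
    best = n
    for keep in product([0, 1], repeat=len(edges)):
        rows = [1 << i for i in range(n)]  # identity: diagonal forced to one
        for k, (i, j) in zip(keep, edges):
            if k:
                rows[i] |= 1 << j
        best = min(best, gf2_rank(rows))
    return best

# Bidirected K3: keeping all edges gives the all-ones matrix, so rank 1,
# matching the clique-covering upper bound with a single clique.
K3 = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
print(minrank(3, K3))  # 1
```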
A constraint satisfaction problem (CSP), Max-CSP(F), is specified by a finite set of constraints F ⊆ {[q]^k → {0,1}} for positive integers q and k. An instance of the problem on n variables is given by m applications of constraints from F to subsequences of the n variables, and the goal is to find an assignment to the variables that satisfies the maximum number of constraints. In the (γ,β)-approximation version of the problem for parameters 0 ≤ β ≤ γ ≤ 1, the goal is to distinguish instances where at least a γ fraction of the constraints can be satisfied from instances where at most a β fraction of the constraints can be satisfied. In this work, we consider the approximability of this problem in the context of sketching algorithms and give a dichotomy result. Specifically, for every family F and every β < γ, we show that either a linear sketching algorithm solves the problem in polylogarithmic space or the problem is not solvable by any sketching algorithm in o(√n) space. In particular, we give non-trivial approximation algorithms using polylogarithmic space for infinitely many constraint satisfaction problems. We also extend previously known lower bounds for general streaming algorithms to a wide variety of problems, and in particular the case of q = k = 2, where we get a dichotomy, and the case when the satisfying assignments of the constraints of F support a distribution on [q]^k with uniform marginals. Prior to this work, other than sporadic examples, the only systematic classes of CSPs that were analyzed considered the setting of Boolean variables q = 2, binary constraints k = 2, and singleton families |F| = 1, and only considered the setting where constraints are placed on literals rather than variables.
Our positive results show wide applicability of bias-based algorithms used previously by [47] and [41], which we extend to include richer norm estimation algorithms, by giving a systematic way to discover biases. Our negative results combine the Fourier analytic methods of [56], which we extend to a wider class of CSPs, with a rich collection of reductions among communication complexity problems that lie at the heart of the negative results. In particular, previous works used Fourier analysis over the Boolean cube to initiate their results, and the results seemed particularly tailored to functions on Boolean literals (i.e., with negations). Our techniques surprisingly allow us to get to general q-ary CSPs without negations by appealing to the same Fourier analytic starting point over Boolean hypercubes.
We prove tight upper and lower bounds on approximation ratios of all Boolean Max-2CSP problems in the streaming model. Specifically, for every type of Max-2CSP problem, we give an explicit constant α such that for any ε > 0, (i) there is an (α − ε)-streaming approximation using space O(log n); and (ii) any (α + ε)-streaming approximation requires space Ω(√n). This generalizes the celebrated work of Kapralov, Khanna, Sudan (SODA 2015) and Kapralov, Krachun (STOC 2019), who showed that the optimal approximation ratio for Max-CUT was 1/2. Prior to this work, the problem of determining this ratio was open for all other Max-2CSPs. Our results are quite surprising for some specific Max-2CSPs. For the Max-DICUT problem, there was a gap between an upper bound of 1/2 and a lower bound of 2/5 (Guruswami, Velingker, Velusamy, APPROX 2017). We show that neither of these bounds is tight, and the optimal ratio for Max-DICUT is 4/9. We also establish that the tight approximation for Max-2SAT is √2/2, and for Exact Max-2SAT it is 3/4. As a byproduct, our result gives a separation between space-efficient approximations for Max-2SAT and Exact Max-2SAT. This is in sharp contrast to the setting of polynomial-time algorithms with polynomial space, where the two problems are known to be equally hard to approximate. Finally, we prove that the tight streaming approximation for Max-kSAT is √2/2 for every k ≥ 2.
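The 1/2 ratio for Max-CUT that this line of work proves optimal (for o(√n)-space streaming) is achieved by an almost trivially simple algorithm: since a uniformly random cut satisfies each edge with probability 1/2, OPT lies between m/2 and m, so outputting m/2 is a 1/2-approximation of the optimal cut value in O(log n) space. A minimal sketch (hypothetical helper name; this is the folklore baseline, not the paper's new algorithms):

```python
def maxcut_half_approx_stream(edge_stream):
    """Folklore O(log n)-space streaming 1/2-approximation of Max-CUT.

    Count the edges m in one pass and output m/2. Since m/2 <= OPT <= m,
    the estimate is within a factor 1/2 of the optimal cut value.
    """
    m = 0
    for _edge in edge_stream:
        m += 1  # only a counter is stored, never the graph
    return m / 2

# Triangle: OPT = 2, and the estimate 1.5 lies in [OPT/2, OPT].
print(maxcut_half_approx_stream([(0, 1), (1, 2), (2, 0)]))  # 1.5
```

The hard part, and the content of the results above, is showing that beating such edge-counting-style estimates by any constant ε requires Ω(√n) space, and pinning down the analogous optimal constants (4/9, √2/2, 3/4) for the other Max-2CSPs.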