On the Quantitative Hardness of CVP
Bennett, Huck; Stephens-Davidowitz, Noah; Golovnev, Alexander
2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS), 2017-Oct.
Conference Proceeding
Open Access
For odd integers \(p \geq 1\) (and \(p = \infty\)), we show that the Closest Vector Problem in the \(\ell_p\) norm (\(\mathrm{CVP}_p\)) over rank-\(n\) lattices cannot be solved in \(2^{(1-\varepsilon)n}\) time for any constant \(\varepsilon > 0\) unless the Strong Exponential Time Hypothesis (SETH) fails. We then extend this result to "almost all" values of \(p \geq 1\), not including the even integers. This comes tantalizingly close to settling the quantitative time complexity of the important special case of \(\mathrm{CVP}_2\) (i.e., CVP in the Euclidean norm), for which a \(2^{n+o(n)}\)-time algorithm is known. In particular, our result applies for any \(p = p(n) \neq 2\) that approaches 2 as \(n \to \infty\). We also show a similar SETH-hardness result for \(\mathrm{SVP}_\infty\); hardness of approximating \(\mathrm{CVP}_p\) to within some constant factor under the so-called Gap-ETH assumption; and other hardness results for \(\mathrm{CVP}_p\) and \(\mathrm{CVPP}_p\) for any \(1 \leq p < \infty\) under different assumptions.
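For concreteness, here is a minimal Python sketch of the \(\mathrm{CVP}_p\) search problem itself, solved by exhaustive search over a bounded coefficient box. This illustrates the problem statement only, not any algorithm from the abstract; the instance and the search-box bound are assumptions for the example, and the running time is exponential in \(n\).

```python
from itertools import product

import numpy as np


def cvp_bruteforce(B, t, p, coeff_bound=3):
    """Exhaustive-search CVP_p: minimize ||Bx - t||_p over integer
    coefficient vectors x in [-coeff_bound, coeff_bound]^n.
    Only correct if the box contains the true closest vector."""
    n = B.shape[1]
    best_v, best_d = None, float("inf")
    for x in product(range(-coeff_bound, coeff_bound + 1), repeat=n):
        v = B @ np.array(x)  # a lattice vector
        d = np.linalg.norm(v - t, ord=p)  # ord=np.inf gives the l_infinity norm
        if d < best_d:
            best_v, best_d = v, d
    return best_v, best_d


# Example: closest lattice vector to t in the l_1 norm
B = np.array([[2, 0], [1, 2]])  # lattice basis (columns generate the lattice)
t = np.array([1.4, 0.6])
v, d = cvp_bruteforce(B, t, p=1)
```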
Range Avoidance (AVOID) is a total search problem where, given a Boolean circuit \(C\colon\{0,1\}^n\to\{0,1\}^m\), \(m>n\), the task is to find a \(y\in\{0,1\}^m\) outside the range of \(C\). For an integer \(k\geq 2\), \(\mathrm{NC}^0_k\)-AVOID is a special case of AVOID where each output bit of \(C\) depends on at most \(k\) input bits. While there is a very natural randomized algorithm for AVOID, a deterministic algorithm for the problem would have many interesting consequences. Ren, Santhanam, and Wang (FOCS 2022) and Guruswami, Lyu, and Wang (RANDOM 2022) proved that explicit constructions of functions of high formula complexity, rigid matrices, and optimal linear codes reduce to \(\mathrm{NC}^0_4\)-AVOID, thus establishing conditional hardness of the \(\mathrm{NC}^0_4\)-AVOID problem. On the other hand, \(\mathrm{NC}^0_2\)-AVOID admits polynomial-time algorithms, leaving the complexity of \(\mathrm{NC}^0_3\)-AVOID open. We give the first reduction of an explicit construction question to \(\mathrm{NC}^0_3\)-AVOID. Specifically, we prove that a polynomial-time algorithm (with an \(\mathrm{NP}\) oracle) for \(\mathrm{NC}^0_3\)-AVOID for the case of \(m=n+n^{2/3}\) would imply an explicit construction of a rigid matrix, and, thus, a super-linear lower bound on the size of log-depth circuits. We also give deterministic polynomial-time algorithms for all \(\mathrm{NC}^0_k\)-AVOID problems for \(m\geq n^{k-1}/\log(n)\). Prior work required an \(\mathrm{NP}\) oracle and larger stretch, \(m \geq n^{k-1}\).
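The "very natural randomized algorithm" mentioned above can be sketched in a few lines: since \(m > n\), a uniformly random \(y \in \{0,1\}^m\) lies outside the range of \(C\) with probability at least \(1 - 2^{n-m} \geq 1/2\). The sketch below (with a toy \(\mathrm{NC}^0_2\)-style circuit of our own construction) certifies the sample by brute-force enumeration of the range, which is only feasible for tiny \(n\); doing this step deterministically and efficiently is exactly the open problem.

```python
import random


def avoid_randomized(C, n, m, trials=100):
    """Natural randomized algorithm for AVOID: sample uniform y in {0,1}^m;
    y avoids the range of C: {0,1}^n -> {0,1}^m with probability
    >= 1 - 2^(n-m) >= 1/2. Certification by enumerating the range
    takes 2^n time, so it is for illustration only."""
    image = {C(x) for x in range(2 ** n)}  # |image| <= 2^n < 2^m
    for _ in range(trials):
        y = random.getrandbits(m)  # uniform m-bit string, encoded as an int
        if y not in image:
            return y  # certified outside the range of C
    raise RuntimeError("unlucky: no non-image point sampled")


def C(x, n=3, m=5):
    """Toy circuit: each output bit depends on at most 2 input bits."""
    b = [(x >> i) & 1 for i in range(n)]
    out = [b[0] ^ b[1], b[1] & b[2], b[0], b[2], b[0] | b[1]]
    return sum(bit << i for i, bit in enumerate(out))


y = avoid_randomized(C, n=3, m=5)
```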
For \(S\subseteq \mathbb{F}^n\), consider the linear space of restrictions of degree-\(d\) polynomials to \(S\). The Hilbert function of \(S\), denoted \(\mathrm{h}_S(d,\mathbb{F})\), is the dimension of this space. We obtain a tight lower bound on the smallest value of the Hilbert function of subsets \(S\) of arbitrary finite grids in \(\mathbb{F}^n\) with a fixed size \(|S|\). We achieve this by proving that this value coincides with a combinatorial quantity, namely the smallest number of low Hamming weight points in a down-closed set of size \(|S|\). Understanding the smallest values of Hilbert functions is closely related to the study of degree-\(d\) closure of sets, a notion introduced by Nie and Wang (Journal of Combinatorial Theory, Series A, 2015). We use bounds on the Hilbert function to obtain a tight bound on the size of degree-\(d\) closures of subsets of \(\mathbb{F}_q^n\), which answers a question posed by Doron, Ta-Shma, and Tell (Computational Complexity, 2022). We use the bounds on the Hilbert function and degree-\(d\) closure of sets to prove that a random low-degree polynomial is an extractor for samplable randomness sources. Most notably, we prove the existence of low-degree extractors and dispersers for sources generated by constant-degree polynomials and polynomial-size circuits. Until recently, even the existence of arbitrary deterministic extractors for such sources was not known.
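For intuition, \(\mathrm{h}_S(d,\mathbb{F})\) can be computed directly on small instances as the rank of a monomial-evaluation matrix: one row per point of \(S\), one column per monomial of degree at most \(d\). A minimal sketch over \(\mathbb{F}_2\) (our own illustration, not a method from the abstract), with rows packed into integer bitmasks and rank computed by Gaussian elimination mod 2:

```python
from itertools import combinations


def hilbert_function_f2(S, n, d):
    """h_S(d, F_2): F_2-rank of the matrix of evaluations of all
    multilinear monomials of degree <= d at the points of S."""
    monomials = [frozenset(c) for k in range(d + 1)
                 for c in combinations(range(n), k)]
    rows = []
    for x in S:
        row = 0
        for j, mon in enumerate(monomials):
            if all(x[i] == 1 for i in mon):  # monomial value at x over F_2
                row |= 1 << j
        rows.append(row)
    # Gaussian elimination over F_2 on bitmask rows
    rank = 0
    for _ in range(len(rows)):
        pivot = next((r for r in rows if r), None)
        if pivot is None:
            break
        rank += 1
        low = pivot & -pivot  # pivot column = lowest set bit
        rows = [r ^ pivot if r & low else r for r in rows if r != pivot]
    return rank


# All four points of F_2^2; degree-1 restrictions span {1, x0, x1}, dim 3
S = [(0, 0), (0, 1), (1, 0), (1, 1)]
h = hilbert_function_f2(S, n=2, d=1)
```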
A constraint satisfaction problem (CSP), \(\textsf{Max-CSP}(\mathcal{F})\), is specified by a finite set of constraints \(\mathcal{F} \subseteq \{f\colon [q]^k \to \{0,1\}\}\) for positive integers \(q\) and \(k\). An instance of the problem on \(n\) variables is given by \(m\) applications of constraints from \(\mathcal{F}\) to subsequences of the \(n\) variables, and the goal is to find an assignment to the variables that satisfies the maximum number of constraints. In the \((\gamma,\beta)\)-approximation version of the problem for parameters \(0 \leq \beta < \gamma \leq 1\), the goal is to distinguish instances where at least a \(\gamma\) fraction of the constraints can be satisfied from instances where at most a \(\beta\) fraction of the constraints can be satisfied. In this work we consider the approximability of this problem in the context of sketching algorithms and give a dichotomy result. Specifically, for every family \(\mathcal{F}\) and every \(\beta < \gamma\), we show that either a linear sketching algorithm solves the problem in polylogarithmic space, or the problem is not solvable by any sketching algorithm in \(o(\sqrt{n})\) space. In particular, we give non-trivial approximation algorithms using polylogarithmic space for infinitely many constraint satisfaction problems. We also extend previously known lower bounds for general streaming algorithms to a wide variety of problems, in particular the case of \(q=k=2\), where we get a dichotomy, and the case where the satisfying assignments of the constraints of \(\mathcal{F}\) support a distribution on \([q]^k\) with uniform marginals. Prior to this work, other than sporadic examples, the only classes of CSPs analyzed systematically were those with Boolean variables (\(q=2\)), binary constraints (\(k=2\)), and singleton families (\(|\mathcal{F}|=1\)), and only in the setting where constraints are placed on literals rather than variables.
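To make the problem definition concrete, here is a brute-force evaluator for \(\textsf{Max-CSP}(\mathcal{F})\) instances (exponential in \(n\), so purely illustrative; the sketching algorithms in the abstract instead approximate this value in polylogarithmic space). The Max-CUT encoding below is a standard example of the \(q=k=2\) case, not taken from the paper.

```python
from itertools import product


def max_csp_value(n, q, instance):
    """Exact Max-CSP by exhaustive search. `instance` is a list of
    (constraint, variable_tuple) pairs, where each constraint maps a
    tuple in [q]^k to 0/1. Returns the max fraction of satisfied
    constraints over all q^n assignments."""
    best = 0
    for assignment in product(range(q), repeat=n):
        sat = sum(f(tuple(assignment[v] for v in vars_))
                  for f, vars_ in instance)
        best = max(best, sat)
    return best / len(instance)


# Max-CUT as a Max-CSP with q = k = 2: the "not equal" constraint
neq = lambda t: int(t[0] != t[1])
triangle = [(neq, (0, 1)), (neq, (1, 2)), (neq, (0, 2))]  # a 3-cycle
val = max_csp_value(n=3, q=2, instance=triangle)
```

On the 3-cycle at most two of the three edges can be cut, so `val` is 2/3.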
We present a new framework for designing worst-case to average-case reductions. For a large class of problems, it provides an explicit transformation of algorithms running in time \(T\) that are only correct on a small (subconstant) fraction of their inputs into algorithms running in time \(\widetilde{O}(T)\) that are correct on all inputs. Using our framework, we obtain such efficient worst-case to average-case reductions for fundamental problems in a variety of computational models; namely, algorithms for matrix multiplication, streaming algorithms for the online matrix-vector multiplication problem, and static data structures for all linear problems as well as for the multivariate polynomial evaluation problem. Our techniques crucially rely on additive combinatorics. In particular, we show a local correction lemma that relies on a new probabilistic version of the quasi-polynomial Bogolyubov-Ruzsa lemma.
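To illustrate the flavor of such reductions, here is the classical random self-reduction for matrix multiplication over a prime field. This is a textbook warm-up that only tolerates an oracle wrong on a small constant fraction of inputs; the framework above is much stronger, handling subconstant success probability. The faulty oracle below is an artificial stand-in of our own.

```python
import numpy as np

Q = 97  # a prime modulus; we work over F_Q
rng = np.random.default_rng(0)


def faulty_mm(A, B):
    """Stand-in average-case oracle: multiplies correctly except on a
    small set of inputs (here, whenever A[0, 0] == 0)."""
    if A[0, 0] == 0:
        return np.zeros_like(A)  # wrong answer on a small input fraction
    return (A @ B) % Q


def self_corrected_mm(A, B):
    """Classical random self-reduction. By the identity
        (A+R)(B+S) - (A+R)S - R(B+S) + RS = AB,
    each of the four oracle calls sees a uniformly distributed input
    pair, so an oracle wrong on a delta fraction of inputs makes this
    wrong with probability at most 4*delta (amplify by majority vote)."""
    R = rng.integers(0, Q, size=A.shape)
    S = rng.integers(0, Q, size=B.shape)
    return (faulty_mm((A + R) % Q, (B + S) % Q)
            - faulty_mm((A + R) % Q, S)
            - faulty_mm(R, (B + S) % Q)
            + faulty_mm(R, S)) % Q


A = np.arange(9).reshape(3, 3) % Q  # A[0, 0] == 0: faulty_mm alone fails here
B = np.arange(9, 18).reshape(3, 3)
corrected = self_corrected_mm(A, B)  # equals (A @ B) % Q with prob >= 1 - 4/Q
```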
We study the Matrix Multiplication Verification Problem (MMV) where the goal is, given three \(n \times n\) matrices \(A\), \(B\), and \(C\) as input, to decide whether \(AB = C\). A classic randomized algorithm by Freivalds (MFCS, 1979) solves MMV in \(\widetilde{O}(n^2)\) time, and a longstanding challenge is to (partially) derandomize it while still running in faster than matrix multiplication time (i.e., in \(o(n^{\omega})\) time). To that end, we give two algorithms for MMV in the case where \(AB - C\) is sparse. Specifically, when \(AB - C\) has at most \(O(n^{\delta})\) non-zero entries for a constant \(0 \leq \delta < 2\), we give (1) a deterministic \(O(n^{\omega - \varepsilon})\)-time algorithm for constant \(\varepsilon = \varepsilon(\delta) > 0\), and (2) a randomized \(\widetilde{O}(n^2)\)-time algorithm using \(\delta/2 \cdot \log_2 n + O(1)\) random bits. The former algorithm is faster than the deterministic algorithm of Künnemann (ESA, 2018) when \(\delta \geq 1.056\), and the latter algorithm uses fewer random bits than the algorithm of Kimbrel and Sinha (IPL, 1993), which runs in the same time and uses \(\log_2 n + O(1)\) random bits (in turn fewer than Freivalds's algorithm). We additionally study the complexity of MMV. We first show that all algorithms in a natural class of deterministic linear algebraic algorithms for MMV (including ours) require \(\Omega(n^{\omega})\) time. We also show a barrier to proving a super-quadratic running time lower bound for matrix multiplication (and hence MMV) under the Strong Exponential Time Hypothesis (SETH). Finally, we study relationships between natural variants and special cases of MMV (with respect to deterministic \(\widetilde{O}(n^2)\)-time reductions).
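Freivalds's classic check is simple enough to state in full: for a random \(r \in \{0,1\}^n\), compare \(A(Br)\) with \(Cr\) using three matrix-vector products. A short Python sketch (of the classic algorithm only; the abstract's low-randomness variant for sparse \(AB - C\) is more involved):

```python
import numpy as np


def freivalds(A, B, C, reps=30):
    """Freivalds's O(n^2)-time randomized check of AB = C. If AB != C,
    a single round accepts with probability <= 1/2, so `reps` rounds
    err with probability <= 2^-reps. Uses n random bits per round;
    the sparse-case algorithm in the abstract needs only
    ~ (delta/2) log_2 n bits in total."""
    n = A.shape[0]
    rng = np.random.default_rng()
    for _ in range(reps):
        r = rng.integers(0, 2, size=n)  # uniform r in {0,1}^n
        if not np.array_equal(A @ (B @ r), C @ r):
            return False  # certified: AB != C
    return True  # probably AB == C


A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])
C_good = A @ B
C_bad = C_good.copy(); C_bad[0, 0] += 1  # AB - C has a single non-zero entry
```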
This paper shows several connections between data structure problems and cryptography against preprocessing attacks. Our results span data structure upper bounds, cryptographic applications, and data structure lower bounds, as summarized next. First, we apply Fiat-Naor inversion, a technique with cryptographic origins, to obtain a data structure upper bound. In particular, our technique yields a suite of algorithms with space \(S\) and (online) time \(T\) for a preprocessing version of the \(N\)-input 3SUM problem where \(S^3\cdot T = \widetilde{O}(N^6)\). This disproves a strong conjecture (Goldstein et al., WADS 2017) that there is no data structure that solves this problem for \(S=N^{2-\delta}\) and \(T = N^{1-\delta}\) for any constant \(\delta>0\). Second, we show equivalence between lower bounds for a broad class of (static) data structure problems and one-way functions in the random oracle model that resist a very strong form of preprocessing attack. Concretely, given a random function \(F: [N] \to [N]\) (accessed as an oracle) we show how to compile it into a function \(G^F: [N^2] \to [N^2]\) which resists \(S\)-bit preprocessing attacks that run in query time \(T\) where \(ST=O(N^{2-\varepsilon})\) (assuming a corresponding data structure lower bound on 3SUM). In contrast, a classical result of Hellman tells us that \(F\) itself can be more easily inverted, say with \(N^{2/3}\)-bit preprocessing in \(N^{2/3}\) time. We also show that much stronger lower bounds follow from the hardness of kSUM. Our results can equivalently be interpreted as security against adversaries that are very non-uniform, or have large auxiliary input, or as security in the face of a powerfully backdoored random oracle. Third, we give non-adaptive lower bounds for 3SUM and a range of geometric problems which match the best known lower bounds for static data structure problems.
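As background, the underlying (no-preprocessing) 3SUM problem asks whether some triple of input numbers sums to zero; the standard sort-and-two-pointer \(O(N^2)\)-time solution below is the baseline against which the space-\(S\)/time-\(T\) preprocessing trade-offs above are measured. This is the textbook algorithm, not anything from the paper.

```python
def three_sum(nums):
    """Standard O(N^2) 3SUM: after sorting, fix the smallest element
    a[i] and sweep two pointers inward over the rest, moving the side
    that brings the sum toward zero. Returns a witness triple or None."""
    a = sorted(nums)
    n = len(a)
    for i in range(n - 2):
        lo, hi = i + 1, n - 1
        while lo < hi:
            s = a[i] + a[lo] + a[hi]
            if s == 0:
                return (a[i], a[lo], a[hi])
            if s < 0:
                lo += 1  # sum too small: advance the left pointer
            else:
                hi -= 1  # sum too large: retreat the right pointer
    return None
```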