Static data structure lower bounds imply rigidity Dvir, Zeev; Golovnev, Alexander; Weinstein, Omri
Proceedings of the 51st Annual ACM SIGACT Symposium on Theory of Computing,
06/2019
Conference Proceeding
Peer-reviewed
Open access
We show that static data structure lower bounds in the group (linear) model imply semi-explicit lower bounds on matrix rigidity. In particular, we prove that an explicit lower bound of t ≥ ω(log^2 n) on the cell-probe complexity of linear data structures in the group model, even against arbitrarily small linear space (s = (1+ε)n), would already imply a semi-explicit (P^NP) construction of rigid matrices with significantly better parameters than the current state of the art (Alon, Panigrahy and Yekhanin, 2009). Our results further assert that polynomial (t ≥ n^δ) data structure lower bounds against near-optimal space would imply super-linear circuit lower bounds for log-depth linear circuits (a four-decade open question). In the succinct space regime (s = n + o(n)), we show that any improvement on current cell-probe lower bounds in the linear model would also imply new rigidity bounds. Our results rely on a new connection between the "inner" and "outer" dimensions of a matrix (Paturi and Pudlák, 2006), and on a new reduction from worst-case to average-case rigidity, which is of independent interest.
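For context, two standard definitions used throughout (ours, not quoted from the abstract): the rigidity of a matrix M over a field is

\[
  \mathcal{R}_M(r) \;=\; \min\{\, \|M - A\|_0 \;:\; \operatorname{rank}(A) \le r \,\},
\]

the minimum number of entries that must be changed in M to bring its rank down to r, and M is called rigid when this quantity remains large even for large r. In the group (linear) model, a static data structure of space s stores s fixed linear combinations of the n input elements and answers each query by summing at most t of the stored cells; t is the cell-probe (query) time appearing in the bounds above.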
On the limits of gate elimination Golovnev, Alexander; Hirsch, Edward A.; Knop, Alexander ...
Journal of computer and system sciences,
September 2018, Volume 96
Journal Article
Peer-reviewed
Although a simple counting argument shows the existence of Boolean functions of exponential circuit complexity, proving superlinear circuit lower bounds for explicit functions seems to be out of reach of current techniques. There has been (very slow) progress in proving linear lower bounds, with the latest record of (3 + 1/86)n − o(n). All known lower bounds are based on the so-called gate elimination technique. A typical gate elimination argument shows that it is possible to eliminate several gates from a circuit by making one or several substitutions to the input variables, and then repeats this step inductively. In this paper we prove that this method cannot achieve linear bounds of cn beyond a certain constant c, where c depends only on the number of substitutions made at a single step of the induction.
• Gate elimination with O(1) circuit complexity decrease only proves O(n) lower bounds.
• Circuit complexity is reduced by O(1) after O(1) arbitrary substitutions.
• Subadditive complexity is reduced by O(1) after O(1) arbitrary substitutions.
• Gate elimination counting gates and inputs cannot prove a bound of cn for explicit c.
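For intuition, here is the arithmetic behind a typical gate elimination induction (a standard back-of-the-envelope sketch, not a claim from the paper): if each step makes at most s substitutions and eliminates at least g gates, then the induction can run for roughly n/s steps before all inputs are fixed, giving

\[
  \mathrm{gates}(f) \;\ge\; g \cdot \lfloor n/s \rfloor \;=\; (g/s)\, n - O(1),
\]

i.e., a linear bound cn with c = g/s determined by the two constants. The paper's results show that g (more generally, the decrease of any subadditive complexity measure) stays O(1) whenever s is O(1), so this ceiling on c is inherent to the method.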
This paper shows several connections between data structure problems and cryptography against preprocessing attacks. Our results span data structure upper bounds, cryptographic applications, and data structure lower bounds, as summarized next.
First, we apply Fiat-Naor inversion, a technique with cryptographic origins, to obtain a data structure upper bound. In particular, our technique yields a suite of algorithms with space S and (online) time T for a preprocessing version of the N-input 3SUM problem where S^3 · T = O(N^6). This disproves a strong conjecture (Goldstein et al., WADS 2017) that there is no data structure that solves this problem for S = N^{2−δ} and T = N^{1−δ} for any constant δ > 0.
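To see why this contradicts the conjecture, plug the conjectured space bound into the tradeoff (our arithmetic, using only the two displayed bounds):

\[
  S^{3} \cdot T = O(N^{6}), \quad S = N^{2-\delta} \;\Longrightarrow\; T = O\!\left(N^{6-3(2-\delta)}\right) = O\!\left(N^{3\delta}\right),
\]

which falls below the conjectured time lower bound T = N^{1−δ} whenever 3δ < 1 − δ, i.e., for every constant δ < 1/4.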
Secondly, we show equivalence between lower bounds for a broad class of (static) data structure problems and one-way functions in the random oracle model that resist a very strong form of preprocessing attack. Concretely, given a random function F : [N] → [N] (accessed as an oracle), we show how to compile it into a function G_F : [N^2] → [N^2] which resists S-bit preprocessing attacks that run in query time T where ST = O(N^{2−ε}) (assuming a corresponding data structure lower bound on 3SUM). In contrast, a classical result of Hellman tells us that F itself can be more easily inverted, say with N^{2/3}-bit preprocessing in N^{2/3} time. We also show that much stronger lower bounds follow from the hardness of kSUM. Our results can be equivalently interpreted as security against adversaries that are very non-uniform, or have large auxiliary input, or as security in the face of a powerfully backdoored random oracle.
Thirdly, we give non-adaptive lower bounds for 3SUM which match the best known lower bounds for static data structure problems. Moreover, we show that our lower bound generalizes to a range of geometric problems, such as three points on a line, polygon containment, and others.
Families with Infants Golovnev, Alexander; Kulikov, Alexander S.; Mihajlin, Ivan
ACM transactions on algorithms,
06/2016, Volume 12, Issue 3
Journal Article
Peer-reviewed
Assume that a group of n people is going to an excursion and our task is to seat them into buses with several constraints, each saying that a pair of people does not want to see each other in the same bus. This is a well-known graph coloring problem (with n being the number of vertices), and it can be solved in O*(2^n) time by the inclusion-exclusion principle, as shown by Björklund, Husfeldt, and Koivisto in 2009. Another approach to solve this problem in O*(2^n) time is to use the Fast Fourier Transform (FFT). For this, given a graph G, one constructs a polynomial P_G(x) of degree O*(2^n) with the following property: G is k-colorable if and only if the coefficient of x^m (for some particular value of m) in the k-th power of P_G(x) is nonzero. Then, it remains to compute this coefficient using FFT.
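As a concrete illustration of the first, inclusion-exclusion approach, here is a minimal sketch in Python (our code, not the paper's; the FFT approach described above instead extracts the relevant coefficient of P_G(x)^k):

def is_k_colorable(adj, n, k):
    """O*(2^n) inclusion-exclusion test for k-colorability (Bjorklund,
    Husfeldt, Koivisto).  adj[v] is a bitmask of the neighbors of vertex v."""
    # ind[S] = number of independent subsets of the vertex set S (a bitmask),
    # counting the empty set.  Pick the lowest vertex v in S: subsets avoiding
    # v are counted by ind[S \ {v}], and subsets containing v by ind[S \ N[v]]
    # (v and all of its neighbors removed).
    ind = [0] * (1 << n)
    ind[0] = 1
    for S in range(1, 1 << n):
        v = (S & -S).bit_length() - 1  # index of the lowest set bit of S
        ind[S] = ind[S & ~(1 << v)] + ind[S & ~(adj[v] | (1 << v))]
    # ind[S] ** k counts k-tuples of independent sets inside S; summing with
    # alternating signs keeps exactly the tuples whose union is all of V,
    # and such a tuple exists iff the graph is k-colorable.
    total = sum((-1) ** (n - bin(S).count("1")) * ind[S] ** k
                for S in range(1 << n))
    return total > 0

# Tiny usage example: a triangle needs 3 buses (colors), not 2.
adj = [0b110, 0b101, 0b011]  # vertex 0 ~ {1,2}, 1 ~ {0,2}, 2 ~ {0,1}
assert not is_k_colorable(adj, 3, 2)
assert is_k_colorable(adj, 3, 3)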
Assume now that we have additional constraints: the group of people contains several infants and these infants should be accompanied by their relatives in a bus. We show that if the number of infants is linear, then the problem can be solved in O*((2 − ε)^n) time, where ε is a positive constant independent of n. We use this approach to improve known bounds for several NP-hard problems (the traveling salesman problem, the graph coloring problem, the problem of counting perfect matchings) on graphs of bounded average degree, as well as to simplify the proofs of several known results.
A constraint satisfaction problem (CSP), \text{Max}-\text{CSP}(\mathcal{F}), is specified by a finite set of constraints \mathcal{F} \subseteq \{f : [q]^{k} \rightarrow \{0,1\}\} for positive integers q and k. An instance of the problem on n variables is given by m applications of constraints from \mathcal{F} to subsequences of the n variables, and the goal is to find an assignment to the variables that satisfies the maximum number of constraints. In the (\gamma, \beta)-approximation version of the problem, for parameters 0 \leq \beta < \gamma \leq 1, the goal is to distinguish instances where at least a \gamma fraction of the constraints can be satisfied from instances where at most a \beta fraction of the constraints can be satisfied. In this work we consider the approximability of this problem in the context of sketching algorithms and give a dichotomy result. Specifically, for every family \mathcal{F} and every \beta < \gamma, we show that either a linear sketching algorithm solves the problem in polylogarithmic space, or the problem is not solvable by any sketching algorithm in o(\sqrt{n}) space. We also extend previously known lower bounds for general streaming algorithms to a wide variety of problems, in particular the case q = k = 2, where we get a dichotomy, and the case when the satisfying assignments of f \in \mathcal{F} support a distribution on [q]^{k} with uniform marginals. Prior to this work, other than sporadic examples, the only systematic classes of CSPs that were analyzed considered the setting of Boolean variables q = 2, binary constraints k = 2, singleton families |\mathcal{F}| = 1, and only considered the setting where constraints are placed on literals rather than variables.

Our positive results show wide applicability of the bias-based algorithms used previously in [2] and [3], which we extend to include richer norm estimation algorithms, by giving a systematic way to discover biases. Our negative results combine the Fourier analytic methods of [4], which we extend to a wider class of CSPs, with a rich collection of reductions among communication complexity problems that lie at the heart of the negative results. In particular, previous works used Fourier analysis over the Boolean cube to initiate their results, and the results seemed particularly tailored to functions on Boolean literals (i.e., with negations). Our techniques surprisingly allow us to get to general q-ary CSPs without negations by appealing to the same Fourier analytic starting point over Boolean hypercubes.
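As a hedged illustration of the bias-based approach in the simplest Boolean case (our notation, loosely following the earlier Max-DICUT line of work rather than this paper's general construction): for an instance with m constraints one can define

\[
  \mathrm{bias}(i) \;=\; \#\{\text{constraints using } x_i\} \;-\; \#\{\text{constraints using } \bar{x}_i\},
\]

and a linear sketch that estimates the \ell_1 norm \sum_i |\mathrm{bias}(i)| in polylogarithmic space already yields non-trivial approximations for problems such as Max-DICUT; the positive side of the dichotomy systematizes how such biases are discovered and estimated, with richer norm estimators, for general q-ary families.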
Lattice Problems beyond Polynomial Time Aggarwal, Divesh; Bennett, Huck; Brakerski, Zvika ...
Proceedings of the 55th Annual ACM Symposium on Theory of Computing,
06/2023
Conference Proceeding
Peer-reviewed
Open access
We study the complexity of lattice problems in a world where algorithms, reductions, and protocols can run in superpolynomial time. Specifically, we revisit four foundational results in this context: two protocols and two worst-case to average-case reductions. We show how to improve the approximation factor in each result by a factor of roughly √n/log n when running the protocol or reduction in 2^{εn} time instead of polynomial time, and we show a novel protocol with no polynomial-time analog. Our results are as follows.
(1) We show a worst-case to average-case reduction proving that secret-key cryptography (specifically, collision-resistant hash functions) exists if the (decision version of the) Shortest Vector Problem (SVP) cannot be approximated to within a factor of Õ(√n) in 2^{εn} time. This extends to our setting Ajtai's celebrated polynomial-time reduction for the Short Integer Solutions (SIS) problem (1996), which showed (after improvements by Micciancio and Regev (2004, 2007)) that secret-key cryptography exists if SVP cannot be approximated to within a factor of Õ(n) in polynomial time.
(2) We show another worst-case to average-case reduction proving that public-key cryptography exists if SVP cannot be approximated to within a factor of Õ(n) in 2^{εn} time. This extends Regev's celebrated polynomial-time reduction for the Learning with Errors (LWE) problem (2005, 2009), which achieved an approximation factor of Õ(n^{1.5}). In fact, Regev's reduction is quantum, but we prove our result under a classical reduction, generalizing Peikert's polynomial-time classical reduction (2009), which achieved an approximation factor of Õ(n^2).
(3) We show that the (decision version of the) Closest Vector Problem (CVP) with a constant approximation factor has a coAM protocol with a 2^{εn}-time verifier. We prove this via a (very simple) generalization of the celebrated polynomial-time protocol due to Goldreich and Goldwasser (1998, 2000). It follows that the recent series of 2^{εn}-time and even 2^{(1−ε)n}-time hardness results for CVP cannot be extended to large constant approximation factors γ unless AMETH is false. We also rule out 2^{(1−ε)n}-time lower bounds for any constant approximation factor γ > √2, under plausible complexity-theoretic assumptions. (These results also extend to arbitrary norms, with different constants.)
(4) We show that O(√log n)-approximate SVP has a coNTIME protocol with a 2^{εn}-time verifier. Here, the analogous (also celebrated!) polynomial-time result is due to Aharonov and Regev (2005), who showed a polynomial-time protocol achieving an approximation factor of √n (for both SVP and CVP, while we only achieve this result for CVP). This result implies similar barriers to hardness, with a larger approximation factor, under weaker complexity-theoretic conjectures (as does the next result).
(5) Finally, we give a novel coMA protocol for constant-factor-approximate CVP with a 2^{εn}-time verifier. Unlike our other results, this protocol has no known analog in the polynomial-time regime.
All of the results described above are special cases of more general theorems that achieve time-approximation factor tradeoffs. In particular, the tradeoffs for the first four results smoothly interpolate from the polynomial-time results in prior work to our new results in the exponential-time world.
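Collecting the five items into a single view (the polynomial-time factor in row (3) is the one achieved by Goldreich and Goldwasser's protocol; all other entries are stated above):

  Result                          poly-time factor             2^{εn}-time factor
  (1) SIS / secret-key crypto     Õ(n)                         Õ(√n)
  (2) LWE / public-key crypto     Õ(n^{1.5}) quantum,          Õ(n) classical
                                  Õ(n^2) classical
  (3) coAM protocol for CVP       O(√(n/log n))                O(1)
  (4) coNTIME protocol for SVP    √n (Aharonov and Regev)      O(√log n)
  (5) coMA protocol for CVP       none known                   O(1)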
Proving super-logarithmic data structure lower bounds in the static group model has been a fundamental challenge in computational geometry since the early 1980s. We prove a polynomial (n^{\Omega(1)}) lower bound for an explicit range counting problem of n^{3} convex polygons in \mathbb{R}^{2} (each with n^{\tilde{O}(1)} facets/semialgebraic complexity), against linear storage arithmetic data structures in the group model. Our construction and analysis are based on a combination of techniques in Diophantine approximation, pseudorandomness, and compressed sensing; in particular, on the existence and partial derandomization of optimal binary compressed sensing matrices in the polynomial sparsity regime (k = n^{1-\delta}). As a byproduct, this establishes a (logarithmic) separation between compressed sensing matrices and the stronger RIP property.
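For reference, the RIP property mentioned in the last sentence is the standard restricted isometry property (our statement of it): a matrix A is (k, \delta)-RIP if

\[
  (1 - \delta)\,\|x\|_2^2 \;\le\; \|Ax\|_2^2 \;\le\; (1 + \delta)\,\|x\|_2^2 \quad \text{for every } k\text{-sparse vector } x,
\]

whereas a compressed sensing matrix only needs to allow recovery of k-sparse vectors from their measurements, a formally weaker requirement; the byproduct above makes this gap quantitative.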
We consider the approximability of constraint satisfaction problems in the streaming setting. For every constraint satisfaction problem (CSP) on n variables taking values in {0,…,q−1}, we prove that improving over the trivial approximability by a factor of q requires Ω(n) space even on instances with O(n) constraints. We also identify a broad subclass of problems for which any improvement over the trivial approximability requires Ω(n) space. The key technical core is an optimal, q^{−(k−1)}-inapproximability for the Max k-LIN-mod q problem, which is the Max CSP problem where every constraint is given by a system of k−1 linear equations mod q over k variables.
Our work builds on and extends the breakthrough work of Kapralov and Krachun (Proc. STOC 2019), who showed a linear lower bound on any non-trivial approximation of the MaxCut problem in graphs. MaxCut corresponds roughly to the case of Max k-LIN-mod q with k = q = 2. For general CSPs in the streaming setting, prior results only yielded Ω(√n) space bounds; in particular, no linear-space lower bound was known for an approximation factor less than 1/2 for any CSP. Extending the work of Kapralov and Krachun to Max k-LIN-mod q with k > 2 and q > 2 (while obtaining optimal hardness results) is the main technical contribution of this work. Each of these extensions presents non-trivial technical challenges that we overcome here.
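For concreteness, the smallest case k = q = 2 of a Max k-LIN-mod q constraint is a single linear equation (k − 1 = 1) over two variables, for example

\[
  x_i + x_j \equiv 1 \pmod{2},
\]

which an assignment satisfies exactly when the edge \{i, j\} is cut; thus Max 2-LIN-mod 2 captures MaxCut, and the trivial approximation factor q^{-(k-1)} = 1/2 is what a uniformly random assignment achieves in expectation.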
It is still not known whether a shortest common superstring (SCS) of n input strings can be found faster than in O*(2^n) time (O*(·) suppresses polynomial factors of the input length). In this short note, we show that for any constant r, SCS for strings of length at most r can be solved in time O*(2^{(1−c(r))n}), where c(r) = (1 + 2r^2)^{−1}. For this, we introduce so-called hierarchical graphs that allow us to reduce SCS on strings of length at most r to the directed rural postman problem on a graph with at most k = (1 − c(r))n weakly connected components. One can then use a recent O*(2^k)-time algorithm by Gutin, Wahlström, and Yeo.
• We study the shortest common superstring problem on strings of length at most r (r-SCS).
• We introduce hierarchical graphs to reduce the r-SCS problem to the directed rural postman problem (DRPP).
• We bound the number of weakly connected components in hierarchical graphs and invoke a recent algorithm by Gutin et al.
• The main result is a randomized 2^{n(1−Ω(r^{−2}))}-time algorithm for r-SCS.
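As a quick numerical check (our arithmetic, using the formula from the abstract): for r = 3, c(3) = 1/(1 + 2·3^2) = 1/19, so the algorithm runs in O*(2^{(18/19)n}) ≈ O*(1.93^n) time, strictly faster than the trivial O*(2^n), with the savings in the exponent degrading as Ω(r^{−2}), exactly as the last highlight states.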