We study the complexity of lattice problems in a world where algorithms, reductions, and protocols can run in superpolynomial time, revisiting four foundational results: two worst-case to average-case reductions and two protocols. We also show a novel protocol.
1. We prove that secret-key cryptography exists if \(\widetilde{O}(\sqrt{n})\)-approximate SVP is hard for \(2^{\varepsilon n}\)-time algorithms. That is, we extend to our setting (Micciancio and Regev's improved version of) Ajtai's celebrated polynomial-time worst-case to average-case reduction from \(\widetilde{O}(n)\)-approximate SVP to SIS.
2. We prove that public-key cryptography exists if \(\widetilde{O}(n)\)-approximate SVP is hard for \(2^{\varepsilon n}\)-time algorithms. This extends to our setting Regev's celebrated polynomial-time worst-case to average-case reduction from \(\widetilde{O}(n^{1.5})\)-approximate SVP to LWE. In fact, Regev's reduction is quantum, but ours is classical, generalizing Peikert's polynomial-time classical reduction from \(\widetilde{O}(n^2)\)-approximate SVP.
3. We show a \(2^{\varepsilon n}\)-time coAM protocol for \(O(1)\)-approximate CVP, generalizing the celebrated polynomial-time protocol for \(O(\sqrt{n/\log n})\)-CVP due to Goldreich and Goldwasser. These results show complexity-theoretic barriers to extending the recent line of fine-grained hardness results for CVP and SVP to larger approximation factors. (This result also extends to arbitrary norms.)
4. We show a \(2^{\varepsilon n}\)-time co-non-deterministic protocol for \(O(\sqrt{\log n})\)-approximate SVP, generalizing the (also celebrated!) polynomial-time protocol for \(O(\sqrt{n})\)-CVP due to Aharonov and Regev.
5. We give a novel coMA protocol for \(O(1)\)-approximate CVP with a \(2^{\varepsilon n}\)-time verifier.
All of the results described above are special cases of more general theorems that achieve time-approximation factor tradeoffs.
We prove tight upper and lower bounds on the approximation ratios of all Boolean Max-2CSP problems in the streaming model. Specifically, for every type of Max-2CSP problem, we give an explicit constant \(\alpha\) such that, for any \(\epsilon>0\), (i) there is an \((\alpha-\epsilon)\)-streaming approximation using space \(O(\log{n})\); and (ii) any \((\alpha+\epsilon)\)-streaming approximation requires space \(\Omega(\sqrt{n})\). This generalizes the celebrated work of Kapralov, Khanna, and Sudan (SODA 2015) and Kapralov and Krachun (STOC 2019), who showed that the optimal approximation ratio for Max-CUT is \(1/2\). Prior to this work, the problem of determining this ratio was open for all other Max-2CSPs. Our results are quite surprising for some specific Max-2CSPs. For the Max-DCUT problem, there was a gap between an upper bound of \(1/2\) and a lower bound of \(2/5\) (Guruswami, Velingker, and Velusamy, APPROX 2017). We show that neither of these bounds is tight, and that the optimal ratio for Max-DCUT is \(4/9\). We also establish that the tight approximation for Max-2SAT is \(\sqrt{2}/2\), and for Exact Max-2SAT it is \(3/4\). As a byproduct, our result gives a separation between space-efficient approximations for Max-2SAT and Exact Max-2SAT. This is in sharp contrast to the setting of polynomial-time algorithms with polynomial space, where the two problems are known to be equally hard to approximate. Finally, we prove that the tight streaming approximation for Max-\(k\)SAT is \(\sqrt{2}/2\) for every \(k\geq2\).
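To make the streaming setting concrete, here is the folklore \(O(\log n)\)-space \(1/2\)-approximation for Max-CUT that the cited lower bounds show is essentially optimal: since every graph has a cut containing at least half its edges (a uniformly random vertex assignment cuts each edge with probability \(1/2\)) and no cut exceeds \(m\), outputting \(m/2\) approximates the Max-CUT value within a factor of \(1/2\). This is a minimal sketch; the function name is ours.

```python
def maxcut_streaming_estimate(edge_stream):
    """Trivial O(log n)-space streaming 1/2-approximation of the Max-CUT value.

    Every graph has a cut with at least m/2 edges, and no cut has more
    than m edges, so m/2 is within a factor 1/2 of the optimum.
    """
    m = 0  # the only state kept: an edge counter, O(log n) bits
    for _ in edge_stream:
        m += 1
    return m / 2
```

For a triangle (Max-CUT value 2, \(m=3\)), the estimate \(3/2\) indeed lies between \(2 \cdot 1/2 = 1\) and \(2\).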
We consider Boolean circuits over the full binary basis. We prove a \((3+1/86)n-o(n)\) lower bound on the size of such a circuit for an explicitly defined predicate, namely an affine disperser for sublinear dimension. This improves the \(3n-o(n)\) bound of Norbert Blum (1984). The proof is based on the gate elimination technique extended with the following three ideas. First, we generalize the computational model by allowing circuits to contain cycles, which in turn allows us to perform affine substitutions. Second, we use a carefully chosen circuit complexity measure to track the progress of the gate elimination process. Finally, we use quadratic substitutions, which may be viewed as delayed affine substitutions.
This book constitutes the refereed proceedings of the 7th International Symposium on Parameterized and Exact Computation, IPEC 2012, held in Ljubljana, Slovenia, in September 2012. The 21 revised full papers presented together with 2 keynote talks were carefully reviewed and selected from 37 submissions. The topics addressed cover research in all aspects of parameterized/exact algorithms and complexity, including, but not limited to: new techniques for the design and analysis of parameterized and exact algorithms; fixed-parameter tractability results; parameterized complexity theory; relationships between parameterized complexity and traditional complexity classifications; applications of parameterized and exact computation; and implementation issues of parameterized and exact algorithms.
Collapsing Superstring Conjecture Golovnev, Alexander; Kulikov, Alexander S.; Logunov, Alexander
arXiv.org,
06/2020
Paper, Journal Article
Open Access
In the Shortest Common Superstring (SCS) problem, one is given a collection of strings, and needs to find a shortest string containing each of them as a substring. SCS admits a \(2\frac{11}{23}\)-approximation in polynomial time (Mucha, SODA'13). While this algorithm and its analysis are technically involved, the 30-year-old Greedy Conjecture claims that the trivial and efficient Greedy Algorithm gives a 2-approximation for SCS. We develop a graph-theoretic framework for studying approximation algorithms for SCS. The framework is reminiscent of the classical 2-approximation for Traveling Salesman: take two copies of an optimal solution, apply a trivial edge-collapsing procedure, and get an approximate solution. In this framework, we observe two surprising properties of SCS solutions, and we conjecture that they hold for all input instances. The first conjecture, which we call the Collapsing Superstring Conjecture, claims that there is an elementary way to transform any solution repeated twice into the same graph \(G\). This conjecture would give an elementary 2-approximation algorithm for SCS. The second conjecture claims not only that the resulting graph \(G\) is the same for all solutions, but that \(G\) can be computed by an elementary greedy procedure called the Greedy Hierarchical Algorithm. While the second conjecture clearly implies the first one, perhaps surprisingly we prove their equivalence. We support these equivalent conjectures by giving a proof for the special case where all input strings have length at most 3. We prove that the standard Greedy Conjecture implies the Greedy Hierarchical Conjecture, while the latter suffices for an efficient greedy 2-approximation of SCS. Besides its (conjectured) good approximation ratio, the Greedy Hierarchical Algorithm provably finds a 3.5-approximation.
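The trivial Greedy Algorithm referenced by the Greedy Conjecture can be sketched as follows: repeatedly merge the pair of strings with the largest suffix-prefix overlap until one string remains. This is a naive quadratic-per-step implementation for illustration (function names are ours), assuming the standard preprocessing that no input string is a substring of another.

```python
def overlap(a, b):
    """Length of the longest suffix of a that is also a prefix of b."""
    for k in range(min(len(a), len(b)), 0, -1):
        if a.endswith(b[:k]):
            return k
    return 0

def greedy_scs(strings):
    """Greedy superstring: repeatedly merge the pair with maximum overlap."""
    strings = list(strings)
    while len(strings) > 1:
        best = (-1, None, None)  # (overlap length, index i, index j)
        for i, a in enumerate(strings):
            for j, b in enumerate(strings):
                if i != j:
                    k = overlap(a, b)
                    if k > best[0]:
                        best = (k, i, j)
        k, i, j = best
        merged = strings[i] + strings[j][k:]  # glue b after a, sharing k chars
        strings = [s for idx, s in enumerate(strings) if idx not in (i, j)]
        strings.append(merged)
    return strings[0]
```

For example, on the instance {"abc", "bcd", "cde"} the algorithm merges overlapping pairs into the optimal superstring "abcde".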
String matching is the problem of deciding whether a given \(n\)-bit string contains a given \(k\)-bit pattern. We study the complexity of this problem in three settings. Communication complexity. For small \(k\), we provide near-optimal upper and lower bounds on the communication complexity of string matching. For large \(k\), our bounds leave open an exponential gap; we exhibit some evidence for the existence of a better protocol. Circuit complexity. We present several upper and lower bounds on the size of circuits with threshold and DeMorgan gates solving the string matching problem. Similarly to the above, our bounds are near-optimal for small \(k\). Learning. We consider the problem of learning a hidden pattern of length at most \(k\) relative to the classifier that assigns 1 to every string that contains the pattern. We prove optimal bounds on the VC dimension and sample complexity of this problem.
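The concept class in the learning setting can be stated in two lines: each hidden pattern \(p\) induces the classifier that outputs 1 exactly on the strings containing \(p\) as a contiguous substring. A minimal sketch (names are ours):

```python
def pattern_classifier(pattern):
    """Classifier induced by a hidden pattern: returns 1 iff the pattern
    occurs as a contiguous substring of the input string."""
    return lambda x: int(pattern in x)
```

For instance, the classifier for the pattern "101" accepts "0101" and rejects "0011"; the VC dimension bounds in the abstract concern the class of all such classifiers over patterns of length at most \(k\).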
The Minrank of Random Graphs Golovnev, Alexander; Regev, Oded; Weinstein, Omri
arXiv.org,
02/2017
Paper, Journal Article
Open Access
The minrank of a graph \(G\) is the minimum rank of a matrix \(M\) that can be obtained from the adjacency matrix of \(G\) by switching some ones to zeros (i.e., deleting edges) and then setting all diagonal entries to one. This quantity is closely related to the fundamental information-theoretic problems of (linear) index coding (Bar-Yossef et al., FOCS'06), network coding and distributed storage, and to Valiant's approach for proving superlinear circuit lower bounds (Valiant, Boolean Function Complexity '92). We prove tight bounds on the minrank of random Erdős-Rényi graphs \(G(n,p)\) for all regimes of \(p\in[0,1]\). In particular, for any constant \(p\), we show that \(\mathsf{minrk}(G) = \Theta(n/\log n)\) with high probability, where \(G\) is chosen from \(G(n,p)\). This bound gives a near quadratic improvement over the previous best lower bound of \(\Omega(\sqrt{n})\) (Haviv and Langberg, ISIT'12), and partially settles an open problem raised by Lubetzky and Stav (FOCS '07). Our lower bound matches the well-known upper bound obtained by the "clique covering" solution, and settles the linear index coding problem for random graphs. Finally, our result suggests a new avenue of attack, via derandomization, on Valiant's approach for proving superlinear lower bounds for logarithmic-depth semilinear circuits.
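The definition in the first sentence translates directly into an exponential-time brute force that is feasible only for tiny graphs: enumerate all ways of deleting edges, force the diagonal to one, and minimize the rank. A minimal sketch, fixing the field to GF(2) for concreteness (the notion makes sense over any field); function names are ours.

```python
from itertools import product

def gf2_rank(rows):
    """Rank over GF(2); each row is encoded as an int bitmask."""
    basis = {}  # highest set bit -> basis vector with that leading bit
    rank = 0
    for row in rows:
        while row:
            hb = row.bit_length() - 1
            if hb in basis:
                row ^= basis[hb]  # eliminate the leading bit
            else:
                basis[hb] = row
                rank += 1
                break
    return rank

def minrank_gf2(adj):
    """Brute-force minrank over GF(2) of a graph given by its 0/1
    adjacency matrix: minimize rank of M with M[i][i] = 1 and
    M[i][j] = 0 wherever adj[i][j] = 0 (only edges may keep ones)."""
    n = len(adj)
    edges = [(i, j) for i in range(n) for j in range(n)
             if i != j and adj[i][j]]
    best = n  # the identity matrix always qualifies
    for keep in product([0, 1], repeat=len(edges)):
        M = [[int(i == j) for j in range(n)] for i in range(n)]
        for bit, (i, j) in zip(keep, edges):
            M[i][j] = bit
        rows = [sum(M[i][j] << j for j in range(n)) for i in range(n)]
        best = min(best, gf2_rank(rows))
    return best
```

As a sanity check, the complete graph \(K_n\) has minrank 1 (keep every one and take the all-ones matrix), while the empty graph forces the identity matrix and has minrank \(n\).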
On the Quantitative Hardness of CVP Bennett, Huck; Golovnev, Alexander; Stephens-Davidowitz, Noah
arXiv.org,
10/2017
Paper, Journal Article
Odprti dostop
For odd integers \(p \geq 1\) (and \(p = \infty\)), we show that the Closest Vector Problem in the \(\ell_p\) norm (\(\mathrm{CVP}_p\)) over rank-\(n\) lattices cannot be solved in \(2^{(1-\varepsilon) n}\) time for any constant \(\varepsilon > 0\) unless the Strong Exponential Time Hypothesis (SETH) fails. We then extend this result to "almost all" values of \(p \geq 1\), not including the even integers. This comes tantalizingly close to settling the quantitative time complexity of the important special case of \(\mathrm{CVP}_2\) (i.e., \(\mathrm{CVP}\) in the Euclidean norm), for which a \(2^{n+o(n)}\)-time algorithm is known. In particular, our result applies for any \(p = p(n) \neq 2\) that approaches \(2\) as \(n \to \infty\). We also show a similar SETH-hardness result for \(\mathrm{SVP}_\infty\); hardness of approximating \(\mathrm{CVP}_p\) to within some constant factor under the so-called Gap-ETH assumption; and other quantitative hardness results for \(\mathrm{CVP}_p\) and \(\mathrm{CVPP}_p\) for any \(1 \leq p < \infty\) under different assumptions.
Label tree-based algorithms are widely used to tackle multi-class and multi-label problems with a large number of labels. We focus on a particular subclass of these algorithms that use probabilistic classifiers in the tree nodes. Examples of such algorithms are hierarchical softmax (HSM), designed for multi-class classification, and probabilistic label trees (PLTs), which generalize HSM to multi-label problems. If the tree structure is given, learning of a PLT can be solved with provable regret guarantees (Wydmuch et al., 2018). However, finding a tree structure that results in a PLT with low training and prediction computational costs as well as low statistical error seems to be a very challenging problem that is not yet well understood. In this paper, we address the problem of finding a tree structure that has low computational cost. First, we show that finding a tree with optimal training cost is NP-complete; nevertheless, there are some tractable special cases with either a perfect approximation or an exact solution that can be obtained in linear time in terms of the number of labels \(m\). For the general case, we obtain an \(O(\log m)\) approximation in linear time as well. Moreover, we prove an upper bound on the expected prediction cost expressed in terms of the expected training cost. We also show that under additional assumptions the prediction cost of a PLT is \(O(\log m)\).
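To illustrate the flavor of the tree-structure question, here is a deliberately simplified proxy, not the paper's exact cost model: if the cost of routing an example through a label tree equals the depth of its label's leaf, then the frequency-weighted expected depth is minimized by the classic Huffman construction. The sketch below computes that expected depth without materializing the tree; in the paper's actual training-cost model, finding the optimal tree is NP-complete, so no such simple greedy is known to be optimal there.

```python
import heapq

def huffman_expected_depth(freqs):
    """Expected leaf depth of a Huffman tree over label frequencies.

    Heap entries are (subtree mass, frequency-weighted depth accumulated
    inside that subtree); each merge pushes both subtrees one level down,
    adding their combined mass to the weighted depth.
    """
    heap = [(float(f), 0.0) for f in freqs]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, d1 = heapq.heappop(heap)
        f2, d2 = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, d1 + d2 + f1 + f2))
    total, weighted = heap[0]
    return weighted / total
```

For four equally frequent labels the optimal tree is the balanced binary tree of depth 2, so the expected depth is exactly 2, consistent with the \(O(\log m)\) prediction-cost regime for balanced trees.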