The closest vector problem (CVP) and the shortest (nonzero) vector problem (SVP) are the core algorithmic problems on Euclidean lattices. They are central to the applications of lattices in many problems of communications and cryptography. Kannan's embedding technique is a powerful technique for solving the approximate CVP; yet, its remarkable practical performance is not well understood. In this paper, the embedding technique is analyzed from a bounded distance decoding (BDD) viewpoint. We present two complementary analyses of the embedding technique: we establish a reduction from BDD to Hermite SVP (via unique SVP), which can be used along with any Hermite SVP solver (including, among others, the Lenstra-Lenstra-Lovász (LLL) algorithm), and we show that, in the special case of LLL, it performs at least as well as Babai's nearest plane algorithm (LLL-aided successive interference cancellation). The former analysis helps to explain the folklore practical observation that unique SVP is easier than standard approximate SVP. It is proven that when the LLL algorithm is employed, the embedding technique can solve the CVP provided that the noise norm is smaller than a decoding radius λ₁/(2γ), where λ₁ is the minimum distance of the lattice and γ ≈ O(2^(n/4)). This substantially improves the previously best known correct decoding bound γ ≈ O(2^n). Focusing on the applications of BDD to decoding of multiple-input multiple-output (MIMO) systems, we also prove that BDD of the regularized lattice is optimal in terms of the diversity-multiplexing gain tradeoff, and we propose practical variants of embedding decoding which require no knowledge of the minimum distance of the lattice and/or further improve the error performance.
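The embedding construction at the heart of this analysis is simple to state. A minimal sketch (the basis, coefficients, error, and embedding factor M below are made-up toy values): given a BDD target t = xB + e with small e, appending the row (t, M) to the zero-padded basis yields an (n+1)-dimensional lattice in which (e, M) is a short vector, which an SVP solver such as LLL would then be expected to recover.

```python
import numpy as np

# Toy BDD instance (all values illustrative, not from the paper).
B = np.array([[7, 1],
              [3, 9]])           # lattice basis, rows are basis vectors
x = np.array([2, -1])            # unknown integer coefficients
e = np.array([1, -1])            # small error, norm below the decoding radius
t = x @ B + e                    # BDD/CVP target

M = 1                            # embedding factor (often chosen near ||e||)
# Kannan's embedding: append the target as an extra basis row.
E = np.block([[B, np.zeros((2, 1), dtype=int)],
              [t[None, :], np.array([[M]])]])

# The integer combination (-x, 1) of the embedded basis is the short vector (e, M):
coeffs = np.concatenate([-x, [1]])
short = coeffs @ E
print(short)                     # -> [ 1 -1  1], i.e. (e, M)
```

Recovering (e, M) from the embedded lattice immediately solves the original BDD instance, since the closest lattice point is t - e.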
The Lenstra-Lenstra-Lovász lattice basis reduction algorithm (called LLL or L³) is a fundamental tool in computational number theory and theoretical computer science, which can be viewed as an efficient algorithmic version of Hermite's inequality on Hermite's constant. Given an integer d-dimensional lattice basis with vectors of Euclidean norm less than B in an n-dimensional space, the L³ algorithm outputs a reduced basis in ... bit operations, where M(k) denotes the time required to multiply k-bit integers. In this article, the authors introduce the L² algorithm, a new and natural floating-point variant of the L³ algorithm which provably outputs L³-reduced bases in polynomial time ... This is the first L³ algorithm whose running time (without fast integer arithmetic) provably grows only quadratically with respect to log B, like Euclid's gcd algorithm and Lagrange's two-dimensional algorithm. (ProQuest: ... denotes formulae/symbols omitted.)
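For orientation, a textbook L³ reduction can be sketched in a few lines. This is the classical rational-arithmetic algorithm, not the floating-point L² variant the article introduces; the example basis is the standard three-dimensional one, and δ = 3/4 is the usual reduction parameter.

```python
from fractions import Fraction

def lll(basis, delta=Fraction(3, 4)):
    """Textbook LLL reduction in exact rational arithmetic (illustration only;
    the L^2 algorithm replaces this exact arithmetic with guarded floats)."""
    b = [[Fraction(x) for x in row] for row in basis]
    n = len(b)

    def dot(u, v):
        return sum(ui * vi for ui, vi in zip(u, v))

    def gram_schmidt():
        bstar = []
        mu = [[Fraction(0)] * n for _ in range(n)]
        for i in range(n):
            v = b[i][:]
            for j in range(i):
                mu[i][j] = dot(b[i], bstar[j]) / dot(bstar[j], bstar[j])
                v = [vi - mu[i][j] * wi for vi, wi in zip(v, bstar[j])]
            bstar.append(v)
        return bstar, mu

    bstar, mu = gram_schmidt()
    k = 1
    while k < n:
        for j in range(k - 1, -1, -1):            # size-reduce b_k against b_j
            q = round(mu[k][j])
            if q:
                b[k] = [x - q * y for x, y in zip(b[k], b[j])]
                bstar, mu = gram_schmidt()
        if dot(bstar[k], bstar[k]) >= (delta - mu[k][k - 1] ** 2) * dot(bstar[k - 1], bstar[k - 1]):
            k += 1                                # Lovász condition holds
        else:
            b[k], b[k - 1] = b[k - 1], b[k]       # swap and step back
            bstar, mu = gram_schmidt()
            k = max(k - 1, 1)
    return [[int(x) for x in row] for row in b]

print(lll([[1, 1, 1], [-1, 0, 2], [3, 5, 6]]))
# -> [[0, 1, 0], [1, 0, 1], [-1, 0, 2]]
```

Recomputing the full Gram-Schmidt data after every change keeps the sketch short but is wasteful; efficient implementations, including L², update the μ coefficients incrementally instead.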
In this paper, we describe a polynomial time cryptanalysis of the (approximate) multilinear map proposed by Coron, Lepoint, and Tibouchi at Crypto 2013 (CLT13). This scheme includes a zero-testing functionality that determines whether the message of a given encoding is zero or not. This functionality is useful for designing several of its applications, but it leaks unexpected values, such as linear combinations of the secret elements. By collecting the outputs of the zero-testing algorithm, we construct a matrix containing the hidden information as eigenvalues, and then recover all the secret elements of the CLT13 scheme via diagonalization of the matrix. In addition, we provide polynomial time algorithms to directly break the security assumptions of many applications based on the CLT13 scheme. These algorithms include solving subgroup membership, decision linear, and graded external Diffie–Hellman problems. These algorithms mainly rely on the computation of the determinants of the matrices and their greatest common divisor, instead of performing their diagonalization.
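The linear-algebra core of the attack can be illustrated in isolation. A toy sketch (not CLT13 itself; the secret values and change-of-basis matrix are invented): the attacker can compute a matrix that is similar to a diagonal matrix of secret values, and diagonalization then exposes those secrets as eigenvalues.

```python
import numpy as np

secrets = np.array([3.0, 7.0, 11.0])                      # hypothetical hidden elements
P = np.array([[2., 1., 0.],
              [1., 1., 0.],
              [0., 1., 1.]])                              # unknown change of basis, det(P) = 1
W = P @ np.diag(secrets) @ np.linalg.inv(P)               # what the attacker observes/builds
recovered = np.sort(np.linalg.eigvals(W).real)
print(recovered)                                          # eigenvalues, approximately [3, 7, 11]
```

In the actual cryptanalysis, W is assembled from zero-testing outputs rather than given directly, and the arithmetic is over the integers modulo the scheme's parameters; the toy above only shows why similarity to a diagonal matrix suffices to leak its diagonal.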
Lattice reduction is a geometric generalization of the problem of computing greatest common divisors. Most of the interesting algorithmic problems related to lattice reduction are NP-hard as the lattice dimension increases. This article deals with the low-dimensional case. We study a greedy lattice basis reduction algorithm for the Euclidean norm, which is arguably the most natural lattice basis reduction algorithm because it is a straightforward generalization of an old two-dimensional algorithm of Lagrange, usually known as Gauss' algorithm, and which is very similar to Euclid's gcd algorithm. Our results are twofold. From a mathematical point of view, we show that up to dimension four, the output of the greedy algorithm is optimal: the output basis reaches all the successive minima of the lattice. However, as soon as the lattice dimension is strictly higher than four, the output basis may be arbitrarily bad, as it may not even reach the first minimum. More importantly, from a computational point of view, we show that up to dimension four, the bit-complexity of the greedy algorithm is quadratic without fast integer arithmetic, just like Euclid's gcd algorithm. This was already proved by Semaev up to dimension three using rather technical means, but it was previously unknown whether or not the algorithm was still polynomial in dimension four. We propose two different analyses: a global approach based on the geometry of the current basis when the length decrease stalls, and a local approach showing directly that a significant length decrease must occur every O(1) consecutive steps. Our analyses simplify Semaev's analysis in dimensions two and three, and unify the cases of dimensions two to four. Although the global approach is much simpler, we also present the local approach because it gives further information on the behavior of the algorithm.
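The two-dimensional base case of the greedy algorithm, Lagrange's (Gauss') reduction, mirrors Euclid's gcd algorithm almost line for line: repeatedly subtract the best integer multiple of the shorter vector from the longer one. A minimal sketch with a made-up input basis:

```python
def lagrange_reduce(u, v):
    """Lagrange's two-dimensional reduction (illustrative sketch).
    Returns a basis reaching both successive minima of the lattice."""
    def dot(a, b):
        return a[0] * b[0] + a[1] * b[1]

    if dot(u, u) > dot(v, v):        # keep u as the shorter vector
        u, v = v, u
    while True:
        q = round(dot(u, v) / dot(u, u))          # closest-integer "quotient"
        v = (v[0] - q * u[0], v[1] - q * u[1])    # the Euclid-style subtraction
        if dot(u, u) <= dot(v, v):                # no further progress: reduced
            return u, v
        u, v = v, u                               # v became shorter: swap, repeat

print(lagrange_reduce((5, 8), (8, 13)))
# -> ((0, -1), (-1, 0))
```

Here the input basis spans Z² (its determinant is 1), so the reduced basis consists of two vectors of norm 1, matching both successive minima, as the article's optimality result guarantees in low dimension.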
We describe two improvements to Gentry's fully homomorphic scheme based on ideal lattices and its analysis: we provide a more aggressive analysis of one of the hardness assumptions (the one related to the Sparse Subset Sum Problem) and we introduce a probabilistic decryption algorithm that can be implemented with an algebraic circuit of low multiplicative degree. Combined together, these improvements lead to a faster fully homomorphic scheme, with a Õ(λ^3.5) bit complexity per elementary binary add/mult gate, where λ is the security parameter. These improvements also apply to the fully homomorphic schemes of Smart and Vercauteren (PKC 2010) and van Dijk et al. (Eurocrypt 2010).
We introduce the k-LWE problem, a Learning With Errors variant of the k-SIS problem. The Boneh-Freeman reduction from SIS to k-SIS suffers from an exponential loss in k. We improve and extend it to an LWE to k-LWE reduction with a polynomial loss in k, by relying on a new technique involving trapdoors for random integer kernel lattices. Based on this hardness result, we present the first algebraic construction of a traitor tracing scheme whose security relies on the worst-case hardness of standard lattice problems. The proposed LWE traitor tracing is almost as efficient as the LWE encryption. Further, it achieves public traceability, i.e., it allows the authority to delegate the tracing capability to "untrusted" parties. To this aim, we introduce the notion of a projective sampling family, in which each sampling function is keyed and, with a projection of the key onto a well-chosen space, one can simulate the sampling function in a computationally indistinguishable way. The construction of a projective sampling family from k-LWE allows us to achieve public traceability by publishing the projected keys of the users. We believe that the new lattice tools and the projective sampling family are general enough that they may have applications in other areas.