We devise an algorithm, L1, with the following specifications: it takes as input an arbitrary basis B = (bᵢ)ᵢ ∈ ℤ^{d×d} of a Euclidean lattice L; it computes a basis of L which is reduced for a mild modification of the Lenstra-Lenstra-Lovász reduction; it terminates in time O(d^{5+ε} β + d^{ω+1+ε} β^{1+ε}), where β = log maxᵢ ‖bᵢ‖ (for any ε > 0, with ω a valid exponent for matrix multiplication). This is the first LLL-reducing algorithm with a time complexity that is quasi-linear in β and polynomial in d.
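The L1 algorithm itself is far more sophisticated, but the LLL notion it accelerates can be sketched in a few lines. Below is a textbook, exact-arithmetic LLL reduction in Python — an illustration of the baseline being quasi-linearized, not of L1 itself; the function names are ours.

```python
from fractions import Fraction

def dot(u, v):
    return sum(Fraction(a) * b for a, b in zip(u, v))

def lll(B, delta=Fraction(3, 4)):
    """Textbook LLL reduction of an integer row basis B (list of lists).

    Uses exact rational arithmetic and recomputes the Gram-Schmidt data
    from scratch at every step -- correct, but nowhere near the
    quasi-linear complexity of the L1 algorithm described above."""
    B = [list(b) for b in B]
    n = len(B)

    def gso():
        # Gram-Schmidt orthogonalization: B* vectors and mu coefficients.
        Bs, mu = [], [[Fraction(0)] * n for _ in range(n)]
        for i in range(n):
            v = [Fraction(x) for x in B[i]]
            for j in range(i):
                mu[i][j] = dot(B[i], Bs[j]) / dot(Bs[j], Bs[j])
                v = [vi - mu[i][j] * bj for vi, bj in zip(v, Bs[j])]
            Bs.append(v)
        return Bs, mu

    k = 1
    while k < n:
        # Size-reduce b_k against b_{k-1}, ..., b_0.
        for j in range(k - 1, -1, -1):
            _, mu = gso()
            q = round(mu[k][j])
            if q:
                B[k] = [a - q * b for a, b in zip(B[k], B[j])]
        Bs, mu = gso()
        # Lovász condition: accept b_k, or swap with b_{k-1} and backtrack.
        if dot(Bs[k], Bs[k]) >= (delta - mu[k][k - 1] ** 2) * dot(Bs[k - 1], Bs[k - 1]):
            k += 1
        else:
            B[k], B[k - 1] = B[k - 1], B[k]
            k = max(k - 1, 1)
    return B
```

Integer row operations and swaps preserve the lattice, so the output is another basis of the same lattice, now with a provably short first vector.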
The backbone structure of L1 mimics the Knuth-Schönhage fast gcd algorithm, thanks to a combination of cutting-edge ingredients. First, the bit-size of our lattice bases can be decreased via truncations whose validity is backed by recent numerical stability results on the QR matrix factorization. We also establish a new framework for analyzing unimodular transformation matrices that reduce shifts of reduced bases, which includes bit-size control and new perturbation tools. We illustrate the power of this framework by generating a family of reduction algorithms.
Strong lattice reduction is the key element for most attacks against lattice-based cryptosystems. Between the strongest but impractical HKZ reduction and the weak but fast LLL reduction, there have been several attempts to find efficient trade-offs. Among them, the BKZ algorithm introduced by Schnorr and Euchner (FCT '91) seems to achieve the best time/quality compromise in practice. However, no reasonable complexity upper bound is known for BKZ, and Gama and Nguyen (Eurocrypt '08) observed experimentally that its practical runtime seems to grow exponentially with the lattice dimension. In this work, we show that BKZ can be terminated long before its completion, while still providing bases of excellent quality. More precisely, we show that if given as input a basis \((\mathbf{b}_i)_{i \leq n} \in \mathbb{Q}^{n \times n}\) of a lattice L and a block-size β, and if terminated after \(\Omega\left(\frac{n^3}{\beta^2}(\log n + \log \log \max_i \|\mathbf{b}_i\|)\right)\) calls to a β-dimensional HKZ-reduction (or SVP) subroutine, then BKZ returns a basis whose first vector has norm \(\leq 2 \nu_{\beta}^{\frac{n-1}{2(\beta-1)}+\frac{3}{2}} \cdot (\det L)^{\frac{1}{n}}\), where \(\nu_{\beta} \leq \beta\) is the maximum of Hermite's constants in dimensions \(\leq \beta\). To obtain this result, we develop a completely new elementary technique based on discrete-time affine dynamical systems, which could lead to the design of improved lattice reduction algorithms.
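Plugging numbers into the theorem's norm bound is straightforward. The helper below (our naming) evaluates \(2 \nu_{\beta}^{\frac{n-1}{2(\beta-1)}+\frac{3}{2}} \cdot (\det L)^{\frac{1}{n}}\), falling back on the crude estimate \(\nu_\beta \leq \beta\) when the exact Hermite constant is not supplied:

```python
def bkz_first_vector_bound(n, beta, det_L, nu_beta=None):
    """Upper bound on ||b_1|| for terminated BKZ, per the theorem above.

    nu_beta is the maximum of Hermite's constants in dimensions <= beta;
    when unknown, we use the bound nu_beta <= beta from the abstract."""
    if nu_beta is None:
        nu_beta = beta
    exponent = (n - 1) / (2 * (beta - 1)) + 1.5
    return 2 * nu_beta ** exponent * det_L ** (1.0 / n)

# e.g. a dimension-100 lattice of determinant 1, reduced with block-size 30:
print(bkz_first_vector_bound(100, 30, 1.0))
```

Note how the exponent decays roughly like n/(2β): larger block-sizes buy a better Hermite factor at the price of more expensive SVP calls.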
This article presents rigorous normwise perturbation bounds for the Cholesky, LU, and QR factorizations with normwise or componentwise perturbations in the given matrix. The componentwise perturbations considered have the form of backward rounding errors for the standard factorization algorithms. Our approach combines the classic and refined matrix equation approaches. Each of the new rigorous perturbation bounds is a small constant multiple of the corresponding first-order perturbation bound obtained by the refined matrix equation approach in the literature, and can be estimated efficiently. These new bounds can be much tighter than the existing rigorous bounds obtained by the classic matrix equation approach, while the conditions for the former to hold are almost as moderate as the conditions for the latter.
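A quick numerical illustration of the phenomenon such bounds quantify, using NumPy (the setup and variable names are ours, not the article's): a small normwise perturbation of A produces a change in the R factor whose relative size is the input perturbation amplified by a condition-dependent constant — exactly the quantity a rigorous perturbation bound must dominate.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
E = 1e-8 * rng.standard_normal((5, 5))   # small normwise perturbation of A

Q, R = np.linalg.qr(A)
Qp, Rp = np.linalg.qr(A + E)

rel_in = np.linalg.norm(E) / np.linalg.norm(A)        # relative input perturbation
rel_out = np.linalg.norm(Rp - R) / np.linalg.norm(R)  # relative change in R
print(rel_out / rel_in)  # amplification factor a perturbation bound must cover
```

For a well-conditioned A the amplification is modest; as A approaches rank deficiency it blows up, which is where tight rigorous bounds matter most.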
Homomorphic Encryption (HE) schemes such as BGV, BFV, and CKKS consume some ciphertext modulus for each multiplication. Bootstrapping (BTS) restores the modulus and allows homomorphic computation to continue, but it is time-consuming and requires a significant amount of modulus. For these reasons, decreasing modulus consumption is a crucial topic for BGV, BFV, and CKKS, on which numerous studies have been conducted.
We propose a novel method, called Mult2, to perform ciphertext multiplication in the CKKS scheme with lower modulus consumption. Mult2 relies on a new decomposition of a ciphertext into a pair of ciphertexts that homomorphically performs a weak form of Euclidean division. It multiplies two ciphertexts in decomposed format with a homomorphic double-precision multiplication, and its result approximately decrypts to the same value as does the ordinary CKKS multiplication. Mult2 can thus perform homomorphic multiplication while consuming almost half of the modulus.
We extend it to Multt for any t≥ 2, which relies on the decomposition of a ciphertext into t components. All other CKKS operations can be equally performed on pair/tuple formats, leading to the double-CKKS (resp. tuple-CKKS) scheme enabling homomorphic double (resp. multiple) precision arithmetic.
As a result, when the ciphertext modulus and dimension are fixed, the proposed algorithms enable the evaluation of deeper circuits without bootstrapping, or reduce the number of bootstrappings required for the evaluation of the same circuits. Furthermore, they can be used to increase the precision without increasing the parameters. For example, Mult2 enables 8 sequential multiplications with a 100-bit scaling factor using a ciphertext modulus of only 680 bits, which is impossible with the ordinary CKKS multiplication algorithm.
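The double-precision idea can be previewed on plain integers (this is only an arithmetic analogue — the real Mult2 operates on CKKS ciphertexts and performs the weak Euclidean division homomorphically; all names below are ours):

```python
def decompose(x, base):
    """Split x as x = hi*base + lo, mimicking on a plain integer the
    ciphertext-pair decomposition that Mult2 performs homomorphically."""
    hi, lo = divmod(x, base)
    return hi, lo

def double_precision_mult(pa, pb, base):
    """Schoolbook 'double precision' product of two decomposed values:
    every partial product involves only half-size factors, the analogue
    of Mult2 consuming roughly half the modulus per multiplication."""
    ha, la = pa
    hb, lb = pb
    return (ha * hb) * base ** 2 + (ha * lb + hb * la) * base + la * lb

base = 1 << 50
x, y = 123456789 << 40, 987654321 << 35
assert double_precision_mult(decompose(x, base), decompose(y, base), base) == x * y
```

The tuple-CKKS generalization corresponds to splitting into t digits instead of two, with the same schoolbook recombination.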
Lattices over number fields arise from a variety of sources in algorithmic algebra and, more recently, cryptography. As in the classical case of ℤ-lattices, the choice of a nice, "short" (pseudo-)basis is important in many applications. In this article, we provide the first algorithm that computes such a "short" (pseudo-)basis. We utilize the LLL algorithm for ℤ-lattices together with the Bosma-Pohst-Cohen Hermite Normal Form and a size reduction technique to find a pseudo-basis in which each basis vector belongs to the lattice and the product of the norms of the basis vectors is bounded by the lattice determinant, up to a multiplicative factor that is a field invariant. As it runs in polynomial time, this provides an effective variant of Minkowski's second theorem for lattices over number fields.
Accelerating Lattice Reduction with FPGAs
Detrey, Jérémie; Hanrot, Guillaume; Pujol, Xavier
Progress in Cryptology – LATINCRYPT 2010
We describe an FPGA accelerator for the Kannan–Fincke–Pohst enumeration algorithm (KFP), which solves the Shortest Lattice Vector Problem (SVP). This is the first FPGA implementation of KFP specifically targeting cryptographically relevant dimensions. In order to optimize this implementation, we theoretically and experimentally study several facets of KFP, including its efficient parallelization and its underlying arithmetic. Our FPGA accelerator can be used both for solving stand-alone instances of SVP (within a hybrid CPU–FPGA compound) and for the myriad smaller-dimensional SVP instances arising in a BKZ-type algorithm. For devices of comparable cost, our FPGA implementation is faster than a multi-core CPU implementation by a factor of around 2.12.
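For intuition about what the accelerated subroutine computes, here is a naive brute-force SVP search in Python, restricting coefficients to a small box. This is only a toy stand-in (our naming): actual KFP enumeration prunes this search tree using Gram–Schmidt information rather than a fixed coefficient bound.

```python
from itertools import product

def shortest_vector_naive(B, bound=3):
    """Return a shortest nonzero vector among all integer combinations of
    the rows of B with coefficients in [-bound, bound], plus its squared
    norm. Toy only: real KFP enumeration derives per-level coefficient
    intervals from the Gram-Schmidt orthogonalization of B."""
    best, best_norm2 = None, None
    dim = len(B[0])
    for coeffs in product(range(-bound, bound + 1), repeat=len(B)):
        if not any(coeffs):
            continue  # skip the zero vector
        v = [sum(c * B[i][j] for i, c in enumerate(coeffs)) for j in range(dim)]
        norm2 = sum(x * x for x in v)
        if best_norm2 is None or norm2 < best_norm2:
            best, best_norm2 = v, norm2
    return best, best_norm2
```

The search tree has (2·bound+1)^n leaves, which is why pruning, parallelization, and cheap arithmetic — the aspects the FPGA work optimizes — dominate practical performance.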
The Learning With Errors (\(\mathsf{LWE}\)) problem asks to find \(\mathbf{s}\) from an input of the form \((\mathbf{A}, \mathbf{b} = \mathbf{A}\mathbf{s}+\mathbf{e}) \in (\mathbb{Z}/q\mathbb{Z})^{m \times n} \times (\mathbb{Z}/q\mathbb{Z})^{m}\), for a vector \(\mathbf{e}\) that has small-magnitude entries. In this work, we do not focus on solving \(\mathsf{LWE}\) but on the task of sampling instances. As these are extremely sparse in their range, it may seem plausible that the only way to proceed is to first create \(\mathbf{s}\) and \(\mathbf{e}\) and then set \(\mathbf{b} = \mathbf{A}\mathbf{s}+\mathbf{e}\). In particular, such an instance sampler knows the solution. This raises the question of whether it is possible to obliviously sample \((\mathbf{A}, \mathbf{A}\mathbf{s}+\mathbf{e})\), namely, without knowing the underlying \(\mathbf{s}\). A variant of the assumption that oblivious \(\mathsf{LWE}\) sampling is hard has been used in a series of works to analyze the security of candidate constructions of Succinct Non-interactive Arguments of Knowledge (SNARKs). As the assumption is related to \(\mathsf{LWE}\), these SNARKs have been conjectured to be secure in the presence of quantum adversaries. Our main result is a quantum polynomial-time algorithm that samples well-distributed \(\mathsf{LWE}\) instances while provably not knowing the solution, under the assumption that \(\mathsf{LWE}\) is hard. Moreover, the approach works for a vast range of \(\mathsf{LWE}\) parametrizations, including those used in the above-mentioned SNARKs. This invalidates the assumptions used in their security analyses, although it does not yield attacks against the constructions themselves.
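The standard, non-oblivious way to sample an instance — draw s and e, then publish (A, As + e) — is precisely the sampler that knows its own solution. A minimal sketch (parameter names are ours, and uniform small errors stand in for the usual discrete Gaussian):

```python
import random

def sample_lwe(n, m, q, err_bound, seed=0):
    """Sample an LWE instance (A, b = A*s + e mod q).

    The sampler draws s and e itself, so it trivially knows the
    solution -- the situation oblivious LWE sampling is meant to avoid."""
    rng = random.Random(seed)
    A = [[rng.randrange(q) for _ in range(n)] for _ in range(m)]
    s = [rng.randrange(q) for _ in range(n)]
    e = [rng.randint(-err_bound, err_bound) for _ in range(m)]
    b = [(sum(aij * sj for aij, sj in zip(row, s)) + ei) % q
         for row, ei in zip(A, e)]
    return (A, b), (s, e)
```

An oblivious sampler must output a pair (A, b) with the same distribution while provably retaining no information about s — which is what the quantum algorithm of this work achieves.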