We propose a new scheme of wiretap lattice coding that achieves semantic security and strong secrecy over the Gaussian wiretap channel. The key tool in our security proof is the flatness factor, which characterizes the convergence of the conditional output distributions corresponding to different messages and leads to an upper bound on the information leakage. We not only introduce the notion of secrecy-good lattices, but also propose the flatness factor as a design criterion for such lattices. Both the modulo-lattice Gaussian channel and the genuine Gaussian channel are considered. In the latter case, we propose a novel secrecy coding scheme based on the discrete Gaussian distribution over a lattice, which achieves the secrecy capacity to within a half nat under mild conditions. No a priori distribution of the message is assumed, and no dither is used in our proposed schemes.
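For reference, the flatness factor of an n-dimensional lattice Λ with Gaussian parameter σ is commonly stated as follows (this is the standard formulation for lattice Gaussians; the paper's exact normalization may differ):

```latex
% Flatness factor: maximum deviation of the lattice Gaussian density
% from the uniform density 1/V(\Lambda) over a fundamental region.
\epsilon_{\Lambda}(\sigma)
  \;=\; \max_{\mathbf{x} \in \mathcal{R}(\Lambda)}
  \bigl| V(\Lambda)\, f_{\sigma,\Lambda}(\mathbf{x}) - 1 \bigr|,
\qquad
f_{\sigma,\Lambda}(\mathbf{x})
  \;=\; \sum_{\boldsymbol{\lambda} \in \Lambda}
  \frac{1}{(\sqrt{2\pi}\,\sigma)^{n}}\,
  e^{-\|\mathbf{x}-\boldsymbol{\lambda}\|^{2}/2\sigma^{2}},
```

where V(Λ) is the fundamental volume of Λ and R(Λ) a fundamental region. A small flatness factor means the Gaussian folded modulo the lattice is nearly uniform, which is what drives the leakage bound.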
Classical hardness of learning with errors Brakerski, Zvika; Langlois, Adeline; Peikert, Chris ...
Proceedings of the forty-fifth annual ACM symposium on Theory of Computing,
06/2013
Conference Proceeding
Peer reviewed
We show that the Learning with Errors (LWE) problem is classically at least as hard as standard worst-case lattice problems. Previously this was only known under quantum reductions.
Our techniques capture the tradeoff between the dimension and the modulus of LWE instances, leading to a much better understanding of the landscape of the problem. The proof is inspired by techniques from several recent cryptographic constructions, most notably fully homomorphic encryption schemes.
In distributed pseudorandom functions (DPRFs), a PRF secret key SK is secret shared among N servers so that each server can locally compute a partial evaluation of the PRF on some input X. A combiner that collects t partial evaluations can then reconstruct the evaluation F(SK, X) of the PRF under the initial secret key. So far, all non-interactive constructions in the standard model are based on lattice assumptions. One caveat is that they are only known to be secure in the static corruption setting, where the adversary chooses the servers to corrupt at the very beginning of the game, before any evaluation query. In this work, we construct the first fully non-interactive adaptively secure DPRF in the standard model. Our construction is proved secure under the LWE assumption against adversaries that may adaptively decide which servers they want to corrupt. We also extend our construction in order to achieve robustness against malicious adversaries.
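The threshold structure described above (local partial evaluations, reconstruction from any t of them) can be illustrated with a classical DDH-style DPRF in the Naor–Pinkas–Reingold flavour, sketched below with toy parameters. This is only an illustration of the combiner mechanics; the paper's actual construction is lattice-based and proved in the standard model.

```python
import hashlib
import random

# Toy DDH-style distributed PRF F(SK, X) = H(X)^SK, illustrative only.
# Hypothetical toy group: safe prime p = 2q + 1, g generates the order-q subgroup.
p, q, g = 2039, 1019, 4
N, t = 5, 3                          # N servers, threshold t

def H(x: bytes) -> int:
    """Hash the input into the subgroup (toy: g^{h(x) mod q})."""
    e = int.from_bytes(hashlib.sha256(x).digest(), "big") % q
    return pow(g, e, p)

def share(sk: int) -> dict:
    """Shamir-share sk among servers 1..N with threshold t."""
    coeffs = [sk] + [random.randrange(q) for _ in range(t - 1)]
    return {i: sum(c * pow(i, k, q) for k, c in enumerate(coeffs)) % q
            for i in range(1, N + 1)}

def partial_eval(sk_i: int, x: bytes) -> int:
    """Server-local, non-interactive partial evaluation."""
    return pow(H(x), sk_i, p)

def combine(parts: dict) -> int:
    """Lagrange-interpolate 'in the exponent' from any t partial evaluations."""
    out = 1
    for i, P_i in parts.items():
        lam = 1
        for j in parts:
            if j != i:
                lam = lam * j % q * pow(j - i, -1, q) % q
        out = out * pow(P_i, lam, p) % p
    return out

sk = random.randrange(1, q)
shares = share(sk)
x = b"input"
parts = {i: partial_eval(shares[i], x) for i in (1, 3, 5)}  # any t servers
assert combine(parts) == pow(H(x), sk, p)                   # recovers F(SK, X)
```

Any subset of t servers suffices, and no server ever learns SK itself; adaptive security, the focus of the paper, concerns an adversary choosing which shares to corrupt after seeing evaluations.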
Let X ∈ Z^{n×m}, with each entry independently and identically distributed from an integer Gaussian distribution. We consider the orthogonal lattice Λ⊥(X) of X, i.e., the set of vectors v ∈ Z^m such that Xv = 0. In this work, we prove probabilistic upper bounds on the smoothing parameter and the (m−n)-th minimum of Λ⊥(X). These bounds improve upon, and the techniques build upon, prior works of Agrawal et al. (Adv Cryptol 2013:97–116, 2013) and of Aggarwal and Regev (Chic J Theor Comput Sci 7:1–11, 2016).
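Concretely, membership in Λ⊥(X) is just an integer kernel condition, as the following small sketch shows (the example matrix is hypothetical, chosen only for illustration):

```python
# Membership test for the orthogonal lattice
# Λ⊥(X) = { v in Z^m : X v = 0 }, with X in Z^{n×m}.

def in_orthogonal_lattice(X, v):
    """Check whether the integer vector v satisfies X v = 0."""
    return all(sum(r * c for r, c in zip(row, v)) == 0 for row in X)

X = [[1, 2, 3, 4],   # n = 2, m = 4
     [0, 1, 1, 2]]

assert in_orthogonal_lattice(X, [-1, -1, 1, 0])    # lies in Λ⊥(X)
assert in_orthogonal_lattice(X, [0, -2, 0, 1])     # lies in Λ⊥(X)
assert not in_orthogonal_lattice(X, [1, 0, 0, 0])  # X v = (1, 0) ≠ 0
```

Generically Λ⊥(X) has rank m − n (here 2), which is why the (m−n)-th minimum is the natural quantity to bound.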
Despite its reduced complexity, lattice reduction-aided decoding exhibits a widening gap to maximum-likelihood (ML) performance as the dimension increases. To improve its performance, this paper presents randomized lattice decoding based on Klein's sampling technique, a randomized version of Babai's nearest plane algorithm (i.e., successive interference cancellation, SIC) that samples lattice points from a Gaussian-like distribution over the lattice. To find the closest lattice point, Klein's algorithm is used to sample some lattice points, and the closest among those samples is chosen. Lattice reduction increases the probability of finding the closest lattice point and only needs to be run once during preprocessing. Further, the sampling can operate very efficiently in parallel. The technical contribution of this paper is twofold: we analyze and optimize the decoding radius of sampling decoding, resulting in better error performance than Klein's original algorithm, and we propose a very efficient implementation of random rounding. Of particular interest is that a fixed gain in the decoding radius compared to Babai's decoding can be achieved at polynomial complexity. The proposed decoder is useful for moderate dimensions where sphere decoding becomes computationally intensive, while lattice reduction-aided decoding starts to suffer considerable loss. Simulation results demonstrate that near-ML performance is achieved with a moderate number of samples, even if the dimension is as high as 32.
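The sample-then-pick-closest idea can be sketched as follows: replace the deterministic rounding in Babai's nearest-plane (SIC) pass with a one-dimensional discrete Gaussian, draw many candidate lattice points, and keep the closest to the target. This is a minimal illustration with assumed parameters (K samples, width s), not the paper's optimized decoder or its efficient random-rounding implementation.

```python
import math
import random

def gram_schmidt(B):
    """Gram-Schmidt orthogonalization of the rows of the basis B."""
    Bs = []
    for b in B:
        bs = list(b)
        for prev in Bs:
            m = sum(x * y for x, y in zip(b, prev)) / sum(x * x for x in prev)
            bs = [x - m * y for x, y in zip(bs, prev)]
        Bs.append(bs)
    return Bs

def sample_int_gaussian(c, s):
    """Rejection-sample an integer near c from a discrete Gaussian of width s."""
    while True:
        z = round(c) + random.randint(-6, 6)   # truncated support around c
        if random.random() < math.exp(-math.pi * (z - c) ** 2 / s ** 2):
            return z

def klein_sample(B, Bs, t, s):
    """One randomized nearest-plane pass; returns a nearby lattice point."""
    tp, v = list(t), [0] * len(t)
    for i in reversed(range(len(B))):
        norm2 = sum(x * x for x in Bs[i])
        c = sum(x * y for x, y in zip(tp, Bs[i])) / norm2
        z = sample_int_gaussian(c, s / math.sqrt(norm2))
        tp = [x - z * y for x, y in zip(tp, B[i])]
        v = [x + z * y for x, y in zip(v, B[i])]
    return v

def randomized_decode(B, t, K=100, s=1.5):
    """Draw K candidate lattice points and return the one closest to t."""
    Bs = gram_schmidt(B)
    return min((klein_sample(B, Bs, t, s) for _ in range(K)),
               key=lambda v: sum((x - y) ** 2 for x, y in zip(v, t)))

random.seed(0)
# Closest point of the integer lattice Z^2 to (0.4, -1.3) is (0, -1).
assert randomized_decode([[1, 0], [0, 1]], [0.4, -1.3]) == [0, -1]
```

Each `klein_sample` call is independent, which is what makes the sampling trivially parallelizable; running lattice reduction on B beforehand concentrates the samples closer to the true closest point.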