Most lattice-based cryptographic schemes are built upon the assumed hardness of the Short Integer Solution (SIS) and Learning With Errors (LWE) problems. Their efficiency can be drastically improved by switching the hardness assumptions to the more compact Ring-SIS and Ring-LWE problems. However, this change of hardness assumptions comes along with a possible security weakening: SIS and LWE are known to be at least as hard as standard (worst-case) problems on Euclidean lattices, whereas Ring-SIS and Ring-LWE are only known to be as hard as their restrictions to special classes of ideal lattices, corresponding to ideals of some polynomial rings. In this work, we define the Module-SIS and Module-LWE problems, which bridge SIS with Ring-SIS, and LWE with Ring-LWE, respectively. We prove that these average-case problems are at least as hard as standard lattice problems restricted to module lattices (which themselves bridge arbitrary and ideal lattices). As these new problems enlarge the toolbox of the lattice-based cryptographer, they could prove useful for designing new schemes. Importantly, the worst-case to average-case reductions for the module problems are (qualitatively) sharp, in the sense that there exist converse reductions. This property is not known to hold in the context of Ring-SIS/Ring-LWE: ideal lattice problems could turn out to be easy without impacting the hardness of Ring-SIS/Ring-LWE.
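To make the interpolation concrete, the following minimal sketch generates a Module-LWE sample over the ring R_q = Z_q[x]/(x^n + 1): setting the module rank d = 1 recovers a Ring-LWE sample, while n = 1 recovers plain LWE. Parameter values and helper names are illustrative, not from the paper; real instantiations use much larger n, q and a discrete Gaussian error distribution.

```python
import random

def ring_mul(a, b, q):
    """Negacyclic product in Z_q[x]/(x^n + 1); coefficient lists are low-to-high."""
    n = len(a)
    c = [0] * n
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            k = i + j
            if k < n:
                c[k] = (c[k] + ai * bj) % q
            else:
                c[k - n] = (c[k - n] - ai * bj) % q  # x^n = -1
    return c

def mlwe_sample(s, n, d, q, noise=1):
    """One Module-LWE sample (a, b = <a, s> + e) for a rank-d module over R_q.
    The error here is a toy uniform-in-[-noise, noise] stand-in for a
    discrete Gaussian."""
    a = [[random.randrange(q) for _ in range(n)] for _ in range(d)]
    e = [random.randint(-noise, noise) % q for _ in range(n)]
    b = e[:]
    for j in range(d):
        prod = ring_mul(a[j], s[j], q)
        b = [(x + y) % q for x, y in zip(b, prod)]
    return a, b
```

With d = 1 the inner product collapses to a single ring product (Ring-LWE); with n = 1 the ring collapses to Z_q and the sample is a plain LWE sample with a d-dimensional secret.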
This handbook aims to provide a complete overview of modern floating-point arithmetic, including a detailed treatment of the newly revised (IEEE 754-2008) standard for floating-point arithmetic. Presented throughout are algorithms for implementing floating-point arithmetic as well as algorithms that use floating-point arithmetic. So that the techniques presented can be put directly into practice in actual coding or design, they are illustrated, whenever possible, by a corresponding program. Key topics and features include: a presentation of the history and basic concepts of floating-point arithmetic; a development of smart and nontrivial algorithms, and of the algorithmic possibilities induced by the availability of a fused multiply-add (fma) instruction; the implementation of floating-point arithmetic, either in software (on an integer processor) or in hardware; a discussion of issues related to compilers and languages; coverage of several recent advances related to elementary functions; and extensions of floating-point arithmetic, such as the certification of floating-point algorithms and the extension of precision. Handbook of Floating-Point Arithmetic is designed for programmers of numerical applications, compiler designers, programmers of floating-point algorithms, designers of arithmetic operators, and more generally, students and researchers in numerical analysis who wish to better understand a tool used in their daily work and research. TOC: List of Figures.- List of Tables.- Preface.- Part I. Introduction, Basic Definitions, and Standards. Introduction.- Definitions and Basic Notions.- Floating-Point Formats and Environment.- Part II. Cleverly Using Floating-Point Arithmetic. Basic Properties and Algorithms.- The Fused Multiply-Add Instruction.- Enhanced Floating-Point Sums, Dot Products, and Polynomial Values.- Languages and Compilers.- Part III. Implementing Floating-Point Operators.
Algorithms for the Five Basic Operations.- Hardware Implementation of Floating-Point Arithmetic.- Software Implementation of Floating-Point Arithmetic.- Part IV. Elementary Functions. Evaluating Floating-Point Elementary Functions.- Solving the Table Maker's Dilemma.- Part V. Extensions. Formalisms for Certifying Floating-Point Algorithms.- Extending the Precision.- Part VI. Perspectives and Appendix. Conclusion and Perspectives.- Appendix: Number Theory Tools for Floating-Point Arithmetic.- Bibliography.- Index.
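A flavor of the "cleverly using floating-point arithmetic" material (Part II) is the family of error-free transformations, such as Knuth's classical TwoSum: using only rounded additions and subtractions, it recovers the exact rounding error of a floating-point sum. The sketch below is a standard textbook formulation, not code taken from the handbook.

```python
from fractions import Fraction  # only used to verify exactness

def two_sum(a, b):
    """Knuth's TwoSum error-free transformation.

    Returns (s, e) where s = fl(a + b) is the rounded sum and e is the
    rounding error, so that s + e == a + b holds exactly over the reals.
    Works for any two finite binary floating-point numbers, with no
    assumption on the relative magnitudes of a and b."""
    s = a + b
    a_prime = s - b          # approximates the part of s coming from a
    b_prime = s - a_prime    # approximates the part of s coming from b
    delta_a = a - a_prime    # error committed on a's contribution
    delta_b = b - b_prime    # error committed on b's contribution
    e = delta_a + delta_b
    return s, e
```

For example, `two_sum(1.0, 1e-16)` returns `s == 1.0` (the small addend is entirely absorbed by rounding) together with an `e` that captures the lost low-order part exactly; chaining such transformations is the basis of the compensated summation and dot-product algorithms the book covers.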
Practical, Round-Optimal Lattice-Based Blind Signatures Agrawal, Shweta; Kirshanova, Elena; Stehlé, Damien ...
Proceedings of the 2022 ACM SIGSAC Conference on Computer and Communications Security,
11/2022
Conference Proceeding
Blind signatures are a fundamental cryptographic primitive with numerous practical applications. While there exist many practical blind signatures from number-theoretic assumptions, the situation is far less satisfactory from post-quantum assumptions. In this work, we provide the first overall practical, lattice-based blind signature, supporting an unbounded number of signature queries and additionally enjoying optimal round complexity. We provide a detailed estimate of the parameters achieved: we obtain a signature of size slightly above 45 KB, for a core-SVP hardness of 109 bits. The run-times of the signer, user and verifier are also very small.
Our scheme relies on the Gentry, Peikert and Vaikuntanathan signature (STOC'08) and on non-interactive zero-knowledge proofs for linear relations with small unknowns, which are significantly more efficient than their general-purpose counterparts. Its security stems from a new and arguably natural assumption which we introduce, called the one-more-ISIS assumption. This assumption can be seen as a lattice analogue of the one-more-RSA assumption of Bellare et al. (JoC'03). To gain confidence in our assumption, we provide a detailed analysis of diverse attack strategies.
The Rényi divergence is a measure of closeness of two probability distributions. We show that it can often be used as an alternative to the statistical distance in security proofs for lattice-based cryptography. Using the Rényi divergence is particularly suited for security proofs of primitives in which the attacker is required to solve a search problem (e.g., forging a signature). We show that it may also be used in the case of distinguishing problems (e.g., the semantic security of encryption schemes), when they enjoy a public sampleability property. The techniques lead to security proofs for schemes with smaller parameters, and sometimes to simpler security proofs than the existing ones.
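For finite distributions, this line of work typically uses the non-logarithmic Rényi divergence of order a > 1, R_a(P‖Q) = (Σ_x P(x)^a / Q(x)^(a-1))^(1/(a-1)); a short sketch (my own toy formulation, not code from the paper):

```python
def renyi_divergence(p, q, a):
    """Non-logarithmic order-a Renyi divergence R_a(P || Q) for finite
    distributions given as probability lists over the same support,
    assuming supp(P) is contained in supp(Q) and a > 1."""
    total = sum(px ** a / qx ** (a - 1) for px, qx in zip(p, q) if px > 0)
    return total ** (1.0 / (a - 1))
```

The minimum value 1 is attained when P = Q. The reason this quantity suits search problems is the probability-preservation property: any event E satisfies Q(E) >= P(E)^(a/(a-1)) / R_a(P‖Q), so an attacker succeeding with noticeable probability under P still succeeds under Q whenever the divergence is moderately bounded, even if the statistical distance is large.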
We present the state-of-the-art solvers for the Shortest and Closest Lattice Vector Problems in the Euclidean norm. We recall the three main families of algorithms for these problems, namely the algorithm by Micciancio and Voulgaris based on the Voronoi cell (STOC'10), the Monte-Carlo algorithms derived from the Ajtai, Kumar and Sivakumar algorithm (STOC'01), and the enumeration algorithms originally elaborated by Kannan (STOC'83) and by Fincke and Pohst (EUROCAL'83). We concentrate on the theoretical worst-case complexity bounds, but also consider some practical facets of these algorithms.
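The surveyed algorithms target arbitrary dimension, where the problem is hard; in dimension 2, by contrast, SVP is solved exactly and in polynomial time by Lagrange-Gauss reduction, which is a useful warm-up for the size-reduction steps inside enumeration. A minimal sketch (an illustration, not one of the surveyed solvers):

```python
def lagrange_gauss(u, v):
    """Lagrange-Gauss reduction of a 2D lattice basis (u, v), given as
    integer pairs.  Returns a reduced basis whose first vector is a
    shortest nonzero vector of the lattice."""
    def dot(x, y):
        return x[0] * y[0] + x[1] * y[1]

    if dot(u, u) > dot(v, v):
        u, v = v, u                      # keep u the shorter vector
    while True:
        # Subtract the nearest-integer multiple of u from v (a 1D Babai step).
        m = round(dot(u, v) / dot(u, u))
        v = (v[0] - m * u[0], v[1] - m * u[1])
        if dot(u, u) <= dot(v, v):
            return u, v                  # no further shortening possible
        u, v = v, u
```

For instance, the basis (3, 4), (4, 5) generates all of Z^2 (its determinant is -1), and the reduction indeed outputs two vectors of norm 1. In higher dimensions no such exact polynomial-time procedure is known, which is precisely why the three algorithm families above are needed.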
NTRUEncrypt, proposed in 1996 by Hoffstein, Pipher and Silverman, is the fastest known lattice-based encryption scheme. Its moderate key sizes, excellent asymptotic performance and conjectured resistance to quantum computers could make it a desirable alternative to factorisation- and discrete-log-based encryption schemes. However, since its introduction, doubts have regularly arisen concerning its security. In the present work, we show how to modify NTRUEncrypt to make it provably secure in the standard model, under the assumed quantum hardness of standard worst-case lattice problems, restricted to a family of lattices related to some cyclotomic fields. Our main contribution is to show that if the secret key polynomials are selected by rejection from discrete Gaussians, then the public key, which is their ratio, is statistically indistinguishable from uniform over its domain. The security then follows from the already proven hardness of the R-LWE problem.
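The key-generation shape being discussed (sample small f and g, publish their ratio h = g/f in R_q = Z_q[x]/(x^n + 1)) can be sketched at toy scale as follows. Everything here is illustrative: the parameters are tiny, the inverse is found by brute force rather than by an extended-Euclid computation, and the secrets are uniform ternary rather than the discrete Gaussians required for the indistinguishability result.

```python
import itertools
import random

def negacyclic_mul(a, b, q):
    """Product in Z_q[x]/(x^n + 1); coefficient lists are low-to-high."""
    n = len(a)
    c = [0] * n
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            k = i + j
            if k < n:
                c[k] = (c[k] + ai * bj) % q
            else:
                c[k - n] = (c[k - n] - ai * bj) % q  # x^n = -1
    return c

def invert(f, q):
    """Brute-force inverse of f in Z_q[x]/(x^n + 1); viable only for tiny n, q."""
    n = len(f)
    one = [1] + [0] * (n - 1)
    for cand in itertools.product(range(q), repeat=n):
        if negacyclic_mul(f, list(cand), q) == one:
            return list(cand)
    return None  # f is not invertible

def ntru_keygen(n, q):
    """Toy analogue of NTRU key generation: resample small f, g until f is
    invertible, then publish h = g * f^{-1} mod q.  The provably secure
    variant instead samples f and g by rejection from discrete Gaussians."""
    while True:
        f = [random.choice([-1, 0, 1]) % q for _ in range(n)]
        g = [random.choice([-1, 0, 1]) % q for _ in range(n)]
        f_inv = invert(f, q)
        if f_inv is not None:
            return (f, g), negacyclic_mul(g, f_inv, q)
```

The paper's main theorem says that, with Gaussian-sampled f and g of appropriate width, the distribution of h produced this way is statistically close to uniform over the invertible elements of R_q, which is what allows the reduction to R-LWE to go through.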