Abstract. The best lattice reduction algorithm known in practice for high dimension is Schnorr-Euchner's BKZ: all security estimates of lattice cryptosystems are based on NTL's old implementation of BKZ. However, recent progress on lattice enumeration suggests that BKZ and its NTL implementation are no longer optimal, but the precise impact on security estimates was unclear. We assess this impact thanks to extensive experiments with BKZ 2.0, the first state-of-the-art implementation of BKZ incorporating recent improvements, such as Gama-Nguyen-Regev pruning. We propose an efficient simulation algorithm to model the behaviour of BKZ in high dimension with blocksize ≥ 50, which can predict approximately both the output quality and the running time, thereby revising lattice security estimates. For instance, our simulation suggests that the smallest NTRUSign parameter set, which was claimed to provide at least 93-bit security against key-recovery lattice attacks, actually offers at most 65-bit security.
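Simulations of this kind typically rest on the Gaussian heuristic, which predicts the norm of a shortest nonzero vector of a "random" lattice from its volume. As a minimal illustration of that underlying estimate (this is only a sketch of the heuristic, not the paper's simulation algorithm; the function name is illustrative):

```python
import math

def gaussian_heuristic(n: int, log_volume: float = 0.0) -> float:
    """Gaussian-heuristic estimate GH(L) = (vol(L) / V_n)^(1/n) of the
    norm of a shortest nonzero vector of an n-dimensional lattice,
    where V_n is the volume of the n-dimensional Euclidean unit ball
    and log_volume is the natural log of vol(L)."""
    # log V_n = (n/2) * log(pi) - log Gamma(n/2 + 1)
    log_ball = (n / 2) * math.log(math.pi) - math.lgamma(n / 2 + 1)
    return math.exp((log_volume - log_ball) / n)
```

For a unit-volume lattice this estimate grows roughly like sqrt(n / (2·pi·e)), which is why the output quality of reduction algorithms in high dimension is naturally measured against this baseline.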
Lattice enumeration algorithms are the most basic algorithms for solving hard lattice problems such as the shortest vector problem and the closest vector problem, and are often used in public-key cryptanalysis either as standalone algorithms, or as subroutines in lattice reduction algorithms. Here we revisit these fundamental algorithms and show that surprising exponential speedups can be achieved both in theory and in practice by using a new technique, which we call extreme pruning. We also provide what is arguably the first sound analysis of pruning, which was introduced in the 1990s by Schnorr et al.
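To make pruning concrete, here is a toy depth-first enumeration sketch in Python (illustrative only, not the paper's algorithm): at level k, the squared norm of the candidate's projection must stay below a per-level bound bounds[k], so shrinking these bounds below the full search radius cuts off branches of the search tree; extreme pruning would additionally randomize the basis and retry many times with very aggressive bounds. The function names and the small Gram-Schmidt helper are assumptions made for this sketch.

```python
import math

def gso(B):
    """Gram-Schmidt orthogonalization of basis B (list of int vectors):
    returns (mu, c) with GSO coefficients mu[i][j] and c[i] = ||b_i*||^2."""
    n = len(B)
    Bstar = [list(map(float, b)) for b in B]
    mu = [[0.0] * n for _ in range(n)]
    c = [0.0] * n
    for i in range(n):
        for j in range(i):
            mu[i][j] = sum(x * y for x, y in zip(B[i], Bstar[j])) / c[j]
            Bstar[i] = [x - mu[i][j] * y for x, y in zip(Bstar[i], Bstar[j])]
        c[i] = sum(x * x for x in Bstar[i])
        mu[i][i] = 1.0
    return mu, c

def pruned_enum(B, bounds):
    """Toy enumeration with per-level pruning radii: bounds[k] is the
    squared radius allowed for the partial candidate at level k. Taking
    all bounds equal to R^2 gives full enumeration of radius R; smaller
    bounds at intermediate levels prune the tree. Returns the best
    (coefficient vector, squared norm) found."""
    n = len(B)
    mu, c = gso(B)
    best = [None, bounds[0]]  # shortest nonzero candidate found so far

    def recurse(k, coeffs, rho):
        if k < 0:
            if any(coeffs) and rho < best[1]:
                best[0], best[1] = list(coeffs), rho
            return
        # centre of the interval of admissible values for x_k
        centre = -sum(mu[j][k] * coeffs[j] for j in range(k + 1, n))
        half = math.sqrt(max(bounds[k] - rho, 0.0) / c[k])
        for x in range(math.ceil(centre - half), math.floor(centre + half) + 1):
            coeffs[k] = x
            recurse(k - 1, coeffs, rho + (x - centre) ** 2 * c[k])
        coeffs[k] = 0

    recurse(n - 1, [0] * n, 0.0)
    return best
```

On the basis {(2,0), (1,2)} the shortest nonzero vector is ±(2,0), of squared norm 4, and the enumeration finds it even with tightened bounds.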
Despite their popularity, lattice reduction algorithms remain mysterious cryptanalytical tools. Though it has been widely reported that they behave better than their proved worst-case theoretical bounds, no precise assessment has ever been given. Such an assessment would be very helpful to predict the behaviour of lattice-based attacks, as well as to select keysizes for lattice-based cryptosystems. The goal of this paper is to provide such an assessment, based on extensive experiments performed with the NTL library. The experiments suggest several conjectures on the worst case and the actual behaviour of lattice reduction algorithms. We believe the assessment might also help to design new reduction algorithms overcoming the limitations of current algorithms.

The integer d is the dimension of the lattice L. A lattice has infinitely many bases, but some are more useful than others. The goal of lattice reduction is to find interesting lattice bases, such as bases consisting of reasonably short and almost orthogonal vectors.

Lattice reduction is one of the few potentially hard problems currently in use in public-key cryptography (see [29,23] for surveys on lattice-based cryptosystems), with the unique property that some lattice-based cryptosystems [3,34,35,33,11] are based on worst-case assumptions. The problem is also well known for its major applications in public-key cryptanalysis (see [29]): knapsack cryptosystems [32], RSA in special settings [7,5], DSA signatures in special settings [16,26], etc. One peculiarity is the existence of very efficient approximation algorithms, which can sometimes solve the exact problem.
In practice, the most popular lattice reduction algorithms are: floating-point versions [37,27] of the LLL algorithm [20], the LLL algorithm with deep insertions [37], and the BKZ algorithms [37,38], which are all implemented in the NTL library [39]. Although these algorithms are widely used, their performance remains mysterious in many ways: it is folklore that there is a gap between the theoretical analysis and the experimental performance of lattice reduction algorithms.

N. Smart (Ed.): EUROCRYPT
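A standard way to quantify the quality of a reduced basis (not spelled out in the abstract above) is the root Hermite factor δ, defined by ||b_1|| = δ^n · vol(L)^(1/n) for the first basis vector; smaller δ means a better reduction. A hypothetical helper computing it:

```python
import math

def hermite_root_factor(norm_b1: float, log_volume: float, n: int) -> float:
    """Root Hermite factor delta of an n-dimensional reduced basis,
    defined by ||b_1|| = delta^n * vol(L)^(1/n); log_volume is the
    natural log of vol(L)."""
    return math.exp((math.log(norm_b1) - log_volume / n) / n)
```

Experiments of the kind described above amount to measuring how δ behaves, on average, for each reduction algorithm as the dimension grows.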
The celebrated Lenstra-Lenstra-Lovász lattice basis reduction algorithm (LLL) can naturally be viewed as an algorithmic version of Hermite's inequality on Hermite's constant. We present a polynomial-time blockwise reduction algorithm based on duality which can similarly be viewed as an algorithmic version of Mordell's inequality on Hermite's constant. This achieves a better and more natural approximation factor for the shortest vector problem than Schnorr's algorithm and its transference variant by Gama, Howgrave-Graham, Koy and Nguyen. Furthermore, we show that this approximation factor is essentially tight in the worst case.
The most famous lattice problem is the Shortest Vector Problem (SVP), which has many applications in cryptology. The best approximation algorithms known for SVP in high dimension rely on a subroutine for exact SVP in low dimension. In this paper, we assess the practicality of the best (theoretical) algorithm known for exact SVP in low dimension: the sieve algorithm proposed by Ajtai, Kumar and Sivakumar (AKS) in 2001. AKS is a randomized algorithm of time and space complexity 2^{O(n)}, which is theoretically much lower than the super-exponential complexity of all alternative SVP algorithms. Surprisingly, no implementation and no practical analysis of AKS has ever been reported. It was in fact widely believed that AKS was impractical: for instance, Schnorr claimed in 2003 that the constant hidden in the 2^{O(n)} complexity was at least 30. In this paper, we show that AKS can actually be made practical: we present a heuristic variant of AKS whose running time is (4/3+ε)^n polynomial-time operations, and whose space requirement is (4/3+ε)^{n/2} polynomially many bits. Our implementation can experimentally find shortest lattice vectors up to dimension 50, but is slower than classical alternative SVP algorithms in these dimensions.
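The heart of sieving can be caricatured in a few lines: keep a pool of lattice vectors and repeatedly replace one by its sum or difference with another whenever that makes it strictly shorter. The sketch below illustrates only this pairwise-reduction idea, not the AKS algorithm itself (which samples perturbed lattice points and comes with provable guarantees); the function name is illustrative.

```python
def sieve_step(vectors):
    """Toy pairwise-reduction pass in the spirit of sieve algorithms:
    replace a vector v by v - u or v + u whenever that strictly shortens
    v, until no pair can be improved. Input: a list of integer lattice
    vectors; output: the pool sorted by squared norm. Every replacement
    stays inside the lattice, since we only add or subtract pool vectors."""
    def norm2(w):
        return sum(x * x for x in w)

    changed = True
    while changed:
        changed = False
        for i, v in enumerate(vectors):
            for u in vectors:
                if u is v or norm2(u) == 0:
                    continue
                for sign in (-1, 1):
                    w = [a + sign * b for a, b in zip(v, u)]
                    # keep w only if it is a strictly shorter nonzero vector
                    if 0 < norm2(w) < norm2(v):
                        vectors[i] = v = w
                        changed = True
    return sorted(vectors, key=norm2)
```

Each replacement strictly decreases an integer squared norm, so the pass terminates; real sieves manage exponentially many vectors and bound how fast their norms shrink.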
Abstract. Lattices are regular arrangements of points in n-dimensional space, whose study appeared in the 19th century in both number theory and crystallography. Since the appearance of the celebrated LenstraLenstra-Lovász lattice basis reduction algorithm twenty years ago, lattices have had surprising applications in cryptology. Until recently, the applications of lattices to cryptology were only negative, as lattices were used to break various cryptographic schemes. Paradoxically, several positive cryptographic applications of lattices have emerged in the past five years: there now exist public-key cryptosystems based on the hardness of lattice problems, and lattices play a crucial rôle in a few security proofs. In this talk, we will try to survey the main examples of the two faces of lattices in cryptology. The full material of this talk appeared in [2]. A preliminary version can be found in [1].
Abstract. Most of the interesting algorithmic problems in the geometry of numbers are NP-hard as the lattice dimension increases. This article deals with the low-dimensional case. We study a greedy lattice basis reduction algorithm for the Euclidean norm, which is arguably the most natural lattice basis reduction algorithm, because it is a straightforward generalization of the well-known two-dimensional Gaussian algorithm. Our results are two-fold. From a mathematical point of view, we show that up to dimension four, the output of the greedy algorithm is optimal: the output basis reaches all the successive minima of the lattice. However, as soon as the lattice dimension is strictly higher than four, the output basis may not even reach the first minimum. More importantly, from a computational point of view, we show that up to dimension four, the bit-complexity of the greedy algorithm is quadratic without fast integer arithmetic: this allows one to solve various lattice problems (e.g. computing a Minkowski-reduced basis and a closest vector) in quadratic time, without fast integer arithmetic, up to dimension four, while all other algorithms known for such problems have a bit-complexity which is at least cubic. This was already proved by Semaev up to dimension three using rather technical means, but it was previously unknown whether or not the algorithm was still polynomial in dimension four. Our analysis, based on geometric properties of low-dimensional lattices and in particular Voronoï cells, arguably simplifies Semaev's analysis in dimensions two and three, unifies the cases of dimensions two, three and four, but breaks down in dimension five.

Introduction. A lattice is a discrete subgroup of R^n. Any lattice L has a lattice basis, i.e. a set {b_1, ..., b_d} of linearly independent vectors such that the lattice is the set of all integer linear combinations of the b_i's: L = {x_1 b_1 + ... + x_d b_d : x_1, ..., x_d ∈ Z}. A lattice basis is usually not unique, but all the bases have the same number of elements, called the dimension or rank of the lattice. In dimension higher than one, there are infinitely many bases, but some are more interesting than others: they are called reduced. Roughly speaking, a reduced basis is a basis made of reasonably short vectors which are almost orthogonal. Finding good reduced bases has proved invaluable in many fields of computer science and mathematics, particularly in cryptology (see for instance the survey [18]); and the computational complexity of lattice problems has attracted considerable attention in the past few years (see for instance the book [16]), following Ajtai's discovery [1] of a connection between the worst-case and average-case hardness of certain lattice problems.

There exist many different notions of reduction, such as those of Hermite [8], Minkowski [17], Korkine-Zolotarev (KZ) [9,11], Venkov [19], Lenstra-Lenstra-Lovász [13], etc. Among these, the most intuitive one is perhaps Minkowski's reduction; and up to dimension four, it is arguably optimal compared to all other known reductions, because it reaches ...
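The two-dimensional Gaussian (Lagrange) algorithm that the greedy algorithm generalizes is short enough to state in full; a sketch follows (illustrative function name, integer 2D vectors and a valid basis assumed):

```python
def gauss_reduce(u, v):
    """Two-dimensional Gaussian (Lagrange) reduction: the dimension-2
    special case of the greedy algorithm. Returns a basis (u, v) of the
    same lattice with ||u|| <= ||v|| and |<u, v>| <= ||u||^2 / 2, so that
    u is a shortest nonzero vector of the lattice. Assumes u, v are
    linearly independent integer 2D vectors."""
    def norm2(w):
        return w[0] * w[0] + w[1] * w[1]

    if norm2(u) > norm2(v):
        u, v = v, u
    while True:
        # subtract from v the integer multiple of u closest to its projection
        m = round((u[0] * v[0] + u[1] * v[1]) / norm2(u))
        v = (v[0] - m * u[0], v[1] - m * u[1])
        if norm2(v) >= norm2(u):
            return u, v
        u, v = v, u
```

Each swap strictly decreases the integer norm of the shorter vector, so the loop terminates; the greedy algorithm studied above extends exactly this "size-reduce, then swap if shorter" pattern to dimensions three and four.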