This work describes a fast fully homomorphic encryption scheme over the torus (TFHE), which revisits, generalizes and improves the fully homomorphic encryption (FHE) schemes based on GSW and its ring variants. The simplest FHE schemes consist of bootstrapped binary gates. In this gate bootstrapping mode, we show that the scheme FHEW of [29] can be expressed only in terms of an external product between a GSW and an LWE ciphertext. As a consequence of this result and of other optimizations, we decrease the running time of their bootstrapping from 690 ms to 13 ms on a single core, using a 16 MB bootstrapping key instead of 1 GB, while preserving the security parameter. In leveled homomorphic mode, we propose two methods to manipulate packed data, in order to decrease the ciphertext expansion and to optimize the evaluation of look-up tables and arbitrary functions in RingGSW-based homomorphic schemes. We also extend the automata logic, introduced in [31], to the efficient leveled evaluation of weighted automata, and present a new homomorphic counter called TBSR, which supports all the elementary operations that occur in a multiplication. These improvements speed up the evaluation of most arithmetic functions in a packed leveled mode, with a noise overhead that remains additive. We then present a new circuit bootstrapping that converts LWE ciphertexts into low-noise RingGSW ciphertexts in just 137 ms, which makes the leveled mode of TFHE composable and is fast enough to speed up arithmetic functions compared to the gate bootstrapping approach. Finally, we provide an alternative practical analysis of LWE-based schemes, which directly relates the security parameter to the error rate of LWE and the entropy of the LWE secret key, and we propose concrete parameter sets and timing comparisons for all our constructions.

This work was done while I. Chillotti was a PhD student in the Laboratoire de Mathématiques de Versailles, UVSQ (Versailles, France) and while M. Georgieva worked at Gemalto (Meudon, France).
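As a toy illustration of the LWE ciphertexts manipulated throughout (not of TFHE's torus arithmetic or its bootstrapping), the following self-contained Python sketch encrypts one bit under a binary LWE secret and decrypts it by rounding the phase. All parameters (n = 16, q = 1024, noise bound 4) are illustrative choices for this sketch, not the paper's concrete parameter sets.

```python
import random

def keygen(n):
    """Uniform binary LWE secret key (toy parameters)."""
    return [random.randrange(2) for _ in range(n)]

def encrypt(bit, s, n, q, noise=4):
    """Encrypt one bit as an LWE sample (a, b) with
    b = <a, s> + e + bit * q/2  (mod q)."""
    a = [random.randrange(q) for _ in range(n)]
    e = random.randint(-noise, noise)
    b = (sum(ai * si for ai, si in zip(a, s)) + e + bit * (q // 2)) % q
    return a, b

def decrypt(ct, s, q):
    """Recover the bit by checking whether the phase b - <a, s>
    is closer to 0 or to q/2 modulo q."""
    a, b = ct
    phase = (b - sum(ai * si for ai, si in zip(a, s))) % q
    return 1 if q // 4 <= phase < 3 * q // 4 else 0

n, q = 16, 1024
s = keygen(n)
assert decrypt(encrypt(0, s, n, q), s, q) == 0
assert decrypt(encrypt(1, s, n, q), s, q) == 1
```

Decryption is correct as long as the accumulated noise stays below q/4, which is exactly the budget that bootstrapping refreshes.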
Lattice enumeration algorithms are the most basic algorithms for solving hard lattice problems such as the shortest vector problem and the closest vector problem, and they are often used in public-key cryptanalysis, either as standalone algorithms or as subroutines in lattice reduction algorithms. Here we revisit these fundamental algorithms and show that surprising exponential speedups can be achieved, both in theory and in practice, by using a new technique which we call extreme pruning. We also provide what is arguably the first sound analysis of pruning, which was introduced in the 1990s by Schnorr et al.
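The enumeration-with-pruning idea can be sketched as a depth-first search over coefficient vectors in which each partial squared norm must stay below a per-level bound. This is a simplified illustration only: the `bounds` array stands in for the pruning radii, and the paper's extreme pruning and basis randomization are omitted.

```python
import math

def gram_schmidt(B):
    """Gram-Schmidt squared norms and mu coefficients of basis rows B."""
    d = len(B)
    Bs = [list(map(float, b)) for b in B]
    mu = [[0.0] * d for _ in range(d)]
    for i in range(d):
        for j in range(i):
            mu[i][j] = sum(x * y for x, y in zip(B[i], Bs[j])) / \
                       sum(y * y for y in Bs[j])
            Bs[i] = [x - mu[i][j] * y for x, y in zip(Bs[i], Bs[j])]
    return [sum(x * x for x in b) for b in Bs], mu

def enumerate_short(B, bounds):
    """Enumerate nonzero coefficient vectors x of the lattice spanned by B,
    pruning any branch whose partial squared norm exceeds bounds[level].
    Returns (best_norm2, best_coeffs), coefficients listed from last index."""
    d = len(B)
    norms, mu = gram_schmidt(B)
    best = [bounds[-1], None]

    def rec(i, partial, xs):
        if i < 0:
            if 1e-9 < partial < best[0]:       # skip the zero vector
                best[0], best[1] = partial, list(xs)
            return
        level = d - 1 - i                      # coordinates fixed so far
        # center of the admissible interval for x_i
        c = -sum(mu[j][i] * xs[d - 1 - j] for j in range(i + 1, d))
        r = math.sqrt(max(bounds[level] - partial, 0.0) / norms[i])
        for xi in range(math.ceil(c - r), math.floor(c + r) + 1):
            p = partial + (xi - c) ** 2 * norms[i]
            if p <= bounds[level]:             # pruning test
                rec(i - 1, p, xs + [xi])

    rec(d - 1, 0.0, [])
    return best
```

Setting all bounds equal to the search radius recovers plain enumeration; shrinking the early bounds prunes most of the tree at the cost of possibly missing the solution, which extreme pruning compensates for by rerunning on randomized bases.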
Despite their popularity, lattice reduction algorithms remain mysterious cryptanalytical tools. Though it has been widely reported that they behave better than their proved worst-case theoretical bounds, no precise assessment has ever been given. Such an assessment would be very helpful to predict the behaviour of lattice-based attacks, as well as to select key sizes for lattice-based cryptosystems. The goal of this paper is to provide such an assessment, based on extensive experiments performed with the NTL library. The experiments suggest several conjectures on the worst-case and actual behaviour of lattice reduction algorithms. We believe the assessment might also help to design new reduction algorithms overcoming the limitations of current algorithms.

The integer d is the dimension of the lattice L. A lattice has infinitely many bases, but some are more useful than others. The goal of lattice reduction is to find interesting lattice bases, such as bases consisting of reasonably short and almost orthogonal vectors. Lattice reduction is one of the few potentially hard problems currently in use in public-key cryptography (see [29,23] for surveys on lattice-based cryptosystems), with the unique property that some lattice-based cryptosystems [3,34,35,33,11] are based on worst-case assumptions. The problem is also well known for its major applications in public-key cryptanalysis (see [29]): knapsack cryptosystems [32], RSA in special settings [7,5], DSA signatures in special settings [16,26], etc. One peculiarity is the existence of very efficient approximation algorithms, which can sometimes solve the exact problem.
In practice, the most popular lattice reduction algorithms are: floating-point versions [37,27] of the LLL algorithm [20], the LLL algorithm with deep insertions [37], and the BKZ algorithms [37,38], all of which are implemented in the NTL library [39]. Although these algorithms are widely used, their performance remains mysterious in many ways: it is folklore that there is a gap between their proved theoretical bounds and their practical behaviour.
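A common way to quantify the "actual behaviour" of a reduced basis, as studied experimentally in work like this, is its root Hermite factor (‖b1‖ / vol(L)^(1/d))^(1/d). The following Python sketch computes it directly from a basis; the formula is standard, and the Gram-Schmidt computation is done in plain floats for brevity.

```python
import math

def root_hermite_factor(B):
    """Root Hermite factor of a basis B (rows are basis vectors):
    (||b1|| / vol(L)^(1/d))^(1/d), where vol(L) is the product of the
    Gram-Schmidt norms. Values close to 1 indicate a strongly reduced basis."""
    d = len(B)
    Bs = [list(map(float, b)) for b in B]
    logvol = 0.0
    for i in range(d):
        for j in range(i):
            mu = sum(x * y for x, y in zip(B[i], Bs[j])) / \
                 sum(y * y for y in Bs[j])
            Bs[i] = [x - mu * y for x, y in zip(Bs[i], Bs[j])]
        logvol += 0.5 * math.log(sum(x * x for x in Bs[i]))
    b1 = math.sqrt(sum(x * x for x in B[0]))
    return (b1 / math.exp(logvol / d)) ** (1.0 / d)
```

Experimental assessments of LLL and BKZ typically report how this quantity behaves as the dimension grows, and compare it against the proved worst-case bounds.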
To solve the approximate nearest neighbor search problem (NNS) on the sphere, we propose a method using locality-sensitive filters (LSF), with the property that nearby vectors have a higher probability of surviving the same filter than vectors which are far apart. We instantiate the filters using spherical caps of height 1 − α, where a vector survives a filter if it is contained in the corresponding spherical cap, and where ideally each filter has an independent, uniformly random direction. For small α, these filters are very similar to the spherical locality-sensitive hash (LSH) family previously studied by Andoni et al. For larger α bounded away from 0, these filters potentially achieve a superior performance, provided we have access to an efficient oracle for finding relevant filters. Whereas existing LSH schemes are limited by a performance parameter of ρ ≥ 1/(2c² − 1) to solve approximate NNS with approximation factor c, with spherical LSF we potentially achieve smaller asymptotic values of ρ, depending on the density of the data set. For sparse data sets, where the dimension is super-logarithmic in the size of the data set, we asymptotically obtain ρ = 1/(2c² − 1), while for a logarithmic dimensionality with density constant κ we obtain asymptotics of ρ ∼ 1/(4κc²). To instantiate the filters and prove the existence of an efficient decoding oracle, we replace the independent filters by filters taken from certain structured random product codes. We show that the additional structure in these concatenation codes allows us to decode efficiently using techniques similar to lattice enumeration, and that we can find the relevant filters with low overhead, while at the same time not significantly changing the collision probabilities.
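Stripped of the structured product codes, the basic filter test is just an inner-product threshold: a unit vector v survives a filter with direction f exactly when ⟨v, f⟩ ≥ α. The Python sketch below uses independent random filter directions, which is the idealized setting; the paper replaces these with random product codes precisely so that the relevant filters can be found efficiently.

```python
import math
import random

def random_unit(d, rng):
    """Uniform direction on the (d-1)-sphere via normalized Gaussians."""
    v = [rng.gauss(0, 1) for _ in range(d)]
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def survives(v, f, alpha):
    """Unit vector v survives filter direction f iff it lies in the
    spherical cap of height 1 - alpha around f, i.e. <v, f> >= alpha."""
    return sum(x * y for x, y in zip(v, f)) >= alpha

def candidate_pairs(vectors, filters, alpha):
    """Bucket vectors by the filters they survive; nearby vectors are more
    likely to share a bucket than far-apart ones."""
    buckets = [[] for _ in filters]
    for idx, v in enumerate(vectors):
        for i, f in enumerate(filters):
            if survives(v, f, alpha):
                buckets[i].append(idx)
    pairs = set()
    for idxs in buckets:
        for a in range(len(idxs)):
            for b in range(a + 1, len(idxs)):
                pairs.add((idxs[a], idxs[b]))
    return pairs
```

With many filters, only pairs that co-occur in some bucket are compared directly, which is the source of the sub-quadratic NNS cost.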
The celebrated Lenstra-Lenstra-Lovász lattice basis reduction algorithm (LLL) can naturally be viewed as an algorithmic version of Hermite's inequality on Hermite's constant. We present a polynomial-time blockwise reduction algorithm based on duality which can similarly be viewed as an algorithmic version of Mordell's inequality on Hermite's constant. This achieves a better and more natural approximation factor for the shortest vector problem than Schnorr's algorithm and its transference variant by Gama, Howgrave-Graham, Koy and Nguyen. Furthermore, we show that this approximation factor is essentially tight in the worst case.
Lattice reduction is a hard problem of interest to both public-key cryptography and cryptanalysis. Despite its importance, extremely few algorithms are known. The best algorithm known in high dimension is due to Schnorr, proposed in 1987 as a block generalization of the famous LLL algorithm. This paper deals with Schnorr's algorithm and potential improvements. We prove that Schnorr's algorithm outputs better bases than what was previously known: namely, we decrease all former bounds on Schnorr's approximation factors to their (ln 2)-th power. On the other hand, we also show that the output quality may have intrinsic limitations, even if an improved reduction strategy was used for each block, thereby strengthening recent results by Ajtai. This is done by making a connection between Schnorr's algorithm and a mathematical constant introduced by Rankin more than 50 years ago as a generalization of Hermite's constant. Rankin's constant leads us to introduce the so-called smallest volume problem, a new lattice problem which generalizes the shortest vector problem, and which has applications to blockwise lattice reduction generalizing LLL and Schnorr's algorithm, possibly improving their output quality. Schnorr's algorithm is actually based on an approximation algorithm for the smallest volume problem in low dimension. We obtain a slight improvement over Schnorr's algorithm by presenting a cheaper approximation algorithm for the smallest volume problem, which we call transference reduction.