We study the design of cryptographic primitives resilient to key-leakage attacks, where an attacker can repeatedly and adaptively learn information about the secret key, subject only to the constraint that the overall amount of such information is bounded by some parameter ℓ. We construct a variety of leakage-resilient public-key systems, including the first known identification schemes (ID), signature schemes, and authenticated key agreement protocols (AKA). Our main result is an efficient three-round leakage-resilient AKA in the Random-Oracle model. This protocol ensures that session keys are private and authentic even if (1) the adversary leaks a large fraction of the long-term secret keys of both users prior to the protocol execution and (2) the adversary completely learns the long-term secret keys after the protocol execution. In particular, our AKA protocol provides qualitatively stronger privacy guarantees than leakage-resilient public-key encryption schemes (constructed in prior and concurrent works), since such schemes necessarily become insecure if the adversary can perform leakage attacks after seeing a ciphertext. Moreover, our schemes can be flexibly extended to the Bounded-Retrieval Model, allowing us to tolerate a very large absolute amount of adversarial leakage (potentially many gigabytes of information), only by increasing the size of the secret key and without any other loss of efficiency in communication or computation. Concretely, given any leakage parameter ℓ, security parameter λ, and any desired fraction 0 < δ ≤ 1, our schemes have the following properties:
- Secret key size is (1 + δ)ℓ + O(λ). In particular, the attacker can learn approximately a (1 − δ) fraction of the secret key.
- Public key size is O(λ), independent of ℓ.
- Communication complexity is O(λ/δ), independent of ℓ.
- All computation reads at most O(λ/δ²) locations of the secret key, independent of ℓ.
Lastly, we show that our schemes allow for repeated "invisible updates" of the secret key, allowing us to tolerate up to ℓ bits of leakage between any two updates, and an unlimited amount of leakage overall. These updates require that the parties can securely store a short "master update key" (e.g., on a separate secure device protected against leakage), which is used only for updates and not during protocol execution. The updates are invisible in the sense that a party can update its secret key at any point in time, without modifying the public key or notifying the other users.
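The parameter trade-offs above can be made concrete with a back-of-the-envelope calculation. The sketch below is illustrative only: the asymptotic shapes ((1 + δ)ℓ + O(λ), O(λ), O(λ/δ)) come from the abstract, but the constants hidden inside the O(·) terms are hypothetical placeholders.

```python
# Illustrative sizes for the leakage-resilient schemes described above.
# The hidden constants (here, 4) are hypothetical; only the asymptotic
# shapes come from the abstract.

def brm_parameters(leakage_bits, sec_param, delta):
    """Return (secret_key_bits, public_key_bits, communication_bits)."""
    assert 0 < delta <= 1
    secret_key = (1 + delta) * leakage_bits + 4 * sec_param  # (1+d)*l + O(lambda)
    public_key = 4 * sec_param                               # O(lambda), independent of l
    communication = 4 * sec_param / delta                    # O(lambda/delta)
    return secret_key, public_key, communication

# Tolerating 1 GiB (8 * 2^30 bits) of leakage with lambda = 128, delta = 1/2:
sk, pk, comm = brm_parameters(leakage_bits=8 * 2**30, sec_param=128, delta=0.5)
print(sk / (8 * 2**30))  # secret key is roughly 1.5 GiB
```

Note how the public key and communication stay small and fixed while only the secret key grows with the leakage bound ℓ, which is the defining feature of the Bounded-Retrieval Model.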
Abstract. We revisit the problem of generating a "hard" random lattice together with a basis of relatively short vectors. This problem has gained in importance lately due to new cryptographic schemes that use such a procedure for generating public/secret key pairs. In these applications, a shorter basis directly corresponds to milder underlying complexity assumptions and smaller key sizes. The contributions of this work are twofold. First, using the Hermite normal form as an organizing principle, we simplify and generalize an approach due to Ajtai (ICALP 1999). Second, we improve the construction and its analysis in several ways, most notably by tightening the length of the output basis essentially to the optimum value.
Abstract. We construct the first public-key encryption scheme in the Bounded-Retrieval Model (BRM), providing security against various forms of adversarial "key leakage" attacks. In this model, the adversary is allowed to learn arbitrary information about the decryption key, subject only to the constraint that the overall amount of "leakage" is bounded by at most ℓ bits. The goal of the BRM is to design cryptographic schemes that can flexibly tolerate arbitrarily large leakage bounds (a few bits or many gigabytes), by only increasing the size of the secret key proportionally, while keeping all the other parameters (including the size of the public key, ciphertext, encryption/decryption time, and the number of secret-key bits accessed during decryption) small and independent of ℓ. As our main technical tool, we introduce the concept of an Identity-Based Hash Proof System (IB-HPS), which generalizes the notion of hash proof systems of Cramer and Shoup [CS02] to the identity-based setting. We give three different constructions of this primitive based on: (1) bilinear groups, (2) lattices, and (3) quadratic residuosity. As a result of independent interest, we show that an IB-HPS almost immediately yields an Identity-Based Encryption (IBE) scheme which is secure against (small) partial leakage of the target identity's decryption key. As our main result, we use IB-HPS to construct public-key encryption (and IBE) schemes in the Bounded-Retrieval Model.
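To make the IB-HPS primitive more concrete, here is a hedged interface sketch. The method names and signatures are our guesses for illustration, not the paper's exact syntax; the key idea is that a valid encapsulation can be decapsulated to the correct key, while an invalid encapsulation decapsulates to a value that is statistically close to uniform, which is what drives leakage resilience.

```python
from abc import ABC, abstractmethod

# Hypothetical interface for an Identity-Based Hash Proof System (IB-HPS).
# Method names are illustrative assumptions, not the paper's notation.

class IBHPS(ABC):
    @abstractmethod
    def setup(self, sec_param):
        """Return a master public/secret key pair (mpk, msk)."""

    @abstractmethod
    def keygen(self, msk, identity):
        """Return a (possibly randomized) secret key sk_id for identity."""

    @abstractmethod
    def encap(self, mpk, identity):
        """Valid mode: return (c, k), where decap(sk_id, c) recovers k."""

    @abstractmethod
    def encap_invalid(self, mpk, identity):
        """Invalid mode: return c such that decap(sk_id, c) is
        statistically close to uniform, even given the public view."""

    @abstractmethod
    def decap(self, sk_id, c):
        """Return the key encapsulated in c under sk_id."""
```

The two encapsulation modes must be computationally indistinguishable; security arguments then switch a challenge ciphertext to invalid mode, after which bounded leakage on sk_id cannot fully determine the encapsulated key.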
Abstract. We develop new theoretical tools for proving lower bounds on the (amortized) complexity of functions in a parallel setting. We demonstrate their use by constructing the first provably secure memory-hard functions (MHFs), a class of functions recently gaining acceptance in practice as an effective means to counter brute-force attacks on security-relevant functions. Pebbling games over graphs have proven to be a powerful abstraction for a wide variety of computational models. A dominant application of such games is proving complexity lower bounds using "pebbling reductions". These bound the complexity of a given function fG in some computational model M of interest in terms of the pebbling complexity of a related graph G, where the pebbling complexity is defined via a pebbling game G. In particular, finding a function with high complexity in M is reduced to (the hopefully simpler task of) finding graphs with high pebbling complexity in G. We introduce a very simple and natural pebbling game Gp for abstracting parallel computation models. As an important conceptual contribution we also define a natural measure of pebbling complexity called cumulative complexity (CC) and show how it overcomes a crucial shortcoming in the parallel setting exhibited by more traditional complexity measures used in past reductions. Next (analogous to the results of Tarjan et al. [PTC76,LT82] for sequential pebbling games) we demonstrate, via a novel and non-trivial proof, an explicit construction of a constant in-degree family of graphs whose CC in Gp approaches maximality to within a polylogarithmic factor for any graph of equal size. To demonstrate the use of these theoretical tools we give an example in the field of cryptography, by constructing the first provably secure memory-hard function (MHF).
Introduced by Percival [Per09], an MHF can be computed efficiently on a sequential machine but further requires the same amount of memory (or much greater time) amortized per evaluation on a parallel machine. Thus, while, say, an ASIC or FPGA may be faster than a CPU, the increased cost of working memory negates this advantage. In particular, MHFs crucially underlie the ASIC/FPGA-resistant proofs-of-work of the crypto-currency Litecoin [Cha11] and the brute-force-resistant key-derivation function scrypt [Per09], and can be used to increase the cost to an adversary of recovering passwords from a compromised login server. To place these applications on a more sound theoretical footing we first provide a new formal definition (in the Random Oracle Model) and an accompanying notion of amortized memory hardness to capture the intuitive property of an MHF (thereby remedying an important issue with the original definition of [Per09]). Next we define a mapping from a graph G to a related function fG. Finally we prove a pebbling reduction bounding the amortized memory hardness per evaluation of fG in terms of the CC of G in the game Gp. Together with the construction above, this gives rise to the first provably secure example of an MHF.
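The mapping from a graph G to a function fG can be sketched concretely. In this family of constructions, each node's label is the hash of its index together with its parents' labels, and the output is the label of the sink; memory hardness then follows from the graph's pebbling complexity. The sketch below is a minimal illustration under those assumptions, with SHA-256 standing in for the random oracle and a toy three-node DAG rather than the paper's high-CC construction.

```python
import hashlib

# Sketch of a graph-labeling function f_G: label(v) = H(x, v, labels of
# parents of v); the output is the sink's label. SHA-256 stands in for
# the random oracle. The graph here is a toy example, not the paper's
# high-CC construction.

def f_G(parents, x):
    """parents[v] lists the predecessors of node v; nodes 0..n-1 are
    given in topological order, node n-1 is the sink."""
    labels = []
    for v in range(len(parents)):
        h = hashlib.sha256()
        h.update(x)                       # the input is salted into every call
        h.update(v.to_bytes(4, "big"))
        for u in parents[v]:
            h.update(labels[u])
        labels.append(h.digest())
    return labels[-1]                      # output = label of the sink

# Toy DAG with edges 0 -> 1, 0 -> 2, 1 -> 2:
out = f_G([[], [0], [0, 1]], b"some input")
print(out.hex())
```

An honest sequential evaluator keeps the labels it still needs in memory; the pebbling reduction shows that any parallel strategy trading memory for recomputation pays for it in the cumulative complexity of G.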
Abstract. The learning with rounding (LWR) problem, introduced by Banerjee, Peikert and Rosen at EUROCRYPT '12, is a variant of learning with errors (LWE), where one replaces random errors with deterministic rounding. The LWR problem was shown to be as hard as LWE for a setting of parameters where the modulus and modulus-to-error ratio are super-polynomial. In this work we resolve the main open problem and give a new reduction that works for a larger range of parameters, allowing for a polynomial modulus and modulus-to-error ratio. In particular, a smaller modulus gives us greater efficiency, and a smaller modulus-to-error ratio gives us greater security, which now follows from the worst-case hardness of GapSVP with polynomial (rather than super-polynomial) approximation factors. As a tool in the reduction, we show that there is a "lossy mode" for the LWR problem, in which LWR samples only reveal partial information about the secret. This property gives us several interesting new applications, including a proof that LWR remains secure with weakly random secrets of sufficient min-entropy, and very simple constructions of deterministic encryption, lossy trapdoor functions and reusable extractors. Our approach is inspired by a technique of Goldwasser et al. from ICS '10, which implicitly showed the existence of a "lossy mode" for LWE. By refining this technique, we also improve on the parameters of that work, requiring only a polynomial (instead of super-polynomial) modulus and modulus-to-error ratio.
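The rounding idea behind LWR is simple to state: instead of publishing ⟨a, s⟩ + e mod q as in LWE, one publishes ⟨a, s⟩ mod q rounded down from Z_q to a smaller ring Z_p, so the "error" is deterministic. The toy sketch below illustrates sampling with made-up, cryptographically meaningless parameters.

```python
import random

# Toy LWR sampler: b = round(<a, s> mod q, scaled from Z_q to Z_p).
# Parameters are illustrative only and offer no security.

q, p, n = 257, 16, 8                       # modulus q, rounding modulus p, dimension n
rng = random.Random(0)
s = [rng.randrange(q) for _ in range(n)]   # secret vector in Z_q^n

def lwr_sample():
    a = [rng.randrange(q) for _ in range(n)]
    inner = sum(x * y for x, y in zip(a, s)) % q
    b = round(inner * p / q) % p           # deterministic rounding replaces the LWE error
    return a, b

a, b = lwr_sample()
print(a, b)
```

Because the rounding is deterministic, two samples with the same a always agree, which is exactly what makes LWR suitable for deterministic encryption and reusable extractors once its hardness is established.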