2014
DOI: 10.1007/978-3-642-55220-5_6

Key Derivation without Entropy Waste

Abstract. We revisit the classical problem of converting an imperfect source of randomness into a usable cryptographic key. Assume that we have some cryptographic application P that expects a uniformly random m-bit key R and ensures that the best attack (in some complexity class) against P(R) has success probability at most δ. Our goal is to design a key-derivation function (KDF) h that converts any random source X of min-entropy k into a sufficiently "good" key h(X), guaranteeing that P(h(X)) has comparable…
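The classical baseline for such an h is a seeded universal hash: by the leftover hash lemma it yields a key ε-close to uniform, but only for m ≤ k − 2·log(1/ε), i.e., it wastes 2·log(1/ε) bits of entropy, which is the waste the title refers to. Below is a minimal Python sketch of that baseline, assuming a pairwise-independent family h_{a,b}(x) = (a·x + b) mod P; the prime, parameter names, and the final truncation are illustrative choices of ours, not anything from the paper.

import secrets

# Illustrative parameter: a 521-bit Mersenne prime field; not from the paper.
P = 2**521 - 1

def sample_seed():
    """Public seed (a, b) selecting h_{a,b}(x) = (a*x + b) mod P from a
    pairwise-independent family."""
    return 1 + secrets.randbelow(P - 1), secrets.randbelow(P)

def kdf(seed, x_bytes, m_bits):
    """Derive an m-bit key from one raw sample of the weak source X.
    Leftover hash lemma: if X has min-entropy k, the output is eps-close
    to uniform whenever m_bits <= k - 2*log2(1/eps). (The final truncation
    is only near-uniform; acceptable for a sketch.)"""
    a, b = seed
    x = int.from_bytes(x_bytes, "big") % P
    return ((a * x + b) % P) % (2 ** m_bits)

# Usage: derive a 128-bit key from a 64-byte sample of an imperfect source.
seed = sample_seed()
sample = secrets.token_bytes(64)  # stand-in for one draw of X
key = kdf(seed, sample, 128)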

Cited by 17 publications (13 citation statements). References 22 publications.

“…Analogous observations were made in similar settings [1,5,4], where either the game is two-sided (e.g., indistinguishability applications) or the randomness is sampled from a slightly defective min-entropy source. Plugging this lemma into [10] immediately yields a simpler proof for the key lemma of [10] (see Lemma 3.2), namely, "any $k$-th iterate (instantiated with a regular OWF) is hard to invert".…”
Section: Introduction
confidence: 83%
“…See Remark 2.1 for some discussions. The Rényi entropy deficiency of a random variable $W$ over a set $\mathcal{W}$ refers to the difference between the entropies of $U_{\mathcal{W}}$ and $W$, i.e., $\log|\mathcal{W}| - H_2(W)$, where $U_{\mathcal{W}}$ denotes the uniform distribution over $\mathcal{W}$ and $H_2(W)$ is the Rényi entropy of $W$. We should not confuse "weakly-regular" with "weakly-one-way": the former "weakly" describes regularity (i.e., regular on a noticeable fraction, as in Definition 2.4), while the latter describes one-wayness (i.e., hard to invert on a noticeable fraction [19]).…”
Section: Introduction
confidence: 99%
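The deficiency formula quoted above is easy to check numerically. The following self-contained Python snippet computes $H_2(W) = -\log_2 \sum_w \Pr[W=w]^2$ and the deficiency $\log_2|\mathcal{W}| - H_2(W)$ over a toy four-point distribution (our own example, not from the cited paper):

import math

def renyi_entropy(p):
    """Rényi (collision) entropy H_2 in bits: -log2(sum_w Pr[W=w]^2)."""
    return -math.log2(sum(q * q for q in p))

def deficiency(p):
    """Entropy deficiency log2(|support|) - H_2(W), as in the excerpt above."""
    return math.log2(len(p)) - renyi_entropy(p)

uniform = [0.25, 0.25, 0.25, 0.25]
skewed = [0.7, 0.1, 0.1, 0.1]
print(renyi_entropy(uniform), deficiency(uniform))  # 2.0 bits, deficiency 0.0
print(renyi_entropy(skewed), deficiency(skewed))    # ~0.94 bits, deficiency ~1.06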
“…A condenser is like a randomness extractor, but the output is allowed to be slightly entropy deficient. Efficient condensers are known with smaller entropy loss than is possible for randomness extractors (for example, $k$-wise independent hashes [DPW14]). We now argue that there exists a distribution $Y$ where $H_\infty(Y \mid \mathrm{seed}) \ge \alpha(n - \beta)$ and $(V, \mathrm{seed}_1, \ldots, \mathrm{seed}_n) \approx (Y, \mathrm{seed}_1, \ldots, \mathrm{seed}_n)$.…”
Section: Information-Theoretic Construction for Sparse Block Sources
confidence: 99%
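Both this excerpt and the next rely on the same tool from [DPW14]: a $k$-wise independent hash used as a condenser rather than an extractor. A standard realization of such a family is a uniformly random polynomial of degree $k-1$ over a prime field; the sketch below shows that construction, with the field size, the truncation, and all parameter values being our illustrative assumptions rather than the exact parameterization in [DPW14]:

import secrets

# Illustrative 61-bit Mersenne prime field; not the parameters from [DPW14].
P = 2**61 - 1

def sample_hash(k):
    """k random coefficients define x -> c_0 + c_1*x + ... + c_{k-1}*x^{k-1} mod P,
    a k-wise independent family (any k distinct inputs map to independent,
    uniform outputs)."""
    return [secrets.randbelow(P) for _ in range(k)]

def condense(coeffs, x, m_bits):
    """Evaluate the polynomial (Horner's rule) and truncate to m bits.
    A condenser's output may be slightly entropy deficient instead of
    near-uniform, which is what permits less entropy loss than an extractor."""
    y = 0
    for c in reversed(coeffs):
        y = (y * x + c) % P
    return y % (2 ** m_bits)

# Usage: an 8-wise independent instance condensing a sample into 32 bits.
h = sample_hash(8)
out = condense(h, 123456789, 32)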
“…$H_{\mathrm{Cond}}$ denotes a $(\log(1/\varepsilon)+1)$-wise independent hash function, which is shown to be a good condenser (as stated in the table) for min-entropy in [DPW14]. The bounds for HILL entropy follow directly from the bound for min-entropy.…”
Section: A Figures
confidence: 99%