2015
DOI: 10.1007/978-3-662-48324-4_19

Predictive Models for Min-entropy Estimation

Abstract: Random numbers are essential for cryptography. In most real-world systems, these values come from a cryptographic pseudorandom number generator (PRNG), which in turn is seeded by an entropy source. The security of the entire cryptographic system then relies on the accuracy of the claimed amount of entropy provided by the source. If the entropy source provides less unpredictability than is expected, the security of the cryptographic mechanisms is undermined, as in [5,7,10]. For this reason, correctly estimating…
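The quantity at issue in the abstract is min-entropy, the most conservative entropy measure: for a source with maximum symbol probability p_max, it is H∞ = −log₂(p_max). A minimal sketch of the plug-in estimate from observed samples (function name and example data are illustrative, not from the paper):

```python
import math
from collections import Counter

def min_entropy(samples):
    """Plug-in min-entropy estimate: H_inf = -log2(p_max),
    where p_max is the empirical probability of the most common symbol."""
    counts = Counter(samples)
    p_max = max(counts.values()) / len(samples)
    return -math.log2(p_max)

# A biased bit source with p(0) = 0.75 yields -log2(0.75) ≈ 0.415 bits/sample
print(round(min_entropy([0, 0, 0, 1]), 3))  # 0.415
```

This naive estimate assumes IID samples; the paper's point is that predictors can detect structure that such per-symbol counting misses.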

Cited by 27 publications (45 citation statements). References 10 publications.
“…The calculations for the local entropy estimate come from the probability theory of runs and recurrent events [Fel50]. For more information about min-entropy estimation using predictors, see [Kel15].…”
Section: G2 Predictors (mentioning)
confidence: 99%
“…The first approach is based on entropic statistics, first described for IID data in [HD12], and later applied to non-IID data [HD12]. The second approach is based on predictors, first described in [Kel15].…”
Section: Acronyms (mentioning)
confidence: 99%
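The predictor-based approach cited above estimates min-entropy from the accuracy of a model that guesses each sample from the preceding ones. A hedged sketch of one such predictor, loosely in the spirit of the most-common-in-window predictors described in [Kel15] (the window size and function name here are illustrative, not the paper's exact estimator):

```python
import math
from collections import Counter, deque

def predictor_min_entropy(seq, window=16):
    """Predict each sample as the most common symbol in a sliding window
    of recent samples; the predictor's accuracy lower-bounds p_max,
    giving the min-entropy estimate -log2(accuracy)."""
    hist = deque(maxlen=window)
    correct = 0
    predictions = 0
    for x in seq:
        if hist:
            guess = Counter(hist).most_common(1)[0][0]
            predictions += 1
            if guess == x:
                correct += 1
        hist.append(x)
    if predictions == 0:
        return float("inf")  # sequence too short to predict anything
    acc = correct / predictions
    return -math.log2(acc) if acc > 0 else float("inf")
```

On a heavily biased sequence the predictor is usually right, so the estimate is close to zero bits per sample, flagging the source as weak even when simpler statistics might not.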
“…assumption for entropy estimation is dictated by our ignorance about the type of graph to embed and the effect of the embedding function. Recent advances in cryptography [26], where estimating the correct amount of uncertainty is critical, point towards learning techniques that exploit the knowledge available about the random sources (the graphs and the embedding functions). In section 5.5, where we validate the commute time embedding as the most successful embedding function for graph matching/similarity purposes, we will analyze the impact of this choice in the entropy estimator.…”
Section: Motivation and Previous Work (mentioning)
confidence: 99%
“…Using Lagrange multipliers (one for each constraint), the problem is equivalent to maximizing (26) where the second (entropic) term relies both on an x log(x) barrier function and on β [41]. The third term contains the N (N − 1)/2 − N Lagrange multipliers α ij (one multiplier per constraint).…”
Section: The Optimality of the CT Embedding (mentioning)
confidence: 99%