2017
DOI: 10.1038/s41598-017-03185-y

Improving randomness characterization through Bayesian model selection

Abstract: Random number generation plays an essential role in technology, with important applications in areas ranging from cryptography to Monte Carlo methods and other probabilistic algorithms. All such applications require high-quality sources of random numbers, yet effective methods for assessing whether a source produces truly random sequences are still missing. Current methods either do not rely on a formal description of randomness (the NIST test suite) on the one hand, or are inapplicable in principle (the characteri…


Cited by 4 publications (9 citation statements) · References 11 publications (9 reference statements)
“…Randomness characterization through Bayesian model selection has some clear and natural advantages, as already pointed out in [16], but, unfortunately, it has an important drawback: the number of all possible models for a given length i, given by B_{2^i}, grows supra-exponentially with i: indeed, for i = 1, we have two possible models, for i = 2, we have 15 possible models, for i = 3, we have instead 4140 possible models, while, for i = 4, we have 10,480,142,147 models. Thus, even if we are able to acquire data for the evaluation of these many models, it becomes computationally impractical to estimate the posterior for all of them using Equation (2).…”
Section: Tests Of Randomnessmentioning
confidence: 96%
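The growth quoted in this citation statement can be checked numerically: B_n is the Bell number (the number of set partitions of n elements), and the model count for word length i is B_{2^i}. A minimal sketch using the Bell triangle recurrence (the helper name `bell` is ours, not from the paper):

```python
# Sketch: checking the supra-exponential growth of the model count.
# The number of candidate models for word length i is the Bell number
# B_{2^i}, i.e. the number of partitions of the 2^i possible binary words.

def bell(n):
    """Return the n-th Bell number via the Bell triangle."""
    row = [1]                      # row 0 of the triangle; B(0) = 1
    for _ in range(n):
        new = [row[-1]]            # each row starts with the previous row's last entry
        for x in row:
            new.append(new[-1] + x)
        row = new
    return row[0]                  # B(n) is the first entry of row n

# Model counts for i = 1..4, matching the figures quoted above
print([bell(2**i) for i in range(1, 5)])
# → [2, 15, 4140, 10480142147]
```

The last value, B_16 ≈ 1.05 × 10^10, shows why evaluating the posterior for every model already becomes impractical at i = 4.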
“…Recently, a Bayesian criterion has been introduced [16,17] by some of the authors of the present article to test, from a purely probabilistic point of view, whether a sequence is maximally random as understood within information theory [18]. The method works by exploiting the Borel-normality compression scheme and then recasting the problem of finding possible biases in the sequence as an inferential one in which Bayesian model selection can be applied.…”
Section: Tests Of Randomnessmentioning
confidence: 99%
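The inferential step described in this citation statement — recasting bias detection as Bayesian model selection — can be illustrated with a deliberately simplified Beta-Bernoulli sketch. This is not the paper's exact scheme (which first applies the Borel-normality compression), only the generic comparison of a "maximally random" model against a "biased" alternative via marginal likelihoods; all function names here are ours:

```python
# Illustrative Bayesian model selection on a binary sequence of n bits
# with k ones. M0: fair coin (p = 1/2). M1: unknown bias p with a
# uniform prior. Equal prior model probabilities are assumed.
from math import comb, log2

def log2_evidence_fair(n):
    # P(data | M0) = (1/2)^n for any sequence of n bits
    return -n

def log2_evidence_biased(n, k):
    # P(data | M1) = integral of p^k (1-p)^(n-k) dp over [0, 1]
    #              = 1 / ((n + 1) * C(n, k))
    return -log2((n + 1) * comb(n, k))

def log2_bayes_factor(n, k):
    # > 0 favors the maximally random model M0; < 0 favors a bias
    return log2_evidence_fair(n) - log2_evidence_biased(n, k)

print(log2_bayes_factor(100, 50))   # balanced sequence: positive, M0 wins
print(log2_bayes_factor(100, 90))   # heavily biased: negative, M1 wins
```

The point of the construction is that the marginal likelihood automatically penalizes the more flexible model M1, so a genuinely random-looking sequence is not flagged as biased merely because M1 can fit it better at its maximum-likelihood parameter.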
“…Current methods either do not rely on a formal description of randomness (e.g., the NIST test suite) or are inapplicable in principle, requiring testing of all possible computer programs that could produce the sequence. A method that behaves like a genuine QRNG and overcomes these difficulties based on Bayesian model selection was proposed [3]. Moreover, hardware TRNGs are used to create encryption keys, and offer advantages over software PRNGs.…”
Section: Introductionmentioning
confidence: 99%