2016
DOI: 10.4230/lipics.stacs.2016.54
Algorithmic Statistics, Prediction and Machine Learning

Cited by 1 publication (2 citation statements)
References: 0 publications
“…Let us note that in a more general setting [25] where we consider several strings as outcomes of the repeated experiment (with independent trials) and look for a model that explains all of them, a similar result is not true: not every probability distribution can be transformed into a uniform one.…”
Section: Optimality Deficiency
Confidence: 99%
“…Though theorem 11 looks like a technical statement, it has important consequences; it implies that the two approaches, based on randomness and optimality deficiencies, remain equivalent in the case of a bounded class of descriptions. The proof technique can also be used to prove the Epstein–Levin theorem [11], as explained in [31]; a similar technique was used by A. Milovanov in [25], where a common model for several strings is considered.…”
Section: Randomness and Optimality Deficiencies: Restricted Case
Confidence: 99%
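
For context on the two deficiencies mentioned in the excerpts above, here is a brief LaTeX sketch of the standard finite-set definitions from algorithmic statistics; the notation is an assumption on my part and is not quoted from the cited papers. For a binary string x and a finite set A containing x, with C denoting Kolmogorov complexity:

% Standard finite-set model definitions (notation assumed, not taken
% from the cited excerpts).
\[
  d(x \mid A) \;=\; \log_2 |A| \;-\; C(x \mid A)
  \qquad \text{(randomness deficiency)},
\]
\[
  \delta(x, A) \;=\; C(A) \;+\; \log_2 |A| \;-\; C(x)
  \qquad \text{(optimality deficiency)}.
\]

The second excerpt refers to the equivalence of the model-selection approaches based on these two quantities when the class of admissible descriptions (sets A) is restricted.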