2008 IEEE Information Theory Workshop
DOI: 10.1109/itw.2008.4578670
Monte Carlo estimation of minimax regret with an application to MDL model selection

Abstract: Minimum description length (MDL) model selection, in its modern NML formulation, involves a model complexity term which is equivalent to the minimax/maximin regret. When the data are discrete-valued, the complexity term is the logarithm of a sum of maximized likelihoods over all possible data sets. Because the sum has an exponential number of terms, its evaluation is in many cases intractable. In the continuous case, the sum is replaced by an integral for which a closed form is available in only a few cases…
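To make the complexity term concrete: for discrete data it is log C_n with C_n = Σ_{x^n} p(x^n | θ̂(x^n)), the sum of maximized likelihoods over all possible data sets. The sketch below is a minimal illustration of the Monte Carlo idea on a toy Bernoulli model, not the estimator developed in the paper; all function names are our own, and the Bernoulli case is chosen because the 2^n-term sum collapses onto the sufficient statistic k, so an exact value is available for comparison.

```python
import numpy as np
from scipy.special import gammaln, logsumexp

def log_terms(n, ks):
    """log [ C(n,k) * (k/n)^k * ((n-k)/n)^(n-k) ]: the total maximized
    likelihood of all Bernoulli data sets sharing the sufficient statistic k."""
    ks = np.asarray(ks)
    safe_k = np.maximum(ks, 1)       # avoid log(0); masked out below
    safe_nk = np.maximum(n - ks, 1)
    return (gammaln(n + 1) - gammaln(ks + 1) - gammaln(n - ks + 1)
            + np.where(ks > 0, ks * np.log(safe_k / n), 0.0)
            + np.where(ks < n, (n - ks) * np.log(safe_nk / n), 0.0))

def mc_log_complexity(n, num_samples=100_000, seed=0):
    """Importance-sampling estimate of log C_n for the Bernoulli model.
    Proposal: k ~ Uniform{0,...,n}, so C_n = (n+1) * E[exp(log_terms(n, k))]."""
    rng = np.random.default_rng(seed)
    ks = rng.integers(0, n + 1, size=num_samples)
    log_weights = np.log(n + 1) + log_terms(n, ks)
    # log of the sample mean of the importance weights, computed stably
    return logsumexp(log_weights) - np.log(num_samples)

def exact_log_complexity(n):
    """Exact log C_n: the 2^n-term sum collapses to n + 1 terms here."""
    return logsumexp(log_terms(n, np.arange(n + 1)))

print(mc_log_complexity(1000))     # Monte Carlo estimate
print(exact_log_complexity(1000))  # exact value, for comparison
```

In realistic models no such reduction to a small sufficient statistic exists, which is exactly why one resorts to Monte Carlo estimates of the sum over data sets.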

Cited by 11 publications (9 citation statements)
References 24 publications (29 reference statements)
“…One way to sidestep this problem is to approximate the NML penalties using Monte Carlo integration methods (Evans & Swartz, 2000), as done by Kellen (2011, 2015) when comparing different models of recognition memory (see also Roos, 2008).…”
Section: The Challenge of Computing NML and a Monte Carlo Solution (mentioning)
confidence: 99%
“…If r grows slower than n or not at all, the leading term tends to the classical form (29), where the leading term is (k/2) log n. In practice, the approximation (30) is applicable for a wide range of r/n ratios. Roos [2008] and Zou and Roos [2017] studied the third term in the expansion of Eq. (29), namely the Fisher information integral, under Markov chains and Bayesian networks using Monte Carlo sampling techniques.…”
Section: Asymptotic Expansions for Graphical Models (mentioning)
confidence: 99%
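For context on the quoted passage: under standard regularity conditions, the expansion in question is Rissanen's (1996) asymptotic form of the NML complexity of a k-parameter model, whose constant (third) term contains the Fisher information integral that the cited works estimate by Monte Carlo sampling:

$$\log C_n \;=\; \frac{k}{2}\log\frac{n}{2\pi} \;+\; \log\int_{\Theta}\sqrt{\det I(\theta)}\,\mathrm{d}\theta \;+\; o(1),$$

where I(θ) denotes the Fisher information matrix.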
“…However, instead of resorting to factorized NML variants, where no numerical guarantees about the approximation error are known, we estimate NML by Monte Carlo sampling in the same fashion as in [13]. The obtained estimates can be shown to be consistent as the number of simulated samples is increased.…”
Section: Approximation of Normalized Maximum Likelihood (mentioning)
confidence: 99%
“…We need to consider other approximate methods such as the Monte Carlo sampling method introduced in [13]. Based on the law of large numbers, the sample average is guaranteed to converge to the mean if the sampling size is large.…”
Section: Monte Carlo Approximation of NML (mentioning)
confidence: 99%
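As a quick check of the law-of-large-numbers behavior described in the quote, one can rerun the toy Bernoulli sketch from above at increasing sample sizes and watch the estimate settle toward the exact value (the helper names are from that sketch, not from the paper):

```python
# Convergence of the Monte Carlo estimate (toy Bernoulli model above):
# by the law of large numbers, the sample average of the importance
# weights converges to C_n as the number of simulated samples grows.
n = 1000
target = exact_log_complexity(n)
for m in (10**2, 10**4, 10**6):
    est = mc_log_complexity(n, num_samples=m, seed=42)
    print(f"samples={m:>9,}  estimate={est:.5f}  exact={target:.5f}")
```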