2019
DOI: 10.3150/17-bej993
Consistent order estimation for nonparametric hidden Markov models

Abstract: In this paper, we introduce a new estimator for the emission densities of a nonparametric hidden Markov model. It is adaptive and minimax with respect to each state's regularity, as opposed to globally minimax estimators, which adapt to the worst regularity among the emission densities. Our method is based on Goldenshluger and Lepski's methodology. It is computationally efficient and only requires a family of preliminary estimators, without any restriction on the type of estimators considered. We present two suc…
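The Goldenshluger–Lepski selection step mentioned in the abstract can be illustrated on ordinary kernel density estimation. The sketch below is not the paper's estimator for HMM emission densities; it is a minimal version of GL bandwidth selection with Gaussian kernels, and the variance term V(h), including its constant c, is an illustrative assumption rather than the paper's calibration.

```python
import numpy as np

def gl_bandwidth_selection(x, bandwidths, grid, c=1.0):
    """Goldenshluger-Lepski bandwidth selection for a Gaussian KDE.

    Minimal sketch: the penalty V(h) = c * sqrt(log n / (n h)) and the
    constant c are illustrative assumptions, not a recommended calibration.
    """
    n = len(x)

    def kde(h):
        # Gaussian KDE on the grid; note that K_h * K_{h'} is again a
        # Gaussian kernel with bandwidth sqrt(h^2 + h'^2).
        z = (grid[:, None] - x[None, :]) / h
        return np.exp(-0.5 * z**2).sum(axis=1) / (n * h * np.sqrt(2 * np.pi))

    V = {h: c * np.sqrt(np.log(n) / (n * h)) for h in bandwidths}
    best_h, best_crit = None, np.inf
    for h in bandwidths:
        # A(h): compare the doubly smoothed estimator f_{h,h'} with f_{h'}
        # uniformly over the grid, after removing the stochastic term V(h').
        A = 0.0
        for hp in bandwidths:
            f_hhp = kde(np.sqrt(h**2 + hp**2))
            f_hp = kde(hp)
            A = max(A, np.max(np.abs(f_hhp - f_hp)) - V[hp])
        crit = A + 2 * V[h]
        if crit < best_crit:
            best_h, best_crit = h, crit
    return best_h

# Example: select a bandwidth for a bimodal sample.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 0.5, 500), rng.normal(2, 1.0, 500)])
grid = np.linspace(-6, 6, 200)
print(gl_bandwidth_selection(x, bandwidths=np.geomspace(0.05, 1.0, 15), grid=grid))
```

The convenient point of the Gaussian choice is that the double smoothing needed for the comparison term stays in closed form, so no numerical convolution is required.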

Cited by 16 publications (16 citation statements) · References 38 publications (57 reference statements)
“…Note that multiple Markov states are often required for one conductance level, e.g., to accommodate different noise levels or dwell times. Though data-driven model-selection tools are available, see, e.g., (Gassiat and Keribin 2000; Gassiat and Boucheron 2003; Celeux and Durand 2008; Chambaz et al. 2009; Lehéricy 2019) and the references therein, this is often done manually by an empirical data analysis or by repeating the steps below until the results are satisfying, which can be time-consuming and introduces subjectivity. As soon as a specific HMM is selected, the parameters of the Markov model can be estimated either by the Baum–Welch algorithm, see (Venkataramanan et al. 2000; Qin et al. 2000), by Bayesian approaches, in particular MCMC sampling, see (de Gunst et al. 2001; Siekmann et al. 2011), or by approaches based on the conductance (current) distribution, see (Yellen 1984; Heinemann and Sigworth 1991; Schroeder 2015) and the references therein.…”
Section: HMM-based Analysis
Mentioning, confidence: 99%
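As a concrete (hypothetical) instance of such a data-driven selection step, one can fit Gaussian HMMs with increasing numbers of states and compare them by a BIC-type penalized likelihood. The sketch below uses the hmmlearn package; the crude parameter count in the penalty is an illustrative assumption, not a recommended calibration.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

def select_hmm_order(X, max_states=6, seed=0):
    """Pick the number of hidden states by BIC.

    Minimal sketch of data-driven order selection; the parameter count
    (transition matrix + initial law + Gaussian means/variances) is a
    rough illustrative choice.
    """
    n = len(X)
    best_k, best_bic = None, np.inf
    for k in range(1, max_states + 1):
        model = GaussianHMM(n_components=k, covariance_type="diag",
                            n_iter=200, random_state=seed)
        model.fit(X)
        log_lik = model.score(X)  # total log-likelihood of the sequence
        n_params = k * (k - 1) + (k - 1) + 2 * k * X.shape[1]
        bic = -2.0 * log_lik + n_params * np.log(n)
        if bic < best_bic:
            best_k, best_bic = k, bic
    return best_k
```

For a recorded current trace, one would pass the measurements as a column vector X of shape (n_samples, 1); the returned state count then replaces the manual trial-and-error step described above.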
“…We can see that the estimator is nicely bounded by $M(x_1^n) \leq n$, since $\hat{P}(x_1^n \mid k) = 1$ for $k \geq n$. In the literature on Markov order estimation [83, 85–95], a sublinear penalty $-\log w_n = o(n)$ in estimators resembling (36) can be traced in [88, 90, 94]. In the literature on hidden Markov order estimation [84, 96–104], the majority of articles consider very similar ideas and prove the strong consistency of related estimators. Thus, we do not claim any particular originality for estimator (36).…”
Section: Finite-state Processes
Mentioning, confidence: 99%
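For intuition, a penalized maximum-likelihood Markov order estimator of the kind this literature studies fits in a few lines. The sketch below uses a BIC-style penalty $|A|^k(|A|-1)/2 \cdot \log n$ as an illustrative stand-in for the sublinear $-\log w_n$ penalty discussed in the quoted text; it is not estimator (36) itself.

```python
import numpy as np
from collections import Counter

def estimate_markov_order(seq, alphabet_size, max_order=5):
    """Penalized-likelihood order estimation for a finite-alphabet chain.

    Sketch only: uses a BIC-type penalty instead of the sublinear
    -log w_n penalty discussed in the text.
    """
    n = len(seq)
    best_k, best_score = 0, np.inf
    for k in range(max_order + 1):
        # Count length-k contexts and the symbol following each one.
        ctx_sym = Counter(tuple(seq[i:i + k + 1]) for i in range(n - k))
        ctx = Counter(tuple(seq[i:i + k]) for i in range(n - k))
        log_lik = sum(c * np.log(c / ctx[key[:-1]])
                      for key, c in ctx_sym.items())
        penalty = (alphabet_size ** k) * (alphabet_size - 1) / 2 * np.log(n)
        score = -log_lik + penalty
        if score < best_score:
            best_k, best_score = k, score
    return best_k

# Example: data from a true order-1 chain should typically yield 1.
rng = np.random.default_rng(1)
P = np.array([[0.9, 0.1], [0.2, 0.8]])
seq, s = [], 0
for _ in range(5000):
    s = rng.choice(2, p=P[s])
    seq.append(s)
print(estimate_markov_order(seq, alphabet_size=2))
```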
“…First, it builds on recent advances in the estimation of the parameters of hidden Markov models (HMMs) using spectral method-of-moments techniques, which involve the spectral decomposition of certain low-order multivariate moments computed from the data (Anandkumar et al. 2012, 2014; Azizzadenesheli et al. 2016). It benefits from the theoretical finite-sample bounds of spectral estimators, while finite-sample guarantees for alternatives such as maximum-likelihood estimators remain an open problem (Lehéricy 2019). Second, it builds on the well-known "upper confidence bound" (UCB) method in reinforcement learning (Auer 2007; Jaksch et al. 2010).…”
Section: Introduction
Mentioning, confidence: 99%
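A simple entry point to these spectral ideas: for a discrete-observation HMM with K hidden states, the pair-probability matrix P[a, b] = P(X_{t+1} = a, X_t = b) factors as O T diag(π) Oᵀ and therefore has rank at most K, so the number of states can be read off the singular values of its empirical version. The sketch below illustrates this rank argument on simulated data; the relative threshold rule is an assumption for illustration, not the estimator of any of the cited papers.

```python
import numpy as np

def spectral_state_count(obs, n_symbols, threshold=0.05):
    """Estimate the number of hidden states from the empirical
    co-occurrence matrix of consecutive observations.

    Sketch: P_hat[a, b] ~ P(X_{t+1}=a, X_t=b) has rank <= K for a
    K-state HMM, so we count singular values above a heuristic cut.
    """
    P_hat = np.zeros((n_symbols, n_symbols))
    for a, b in zip(obs[1:], obs[:-1]):
        P_hat[a, b] += 1.0
    P_hat /= P_hat.sum()
    s = np.linalg.svd(P_hat, compute_uv=False)
    return int(np.sum(s > threshold * s[0]))

# Example: simulate a 2-state HMM with 4 observation symbols.
rng = np.random.default_rng(2)
T = np.array([[0.8, 0.2], [0.3, 0.7]])   # T[i, j] = P(next = j | cur = i)
O = np.array([[0.7, 0.2, 0.05, 0.05],    # O[i, a] = P(obs = a | state = i)
              [0.05, 0.05, 0.2, 0.7]])
h, obs = 0, []
for _ in range(20000):
    obs.append(rng.choice(4, p=O[h]))
    h = rng.choice(2, p=T[h])
print(spectral_state_count(obs, n_symbols=4))  # typically prints 2
```

The full spectral estimators in the cited work go further and recover the transition and emission parameters themselves, with finite-sample error bounds; this sketch only shows the rank structure that those methods exploit.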