2010 IEEE International Conference on Acoustics, Speech and Signal Processing
DOI: 10.1109/icassp.2010.5495625
Face recognition based on separable lattice 2-D HMM with state duration modeling

Abstract: This paper describes an extension of separable lattice 2-D HMMs (SL-HMMs) using state duration models for image recognition. SL-HMMs are generative models which have size and location invariances based on the state transitions of HMMs. However, the state duration probability of HMMs decreases exponentially with increasing duration and therefore may not be appropriate for modeling image variations accurately. To overcome this problem, we employ the structure of hidden semi-Markov models (HSMMs), in which the state du…
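As a rough numerical illustration of the duration problem the abstract points to, the sketch below (not from the paper; the self-transition probability and Poisson mean are hypothetical) compares the implicit geometric duration distribution of an HMM self-transition with an explicitly specified HSMM-style duration distribution.

```python
import math

# Implicit duration model of a standard HMM: staying in a state for d frames
# via a self-transition probability a has probability a**(d-1) * (1 - a),
# i.e. a geometric distribution that decays exponentially with d.
def hmm_duration_prob(d, self_trans=0.8):
    return (self_trans ** (d - 1)) * (1.0 - self_trans)

# Explicit duration model in the HSMM style: any distribution can be assigned
# to the state duration; a Poisson PMF shifted to durations >= 1 is used here
# purely as an illustrative (hypothetical) choice.
def hsmm_duration_prob(d, mean_dur=5.0):
    k = d - 1
    return math.exp(-mean_dur) * mean_dur ** k / math.factorial(k)

for d in (1, 3, 5, 10):
    print(f"d={d:2d}  HMM(geometric)={hmm_duration_prob(d):.4f}  "
          f"HSMM(Poisson)={hsmm_duration_prob(d):.4f}")
```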

Cited by 7 publications (6 citation statements)
References 5 publications
“…where P(Λ) is a prior distribution and P(o) is the evidence. Model parameters are estimated as the posterior distribution P(Λ | o), and the posterior distribution is integrated out in (20) so that the effect of overfitting is mitigated. That is, the Bayesian criterion has higher generalization ability than the ML criterion when there is insufficient training data.…”
Section: Bayesian Criterion
confidence: 99%
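As a simplified analogy to this excerpt's point (not the paper's actual derivation), the sketch below contrasts ML estimation with a Bayesian predictive distribution for a single categorical parameter under a conjugate Dirichlet prior, where the posterior P(Λ | o) can be integrated out in closed form; the counts and hyperparameters are hypothetical.

```python
import numpy as np

# Hypothetical training counts for a 3-symbol categorical observation model.
counts = np.array([4.0, 1.0, 0.0])     # symbol 2 never seen in training
alpha = np.array([1.0, 1.0, 1.0])      # Dirichlet prior hyperparameters

# Maximum-likelihood estimate: normalized counts (zero mass on unseen symbols).
p_ml = counts / counts.sum()

# Bayesian predictive distribution: the Dirichlet posterior is integrated out
# analytically, giving the posterior-mean probabilities (counts + alpha).
p_bayes = (counts + alpha) / (counts + alpha).sum()

print("ML estimate        :", p_ml)     # [0.8, 0.2, 0.0]
print("Bayesian predictive:", p_bayes)  # [0.625, 0.25, 0.125]
# With sparse data the ML estimate overfits (zero probability for the unseen
# symbol), whereas the Bayesian predictive stays smoother, which is the
# generalization advantage described in the excerpt.
```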
“…is calculated using (20). Although the predictive distribution includes a complicated expectation calculation, the same approximation based on the VB method as that used in training the posterior distribution can be applied, and the lower bound of the predictive distribution F(test) is defined in terms of Q(x, z, Λ) in (39). The same conditional independence assumption of the posterior distribution is used as in (26).…”
[Figure in the cited work: graphical model with variables z, t, a, π, φ, α for the two dimensions; dotted rectangles represent hyperparameters.]
Section: Predictive Distribution
confidence: 99%
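The variational lower bound mentioned in this excerpt can be illustrated on a much smaller model than SL-HMMs. The sketch below (a toy two-component mixture with hypothetical weights and means, not the cited model) evaluates the Jensen-type bound E_q[log p(x, z) - log q(z)] for two choices of the variational distribution q(z), showing that it never exceeds the exact log evidence and becomes tight at the true posterior.

```python
import numpy as np

def gauss_pdf(x, mean, var=1.0):
    """Gaussian density, used as a toy emission/likelihood term."""
    return np.exp(-0.5 * (x - mean) ** 2 / var) / np.sqrt(2.0 * np.pi * var)

# Toy latent-variable model: p(x) = sum_z pi[z] * N(x; mu[z], 1).
pi = np.array([0.6, 0.4])           # hypothetical mixing weights
mu = np.array([-1.0, 2.0])          # hypothetical component means
x = 0.5                             # a single "test" observation

joint = pi * gauss_pdf(x, mu)       # p(x, z) for each value of z
log_evidence = np.log(joint.sum())  # exact log p(x)

def lower_bound(q):
    """Variational lower bound E_q[log p(x, z) - log q(z)] for a given q(z)."""
    q = np.asarray(q, dtype=float)
    return float(np.sum(q * (np.log(joint) - np.log(q))))

q_uniform = [0.5, 0.5]
q_exact = joint / joint.sum()       # the true posterior makes the bound tight

print("log p(x)               :", log_evidence)
print("bound with uniform q   :", lower_bound(q_uniform))  # strictly below log p(x)
print("bound with exact post. :", lower_bound(q_exact))    # equals log p(x)
```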
“…To cope with this problem, the training algorithm for SL2D-HMMs using the variational EM algorithm was derived in [8], where the log-likelihood can be approximated by the variational lower bound. Although some extensions of SL2D-HMMs have been proposed, e.g., a structure for rotational variations [10], explicit state duration modeling [11], and a structure with multiple horizontal/vertical Markov chains [12], this paper uses the original form of SL2D-HMMs.…”
Section: Separable Lattice 2-D HMMs
confidence: 99%
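The separable-lattice structure itself can be sketched generatively. The toy code below is only one reading of this excerpt, with hypothetical 3-state left-to-right chains and Gaussian emissions: each pixel's hidden state is the pair formed by one Markov chain over rows and one over columns, which is what gives the model its size and location invariance.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_markov_chain(length, init, trans):
    """Sample a 1-D state sequence of the given length from a Markov chain."""
    states = [rng.choice(len(init), p=init)]
    for _ in range(length - 1):
        states.append(rng.choice(len(init), p=trans[states[-1]]))
    return np.array(states)

# Hypothetical 3-state left-to-right chains for the vertical and horizontal axes.
init = np.array([1.0, 0.0, 0.0])
trans = np.array([[0.7, 0.3, 0.0],
                  [0.0, 0.7, 0.3],
                  [0.0, 0.0, 1.0]])

rows, cols = 8, 8
v = sample_markov_chain(rows, init, trans)   # vertical (row) state sequence
h = sample_markov_chain(cols, init, trans)   # horizontal (column) state sequence

# Each pixel's hidden state is the pair (v[m], h[n]); emissions depend only on
# that composite state, so stretching or shifting the two 1-D chains moves or
# resizes the generated pattern without changing the model parameters.
means = rng.normal(size=(3, 3))              # hypothetical emission means
image = means[v[:, None], h[None, :]] + 0.1 * rng.normal(size=(rows, cols))
print(image.round(2))
```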
“…HSMMs overcome the strict geometric-distribution restriction of HMMs when describing state durations, so durations following any distribution can be represented. In recent years, HSMMs have been extensively applied to voice recognition [9] and failure prediction [10]. Therefore, the HSMM is suitable for modeling and prediction of the HSC system.…”
Section: Introduction
confidence: 99%
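To make the duration argument concrete, here is a minimal HSMM-style generator (hypothetical states and Poisson durations, unrelated to the cited HSC system): each visited state draws an explicit duration from an arbitrary distribution, instead of the geometric duration implied by HMM self-transitions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 3-state left-to-right model without self-transitions; durations
# are drawn explicitly per state instead of emerging from self-loops.
next_state = {0: 1, 1: 2, 2: None}
duration_mean = {0: 3.0, 1: 6.0, 2: 2.0}   # any duration distribution could be used

def sample_hsmm_states():
    """Generate a state sequence with explicit, per-state durations."""
    seq, state = [], 0
    while state is not None:
        d = 1 + rng.poisson(duration_mean[state])  # explicit duration >= 1
        seq.extend([state] * d)
        state = next_state[state]
    return seq

print(sample_hsmm_states())
```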