2016
DOI: 10.1587/transinf.2016edp7112
A Bayesian Approach to Image Recognition Based on Separable Lattice Hidden Markov Models

Abstract: This paper proposes a Bayesian approach to image recognition based on separable lattice hidden Markov models (SL-HMMs). The geometric variations of the object to be recognized, e.g., size, location, and rotation, are an essential problem in image recognition. SL-HMMs, which have been proposed to reduce the effect of geometric variations, can perform elastic matching both horizontally and vertically. This makes it possible to model not only invariances to the size and location of the object but also nonl…
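The abstract builds on standard HMM likelihood evaluation, which SL-HMMs generalize to a two-dimensional lattice with separate horizontal and vertical state sequences. As context, here is a minimal sketch of the one-dimensional forward algorithm that underlies this family of models; the toy parameters are purely illustrative and are not taken from the paper.

```python
# Hedged sketch: the forward algorithm for a 1-D discrete HMM.
# SL-HMMs extend this likelihood computation to a 2-D lattice,
# enabling elastic matching in both image dimensions.

def forward(pi, A, B, obs):
    """Return P(obs | model) for a discrete HMM.

    pi  : initial state probabilities, length N
    A   : N x N transition matrix, A[i][j] = P(state j | state i)
    B   : N x M emission matrix, B[i][o] = P(symbol o | state i)
    obs : sequence of observation indices
    """
    N = len(pi)
    # Initialization: alpha_1(i) = pi_i * b_i(o_1)
    alpha = [pi[i] * B[i][obs[0]] for i in range(N)]
    # Induction: sum over predecessor states, then emit
    for o in obs[1:]:
        alpha = [sum(alpha[i] * A[i][j] for i in range(N)) * B[j][o]
                 for j in range(N)]
    # Termination: total probability over final states
    return sum(alpha)

# Toy 2-state, 2-symbol model (illustrative values only)
pi = [0.6, 0.4]
A = [[0.7, 0.3], [0.4, 0.6]]
B = [[0.5, 0.5], [0.1, 0.9]]
print(forward(pi, A, B, [0, 1, 1]))  # -> 0.145984
```

In an SL-HMM this recursion is applied along both axes of the image lattice, which is what allows the model to absorb variations in object size and location.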

Cited by 6 publications (5 citation statements); references 33 publications.
“…9 in Dataset2 was modeled by using three factors and the tuning parameter τ = 1/2000 in order to visualize the model parameters. Figure 10 shows the values of the mean vector μ_k in (36) and the eigenimages W_k in (34), represented in grayscale. From Fig.…”
Section: Results
“…Figure shows some examples of images used in the experiments. Also, 40 × 40 HMM states were used for the experiment, because our prior work and preliminary experiments showed the highest recognition performance with 40 × 40 HMM states.…”
Section: Methods
“…The EM algorithm is simple and numerically stable, but when the initial parameters deviate from the actual parameters and the algorithm converges slowly, the EM algorithm needs more iterations. Altman et al. also pointed out that, when estimating the parameter of maximum likelihood, direct numerical maximization was faster than the EM algorithm [6][7][8]. The integration of the hidden model and the prior algorithm reduces the computation of the EM algorithm and improves its efficiency.…”
Section: Improving the Introduction of Hidden Markov
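The quoted passage concerns the iterative nature of EM and its convergence speed. The sketch below is not from the cited work; it is a minimal one-dimensional two-component Gaussian-mixture EM (fixed unit variances, equal weights, hypothetical data) that makes the alternating E-step/M-step structure concrete.

```python
# Hedged illustration of EM iterations: each pass computes soft
# responsibilities (E-step) and re-estimates the component means
# (M-step); more iterations are needed when the initial means are
# far from the true ones, as the quoted passage notes.
import math

def em_gmm_means(data, mu, iters=50):
    """Estimate the means of a 2-component GMM with unit variance
    and equal mixture weights; returns the two estimated means."""
    for _ in range(iters):
        # E-step: responsibility of component 0 for each point
        r = []
        for x in data:
            p0 = math.exp(-0.5 * (x - mu[0]) ** 2)
            p1 = math.exp(-0.5 * (x - mu[1]) ** 2)
            r.append(p0 / (p0 + p1))
        # M-step: responsibility-weighted mean updates
        mu = [sum(ri * x for ri, x in zip(r, data)) / sum(r),
              sum((1 - ri) * x for ri, x in zip(r, data))
              / sum(1 - ri for ri in r)]
    return mu

# Hypothetical data: two well-separated clusters near 0 and 5
data = [0.1, -0.2, 0.0, 4.9, 5.2, 5.0]
print(em_gmm_means(data, [0.5, 4.0]))
```

With well-separated clusters the responsibilities quickly saturate and the means converge to the cluster averages; with overlapping clusters or poor initial means, many more iterations are required, which is the trade-off the citing authors raise against direct numerical maximization.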
“…On this basis, we combine the proposed sub-algorithm with other commonly used RW algorithms. 16 This unified view will make it possible to transfer internal discoveries between different RW algorithms and provide new ideas for designing new RW algorithms by adding or changing secondary nodes. To verify the second advantage, we designed a new sub-algorithm with tags to solve the segmentation problem of slender objects.…”
Section: Introduction