Pattern Recognition in Practice 1986
DOI: 10.1016/b978-0-444-87877-9.50013-3

Probabilistic Labeling in a Hidden Second Order Markov Mesh

Cited by 23 publications (15 citation statements)
References 10 publications
“…One of the major shortcomings is that the dependence of a node on its neighbors in a fully connected multi-dimensional HMM is not guaranteed to be fully explored. Several attempts have also been made to heuristically reduce the complexity of the HMM algorithms by making simplifying assumptions [34,36,42,43,[49][50][51][52]. The main disadvantage of these approaches is that they only provide approximate computations, such that the probabilistic model is no longer theoretically sound.…”
Section: (3) Termination
confidence: 99%
“…One solution is to convert a multi-dimensional model into a 1-D multidimensional vector Markov model by considering the set of nodes with a fixed coordinate along a given direction (e.g., horizontal, vertical, or diagonal in a regular plane) as a new vector node. For example, the rows, the columns, and the anti-diagonal and its parallels in a 2-D lattice respectively form super nodes, generating a vector Markov chain [34][35][36][37][38]. Generalization to the 3-D case is also straightforward [39][40][41].…”
Section: (3) Termination
confidence: 99%
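
As a rough illustration of the row-wise conversion described in the statement above (a minimal sketch with hypothetical helper names, not code from the cited papers), the Python fragment below collapses each row of a 2-D label lattice into one "super node" and then treats the resulting row sequence as an ordinary 1-D Markov chain:

```python
import numpy as np
from collections import defaultdict

def rows_to_super_states(lattice):
    """Collapse each row of a 2-D label lattice into a single vector-valued
    'super node', producing a 1-D sequence of states."""
    return [tuple(row) for row in np.asarray(lattice)]

def transition_counts(super_states):
    """Count first-order transitions between consecutive super nodes; the
    row sequence is now an ordinary 1-D (vector) Markov chain."""
    counts = defaultdict(int)
    for prev, cur in zip(super_states, super_states[1:]):
        counts[(prev, cur)] += 1
    return dict(counts)

# Tiny 4x3 binary lattice: each row becomes a super node such as (0, 1, 1).
lattice = np.array([[0, 1, 1],
                    [0, 1, 1],
                    [1, 0, 0],
                    [1, 0, 0]])
print(transition_counts(rows_to_super_states(lattice)))
```

Note that the super-node state space grows exponentially with the row width (|S|^W for W columns), so the conversion trades the multi-dimensional dependency problem for a much larger 1-D state space.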
“…For the frequency λ and orientation θ, the outputs of a pair of corresponding Gabor filters with initial phases 0 and −π/2 are denoted by γ_{λ,θ,0} and γ_{λ,θ,−π/2} respectively. The combination of these two quantities yields the so-called Gabor energy [6].…”
Section: Representation Of Images
confidence: 99%
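
The quadrature combination referred to above is conventionally E_{λ,θ} = sqrt(γ_{λ,θ,0}² + γ_{λ,θ,−π/2}²). The sketch below builds the even/odd Gabor pair and combines their responses; the kernel parameterization (including the σ ≈ 0.56λ bandwidth heuristic and unit aspect ratio) is an assumption, not taken from [6]:

```python
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(lam, theta, phase, sigma=None, size=None):
    """Real-valued Gabor kernel: Gaussian envelope times a cosine carrier of
    wavelength lam, orientation theta, and initial phase 'phase'
    (0 gives the even filter, -pi/2 the odd one)."""
    sigma = sigma if sigma is not None else 0.56 * lam   # bandwidth heuristic (assumption)
    size = size if size is not None else int(6 * sigma) | 1  # odd size, ~±3σ support
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2))
    return envelope * np.cos(2 * np.pi * xr / lam + phase)

def gabor_energy(image, lam, theta):
    """Combine the quadrature pair: E = sqrt(g_0^2 + g_{-pi/2}^2)."""
    g_even = convolve2d(image, gabor_kernel(lam, theta, 0.0), mode="same")
    g_odd = convolve2d(image, gabor_kernel(lam, theta, -np.pi / 2), mode="same")
    return np.sqrt(g_even ** 2 + g_odd ** 2)

# Example: energy map of a random image at wavelength 8 px, orientation 0.
image = np.random.default_rng(0).random((64, 64))
energy = gabor_energy(image, lam=8.0, theta=0.0)
```

Because the even and odd responses are 90° out of phase, their squared sum is (approximately) invariant to the local phase of the underlying grating, which is the usual motivation for the energy combination.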
“…The underlying idea is that the state of a block only depends on the states of two previously observed neighbor blocks in a raster scan. This assumption, which is appropriate for horizontally layered natural images, differs dramatically from that used in second-order Markov mesh models [6]. The complete specification of a spatial HMM λ requires the number of states N (the collection of available hidden states is denoted by S) and the four probability measures H (horizontal transition matrix), V (vertical transition matrix), B, and π.…”
Section: Spatial Hidden Markov Model
confidence: 99%
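
As a concrete reading of that parameter list (shapes and normalization conventions are assumptions; the cited model's exact factorization may differ), a minimal container for λ = (N, H, V, B, π) could look like this:

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class SpatialHMM:
    """Parameters named in the quoted passage: H and V (horizontal/vertical
    state-transition matrices over N states), B (emission probabilities over
    M observation symbols), and pi (initial state distribution)."""
    H: np.ndarray    # (N, N): P(state of right neighbor | current state)
    V: np.ndarray    # (N, N): P(state of lower neighbor | current state)
    B: np.ndarray    # (N, M): P(observation symbol | state)
    pi: np.ndarray   # (N,):   P(state of the first block in the raster scan)

    def __post_init__(self):
        n = self.pi.shape[0]
        assert self.H.shape == self.V.shape == (n, n)
        assert self.B.shape[0] == n
        for m in (self.H, self.V, self.B):
            assert np.allclose(m.sum(axis=1), 1.0), "rows must be distributions"
        assert np.isclose(self.pi.sum(), 1.0)

# Example: a 2-state model over 3 observation symbols.
hmm = SpatialHMM(
    H=np.array([[0.9, 0.1], [0.2, 0.8]]),
    V=np.array([[0.7, 0.3], [0.4, 0.6]]),
    B=np.array([[0.5, 0.3, 0.2], [0.1, 0.2, 0.7]]),
    pi=np.array([0.6, 0.4]),
)
```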
“…Context-dependent classification algorithms based on two-dimensional hidden Markov models (2-D HMMs) have been developed [14], [24], [25] to overcome the overlocalization of conventional block-based classification algorithms. In this paper, a multiresolution extension of the 2-D HMMs described in [25] is proposed so that more global context information can be used efficiently.…”
confidence: 99%