Proceedings of 2nd International Conference on Document Analysis and Recognition (ICDAR '93)
DOI: 10.1109/icdar.1993.395727
Strategies for handwritten words recognition using hidden Markov models

Cited by 18 publications (13 citation statements)
References 4 publications
“…Table 2 summarizes all segmentation-based methods according to the different categories they were² There was only one piece of evidence we ran into during our survey of word recognition systems where a segmentation algorithm was used in a small static lexicon environment. Moreover, we found this single method by Gilloux et al. [32][33][34] to have a lot in common with segmentation-free methods despite the segmentation algorithm that it was integrated with. Further discussion of this exception was given in the previous section.…”
Section: Segmentation-based Methods
confidence: 99%
“…All methods by Guillevic et al. [39], Saon et al. [70], and Gilloux et al. [32][33][34] represent a word model as a chain of n identical sub-HMMs (see an example in Fig. 5), where n is the most probable length of an observation sequence obtained from the training samples.…”
Section: HMMs
confidence: 99%
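The chain-of-sub-HMMs construction quoted above can be sketched in a few lines. This is a minimal illustration, not the cited authors' implementation: `most_probable_length`, `chain_word_model`, and the 2-state `sub_A` are assumed names and values chosen for the example. Each copy of the sub-HMM keeps its internal transitions, and whatever exit probability remains in a copy's last state is routed to the entry state of the next copy, giving a left-to-right word model of length n.

```python
from collections import Counter

import numpy as np

def most_probable_length(training_samples):
    """Most frequent observation-sequence length in the training set."""
    return Counter(len(s) for s in training_samples).most_common(1)[0][0]

def chain_word_model(sub_A, n):
    """Chain n copies of a sub-HMM transition matrix sub_A into one
    left-to-right word model: each copy's leftover exit mass flows
    into the entry state of the next copy."""
    k = sub_A.shape[0]
    A = np.zeros((n * k, n * k))
    for i in range(n):
        A[i * k:(i + 1) * k, i * k:(i + 1) * k] = sub_A
    for i in range(n - 1):
        last = (i + 1) * k - 1               # exit state of copy i
        A[last, (i + 1) * k] = 1.0 - A[last].sum()
    return A

# Example: a 2-state sub-HMM chained to the most common sample length (3)
sub_A = np.array([[0.5, 0.5],
                  [0.0, 0.6]])              # 0.4 exit probability left over
A = chain_word_model(sub_A, most_probable_length(["abc", "abcd", "abc"]))
```

The last copy keeps its residual exit mass as the model's final absorption, so only the interior copies are stitched together.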
See 2 more Smart Citations
“…The random variables whose joint probabilities must be estimated are letter, word, or part-of-speech labels. Linguistic context is always order-dependent, and therefore often modeled with transition frequencies in Markov Chains, Hidden Markov Models, and Markov Random Fields [16,17,18,19,20,21,22,23,24,25,26,27,28]. Linguistic variables are usually assumed to be independent of character shape, even though titles and headings in large or bold type have a different language structure than plain text.…”
Section: Language Models
confidence: 99%
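The transition-frequency modeling of linguistic context mentioned above can be illustrated with a first-order letter Markov chain. This is a generic sketch of the technique, not code from any cited system; `letter_bigram_model` and the toy word list are assumptions for the example.

```python
from collections import defaultdict

def letter_bigram_model(words):
    """Estimate first-order Markov transition probabilities
    P(next letter | current letter) from raw bigram frequencies."""
    counts = defaultdict(lambda: defaultdict(int))
    for w in words:
        for a, b in zip(w, w[1:]):
            counts[a][b] += 1
    return {a: {b: c / sum(nxt.values()) for b, c in nxt.items()}
            for a, nxt in counts.items()}

model = letter_bigram_model(["the", "then", "they"])
# "h" always follows "t"; after "e", "n" and "y" each occur once
```

A recognizer would combine these transition probabilities with per-character shape scores, which is exactly the independence assumption between linguistic variables and character shape that the quoted passage questions.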