2009 10th International Conference on Document Analysis and Recognition
DOI: 10.1109/icdar.2009.9
Writer Adaptive Training and Writing Variant Model Refinement for Offline Arabic Handwriting Recognition

Abstract: We present a writer adaptive training and writer clustering approach for an HMM-based Arabic handwriting recognition system to handle different handwriting styles and their variations. Additionally, a writing variant model refinement for specific writing variants is proposed. Current approaches try to compensate for the impact of different writing styles during preprocessing and normalization steps. Writer adaptive training with a CMLLR-based feature adaptation is used to train writer-dependent models. An unsupervis…

Year Published: 2011–2021


Cited by 32 publications (21 citation statements)
References 10 publications
“…The final feature vector of size 24 is augmented by original moment values [21]. The system is writer adaptive using CMLLR for feature transformation [22]. For classification we use an HMM model with 6 segments per character.…”
Section: System Overview
confidence: 99%
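The statement above describes writer adaptation via a CMLLR feature transformation. CMLLR applies an affine transform x' = A·x + b to each feature vector, with A and b estimated per writer (or writer cluster). A minimal sketch of applying such a transform, with placeholder values for A and b rather than estimated ones:

```python
# Sketch of a CMLLR-style affine feature transform (x' = A*x + b),
# as used for writer adaptation. The matrix A and bias b below are
# illustrative placeholders, not values estimated from data.

def apply_cmllr(features, A, b):
    """Apply an affine transform to each feature vector in `features`."""
    dim = len(b)
    out = []
    for x in features:
        y = [sum(A[i][j] * x[j] for j in range(dim)) + b[i]
             for i in range(dim)]
        out.append(y)
    return out

# Toy 2-D example: identity transform plus a bias shift.
A = [[1.0, 0.0], [0.0, 1.0]]
b = [0.5, -0.5]
print(apply_cmllr([[1.0, 2.0]], A, b))  # [[1.5, 1.5]]
```

In practice A and b are estimated to maximize the likelihood of the writer's data under the writer-independent model; the transform itself is just this affine map applied to every frame.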
“…The size of the Arabic characters are very different. The number of HMM states is therefore estimated using the so called Model Length Estimation (MLE) presented in [12]. The characters are divided into MLE labels with one Gaussian for each them.…”
Section: B Visual Modelmentioning
confidence: 99%
“…In the following experiments, we additionally use a glyph dependent model length estimation (GDL) as described in [7,8], resulting in an ML trained baseline model with 637 mixtures and 48k densities (cf. Section 2.2).…”
Section: First Pass Decoding
confidence: 99%
“…maximum-likelihood (ML) training criterion and a lexicon with multiple writing variants as proposed in [7,8]. Each character is modeled by a multi-state left-toright HMM with skip transitions and separate Gaussian mixture models (GMMs).…”
Section: Visual Modeling
confidence: 99%
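The statements above repeatedly describe multi-state left-to-right HMMs with skip transitions (e.g., 6 segments per character). A minimal sketch of such a transition structure, assuming illustrative loop/forward/skip probabilities rather than trained ones:

```python
# Sketch of a left-to-right HMM transition matrix with skip transitions.
# Each state allows a self-loop, a move to the next state, and a skip
# over one state; the probabilities are illustrative placeholders.

def left_to_right_transitions(n_states, p_loop=0.5, p_next=0.3, p_skip=0.2):
    """Build an n_states x n_states row-stochastic transition matrix."""
    T = [[0.0] * n_states for _ in range(n_states)]
    for i in range(n_states):
        T[i][i] = p_loop
        if i + 1 < n_states:
            T[i][i + 1] = p_next
        if i + 2 < n_states:
            T[i][i + 2] = p_skip
        # Renormalize so rows near the final state still sum to 1.
        row_sum = sum(T[i])
        T[i] = [p / row_sum for p in T[i]]
    return T

T = left_to_right_transitions(6)  # 6 states per character, as cited above
assert all(abs(sum(row) - 1.0) < 1e-9 for row in T)
```

The skip transitions let the model absorb compressed or partially missing strokes, while the self-loops handle stretched ones; the per-state GMMs mentioned in the citation would supply the emission probabilities.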