2014 14th International Conference on Frontiers in Handwriting Recognition
DOI: 10.1109/icfhr.2014.98

Towards Unsupervised Learning for Handwriting Recognition

Abstract: We present a method for training an off-line handwriting recognition system in an unsupervised manner. For an isolated word recognition task, we are able to bootstrap the system without any annotated data. We then retrain the system using the best hypothesis from a previous recognition pass in an iterative fashion. Our approach relies only on a prior language model and does not depend on an explicit segmentation of words into characters. The resulting system shows promising performance on a standard…
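The bootstrap-and-retrain loop described in the abstract can be sketched as a simple self-training procedure. The scoring function, prototype model, toy 1-D "features", and two-word lexicon below are illustrative assumptions, not the paper's actual HMM-based recogniser:

```python
# Hedged sketch (not the authors' code): iterative self-training for isolated
# word recognition. Each recognition pass combines a fixed language-model
# prior with the current appearance model; the best hypothesis from one pass
# becomes the training label for the next.
import math

LEXICON = ["the", "and"]
LM_PRIOR = {"the": 0.7, "and": 0.3}   # assumed prior language model
SIGMA = 0.2                           # assumed appearance-noise scale

def recognise(image, prototypes):
    """Return the lexicon word maximising log prior + appearance score."""
    def score(word):
        mu = prototypes[word]
        # toy Gaussian log-likelihood on a 1-D 'feature'
        return math.log(LM_PRIOR[word]) - (image - mu) ** 2 / (2 * SIGMA ** 2)
    return max(LEXICON, key=score)

def retrain(images, labels):
    """Re-estimate each word's prototype as the mean feature of its images."""
    protos = {}
    for w in LEXICON:
        feats = [x for x, y in zip(images, labels) if y == w]
        protos[w] = sum(feats) / len(feats) if feats else 0.0
    return protos

def self_train(images, iterations=5):
    # bootstrap: arbitrary initial prototypes, no annotated data
    protos = {"the": 0.0, "and": 1.0}
    labels = []
    for _ in range(iterations):
        labels = [recognise(x, protos) for x in images]  # recognition pass
        protos = retrain(images, labels)                 # retrain on hypotheses
    return labels

# two clusters of 'features'; the loop should separate them
data = [0.1, 0.2, 0.15, 0.9, 1.0, 0.95]
print(self_train(data))  # → ['the', 'the', 'the', 'and', 'and', 'and']
```

Each pass relabels the data with the current model plus the language-model prior, then re-estimates the model from its own best hypotheses; on well-separated data the pseudo-labels stabilise after a few iterations.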

Cited by 6 publications (2 citation statements)
References 16 publications (20 reference statements)

“…[46] learn mappings from hidden states of an HMM with transition probabilities initialised from conditional bi-gram distributions. [40] propose an iterative scheme that bootstraps predictions for learning HMM models and recognises handwritten text. However, their approach is limited to (1) word images and (2) a fixed lexicon (≈44K words) to facilitate exhaustive tree search, whereas our method applies to full text strings and does not require a pre-defined lexicon of words.…”
Section: Related Work
confidence: 99%
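The conditional bi-gram initialisation mentioned in this citation can be illustrated with a small sketch. The plain maximum-likelihood estimation below (raw counts, no smoothing) is an assumption for illustration; the cited works' exact procedure is not given on this page:

```python
# Hedged sketch: estimating conditional character-bigram distributions
# P(next_char | char) from plain text, of the kind that could initialise
# HMM transition probabilities.
from collections import Counter, defaultdict

def char_bigram_conditionals(corpus):
    """Return P(next_char | char) as a dict of dicts (within-word pairs only)."""
    counts = defaultdict(Counter)
    for word in corpus.split():
        for a, b in zip(word, word[1:]):
            counts[a][b] += 1
    return {a: {b: n / sum(c.values()) for b, n in c.items()}
            for a, c in counts.items()}

probs = char_bigram_conditionals("the then and hand")
print(probs["h"])  # 'h' is followed by 'e' twice and 'a' once
```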
“…Although earlier works use low-order (uni-/bi-gram) statistics for alignment [40,46], higher-order n-grams could be more informative. In this experiment we examine the impact of the length of the training text sequences on convergence.…”
Section: Effect of Text Length on Convergence
confidence: 99%
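As a small illustration of the n-gram statistics this citation refers to, the helper below (a hypothetical example, not from the paper) counts character n-grams of arbitrary order; higher n yields sparser but more specific alignment cues:

```python
# Hedged sketch: character n-gram counts of arbitrary order, restricted to
# within whitespace-separated tokens.
from collections import Counter

def char_ngrams(text, n):
    """Count character n-grams within each token of `text`."""
    counts = Counter()
    for token in text.split():
        for i in range(len(token) - n + 1):
            counts[token[i:i + n]] += 1
    return counts

print(char_ngrams("banana bandana", 3).most_common(2))
# → [('ana', 3), ('ban', 2)]
```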