2014 11th IAPR International Workshop on Document Analysis Systems
DOI: 10.1109/das.2014.17
Multi-oriented Handwritten Annotations Extraction from Scanned Documents

Cited by 7 publications (3 citation statements)
References 15 publications
“…Geometrical features such as distance, area, and density are also commonly used to extract text-lines [23,70]. Classification: these algorithms classify structural elements (pixels, letters, text-lines, ...) from a set of learned features. Some of them have been successfully applied to separate handwritten annotations from printed text using connected-component and patch-level features [53,54], shape-context features [22], or more traditional features [8]. In [36], structure detection in degraded newspaper archives is achieved by localizing titles, text-lines, background, separators, and noise using a Conditional Random Field.…”
Section: Bottom-up or Data-driven Strategies
confidence: 99%
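The geometrical cues named in the excerpt above (distance, area, density) can be sketched as simple per-component features. This is a minimal illustration, not the cited papers' method; the function name and the toy component are hypothetical.

```python
def component_features(pixels):
    # pixels: list of (row, col) foreground coordinates of one connected component
    rows = [r for r, _ in pixels]
    cols = [c for _, c in pixels]
    height = max(rows) - min(rows) + 1
    width = max(cols) - min(cols) + 1
    area = len(pixels)
    density = area / (height * width)  # fill ratio of the bounding box
    return {"height": height, "width": width, "area": area, "density": density}

# toy component: a filled 2x3 block plus one stray pixel below it
comp = [(0, 0), (0, 1), (0, 2), (1, 0), (1, 1), (1, 2), (3, 2)]
print(component_features(comp))
```

Features like these would feed the learned classifiers the excerpt describes, labeling each component (or pixel, or text-line) as printed text, handwriting, separator, or noise.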
“…Black-and-white algorithms: among them, [9,29,90–99] use only binary images.…”
Section: Feature Classification
confidence: 99%
“…A Fisher classifier is then employed to assign the pseudo-word to the handwritten or the printed script. Fourier descriptors, Gabor filters, and Hu moments were extracted in [21]. A separate k-NN classifier was trained for each descriptor; the k-NN outputs were then combined using a simple majority vote.…”
Section: B. Classification
confidence: 99%
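The scheme described in the excerpt above (one k-NN per descriptor, outputs fused by majority vote) can be sketched in a few lines. The descriptor values, labels, and query below are made-up toy data, not from [21].

```python
import math
from collections import Counter

def knn_predict(train, labels, x, k=3):
    # label of the majority among the k nearest training samples
    order = sorted(range(len(train)), key=lambda i: math.dist(train[i], x))
    votes = Counter(labels[i] for i in order[:k])
    return votes.most_common(1)[0][0]

def majority_vote(predictions):
    # fuse per-descriptor decisions by simple majority
    return Counter(predictions).most_common(1)[0][0]

# toy setup: one 2-D "feature space" per descriptor (hypothetical values)
descriptors = {
    "fourier": ([[0.1, 0.2], [0.9, 0.8], [0.2, 0.1]], ["hand", "print", "hand"]),
    "gabor":   ([[0.3, 0.1], [0.8, 0.9], [0.1, 0.3]], ["hand", "print", "hand"]),
    "hu":      ([[0.2, 0.2], [0.7, 0.7], [0.3, 0.2]], ["hand", "print", "hand"]),
}
query = {"fourier": [0.15, 0.15], "gabor": [0.2, 0.2], "hu": [0.25, 0.2]}

per_descriptor = [knn_predict(X, y, query[name], k=1)
                  for name, (X, y) in descriptors.items()]
print(majority_vote(per_descriptor))  # prints "hand"
```

Training an independent classifier per descriptor keeps the feature spaces separate, so no cross-descriptor scaling is needed before the vote.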