2014
DOI: 10.1016/j.patcog.2013.09.005
Distinction between handwritten and machine-printed text based on the bag of visual words model

Cited by 42 publications (29 citation statements)
References 39 publications
“…In terms of classification, the algorithm outperforms multi-class SVM and Random Forest approaches; in terms of feature separability for each class, the approach is more robust in handling text lines and noise, and less vulnerable to segmentation failures than the baseline Gabor features. Nevertheless, as will be shown in the experiments section, our approach outperforms the method proposed in [3] by over 13% in classification accuracy and gives similar results to [4] despite using no labelled training data.…”
Section: Literature Review (supporting)
confidence: 53%
“…Because of the high similarity with the machine-printed sample, geometric features such as area and rectangularity used in [6], [7], [10], [12] fail to create separable classes, and the samples will wrongly be classified as machine-printed. Table 1 shows the comparison of our approach with the results provided by Zagoris et al. [3], [4], in which 15% of the samples in the PRImA-NHM dataset are utilised for training and the remaining 85% for testing. The bounding-box-based PRImA Layout Evaluation Framework [4], [22] is used to compute overlapping regions of the classified segments with the ground truth.…”
Section: Gallery Creation and the HMC Results (mentioning)
confidence: 99%