2018 16th International Conference on Frontiers in Handwriting Recognition (ICFHR)
DOI: 10.1109/icfhr-2018.2018.00065
Identifying Cross-Depicted Historical Motifs

Abstract: Cross-depiction is the problem of identifying the same object even when it is depicted in a variety of manners. This is a common problem in historical handwritten document image analysis, for instance when the same letter or motif is depicted in several different ways. It is a simple task for humans, yet conventional heuristic computer vision methods struggle to cope with it. In this paper we address this problem using state-of-the-art deep learning techniques on a dataset of historical watermarks containing im…

Cited by 12 publications (12 citation statements). References 28 publications.
“…When it comes to cross-domain transfer learning, its usefulness, especially from natural images such as ImageNet, is the subject of an open discussion. There are cases where transfer learning across domains has been proven successful [17], [18]. In contrast, there is literature suggesting that this technique harms the final performance of the networks.…”
Section: Related Work (mentioning)
confidence: 99%
“…In this work we consider the classification pretraining introduced by Pondenkandath et al. (2018). In this pretraining step, the neural network is trained for classification with cross-entropy loss on the same training set before being trained for similarity.…”
Section: Classification Pretraining (mentioning)
confidence: 99%
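The two-stage schedule this statement describes — classification pretraining with cross-entropy loss, followed by training for similarity — can be sketched with a toy example. The code below is a minimal NumPy illustration, not the cited implementation: it uses random 2-D points as stand-ins for watermark images, a single linear embedding layer, and a triplet margin loss as an assumed similarity objective; all names and hyperparameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in data: three "motif classes" of noisy 2-D points.
n_per, n_cls, dim, emb = 20, 3, 2, 4
centers = np.array([[0.0, 4.0], [4.0, 0.0], [-4.0, -4.0]])
X = np.vstack([c + rng.normal(size=(n_per, dim)) for c in centers])
y = np.repeat(np.arange(n_cls), n_per)

W_emb = rng.normal(scale=0.1, size=(dim, emb))    # shared embedding layer
W_cls = rng.normal(scale=0.1, size=(emb, n_cls))  # temporary classifier head

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Phase 1: classification pretraining with cross-entropy loss.
for _ in range(300):
    H = X @ W_emb                       # embeddings of the whole training set
    G = softmax(H @ W_cls)
    G[np.arange(len(y)), y] -= 1.0      # gradient of cross-entropy w.r.t. logits
    G /= len(y)
    W_emb -= 0.5 * (X.T @ (G @ W_cls.T))
    W_cls -= 0.5 * (H.T @ G)

acc = ((X @ W_emb @ W_cls).argmax(1) == y).mean()

# Phase 2: discard the head and refine the embedding for similarity
# with a triplet margin loss on randomly sampled triplets.
margin = 1.0
for _ in range(200):
    a = rng.integers(len(y))
    p = rng.choice(np.flatnonzero(y == y[a]))   # positive: same class
    n = rng.choice(np.flatnonzero(y != y[a]))   # negative: different class
    dxp, dxn = X[a] - X[p], X[a] - X[n]
    ep, en = dxp @ W_emb, dxn @ W_emb           # pair differences in embedding space
    if ep @ ep - en @ en + margin > 0:          # triplet constraint violated
        grad = 2.0 * (np.outer(dxp, ep) - np.outer(dxn, en))
        W_emb -= 0.01 * grad / max(1.0, np.linalg.norm(grad))  # clipped step

# Sanity check: same-class pairs should lie closer than cross-class pairs.
E = X @ W_emb
d = np.linalg.norm(E[:, None, :] - E[None, :, :], axis=-1)
same_cls = y[:, None] == y[None, :]
off_diag = ~np.eye(len(y), dtype=bool)
within = d[same_cls & off_diag].mean()
between = d[~same_cls].mean()
print(f"pretrain accuracy: {acc:.2f}  within: {within:.2f}  between: {between:.2f}")
```

The point of the sketch is the ordering: the cross-entropy phase shapes the embedding using class labels before any similarity objective is applied, which is the effect the cited work reports as helpful when labeled data per class is scarce.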
“…Furthermore, we investigate an additional pretraining step for the neural networks in which we train for classification before training for similarity. This pretraining step has been shown to increase network performance, especially when little labeled data is available per class (Pondenkandath et al., 2018). Finally, we use two more test sets and additional evaluation metrics to compare our framework against more published results in our experimental evaluation.…”
Section: Introduction (mentioning)
confidence: 99%
“…Note that these experiments, and how to tackle the cross-depiction problem, are explained in detail in the original work [22].…”
Section: A ResNet for Watermark Recognition (mentioning)
confidence: 99%