2011 International Conference on Document Analysis and Recognition
DOI: 10.1109/icdar.2011.210

Bayesian Approach to Photo Time-Stamp Recognition

Cited by 6 publications (3 citation statements)
References 12 publications
“…While Chen et al [3] and Fumin et al [8] perform template matching with manually created digit templates, Li [12] uses Self-Generating Neural Networks (SGNNs) to model digit appearance. Chen et al [3] make no assumptions on timestamp position, Fumin et al [8] and Li [12] limit their search to only the four image corners, while Shahab et al [26] learn location priors from a training set. Unlike these works, our goal is not to segment or read timestamps, but merely to detect their presence.…”
Section: Related Work
confidence: 99%
“…While several approaches exist for detecting and reading timestamps in photos [3,12,8,26], most approaches for detecting visible watermarks are targeted at videos [33,16,11]. Because they make strong assumptions about the appearance and position of the timestamps or watermarks, they would fail when applied to the variety of WTFs present in Internet photos.…”
Section: Introduction
confidence: 99%
“…In the recent years, visual attention models have been employed for various object detection/recognition tasks [8], [9], [10]. Though the usage of visual attention models for character detection is still under-investigated, their effectiveness has been shown by Shahab et al [11,12] and Uchida et al [13]. Those researchers, who try to employ visual attention models for scene character detection, believe that the characters have different properties compared with their non-character neighbors (pop-out).…”
Section: Introduction
confidence: 99%