2019 International Conference on Document Analysis and Recognition Workshops (ICDARW)
DOI: 10.1109/icdarw.2019.00020

Cited by 18 publications (14 citation statements)
References 24 publications

“…When applied to text verification, the CNN classifies the RoIs of images to verify whether a RoI belongs to a text part or not. Several CNN models have been proposed in the literature for text verification (Yang et al., 2015; Ray et al., 2016; Wang et al., 2018; Zhang et al., 2015; Ghosh et al., 2019). Table 3 gives a summary.…”
Section: Verification and Text Line Construction (mentioning)
confidence: 99%
“…For the sake of performance evaluation, we customize our model, inspired by (Wang et al., 2018; Ghosh et al., 2019). It consists of two convolutional layers and one FC layer.…”
Section: Verification and Text Line Construction (mentioning)
confidence: 99%
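The verification model referred to in the statements above (a CNN that labels cropped RoIs as text or non-text, built from two convolutional layers and one fully connected layer) can be illustrated with a minimal PyTorch sketch. The channel counts, kernel sizes, and the 32×32 RoI input size below are assumptions for illustration, not values reported by the cited papers.

```python
# Minimal sketch of a two-conv-layer + one-FC-layer RoI text verifier.
# Channel counts, kernel sizes, and the 32x32 RoI size are assumed, not taken
# from the cited papers.
import torch
import torch.nn as nn


class TextVerifierCNN(nn.Module):
    def __init__(self, roi_size: int = 32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),   # convolutional layer 1
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),  # convolutional layer 2
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # Single fully connected layer producing text / non-text scores.
        self.fc = nn.Linear(32 * (roi_size // 4) ** 2, 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.fc(x.flatten(1))


# Classify a batch of candidate RoIs (label convention assumed: 1 = text, 0 = non-text).
model = TextVerifierCNN()
rois = torch.randn(8, 3, 32, 32)      # 8 cropped candidate RoIs
is_text = model(rois).argmax(dim=1)
```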
“…However, the scope of this method is to categorize video frames, not caption and scene texts in video. Since convolutional networks are popular and handle complex problems well, Ghosh et al. [5] proposed identifying the presence of graphical text in scene images using a CNN. The method considers edited text and text in natural scene images as graphical text for classification.…”
Section: Related Work (mentioning)
confidence: 99%
“…However, the local window is moved pixel by pixel across the input image. The variances of the respective Fourier and DCT coefficients are used to derive weights as defined in Equation (4) and Equation (5). Finally, the derived weights are combined with the respective Fourier and DCT coefficients as defined in Equation (6), which results in a fused image.…”
Section: DCT and FFT Coefficients for Fusion (mentioning)
confidence: 99%
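The fusion step described in this statement can be sketched as below. Equations (4)-(6) are not reproduced in the excerpt, so the weight formula and the way the weighted coefficient matrices are combined and inverted here are assumptions meant only to illustrate the pipeline (local window → Fourier/DCT coefficients → variance-derived weights → fused image); the sketch also steps the window block by block rather than pixel by pixel for brevity.

```python
# Illustrative sketch of window-wise variance-weighted FFT/DCT fusion.
# The weight form and the combination/inversion rule are assumptions, not the
# exact Equations (4)-(6) of the cited method.
import numpy as np
from scipy.fft import dctn, idctn


def fuse_window(block: np.ndarray) -> np.ndarray:
    """Fuse one local window from its Fourier and DCT coefficients."""
    F = np.abs(np.fft.fft2(block))     # Fourier coefficient magnitudes
    D = dctn(block, norm="ortho")      # DCT coefficients

    # Variance of each coefficient set -> normalised weights (assumed form of Eqs. (4)-(5)).
    var_f, var_d = F.var(), D.var()
    w_f = var_f / (var_f + var_d + 1e-12)
    w_d = 1.0 - w_f

    # Weighted combination of the coefficients, inverted back to the spatial
    # domain (assumed analogue of Eq. (6)).
    return idctn(w_f * F + w_d * D, norm="ortho")


def fuse_image(image: np.ndarray, win: int = 8) -> np.ndarray:
    """Apply the window-wise fusion over a grayscale image."""
    h, w = image.shape
    fused = np.zeros_like(image, dtype=float)
    for y in range(0, h - win + 1, win):
        for x in range(0, w - win + 1, win):
            fused[y:y + win, x:x + win] = fuse_window(image[y:y + win, x:x + win])
    return fused


fused = fuse_image(np.random.rand(64, 64))
```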