2021 International Joint Conference on Neural Networks (IJCNN)
DOI: 10.1109/ijcnn52387.2021.9534223
Parallel Scale-wise Attention Network for Effective Scene Text Recognition

Cited by 8 publications (3 citation statements)
References 36 publications
“…Big fluctuation in text appearance and scale: Some of the text may be scaled or rotated, this creates a problem in detection because there is no suitable text [5].…”
Section: Challenges
Confidence: 99%
“…Following its success in machine translation, the attention mechanism has started to be applied for text recognition (Gao et al, 2017;Sajid et al, 2021). (Chowdhury and Vig, 2018) combined a CNN backbone with an RNN encoder-decoder for HTR.…”
Section: Related Work
Confidence: 99%
“…While the CTC architecture has been successful in OCR, it still has limitations in terms of language modeling, which can be improved by using an external language model as a post-processing step. To overcome this limitation, attention mechanisms have gained popularity in text recognition tasks [11,17,26,32]. Chowdhury and Vig [11] introduced an RNN encoder-decoder model for HTR that utilized a CNN backbone.…”
Section: Related Work
Confidence: 99%
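The excerpt above contrasts CTC decoding with attention-based decoding for text recognition. As a rough illustration only (not the architecture of the cited paper or of the IJCNN work itself), a single additive-attention step — where a decoder state attends over encoder features to build a context vector — can be sketched as:

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over a 1-D score vector
    e = np.exp(x - np.max(x))
    return e / e.sum()

def attention_step(encoder_feats, decoder_state, W_enc, W_dec, v):
    """One Bahdanau-style additive-attention step (illustrative names).

    encoder_feats : (T, d) features over T encoder positions
    decoder_state : (h,)   current decoder hidden state
    W_enc, W_dec, v : learned projections (here: plain arrays)
    """
    # score each encoder position against the decoder state
    scores = np.tanh(encoder_feats @ W_enc + decoder_state @ W_dec) @ v  # (T,)
    alpha = softmax(scores)            # attention weights, sum to 1
    context = alpha @ encoder_feats    # weighted sum of encoder features, (d,)
    return context, alpha
```

The context vector is then typically concatenated with the decoder state to predict the next character, which is how attention decoders sidestep CTC's per-frame independence assumption.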