2020
DOI: 10.1007/s11063-019-10166-x
Generating Text Sequence Images for Recognition

Abstract: Recently, methods based on deep learning have come to dominate the field of text recognition. Given sufficient training data, most of them can achieve state-of-the-art performance. However, it is difficult to harvest and label enough text sequence images from real scenes. To mitigate this issue, several methods for synthesizing text sequence images have been proposed, but they usually require complicated preceding or follow-up steps. In this work, we present a method that is able to generate unlimited training data…
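The core idea the abstract describes — rendering text strings into images so that labeled training pairs come for free — can be sketched in a few lines. The snippet below is a dependency-free illustration of that idea, not the authors' pipeline: the tiny 3×5 glyph bitmaps and the `render_line` helper are hypothetical stand-ins for real font rendering and augmentation.

```python
# Minimal sketch of synthetic text-line generation: compose per-character
# glyph bitmaps into one sequence image, so the label (the string itself)
# is known by construction. The 3x5 glyphs are illustrative placeholders.
import random

GLYPHS = {
    "A": ["010", "101", "111", "101", "101"],
    "B": ["110", "101", "110", "101", "110"],
    "C": ["011", "100", "100", "100", "011"],
}

def render_line(text, noise=0.0, rng=None):
    """Concatenate glyphs horizontally; optionally flip pixels as noise."""
    rng = rng or random.Random(0)
    rows = []
    for r in range(5):
        # Join glyph rows with a one-pixel gap between characters.
        row = "0".join(GLYPHS[ch][r] for ch in text)
        if noise:
            # Random pixel flips stand in for the augmentations a real
            # generator would apply (blur, perspective, background, etc.).
            row = "".join(
                c if rng.random() > noise else ("1" if c == "0" else "0")
                for c in row
            )
        rows.append(row)
    return rows  # the training pair is (rows, text)

img = render_line("ABC")
# Width = 3 glyphs of width 3, plus 2 one-pixel gaps = 11 columns.
assert len(img) == 5 and len(img[0]) == 11
```

Because the label is the input string, an arbitrary amount of perfectly labeled data can be produced by sampling strings and augmentation parameters — which is what makes this family of methods attractive when real annotated scene-text data is scarce.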

Cited by 8 publications (5 citation statements)
References 31 publications
“…In [25], proxy metrics are trained to assess the model quality. The authors of [25] train a model on 8 million synthetically generated images. In our case, because of the size of our network, such validation would be unfeasible.…”
Section: Results
confidence: 99%
“…In [6], proxy metrics are trained to assess the model quality. The authors of [6] train a model on 8 million synthetically generated images.…”
Section: Results
confidence: 99%