2013 12th International Conference on Document Analysis and Recognition
DOI: 10.1109/icdar.2013.221

ICDAR 2013 Robust Reading Competition

Abstract: This…

Citation statements published: 2014–2022

Cited by 1,165 publications (677 citation statements). References 23 publications.
“…Figure 10 depicts the user interface of our evaluation tool as well as the precision and recall curves, where the x-axis denotes t_r values and the y-axis denotes t_p ones. The proposed performance metrics and their underlying constraints are similar to those used in the ICDAR 2013 [24] and ICDAR 2015 [25] RRCs. It is worth noting that our annotation and evaluation tools are fully implemented in Java and are made open-source for standardization and validation purposes.…”
mentioning
confidence: 90%
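
The t_r and t_p thresholds in the excerpt above are area-recall and area-precision constraints on matching detected boxes to ground truth, in the spirit of the ICDAR Robust Reading evaluation protocols. A minimal sketch of the one-to-one matching case follows; the function names, the (x1, y1, x2, y2) box format, and the default threshold values are illustrative assumptions, not taken from the cited tool.

def area(box):
    # box = (x1, y1, x2, y2)
    return max(0, box[2] - box[0]) * max(0, box[3] - box[1])

def intersection(a, b):
    # Overlap area of two axis-aligned boxes (0.0 if disjoint).
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    return area((x1, y1, x2, y2)) if x1 < x2 and y1 < y2 else 0.0

def evaluate(gt_boxes, det_boxes, t_r=0.8, t_p=0.4):
    """Greedily count one-to-one matches in which the intersection covers
    at least t_r of the ground-truth area and t_p of the detection area."""
    matched_gt, matched_det = set(), set()
    for i, g in enumerate(gt_boxes):
        for j, d in enumerate(det_boxes):
            if j in matched_det:
                continue
            inter = intersection(g, d)
            if area(g) and area(d) and \
               inter / area(g) >= t_r and inter / area(d) >= t_p:
                matched_gt.add(i)
                matched_det.add(j)
                break
    recall = len(matched_gt) / len(gt_boxes) if gt_boxes else 1.0
    precision = len(matched_det) / len(det_boxes) if det_boxes else 1.0
    return precision, recall

Sweeping t_r and t_p over a grid and plotting the resulting precision and recall yields curves of the kind the excerpt describes.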
“…This algorithm is based on a fully convolutional network (FCN) followed by a standard non-maximum suppression process. In the 2013 edition of ICDAR RRC [24], a new database was proposed for video text detection, tracking and recognition. It contains 28 short video sequences.…”
Section: Literature Review
mentioning
confidence: 99%
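
The “standard non-maximum suppression process” mentioned in the excerpt above is the usual greedy filtering of overlapping detections by score. A minimal sketch, assuming axis-aligned (x1, y1, x2, y2) boxes and an illustrative IoU threshold:

def iou(a, b):
    # Intersection-over-union of two axis-aligned boxes.
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(boxes, scores, iou_threshold=0.5):
    """Keep the highest-scoring box, suppress boxes that overlap it
    beyond the threshold, and repeat for the remainder."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < iou_threshold for j in keep):
            keep.append(i)
    return keep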
“…Finally, we evaluate the misclassification error of our method using script-part templates learned from both datasets over a third Latin-only dataset. For this we use [30], which provides cropped word images of English text, and measure the classification accuracy of our method. From the above table we can see that features learned on the MLe2e dataset generalize much better to other datasets.…”
Section: Cross-domain Performance and Confusion In Latin-only Data
mentioning
confidence: 99%
“…The training set of letters is drawn from Chars74K [27], which contains 7,705 images of single characters '0'–'9', 'a'–'z', 'A'–'Z' in different orientations. The training set of non-text regions is taken from ICDAR 2013 [28], which contains 229 images, from which a total of 56,523 non-text regions are extracted.…”
Section: The Characteristic Of K
mentioning
confidence: 99%