2020
DOI: 10.1109/access.2020.2978225
Instance Segmentation Network With Self-Distillation for Scene Text Detection

Abstract: Segmentation-based methods have become the mainstream for detecting scene text with arbitrary orientations and shapes. However, to address challenging problems such as separating text instances that are very close to each other, these methods often require time-consuming post-processing. In this paper, we propose an instance segmentation network (ISNet), which simultaneously generates prototype masks and per-instance mask coefficients. After linearly combining the two components, ISNet can implemen…
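The abstract describes assembling per-instance masks by linearly combining shared prototype masks with per-instance coefficients. A minimal NumPy sketch of that assembly step (in the style popularized by YOLACT; the function name, shapes, and toy values are illustrative, not ISNet's actual API):

```python
import numpy as np

def assemble_masks(prototypes, coefficients):
    """Linearly combine shared prototype masks with per-instance
    coefficients, then squash with a sigmoid.

    prototypes:   (H, W, K) array of K prototype masks.
    coefficients: (N, K) array, one K-vector per detected instance.
    returns:      (N, H, W) array of per-instance mask probabilities.
    """
    # Contract over the prototype dimension K: each instance mask
    # is a weighted sum of the K shared prototypes.
    logits = np.einsum('hwk,nk->nhw', prototypes, coefficients)
    return 1.0 / (1.0 + np.exp(-logits))  # sigmoid

# Toy example: 2 prototypes on a 4x4 grid, 2 instances.
protos = np.zeros((4, 4, 2))
protos[:, :2, 0] = 5.0   # prototype 0 activates on the left half
protos[:, 2:, 1] = 5.0   # prototype 1 activates on the right half
coeffs = np.array([[1.0, -1.0],   # instance 0 keeps the left half
                   [-1.0, 1.0]])  # instance 1 keeps the right half
masks = assemble_masks(protos, coeffs)
print(masks.shape)                      # (2, 4, 4)
print((masks > 0.5).sum(axis=(1, 2)))   # 8 pixels per instance
```

Because the combination is a single matrix product, mask assembly is cheap compared with the iterative post-processing the abstract mentions.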

Cited by 13 publications (7 citation statements). References 39 publications.
“…The PSENet [27] uses convolutional features extracted with FPN [28] and an iterative framework which grows text regions from their innermost pixels allowing the network to accurately separate the individual text region instances. Another idea is to combine region level detection with mask coefficients that map pixels to specific text regions [29]. In this paper, we propose a model that predicts a pixel-level mask of text regions.…”
Section: B. Scene Text Detection
confidence: 99%
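The statement above contrasts PSENet's approach, which grows each text region outward from its innermost "kernel" pixels so that adjacent instances stay separated. A simplified breadth-first sketch of that scale-expansion idea (single expansion step; function and variable names are illustrative, not PSENet's actual implementation):

```python
import numpy as np
from collections import deque

def progressive_expansion(kernel_labels, text_mask):
    """Grow labeled text kernels outward through the full text mask,
    breadth-first, so touching instances remain separated.

    kernel_labels: (H, W) int array, 0 = background, k > 0 = kernel of instance k.
    text_mask:     (H, W) bool array of all text pixels.
    returns:       (H, W) int array with every reachable text pixel labeled.
    """
    labels = kernel_labels.copy()
    h, w = labels.shape
    # Seed the queue with all kernel pixels.
    q = deque((y, x) for y in range(h) for x in range(w) if labels[y, x] > 0)
    while q:
        y, x = q.popleft()
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w \
                    and text_mask[ny, nx] and labels[ny, nx] == 0:
                labels[ny, nx] = labels[y, x]  # first kernel to arrive claims the pixel
                q.append((ny, nx))
    return labels

# Two adjacent text regions whose separate kernels keep them apart.
text = np.ones((1, 6), dtype=bool)
kernels = np.array([[1, 0, 0, 0, 0, 2]])
print(progressive_expansion(kernels, text))  # [[1 1 1 2 2 2]]
```

The breadth-first order is what prevents one instance from swallowing its neighbor: both kernels expand at the same rate, so the boundary forms roughly midway between them.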
“…Yang et al [74] utilized the information of earlier training epochs to supervise the later training epochs. The self-distillation mechanism has been applied in many fields like classification [80], weakly-supervised object detection [28], text segmentation [75], etc. In this paper, we introduce the self-distillation mechanism into the training process of the WSSS model and propose a novel self-dual teaching strategy to facilitate an effective knowledge distillation under the weak supervision.…”
Section: Related Work
confidence: 99%
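The self-distillation mechanism referenced above (earlier training epochs supervising later ones) can be sketched as a loss that mixes ordinary cross-entropy with a KL term pulling current predictions toward soft targets saved from an earlier epoch of the same network. The `alpha` weighting and the exact loss mix here are illustrative assumptions, not the formulation of [74]:

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def self_distill_loss(student_logits, past_probs, hard_labels, alpha=0.5):
    """Cross-entropy on ground-truth labels, plus a KL term toward
    soft targets recorded from an earlier epoch of the SAME model.

    student_logits: (N, C) current-epoch logits.
    past_probs:     (N, C) probabilities saved from an earlier epoch.
    hard_labels:    (N,) integer class labels.
    """
    p = softmax(student_logits)
    ce = -np.log(p[np.arange(len(hard_labels)), hard_labels] + 1e-12).mean()
    kl = (past_probs * (np.log(past_probs + 1e-12)
                        - np.log(p + 1e-12))).sum(axis=-1).mean()
    return (1 - alpha) * ce + alpha * kl

# Toy usage: predictions that agree with both targets give a small loss.
logits = np.array([[4.0, 0.0], [0.0, 4.0]])
past = softmax(logits)                 # pretend the earlier epoch matched
labels = np.array([0, 1])
print(self_distill_loss(logits, past, labels))
```

Because the "teacher" is just an earlier snapshot of the student, no second network needs to be trained, which is what makes the mechanism attractive across the fields listed above.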
“…In this way, the students perform significantly better than the teacher in language modeling tasks. Hence, Yang et al [56] utilized self-distillation to accurately detect text in images and optimized the teacher-student training process. Clark et al [57] applied the BAN to multitask learning and validated its effectiveness in other NLP tasks such as textual similarity, textual entailment, and so on.…”
Section: B. Knowledge Distillation
confidence: 99%