2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr42600.2020.01333

Unifying Training and Inference for Panoptic Segmentation

Cited by 72 publications (43 citation statements)
References 17 publications
“…In Table 16, we conduct experiments on COCO val set. Compared with recent approaches, Panoptic FCN achieves superior performance with efficiency, which surpasses leading box-based [33] and box-free [39] methods over 0.2% and 1.5% PQ, respectively. With simple enhancement, the gap enlarges to 0.9% and 2.2% PQ.…”
Section: COCO | mentioning confidence: 92%
“…els[32],[33], the proposed method still achieves comparable result. If equipped with SwinT-based backbone, the proposed method achieves much better performance.…”
mentioning confidence: 83%
“…Compared to bottom-up methods, we achieve comparable performance, but smaller inference time. Two-stage approaches [9,14,30] based on Mask R-CNN [8] provide the most accurate segmentation, having the highest PQ scores. At the same time, two-stage methods are the slowest.…”
Section: Performance On COCO | mentioning confidence: 99%
“…Early attempts [19] in PS follow the decomposition pipeline, separately predicting the semantic and instance segmentation results and then adopting the things-stuff fusion process in later stages. Several works try to simplify the process and improve the accuracy through replacing the post things-stuff fusion with the parameter-free [48] or trainable [23] panoptic head. Furthermore, more researchers [24,39] try to abandon the separated branches and build an end-to-end unified framework.…”
Section: Related Work | mentioning confidence: 99%
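The statement above describes the classic decomposition pipeline: predict semantic and instance segmentation separately, then merge them with a things-stuff fusion step. As a rough, hedged illustration of what a parameter-free fusion heuristic can look like (in the spirit of the heuristic merging cited as [48], not the exact method of any cited paper), the sketch below paints instance masks in order of confidence and then fills remaining pixels with sufficiently large stuff regions. All function and parameter names, and the overlap/area thresholds, are illustrative assumptions.

```python
import numpy as np

def fuse_things_stuff(instance_masks, instance_scores, instance_classes,
                      semantic_seg, stuff_class_ids,
                      overlap_thresh=0.5, stuff_area_min=4096, void_id=-1):
    """Hypothetical parameter-free things-stuff fusion (heuristic sketch).

    instance_masks:   list of (H, W) boolean arrays, one per predicted instance
    instance_scores:  list of confidence scores, same length as instance_masks
    instance_classes: list of 'thing' class ids, same length as instance_masks
    semantic_seg:     (H, W) int array of per-pixel semantic class ids
    stuff_class_ids:  iterable of class ids treated as 'stuff'
    """
    h, w = semantic_seg.shape
    panoptic = np.full((h, w), void_id, dtype=np.int64)  # void_id marks unassigned pixels
    segment_id = 0
    segments = []  # (segment_id, class_id, is_thing)

    # 1) Paint instance ('thing') masks in order of decreasing confidence.
    order = np.argsort(instance_scores)[::-1]
    for idx in order:
        mask = instance_masks[idx]
        free = mask & (panoptic == void_id)
        # Skip instances whose visible (non-occluded) fraction is too small.
        if mask.sum() == 0 or free.sum() / mask.sum() < overlap_thresh:
            continue
        segment_id += 1
        panoptic[free] = segment_id
        segments.append((segment_id, instance_classes[idx], True))

    # 2) Fill remaining pixels with 'stuff' regions that cover enough area.
    for cls in stuff_class_ids:
        region = (semantic_seg == cls) & (panoptic == void_id)
        if region.sum() < stuff_area_min:
            continue
        segment_id += 1
        panoptic[region] = segment_id
        segments.append((segment_id, cls, False))

    return panoptic, segments
```

A trainable panoptic head, or the end-to-end unified frameworks also mentioned in the statement, would replace this hand-tuned merging with learned components; the sketch only illustrates the parameter-free baseline.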