2019
DOI: 10.48550/arxiv.1910.03892
Preprint
Fast Panoptic Segmentation Network

Cited by 3 publications
(5 citation statements)
References 0 publications
“…Publications. [11], FPSNet [12], EPSNet [13] and VPSNet [14]. For example, the semantic gap between (X_{0,0}, X_{1,3}) is bridged by means of … The paper also proposes the use of deep supervision (Deep Supervision [24]), which allows the model to operate in two modes:…”
Section: Practical Value of Obtained Results (unclassified)
“…[Table excerpt (columns PQ / PQ_Th / PQ_St / IoU; backbone values not recovered): FPSNet [9]: 55.1 / 48.3 / 60.1 / –; TASCNet [19]: 55.9 / 50.5 / 59.8 / –; AUNet [21]: 56.4 / 52.7 / 59.0 / 73.6; P. FPN [16]: 57 …] In Table 4 we benchmark the quantitative performance on the Microsoft COCO dataset, while qualitative results are shown in Figure 6. Similar to the methodology used for Cityscapes, we report results with the same backbone and with the same pre-training.…”
Section: Methods (mentioning; confidence: 99%)
“…Despite the recent introduction of panoptic segmentation there have already been multiple works attempting to address this [9,19,21,41]. This is in part due to its importance to the wider community, success in individual subtasks of instance and semantic segmentation and publicly available datasets to benchmark different methods.…”
Section: Related Work (mentioning; confidence: 99%)
“…We use the Adam optimizer [22] with learning rate 10⁻⁵, polynomial learning-rate decay, and test-time flip. We use ResNet50 as a backbone for DeepLabV3+ (pretrained on ImageNet) with an embedding dimension of 12 on Cityscapes, and 128 on COCO and Vistas. During training, we use crop size 1024 × 2048 on Cityscapes, and 512 × 512 on COCO, and crop around thing classes.…”
(mentioning; confidence: 99%)
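The training recipe in the last quoted statement (Adam with a base learning rate of 10⁻⁵ plus polynomial learning-rate decay) can be sketched as a plain scheduling function. This is a minimal sketch, not the cited paper's code; the decay exponent `power=0.9` is an assumption (a common choice in segmentation work), since the quote does not state it:

```python
def poly_lr(base_lr: float, step: int, max_steps: int, power: float = 0.9) -> float:
    """Polynomial learning-rate decay, as commonly paired with Adam in
    segmentation training. power=0.9 is an assumed default; the quoted
    statement only says "polynomial learning rate decay"."""
    return base_lr * (1.0 - step / max_steps) ** power

# The schedule starts at the base rate and decays smoothly to zero
# over max_steps; each value would be fed to the optimizer per step.
schedule = [poly_lr(1e-5, s, max_steps=1000) for s in (0, 500, 1000)]
print(schedule)  # starts at 1e-5, ends at 0.0
```

In a PyTorch training loop the same schedule could be wired in through `torch.optim.lr_scheduler.LambdaLR` by passing the decay factor `(1 - step / max_steps) ** power` as the lambda.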