2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2019.00541
FickleNet: Weakly and Semi-Supervised Semantic Image Segmentation Using Stochastic Inference

Abstract: The main obstacle to weakly supervised semantic image segmentation is the difficulty of obtaining pixel-level information from coarse image-level annotations. Most methods based on image-level annotations use localization maps obtained from the classifier, but these only focus on the small discriminative parts of objects and do not capture precise boundaries. FickleNet explores diverse combinations of locations on feature maps created by generic deep neural networks. It selects hidden units randomly and then u…


Cited by 436 publications (382 citation statements)
References 43 publications
“…Since CAMs only focus on small discriminative regions, which are too sparse to effectively supervise a segmentation model, various techniques such as adversarial erasing [12], [17], [21], [18] and region growing [13], [22] have been developed to expand sparse object seeds. Another line of research introduces dilated convolutions of different rates [14], [16], [15], [23] to enlarge receptive fields in classification networks and aggregates multiple attention maps to obtain dense localization cues. In this work, we adopt the self-attention scheme to capture richer and more extensive contextual information to mine integral object seeds, and meanwhile leverage both class-agnostic saliency cues and class-specific attention cues to ensure the accuracy of the seeds.…”
Section: A. Weakly-Supervised Semantic Segmentation
confidence: 99%
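The sparse-seed problem this excerpt describes can be illustrated with a minimal class activation mapping (CAM) sketch. This is a generic illustration, not code from any of the cited papers; the feature maps and classifier weights below are synthetic:

```python
import numpy as np

def class_activation_map(features, weights):
    """CAM: class-specific weighted sum of feature maps, min-max normalized.

    features: (K, H, W) feature maps from the last convolutional layer
    weights:  (K,) classifier weights for one class
    """
    cam = np.tensordot(weights, features, axes=1)  # (H, W)
    cam = np.maximum(cam, 0.0)                     # keep positive evidence only
    rng = cam.max() - cam.min()
    return (cam - cam.min()) / rng if rng > 0 else cam

# Synthetic example: one feature map fires only on a small discriminative patch,
# the other is weak and spatially uniform.
feats = np.zeros((2, 8, 8))
feats[0, 3:5, 3:5] = 1.0   # small, highly discriminative region
feats[1, :, :] = 0.1       # weak background activation
w = np.array([1.0, 0.5])
cam = class_activation_map(feats, w)

# Thresholding keeps only the small discriminative patch — too sparse to
# supervise a full-object segmentation mask on its own.
seed = cam > 0.3
print(seed.sum())  # → 4 seed pixels out of 64
```

Expansion techniques such as erasing, region growing, or the self-attention scheme above all aim to grow this 4-pixel seed toward the full object extent.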
“…The last two pooling layers are removed in order to increase the resolution of the output feature maps. Note that, unlike previous works [14], [23], [15] that enlarge the dilation rate of convolution kernels in the conv5 block, we avoid using dilated convolutions and instead use the self-attention module to capture more extensive context.…”
Section: The Proposed Approach
confidence: 99%
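The trade-off this excerpt refers to — enlarging the dilation rate to widen the receptive field — follows from simple arithmetic: a kernel of size k with dilation rate d spans (k − 1)·d + 1 input positions. A minimal 1-D sketch (generic illustration, not from the cited networks):

```python
import numpy as np

def dilated_conv1d(x, kernel, rate):
    """'Valid' 1-D convolution whose taps are spaced `rate` samples apart.

    Returns the output and the effective receptive field of one output value.
    """
    k = len(kernel)
    span = (k - 1) * rate + 1          # receptive field of a single output
    out = np.empty(len(x) - span + 1)
    for i in range(len(out)):
        out[i] = sum(kernel[j] * x[i + j * rate] for j in range(k))
    return out, span

x = np.arange(16, dtype=float)
kernel = np.array([1.0, 1.0, 1.0])

_, rf1 = dilated_conv1d(x, kernel, rate=1)   # ordinary conv: spans 3 inputs
_, rf4 = dilated_conv1d(x, kernel, rate=4)   # same kernel: spans 9 inputs
print(rf1, rf4)  # → 3 9
```

The same 3-tap kernel covers three times the context at rate 4 with no extra parameters, which is why dilated convolutions are a cheap way to densify localization cues; self-attention achieves wide context by a different route, relating all positions directly.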
“…Weakly supervised semantic segmentation methods have evolved rapidly. Scribble supervision [39] can now achieve 97% of the performance of manually supervised semantic segmentation with the same backbone network, and even the weakest image-level supervision [24] can produce…”
* Correspondence to: Sungroh Yoon <sryoon@snu.ac.kr>
Figure 1: (a) Our method discovers activated regions from each frame and aggregates them into a single frame using a warping technique based on optical flow.
Section: Introduction
confidence: 99%
“…However, the rate at which segmentation techniques relying on weak annotations are improving is declining rapidly. For instance, under image-level supervision, a 7.9% improvement on PASCAL VOC 2012 validation images [7] was achieved between 2016 and 2017 [23, 2], 1.8% between 2017 and 2018 [46], but only 0.8% since then [24].…”
Section: Introduction
confidence: 99%