2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2017.315
Learning Random-Walk Label Propagation for Weakly-Supervised Semantic Segmentation

Abstract: Large-scale training for semantic segmentation is challenging due to the expense of obtaining training data for this task relative to other vision tasks. We propose a novel training approach to address this difficulty. Given cheaply obtained sparse image labelings, we propagate the sparse labels to produce guessed dense labelings. A standard CNN-based segmentation network is trained to mimic these labelings. The label-propagation process is defined via random-walk hitting probabilities, which leads to a differe…
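The abstract's core mechanism, propagating sparse labels by random-walk hitting probabilities, can be sketched concretely. Below is a minimal NumPy illustration in the style of the classic random-walker segmentation (not the paper's exact RAWKS implementation): each unlabeled node receives the probability that a random walk from it first hits a seed of each class, which reduces to one linear solve against a block of the graph Laplacian. The affinity matrix `W` and the seed format are illustrative assumptions.

```python
import numpy as np

def propagate_labels(W, seeds):
    """Dense labeling from sparse seeds via random-walk hitting
    probabilities: entry (i, k) is the probability that a random walk
    started at node i first reaches a seed of class k.
    W: (n, n) symmetric affinity matrix. seeds: {node index: class id}."""
    n = W.shape[0]
    classes = sorted(set(seeds.values()))
    L = np.diag(W.sum(axis=1)) - W                 # graph Laplacian
    s = sorted(seeds)                              # seeded nodes
    u = [i for i in range(n) if i not in seeds]    # unlabeled nodes
    # One-hot class indicators at the seeds.
    Y = np.zeros((len(s), len(classes)))
    for row, node in enumerate(s):
        Y[row, classes.index(seeds[node])] = 1.0
    # Hitting probabilities at unlabeled nodes solve L_uu X = -L_us Y.
    X = np.linalg.solve(L[np.ix_(u, u)], -L[np.ix_(u, s)] @ Y)
    probs = np.zeros((n, len(classes)))
    probs[s] = Y
    probs[u] = X
    return probs
```

On a 4-node chain with one seed at each end, the interior nodes get class probabilities that interpolate linearly with distance to each seed, which is the "guessed dense labeling" the segmentation network is then trained to mimic.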



Cited by 235 publications (139 citation statements)
References 17 publications
“…Table 13 compares our approach using different sparsity levels (different numbers of labeled pixels for the augmentation), with other recent label augmentation or propagation methods using the PASCAL VOC 2012 data set. One of these works (Vernaza & Chandraker, 2017) uses traces as the input of the augmentation process (Traces) as well as the learned boundaries (learned by a neural network) using the RAWKS algorithm (v1) to augment the trace sparse labeling. V2 indicates the evaluation is done on 94% of the pixels, where the model is confident enough.…”
Section: Discussion
confidence: 99%
“…The authors work with sparse convolutions to learn directly from sparse labeling and show successful results with levels of sparsity between 5% and 70%. Label propagation was also used in Vernaza and Chandraker (2017), who show how to simultaneously learn a label-propagator and the image segmentation model, both with deep learning architectures. This approach propagates the ground truth labels from a few traces to estimate the main object boundaries in the image and provides a label for each pixel.…”
Section: Models for Weakly Labeled Data
confidence: 99%
“…Figure 7 shows a qualitative comparison between the Learned Random Walker presented here and the first-order approximation from [28]. We can observe that neither the sparse seeding nor our sampling strategy (sec.…”
Section: Sampling Strategy vs. Approximate Back-propagation
confidence: 93%
“…Different approaches have been proposed for backpropagating gradients through Laplacian systems. Very recently, a first-order approximation of the true derivative was used in [28] for semantic segmentation. Their approach has the conspicuous advantage that it requires solving only one system of linear equations.…”
Section: Sampling Strategy vs. Approximate Back-propagation
confidence: 99%
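The remark that gradients cost "only one system of linear equations" can be made concrete. Backpropagating a loss through x = A⁻¹b requires exactly one additional (adjoint) solve by the implicit-function theorem; the sketch below illustrates that general fact in NumPy, not the specific first-order scheme of [28].

```python
import numpy as np

def solve_with_grad(A, b, grad_x):
    """Solve A x = b, then backpropagate a loss gradient grad_x = dL/dx.
    One extra adjoint solve A^T lam = grad_x yields dL/db = lam and
    dL/dA = -lam x^T, so differentiating through a Laplacian solve
    costs only one more linear system. [28]'s approximation differs
    in detail; this is the exact implicit gradient."""
    x = np.linalg.solve(A, b)
    lam = np.linalg.solve(A.T, grad_x)   # the single adjoint solve
    return x, lam, -np.outer(lam, x)     # x, dL/db, dL/dA
```

For large sparse Laplacians the two dense solves above would be replaced by sparse or iterative solvers, but the count stays the same: one forward system and one adjoint system per loss.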