2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) 2018
DOI: 10.1109/cvprw.2018.00145
Minimizing Supervision for Free-Space Segmentation

Abstract: Identifying "free-space," or safely driveable regions in the scene ahead, is a fundamental task for autonomous navigation. While this task can be addressed using semantic segmentation, the manual labor involved in creating pixelwise annotations to train the segmentation model is very costly. Although weakly supervised segmentation addresses this issue, most methods are not designed for free-space. In this paper, we observe that homogeneous texture and location are two key characteristics of free-space, and dev…

Cited by 21 publications (20 citation statements).
References 43 publications (84 reference statements).
“…Bottom-Half is able to reach a decent IoU of 0.7550 and a high Recall, which is not surprising since free space indeed covers a large portion of the lower half of most frames. However, the Precision of this model is only 0.7798, which is poor compared to the 0.8778 achieved by our second unsupervised baseline, the raw Weak Labels from [49]. This second baseline also yields a large IoU improvement, reaching 0.7900.…”
Section: Results
confidence: 71%
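The IoU, Precision, and Recall figures quoted above are standard metrics for binary free-space masks. As a minimal sketch (not the cited papers' evaluation code; function and variable names are illustrative), they can be computed from a predicted mask and a ground-truth mask like this, here using a toy Bottom-Half-style prediction that labels the lower half of the frame as free space:

```python
import numpy as np

def free_space_metrics(pred, gt):
    """IoU, precision, and recall for binary free-space masks."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()    # pixels correctly marked free
    fp = np.logical_and(pred, ~gt).sum()   # pixels wrongly marked free
    fn = np.logical_and(~pred, gt).sum()   # free pixels that were missed
    iou = tp / (tp + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return iou, precision, recall

# Toy example: a Bottom-Half-style baseline on a 4x4 frame.
h, w = 4, 4
pred = np.zeros((h, w), dtype=bool)
pred[h // 2:, :] = True          # predict: bottom half is free space
gt = np.zeros((h, w), dtype=bool)
gt[1:, :] = True                 # toy ground truth: bottom three rows are free
iou, p, r = free_space_metrics(pred, gt)
```

In this toy case the baseline never predicts free space outside the true region, so its precision is perfect while its recall and IoU are penalized by the missed third row — the same trade-off the quoted statement reports for the real Bottom-Half baseline.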
“…We present Co-Teaching and its adaptation for a segmentation task, and we introduce its Stochastic variant. Since we focus on improving the training aspect, we use the weak labels proposed in [49] as targets during training. We benchmark the performance of (Stochastic) Co-Teaching against a fully supervised model, as well as against unsupervised and weakly supervised baselines in Section 5.…”
Section: Methods
confidence: 99%
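Co-Teaching, referenced in the statement above, trains two networks in parallel, where each network selects the samples it finds small-loss (likely clean) to train its peer. A minimal sketch of that selection rule with toy loss values follows; this is a generic illustration of the technique, not the cited paper's implementation, and the names and numbers are illustrative:

```python
import numpy as np

def coteach_select(loss_a, loss_b, keep_ratio):
    """Co-teaching selection: each network trains on the samples
    its peer considers small-loss (i.e. likely correctly labeled)."""
    k = int(keep_ratio * len(loss_a))
    idx_for_b = np.argsort(loss_a)[:k]  # A's small-loss picks train network B
    idx_for_a = np.argsort(loss_b)[:k]  # B's small-loss picks train network A
    return idx_for_a, idx_for_b

# Toy per-sample losses from two peer networks on a batch of four samples.
loss_a = np.array([0.9, 0.1, 0.4, 0.8])
loss_b = np.array([0.2, 0.7, 0.3, 0.9])
idx_for_a, idx_for_b = coteach_select(loss_a, loss_b, keep_ratio=0.5)
```

Because the two networks disagree on which samples are small-loss, each peer filters out a different subset of the noisy targets — here, the weak labels from [49] — which is what prevents the errors of one network from being directly reinforced in the other.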