2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2019.01043

Beyond Gradient Descent for Regularized Segmentation Losses

Abstract: The simplicity of gradient descent (GD) made it the default method for training ever-deeper and complex neural networks. Both loss functions and architectures are often explicitly tuned to be amenable to this basic local optimization. In the context of weakly-supervised CNN segmentation, we demonstrate a well-motivated loss function where an alternative optimizer (ADM) achieves the state-of-the-art while GD performs poorly. Interestingly, GD obtains its best result for a "smoother" tuning of the loss function…
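As a concrete illustration of the kind of regularized segmentation loss the abstract refers to, here is a minimal PyTorch-style sketch, assuming scribble supervision: partial cross-entropy on the few labeled pixels plus a relaxed Potts term over all pixels. The function name, the edge-list representation, and the affinity weights are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn.functional as F

def regularized_loss(logits, scribbles, edges, weights, lam=1.0):
    """logits: (N, K, H, W) network outputs; scribbles: (N, H, W) integer
    labels with -1 on unlabeled pixels; edges: (2, E) flat pixel-index pairs;
    weights: (E,) pairwise affinities, e.g. a Gaussian on color/position."""
    n, k, h, w = logits.shape
    logp = F.log_softmax(logits, dim=1).permute(0, 2, 3, 1).reshape(n, h * w, k)

    # Partial cross-entropy: supervise only the scribbled pixels.
    labels = scribbles.reshape(n, h * w)
    mask = labels >= 0
    ce = F.nll_loss(logp[mask], labels[mask].long())

    # Relaxed (bilinear) Potts over all pixels: sum_ij w_ij * S_i^T (1 - S_j)
    # penalizes pixel pairs whose soft label distributions disagree.
    probs = logp.exp()
    si, sj = probs[:, edges[0]], probs[:, edges[1]]
    potts = (weights * (si * (1.0 - sj)).sum(-1)).mean()

    return ce + lam * potts
```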

Cited by 24 publications (22 citation statements)
References 47 publications
“…To further improve the quality of weakly-supervised training, it is possible to leverage standard low-level regularizers over a large number of unlabeled pixels [66,32,61,62,42]. For example, [62] achieves the state-of-the-art using bilinear relaxation of the Potts model in (3)…”
Section: Regularized Losses in CNN Segmentation
confidence: 99%
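For intuition about the "bilinear relaxation of the Potts model" quoted above, a dense NumPy sketch of E(S) = Σ_ij w_ij S_iᵀ(1 − S_j) with a Gaussian affinity on color and position follows. The kernel parameters are assumptions, and practical implementations replace the O(P²) sum with fast bilateral filtering.

```python
import numpy as np

def bilinear_potts(S, colors, coords, sigma_rgb=15.0, sigma_xy=80.0):
    """S: (P, K) soft labels; colors: (P, 3) pixel colors; coords: (P, 2)
    pixel coordinates. Dense evaluation for illustration only."""
    d_rgb = ((colors[:, None] - colors[None]) ** 2).sum(-1) / (2 * sigma_rgb**2)
    d_xy = ((coords[:, None] - coords[None]) ** 2).sum(-1) / (2 * sigma_xy**2)
    w = np.exp(-d_rgb - d_xy)          # Gaussian affinities w_ij
    np.fill_diagonal(w, 0.0)
    # entry (i, j) of S @ (1 - S).T is S_i^T (1 - S_j)
    return float((w * (S @ (1.0 - S).T)).sum())
```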
“…Alternatively, one can directly use objective (2) as a regularized loss function [50,52]. Our proposal generation approach can be seen as one step of an ADM procedure for such a loss [37].…”
Section: Sampling Model
confidence: 99%
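The ADM alternation this statement refers to can be sketched as follows (not the authors' implementation): a latent discrete segmentation Y and the network's soft prediction S are optimized in turn. The `graphcut_proposal` solver is a placeholder for a discrete optimizer such as alpha-expansion; its signature is an assumption.

```python
import torch
import torch.nn.functional as F

def adm_step(model, optimizer, images, scribbles, graphcut_proposal, lam=1.0):
    # Step 1 (discrete): fix the network, generate proposals Y that minimize
    # lam * Potts(Y) - sum_i log S_i(Y_i), consistent with the scribbles.
    with torch.no_grad():
        probs = torch.softmax(model(images), dim=1)
    proposals = graphcut_proposal(probs, images, scribbles, lam)  # (N, H, W)

    # Step 2 (continuous): fix Y, take a gradient step fitting the network
    # to the proposals with ordinary cross-entropy backprop.
    optimizer.zero_grad()
    loss = F.cross_entropy(model(images), proposals)
    loss.backward()
    optimizer.step()
    return loss.item()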
“…ADMM for incorporating high-order segmentation priors on the target region's histogram of intensities [26] or compactness [17]. More recently, similar techniques have been proposed to include higher-order priors directly in the learning process [27,28]. To our knowledge, our work is the first employing a discrete-continuous framework for weakly-supervised segmentation.…”
Section: CRF-Regularized Proposal
confidence: 99%
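The ADMM splitting mentioned here can be sketched generically: a continuous score vector and a discrete segmentation copy are tied through a scaled dual variable, with the high-order prior enforced only on the discrete side. This is a minimal sketch under assumed definitions (a hypothetical size-range prior stands in for the histogram and compactness priors of [26,17]); it is not the method of any cited paper.

```python
import numpy as np

def admm_segment(s0, prior_project, rho=1.0, iters=20):
    """s0: (P,) continuous foreground scores in [0, 1]; prior_project maps a
    real vector to the closest {0,1}^P segmentation satisfying the prior."""
    s = s0.copy()
    y = (s0 > 0.5).astype(float)      # discrete copy of the segmentation
    u = np.zeros_like(s0)             # scaled dual variable
    for _ in range(iters):
        y = prior_project(s + u)                   # discrete (projection) step
        s = (s0 + rho * (y - u)) / (1.0 + rho)     # prox of 0.5 * ||s - s0||^2
        u = u + s - y                              # dual update
    return y

# Example high-order prior: segment size constrained to [lo, hi] pixels.
def size_projection(lo, hi):
    def project(z):
        order = np.argsort(-z)                     # most-confident pixels first
        k = int(np.clip((z > 0.5).sum(), lo, hi))  # feasible segment size
        y = np.zeros_like(z)
        y[order[:k]] = 1.0
        return y
    return project
```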