2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2019.00262
ADVENT: Adversarial Entropy Minimization for Domain Adaptation in Semantic Segmentation

Abstract: Semantic segmentation is a key problem for many computer vision tasks. While approaches based on convolutional neural networks constantly break new records on different benchmarks, generalizing well to diverse testing environments remains a major challenge. In numerous real-world applications, there is indeed a large gap between the data distributions of the train and test domains, which results in severe performance loss at run-time. In this work, we address the task of unsupervised domain adaptation in semantic segm…

Cited by 1,157 publications (1,159 citation statements)
References 37 publications
“…Recently, inspired by semi-supervised learning [11,17], which also utilizes unlabeled data, several semi-supervised-learning-based methods [9,24,36,31] have been proposed for the UDA task. Assuming that areas with higher prediction probability are more accurate, class-balanced self-training [36] generated pseudo-labels based on class-wise thresholds.…”
Section: Related Work
confidence: 99%
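The class-wise-threshold idea quoted above can be sketched in a few lines. This is a minimal numpy illustration, not the implementation from [36]: the function name `classwise_pseudo_labels`, the toy probabilities, and the per-class cutoffs are all hypothetical, and real self-training pipelines derive the thresholds from ranked confidence statistics per class rather than fixing them by hand.

```python
import numpy as np

def classwise_pseudo_labels(probs, thresholds):
    """Keep a pixel's predicted class as a pseudo-label only if its
    probability exceeds that class's own threshold; -1 marks ignored pixels."""
    preds = probs.argmax(axis=-1)      # most likely class per pixel
    conf = probs.max(axis=-1)          # probability of that class
    keep = conf >= thresholds[preds]   # compare against per-class cutoff
    return np.where(keep, preds, -1)

# toy example: 3 pixels, 2 classes (values chosen for illustration)
probs = np.array([[0.9, 0.1],
                  [0.6, 0.4],
                  [0.2, 0.8]])
thresholds = np.array([0.8, 0.7])      # hypothetical per-class cutoffs
labels = classwise_pseudo_labels(probs, thresholds)
# pixel 1 is predicted class 0 at 0.6 < 0.8, so it is ignored (-1)
```

The pixels labeled -1 would simply be excluded from the cross-entropy loss in the next self-training round.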
“…Thus, making unlabeled samples less ambiguous can help classes become more separable, e.g., by minimizing the conditional entropy [11]. ADVENT [31] adopted this idea in the UDA field and minimized the prediction entropy of the target samples.…”
Section: Related Work
confidence: 99%
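The entropy-minimization objective mentioned in this citation statement is straightforward to write down. Below is a minimal numpy sketch of the Shannon entropy of per-pixel softmax predictions; the function name and the epsilon stabilizer are my own, and ADVENT itself additionally normalizes and weights this quantity, so this should be read as the core idea only.

```python
import numpy as np

def prediction_entropy(probs, eps=1e-12):
    """Mean Shannon entropy of per-pixel class-probability vectors.
    Minimizing this pushes target predictions toward confident, peaked
    distributions, which is the direct entropy-minimization objective."""
    ent = -np.sum(probs * np.log(probs + eps), axis=-1)  # entropy per pixel
    return ent.mean()

confident = np.array([[0.99, 0.01]])  # peaked prediction -> low entropy
uncertain = np.array([[0.50, 0.50]])  # ambiguous prediction -> high entropy
```

Adding `prediction_entropy` on unlabeled target images to the supervised source loss is the "direct" variant of the approach; the gradient simply sharpens the softmax outputs.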
“…Feature alignment of the two domains in the latent feature space is further developed in [32] through an information bottleneck placed before the adversarial adaptation module on the feature space. Following a self-training strategy, Vu et al. [29] resort to an entropy-based loss. They explore both a direct approach with a hand-crafted loss and an indirect adversarial solution in which the objective is expressed by a learnable discriminator.…”
Section: Related Work
confidence: 99%
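The two variants contrasted in this quote (direct hand-crafted loss vs. adversarial discriminator) both operate on per-pixel self-information. A hedged numpy sketch of the shared building block, assuming my reading that the map in question is the weighted self-information -p·log(p) per class; the function name and toy inputs are illustrative, not from the paper's code:

```python
import numpy as np

def self_information_map(probs, eps=1e-12):
    """Per-pixel, per-class weighted self-information -p * log(p).
    Summing over classes recovers the entropy used by the direct loss;
    in the adversarial variant, maps like this are instead fed to a
    learnable discriminator trained to tell source maps from target maps,
    so the segmenter is pushed to produce target maps that look 'source-like'."""
    return -probs * np.log(probs + eps)

# one confident and one ambiguous pixel over 2 classes (toy values)
probs = np.array([[0.9, 0.1],
                  [0.5, 0.5]])
maps = self_information_map(probs)  # shape (pixels, classes)
```

The design choice is that a discriminator on these maps aligns the *structure* of the uncertainty across domains, rather than merely shrinking its average as the direct loss does.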