2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
DOI: 10.1109/cvprw.2019.00181
Unsupervised Domain Adaptation to Improve Image Segmentation Quality Both in the Source and Target Domain

Cited by 52 publications (30 citation statements). References 32 publications.

Citation statements, ordered by relevance:
“…For adaptation, we use only test images; for finetune (oracle), we use the labels of the train split to finetune the parameters of the pretrained DNN and evaluate on test images. DLV3 with W1 has an mIoU of 60.4%, which is higher than other UDA methods on SemanticKITTI ([25]). Currently, our method achieves the best UDA performance on the benchmark.…”
Section: Methods (mentioning)
confidence: 81%
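The evaluation protocol quoted above reports mean intersection-over-union (mIoU) on the target test set. As a point of reference, below is a minimal sketch of how mIoU can be computed from predicted and ground-truth label maps via a class confusion matrix; the function names and the ignore-index convention are illustrative and not taken from the cited paper.

```python
import numpy as np

def confusion_matrix(pred, gt, num_classes, ignore_index=255):
    """Accumulate a (num_classes x num_classes) confusion matrix
    from integer label maps; rows are ground truth, columns are predictions."""
    valid = gt != ignore_index
    idx = num_classes * gt[valid].astype(int) + pred[valid].astype(int)
    return np.bincount(idx, minlength=num_classes ** 2).reshape(num_classes, num_classes)

def mean_iou(conf):
    """mIoU = mean over classes of TP / (TP + FP + FN), ignoring absent classes."""
    tp = np.diag(conf)
    fp = conf.sum(axis=0) - tp
    fn = conf.sum(axis=1) - tp
    denom = tp + fp + fn
    iou = np.where(denom > 0, tp / np.maximum(denom, 1), np.nan)
    return float(np.nanmean(iou))
```

In such a protocol, the confusion matrix would be accumulated over all test images of the target domain before taking the class-wise mean.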
“…GANs have been used for domain adaptation in semantic segmentation, where discriminators are deployed at various locations of the network to measure the differences between the feature distributions of the source and target domains [22], [23], [24]. Bolte et al. [25] use GANs and obtain a single model for both domains; however, the adaptation is applied between datasets with similar appearance under similar weather conditions, and all parameters of the network are updated using the source-domain labels. Adaptation is not limited to GANs: Peng et al. [26] use Wasserstein GANs (W-GANs) [4] with a gradient penalty [27] and an additional generator in the network to measure the distribution similarity.…”
Section: Introduction (mentioning)
confidence: 99%
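The statement above describes adversarial feature alignment: a domain discriminator is attached to features (or outputs) of the segmentation network and trained to distinguish source from target, while the segmentation network is trained to fool it. The following is a minimal PyTorch-style sketch of that idea; the discriminator layout, the BCE formulation, and the function names are assumptions for illustration, not the exact setup of [22]-[27].

```python
import torch
import torch.nn as nn

class DomainDiscriminator(nn.Module):
    """Small fully convolutional discriminator that predicts, per location,
    whether a feature map comes from the source or the target domain."""
    def __init__(self, in_channels):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 256, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(256, 128, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 1, 3, padding=1),  # per-location domain logit
        )

    def forward(self, feats):
        return self.net(feats)

bce = nn.BCEWithLogitsLoss()

def adversarial_losses(disc, src_feats, tgt_feats):
    """Discriminator loss: tell source (label 1) from target (label 0).
    Generator loss: push target features to be classified as source."""
    src_logits = disc(src_feats.detach())
    tgt_logits = disc(tgt_feats.detach())
    d_loss = bce(src_logits, torch.ones_like(src_logits)) + \
             bce(tgt_logits, torch.zeros_like(tgt_logits))
    fool_logits = disc(tgt_feats)  # gradients flow back into the feature extractor
    g_loss = bce(fool_logits, torch.ones_like(fool_logits))
    return d_loss, g_loss
```

The supervised segmentation loss on the labeled source domain would be added to g_loss when updating the segmentation network.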
“…(8), with [·] being the Iverson bracket, which is 0 or 1 if the condition inside the bracket is false or true, respectively. Note that the “ground truth” depth maps d obtained from a LiDAR sensor are only sparse, meaning that a depth value d_i is available only for a subset of pixels i ∈ I(d) ⊂ I.…”
Section: Performance Evaluation Metrics (mentioning)
confidence: 99%
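Because the LiDAR "ground truth" depth is sparse, any error metric has to be restricted, via an Iverson-bracket-style validity condition, to the pixels i ∈ I(d) for which a depth value exists. A minimal sketch of such a masked metric follows; the choice of RMSE and the convention that a positive depth marks a valid pixel are assumptions for illustration, not details of the cited paper's equation (8).

```python
import numpy as np

def masked_rmse(pred_depth, gt_depth):
    """RMSE computed only over the sparse set of pixels with a LiDAR depth;
    the boolean mask plays the role of the Iverson bracket [d_i available]."""
    valid = gt_depth > 0
    if not np.any(valid):
        return np.nan
    err = pred_depth[valid] - gt_depth[valid]
    return float(np.sqrt(np.mean(err ** 2)))
```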
“…For example, when the predicted performance is too low, the high-level planning could decide not to trust the current information from the environment perception. This is of special importance considering that, in practice, DNN performance is often very sensitive to changes of the environment that have not been included in the dataset the neural network was trained on [7], [8]. Such changes include, e.g., a different camera type, different lighting or weather conditions, various other kinds of domain shift, or even directed adversarial attacks [9], [10], which are difficult to detect on the input image.…”
(mentioning)
confidence: 99%
“…For instance, unsupervised domain adaptation was attempted in [30] with a domain-adversarial neural network employing a gradient reversal layer during training. There are indications that this unsupervised domain adaptation approach can improve the segmentation quality in both the source and the target domain [31].…”
Section: Training (mentioning)
confidence: 99%
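A gradient reversal layer, as used in the domain-adversarial training referenced in [30], acts as the identity in the forward pass and flips (and scales) the gradient in the backward pass, so the shared feature extractor learns features that confuse the domain classifier. Below is a minimal PyTorch-style sketch assuming the standard DANN formulation, not the exact implementation of [30].

```python
import torch
from torch.autograd import Function

class GradReverse(Function):
    """Identity in the forward pass; multiplies the incoming gradient by
    -lambda in the backward pass (DANN-style gradient reversal)."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # No gradient is returned for lambd itself.
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    """Insert between the feature extractor and the domain classifier."""
    return GradReverse.apply(x, lambd)
```

In training, features would pass through grad_reverse before the domain classifier, while the segmentation head receives them unchanged; lambd is typically ramped up over the course of training.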