2020
DOI: 10.1007/978-3-030-58555-6_42
Contextual-Relation Consistent Domain Adaptation for Semantic Segmentation

Cited by 100 publications (73 citation statements)
References 48 publications
“…(13) Baseline. Previous cross-domain semantic segmentation methods fall mainly into three types: 1) methods based on adversarial training [21][22][23][24][25]; 2) methods based on self-training [3,26,27]; and 3) methods based on data augmentation [28,29]. To verify the effectiveness of our proposed model, we select representative recent schemes from each of these three types for comparison.…”
Section: Methods
confidence: 99%
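The self-training family mentioned in the excerpt typically trains on the target domain by keeping only high-confidence predictions as pseudo labels. The following is a minimal numpy sketch of that idea, not any cited paper's implementation; the function name, the 0.9 threshold, and the ignore index 255 are illustrative assumptions.

```python
import numpy as np

def generate_pseudo_labels(probs, threshold=0.9, ignore_index=255):
    """Assign a pseudo label to each target-domain pixel whose top
    softmax probability exceeds `threshold`; mark the rest as ignored.

    probs: (H, W, C) array of per-pixel class probabilities.
    Returns an (H, W) integer label map.
    """
    confidence = probs.max(axis=-1)   # top probability per pixel
    labels = probs.argmax(axis=-1)    # most likely class per pixel
    labels[confidence < threshold] = ignore_index
    return labels

# Toy example: a 2x2 "image" with 3 classes.
probs = np.array([[[0.95, 0.03, 0.02],   # confident -> class 0
                   [0.40, 0.35, 0.25]],  # uncertain -> ignored
                  [[0.05, 0.92, 0.03],   # confident -> class 1
                   [0.10, 0.10, 0.80]]]) # below 0.9 -> ignored
print(generate_pseudo_labels(probs))
```

The ignored pixels are simply excluded from the segmentation loss on the next training round, so the model is only supervised where its predictions are already reliable.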
“…Translating target-domain data into the source domain, and producing pseudo labels or weak labels for unlabeled target data, have been proposed to alleviate the burden of creating datasets for specific situations [8,9,10,11]. Other solutions for model optimization concentrate on narrowing the feature gap between source and target [12,13]. In this paper, our team proposes an improved semantic segmentation network to optimize image segmentation performance in nighttime situations.…”
Section: Our Method
confidence: 99%
“…He et al. [46] apply KD to a compact student network to address the mentioned problem and take advantage of a pretrained autoencoder for feature-similarity optimization. There also exist other variations of single-teacher KD algorithms that propose novel approaches to distilling knowledge, such as intra-class feature variation [49] or contextual-relation consistent domain adaptation [50].…”
Section: Knowledge Distillation
confidence: 99%
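The single-teacher KD variants cited above all build on the same core loss: matching the student's temperature-softened output distribution to the teacher's. A minimal numpy sketch of that standard loss, under the common T² scaling convention (the function names and T=2.0 default are illustrative, not taken from any cited paper):

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL divergence between the temperature-softened teacher and
    student distributions, averaged over all positions."""
    p = softmax(teacher_logits, T)          # soft teacher targets
    q = softmax(student_logits, T)          # student predictions
    kl = (p * (np.log(p) - np.log(q))).sum(axis=-1)
    return float((T * T) * kl.mean())       # T^2 restores gradient scale
```

The loss is zero when the student reproduces the teacher's logits exactly and grows as the two distributions diverge; the variants in [46], [49], and [50] add further terms (feature similarity, intra-class variation, contextual relations) on top of this base objective.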