2021 IEEE International Intelligent Transportation Systems Conference (ITSC)
DOI: 10.1109/itsc48978.2021.9564566

Continual Unsupervised Domain Adaptation for Semantic Segmentation by Online Frequency Domain Style Transfer

Cited by 18 publications (13 citation statements) | References 40 publications
“…Most methods in this category store information about a specific domain's style in order to transform source images into the styles of the target domains during training. Recent work achieves this by storing low-frequency components of the images for every domain [14], by capturing the style of the domain with generative models [15], [16] or by using a domain-specific memory to mitigate forgetting [17]. Supervised domain-incremental learning is rarely considered [11].…”
Section: A. Continual Semantic Segmentation (mentioning)
confidence: 99%
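The low-frequency transfer referred to in [14] can be pictured with a short sketch: swap the low-frequency amplitude band of the source image's 2-D FFT with that of a target-domain image while keeping the source phase. This is a generic Fourier-domain style transfer illustration, not the cited paper's exact implementation; the function name low_freq_swap and the band-size parameter beta are assumptions.

```python
import numpy as np

def low_freq_swap(source_img, target_img, beta=0.01):
    """Swap the centered low-frequency amplitude block of the source image's
    2-D FFT with that of a target-domain image, keeping the source phase.
    Both inputs: float arrays of shape (H, W, C) with identical sizes."""
    # Per-channel FFT; shift so that low frequencies sit in the center.
    src_fft = np.fft.fftshift(np.fft.fft2(source_img, axes=(0, 1)), axes=(0, 1))
    tgt_fft = np.fft.fftshift(np.fft.fft2(target_img, axes=(0, 1)), axes=(0, 1))

    src_amp, src_pha = np.abs(src_fft), np.angle(src_fft)
    tgt_amp = np.abs(tgt_fft)

    # Size of the swapped low-frequency block (beta is a fraction per side).
    h, w = source_img.shape[:2]
    b_h, b_w = max(1, int(h * beta)), max(1, int(w * beta))
    c_h, c_w = h // 2, w // 2
    src_amp[c_h - b_h:c_h + b_h, c_w - b_w:c_w + b_w] = \
        tgt_amp[c_h - b_h:c_h + b_h, c_w - b_w:c_w + b_w]

    # Recombine the target-style amplitude with the source phase and invert.
    stylized = src_amp * np.exp(1j * src_pha)
    stylized = np.fft.ifft2(np.fft.ifftshift(stylized, axes=(0, 1)), axes=(0, 1))
    return np.real(stylized)
```

Keeping the source phase preserves the semantic layout of the source image, so its segmentation labels remain valid for the stylized result.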
“…Methods in this category work mostly by storing information about the style of the specific domains, so that during training the source images can be transferred into the styles of the different target domains. This can be achieved by storing low-frequency components of the domains [53] or by capturing the style using generative models [36,60]. Other recent work proposes to use a target-specific memory for every domain to mitigate forgetting [28].…”
Section: Continual Unsupervised Domain Adaptation (mentioning)
confidence: 99%
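The target-specific memory mentioned in [17]/[28] can likewise be sketched as a small per-domain replay buffer; the class below is a hypothetical illustration (names and capacity are assumptions), not the cited design.

```python
import random
from collections import defaultdict, deque

class DomainMemory:
    """Hypothetical per-domain replay memory: keep a small ring buffer of
    samples for each target domain seen so far and replay a mixed batch
    during adaptation to mitigate forgetting."""

    def __init__(self, capacity_per_domain=256):
        self.buffers = defaultdict(lambda: deque(maxlen=capacity_per_domain))

    def add(self, domain_id, sample):
        self.buffers[domain_id].append(sample)

    def sample(self, batch_size):
        # Draw samples uniformly from everything stored across all domains.
        pool = [s for buf in self.buffers.values() for s in buf]
        return random.sample(pool, min(batch_size, len(pool)))
```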
“…This network replays source domain knowledge to the network during adaptation. Moreover, Stan et al. [47] and Termöhlen et al. [48] learn a source domain distribution, which is aligned with the target domain distribution during adaptation. These approaches do not make use of source data during the UDA.…”
Section: UDA Without Source Data (mentioning)
confidence: 99%
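As a rough illustration of aligning a learned source distribution with the target distribution when source data is no longer available, the sketch below matches first- and second-order feature statistics; the loss form and names are assumptions and do not reproduce the actual objectives of [47] or [48].

```python
import torch

def source_alignment_loss(target_feats, src_mean, src_std):
    """Hypothetical moment-matching loss: pull the statistics of target-domain
    features toward stored source-domain statistics, which stand in for a
    learned source distribution during source-free adaptation.
    target_feats: (N, D) features from target images; src_mean, src_std: (D,)."""
    tgt_mean = target_feats.mean(dim=0)
    tgt_std = target_feats.std(dim=0)
    return ((tgt_mean - src_mean) ** 2).mean() + ((tgt_std - src_std) ** 2).mean()
```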