2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW)
DOI: 10.1109/iccvw54120.2021.00182
Multi-Domain Conditional Image Translation: Translating Driving Datasets from Clear-Weather to Adverse Conditions

Cited by 5 publications (2 citation statements)
References 57 publications
“…To efficiently translate images among more than two domains, multi-domain image translation methods [21], [22], [23], e.g., StarGAN [7], build unified models to learn one-to-one mappings between the shared latent space and input domains. To sample multimodal translation results between domains, MUNIT [24] and DRIT [25] decompose the latent space into a shared content space and an unshared style space.…”
Section: Unit of Multiple Domains
Mentioning confidence: 99%
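The shared-content/unshared-style decomposition that this statement attributes to MUNIT and DRIT is easy to see in code. Below is a minimal PyTorch sketch, not the published architectures: every module name and layer size is an illustrative assumption. One encoder extracts a spatial content code meant to be shared across domains, a second extracts a low-dimensional domain-specific style vector, and a decoder recombines them, so sampling new style vectors produces multimodal translations of the same content.

import torch
import torch.nn as nn

class ContentEncoder(nn.Module):
    """Maps an image to a spatial content code shared across domains (assumed layers)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 7, 1, 3), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 4, 2, 1), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.net(x)

class StyleEncoder(nn.Module):
    """Maps an image to a domain-specific style vector (assumed layers)."""
    def __init__(self, style_dim=8):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 64, 7, 1, 3), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, style_dim)

    def forward(self, x):
        return self.fc(self.conv(x).flatten(1))

class Decoder(nn.Module):
    """Recombines a content code with a style vector via simple feature
    modulation (a stand-in for the AdaIN-style conditioning used in practice)."""
    def __init__(self, style_dim=8):
        super().__init__()
        self.affine = nn.Linear(style_dim, 128 * 2)  # per-channel scale and shift
        self.up = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="nearest"),
            nn.Conv2d(128, 3, 7, 1, 3), nn.Tanh(),
        )

    def forward(self, content, style):
        gamma, beta = self.affine(style).chunk(2, dim=1)
        h = content * (1 + gamma[..., None, None]) + beta[..., None, None]
        return self.up(h)

# Multimodal translation: one content code, many sampled styles.
x = torch.randn(1, 3, 64, 64)          # e.g., a clear-weather input
content = ContentEncoder()(x)          # domain-shared representation
ref_style = StyleEncoder()(x)          # style can also be encoded from a reference image
decoder = Decoder()
for _ in range(3):
    style = torch.randn(1, 8)          # sample a new (e.g., adverse-weather) style
    y = decoder(content, style)        # one of several plausible translations

The key design choice this sketch illustrates is that only the style code is resampled at test time; the content code is fixed, which is what keeps scene structure intact while appearance varies.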
“…CBST [21] and CRST [27] are two representative self-training strategies for domain-adaptive semantic segmentation. Recent methods CuDA-Net [57], FIFO [58], and CMDIT [59] focus on bridging the domain gap between clear images and foggy images to improve the performance of foggy scene segmentation. FogAdapt+ [60] combines scale invariance and uncertainty to minimize the domain shift in foggy scene segmentation.…”
Section: A. Experimental Settings, 1) Datasets
Mentioning confidence: 99%
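The self-training idea behind CBST and CRST can likewise be summarized in a few lines. The sketch below is an assumption-laden illustration, not the authors' released code: the function name and threshold are invented, and a single global confidence threshold stands in for CBST's class-balanced, per-class thresholds. The core loop is the same, though: a source-trained segmentation model predicts on unlabeled target-domain (e.g., foggy) images, confident per-pixel predictions are kept as pseudo-labels, and the model is then updated on those pseudo-labels.

import torch
import torch.nn.functional as F

def pseudo_label_step(model, optimizer, target_images,
                      conf_thresh=0.9, ignore_index=255):
    """One self-training step (hypothetical helper): generate pseudo-labels
    on target-domain images, then retrain on the confident pixels only.
    NOTE: a global conf_thresh is a simplification of CBST's class-balanced
    thresholding."""
    # 1) Pseudo-label generation with the current model, no gradients.
    model.eval()
    with torch.no_grad():
        probs = F.softmax(model(target_images), dim=1)  # (B, C, H, W)
        conf, pseudo = probs.max(dim=1)                 # per-pixel confidence, label
        pseudo[conf < conf_thresh] = ignore_index       # discard uncertain pixels

    # 2) Supervised update on the confident pseudo-labels.
    model.train()
    logits = model(target_images)
    loss = F.cross_entropy(logits, pseudo, ignore_index=ignore_index)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

In practice such steps alternate with re-generating pseudo-labels over the whole target set, and the confidence threshold is scheduled so that more pixels are admitted as the model adapts.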