2022
DOI: 10.1007/978-3-031-19815-1_31
Style-Hallucinated Dual Consistency Learning for Domain Generalized Semantic Segmentation

Cited by 32 publications (24 citation statements) · References 33 publications
“…Augmenting the style distribution to narrow the domain shift has been explored in domain generalization [34,60,66], image segmentation [4,63], and CD-FSL [11]. Concretely, MixStyle [66], AdvStyle [63], DSU [34], and wave-SAN [11] synthesize styles without extra parameters via mixing, attacking, sampling from a Gaussian distribution, and swapping, respectively. MaxStyle [4] and L2D [60] require additional network modules and complex auxiliary tasks to generate the new styles.…”
Section: Related Work
confidence: 99%
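
The parameter-free synthesis described in this excerpt is straightforward to sketch. Below is a minimal PyTorch-style illustration of MixStyle-flavoured statistic mixing [66]; the function name, the Beta(0.1, 0.1) mixing prior, and the random batch permutation are illustrative assumptions, not the reference implementation.

```python
import torch

def mix_styles(x, alpha=0.1, eps=1e-6):
    """Sketch of MixStyle-like augmentation: treat the per-channel mean/std
    of a feature map as its "style" and convexly mix styles across the batch.
    x: features of shape (B, C, H, W)."""
    B = x.size(0)
    mu = x.mean(dim=(2, 3), keepdim=True)          # per-instance channel means
    sig = x.std(dim=(2, 3), keepdim=True) + eps    # per-instance channel stds
    x_norm = (x - mu) / sig                        # remove the instance style

    # Convex mixing weights sampled from Beta(alpha, alpha)
    lam = torch.distributions.Beta(alpha, alpha).sample((B, 1, 1, 1)).to(x.device)
    perm = torch.randperm(B)                       # borrow styles from shuffled peers
    mu_mix = lam * mu + (1 - lam) * mu[perm]
    sig_mix = lam * sig + (1 - lam) * sig[perm]
    return x_norm * sig_mix + mu_mix               # re-style with mixed statistics
```

DSU-style sampling [34] would instead perturb mu and sig with Gaussian noise scaled by their batch-level variance, which is why neither approach adds trainable parameters.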
“…MaxStyle [4] and L2D [60] require additional network modules and complex auxiliary tasks to generate the new styles. AdvStyle [63] is the work most closely related to ours, so we highlight the key differences: 1) AdvStyle attacks styles at the image level, while we attack styles in multiple feature spaces with a progressive attacking method; 2) AdvStyle uses the same task loss (segmentation) for both attacking and optimization; in contrast, we use the classical classification loss to attack the styles while using the task loss (FSL) to optimize the whole network.…”
Section: Related Work
confidence: 99%
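
The feature-level attack this excerpt contrasts with AdvStyle can be sketched as a signed-gradient step on the style statistics. Everything here is an assumption for illustration — the function name, the single-step attack, and the step size — since the cited works define their own attack schedules and losses.

```python
import torch

def attack_style(feat, loss_fn, step=0.1, eps=1e-6):
    """Hedged sketch of an adversarial style attack at one feature level:
    perturb the channel-wise mean/std in the direction that increases the
    given loss, producing a "hard" style for the downstream task.
    feat: (B, C, H, W); loss_fn maps re-styled features to a scalar loss."""
    mu0 = feat.mean(dim=(2, 3), keepdim=True)
    sig0 = feat.std(dim=(2, 3), keepdim=True) + eps
    normed = ((feat - mu0) / sig0).detach()        # content, frozen during the attack

    mu = mu0.detach().clone().requires_grad_(True)
    sig = sig0.detach().clone().requires_grad_(True)
    loss_fn(normed * sig + mu).backward()          # gradients w.r.t. the style only

    with torch.no_grad():                          # one signed-gradient ascent step
        mu_adv = mu + step * mu.grad.sign()
        sig_adv = sig + step * sig.grad.sign()
    return normed * sig_adv + mu_adv
```

A progressive variant, as the excerpt describes, would repeat this at several feature levels, attacking with a classification loss while the whole network is optimized with the FSL task loss.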
“…However, the domain shift is usually unknown in real-world scenarios, since the target data is not available for training. This issue motivates the research area of domain generalization (DG) [1,22,27,28,30,34,41,43,45,47,51,52,54,74,75,78-80], which aims to make models trained on seen domains produce accurate predictions on unseen domains, i.e., to keep the conditional distribution P(Y|X) robust under a shifted marginal distribution P(X). Canonical DG focuses on learning a domain-invariant feature distribution P(F(X)) across domains so that the conditional distribution P(Y|F(X)) remains robust.…”
Section: Introduction
confidence: 99%
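
In code, keeping P(Y|F(X)) stable while P(X) shifts is commonly enforced by pairing a task loss with a cross-style consistency term. The sketch below shows one generic way to do this; it is not the exact formulation of the paper under discussion, and the symmetric-KL consistency term with unit weight is an assumption.

```python
import torch.nn.functional as F

def dg_consistency_loss(logits_src, logits_styled, labels):
    """Generic DG objective sketch: the same input under its original and a
    synthesized style should be (i) correctly classified and (ii) predicted
    consistently, approximating a domain-invariant P(Y|F(X)).
    logits_*: (B, K, ...) class scores; labels: (B, ...) class indices."""
    task = (F.cross_entropy(logits_src, labels)
            + F.cross_entropy(logits_styled, labels))
    log_p = F.log_softmax(logits_src, dim=1)
    log_q = F.log_softmax(logits_styled, dim=1)
    # Symmetric KL between the two predictive distributions
    consistency = 0.5 * (
        F.kl_div(log_p, log_q, reduction="batchmean", log_target=True)
        + F.kl_div(log_q, log_p, reduction="batchmean", log_target=True))
    return task + consistency
```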