2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW) 2021
DOI: 10.1109/iccvw54120.2021.00209
Rethinking Content and Style: Exploring Bias for Unsupervised Disentanglement

Cited by 6 publications (3 citation statements)
References 21 publications
“…and Lee et al. [99] proposed to separate an image into two spaces, a content space and an attribute space. Another type of disentanglement was proposed by [8, 100], who considered the important features as content and the remaining ones as style features.…”
Section: Guidance
Confidence: 99%
“…Ren et al. [100] began their paper by stating that most content-style disentanglement methods depend on supervision to guide the disentanglement. They followed the work of Gabbay et al. [102] and proposed an unsupervised content-style disentanglement module (C-S DisMo), which tries to isolate the features most important for reconstruction from the less important ones.…”
Section: Guidance
Confidence: 99%
“…We encode the contrast information into a low-dimensional vector c by a shared encoder E_C. We employ an information bottleneck loss [2, 19] here to limit the information capacity of c and avoid information leakage:…”
Section: Contrast Encoder
Confidence: 99%
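The information bottleneck loss mentioned in this statement is commonly realized as a KL divergence between the encoder's latent distribution and a standard normal prior, which caps how much information the code c can carry. A minimal sketch under that assumption (a diagonal-Gaussian encoder; the function name and shapes are illustrative, not taken from the cited paper):

```python
import numpy as np

def information_bottleneck_loss(mu, logvar):
    """KL( N(mu, diag(exp(logvar))) || N(0, I) ), summed over latent dims.

    Penalizing this KL term limits the information capacity of the code c,
    discouraging content information from leaking into it.
    Closed form: 0.5 * sum(mu^2 + sigma^2 - log(sigma^2) - 1).
    """
    return 0.5 * np.sum(mu**2 + np.exp(logvar) - logvar - 1.0, axis=-1)

# A code whose posterior matches the prior carries no information: loss is 0.
zero = information_bottleneck_loss(np.zeros(8), np.zeros(8))

# Moving the posterior away from the prior increases the penalty.
big = information_bottleneck_loss(np.ones(8), np.ones(8))
```

In a full model this term would be added, with a small weight, to the reconstruction objective, trading reconstruction fidelity against the capacity of c.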