2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr46437.2021.01361
Visualizing Adapted Knowledge in Domain Transfer

Cited by 52 publications (10 citation statements) | References 19 publications
“…One line is based on a sample-generation strategy. For example, 3C-GAN [21] and SDDA [19] generate labeled samples with target-domain style for training; VDM-DA [30] generates source-domain-style features and then aligns the generated features with the target features; SFIT [11] uses the batch-norm layers of the source model to generate images with source-domain style and aligns the output predictions. Another line uses a pseudo-label strategy.…”
Section: Source-free Domain Adaptation
confidence: 99%
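The batch-norm-based generation described in this excerpt can be read as optimizing input images so that their feature statistics match the running statistics stored in the frozen source model's batch-norm layers. Below is a minimal PyTorch sketch of that idea; it is an assumption-laden illustration, not the exact SFIT procedure, and the model choice, hook, and loop names are hypothetical.

```python
# Sketch (assumed setup): synthesize source-style images by matching batch-norm statistics
# of a frozen source model, in the spirit of the generation step described for SFIT [11].
import torch
import torch.nn as nn
import torchvision.models as models

source_model = models.resnet18(weights=None)  # stand-in for a trained source model
source_model.eval()
for p in source_model.parameters():
    p.requires_grad_(False)

bn_losses = []

def bn_stat_hook(module, inputs, output):
    # Compare the batch statistics of the current features with the running
    # statistics the source model accumulated on its (inaccessible) source data.
    x = inputs[0]
    mean = x.mean(dim=[0, 2, 3])
    var = x.var(dim=[0, 2, 3], unbiased=False)
    bn_losses.append(((mean - module.running_mean) ** 2).sum()
                     + ((var - module.running_var) ** 2).sum())

hooks = [m.register_forward_hook(bn_stat_hook)
         for m in source_model.modules() if isinstance(m, nn.BatchNorm2d)]

# Placeholder input batch; in practice these would be target-domain images
# that are refined toward source-domain statistics.
images = torch.rand(8, 3, 224, 224, requires_grad=True)
optimizer = torch.optim.Adam([images], lr=0.05)

for step in range(100):
    bn_losses.clear()
    optimizer.zero_grad()
    source_model(images)
    loss = torch.stack(bn_losses).sum()
    loss.backward()
    optimizer.step()

for h in hooks:
    h.remove()
```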
“…CPGA [42] generates source avatar prototypes via contrastive learning and achieves adaptation with target pseudo labels. SFIT [16] designs a two-branch framework to achieve image translation and fine-tunes the target model with the generated images. However, generation-based methods always require additional models and are difficult to train.…”
Section: Model Adaptation
confidence: 99%
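The fine-tuning step mentioned for SFIT [16] can be sketched as a distillation-style alignment: the target model's predictions on the generated images are pulled toward those of the frozen source model. The following PyTorch sketch assumes that reading; the generated_images placeholder and model choices are hypothetical stand-ins.

```python
# Sketch (assumed setup): fine-tune the target model by aligning its predictions
# on generated source-style images with the frozen source model's predictions.
import torch
import torch.nn.functional as F
import torchvision.models as models

source_model = models.resnet18(weights=None).eval()   # frozen source model
target_model = models.resnet18(weights=None).train()  # model being adapted
for p in source_model.parameters():
    p.requires_grad_(False)

optimizer = torch.optim.SGD(target_model.parameters(), lr=1e-3, momentum=0.9)

# Placeholder for images produced by the generation step above.
generated_images = torch.rand(8, 3, 224, 224)

for step in range(10):
    optimizer.zero_grad()
    with torch.no_grad():
        teacher_probs = F.softmax(source_model(generated_images), dim=1)
    student_log_probs = F.log_softmax(target_model(generated_images), dim=1)
    # KL divergence pulls the target model's outputs toward the source model's.
    loss = F.kl_div(student_log_probs, teacher_probs, reduction="batchmean")
    loss.backward()
    optimizer.step()
```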
“…In image style transfer, unsupervised learning from unpaired images has become the most widely used approach [12]. This type of algorithm only needs collections of images from each domain, without requiring a paired source and target version of every image, which greatly reduces the effort of collecting data.…”
Section: Algorithm
confidence: 99%
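The unpaired setting referenced here typically relies on a cycle-consistency constraint: translating an image to the other domain and back should recover the original, so no paired images are needed. Below is a minimal PyTorch sketch of that loss, with toy generators and random tensors standing in for real domain data; it illustrates the idea rather than any specific published model.

```python
# Sketch (assumed setup): the cycle-consistency loss behind unpaired style transfer.
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    """Toy image-to-image generator used only to illustrate the loss terms."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)

G_ab = TinyGenerator()  # translates domain A -> domain B
G_ba = TinyGenerator()  # translates domain B -> domain A
l1 = nn.L1Loss()

real_a = torch.rand(4, 3, 64, 64)  # unpaired samples from domain A
real_b = torch.rand(4, 3, 64, 64)  # unpaired samples from domain B

fake_b = G_ab(real_a)
fake_a = G_ba(real_b)

# Cycle consistency: A -> B -> A (and B -> A -> B) should reconstruct the input,
# which is what removes the need for paired images across domains.
cycle_loss = l1(G_ba(fake_b), real_a) + l1(G_ab(fake_a), real_b)
cycle_loss.backward()  # would be combined with adversarial losses in a full model
```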