2020 IEEE International Conference on Multimedia and Expo (ICME)
DOI: 10.1109/icme46284.2020.9102756
Category-Level Adversarial Self-Ensembling for Domain Adaptation

Cited by 6 publications (3 citation statements)
References 16 publications
“…These methods based on self-ensembling usually incorporate adversarial training to further align the feature distributions of the source and target domains, or introduce consistency loss terms to promote the student network to learn consistent predictions across different perturbations of the input data. For instance, Zuo et al. [16] proposed a category-level adversarial framework, which focuses on aligning the features of the source and target domains. In another recent work, Xu et al. [17] designed a self-ensembling attention network to address domain shift, which is employed to promote the computation of the consistency loss on the unlabeled domain.…”
Section: Introduction
confidence: 99%
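For readers unfamiliar with this family of methods, the sketch below illustrates, under assumed and heavily simplified architectures and loss weights, how mean-teacher self-ensembling with a consistency loss can be combined with gradient-reversal adversarial feature alignment. It is not the model of [16] or [17]; all module names, network sizes, and weighting coefficients are illustrative assumptions.

```python
# Minimal sketch (not the cited papers' exact architecture): mean-teacher
# self-ensembling plus a gradient-reversal domain discriminator.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass, negated (scaled) gradient in the backward pass."""
    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lamb * grad_output, None

def grad_reverse(x, lamb=1.0):
    return GradReverse.apply(x, lamb)

class Student(nn.Module):
    """Toy feature extractor + classifier + domain discriminator (assumed sizes)."""
    def __init__(self, feat_dim=256, num_classes=10):
        super().__init__()
        self.backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, feat_dim), nn.ReLU())
        self.classifier = nn.Linear(feat_dim, num_classes)
        self.domain_disc = nn.Linear(feat_dim, 2)  # source vs. target

    def forward(self, x, lamb=1.0):
        feat = self.backbone(x)
        logits = self.classifier(feat)
        dom_logits = self.domain_disc(grad_reverse(feat, lamb))
        return logits, dom_logits

def ema_update(teacher, student, alpha=0.99):
    # Teacher weights are an exponential moving average of the student's weights.
    with torch.no_grad():
        for t, s in zip(teacher.parameters(), student.parameters()):
            t.mul_(alpha).add_(s, alpha=1 - alpha)

student = Student()
teacher = copy.deepcopy(student)
for p in teacher.parameters():
    p.requires_grad_(False)
opt = torch.optim.SGD(student.parameters(), lr=1e-2, momentum=0.9)

def train_step(x_src, y_src, x_tgt):
    # Two stochastic perturbations of the same unlabeled target batch
    # (simple additive noise stands in for real data augmentation).
    x_tgt_student = x_tgt + 0.1 * torch.randn_like(x_tgt)
    x_tgt_teacher = x_tgt + 0.1 * torch.randn_like(x_tgt)

    logits_src, dom_src = student(x_src)
    logits_tgt, dom_tgt = student(x_tgt_student)
    with torch.no_grad():
        teacher_logits_tgt, _ = teacher(x_tgt_teacher)

    # Supervised loss on the labeled source domain.
    cls_loss = F.cross_entropy(logits_src, y_src)
    # Consistency loss: the student should match teacher predictions on the target.
    cons_loss = F.mse_loss(F.softmax(logits_tgt, dim=1),
                           F.softmax(teacher_logits_tgt, dim=1))
    # Adversarial alignment: the discriminator separates domains; the
    # gradient-reversal layer pushes the backbone to make them indistinguishable.
    dom_labels = torch.cat([torch.zeros(len(x_src)), torch.ones(len(x_tgt))]).long()
    dom_loss = F.cross_entropy(torch.cat([dom_src, dom_tgt]), dom_labels)

    loss = cls_loss + 1.0 * cons_loss + 0.1 * dom_loss  # weights are assumptions
    opt.zero_grad()
    loss.backward()
    opt.step()
    ema_update(teacher, student)
    return loss.item()
```

The key design choice shared by such methods is that only the student receives gradients; the teacher is updated by exponential moving average, which stabilizes the pseudo-targets used by the consistency term on the unlabeled domain.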
“…Meanwhile, according to the emotion theories [8, 9], recent studies have considered both the theory-based sentiment composition and the spatial distribution considering class information in a latent feature space [10–13]. Secondly, researchers have concentrated on the small amount of well-labeled data [14–22]. Specifically, these methods regard a part of the well-labeled data as unlabeled data.…”
Section: Introduction
confidence: 99%
“…the model has the ability to annotate sentiment labels [14–17]. Furthermore, to train with more than one dataset, domain adaptation (DA) approaches have been employed for image sentiment analysis [18–22].…”
Section: Introduction
confidence: 99%