2019
DOI: 10.48550/arxiv.1910.03903
Preprint
MixMatch Domain Adaptation: Prize-winning solution for both tracks of VisDA 2019 challenge

Cited by 5 publications (7 citation statements) · References 0 publications
“…Other favored semi-supervised techniques like tri-training and virtual adversarial training have been used in frameworks [83], [84], respectively (footnote 1: https://cutt.ly/DfN3rFU). Recently, [85] directly employs MixMatch [29] and obtains promising results in the VisDA-2019 challenge. Different from prior works that treat the whole target domain as an unlabeled dataset, we focus on intra-domain semi-supervised learning, where the labeled dataset consists of confident target samples and the unlabeled dataset consists of the remaining samples.…”
Section: Semi-supervised Learning (mentioning)
confidence: 99%
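The statement above notes that [85] directly employs MixMatch for domain adaptation. MixMatch's core label-guessing step averages a classifier's predictions over K augmentations of each unlabeled sample and then sharpens the average with a temperature. A minimal NumPy sketch of that step (function names and shapes are illustrative, not taken from the cited papers):

```python
import numpy as np

def sharpen(p, T=0.5):
    """Sharpen a class distribution with temperature T: raise each
    probability to 1/T and renormalize, lowering its entropy."""
    p = p ** (1.0 / T)
    return p / p.sum(axis=-1, keepdims=True)

def guess_labels(aug_probs, T=0.5):
    """Average predictions over K augmentations of each unlabeled sample
    (axis 0), then sharpen the average into a low-entropy pseudo-label.

    aug_probs: array of shape (K, batch, num_classes).
    """
    avg = aug_probs.mean(axis=0)  # (batch, num_classes)
    return sharpen(avg, T)
```

With T < 1 the dominant class is boosted: averaging predictions [0.6, 0.3, 0.1] and [0.4, 0.4, 0.2] gives [0.5, 0.35, 0.15], and sharpening at T = 0.5 pushes the top class above 0.6.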
“…Recently, semi-supervised learning approaches [22], [23] have also shown impressive achievements on the UDA problem, and Rukhovich et al. [53] even won the VisDA competition in 2019 by directly exploiting MixMatch [22]. Inspired by them, we construct the proxy source domain by pseudo-labeling portions of confident samples (source-similar samples), and try to solve the SFDA task in a semi-supervised style.…”
Section: Proxy Source Domain Construction By Prototypes (mentioning)
confidence: 99%
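The statement above describes building a proxy source domain by pseudo-labeling the most confident target samples and treating the rest as unlabeled. A minimal sketch of such a confidence-based split, assuming max-softmax confidence as the selection criterion (the threshold and function name are illustrative, not taken from the cited paper):

```python
import numpy as np

def split_by_confidence(probs, threshold=0.95):
    """Split target samples into a pseudo-labeled 'proxy source' set and a
    remaining unlabeled set, using max softmax probability as confidence.

    probs: array of shape (n_samples, num_classes), rows summing to 1.
    Returns (confident_idx, pseudo_labels, remaining_idx).
    """
    conf = probs.max(axis=1)
    confident_idx = np.where(conf >= threshold)[0]
    remaining_idx = np.where(conf < threshold)[0]
    pseudo_labels = probs[confident_idx].argmax(axis=1)
    return confident_idx, pseudo_labels, remaining_idx
```

The confident subset then plays the role of a labeled "source" dataset inside a semi-supervised learner, while the low-confidence subset is treated as unlabeled.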
“…Also aiming to reduce the data-distribution mismatch, semi-supervised domain adaptation (SSDA), compared with UDA, bridges the domain discrepancy by introducing partially labeled target samples. Recently, a few deep-learning-based methods [46,32,22,35] have been proposed for image classification. [46] decomposes SSDA into two sub-problems, UDA and SSL, and employs co-training [3] to exchange expertise between two classifiers, which are trained on MixUp-ed [48] data between the labeled and unlabeled data of each view.…”
Section: Related Work (mentioning)
confidence: 99%
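The statement above mentions training on MixUp-ed data between labeled and unlabeled samples. MixUp forms a convex combination of two examples and their (one-hot or soft) labels with a Beta-distributed weight; MixMatch additionally clamps the weight to max(lam, 1 − lam) so the result stays closer to the first example. A minimal sketch (parameter values are illustrative):

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.75, rng=None):
    """Mix two examples and their label vectors with lam ~ Beta(alpha, alpha).
    Uses the MixMatch variant lam = max(lam, 1 - lam), which biases the
    mixed example toward (x1, y1)."""
    if rng is None:
        rng = np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    lam = max(lam, 1.0 - lam)  # MixMatch modification: stay closer to x1
    x = lam * x1 + (1.0 - lam) * x2
    y = lam * y1 + (1.0 - lam) * y2
    return x, y
```

Because lam ≥ 0.5 after clamping, mixing a labeled example as the first argument with an unlabeled (pseudo-labeled) example as the second keeps the mixed pair dominated by the labeled one.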