Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining 2016
DOI: 10.1145/2939672.2939716
Domain Adaptation in the Absence of Source Domain Data

Cited by 76 publications (45 citation statements); references 19 publications.
“…Model Adaptation Most of the previously mentioned methods also require explicit access to source domain data during adaptation, and have made tremendous strides in improving segmentation performance under that assumption. A few recent papers tackle model adaptation for classification problems [44,42,16]. [39] proposes source-free domain adaptation for the case where label knowledge of the target domain is unavailable, and shows its efficiency on a set of classification problems with varying levels of label overlap.…”
Section: Related Work
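The source-free setting described above can be sketched with one common recipe: adapt a pretrained source model using only unlabeled target data by minimizing the entropy of its predictions. This is a minimal illustration of the general idea, not the specific method of [39]; the tiny classifier and the random target batch are stand-ins.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in for a model pretrained on the (now inaccessible) source domain.
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 3))
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)

# An unlabeled target-domain batch (random stand-in data).
x_target = torch.randn(16, 10)

# Source-free adaptation step: no source data and no target labels are used;
# the model is pushed to make confident predictions on target inputs.
history = []
for _ in range(5):
    probs = torch.softmax(model(x_target), dim=1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1).mean()
    optimizer.zero_grad()
    entropy.backward()
    optimizer.step()
    history.append(entropy.item())
```

In practice such entropy objectives are usually combined with regularizers (e.g. a diversity term) to avoid collapsing all target predictions onto a single class.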
“…These pseudo-tasks train the shared network structure to learn the task in multiple ways, and can be interpreted as a form of self-supervision. Similarly, increasing the robustness of classification through feature noising has been studied extensively [28,8,68,16,47]. Learning robust networks through pseudo-ensembling, which reduces prediction variance when dropout is used, has also been proposed [2].…”
Section: Related Work
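The pseudo-ensembling idea mentioned above can be sketched as a consistency penalty between two dropout-perturbed forward passes of the same input: each stochastic pass acts as one "ensemble member", and their disagreement is penalized. The architecture below is illustrative, not taken from [2].

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# A small network with dropout; each forward pass in train mode samples a
# fresh dropout mask, yielding a different pseudo-ensemble member.
net = nn.Sequential(
    nn.Linear(10, 64),
    nn.ReLU(),
    nn.Dropout(p=0.5),
    nn.Linear(64, 3),
)
net.train()  # keep dropout active

x = torch.randn(8, 10)
out_a = net(x)  # first stochastic pass
out_b = net(x)  # second pass, with a different dropout mask

# Consistency penalty: mean squared disagreement between the two passes.
# Adding this term to the task loss discourages variance under perturbation.
consistency = ((out_a - out_b) ** 2).mean()
```

Because the penalty needs no labels, it can also be applied to unlabeled data, which is how such consistency terms are typically used in semi-supervised training.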