Proceedings of the 26th Annual International Conference on Machine Learning 2009
DOI: 10.1145/1553374.1553411
Domain adaptation from multiple sources via auxiliary classifiers

Cited by 280 publications (226 citation statements)
References 11 publications
“…In particular, amongst TL baselines we chose: No transfer: a Regularized Least Squares (RLS) algorithm trained solely on the target data; Best source: the performance of the best source classifier, selected by its score on the test set; this is a pseudo-indicator of what an HTL method can achieve; AverageKT: obtained by averaging the predictions of all the source classifiers; RLS src+feat: RLS trained on the concatenation of the feature descriptors and the source classifier predictions; MultiKT ‖·‖₂: the HTL algorithm of [24], which selects β in (1) by minimizing the leave-one-out error subject to ‖β‖₂ ≤ τ; MultiKT ‖·‖₁: as above, but with the constraint ‖β‖₁ ≤ τ; DAM: the HTL algorithm of [9], which can handle selection from multiple source hypotheses and was shown to outperform the well-known and similar A-SVM [26] algorithm.…”
Section: Methods
Citation type: mentioning
confidence: 99%
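To make the simpler of these baselines concrete, here is a minimal Python sketch of AverageKT and RLS src+feat, assuming NumPy/scikit-learn and a list of pre-trained source classifiers exposing decision_function; the names source_clfs, average_kt, and rls_src_feat (and the alpha value) are illustrative, not from the cited papers:

```python
import numpy as np
from sklearn.linear_model import Ridge  # ridge regression = regularized least squares (RLS)

def average_kt(source_clfs, X):
    """AverageKT: average the raw predictions of all source classifiers."""
    preds = np.column_stack([clf.decision_function(X) for clf in source_clfs])
    return preds.mean(axis=1)

def rls_src_feat(source_clfs, X_train, y_train, X_test, alpha=1.0):
    """RLS src+feat: RLS trained on [feature descriptors, source predictions]."""
    def augment(X):
        src = np.column_stack([clf.decision_function(X) for clf in source_clfs])
        return np.hstack([X, src])
    model = Ridge(alpha=alpha)  # alpha is a placeholder regularization strength
    model.fit(augment(X_train), y_train)
    return model.predict(augment(X_test))
```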
“…Model-based domain adaptation approaches [22,23] learn an adaptive classifier that performs well on the target data. In these models, the learned classifier transfers model parameters from the source to the target domain without any change to the feature space.…”
Section: Related Work
Citation type: mentioning
confidence: 99%
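As a hedged illustration of this parameter-transfer idea (a common formulation, not the specific method of [22,23]), one can regularize the target weights toward the source weights rather than toward zero, i.e. minimize ‖Xw − y‖² + λ‖w − w_src‖². The closed-form solution is sketched below; all names are hypothetical:

```python
import numpy as np

def transfer_ridge(X, y, w_src, lam=1.0):
    """Least squares biased toward the source weights w_src:
    minimizes ||Xw - y||^2 + lam * ||w - w_src||^2.
    Setting the gradient to zero gives (X^T X + lam*I) w = X^T y + lam*w_src,
    so with little target data the solution stays near w_src, and with
    ample target data it approaches the ordinary least-squares fit."""
    d = X.shape[1]
    A = X.T @ X + lam * np.eye(d)
    b = X.T @ y + lam * w_src
    return np.linalg.solve(A, b)
```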
“…While several existing single-source adaptation techniques can be extended to multi-source adaptation, the literature on multi-source adaptation can be broadly categorized as: 1) feature representation approaches (Chattopadhyay et al., 2012; Sun et al., 2011; Duan et al., 2009; Duan et al., 2012; …”
Section: Related Work
Citation type: mentioning
confidence: 99%