PAC-Bayes and domain adaptation
2020 | DOI: 10.1016/j.neucom.2019.10.105

Abstract: We provide two main contributions in PAC-Bayesian theory for domain adaptation where the objective is to learn, from a source distribution, a well-performing majority vote on a different, but related, target distribution. Firstly, we propose an improvement of the previous approach we proposed in [1], which relies on a novel distribution pseudodistance based on a disagreement averaging, allowing us to derive a new tighter domain adaptation bound for the target risk. While this bound stands in the spirit of comm…
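As a sketch of the kind of quantity the abstract refers to: in this line of PAC-Bayesian work, the disagreement-averaging pseudodistance between the source distribution D_S and the target distribution D_T is typically defined for a posterior ρ over a hypothesis class H. The following is a hedged reconstruction in that spirit, not a verbatim statement of the paper's definition:

% Hedged sketch of a disagreement-averaging pseudodistance
% (reconstruction; the paper's exact definition may differ).
\[
  \operatorname{dis}_\rho(D_S, D_T) \;=\;
  \Bigl|\,
    \mathop{\mathbb{E}}_{(h,h') \sim \rho^2}
    \bigl[ R_{D_T}(h,h') - R_{D_S}(h,h') \bigr]
  \,\Bigr|,
  \qquad
  R_D(h,h') \;=\; \mathop{\mathbb{E}}_{x \sim D}
  \mathbf{1}\!\left[\, h(x) \neq h'(x) \,\right].
\]

Intuitively, the pseudodistance is small when pairs of hypotheses drawn from ρ disagree at the same rate on both domains, which is the sense in which the target is "related" to the source.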

Cited by 30 publications (48 citation statements)
References: 32 publications
“…Pioneering theoretical work was proposed by Ben-David et al [10], which shows that the target risk is upper bounded by three terms: source risk, marginal distribution discrepancy, and combined risk. This learning bound has been extended from many perspectives, such as considering different loss functions [32], different distribution distances [33], [34], [35] or the PAC-Bayes framework [36], [37]. According to the survey [14], most works focus on proving tighter bounds by constructing a new distribution distance.…”
Section: Domain Adaptation Theory
Confidence: 99%
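For context, the three-term bound of Ben-David et al. that this statement refers to has the following well-known form (stated here informally, in our notation rather than the citing paper's): for any hypothesis h in a class H,

% Classic domain adaptation bound of Ben-David et al. (informal sketch):
% target risk <= source risk + marginal discrepancy + combined risk.
\[
  R_T(h) \;\le\; R_S(h)
  \;+\; \tfrac{1}{2}\, d_{\mathcal{H}\Delta\mathcal{H}}(D_S, D_T)
  \;+\; \lambda,
  \qquad
  \lambda \;=\; \min_{h' \in \mathcal{H}}
  \bigl( R_S(h') + R_T(h') \bigr),
\]

where R_S and R_T are the source and target risks, d_{H∆H} is the marginal distribution discrepancy, and λ is the combined risk of the best joint hypothesis. The extensions listed above replace one or more of these three terms, e.g. by substituting a different distribution distance for d_{H∆H}.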
“…An extension of hypothesis transfer learning is studied in [42], where an algorithm combining the hypotheses from multiple sources based on regularized ERM principle is studied. There are also works focusing on the theoretical aspects of domain adaptation, see [5,11,43,44,45,46], which are also related to our problem. Note that in domain adaptation, there is no labeled target data and only unlabeled target samples are available.…”
Section: Related Work
Confidence: 99%
“…PAC-Bayesian analysis of distribution shift. Performance under distribution shift has also been characterized under the PAC-Bayesian setting where the learning algorithm outputs a posterior distribution over the h hypothesis class [37,38,61]. Li and Bilmes [61] directly bound the error on the target distribution (OOD) in terms of the empirical error on a small number of labeled samples from the target and a "divergence prior" which measures some divergence between the source and target domains.…”
Section: F Additional Related Work
Confidence: 99%
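As a sketch of how a "divergence prior" enters such PAC-Bayesian analyses: a classical McAllester-style bound on the target risk of a posterior Q, given m labeled target samples, has the form below. The coupling of the prior P to the source domain is the idea attributed to Li and Bilmes above; the exact bound they prove may differ in its constants and divergence term.

% McAllester-style PAC-Bayes bound (standard form; holds with
% probability at least 1 - delta over the m-sample):
\[
  R_T(Q) \;\le\; \widehat{R}_T(Q)
  \;+\; \sqrt{ \frac{ \mathrm{KL}(Q \,\|\, P)
                      + \ln \frac{2\sqrt{m}}{\delta} }{ 2m } }.
\]

Here P must be chosen before seeing the target sample, so encoding source-domain knowledge into P (the "divergence prior") shrinks the KL term, and hence the bound, precisely when source and target are close.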