2017
DOI: 10.48550/arxiv.1707.09724
Preprint
Transfer Learning with Label Noise

Abstract: Transfer learning aims to improve learning in a target domain by borrowing knowledge from a related but different source domain. To reduce the distribution shift between source and target domains, recent methods have focused on exploring invariant representations that have similar distributions across domains. However, when learning this invariant knowledge, existing methods assume that the labels in the source domain are uncontaminated, while in reality, we often have access to source data with noisy labels. In this…

Cited by 16 publications (9 citation statements)
References 21 publications
“…In the setting of label noise, transition probabilities are introduced to statistically model the generation of noisy labels. In classification and transfer learning, methods [25,21,35,38] employ transition probabilities to modify loss functions such that they can be robust to noisy labels. Similar strategies to modify deep neural networks by adding a transition layer have been proposed in [29,26].…”
Section: Related Work
confidence: 99%
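The loss-correction strategy this statement describes can be sketched concretely. Below is a minimal illustration of the common "forward correction" idea, assuming the noise transition matrix T (with T[i, j] = P(noisy label j | clean label i)) is known; the function name and the use of a known T are illustrative assumptions, not details taken from the cited works:

```python
import numpy as np

def forward_corrected_nll(probs, noisy_label, T):
    """Negative log-likelihood of a noisy label under forward loss correction.

    probs: model's predicted clean-class probabilities, shape (C,)
    noisy_label: observed (possibly corrupted) class index
    T: noise transition matrix, T[i, j] = P(noisy = j | clean = i)
    """
    # Push the clean-class prediction through the noise process to get
    # the predicted distribution over *noisy* labels, then score it.
    noisy_probs = probs @ T
    return -np.log(noisy_probs[noisy_label])
```

With T equal to the identity matrix (no label noise), this reduces to the ordinary cross-entropy loss, which is one way to sanity-check an implementation.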
“…The goal of UDA is to transfer knowledge [21], [22], [23] from an annotated source domain to another unlabeled target domain by reducing domain shift. From the standpoint of feature learning, many UDA studies can be considered as either domain-invariant feature learning or domain-specific feature learning.…”
Section: A. Unsupervised Domain Adaptation
confidence: 99%
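A standard way to instantiate the domain-invariant feature learning mentioned here is to penalize a distribution distance between source and target features. As a hedged sketch (not the cited papers' exact method), the squared maximum mean discrepancy with a linear kernel reduces to the distance between feature means:

```python
import numpy as np

def linear_mmd2(source_feats, target_feats):
    """Squared MMD with a linear kernel between two feature batches.

    source_feats, target_feats: arrays of shape (n, d) and (m, d).
    With a linear kernel, MMD^2 is simply ||mean(source) - mean(target)||^2,
    which a UDA method would add to the task loss to align the domains.
    """
    diff = source_feats.mean(axis=0) - target_feats.mean(axis=0)
    return float(diff @ diff)
```

Minimizing this term pulls the two feature distributions toward matching first moments; richer kernels (e.g., Gaussian) match higher-order statistics as well.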
“…The theoretical work on understanding the impact of label noise on the training and generalization of deep neural networks is still ongoing [65]. On the practical side, many studies have shown the negative impact of noisy labels on the performance of these models in real-world applications [66], [67], [68]. Not surprisingly, therefore, this topic has been the subject of much research in recent years.…”
Section: Deep Learning With Noisy Labels
confidence: 99%