2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2019.01155

Characterizing and Avoiding Negative Transfer

Abstract: When labeled data is scarce for a specific target task, transfer learning often offers an effective solution by utilizing data from a related source task. However, when transferring knowledge from a less related source, it may inversely hurt the target performance, a phenomenon known as negative transfer. Despite its pervasiveness, negative transfer is usually described in an informal manner, lacking rigorous definition, careful analysis, or systematic treatment. This paper proposes a formal definition of negative transfer…
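The abstract is truncated before the definition itself. As a hedged reconstruction (the error notation, the algorithm symbol A, and the negative transfer gap NTG follow how later citing work summarizes this paper, and are not quoted from the full text), the condition is usually written as:

\text{negative transfer occurs} \iff \epsilon_\tau\big(A(S, T)\big) > \epsilon_\tau\big(A(\emptyset, T)\big),
\qquad
\mathrm{NTG} = \epsilon_\tau\big(A(S, T)\big) - \epsilon_\tau\big(A(\emptyset, T)\big),

where \epsilon_\tau(\cdot) is the test error on the target task, A(S, T) is the model produced by algorithm A from source data S and target data T, and A(\emptyset, T) is the same algorithm trained on the target data alone; NTG > 0 indicates negative transfer.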

Cited by 323 publications (199 citation statements)
References 29 publications
“…Though the notion of negative transfer has been well recognized within the DA community [30], its rigorous definition is still unclear [46]. A widely accepted description of negative transfer [30] is that transferring knowledge from the source can have a negative impact on the target learner.…”
Section: Model Robustness Evaluation
confidence: 99%
“…While intuitive, how to evaluate it remains open. Inspired by [46], we propose meaningful protocols to evaluate the robustness of a given algorithm, especially under the more general partial setting. It is noteworthy that in this setting, negative transfer is caused not only by unrelated samples within the shared categories but also by unrelated data from the source outlier classes.…”
Section: Model Robustness Evaluation
confidence: 99%
“…With fine-tuning, learned parameters or features of source tasks may be forgotten after learning target tasks [29], and domain similarity between tasks is important for transfer learning [89]. Furthermore, transferring knowledge between dissimilar tasks may cause negative transfer [62,79]. Thus many works have discussed task differences between image classification and object detection [60,69,72,6], and [13] shows that object detectors fine-tuned from ImageNet pre-trained models can achieve high accuracy.…”
Section: Introduction
confidence: 99%
“…Sometimes the performance may actually be worse than if the target task was trained alone. This is known as negative transfer and occurs when the source tasks are poorly suited for the target tasks (Wang et al., 2019). It appears in human learning as well, e.g., learning to throw a baseball may be harder after learning to throw a football due to muscle memory making it difficult to adapt to a new throwing motion.…”
Section: Methods To Integrate Human Knowledge
confidence: 99%
“…Currently, preventing negative transfer requires effective human intuition or experience. Research has been conducted to develop methods that quantitatively eliminate negative transfer, including using a discriminator gate to assign different weights to each source task (Wang et al., 2019) and using an iterative method that detects the source of the negative transfer to reduce class noise (Gui et al., 2018).…”
Section: Methods To Integrate Human Knowledge
confidence: 99%
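For concreteness, here is a minimal sketch of the discriminator-gate idea referenced in the statement above: a domain discriminator's output is turned into per-sample weights that down-weight source data that looks unrelated to the target. The function names and the fixed logistic discriminator are illustrative assumptions for this sketch, not the implementation from Wang et al. (2019).

import numpy as np

def discriminator_prob_target(x, w, b):
    # Logistic "gate": estimated probability that a sample comes from the target domain.
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def source_instance_weights(xs, w, b, eps=1e-6):
    # Density-ratio-style weights D(x) / (1 - D(x)), normalized to mean 1, so that
    # source samples the discriminator judges "target-like" contribute more.
    p_t = discriminator_prob_target(xs, w, b)
    ratio = p_t / np.clip(1.0 - p_t, eps, None)
    return ratio / ratio.mean()

def gated_source_loss(per_sample_losses, weights):
    # Source classification loss rescaled by the discriminator gate.
    return float(np.mean(weights * per_sample_losses))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    xs = rng.normal(size=(8, 4))      # hypothetical source features
    losses = rng.uniform(size=8)      # hypothetical per-sample source losses
    w, b = rng.normal(size=4), 0.0    # hypothetical discriminator parameters
    print(gated_source_loss(losses, source_instance_weights(xs, w, b)))

Training the gate adversarially against a shared feature extractor, as the paper does, is omitted here; the sketch only shows how discriminator outputs become instance weights on the source loss.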