2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2019.00494

SpotTune: Transfer Learning Through Adaptive Fine-Tuning

Abstract: Transfer learning, which allows a source task to affect the inductive bias of the target task, is widely used in computer vision. The typical way of conducting transfer learning with deep neural networks is to fine-tune a model pretrained on the source task using data from the target task. In this paper, we propose an adaptive fine-tuning approach, called SpotTune, which finds the optimal fine-tuning strategy per instance for the target data. In SpotTune, given an image from the target task, a policy network i…
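The abstract describes per-instance routing: a policy network inspects each input and decides, per layer, whether that input goes through the frozen pretrained weights or a fine-tuned copy. A minimal numpy sketch of that routing idea for a single toy linear block (names such as `W_frozen`, `W_finetuned`, and `w_policy` are illustrative assumptions, not the authors' code; the paper makes the hard decision trainable via a Gumbel-softmax relaxation, which is omitted here):

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

D = 8  # feature dimension (assumed for illustration)

# Two copies of the same block: the frozen pretrained weights and a
# fine-tuned copy that would be updated during training on the target task.
W_frozen = rng.normal(size=(D, D))
W_finetuned = W_frozen + 0.1 * rng.normal(size=(D, D))

# Tiny "policy network": one logit per instance deciding which copy to use.
w_policy = rng.normal(size=D)

def spottune_block(x):
    """Route each instance through the frozen or the fine-tuned copy."""
    logit = x @ w_policy                  # (batch,) per-instance score
    use_finetuned = logit > 0             # hard per-instance routing decision
    out_frozen = relu(x @ W_frozen)
    out_finetuned = relu(x @ W_finetuned)
    out = np.where(use_finetuned[:, None], out_finetuned, out_frozen)
    return out, use_finetuned

x = rng.normal(size=(4, D))
y, decisions = spottune_block(x)
```

In the full method this choice is made independently at every residual block, so different target images can fine-tune different subsets of layers.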

Cited by 362 publications (289 citation statements)
References 42 publications
“…In order to improve performance further, we apply fine-tuning and SpotTune [16], both with and without HM. Fine-tuning on the limited number of regular MIP images does not show any improvement in performance; however, SpotTune leads to a ROC AUC of 0.825.…”
Section: Results
confidence: 99%
See 1 more Smart Citation
“…In order to improve performance further, we apply fine-tuning and SpotTune, 16 both with and without HM. Fine-tuning on the limited number of regular MIP images does not show any improvement in performance; however, SpotTune leads to a ROC AUC of 0.825.…”
Section: Resultsmentioning
confidence: 99%
“…This input-dependent fine-tuning approach enables targeting layers per input instance and leads to better accuracy [16]. We refer readers to the original paper [16] for further details of SpotTune.…”
Section: Fine-tuning
confidence: 99%
“…In particular, while in [8] simple multiplicative binary masks are used to indicate which network parameters are useful for a new task and which are not, in [9] a more general formulation is proposed considering affine transformations. Guo et al [12] proposed an adaptive fine-tuning method and derive specialized classifiers by fine-tuning certain layers according to a given target image.…”
Section: Related Work
confidence: 99%
“…Interestingly, the best performing approach in terms of score, i.e. SpotTune [12], requires a much larger number of parameters that would restrict the use of this method when increasing the number of domains.…”
Section: Multi-domain Learning
confidence: 99%
“…Long et al [24] propose a deep adaptation network architecture to match the mean embeddings of different domain distributions in a reproducing kernel Hilbert space. Guo et al [13] propose an adaptive fine-tuning approach to find the optimal fine-tuning strategy per instance for the target data. Readers can refer to [27] and the references therein for details about transfer learning.…”
Section: Related Work
confidence: 99%