2019
DOI: 10.1007/978-3-030-34879-3_12

Enhanced Transfer Learning with ImageNet Trained Classification Layer

Abstract: Parameter fine-tuning is a transfer learning approach whereby learned parameters from a pre-trained source network are transferred to the target network and then fine-tuned. Prior research has shown that this approach can improve task performance. However, the impact of the ImageNet pre-trained classification layer in parameter fine-tuning is largely unexplored in the literature. In this paper, we propose a fine-tuning approach with the pre-trained classification layer. We employ layer-wise fine-tu…
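The abstract's central idea is retaining the ImageNet-trained classification layer during fine-tuning rather than discarding it, as is usual practice. Below is a minimal Keras sketch of that general idea, assuming a ResNet50 backbone and a hypothetical 10-class target task; it illustrates the concept only, not the authors' exact layer-wise procedure, which the excerpt truncates.

```python
# Sketch: keep the ImageNet-trained 1000-way classification layer
# (include_top=True) and map its output to a new target-task head.
# ResNet50 and NUM_TARGET_CLASSES are illustrative assumptions.
import tensorflow as tf

NUM_TARGET_CLASSES = 10  # hypothetical target task

base = tf.keras.applications.ResNet50(weights="imagenet", include_top=True)

x = base.output  # shape (None, 1000): ImageNet class probabilities
outputs = tf.keras.layers.Dense(NUM_TARGET_CLASSES, activation="softmax")(x)
model = tf.keras.Model(inputs=base.input, outputs=outputs)

model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=...)
```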


Cited by 24 publications (13 citation statements)
References 24 publications
“…Hence, we only considered representatives of the base models in this domain as discussed below. Conveniently, these models are all available as part of the Keras API and each support transfer learning [94] in the form of supporting the pre-application to the model of the ImageNet [95] weights.…”
Section: Model Development
confidence: 99%
“…For these reasons we have selected five deep learning models as classifiers for our experiments. Conveniently, these models are all available as part of the Keras API and each support transfer learning [44] in the form of supporting the pre-application to the model of the ImageNet [40] weights.…”
Section: B. Model Consideration
confidence: 99%
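Both statements above describe the same Keras pattern: instantiating a Keras Applications model with ImageNet weights and using it as a feature extractor for a new task. A minimal sketch, assuming a VGG16 backbone and a hypothetical class count:

```python
# Sketch: frozen ImageNet-pretrained base plus a task-specific head.
import tensorflow as tf

NUM_CLASSES = 5  # hypothetical target task

base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                   input_shape=(224, 224, 3))
base.trainable = False  # use the convolutional base as a fixed feature extractor

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```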
“…The network was trained using weights pre-trained on ImageNet. The frozen-block approach was adopted for better results [62]: each EfficientNetB3 block was trained for three epochs, using one learning rate for the top layers and another for the rest of the blocks. Training stopped when the validation loss no longer improved and the model started overfitting.…”
Section: Methods
confidence: 99%
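The frozen-block schedule in the last statement can be sketched as follows, assuming Keras's EfficientNetB3 (whose layer names carry block1…block7 prefixes) and placeholder learning rates, since the excerpt does not preserve the actual values:

```python
# Sketch: unfreeze EfficientNetB3 one block at a time, training a few epochs
# per step with early stopping on validation loss. The learning rate, class
# count, and unfreezing order are placeholder assumptions.
import tensorflow as tf

NUM_CLASSES = 4  # hypothetical target task

base = tf.keras.applications.EfficientNetB3(weights="imagenet",
                                            include_top=False,
                                            input_shape=(300, 300, 3))
inputs = tf.keras.Input(shape=(300, 300, 3))
x = base(inputs, training=False)  # keep batch-norm layers in inference mode
x = tf.keras.layers.GlobalAveragePooling2D()(x)
outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)

early_stop = tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=2,
                                              restore_best_weights=True)

base.trainable = True            # enable per-layer trainable toggling
for layer in base.layers:
    layer.trainable = False      # start with every layer frozen

for block in range(7, 0, -1):    # unfreeze the deepest blocks first
    for layer in base.layers:
        if layer.name.startswith(f"block{block}"):
            layer.trainable = True
    # Recompile so the newly unfrozen weights are picked up by the optimizer.
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),  # placeholder LR
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    # model.fit(train_ds, validation_data=val_ds, epochs=3,
    #           callbacks=[early_stop])
```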