2018
DOI: 10.1613/jair.5756
On the Behavior of Convolutional Nets for Feature Extraction

Abstract: Deep neural networks are representation learning techniques. During training, a deep net is capable of generating a descriptive language of unprecedented size and detail in machine learning. Extracting the descriptive language coded within a trained CNN model (in the case of image data), and reusing it for other purposes is a field of interest, as it provides access to the visual descriptors previously learnt by the CNN after processing millions of images, without requiring an expensive trainin…

Cited by 71 publications (49 citation statements); references 20 publications.
“…In this paper, the features are extracted from the FC layer of three architectures (AOCT-Net, MobileNet, and ShuffleNet), which is a common method since the FC layer precedes the Softmax classifier. Based on the selected layer, only three features from each class will be extracted, and these features will be finely selected and representative [49,50]. The scatter distribution for the extracted features using the three models is represented in Figure 5.…”
Section: Features Extraction
confidence: 99%
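
The statement above pulls descriptors from the fully connected (FC) layer that precedes the Softmax classifier. Below is a minimal sketch of that kind of extraction; the torchvision MobileNetV2 backbone and the image path are assumptions for illustration (AOCT-Net is not publicly packaged), not the exact pipeline of the citing paper.

import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Load a pretrained backbone and register a hook that captures the input to
# the final Linear layer, i.e. the activations feeding the Softmax classifier.
model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.IMAGENET1K_V1).eval()
captured = {}
def grab_fc_input(module, inputs, output):
    captured["descriptor"] = inputs[0].detach()
model.classifier[-1].register_forward_hook(grab_fc_input)

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

img = preprocess(Image.open("example.jpg").convert("RGB")).unsqueeze(0)  # hypothetical image path
with torch.no_grad():
    model(img)
descriptor = captured["descriptor"]  # shape (1, 1280): the reusable visual descriptor

The same hook pattern works for ShuffleNet (models.shufflenet_v2_x1_0) by hooking its final fc layer.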
“…In this way, the learning rule for the binary units of a Softmax classifier is similar to the rule for regular binary units. The only difference is that the Softmax function is a generalization of the logistic sigmoid function, which can handle classification problems with more than two possible classes [49,50].…”
Section: Softmax Classifier
confidence: 99%
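
As the statement above notes, the Softmax is the multi-class generalization of the logistic sigmoid. A minimal numerical check (plain NumPy, an illustration rather than code from the cited works) that a two-class Softmax reduces to the sigmoid of the logit difference:

import numpy as np

def softmax(z):
    z = z - np.max(z)               # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

logits = np.array([2.0, -1.0])              # scores for two classes
p_softmax = softmax(logits)[0]              # P(class 0) from the Softmax
p_sigmoid = sigmoid(logits[0] - logits[1])  # P(class 0) from the sigmoid
assert np.isclose(p_softmax, p_sigmoid)     # identical up to rounding

With more than two classes, only the Softmax form applies, which is exactly the generalization the quoted passage refers to.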
“…The mapping of standardized values into these three categories is done through the definition of two constant thresholds. The optimal values of these thresholds can be found empirically for a labeled dataset [33]. However, we use certain threshold values shown to perform consistently across several domains [4].…”
Section: Full-network Embedding
confidence: 99%
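
The statement above maps standardized activation values into three categories using two constant thresholds. A minimal sketch of such a discretization follows; the function name and the default threshold values are placeholders for illustration, not the constants reported in [4].

import numpy as np

def discretize(standardized, lower=-0.25, upper=0.15):
    # Map standardized feature values to -1 (below lower), 0 (in between), or 1 (above upper).
    out = np.zeros_like(standardized, dtype=np.int8)
    out[standardized < lower] = -1
    out[standardized > upper] = 1
    return out

toy_embedding = np.random.randn(4, 8)  # toy standardized activations
print(discretize(toy_embedding))       # ternary full-network-style embedding

Tuning lower and upper on a labeled dataset corresponds to the empirical search mentioned in [33]; fixing them corresponds to the domain-robust constants of [4].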
“…Deep learning methodologies contribute to simplifying the feature engineering process: with a deep multi-layer convolutional neural network (CNN) [16], the features are learned from the data during the training process. Consequently, a successfully trained CNN model can be regarded as a feature extractor that both combines features across different spatial locations and accounts for the spatial autocorrelation between features.…”
Section: Introduction
confidence: 99%
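
The statement above treats a trained CNN as a feature extractor that combines information across spatial locations. A minimal sketch, assuming a torchvision ResNet-18 backbone (chosen for illustration, not taken from the citing paper), of pulling a spatial feature map from the last convolutional block:

import torch
from torchvision.models import resnet18, ResNet18_Weights
from torchvision.models.feature_extraction import create_feature_extractor

backbone = resnet18(weights=ResNet18_Weights.IMAGENET1K_V1).eval()
extractor = create_feature_extractor(backbone, return_nodes={"layer4": "fmap"})

with torch.no_grad():
    fmap = extractor(torch.randn(1, 3, 224, 224))["fmap"]  # dummy image batch
print(fmap.shape)  # (1, 512, 7, 7): one 512-d descriptor per spatial cell

Each of the 7x7 cells keeps a descriptor tied to its image region, which is what lets the extracted features preserve spatial relationships.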