2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
DOI: 10.1109/cvpr.2018.00974
Learning Structure and Strength of CNN Filters for Small Sample Size Training

Abstract: Convolutional Neural Networks have provided state-of-the-art results in several computer vision problems. However, due to the large number of parameters in CNNs, they require a large number of training samples, which is a limiting factor for small sample size problems. To address this limitation, we propose SSF-CNN, which focuses on learning the "structure" and "strength" of filters. The structure of the filter is initialized using a dictionary-based filter learning algorithm and the strength of the filter is learn…
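The structure/strength factorization described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the fixed filter "structures" would come from a dictionary-based filter learning algorithm, whereas here random unit-norm filters stand in, and all names and shapes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for dictionary-learned filter "structures". In SSF-CNN these
# are produced by a dictionary-based filter learning algorithm; here we
# use fixed random 3x3 filters, normalized to unit Frobenius norm.
n_filters, k = 16, 3
structure = rng.standard_normal((n_filters, k, k))
structure /= np.linalg.norm(structure, axis=(1, 2), keepdims=True)

# One learnable scalar "strength" per filter -- in this sketch, the only
# convolutional parameters that would be updated during training.
strength = np.ones(n_filters)

def effective_filters(structure, strength):
    """Effective conv filters = scalar strength * fixed structure."""
    return strength[:, None, None] * structure

# Parameter count for the conv layer drops from n_filters * k * k
# weights (144) to n_filters scalars (16).
full_params = structure.size
ssf_params = strength.size
```

The point of the parameterization is the last two lines: with structures held fixed, only the strength scalars need to be learned from the small training set, which is why the filter parameter count no longer scales with the filter size.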

Cited by 75 publications (38 citation statements)
References 39 publications (79 reference statements)
“…Nevertheless, Geometric Morphometric data has still proven to be a powerful type of input data for training, proving relatively fast to learn patterns from >1 min. Furthermore, considering the nature of the landmark data involved and its consequent transformation through GPA and PCA dimensionality reduction methods, this type of input data is less prone to issues presented by sample size as opposed to studies concerning, for example, Computer Vision and image processing-based techniques [54, 73-76], the latter requiring large numbers of parameters (usually in the millions), which are hard to learn from small datasets [76].…”
Section: Discussion
confidence: 99%
“…Algorithm 1 summarizes the training process, which has two main stages: large-scale DNN training (lines 1-5) and meta-transfer learning (lines 6-22). HT meta-batch re-sampling and continuous training phases are shown in lines 16-20, for which the failure classes are returned by Algorithm 2 (see line 14).…”
Section: Algorithm
confidence: 99%
“…Oyallon et al 15 applied a hybrid network to overcome the drawback of handcrafted filters. Rohit Keshari et al 16 proposed the Structure and Strength Filtered CNN (SSF-CNN). The second is to initialize the filter.…”
Section: Related Work
confidence: 99%
“…However, once these CNNs are trained, their architectures are not applicable to other datasets. Because these architectures learn filters in a stack-wise manner, once the network (filter) is trained, fine-tuning the filters on other databases is usually not allowed 16 . To address this gap, we built the Lightweight CNN-LSTM network based on Rohit Keshari's method.…”
Section: Related Work
confidence: 99%