2021
DOI: 10.1007/s10489-021-02769-6

Few-shot contrastive learning for image classification and its application to insulator identification

Abstract: This paper presents a novel discriminative few-shot learning architecture based on a batch compact loss. Convolutional Neural Networks (CNNs) currently achieve reasonably good performance in image recognition. Most existing CNN methods enable classifiers to learn discriminative patterns that identify existing categories trained with large numbers of samples. However, learning to recognize novel categories from only a few examples remains a challenging task. To address this, we propose the Residual Compact Network to train a de…
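The abstract does not spell out the batch compact loss, so the following is only a hypothetical reading for illustration: a penalty that pulls embeddings of the same class toward their within-batch class mean, encouraging compact clusters. The function name and formulation are assumptions, not the paper's definition.

import torch

def batch_compact_loss(embeddings: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    # Hypothetical sketch: for each class present in the batch, penalize the
    # squared distance of its embeddings to their within-batch class mean.
    # This is an assumed formulation for illustration, not the paper's loss.
    loss = embeddings.new_zeros(())
    classes = labels.unique()
    for c in classes:
        members = embeddings[labels == c]            # samples of class c in the batch
        center = members.mean(dim=0, keepdim=True)   # within-batch class centroid
        loss = loss + ((members - center) ** 2).sum(dim=1).mean()
    return loss / classes.numel()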


Cited by 13 publications (3 citation statements)
References: 53 publications
“…The author in [5] ends their study by concluding that deep residual learning for image recognition is a promising direction in image recognition. They note that deep residual learning for image recognition is computationally efficient, more accurate and suitable for sparse data representations.…”
Section: Introduction
mentioning confidence: 99%
“…Gao et al. [34] proposed a multi-scale residual network for hyperspectral image classification. Li et al. [35] designed a residual compact network to train a deep neural network for insulator recognition. These methods apply residual connections, avoiding feature degradation by guiding deep features with original features through identity transformation.…”
Section: Introduction
mentioning confidence: 99%
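The residual connection described in the quote above can be illustrated with a minimal PyTorch sketch: an identity shortcut adds the original features back onto the convolutional output, so deep features are guided by the input. This is a generic residual block for illustration, not the exact Residual Compact Network of the paper; channel sizes and layer choices are assumptions.

import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # Two 3x3 convolutions whose output is added back onto the block input.
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        identity = x                       # original features, kept unchanged
        out = self.relu(self.conv1(x))
        out = self.conv2(out)
        return self.relu(out + identity)   # identity shortcut guides deep features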
“…Ahuja et al. [13] and Oyewola et al. [33] applied only one residual connection from the beginning to the end (ResBTE) of their designed modules. Gao et al. [34] and Li et al. [35] connected multiple residual connections in series (ResCMS). Nevertheless, these designs suffer from the drawback of not fully utilizing the information of each convolution layer.…”
Section: Introduction
mentioning confidence: 99%
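For intuition, a hedged sketch of the two connection topologies contrasted in the quote above: a single shortcut spanning a module from beginning to end (ResBTE) versus several residual blocks chained in series (ResCMS). Only the connection patterns follow the quoted description; the depth, kernel sizes, and class names are illustrative assumptions, and ResCMS reuses the ResidualBlock sketched earlier.

import torch
import torch.nn as nn

class ResBTE(nn.Module):
    # One residual connection spanning the whole module (beginning to end).
    def __init__(self, channels: int, depth: int = 3):
        super().__init__()
        self.body = nn.Sequential(*[
            nn.Sequential(nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU())
            for _ in range(depth)
        ])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.body(x)            # single beginning-to-end shortcut

class ResCMS(nn.Module):
    # Multiple residual connections applied in series, one per block.
    def __init__(self, channels: int, depth: int = 3):
        super().__init__()
        self.blocks = nn.ModuleList(ResidualBlock(channels) for _ in range(depth))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        for block in self.blocks:
            x = block(x)                   # each block contributes its own shortcut
        return x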