Deep metric learning for few-shot image classification: A Review of recent developments
2023
DOI: 10.1016/j.patcog.2023.109381

Cited by 47 publications (19 citation statements)
References 99 publications
“…To verify the performance of the proposed method for few-shot fault diagnosis, five methods are compared: 1) Finetune-Last [32]; 2) Finetune-Whole [32]; 3) Siamese Net [16]; 4) Prototypical Net [18]; 5) Matching Net [14]. The Adam optimizer is used to train the network parameters. For the fine-tuning-based models (Finetune-Last and Finetune-Whole), the number of fine-tuning iterations is set to 100, and the network is a four-layer convolutional neural network.…”
Section: Experimental Setting and Parameter Definition
confidence: 99%
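
The excerpt above contrasts Finetune-Last and Finetune-Whole baselines on a four-layer convolutional backbone trained with Adam. A minimal PyTorch sketch of the two settings is given below; the Conv4 architecture, the feature dimension, and the interpretation of Finetune-Last as "freeze the backbone, train only a new linear classifier" are assumptions for illustration, not the exact configuration of the cited fault-diagnosis paper.

# Minimal sketch of the two fine-tuning baselines named in the excerpt.
# The 4-layer conv backbone and the helper names are illustrative assumptions.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(),
        nn.MaxPool2d(2),
    )

class Conv4(nn.Module):
    """Common four-layer convolutional backbone used in few-shot baselines."""
    def __init__(self, in_ch=1, hidden=64):
        super().__init__()
        self.features = nn.Sequential(
            conv_block(in_ch, hidden),
            conv_block(hidden, hidden),
            conv_block(hidden, hidden),
            conv_block(hidden, hidden),
        )
    def forward(self, x):
        return self.features(x).flatten(1)

def build_finetuner(backbone, feat_dim, n_way, whole=False, lr=1e-3):
    """Finetune-Whole updates every layer; Finetune-Last freezes the backbone
    and only trains a new linear classifier for the n_way novel classes."""
    head = nn.Linear(feat_dim, n_way)
    if whole:
        params = list(backbone.parameters()) + list(head.parameters())
    else:
        for p in backbone.parameters():
            p.requires_grad = False
        params = list(head.parameters())
    optimizer = torch.optim.Adam(params, lr=lr)  # Adam, as in the excerpt
    return head, optimizer

The cited setup fine-tunes for 100 iterations; a loop of 100 Adam steps over the novel-class support set would complete the sketch.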
“…In this paper, the meta-learning method is used to make the network model learn general meta-knowledge from the source domain and adapt quickly to new tasks. Many meta-learning approaches [6] have been proposed, mainly including metric-based algorithms [7][8], data-augmentation-based algorithms [9], and optimization-based algorithms [10][11][12]. Among them, metric-based meta-learning is considered one of the simplest and most effective methods. Wang et al. [13] proposed a meta-learning model based on feature-space metrics, which combines general supervised learning with metric meta-learning (Matching Net [14]), using the similarity between a single sample and a sample group to learn the fault category of unknown samples. Zhang et al. [15] proposed a few-shot model based on the Siamese network [16] for bearing fault diagnosis.…”
Section: Introduction
confidence: 99%
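
The excerpt describes the metric-based idea behind Matching Net [14]: a query is labelled by its similarity to each embedded support sample. Below is a minimal sketch of that classification rule in PyTorch; the embedding function is assumed to be any feature extractor, and the cosine-similarity attention over the support set follows the standard Matching Networks recipe rather than the specific models of the cited fault-diagnosis papers.

# Matching-Networks-style classification: cosine-similarity attention over the
# embedded support set produces a distribution over the N classes.
# Generic sketch, not the exact model from the cited papers.
import torch
import torch.nn.functional as F

def matching_net_predict(embed, support_x, support_y, query_x, n_way):
    """embed: any feature extractor mapping inputs -> (B, D) embeddings.
    support_y: integer class labels in [0, n_way)."""
    s = F.normalize(embed(support_x), dim=1)        # (N*K, D) support embeddings
    q = F.normalize(embed(query_x), dim=1)          # (Q, D) query embeddings
    attn = F.softmax(q @ s.t(), dim=1)              # (Q, N*K) attention weights
    one_hot = F.one_hot(support_y, n_way).float()   # (N*K, n_way)
    probs = attn @ one_hot                          # (Q, n_way) class distribution
    return probs.argmax(dim=1), probs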
“…The recognition task can be completed by comparing the distance to labelled samples. This method can effectively avoid the problem of overfitting because it does not require additional parameters for new classes [19]. Different distance metrics can be used: Manhattan distance, Euclidean distance, cosine similarity, and others.…”
Section: Metric Learning
confidence: 99%
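
As the excerpt notes, the only design choice in this non-parametric classifier is the distance function. The short sketch below shows nearest-support classification in embedding space with the three metrics mentioned (Manhattan, Euclidean, cosine); it is a generic illustration, not code from the cited work.

# Nearest-support classification with interchangeable distance metrics.
import torch
import torch.nn.functional as F

def pairwise_distance(q, s, metric="euclidean"):
    """q: (Q, D) query embeddings, s: (M, D) support embeddings."""
    if metric == "euclidean":
        return torch.cdist(q, s, p=2)
    if metric == "manhattan":
        return torch.cdist(q, s, p=1)
    if metric == "cosine":
        # smaller = more similar, so turn cosine similarity into a distance
        return 1 - F.normalize(q, dim=1) @ F.normalize(s, dim=1).t()
    raise ValueError(metric)

def nearest_support_predict(query_emb, support_emb, support_y, metric="euclidean"):
    d = pairwise_distance(query_emb, support_emb, metric)  # (Q, M)
    return support_y[d.argmin(dim=1)]                      # label of the closest support sample

No new parameters are introduced for novel classes, which is exactly why the excerpt argues this approach resists overfitting in the few-shot regime.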
“…However, the number of support samples is limited, so they cannot adequately represent the novel class [19].…”
Section: Metric Learning
confidence: 99%
“…Various triplet selection strategies, such as the semi-hard triplet mining used in FaceNet [29], aim to strike a balance between excessively challenging and overly simple examples by selecting negatives that lie within a predefined margin hyperparameter. DML has been applied to tasks such as face classification, hyperspectral image classification, and few-shot classification of diseases [25,[29][30][31][32][33][34], where it has outperformed baseline methods.…”
Section: Introduction
confidence: 99%
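
The semi-hard criterion mentioned in the excerpt selects, for each anchor-positive pair, a negative that is already farther than the positive but still inside the margin, i.e. d(a,p) < d(a,n) < d(a,p) + margin. A minimal batch-wise sketch is given below; it follows the FaceNet-style formulation [29] as a generic illustration, and the function name and default margin are assumptions, not values from the cited works.

# Batch semi-hard triplet mining: for each anchor/positive pair, pick a
# negative with  d(a,p) < d(a,n) < d(a,p) + margin.
import torch

def semi_hard_triplet_loss(emb, labels, margin=0.2):
    """emb: (B, D) L2-normalised embeddings, labels: (B,) integer class ids."""
    dist = torch.cdist(emb, emb, p=2)                  # (B, B) pairwise distances
    same = labels.unsqueeze(0) == labels.unsqueeze(1)  # (B, B) same-class mask
    losses = []
    B = emb.size(0)
    for a in range(B):
        for p in torch.where(same[a])[0]:
            if p == a:
                continue
            d_ap = dist[a, p]
            neg_d = dist[a][~same[a]]                  # distances to all negatives
            semi = neg_d[(neg_d > d_ap) & (neg_d < d_ap + margin)]
            if len(semi) == 0:
                continue                               # no semi-hard negative in this batch
            d_an = semi.min()                          # hardest of the semi-hard negatives
            losses.append(torch.clamp(d_ap - d_an + margin, min=0))
    return torch.stack(losses).mean() if losses else emb.sum() * 0.0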