2020 25th International Conference on Pattern Recognition (ICPR), 2021
DOI: 10.1109/icpr48806.2021.9412941
Explanation-Guided Training for Cross-Domain Few-Shot Classification

Cited by 40 publications (30 citation statements); references 23 publications.
“…Model-aware approaches [16], [17], [18], [19], [20], [21], [22], [23], [24], [25], [12], [26], on the other hand, take the internal structure of the model into account and therefore tend to yield more precise model-based explanations. For example, LRP [2], a model-aware post-hoc XAI approach, has been widely used to explain the decisions of various deep neural networks, such as convolutional neural networks, recurrent neural networks, and graph neural networks [27]. Because of its importance in general, and to this paper in particular, we briefly describe the basic idea of LRP.…”
Section: A. Explainability Methods
confidence: 99%
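The LRP idea referenced above can be sketched for a single linear layer. The following is a minimal illustration of the LRP-ε redistribution rule, not the paper's actual implementation; the function name and the tiny example values are assumptions for demonstration.

```python
import numpy as np

def lrp_epsilon(a, W, b, R_out, eps=1e-6):
    """Redistribute output relevance R_out onto the inputs of one linear layer.

    a: input activations, shape (d_in,)
    W: weights, shape (d_in, d_out)
    b: bias, shape (d_out,)
    R_out: relevance assigned to each output neuron, shape (d_out,)

    LRP-epsilon rule: R_i = sum_j (a_i * W_ij) / (z_j + eps*sign(z_j)) * R_j,
    where z_j is the pre-activation of output neuron j.
    """
    z = a @ W + b                                    # forward pre-activations
    z_stab = z + eps * np.where(z >= 0, 1.0, -1.0)   # stabilizer: avoid division by zero
    s = R_out / z_stab                               # relevance per unit of pre-activation
    return a * (W @ s)                               # each input's weighted contribution

# Toy example: identity weights, zero bias.
a = np.array([1.0, 2.0])
W = np.eye(2)
b = np.zeros(2)
R = lrp_epsilon(a, W, b, R_out=np.array([1.0, 1.0]))
```

With zero bias and a small ε, the rule approximately conserves relevance layer to layer (the sum of input relevances matches the sum of output relevances), which is the property that makes LRP heatmaps interpretable as a decomposition of the prediction.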
“…Based on this, FRN [84] explores the potential space for few-shot image classification, using ridge regression to reconstruct and normalize the feature map without adding new learning parameters. FWT [85] utilizes only the source data for the affine transformation of features, as do LRP-GNN [86] and SBMTL [87]. FD-MIXUP [88] constructs auxiliary datasets by mixup and uses encoders to learn domain-irrelevant features to guide the network generalization to other tasks.…”
Section: Cross-Domain Few-Shot Learning
confidence: 99%
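The ridge-regression reconstruction mentioned for FRN can be illustrated in closed form. The sketch below is an assumption-laden toy (the function name, λ value, and feature shapes are illustrative, not FRN's actual pipeline): query features are reconstructed from a class's support features, and a small reconstruction error indicates a good class match, with no learned parameters beyond the regularizer.

```python
import numpy as np

def ridge_reconstruct(support, query, lam=0.01):
    """Reconstruct query features from one class's support features.

    support: (n_s, d) support feature vectors for a single class.
    query:   (n_q, d) query feature vectors.

    Solves W = argmin ||W @ support - query||^2 + lam * ||W||^2 in closed form:
        W = Q S^T (S S^T + lam I)^{-1}
    and returns the reconstruction W @ S. No parameters are learned here.
    """
    S, Q = support, query
    G = S @ S.T + lam * np.eye(S.shape[0])   # regularized Gram matrix
    W = Q @ S.T @ np.linalg.inv(G)           # closed-form ridge coefficients
    return W @ S

rng = np.random.default_rng(0)
S = rng.normal(size=(5, 8))                  # 5 support features of dim 8
Q = S[:2] + 0.01 * rng.normal(size=(2, 8))   # queries near the support subspace
err = np.linalg.norm(ridge_reconstruct(S, Q) - Q)
```

Because the queries here lie (almost) in the row space of the support set, the reconstruction error is small; queries from a different class would typically reconstruct poorly, which is what turns reconstruction error into a classification score.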
“…Few-shot image generation. Conventional few-shot learning [24][25][26] aims at learning a discriminative classifier for classification [27][28][29][30], segmentation [31,32], or detection [33][34][35] tasks. In contrast, few-shot image generation (FSIG) [14,18,19] aims at learning a generator that produces new and diverse samples given extremely limited samples (e.g., 10 shots).…”
Section: Related Work
confidence: 99%