2022
DOI: 10.1007/978-3-031-19778-9_3
Adaptive Transformers for Robust Few-shot Cross-domain Face Anti-spoofing

Cited by 26 publications (28 citation statements)
References 56 publications
“…(1) Feature-wise transformation (Chen Q. et al., 2022) is used to convert feature distributions learned on the source domain into that of the target domain. Huang H.-P. et al. (2022) introduce an integrated adapter module and a feature-wise transformation layer in adaptive vision transformers (ViT), which adapt to different domains with a few samples to achieve robust performance. (2) The attention mechanism can learn essential features of the target object and improve the model's ability to learn critical features.…”
Section: Metric-based Few-shot Image Classification
Citation type: mentioning
confidence: 99%
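The adapter and feature-wise transformation components referenced in this statement follow well-known patterns: a bottleneck adapter (down-projection, nonlinearity, up-projection, residual connection) inserted inside transformer blocks, and a feature-wise transformation layer that perturbs features with sampled affine noise during training. A minimal PyTorch sketch of both is below; the class names, hidden sizes, and initialization values are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Adapter(nn.Module):
    """Bottleneck adapter: down-project, nonlinearity, up-project,
    plus a residual connection. Sketch; dimensions are assumptions."""
    def __init__(self, dim: int = 768, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)  # down-projection
        self.act = nn.GELU()
        self.up = nn.Linear(bottleneck, dim)    # up-projection

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # The residual path preserves the backbone's features when the
        # adapter's contribution is small (e.g., near-zero init).
        return x + self.up(self.act(self.down(x)))

class FeatureWiseTransform(nn.Module):
    """Feature-wise transformation: perturbs features with sampled
    affine scale/bias at training time to simulate domain shift.
    Sketch under assumed shapes and initial std-devs."""
    def __init__(self, dim: int = 768):
        super().__init__()
        self.gamma_std = nn.Parameter(torch.full((dim,), 0.3))
        self.beta_std = nn.Parameter(torch.full((dim,), 0.5))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if not self.training:
            return x  # no perturbation at inference
        gamma = 1.0 + torch.randn_like(x) * F.softplus(self.gamma_std)
        beta = torch.randn_like(x) * F.softplus(self.beta_std)
        return gamma * x + beta
```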
“…With their in-built local patchifying and global self-attention mechanisms, ViTs may be better suited to FAS than their CNN counterparts. Most recently, CNNs equipped with attention modules [28] and sophisticatedly designed ViT variants [13,18,20,24] have been introduced into FAS and have obtained promising performance. However, whether a vanilla ViT without extra training samples from upstream tasks can achieve competitive cross-domain generalization has not been explored thus far.…”
Section: Introduction
Citation type: mentioning
confidence: 99%
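The "local patchifying" mentioned here is the standard ViT patch embedding: the image is split into fixed-size patches, each linearly projected into a token, after which self-attention mixes all tokens globally. A minimal sketch assuming standard ViT-Base settings (224x224 input, 16x16 patches, 768-dim tokens), which are common defaults rather than values stated in the snippet:

```python
import torch
import torch.nn as nn

class PatchEmbed(nn.Module):
    """Standard ViT patch embedding: a strided convolution splits the
    image into non-overlapping patches and projects each to a token."""
    def __init__(self, patch: int = 16, in_ch: int = 3, dim: int = 768):
        super().__init__()
        self.proj = nn.Conv2d(in_ch, dim, kernel_size=patch, stride=patch)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.proj(x)                     # (B, dim, H/16, W/16)
        return x.flatten(2).transpose(1, 2)  # (B, num_patches, dim)

tokens = PatchEmbed()(torch.randn(1, 3, 224, 224))
print(tokens.shape)  # torch.Size([1, 196, 768])
# Self-attention then relates all 196 patch tokens to one another,
# giving the global receptive field the quoted statement refers to.
```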
“…To tackle the cross-domain problem, domain generalization [41,13] and adaptation [21,10] techniques for FAS have been extensively studied in recent years. Domain generalization-based methods aim to develop a generalized FAS model with training data from multiple source domains.…”
Section: Introduction
Citation type: mentioning
confidence: 99%
“…During continual sessions, a small amount of data could lead to overfitting, bringing poor generalization performance and catastrophic forgetting. To update models continually and efficiently, we introduce the Efficient Parameter Transfer Learning (EPTL) paradigm for DCL-FAS and utilize Adapters [9,10] for the Vision Transformer (ViT) [6]. By using adapters [9], ViT models can be efficiently adapted even with low-shot training data.…”
Section: Introduction
Citation type: mentioning
confidence: 99%
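The parameter-efficient recipe this snippet describes amounts to freezing the pre-trained ViT weights and updating only small adapter modules in each continual session. A sketch of that training setup; the stand-in backbone, adapter shapes, and learning rate are illustrative assumptions, not the cited paper's configuration.

```python
import torch
import torch.nn as nn

# Stand-in for a pre-trained ViT encoder (12 blocks, 768-dim tokens);
# a real setup would load pre-trained weights instead.
backbone = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=768, nhead=12, batch_first=True),
    num_layers=12,
)
# One bottleneck adapter per block (same pattern as the Adapter sketch above).
adapters = nn.ModuleList(
    nn.Sequential(nn.Linear(768, 64), nn.GELU(), nn.Linear(64, 768))
    for _ in range(12)
)

for p in backbone.parameters():
    p.requires_grad = False  # backbone stays frozen across all sessions

optimizer = torch.optim.AdamW(adapters.parameters(), lr=1e-4)
# Only the adapters (a small fraction of all weights) receive gradients,
# which limits overfitting and forgetting during low-shot continual sessions.
```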