2021
DOI: 10.1109/tpami.2020.3044416

FNA++: Fast Network Adaptation via Parameter Remapping and Architecture Search

Cited by 34 publications (29 citation statements)
References 22 publications
“…Also, input features are downsampled to eliminate spatial redundancy. Two approaches for transferring architectures in NAS are presented in [16], [17]. Last but not least, Yu et al. introduce a NAS methodology tailored to face spoofing detection that considers an ad-hoc search space for the task [18].…”
Section: The AutoML Special Section
Citation type: mentioning (confidence: 99%)
“…Although these lightweight networks have achieved good detection performance, the cost of such manual design is high and grows as networks become more complex. Besides, there are also some recent works [36][37][38] on automatically designing network architectures. The search space of these methods is extremely large, so one needs to train hundreds of models to distinguish good ones from bad.…”
Section: Lightweight Network Design
Citation type: mentioning (confidence: 99%)
“…From the pixel mapping we established, the pixel region covered by the dilated convolution is a subset of that covered by the standard convolution. Inspired by [27], we directly copy the weights of the pre-trained standard convolution to the non-zero element positions of the dilated convolution, which is equivalent to the dilated convolution retaining part of the feature information extracted by the original convolution.…”
Section: Dilated Rate Selection For Multi-scale Extraction
Citation type: mentioning (confidence: 99%)
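
The weight-copying scheme described in that last statement (a kernel-level form of the parameter remapping FNA++ performs) can be sketched in a few lines. The PyTorch snippet below is a minimal illustration, assuming a pre-trained 5×5 standard convolution remapped to a 3×3 convolution with dilation 2; the helper name remap_standard_to_dilated and the concrete shapes are hypothetical, not taken from the cited papers' code.

```python
import torch
import torch.nn as nn

def remap_standard_to_dilated(std_conv: nn.Conv2d, dilation: int = 2) -> nn.Conv2d:
    """Initialize a dilated conv from a pre-trained standard conv.

    A k x k kernel with dilation d covers the same spatial taps as a
    (d*(k-1)+1) x (d*(k-1)+1) standard kernel with zeros in between, so we
    copy the standard kernel's values at stride-d positions. Hypothetical
    helper for illustration; not the cited paper's implementation.
    """
    K = std_conv.kernel_size[0]          # e.g. 5 for a pre-trained 5x5 conv
    k = (K - 1) // dilation + 1          # e.g. 3 when dilation = 2
    assert (k - 1) * dilation + 1 == K, "kernel sizes must align"
    new_conv = nn.Conv2d(
        std_conv.in_channels, std_conv.out_channels, kernel_size=k,
        stride=std_conv.stride, padding=(k - 1) * dilation // 2,
        dilation=dilation, bias=std_conv.bias is not None,
    )
    with torch.no_grad():
        # Keep every `dilation`-th tap of the large kernel: these are exactly
        # the positions the dilated convolution still samples.
        new_conv.weight.copy_(std_conv.weight[:, :, ::dilation, ::dilation])
        if std_conv.bias is not None:
            new_conv.bias.copy_(std_conv.bias)
    return new_conv

# Example: remap a pre-trained 5x5 conv into a 3x3 conv with dilation 2.
pretrained = nn.Conv2d(64, 64, kernel_size=5, padding=2)
dilated = remap_standard_to_dilated(pretrained, dilation=2)
```

The key observation is that a k×k kernel with dilation d samples exactly the stride-d taps of a (d(k−1)+1)×(d(k−1)+1) standard kernel, so slicing with step d preserves the features learned at those positions instead of training the dilated convolution from scratch.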