2020
DOI: 10.1007/978-3-030-58571-6_41
BigNAS: Scaling up Neural Architecture Search with Big Single-Stage Models

Cited by 170 publications (149 citation statements)
References 20 publications
“…GFNet [35] processes a sequence of small patches from the original image and terminates inference once the model is sufficiently confident in its prediction. Our method is closer to another category of methods [10], [11], [37], [38], [39], where the dynamic routes are determined by the resource budgets. NestedNet [37] uses a nested sparse network consisting of multiple levels to meet different resource constraints.…”
Section: Related Work (mentioning)
confidence: 98%
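The budget-conditioned routing described in this statement can be illustrated with a small sketch: given a resource budget, pick the most expensive candidate route that still fits it. This is only a toy illustration of the idea, not code from any of the cited papers; the route names, cost estimates, and budget values are hypothetical.

```python
# Toy sketch of budget-driven route selection (hypothetical values, not from
# the cited papers): choose the largest route whose estimated cost fits the budget.

ROUTES = {
    # route name -> estimated cost in MFLOPs (hypothetical)
    "small": 60,
    "medium": 220,
    "large": 580,
}

def select_route(budget_mflops: float) -> str:
    """Return the most expensive route whose cost is within the budget."""
    feasible = [(cost, name) for name, cost in ROUTES.items() if cost <= budget_mflops]
    if not feasible:
        # Nothing fits: fall back to the cheapest route.
        return min(ROUTES, key=ROUTES.get)
    return max(feasible)[1]

if __name__ == "__main__":
    for budget in (50, 100, 300, 1000):
        print(f"budget={budget} MFLOPs -> route={select_route(budget)}")
```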
“…Slimmable neural networks [7,8] showed that for ImageNet [14] classification, a trained convolutional super-network can contain structured sub-networks with performance similar to individually trained networks. BigNAS [15] trained a single set of shared weights on ImageNet, from which child models are obtained via a simple coarse-to-fine architecture selection heuristic. All these works suggest a possible super-network that encompasses multiple high-quality sub-networks and encourage us to explore DSNN, a general sparsity-based super-network.…”
Section: Model Formulation (mentioning)
confidence: 99%
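The weight sharing mentioned here can be sketched in a few lines: one over-sized convolution holds a single set of shared weights, and a narrower child model reuses only a leading slice of its filters, in the spirit of slimmable networks and single-stage supernets. The following PyTorch snippet is a minimal sketch under that assumption; the class name, channel counts, and slicing rule are illustrative and do not reproduce the BigNAS implementation or its coarse-to-fine selection heuristic.

```python
# Minimal weight-sharing sketch (assumed, not the BigNAS code): child models of
# different widths reuse slices of one shared convolution weight tensor.
import torch
import torch.nn.functional as F

class SharedConv(torch.nn.Module):
    def __init__(self, in_ch: int, max_out_ch: int):
        super().__init__()
        # One set of shared weights, sized for the widest child.
        self.weight = torch.nn.Parameter(torch.randn(max_out_ch, in_ch, 3, 3) * 0.01)

    def forward(self, x: torch.Tensor, out_ch: int) -> torch.Tensor:
        # A narrower child uses only the first `out_ch` filters of the shared weight.
        return F.conv2d(x, self.weight[:out_ch], padding=1)

layer = SharedConv(in_ch=3, max_out_ch=64)
x = torch.randn(1, 3, 32, 32)
for width in (16, 32, 64):  # three child widths sharing the same weight tensor
    print(width, tuple(layer(x, width).shape))
```

In methods of this family, many child configurations (widths, depths, kernel sizes, input resolutions) are sampled within a single training run so that every sliced child remains usable without retraining; the sketch above only shows the slicing mechanics.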
“…
Authors | Title | Abbreviation | Year | Venue | Search strategy | Task
Dai et al [99] | Data adapted pruning for efficient neural architecture search | DA-NAS | 2020 | ECCV | Gradient based | Classification
Tian et al [100] | Efficient and effective GAN architecture search | E2GAN | 2020 | ECCV | Reinforcement learning | GAN
Chu et al [101] | Fair differentiable architecture search | FairDARTS | 2020 | ECCV | Gradient based | Classification
Hu et al [102] | Three-freedom neural architecture search | TF-NAS | 2020 | ECCV | Gradient based | Classification
Hu et al [103] | Angle-based search space shrinking | ABS | 2020 | ECCV | Other | Classification
Yu et al [104] | Barrier penalty neural architecture search | BP-NAS | 2020 | ECCV | Other | Classification
Wang et al [105] | Attention cell search for video classification | AttentionNAS | 2020 | ECCV | Other | Video classification
Bulat et al [106] | Binary ArchiTecture Search | BATS | 2020 | ECCV | Other | Classification
Yu et al [107] | Neural architecture search with big single-stage models | BigNAS | 2020 | ECCV | Gradient based | Classification
Guo et al [108] | Single path one-shot neural architecture search with uniform sampling | Single-Path-SuperNet | 2020 | ECCV | Evolutionary algorithm | Classification
Liu et al [109] | Unsupervised neural architecture search | UnNAS | 2020 | ECCV | Gradient based | Classification

…get tasks, which can alleviate the large GPU memory consumption and long computation time of NAS methods. Liu et al [67] proposed the DARTS method for effective architecture search.…”
Section: Gradient Based Classification (mentioning)
confidence: 99%