2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2019.00936
Partial Order Pruning: For Best Speed/Accuracy Trade-Off in Neural Architecture Search

Abstract: Achieving a good speed/accuracy trade-off on a target platform is very important when deploying deep neural networks in real-world scenarios. However, most existing automatic architecture search approaches concentrate only on high performance. In this work, we propose an algorithm, termed "Partial Order Pruning", that offers a better speed/accuracy trade-off for the searched networks. It prunes the architecture search space with a partial order assumption to automatically search for the architectures with …

Cited by 128 publications (73 citation statements)
References 26 publications (83 reference statements)
“…architecture depends on the difficulty and size of the dataset at hand. While these findings may encourage an automated neural architecture search, such an approach is hindered by the limited computational resources [19], [20], [21], [22], [23]. Alternatively, we propose an ensemble architecture, which combines U-Nets of varying depths into one unified structure.…”
Section: Table Imentioning
confidence: 99%
“…The architecture search step is based on: 1) an evolutionary algorithm that mutates the best architectures on the Pareto front; 2) the Partial Order Pruning method (Li et al. 2019a), which prunes the architecture search space using the prior knowledge that deeper and wider models are better. Our algorithm can be parallelized across multiple computation nodes (each with 8 V100 GPUs) to lift the Pareto front simultaneously.…”
Section: Multi-objective Search Algorithmmentioning
confidence: 99%
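The pruning idea quoted above can be illustrated with a small sketch. This is not the authors' implementation: the names (`Arch`, `dominates`, `pareto_front`) and the example numbers are hypothetical, but the core rule is the one the citation describes: an architecture is discarded when another is at least as fast and at least as accurate, and strictly better on one objective.

```python
# Hedged sketch of speed/accuracy dominance pruning, in the spirit of
# Partial Order Pruning. All names and numbers here are illustrative.
from dataclasses import dataclass


@dataclass(frozen=True)
class Arch:
    name: str
    latency_ms: float  # lower is better (measured on the target platform)
    accuracy: float    # higher is better (e.g. validation mIoU)


def dominates(a: Arch, b: Arch) -> bool:
    """a dominates b: at least as fast AND at least as accurate,
    strictly better on at least one objective."""
    return (a.latency_ms <= b.latency_ms and a.accuracy >= b.accuracy
            and (a.latency_ms < b.latency_ms or a.accuracy > b.accuracy))


def pareto_front(archs):
    """Keep only architectures not dominated by any other candidate."""
    return [a for a in archs
            if not any(dominates(b, a) for b in archs if b is not a)]


evaluated = [
    Arch("shallow-narrow", 5.0, 0.70),
    Arch("deep-narrow", 9.0, 0.74),
    Arch("shallow-wide", 8.0, 0.72),
    Arch("slow-weak", 10.0, 0.71),  # dominated by deep-narrow: slower AND less accurate
]
front = pareto_front(evaluated)
print(sorted(a.name for a in front))
```

In the full method, the partial-order assumption (deeper/wider models are no less accurate but no faster) lets the search also rule out architectures *before* training them, because their best-case position can already be bounded by the current front.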
“…We also found that MobileNet V2 is dominated by other models in Figure 3-2, even though it has much fewer FLOPs. This is because it has a higher memory access cost and is thus slower in practice (Li et al. 2019a). Therefore, using the direct metric, i.e.…”
Section: Experiments Architecture Search Detailsmentioning
confidence: 99%
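The point in this excerpt, that FLOPs is a proxy while wall-clock latency is the direct metric, suggests measuring the forward pass on the target device rather than counting operations. A minimal timing helper (a generic sketch, not code from any cited paper; the warm-up and iteration counts are arbitrary assumptions) might look like:

```python
# Hedged sketch: measure direct latency instead of relying on FLOPs.
# Two models with equal FLOPs can differ in wall-clock time due to
# memory access cost, so timing the actual call is the direct metric.
import time


def measure_latency_ms(fn, warmup: int = 10, iters: int = 100) -> float:
    """Average wall-clock latency of fn() in milliseconds.

    Warm-up runs amortize one-time costs (caching, lazy allocation)
    before the timed loop starts.
    """
    for _ in range(warmup):
        fn()
    t0 = time.perf_counter()
    for _ in range(iters):
        fn()
    return (time.perf_counter() - t0) * 1000.0 / iters


# Illustrative use with a stand-in workload:
latency = measure_latency_ms(lambda: sum(i * i for i in range(1000)))
print(f"{latency:.3f} ms per call")
```

On a real deployment target one would time the model's forward pass with representative input sizes, since latency rankings can change across hardware.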
“…Moreover, the input resolution of our model is the original high resolution of 1024 × 2048, but our FSFNet is the fastest model, which has a small number of parameters and high accuracy. Comparing our FSFNet to the DF1-Seg-d8 [50] and Fasterseg [35] models, mIoU is 2.2% and 2.3% lower, but the inference speed is faster by 48% and 23%, respectively. Thus, we can confirm that our FSFNet has a good trade-off between semantic segmentation accuracy and computational resources in existing state-of-the-art semantic segmentation.…”
Section: 72mentioning
confidence: 99%