2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
DOI: 10.1109/cvprw53098.2021.00337
Discovering Multi-Hardware Mobile Models via Architecture Search

Cited by 11 publications (7 citation statements)
References 5 publications
“…NAS and accelerator design. Hardware-aware NAS has been actively studied to incorporate characteristics of the target device and to automate the design of optimal architectures subject to latency and/or energy constraints [4,8,15,19,23,25,32,35]. These studies do not explore the hardware design space.…”
Section: Related Work
confidence: 99%
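The latency-constrained search described in this excerpt is often realized with a soft penalty on measured latency rather than a hard cutoff. The sketch below is a minimal illustration, assuming a MnasNet-style reward (accuracy scaled by latency relative to a target, raised to a small negative exponent) and hypothetical lookup tables standing in for an accuracy evaluator and an on-device latency profiler; the names `ACC`, `LAT`, and `random_search` are illustrative, not from the paper.

```python
import random

def mnas_reward(accuracy, latency_ms, target_ms=80.0, w=-0.07):
    """Soft-constrained reward: accuracy * (latency / target)^w.

    With w < 0, candidates slower than the target are penalized and
    faster ones get a mild bonus, trading off accuracy against latency.
    """
    return accuracy * (latency_ms / target_ms) ** w

# Hypothetical per-candidate measurements standing in for a real
# evaluator and profiler (not values from the cited works).
ACC = {"small": 0.70, "medium": 0.75, "large": 0.78}
LAT = {"small": 40.0, "medium": 80.0, "large": 160.0}

def random_search(candidates, trials=100, seed=0):
    """Toy search loop: sample candidates and keep the best reward."""
    rng = random.Random(seed)
    sampled = [rng.choice(candidates) for _ in range(trials)]
    return max(sampled, key=lambda c: mnas_reward(ACC[c], LAT[c]))

best = random_search(["small", "medium", "large"])
```

Here the "medium" candidate wins: the slow "large" model loses more reward to the latency penalty than it gains in accuracy, which is exactly the trade-off hardware-aware NAS automates.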
“…(DNN) models [36]. As DNNs are being deployed on increasingly diverse devices, such as tiny Internet-of-Things devices, state-of-the-art (SOTA) NAS is turning hardware-aware by further taking into consideration the target hardware as a crucial factor that affects the resulting performance (e.g., inference latency) of NAS-designed models [4,8,15,25,26,30,32]. Likewise, optimizing hardware accelerators built on Field Programmable Gate Array (FPGA) or Application-Specific Integrated Circuit (ASIC), as well as the corresponding dataflows (e.g., scheduling DNN computations and mapping them on hardware), is also critical for speeding up DNN execution [1,11,33].…”
confidence: 99%
“…The pretraining dataset for CompressionNet is still Kinetics-600 [12]. We then reduced the model complexity by leveraging MoViNet [29] for 3D (CompressionNet) and MobileNet [30]. To distinguish different versions of UVQ models, we refer to the initial model proposed in [25] as UVQ (ImageNet); the version using EfficientNet-b0 pretrained on the JFT dataset, but otherwise the same as in [25], as UVQ (JFT); and the version using MoViNet and MobileNet as UVQ-lite (JFT). UVQ (ImageNet) and UVQ (JFT) have the same model complexity.…”
Section: Reducing UVQ Model Complexity
confidence: 99%
“…In particular, we made modifications on top of the most recent version of MobileNet, MobileNet Multi-Hardware (MNMH) in [5], which was searched for mobile classification with AutoML while considering deployment across various chips. The specifications are shown in Table 1.…”
Section: Tailored MobileNet Backbone
confidence: 99%
“…One common practice is to adapt a strong backbone network built for the image classification task to the segmentation task, maintaining the high-resolution feature representation for this dense prediction task by limiting excessive down-sampling or striding operations. (Figure caption: Latency is normalized using real on-device latency on various platforms, as in [5]. Multiple MOSAIC models with different filter sizes are shown on the Cityscapes plot.)…”
Section: Introduction
confidence: 99%
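The normalization mentioned in the figure caption above can be sketched simply: divide a candidate model's measured latency on each device by a reference model's latency on that same device, so that numbers from a fast DSP and a slow CPU become comparable, then average across devices. This is a minimal illustration with hypothetical measurements; the device names and values are placeholders, not figures from [5].

```python
def normalized_latency(model_lat, ref_lat):
    """Per-device latency divided by a reference model's latency
    on that same device, making heterogeneous devices comparable."""
    return {dev: model_lat[dev] / ref_lat[dev] for dev in model_lat}

def avg_normalized_latency(model_lat, ref_lat):
    """Single cross-platform latency score: the mean of the
    per-device normalized latencies."""
    norm = normalized_latency(model_lat, ref_lat)
    return sum(norm.values()) / len(norm)

# Hypothetical on-device measurements in milliseconds.
ref = {"cpu": 100.0, "gpu": 20.0, "dsp": 10.0}
cand = {"cpu": 80.0, "gpu": 22.0, "dsp": 9.0}

score = avg_normalized_latency(cand, ref)
# Per-device ratios are 0.8, 1.1, 0.9; a score below 1.0 means the
# candidate is faster than the reference on average across devices.
```

Averaging the normalized values (rather than the raw milliseconds) keeps any single slow platform from dominating the multi-hardware objective.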