Fine-tuning deep neural networks (DNNs) is pivotal for creating inference modules that can be readily deployed to edge or field-programmable gate array (FPGA) platforms. Traditionally, exploration of different parameters throughout the layers of DNNs has been done with grid search and other brute-force techniques. Although these methods lead to the optimal choice of network parameters, the search process can be very time-consuming and may not account for deployment constraints across different target platforms. This work addresses the problem by proposing Reg-Tune, a regression-based profiling approach that quickly determines how different metrics trend with hardware deployment of neural networks on tinyML platforms such as FPGAs and edge devices. We start by training a handful of configurations drawn from different combinations of \(\mathcal{NN}\langle q\ (\text{quantization}),\, s\ (\text{scaling})\rangle\) or \(\mathcal{NN}\langle r\ (\text{resolution}),\, s\rangle\) workloads to obtain accuracy values for the corresponding application. Next, we deploy these configurations on the target device to measure energy and latency. Our hypothesis is that the most energy-efficient configuration suitable for deployment on the target device is a function of the variables \(q\), \(r\), and \(s\). Finally, these trained and deployed configurations and their measured results serve as data points for polynomial regression in \(q\), \(r\), and \(s\), yielding the trends for accuracy, energy, and latency on the target device. Our setup allows us to choose a near-optimal energy- or latency-driven configuration for the desired accuracy from the contour profiles of energy/latency across different tinyML device platforms. To this end, we demonstrate the profiling process for three case studies across two platforms for energy and latency fine-tuning.
Our approach yields at least 5.7× better energy efficiency than recent implementations of human activity recognition on FPGA, and a 74.6% reduction in latency over baseline deployments for semantic segmentation of aerial imagery on edge devices.
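The core of the approach described above is fitting a low-order polynomial to a handful of measured (configuration, metric) data points and then sweeping the fitted surface to find a near-optimal configuration. The sketch below illustrates the idea for a two-variable \(\langle q, s\rangle\) workload; the data values, variable ranges, and function names are hypothetical stand-ins, not the paper's actual measurements or code.

```python
import numpy as np

def fit_quadratic_surface(points, values):
    """Least-squares fit of a two-variable quadratic polynomial
    f(q, s) = c0 + c1*q + c2*s + c3*q*s + c4*q^2 + c5*s^2
    to measured metric values at the given (q, s) configurations."""
    q, s = points[:, 0], points[:, 1]
    A = np.column_stack([np.ones_like(q), q, s, q * s, q**2, s**2])
    coef, *_ = np.linalg.lstsq(A, values, rcond=None)
    return coef

def predict(coef, q, s):
    """Evaluate the fitted polynomial surface at (q, s)."""
    return (coef[0] + coef[1] * q + coef[2] * s
            + coef[3] * q * s + coef[4] * q**2 + coef[5] * s**2)

# Hypothetical measurements: (quantization bits, scaling factor) -> energy (mJ)
configs = np.array([[4, 0.25], [4, 1.0], [8, 0.25],
                    [8, 1.0], [16, 0.5], [16, 1.0]])
energy = np.array([1.1, 2.0, 1.8, 3.4, 3.0, 5.1])

coef = fit_quadratic_surface(configs, energy)

# Sweep a dense grid of candidate configurations over the fitted
# surface and pick the one with the lowest predicted energy.
qs = np.linspace(4, 16, 25)
ss = np.linspace(0.25, 1.0, 25)
grid = np.array([(q, s) for q in qs for s in ss])
pred = predict(coef, grid[:, 0], grid[:, 1])
best_q, best_s = grid[np.argmin(pred)]
print(f"lowest predicted energy at q={best_q:.1f} bits, s={best_s:.2f}")
```

In practice an accuracy surface would be fitted from the training runs in the same way, and the sweep would minimize predicted energy or latency subject to the desired accuracy, which corresponds to reading the contour profiles the abstract describes.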