2022
DOI: 10.1109/access.2022.3190538
Edge Deployment Framework of GuardBot for Optimized Face Mask Recognition With Real-Time Inference Using Deep Learning

Abstract: Deep learning-based models on edge devices have received considerable attention as a promising means to handle a variety of AI applications. However, deploying deep learning models to production with efficient inference on edge devices remains challenging due to computation and memory constraints. This paper proposes a framework for the service robot GuardBot, powered by a Jetson Xavier NX, and presents a real-world case study of deploying the optimized face mask recogniti…

Cited by 14 publications (12 citation statements) · References 87 publications
“…TensorRT contains a deep learning inference optimizer for trained deep learning models and a runtime for execution. It speeds up inference through quantization and other techniques for deployment in the production environment [85]. Therefore, we optimized the proposed custom Keras-TensorFlow-trained DCNNs with Nvidia TensorRT at FP32 (TF-TRT FP32), FP16 (TF-TRT FP16), and INT8 (TF-TRT INT8) precision modes to improve the inference speed.…”
Section: Optimization and Inference Results (mentioning)
Confidence: 99%
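As a rough illustration of the optimization step described in this statement, the sketch below converts a TensorFlow SavedModel with the TF-TRT API at FP16 precision. The directory names are assumptions, and an INT8 build would additionally require a calibration input function passed to convert(); this is a minimal sketch, not the authors' exact pipeline.

    # Minimal TF-TRT conversion sketch (TensorFlow 2.x); paths are assumptions.
    from tensorflow.python.compiler.tensorrt import trt_convert as trt

    params = trt.TrtConversionParams(precision_mode=trt.TrtPrecisionMode.FP16)
    converter = trt.TrtGraphConverterV2(
        input_saved_model_dir="saved_model",   # hypothetical export directory
        conversion_params=params,
    )
    converter.convert()                         # rewrites supported subgraphs as TensorRT ops
    converter.save("saved_model_trt_fp16")      # TF-TRT-optimized SavedModel

Repeating the conversion with TrtPrecisionMode.FP32 or TrtPrecisionMode.INT8 yields the other two precision variants compared in the statement.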
“…For the optimization steps of the proposed Keras-TensorFlow model using the Nvidia TensorRT SDK, we refer the readers to recent works [14,85]. At first, we loaded the Keras-TensorFlow DCNN models and saved them in native TensorFlow format.…”
Section: Optimization and Inference Results (mentioning)
Confidence: 99%
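A minimal sketch of that loading-and-exporting step, assuming a trained Keras model stored in a hypothetical HDF5 file:

    import tensorflow as tf

    # Load the trained Keras DCNN (file name is an assumption) and re-export
    # it as a native TensorFlow SavedModel, the input format the TF-TRT
    # converter expects. save_format="tf" applies to TF 2.x Keras.
    model = tf.keras.models.load_model("mask_dcnn.h5")
    model.save("saved_model", save_format="tf")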
“…Hardware: The experiments for all the trackers were conducted on the Nvidia Jetson Xavier NX board. However, the limited memory of the Xavier NX posed a challenge, as it does not provide enough space to load and run heavy deep learning models efficiently [92]. To overcome this problem, we created a swap file that allows us to use more memory than the physically installed memory.…”
Section: Results (mentioning)
Confidence: 99%
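For context, enlarging swap on a Jetson board is typically done with the standard Linux fallocate/mkswap/swapon sequence. The sketch below drives those tools from Python; the swap size and path are assumptions, not values from the cited work, and root privileges are required.

    import subprocess

    # Create and enable an 8 GB swap file (size and path are assumptions; run
    # as root). Mirrors the usual fallocate/mkswap/swapon sequence on Linux.
    swapfile = "/var/swapfile"
    for cmd in (
        ["fallocate", "-l", "8G", swapfile],  # reserve space on disk
        ["chmod", "600", swapfile],           # restrict permissions
        ["mkswap", swapfile],                 # format the file as swap
        ["swapon", swapfile],                 # enable it immediately
    ):
        subprocess.run(cmd, check=True)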
“…Our work diverges from previous studies that assess the optimization of transfer-learning-trained models, including [16], which explores multiple models, and [17], whose optimization focuses exclusively on reducing computational load. We examine both energy efficiency and performance across two devices, in contrast to prior research that concentrates only on inference tasks.…”
Section: Related Work (mentioning)
Confidence: 90%