2020 IEEE 32nd International Conference on Tools with Artificial Intelligence (ICTAI)
DOI: 10.1109/ictai50040.2020.00106

OpenVINO Deep Learning Workbench: A Platform for Model Optimization, Analysis and Deployment

Cited by 12 publications (3 citation statements) · References 12 publications
“…In this regard, it is necessary to convert the weights of the DNN, since they are trained in FP32. The conversion starts from the TensorFlow trained files on a desktop PC, performing post-training quantization with the OpenVINO toolkit libraries [36], which are specific to Intel devices. These libraries convert the DNN data to the FP16 data type and adapt the DNN operations to the NCS architecture.…”
Section: E. Deep Learning Neural Accelerators for the Edge
Mentioning confidence: 99%
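
As context for the FP16 conversion described above, a minimal sketch using the current OpenVINO Python conversion API (openvino.convert_model / openvino.save_model); the input file name is hypothetical, and the cited work may have used the older Model Optimizer CLI instead:

# Hypothetical sketch (not the cited paper's exact pipeline): convert a
# TensorFlow frozen graph to OpenVINO IR with FP16 weights for an Intel device.
import openvino as ov

# Read the TensorFlow trained model (hypothetical file name).
model = ov.convert_model("frozen_inference_graph.pb")

# Serialize to IR (.xml/.bin); compress_to_fp16=True stores the weights
# in FP16, as required before deployment to the Neural Compute Stick.
ov.save_model(model, "model_fp16.xml", compress_to_fp16=True)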
“…How to enable fast inference on low-powered embedded platforms remains an open research question. The Intel OpenVINO toolkit emerges as an extremely useful tool of choice, since it optimizes DL models across Intel hardware while minimizing inference time [11]. A large portion of the studies discussed above quite commonly neglects this design aspect and demonstrates DL solutions on expensive GPU resources.…”
Section: Inference Optimization
Mentioning confidence: 99%
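
A minimal sketch of the inference path this statement refers to, using the OpenVINO runtime Python API (openvino.Core); the model path, device choice, and input shape are assumptions for illustration:

import numpy as np
import openvino as ov

core = ov.Core()
model = core.read_model("model_fp16.xml")    # hypothetical IR from the previous sketch
compiled = core.compile_model(model, "CPU")  # device name; "CPU" chosen as an example

# Dummy input with an assumed 1x3x224x224 shape.
dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)
outputs = compiled([dummy])                  # returns a dict-like of output tensors
print(next(iter(outputs.values())).shape)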
“…An allied question is: how much accuracy and AP do we need to sacrifice while pursuing faster inference? In contrast, our measurement outputs are based on the OpenVINO DL Workbench [11], which is an open-source, production-ready framework that ensures reusability, interoperability, and scalability.…”
Section: Inference Optimization
Mentioning confidence: 99%
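
The DL Workbench itself is a web application and is not scripted here; as a rough analogue of the latency and throughput figures it reports, a minimal timing sketch over the compiled model from the previous example (iteration count and model path are assumptions):

import time
import numpy as np
import openvino as ov

core = ov.Core()
compiled = core.compile_model(core.read_model("model_fp16.xml"), "CPU")
dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)

# Time repeated synchronous inferences and report mean latency and throughput.
n_iters = 100
start = time.perf_counter()
for _ in range(n_iters):
    compiled([dummy])
elapsed = time.perf_counter() - start
print(f"mean latency: {1000 * elapsed / n_iters:.2f} ms; "
      f"throughput: {n_iters / elapsed:.1f} FPS")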