2018 IEEE International Conference on Smart Cloud (SmartCloud)
DOI: 10.1109/smartcloud.2018.00026

A Thread-Saving Schedule with Graph Analysis for Parallel Deep Learning Applications on Embedded Systems

Cited by 3 publications (2 citation statements)
References 12 publications
“…In recent years, deep learning models have proved efficient in many applications [1][2][3][4][5][6][7][8]. Generally, the performance of a deep learning-based classification model depends on the captured features [9][10][11]. When a deep learning model is used for classification, the probability of each object class is output.…”
Section: Introduction
confidence: 99%
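The excerpt notes that a deep learning classifier outputs a probability for each object class. As an illustrative sketch only (the class names and logit values below are hypothetical and do not come from the cited paper), such probabilities are typically obtained by applying a softmax to the model's final-layer scores:

    import numpy as np

    # Hypothetical final-layer scores (logits) for one input; class names are illustrative.
    class_names = ["cat", "dog", "bird"]
    logits = np.array([2.1, 0.3, -1.2])

    # Softmax turns the raw scores into a probability for each class.
    shifted = np.exp(logits - logits.max())   # subtract the max for numerical stability
    probabilities = shifted / shifted.sum()

    for name, p in zip(class_names, probabilities):
        print(f"{name}: {p:.3f}")             # the class with the highest probability is the prediction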
“…When utilizing deep learning models for classification, the performance of the models is related to the captured features. To achieve higher performance on classification tasks, the structure should be well designed and the hyperparameters should be well tuned [6][7][8]. When the training set contains samples that belong to different domains, the diverse features may lower the performance of the trained models [9,10].…”
Section: Introduction
confidence: 99%