2020 IEEE International Conference on Communications Workshops (ICC Workshops)
DOI: 10.1109/iccworkshops49005.2020.9145449

Modeling of Deep Neural Network (DNN) Placement and Inference in Edge Computing

Cited by 11 publications (4 citation statements). References 8 publications.

“…Bensalem et al. [63] studied the placement of deep neural networks on edge servers and proposed an optimal placement model. Different kinds of deep neural networks can extract different features from data because of their distinct structures, which can help with model optimization [64].…”
Section: AI for Edge Server Placement
confidence: 99%
“…Different kinds of deep neural networks can extract different features from data because of their distinct structures, which can help with model optimization [64]. In designing the optimal placement model, the authors [63] formulated the selection of DNN parameters by jointly considering the communication delay between nodes and the cost of edge computing (EC) nodes, and then used this formulation to determine the most suitable parameters of the DNN model.…”
Section: AI for Edge Server Placement
confidence: 99%
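The cited formulation is not reproduced in these excerpts. Purely as an illustrative sketch of the kind of objective the statement describes, a joint delay/cost placement problem could be written as below; the symbols L, N, x_ij, d_ij, c_j and the weights α, β are assumptions made for this sketch, not notation taken from [63]:

$$\min_{x}\;\sum_{i\in L}\sum_{j\in N} x_{ij}\left(\alpha\, d_{ij} + \beta\, c_{j}\right)\quad\text{s.t.}\quad \sum_{j\in N} x_{ij}=1\;\;\forall i\in L,\qquad x_{ij}\in\{0,1\},$$

where $L$ is the set of DNN components to be placed, $N$ the set of EC nodes, $x_{ij}$ indicates placing component $i$ on node $j$, $d_{ij}$ the communication delay that assignment incurs, $c_{j}$ the cost of node $j$, and $\alpha,\beta$ weights trading delay against node cost.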
“…This dichotomy creates the need for multiple design re-spins before a successful integration, potentially leading to long tuning phases, overloading the designers, and producing results that depend heavily on their skills. Despite the variety of resources available, optimizing these heterogeneous computing architectures to perform low-latency, energy-efficient DL inference without compromising performance remains a challenge [5].…”
Section: AI System Engineering Challenges
confidence: 99%