2022
DOI: 10.1109/tbdata.2018.2880978
Towards Ubiquitous Intelligent Computing: Heterogeneous Distributed Deep Neural Networks

Abstract: For the pursuit of ubiquitous computing, distributed computing systems containing the cloud, edge devices, and Internet-of-Things devices are in high demand. However, existing distributed frameworks are not tailored to the fast development of Deep Neural Networks (DNNs), which are the key technique behind many of today's intelligent applications. Building on prior work on distributed deep neural networks (DDNN), we propose the Heterogeneous Distributed Deep Neural Network (HDDNN) over the distributed hierarchy, targ…

Cited by 16 publications (10 citation statements)
References 42 publications
“…Hierarchical distribution can also be combined with compression strategies to reduce the size of the data to be transmitted, and accordingly minimize the communication delay and the time of the entire inference, for example by using encoding techniques as done in [221]. The authors of [222], [223] also opted for hierarchical offloading, while focusing primarily on fault tolerance of the shared data. In particular, the authors of [222] considered two fault-tolerance methods, namely reassigning and monitoring, where the first consists of assigning all layer tasks at least once, after which the unfinished tasks are reassigned to all participants regardless of their current state.…”
Section: Remote Collaboration
Confidence: 99%
“…The authors of [222], [223] also opted for hierarchical offloading, while focusing primarily on fault tolerance of the shared data. In particular, the authors of [222] considered two fault-tolerance methods, namely reassigning and monitoring, where the first consists of assigning all layer tasks at least once, after which the unfinished tasks are reassigned to all participants regardless of their current state. This method generates considerable communication and latency overhead due to allocating redundant tasks, particularly to devices with limited capacities.…”
Section: Remote Collaboration
Confidence: 99%
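The "reassigning" method quoted above can be made concrete with a minimal sketch. This is a hypothetical illustration, not the code from [222]: it assumes each DNN layer is one task, assigns every task once round-robin, and then redundantly reassigns any unfinished task to every worker, which is exactly the source of the overhead the citing authors point out.

```python
# Hypothetical sketch (not the authors' implementation) of the
# "reassigning" fault-tolerance method: every layer task is assigned
# at least once, then unfinished tasks are reassigned to ALL workers
# regardless of their current state.

def reassign_schedule(layer_tasks, workers, completed):
    """Return a mapping from each layer task to the workers it is sent to.

    Pass 1: round-robin so every task is assigned at least once.
    Pass 2: any task not in `completed` is redundantly reassigned to
    every worker -- the redundancy behind the communication overhead
    noted in the citation statement above.
    """
    plan = {t: [workers[i % len(workers)]] for i, t in enumerate(layer_tasks)}
    for t in layer_tasks:
        if t not in completed:
            plan[t] = list(workers)  # redundant reassignment to all
    return plan

# Usage: three layer tasks, two edge workers, "conv2" still unfinished.
plan = reassign_schedule(["conv1", "conv2", "fc"], ["edge0", "edge1"], {"conv1", "fc"})
```

Here only `conv2` is broadcast to both workers; the finished tasks keep their single original assignment.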
“…The second partition strategy is hierarchical splitting, where the offloading is carried out across the cloud, edge servers, and mobile devices. HDDNN [22] is one such hierarchical partitioning approach, in which the authors evaluated the performance of the distributed system under heterogeneous node capacities, heterogeneous DNN networks, and heterogeneous tasks.…”
Section: Related Work
Confidence: 99%
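The hierarchical splitting described in this statement amounts to cutting a DNN's layer sequence into device, edge, and cloud segments. The sketch below is an illustrative assumption, not HDDNN's actual partitioning logic; the cut points are arbitrary example values.

```python
# Hypothetical sketch of hierarchical DNN splitting across the
# device/edge/cloud tiers; cut points are illustrative, not taken
# from HDDNN.

def hierarchical_split(layers, device_cut, edge_cut):
    """Place layers[:device_cut] on the mobile device,
    layers[device_cut:edge_cut] on the edge server, and the
    remainder in the cloud."""
    assert 0 <= device_cut <= edge_cut <= len(layers)
    return {
        "device": layers[:device_cut],
        "edge": layers[device_cut:edge_cut],
        "cloud": layers[edge_cut:],
    }

# Usage: a five-layer network split 2 / 2 / 1 across the tiers.
plan = hierarchical_split(["conv1", "conv2", "conv3", "fc1", "fc2"], 2, 4)
```

In a real system the cut points would be chosen from profiled node capacities and link bandwidths, which is the heterogeneity HDDNN's evaluation targets.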
“…Hierarchical distribution can also be combined with compression strategies to reduce the size of the data to be transmitted, and accordingly minimize the communication delay and the time of the entire inference, for example by using the encoding technique as done in [166]. The authors of [167], [168] also opted for hierarchical offloading, while focusing primarily on fault tolerance of the shared data. In particular, the authors of [167] considered two fault-tolerance methods, namely reassigning and monitoring, where the first consists of assigning all layer tasks at least once, after which the unfinished tasks are reassigned to all participants regardless of their current state.…”
Section: IV.B.1 Remote Collaboration
Confidence: 99%
“…The authors of [167], [168] also opted for hierarchical offloading, while focusing primarily on fault tolerance of the shared data. In particular, the authors of [167] considered two fault-tolerance methods, namely reassigning and monitoring, where the first consists of assigning all layer tasks at least once, after which the unfinished tasks are reassigned to all participants regardless of their current state. This method generates considerable communication and latency overhead due to allocating redundant tasks, particularly to devices with limited capacities.…”
Section: IV.B.1 Remote Collaboration
Confidence: 99%