2021
DOI: 10.1109/jiot.2021.3063497

Compacting Deep Neural Networks for Internet of Things: Methods and Applications

Abstract: Deep Neural Networks (DNNs) have shown great success in completing complex tasks. However, DNNs inevitably bring high computational cost and storage consumption due to the complexity of hierarchical structures, thereby hindering their wide deployment in Internet-of-Things (IoT) devices, which have limited computational capability and storage capacity. Therefore, it is a necessity to investigate the technologies to compact DNNs. Despite tremendous advances in compacting DNNs, few surveys summarize compacting-DN…

Cited by 34 publications (26 citation statements)
References 208 publications
“…Some of these techniques can also be utilized for modeling opponent agents based on the local information of the main agents [123]. Compression techniques such as pruning and low-rank decomposition can also be applied to the original multi-agent systems to remove redundant information collected from distributed agents [124]. There are other abstraction techniques in the literature that reduce the number of local states by collapsing data values.…”
Section: Dimension Reduction and Filtering
Mentioning (confidence: 99%)
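To make the pruning and low-rank decomposition mentioned in this statement concrete, here is a minimal NumPy sketch of both primitives applied to a single weight matrix; the matrix size, rank, and sparsity level are illustrative assumptions, not values from the cited works.

```python
import numpy as np

def low_rank_compress(W, rank):
    """Truncated SVD: approximate W (m x n) with factors A (m x r) and B (r x n),
    storing r*(m + n) values instead of m*n."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * s[:rank]   # fold the singular values into the left factor
    B = Vt[:rank, :]
    return A, B                  # W is approximated by A @ B

def magnitude_prune(W, sparsity=0.9):
    """Zero out the fraction `sparsity` of weights with the smallest magnitude."""
    threshold = np.quantile(np.abs(W), sparsity)
    return np.where(np.abs(W) >= threshold, W, 0.0)

# Toy 256 x 512 weight matrix standing in for one layer of an agent's model.
W = np.random.randn(256, 512)
A, B = low_rank_compress(W, rank=32)
W_sparse = magnitude_prune(W, sparsity=0.9)
print(np.linalg.norm(W - A @ B) / np.linalg.norm(W))  # relative approximation error
```

Both operations trade a small approximation error for large savings in storage and multiply-accumulate cost, which is the redundancy-removal effect the citing authors refer to.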
“…Various techniques have been proposed in the literature for compacting DNNs; see the survey in [13]. The conventional framework deals with scenarios where one has access to a highly parameterized, high-performance DNN and aims at making it more compact.…”
Section: A Non-collaborative Edge Inference
Mentioning (confidence: 99%)
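As a rough illustration of this conventional "compact a pretrained model" framework, the sketch below globally prunes and then dynamically quantizes a small PyTorch model; the architecture and the 80% sparsity target are placeholder assumptions, not details from [13].

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Hypothetical pretrained model standing in for the "highly parameterized" DNN;
# the layer sizes are placeholders, not taken from the cited survey.
model = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(64 * 32 * 32, 10),
)

# Global magnitude pruning: zero the 80% smallest weights across all layers.
to_prune = [(m, "weight") for m in model.modules()
            if isinstance(m, (nn.Conv2d, nn.Linear))]
prune.global_unstructured(to_prune, pruning_method=prune.L1Unstructured, amount=0.8)

# Bake the sparsity into the weight tensors, then quantize the linear layer to int8.
for module, name in to_prune:
    prune.remove(module, name)
quantized_model = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)
```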
“…For instance, the fact that a DNN is to be pruned or quantized can be accounted for in its training procedure, priming the trained model for compaction. Furthermore, one can prefer deep architectures, such as convolutional networks with small kernels and shortcut connections [13], that are inherently more compact than conventional ones. An alternative strategy is to design DNN-aided systems that utilize compact networks by incorporating statistical model-based domain knowledge and augmenting a suitable classic inference algorithm with trainable models; see, e.g., the survey in [14].…”
Section: A Non-collaborative Edge Inference
Mentioning (confidence: 99%)
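The "small kernels with shortcut connections" design pattern can be illustrated with a basic residual block; this is a generic sketch of that architectural idea, not a specific network from [13].

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CompactBlock(nn.Module):
    """A generic residual block: two 3x3 convolutions plus an identity shortcut."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return F.relu(out + x)  # shortcut connection preserves the input signal

block = CompactBlock(16)
y = block(torch.randn(1, 16, 32, 32))  # output shape matches input: (1, 16, 32, 32)
```

Stacking such blocks keeps parameter counts low (small kernels) while the shortcuts ease optimization, which is why these architectures are considered inherently more compact than plain convolutional stacks.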
“…Insufficient labeled data means that the training samples do not reflect the overall data well, resulting in poor generalization of the learning models. However, massive amounts of subjective text data are not naturally labeled with sentiment categories, and it is labor-intensive and financially demanding to obtain large-scale, high-quality labeled data [5]. Semi-supervised learning has emerged to improve model performance by leveraging unlabeled data, which is far cheaper to obtain.…”
Section: Introduction
Mentioning (confidence: 99%)
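For a concrete, if simplified, view of how semi-supervised learning exploits unlabeled data, the following self-training sketch uses scikit-learn's SelfTrainingClassifier on synthetic features; the data, base classifier, and confidence threshold are assumptions for illustration only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import SelfTrainingClassifier

# Synthetic features standing in for text embeddings: only the first 20 of 200
# samples are labeled; the rest carry -1, scikit-learn's marker for "unlabeled".
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))
y = np.full(200, -1)
y[:20] = (X[:20, 0] > 0).astype(int)

# Self-training: fit on the labeled subset, pseudo-label confident unlabeled
# samples, and refit until no remaining sample clears the probability threshold.
clf = SelfTrainingClassifier(LogisticRegression(), threshold=0.8)
clf.fit(X, y)
print(clf.predict(X[:5]))
```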