A Sparse Connected Long Short-Term Memory With Sharing Weight for Time Series Prediction
2020, DOI: 10.1109/access.2020.2984796

Abstract: The development of the mobile Internet and the success of deep learning in many applications have driven the need to deploy and apply deep learning models on mobile devices under the condition of limited resources. Long Short-Term Memory (LSTM), as a special scheme in deep learning, can learn long-distance dependencies hidden in time series. However, the high computational complexity of LSTM-related structures and the need for a large number of resources for training have become obstacles to their deployment o…
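The title names two compression ideas, sparse connections and shared weights. As a rough sketch only (not the paper's actual architecture; the fixed random mask, the 0.8 sparsity level, and the `MaskedLSTM` class name are assumptions for illustration), one way to emulate sparse connections is to hold a binary mask over an LSTM's weight matrices in PyTorch:

```python
# Illustrative sketch only: masking LSTM weights to emulate sparse
# connections. Not the paper's method; the random mask and sparsity
# level are assumptions.
import torch
import torch.nn as nn

class MaskedLSTM(nn.Module):
    def __init__(self, input_size, hidden_size, sparsity=0.8):
        super().__init__()
        self.lstm = nn.LSTM(input_size, hidden_size, batch_first=True)
        # Fixed random binary masks over the weight matrices; with
        # sparsity=0.8, about 20% of connections are kept. A real method
        # would choose the mask by connection importance, not at random.
        self.masks = {}
        for name, p in self.lstm.named_parameters():
            if "weight" in name:
                self.masks[name] = (torch.rand_like(p) > sparsity).float()

    def forward(self, x):
        # Re-apply the masks so pruned connections stay zero even after
        # gradient updates.
        with torch.no_grad():
            for name, p in self.lstm.named_parameters():
                if name in self.masks:
                    p.mul_(self.masks[name])
        out, _ = self.lstm(x)
        return out

# Usage: a batch of 4 sequences, length 10, feature size 8.
model = MaskedLSTM(input_size=8, hidden_size=16, sparsity=0.8)
y = model(torch.randn(4, 10, 8))
print(y.shape)  # torch.Size([4, 10, 16])
```

Weight sharing, the paper's second idea, would go further by tying surviving entries of the masked matrices together; it is omitted here for brevity.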

Cited by 2 publications (2 citation statements) · References 32 publications

“…The first category of methods [4, 20] uses a filter-pruning strategy for network compression. In particular, Xiong and Ling [5] used pruning strategies to preserve important connections during the training phase. Wen [6] decreased the memory requirements of LSTMs by altering the structure of LSTMs.…”
Section: Sparse Recurrent Neural Network
confidence: 99%
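The statement above attributes to Xiong and Ling [5] a pruning strategy that preserves important connections during training. A common realization of that general idea (a generic sketch, not necessarily the specific method of [5]; the threshold rule is an assumption) is magnitude pruning, which keeps only the largest-magnitude weights:

```python
# Generic magnitude-pruning sketch: keep the largest-magnitude
# connections, zero the rest. The sparsity target is an assumption.
import torch

def magnitude_prune(weight: torch.Tensor, sparsity: float) -> torch.Tensor:
    """Return a 0/1 mask keeping the (1 - sparsity) largest |weights|."""
    k = int(weight.numel() * sparsity)  # number of weights to drop
    if k == 0:
        return torch.ones_like(weight)
    thresh = weight.abs().flatten().kthvalue(k).values
    return (weight.abs() > thresh).float()

w = torch.randn(16, 8)
mask = magnitude_prune(w, sparsity=0.75)
w_pruned = w * mask
print(f"density: {mask.mean().item():.2f}")  # ~0.25 after pruning
```

Recomputing such a mask every few training steps lets weights that grow important again re-enter the network, which is one way "important connections" survive the training phase.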
“…Thus, it attempts to prune the external gate parameters to compress RNN models. Many researchers have designed structured sparsity strategies for recurrent neural network models [4–6], but effort in analyzing large-scale datasets, especially in text classification, is lacking. In order to reduce computational expense while ensuring accuracy, we try to integrate the sparse strategy into the hybrid RNN model for document classification.…”
Section: Introduction
confidence: 99%
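The statement also mentions pruning the external gate parameters to compress RNN models. As a hedged illustration (the per-gate 50% target and the choice to threshold every gate are assumptions, not taken from the cited work), PyTorch packs the four LSTM gates row-wise in `weight_ih_l0`/`weight_hh_l0` in the order input, forget, cell, output, so a magnitude threshold can be applied per gate slice:

```python
# Sketch: gate-wise magnitude pruning of an LSTM's packed weights.
# Gate row order in PyTorch is (input, forget, cell, output); the
# ~50% per-gate sparsity is an assumed illustration value.
import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=8, hidden_size=16)
H = lstm.hidden_size

with torch.no_grad():
    for name in ("weight_ih_l0", "weight_hh_l0"):
        W = getattr(lstm, name)
        for g, gate in enumerate(("input", "forget", "cell", "output")):
            rows = W[g * H:(g + 1) * H]       # this gate's weight slice
            thresh = rows.abs().median()       # ~50% sparsity per gate
            mask = (rows.abs() >= thresh).float()
            rows.mul_(mask)                    # zero the small weights
            kept = int(mask.sum().item())
            print(f"{name} {gate} gate: kept {kept}/{mask.numel()} weights")
```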