2021 ACM/IEEE 48th Annual International Symposium on Computer Architecture (ISCA)
DOI: 10.1109/isca52012.2021.00051
η-LSTM: Co-Designing Highly-Efficient Large LSTM Training via Exploiting Memory-Saving and Architectural Design Opportunities

Cited by 5 publications (2 citation statements); references 34 publications.
“…This section presents the performance results of the neural networks integrated into the three aforementioned scenarios. Four neural networks, including LSTM [31], GRU [32], CNN [33], and CNN-LSTM [34], are utilized in this study. Figure 12 illustrates the experimental settings of the aforementioned neural networks.…”
Section: Model Predictions
Mentioning confidence: 99%
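As an aside, a minimal sketch of what instantiating the four model families named in that statement might look like in PyTorch. All layer sizes, the class name CNNLSTM, and the classification head are illustrative assumptions; they are not the configuration of the citing study or of its Figure 12.

    import torch
    import torch.nn as nn

    # Hypothetical sizes -- not taken from the cited study.
    INPUT_DIM, HIDDEN_DIM, SEQ_LEN, NUM_CLASSES = 8, 64, 32, 4

    lstm = nn.LSTM(INPUT_DIM, HIDDEN_DIM, batch_first=True)  # LSTM [31]
    gru = nn.GRU(INPUT_DIM, HIDDEN_DIM, batch_first=True)    # GRU [32]

    # 1-D CNN over the sequence axis [33]
    cnn = nn.Sequential(
        nn.Conv1d(INPUT_DIM, HIDDEN_DIM, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.AdaptiveAvgPool1d(1),
    )

    class CNNLSTM(nn.Module):
        """CNN front-end feeding an LSTM, one common CNN-LSTM hybrid [34]."""
        def __init__(self):
            super().__init__()
            self.conv = nn.Conv1d(INPUT_DIM, HIDDEN_DIM, kernel_size=3, padding=1)
            self.lstm = nn.LSTM(HIDDEN_DIM, HIDDEN_DIM, batch_first=True)
            self.head = nn.Linear(HIDDEN_DIM, NUM_CLASSES)

        def forward(self, x):                      # x: (batch, seq, features)
            z = self.conv(x.transpose(1, 2))       # -> (batch, hidden, seq)
            out, _ = self.lstm(z.transpose(1, 2))  # -> (batch, seq, hidden)
            return self.head(out[:, -1])           # classify from the last step

    x = torch.randn(2, SEQ_LEN, INPUT_DIM)
    print(CNNLSTM()(x).shape)  # torch.Size([2, 4])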
“…Concerning GPU implementations, there is less related work, as they usually pose a smaller research problem than FPGA implementations. However, there are quite a few implementations [20,21] which focus on LSTM training on GPU platforms in order to reduce the energy footprint and accelerate the training algorithm.…”
Section: Related Work
Mentioning confidence: 99%
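For illustration, a minimal sketch of one widely used GPU-side approach to cheaper LSTM training: automatic mixed precision in PyTorch, which roughly halves activation memory in fp16. This is an assumed, generic baseline in the spirit of the works cited above, not the η-LSTM co-design itself, and all sizes and names are hypothetical.

    import torch
    import torch.nn as nn

    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = nn.LSTM(input_size=64, hidden_size=256, batch_first=True).to(device)
    head = nn.Linear(256, 10).to(device)
    opt = torch.optim.Adam(list(model.parameters()) + list(head.parameters()))
    scaler = torch.cuda.amp.GradScaler(enabled=(device == "cuda"))

    x = torch.randn(8, 128, 64, device=device)     # (batch, seq, features), toy data
    y = torch.randint(0, 10, (8,), device=device)  # toy labels

    for _ in range(3):                             # toy training loop
        opt.zero_grad()
        # fp16 autocast reduces activation memory on GPU; disabled on CPU
        with torch.autocast(device_type=device, enabled=(device == "cuda")):
            out, _ = model(x)                      # out: (batch, seq, hidden)
            loss = nn.functional.cross_entropy(head(out[:, -1]), y)
        scaler.scale(loss).backward()              # loss scaling avoids fp16 underflow
        scaler.step(opt)
        scaler.update()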