2023
DOI: 10.1007/978-981-19-7402-1_9

Identifying Pitfalls and Solutions in Parallelizing Long Short-Term Memory Network on Graphical Processing Unit by Comparing with Tensor Processing Unit Parallelism

Cited by 6 publications (2 citation statements)
References 11 publications

Citation statements (ordered by relevance):
“…Another issue is that [the Transformer] solves the problem of parallelism observed in LSTM networks. However, Transformers cannot represent relationships beyond a fixed context length, and splits often do not respect sequence boundaries, resulting in context segmentation and ineffective optimization (Saoud, Al-Marzouqi, and Hussein, 2022; Ravikumar et al., 2023).…”
Section: Transformers in Time Series
Confidence: 99%
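The segmentation issue in this statement can be made concrete. The following is a minimal sketch, assuming PyTorch; the window size W and the dimensions are illustrative choices, not taken from the cited works. Because each fixed-length window is attended to independently, tokens on either side of a split cannot exchange context, which is the context segmentation the citing authors describe.

```python
import torch
import torch.nn as nn

W = 128                 # hypothetical fixed context length (illustrative)
d_model, n_heads = 64, 4

attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

x = torch.randn(1, 1024, d_model)          # one long sequence
windows = x.view(1024 // W, W, d_model)    # split into fixed-length windows

# Attention runs per window; no information flows across window borders,
# so a token at position W-1 cannot attend to the token at position W.
out, _ = attn(windows, windows, windows)
out = out.view(1, 1024, d_model)
```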
“…The RNN's performance is constrained by its intrinsically sequential nature, which can limit full exploitation of the parallelism offered by dedicated hardware environments. Transformers, in turn, allow parallel execution on GPUs and TPUs (Ravikumar et al., 2023).…”
Section: Transformers in Time Series
Confidence: 99%
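The contrast drawn in this statement can be sketched in a few lines. This is a minimal illustration, assuming PyTorch with arbitrary toy dimensions, not the benchmark setup of the cited paper: the LSTM recurrence must iterate over time because each step consumes the previous hidden state, while self-attention computes all positions in one batched matrix multiplication that a GPU or TPU can parallelize across time steps.

```python
import torch
import torch.nn as nn

T, d = 256, 64                 # illustrative sequence length and width
x = torch.randn(1, T, d)

# LSTM: an inherently sequential recurrence. Step t needs h_{t-1},
# so the T steps cannot be computed concurrently.
cell = nn.LSTMCell(d, d)
h = torch.zeros(1, d)
c = torch.zeros(1, d)
outs = []
for t in range(T):             # serial loop over time steps
    h, c = cell(x[:, t, :], (h, c))
    outs.append(h)
lstm_out = torch.stack(outs, dim=1)

# Self-attention: all T positions are produced by one batched matmul,
# which GPU/TPU hardware executes in parallel across time steps.
attn = nn.MultiheadAttention(d, num_heads=4, batch_first=True)
attn_out, _ = attn(x, x, x)
```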