Proceedings of the 34th ACM International Conference on Supercomputing 2020
DOI: 10.1145/3392717.3392762
Wavefront parallelization of recurrent neural networks on multi-core architectures

Abstract: Recurrent neural networks (RNNs) are widely used for natural language processing, time-series prediction, and text analysis tasks. The internal structure of RNN inference and training, in terms of data and control dependencies across their fundamental numerical kernels, complicates the exploitation of model parallelism, which is why only data parallelism has traditionally been applied to accelerate RNNs. This paper presents W-Par (Wavefront-Parallelization), a comprehensive approach for RNN inference a…
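The wavefront idea referenced in the abstract can be sketched as follows: RNN inference forms a (layer, timestep) dependency grid in which each cell depends on its left neighbor (previous timestep) and the cell below (previous layer), so all cells on the same anti-diagonal are mutually independent and can run in parallel. A minimal illustrative sketch (the function name and grid model are assumptions for illustration, not the paper's actual implementation):

```python
def wavefront_schedule(num_layers, num_steps):
    """Group the cells of a (layer, timestep) RNN grid into wavefronts.

    Cells on the same anti-diagonal d = layer + timestep have no mutual
    dependencies, so each inner list could be dispatched in parallel.
    """
    schedule = []
    for d in range(num_layers + num_steps - 1):
        wave = [(layer, d - layer)
                for layer in range(num_layers)
                if 0 <= d - layer < num_steps]
        schedule.append(wave)
    return schedule

# Example: 2 layers, 3 timesteps -> 4 wavefronts.
print(wavefront_schedule(2, 3))
# [[(0, 0)], [(0, 1), (1, 0)], [(0, 2), (1, 1)], [(1, 2)]]
```

Each inner list grows up to min(num_layers, num_steps) cells, which bounds the parallelism available per wavefront.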

Cited by 1 publication (1 citation statement). References 27 publications.
“…Each square cell is composed of either an LSTM [38] or a GRU cell [19]. Equations (1)-(6) define the computations involved in each LSTM cell, and previous work [39] contains detailed descriptions of all parameters.…”
Section: Background on BRNN
Confidence: 99%
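For reference, the LSTM cell computations that the citation statement alludes to are commonly written as follows (this is the standard formulation; the exact equation numbering and parameter names in the cited work may differ):

```latex
\begin{align}
i_t &= \sigma(W_i x_t + U_i h_{t-1} + b_i) \\
f_t &= \sigma(W_f x_t + U_f h_{t-1} + b_f) \\
o_t &= \sigma(W_o x_t + U_o h_{t-1} + b_o) \\
\tilde{c}_t &= \tanh(W_c x_t + U_c h_{t-1} + b_c) \\
c_t &= f_t \odot c_{t-1} + i_t \odot \tilde{c}_t \\
h_t &= o_t \odot \tanh(c_t)
\end{align}
```

Here $\sigma$ is the logistic sigmoid, $\odot$ denotes element-wise multiplication, and the recurrent dependence of $h_t$ and $c_t$ on $h_{t-1}$ and $c_{t-1}$ is precisely the chain of dependencies that constrains model parallelism across timesteps.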