Proceedings of the Twenty-Fifth Annual ACM Symposium on Parallelism in Algorithms and Architectures 2013
DOI: 10.1145/2486159.2486174

On-the-fly pipeline parallelism

Abstract: Pipeline parallelism organizes a parallel program as a linear sequence of s stages. Each stage processes elements of a data stream, passing each processed data element to the next stage, and then taking on a new element before the subsequent stages have necessarily completed their processing. Pipeline parallelism is used especially in streaming applications that perform video, audio, and digital signal processing. Three out of 13 benchmarks in PARSEC, a popular software benchmark suite designed for shared-memo…
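To make the stage-by-stage description above concrete, here is a minimal sketch of a three-stage pipeline written with C++ threads. It is a generic illustration of pipeline parallelism, not the Cilk-P language constructs introduced in the paper; the Channel class and the stage bodies are hypothetical names used only for this sketch. Each stage pulls an element from its input channel, processes it, and passes it downstream, so it can take on a new element before later stages have finished the previous one.

// Minimal pipeline sketch (C++17): three stages connected by blocking queues.
// This is an assumption-level illustration, not the paper's Cilk-P API.
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <optional>
#include <queue>
#include <thread>

// Simple thread-safe FIFO channel; std::nullopt marks end of stream.
template <typename T>
class Channel {
    std::queue<std::optional<T>> q_;
    std::mutex m_;
    std::condition_variable cv_;
public:
    void push(std::optional<T> v) {
        { std::lock_guard<std::mutex> lk(m_); q_.push(std::move(v)); }
        cv_.notify_one();
    }
    std::optional<T> pop() {
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [&] { return !q_.empty(); });
        std::optional<T> v = std::move(q_.front());
        q_.pop();
        return v;
    }
};

int main() {
    Channel<int> a_to_b, b_to_c;

    // Stage 1: produce a stream of data elements.
    std::thread stage1([&] {
        for (int i = 0; i < 10; ++i) a_to_b.push(i);
        a_to_b.push(std::nullopt);                 // end-of-stream marker
    });

    // Stage 2: process each element (square it) and pass it on immediately,
    // then take the next element without waiting for stage 3 to finish.
    std::thread stage2([&] {
        while (auto v = a_to_b.pop()) b_to_c.push(*v * *v);
        b_to_c.push(std::nullopt);
    });

    // Stage 3: consume the processed elements in arrival (FIFO) order.
    std::thread stage3([&] {
        while (auto v = b_to_c.pop()) std::cout << *v << '\n';
    });

    stage1.join(); stage2.join(); stage3.join();
    return 0;
}

Because each channel delivers elements in FIFO order, every stage processes the stream in sequence while different stages overlap in time, which is the essence of the linear pipelines the abstract describes.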

Cited by 30 publications (68 citation statements). References 36 publications (34 reference statements).
“…The pipeline detected by our approach can also be enhanced using the same techniques as Huang et al. Rul et al. [2007] use the approach of Raman et al. on the level of functions. Lee et al. [2013] transform user-annotated code into pipelines using Cilk. Given that our approach can identify the code locations of pipeline stages automatically, it could help users of the approach of Lee et al. to quickly find the places to annotate.…”
Section: Related Work
mentioning confidence: 99%
“…Following the observation that most parallel patterns are based on assumptions about data dependences, we replace UML diagrams with data-dependence graphs to characterize both the pattern and the software. Our approach, which is implemented as an extension of the data-dependence profiler DiscoPoP [Li et al. 2013; Li et al. 2015], automatically infers potential parallel design patterns from the dependence graph of the program and specifies the division of code blocks according to the pattern structure. Based on this information, the programmer can then easily parallelize the code by moving suggested code blocks into appropriate structures of the pattern.…”
Section: Introduction
mentioning confidence: 99%
“…This restriction easily captures useful forms of pipeline parallelism [15], and so our race detector can be directly applied to analyze such pipelined programs. Our work can be seen as a generalization of existing race detectors for SP graphs to richer classes of graphs and language constructs.…”
Section: This Work
mentioning confidence: 99%
“…However, their model is more relaxed than ours, as it allows non-linear pipelines, thereby leaving open the question of efficient race detection in their case. Linear pipelines are the focus of the work of Lee et al. [15], which extends Cilk with support for this setting. Interestingly, their language constructs are easily expressible in our restricted fork-join, but not the other way around, even though both models can express exactly the same task graphs, i.e., the ones having a two-dimensional lattice structure.…”
Section: Generalization of Series-Parallel Constructs
mentioning confidence: 99%
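As a concrete reading of the two-dimensional-lattice remark in the statement above, the short sketch below enumerates the dependence edges of a linear pipeline's task graph. It is an illustration under the assumption that node (i, j) denotes stage j of iteration i: each node depends on the previous stage within its own iteration and on the same stage in the previous iteration, which is exactly an n-by-s lattice.

// Enumerate the edges of the 2-D lattice task graph of a linear pipeline
// with n iterations and s stages (illustrative assumption, not code from
// either cited paper).
#include <cstdio>

int main() {
    const int n = 3;  // iterations (rows of the lattice)
    const int s = 4;  // stages (columns of the lattice)
    for (int i = 0; i < n; ++i) {
        for (int j = 0; j < s; ++j) {
            std::printf("node (%d,%d) depends on:", i, j);
            if (j > 0) std::printf(" (%d,%d)", i, j - 1);  // previous stage, same iteration
            if (i > 0) std::printf(" (%d,%d)", i - 1, j);  // same stage, previous iteration
            std::printf("\n");
        }
    }
    return 0;
}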