Proceedings of the 25th ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages (POPL '98), 1998
DOI: 10.1145/268946.268956

Array SSA form and its use in parallelization

Abstract: Static single assignment (SSA) form for scalars has been a significant advance. It has simplified the way we think about scalar variables. It has simplified the design of some optimizations and has made other optimizations more effective. Unfortunately, none of this can be said for SSA form for arrays. The current SSA processing of arrays views an array as a single object. But the kinds of analyses that sophisticated compilers need to perform on arrays, for example those that drive loop parallelization, are a…

Cited by 110 publications (72 citation statements)
References 13 publications
“…To benefit from its algorithmic properties, we extend the SSA form to operate on array blocks. This extension differs from Array SSA proposals [27,28]: it does not attempt to model the data flow of individual array elements. In this form, array blocks are fully renamed, and name conflicts at control-flow points are handled with Φ functions following the rules of strict SSA form.…”
Section: Preliminary Analyses and Transformations
confidence: 99%
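The citation statement above contrasts renaming whole array blocks with modeling the flow of individual array elements. A minimal Python sketch of the block-level idea (all names, A0 through A3, are illustrative and not taken from the cited proposals): each definition of a block receives a fresh SSA name, and a Φ at the control-flow join selects one entire block rather than merging elements. Both branch definitions are evaluated here only to keep the sketch flat; in a real program only one path executes.

```python
# Illustrative sketch only: strict-SSA-style renaming of whole array
# blocks, with a phi selection at a join point. Names are hypothetical.

def block_ssa(cond):
    A0 = [0, 0, 0, 0]            # A0: initial definition of block A
    A1 = A0.copy(); A1[0] = 1    # then-branch: fresh name A1
    A2 = A0.copy(); A2[3] = 2    # else-branch: fresh name A2
    # Phi at the join: selects one whole block by the incoming edge;
    # individual elements are never merged.
    A3 = A1 if cond else A2
    return A3
```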
“…In this phase the LPC detects loop structures, analyses the data dependencies within them, creates parallel loops where these dependencies can be maintained, and annotates the loops with high-level pseudo code. By performing the analysis of loops at this high-level compiler phase the LPC benefits from Java's strong typing and single static assignment (SSA) form [12].…”
Section: Loop Parallelizing Compiler
confidence: 99%
“…Memory overhead complexity of the array expansion technique proposed in [6] is O(A size ×P ) which, in practice, prevents the application of this method for large array sizes and a high number of processors. In contrast, memory overhead of our inspector-executor method is O(max(f size + P, A size )).…”
Section: Performance Analysis
confidence: 99%
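To make the two overhead bounds quoted above concrete, here is a hedged back-of-the-envelope comparison in Python; the numeric values of A_size, P, and f_size are invented for illustration only.

```python
# Illustrative comparison of the two memory-overhead formulas quoted
# in the citation statement above. All concrete numbers are made up.

def expansion_overhead(a_size, p):
    # Array expansion: O(A_size * P), one private copy per processor.
    return a_size * p

def inspector_executor_overhead(a_size, p, f_size):
    # Inspector-executor bound: O(max(f_size + P, A_size)).
    return max(f_size + p, a_size)

a_size, p, f_size = 1_000_000, 64, 500_000
print(expansion_overhead(a_size, p))                    # 64_000_000
print(inspector_executor_overhead(a_size, p, f_size))   # 1_000_000
```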
“…Knobe and Sarkar [6] describe a program representation that uses array expansion [13] to enable the parallel execution of irregular assignment computations. Each processor executes a set of iterations preserving the same relative order of the sequential loop.…”
Section: Introduction
confidence: 99%
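The citation above describes array expansion for irregular assignments of the form A[f[i]] = v[i], where each processor executes its iterations in the sequential loop's relative order. A hedged Python sketch of that scheme (the function and variable names are invented, and the per-processor loops are run sequentially here; a real implementation would execute them concurrently): each processor writes into a private copy tagged with the original iteration number, and the merge keeps the latest write per element, reproducing the sequential loop's last-write-wins result.

```python
# Sketch (assumed, not from the paper): irregular assignment
# A[f[i]] = values[i] parallelized by array expansion.

def irregular_assign_expanded(n_elems, f, values, n_procs):
    # One private copy of the array, plus an iteration stamp, per processor.
    copies = [[None] * n_elems for _ in range(n_procs)]
    stamps = [[-1] * n_elems for _ in range(n_procs)]
    n_iters = len(f)
    # Cyclic distribution: each processor sees its iterations in the
    # same relative order as the sequential loop.
    for p in range(n_procs):
        for i in range(p, n_iters, n_procs):
            copies[p][f[i]] = values[i]
            stamps[p][f[i]] = i
    # Merge: per element, keep the write from the latest iteration.
    result = [None] * n_elems
    for j in range(n_elems):
        best = max(range(n_procs), key=lambda p: stamps[p][j])
        if stamps[best][j] >= 0:
            result[j] = copies[best][j]
    return result
```

For example, with f = [0, 2, 0, 1] and values = [10, 20, 30, 40], the merge yields [30, 40, 20], matching the sequential loop in which iteration 2 overwrites iteration 0's store to A[0].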