Proceedings of the 2nd International Conference on Supercomputing - ICS '88 1988
DOI: 10.1145/55364.55385

A framework for determining useful parallelism

Cited by 54 publications (12 citation statements)
References 10 publications
“…In this extended replication transformation, called full replication, we create copies that are local to the loops that both read and write the arrays. We trade decreased execution time for increased array storage. In addition, the implementation must update the arrays to ensure data consistency.…”
Section: Example
confidence: 99%
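The full-replication transformation quoted above can be sketched in a few lines of Python. This is an illustrative analogue, not the paper's implementation: each worker updates its own private copy of an array (the extra storage), and a final merge step restores data consistency.

```python
from concurrent.futures import ThreadPoolExecutor

def replicated_histogram(data, nbins, nworkers=4):
    """Sketch of 'full replication': each worker writes to a private
    copy of the array, trading extra storage for conflict-free
    parallel updates; a final merge ensures consistency."""
    chunks = [data[i::nworkers] for i in range(nworkers)]
    # One private copy of the array per worker (the storage cost).
    copies = [[0] * nbins for _ in range(nworkers)]

    def work(idx):
        local = copies[idx]
        for v in chunks[idx]:
            local[v % nbins] += 1  # no sharing, so no synchronization

    with ThreadPoolExecutor(max_workers=nworkers) as pool:
        for idx in range(nworkers):
            pool.submit(work, idx)
    # Pool shutdown waits for all workers before the merge below.

    # Consistency step: combine the replicated copies.
    return [sum(c[b] for c in copies) for b in range(nbins)]
```

Here `replicated_histogram` and the merge-by-summation step are assumptions chosen for the sketch; the paper's transformation applies the same copy-then-update idea to arrays inside loop nests.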
“…Many compilers targeting shared memory systems replicate data to enable concurrent read accesses [1], and [8] further investigates adaptive replication in order to reduce synchronization overheads that may ultimately degrade performance. Memory Parallelism: There have been many approaches to improve memory parallelism.…”
Section: Replication For Shared Memory Multiprocessor Systems
confidence: 99%
“…Our memory model assumes there will be no conflict or capacity cache misses in one iteration of the innermost loop. The algorithm performs the following five steps:…”
Section: Optimize: Data Locality and Parallelism
confidence: 99%
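The quoted memory-model assumption amounts to checking that the data touched in one innermost-loop iteration fits in cache, so only compulsory misses remain. A minimal sketch of such a check (the cache size, the per-reference byte counts, and the function itself are illustrative assumptions, not the paper's algorithm):

```python
def fits_in_cache(refs_touched, cache_bytes=32 * 1024):
    """Return True if the data one innermost-loop iteration touches
    fits in a cache of cache_bytes, i.e. the no-conflict/no-capacity
    miss assumption is plausible. refs_touched is a list of
    (n_elements, bytes_per_element) pairs, one per array reference."""
    footprint = sum(n * b for n, b in refs_touched)
    return footprint <= cache_bytes
```

For example, an iteration touching two 8-byte references easily satisfies the assumption for a 32 KiB cache, while a reference sweeping a million 8-byte elements does not.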
“…Many commercial parallelizing compilers do not reveal their optimization strategies to maintain a market advantage. The IBM PTRAN project, an industrial research compiler, has published parallelization algorithms that use control and data dependence, and a wide selection of transformations, but without results [1], [40], [41]. Below, we compare this study with those of parallelizing compilers from Illinois and Stanford [10], [17], [19], [43].…”
Section: Related Work
confidence: 99%
“…Classic methods [7,8,11,14,24] struggle to identify parallelization opportunities outside nested loops. The latest research in this field [10,5,6], based on pattern-matching techniques, allows substituting part of a sequential program with an equivalent parallel subprogram.…”
Section: Introduction
confidence: 99%
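The pattern-matching idea in the excerpt above can be sketched with Python's `ast` module. This toy recognizer (the `SEQUENTIAL` snippet, the function name, and the rewrite target are all illustrative assumptions) matches a sequential accumulation loop and reports the equivalent reduction, which a parallelizer could then map onto a parallel subprogram:

```python
import ast

SEQUENTIAL = """
total = 0
for x in values:
    total = total + x
"""

def match_reduction(tree):
    """Pattern-match a single-statement accumulation loop
    (acc = acc + x) and return the equivalent reduction form;
    a sketch of pattern-based substitution, not a real compiler."""
    for node in ast.walk(tree):
        if (isinstance(node, ast.For)
                and isinstance(node.iter, ast.Name)
                and len(node.body) == 1
                and isinstance(node.body[0], ast.Assign)
                and isinstance(node.body[0].targets[0], ast.Name)
                and isinstance(node.body[0].value, ast.BinOp)
                and isinstance(node.body[0].value.op, ast.Add)):
            acc = node.body[0].targets[0].id
            return f"{acc} = sum({node.iter.id})"  # parallelizable form
    return None

print(match_reduction(ast.parse(SEQUENTIAL)))  # total = sum(values)
```

A real pattern-matching parallelizer would of course verify associativity and data dependences before substituting, but the match-then-replace structure is the same.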