2009 18th International Conference on Parallel Architectures and Compilation Techniques (PACT)
DOI: 10.1109/pact.2009.10
Exploiting Parallelism with Dependence-Aware Scheduling

Cited by 23 publications (12 citation statements). References 17 publications.
“…For loops not amenable to static analysis, speculative techniques have been used for run-time parallelization [23], [33], [37], [46]. Zhuang et al. [47] inspect run-time dependences to check whether contiguous sets of loop iterations are dependent. None of those efforts address distributed memory code generation.…”
Section: Related Work
confidence: 99%
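The inspection step mentioned in this statement can be pictured with a short sketch. The following is a minimal illustration, not Zhuang et al.'s actual algorithm: for a loop whose only cross-iteration hazard is an indirect update A[idx[i]] += B[i], it tests at run time whether two contiguous blocks of iterations touch disjoint elements of A and may therefore execute concurrently. All identifiers (idx, blocks_independent, the block bounds) are illustrative.

```cpp
#include <cstddef>
#include <unordered_set>
#include <vector>

// Returns true when iteration blocks [lo1, hi1) and [lo2, hi2) of the loop
//   for (i) A[idx[i]] += B[i];
// write disjoint elements of A and can therefore run concurrently.
bool blocks_independent(const std::vector<std::size_t>& idx,
                        std::size_t lo1, std::size_t hi1,
                        std::size_t lo2, std::size_t hi2) {
    std::unordered_set<std::size_t> touched(idx.begin() + lo1,
                                            idx.begin() + lo2 - lo1 + lo1);
    touched = std::unordered_set<std::size_t>(idx.begin() + lo1,
                                              idx.begin() + hi1);
    for (std::size_t i = lo2; i < hi2; ++i)
        if (touched.count(idx[i]))
            return false;             // shared element: blocks are dependent
    return true;                      // disjoint footprints: independent
}
```

A scheduler could invoke such a test as blocks become ready, overlapping blocks that pass the check and serializing the rest.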
“…Dynamic schemes to execute loop iterations in parallel when they are detected at run time to be independent have been proposed [16,42]. In [16] loop iterations are speculatively executed in parallel with the possibility of fixing the execution if they are misspeculated.…”
Section: Related Work
confidence: 99%
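One way to picture the speculate-and-repair scheme attributed to [16] is the sketch below. It is a simplified illustration under assumptions of our own, not the cited mechanism: threads execute iterations of A[idx[i]] += B[i] in parallel but buffer their updates (valid here because the addend never reads A); a post-pass checks whether two iterations targeted the same element, commits the buffers if not, and otherwise discards them and re-executes sequentially. All names are hypothetical.

```cpp
#include <algorithm>
#include <cstddef>
#include <thread>
#include <unordered_set>
#include <utility>
#include <vector>

void speculative_run(std::vector<double>& A, const std::vector<double>& B,
                     const std::vector<std::size_t>& idx) {
    unsigned p = std::max(1u, std::thread::hardware_concurrency());
    // Per-thread write logs: (location, addend), built speculatively.
    std::vector<std::vector<std::pair<std::size_t, double>>> logs(p);

    std::vector<std::thread> pool;
    for (unsigned t = 0; t < p; ++t)
        pool.emplace_back([&, t] {
            for (std::size_t i = t; i < idx.size(); i += p)
                logs[t].emplace_back(idx[i], B[i]); // buffered A[idx[i]] += B[i]
        });
    for (auto& th : pool) th.join();

    // Misspeculation check: did two iterations write the same element of A?
    std::unordered_set<std::size_t> seen;
    bool conflict = false;
    for (const auto& l : logs)
        for (const auto& e : l)
            conflict = conflict || !seen.insert(e.first).second;

    if (conflict) {                   // squash and re-execute sequentially
        for (std::size_t i = 0; i < idx.size(); ++i) A[idx[i]] += B[i];
    } else {                          // no conflict: commit speculative work
        for (const auto& l : logs)
            for (const auto& e : l) A[e.first] += e.second;
    }
}
```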
“…In [16] loop iterations are speculatively executed in parallel with the possibility of fixing the execution if they are misspeculated. In [42], loop iterations are executed in parallel after they are recognized to be independent at run time; here the focus is on minimizing the overhead of computing the dynamic data dependences by performing an approximate and conservative analysis while the original code is executing.…”
Section: Related Work
confidence: 99%
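The low-overhead, conservative run-time analysis attributed to [42] can be approximated by the following sketch, again under our own assumptions rather than the paper's implementation: a single cheap pass checks that the index array contains no duplicate entries, in which case no two iterations of A[idx[i]] += B[i] conflict and the loop may run in parallel; any possible conflict keeps execution sequential.

```cpp
#include <algorithm>
#include <cstddef>
#include <thread>
#include <unordered_set>
#include <vector>

void run_if_independent(std::vector<double>& A, const std::vector<double>& B,
                        const std::vector<std::size_t>& idx) {
    // Conservative test: all-distinct write targets => no two iterations
    // of A[idx[i]] += B[i] can conflict.
    std::unordered_set<std::size_t> seen(idx.begin(), idx.end());
    bool independent = (seen.size() == idx.size());

    if (!independent) {               // possible conflict: stay sequential
        for (std::size_t i = 0; i < idx.size(); ++i) A[idx[i]] += B[i];
        return;
    }
    unsigned p = std::max(1u, std::thread::hardware_concurrency());
    std::vector<std::thread> pool;
    for (unsigned t = 0; t < p; ++t)
        pool.emplace_back([&, t] {    // cyclic distribution of iterations
            for (std::size_t i = t; i < idx.size(); i += p)
                A[idx[i]] += B[i];
        });
    for (auto& th : pool) th.join();
}
```

The test is conservative in the sense the statement describes: it may reject parallelizable loops (e.g. duplicates that never actually race) but never admits a conflicting one.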
“…The other main direction of approaching autoparallelization has been to analyze memory references at run time, either via inspector/executor [26], or via TLS techniques [25], or via faster but less scalable techniques [30]. These techniques have overhead proportional to the number of original-loop accesses, and hence we use them only as a last resort, once all the lighter predicates have failed.…”
Section: Related Work
confidence: 99%
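An inspector/executor scheme of the kind [26] refers to can be sketched as follows, with hypothetical helper names and the same illustrative loop A[idx[i]] += B[i]: the inspector walks the index array once and assigns each iteration to a wavefront so that iterations within one wavefront write distinct locations; the executor then runs the wavefronts in order, with each wavefront's iterations safe to execute in parallel.

```cpp
#include <cstddef>
#include <unordered_map>
#include <vector>

// Inspector: place iteration i one wavefront after the previous iteration
// that wrote A[idx[i]], so conflicting iterations land in different waves.
std::vector<std::vector<std::size_t>>
inspect(const std::vector<std::size_t>& idx) {
    std::unordered_map<std::size_t, std::size_t> last_wave; // location -> wave
    std::vector<std::vector<std::size_t>> waves;
    for (std::size_t i = 0; i < idx.size(); ++i) {
        std::size_t w = 0;
        auto it = last_wave.find(idx[i]);
        if (it != last_wave.end()) w = it->second + 1;
        if (w == waves.size()) waves.emplace_back();
        waves[w].push_back(i);
        last_wave[idx[i]] = w;
    }
    return waves;
}

// Executor: iterations inside one wave are mutually independent, so each
// inner loop could be handed to a thread pool; shown sequentially for brevity.
void execute(std::vector<double>& A, const std::vector<double>& B,
             const std::vector<std::size_t>& idx,
             const std::vector<std::vector<std::size_t>>& waves) {
    for (const auto& wave : waves)
        for (std::size_t i : wave)    // parallelizable region
            A[idx[i]] += B[i];
}
```

The inspector's single pass over the index array is also why, as the statement notes, the overhead of such schemes grows with the number of original-loop accesses.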