Proceedings of the Twentieth Annual Symposium on Parallelism in Algorithms and Architectures 2008
DOI: 10.1145/1378533.1378575
Scheduling strategies for optimistic parallel execution of irregular programs

Abstract: Recent application studies have shown that many irregular applications have a generalized data parallelism that manifests itself as iterative computations over worklists of different kinds. In general, there are complex dependencies between iterations. These dependencies cannot be elucidated statically because they depend on the inputs to the program; thus, optimistic parallel execution is the only tractable approach to parallelizing these applications. We have built a system called Galois that supports this st…
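The optimistic worklist pattern the abstract describes might look roughly like the single-threaded sketch below. Body, ConflictException, and the retry policy are invented stand-ins for the runtime's conflict detection and rollback machinery, not the Galois API:

```java
import java.util.Deque;

// Minimal sketch of optimistic worklist execution: each iteration runs
// speculatively; if the runtime detects a dependence conflict with another
// in-flight iteration, the iteration's effects are undone and the item is
// retried. Names here are illustrative, not the Galois API.
final class OptimisticWorklist<T> {

    /** Thrown by the runtime when a speculative iteration conflicts. */
    static final class ConflictException extends Exception {}

    /** Loop body: may add newly discovered work to the worklist. */
    interface Body<U> {
        void apply(U item, Deque<U> worklist) throws ConflictException;
    }

    void run(Deque<T> worklist, Body<T> body) {
        while (!worklist.isEmpty()) {
            T item = worklist.poll();
            try {
                body.apply(item, worklist);   // execute speculatively
            } catch (ConflictException e) {
                worklist.add(item);           // roll back: re-enqueue and retry
            }
        }
    }
}
```

A real runtime would additionally undo any side effects of the failed iteration (for example, via an undo log) before retrying; the sketch only captures the retry structure.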

Cited by 46 publications (46 citation statements)
References: 28 publications
“…The efficient kd-tree-based algorithms are challenging, since each iteration potentially has data dependencies with the prior ones. In previous work [12, 10, 11], we showed that there is exploitable parallelism in the heap-based and locally-ordered algorithms using the Galois optimistic approach.…”
Section: Parallelizing Agglomerative Clustering for Multicore (mentioning)
confidence: 96%
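The heap-based formulation mentioned in this excerpt can be pictured as a priority-queue worklist. The following Java skeleton is only an illustration, not the authors' kd-tree algorithm; isStale, merge, and candidates are hypothetical helpers:

```java
import java.util.Comparator;
import java.util.List;
import java.util.PriorityQueue;

// Illustrative heap-based agglomerative clustering skeleton (not the
// authors' kd-tree algorithm). The priority queue imposes only a local
// order -- closest candidate merge first -- so iterations touching
// disjoint clusters are independent and can run optimistically in parallel.
final class HeapClustering {

    record Merge(int a, int b, double dist) {}

    static void cluster(List<Merge> initialCandidates) {
        PriorityQueue<Merge> heap =
            new PriorityQueue<>(Comparator.comparingDouble(Merge::dist));
        heap.addAll(initialCandidates);
        while (!heap.isEmpty()) {
            Merge m = heap.poll();            // closest pair first
            if (isStale(m)) continue;         // an endpoint was merged already
            int c = merge(m.a(), m.b());      // hypothetical merge step
            heap.addAll(candidates(c));       // new candidates for the new cluster
        }
    }

    // Hypothetical helpers: a real implementation would track cluster
    // liveness and find nearest neighbours, e.g. with a kd-tree.
    static boolean isStale(Merge m) { return false; }
    static int merge(int a, int b) { return a; }
    static List<Merge> candidates(int c) { return List.of(); }
}
```

Because the heap enforces only a local order, iterations that touch disjoint clusters commute, which is the parallelism the Galois optimistic approach exploits.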
“…The FailedAt attribute of t indicates which live-in variables caused the speculation to fail. Since the instruction address of the read and the space ID before the read are recorded when each live-in variable is first read (Figure 10, line 2), the main thread can retrieve these two values for the first accessed live-in variable by calling two further auxiliary functions, GetRecoveryPC and getRecoverySpaceID (lines 16-17). These two values determine the starting point of the re-execution.…”
Section: Handling Speculative Results (mentioning)
confidence: 99%
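The bookkeeping this excerpt describes can be sketched as follows. Only the accessor names come from the excerpt (casing normalized to Java style); the class, record, and signatures are assumptions:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the recovery bookkeeping the excerpt describes. On the FIRST
// read of each live-in variable we record the reading instruction's
// address and the space ID in effect before the read; if speculation
// later fails on that variable, the pair gives the re-execution start
// point. All signatures here are invented for illustration.
final class RecoveryLog {

    record ReadPoint(long recoveryPC, int spaceIdBeforeRead) {}

    private final Map<String, ReadPoint> firstReads = new HashMap<>();

    /** Called on every read of a live-in variable; only the first one sticks. */
    void onLiveInRead(String var, long pc, int spaceId) {
        firstReads.putIfAbsent(var, new ReadPoint(pc, spaceId));
    }

    long getRecoveryPC(String var)      { return firstReads.get(var).recoveryPC(); }
    int  getRecoverySpaceID(String var) { return firstReads.get(var).spaceIdBeforeRead(); }
}
```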
“…Instead of state separation, Kulkarni et al. proposed a rollback-based speculative parallelization technique [15-18, 21]. They introduce two special constructs that users can employ to identify speculative parallelism.…”
Section: Related Work (mentioning)
confidence: 99%
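The two constructs this excerpt alludes to are Galois's set iterators over unordered and ordered worksets. A rough Java rendering of their shape follows; the real Galois syntax and classes differ, and this interface is invented for illustration:

```java
import java.util.Comparator;
import java.util.function.Consumer;

// Rough rendering of the shape of Galois's two iterator constructs.
// The runtime executes iterations speculatively and rolls back the
// losers of any conflict; the two constructs differ only in what
// commit orders they permit.
interface GaloisStyleRuntime {

    /** Unordered-set iterator: any serialization of iterations is acceptable. */
    <T> void foreachUnordered(Iterable<T> workset, Consumer<T> body);

    /** Ordered-set iterator: iterations may run speculatively but must
        commit in the order given by the comparator. */
    <T> void foreachOrdered(Iterable<T> workset, Comparator<T> order, Consumer<T> body);
}
```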
“…Next, all speculative execution is rolled back (line 13). Finally, ParaMeter "commits" the highest-priority elements in the system by re-executing them (lines 16-23). Any newly created work will be executed in the next round (line 19).…”
Section: Ordered Loops (mentioning)
confidence: 99%
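The round structure ParaMeter uses here might be schematized as below; every helper name is invented, and the body only mirrors the execute/rollback/commit sequence the excerpt describes:

```java
import java.util.ArrayList;
import java.util.List;

// Schematic of the round structure the excerpt describes. Each round
// speculatively executes all current work, rolls everything back, then
// "commits" the highest-priority elements by re-executing them for real;
// work they create joins the next round. Helper names are invented.
final class OrderedLoopRounds<T> {

    void run(List<T> work) {
        while (!work.isEmpty()) {
            speculateAll(work);                     // execute everything speculatively
            rollbackAll();                          // discard all speculative state
            List<T> ready = highestPriority(work);  // safe-to-commit elements
            List<T> created = reexecute(ready);     // commit by real re-execution
            work.removeAll(ready);
            work.addAll(created);                   // newly created work: next round
        }
    }

    // Hypothetical stages; a real tool would use conflict and priority
    // information gathered during the speculative pass.
    void speculateAll(List<T> w) {}
    void rollbackAll() {}
    List<T> highestPriority(List<T> w) { return List.of(w.get(0)); }
    List<T> reexecute(List<T> ready) { return new ArrayList<>(); }
}
```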