Proceedings of the Ninth Annual ACM Symposium on Parallel Algorithms and Architectures (SPAA '97), 1997
DOI: 10.1145/258492.258494

Space-efficient scheduling of parallelism with synchronization variables

Abstract: Recent work on scheduling algorithms has resulted in provable bounds on the space taken by parallel computations in relation to the space taken by sequential computations. The results for online versions of these algorithms, however, have been limited to computations in which threads can only synchronize with ancestor or sibling threads. Such computations do not include languages with futures or user-specified synchronization constraints. Here we extend the results to languages with synchronization variables. …
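The synchronization variables the abstract refers to are, roughly, write-once cells on which a consumer thread can block until a producer writes them; futures are one language-level realization. As a minimal illustration (not the paper's formalism), the following Java sketch uses a CompletableFuture as a write-once synchronization variable; the class and value names are illustrative assumptions.

import java.util.concurrent.CompletableFuture;

public class SyncVarDemo {
    public static void main(String[] args) throws Exception {
        // A write-once cell: consumers block on join() until a producer completes it.
        CompletableFuture<Integer> syncVar = new CompletableFuture<>();

        // Consumer thread: suspends until the value is available.
        Thread consumer = new Thread(() -> {
            int value = syncVar.join();   // blocks until complete() is called
            System.out.println("consumer read " + value);
        });
        consumer.start();

        // Producer thread: writes the cell exactly once, releasing the consumer.
        Thread producer = new Thread(() -> syncVar.complete(42));
        producer.start();

        consumer.join();
        producer.join();
    }
}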

Cited by 37 publications (34 citation statements). References 37 publications.

Citation statements (ordered by relevance):
“…On the other hand, threading that is too fine-grained increases scheduling and synchronization overheads, as well as any instruction overheads for running parallel code versus sequential code [7]. Thus judiciously choosing task granularity is an important yet challenging problem.…”
Section: Automatic Selection of Thread Granularity
confidence: 99%
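The granularity trade-off this statement describes is commonly handled with a sequential cutoff: recurse in parallel only while subproblems are large enough to amortize task overhead. A minimal Java sketch using ForkJoinPool; the threshold value and the summing task are illustrative assumptions, not anything from the cited work.

import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Parallel array sum with a sequential cutoff to control task granularity.
class SumTask extends RecursiveTask<Long> {
    private static final int CUTOFF = 10_000; // illustrative threshold
    private final long[] a;
    private final int lo, hi;

    SumTask(long[] a, int lo, int hi) { this.a = a; this.lo = lo; this.hi = hi; }

    @Override
    protected Long compute() {
        if (hi - lo <= CUTOFF) {          // small subproblem: run sequentially
            long s = 0;
            for (int i = lo; i < hi; i++) s += a[i];
            return s;
        }
        int mid = (lo + hi) >>> 1;        // large subproblem: split in two
        SumTask left = new SumTask(a, lo, mid);
        SumTask right = new SumTask(a, mid, hi);
        left.fork();                      // run left half asynchronously
        long r = right.compute();         // compute right half in this thread
        return r + left.join();
    }

    public static void main(String[] args) {
        long[] a = new long[1_000_000];
        java.util.Arrays.fill(a, 1L);
        long total = ForkJoinPool.commonPool().invoke(new SumTask(a, 0, a.length));
        System.out.println(total);        // prints 1000000
    }
}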
“…Blelloch et al. [7] propose to use futures to generate "non-linear pipelines," another form of parallelism that creates a deterministic dependence structure, and study scheduling bounds for such programs. Their use of futures falls under the structured use of futures.…”
Section: Related Work
confidence: 99%
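In a pipeline built from futures, each stage's output is a future that downstream stages consume, so the dependences form a DAG rather than a single chain. A hedged Java sketch of this idea; the stage functions and the diamond-shaped topology are illustrative assumptions.

import java.util.concurrent.CompletableFuture;

public class FuturePipeline {
    public static void main(String[] args) {
        // Stage A produces a value asynchronously.
        CompletableFuture<Integer> stageA =
                CompletableFuture.supplyAsync(() -> 21);

        // Two independent stages consume A's future concurrently...
        CompletableFuture<Integer> stageB = stageA.thenApplyAsync(x -> x * 2);
        CompletableFuture<Integer> stageC = stageA.thenApplyAsync(x -> x + 1);

        // ...and a final stage joins both, so the dependence structure
        // is a DAG (a "non-linear pipeline") rather than a linear chain.
        CompletableFuture<Integer> stageD =
                stageB.thenCombine(stageC, Integer::sum);

        System.out.println(stageD.join()); // prints 64 (42 + 22)
    }
}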
“…Despite their optimization work, their static scheduling methods still achieve worse load balance than the dynamic scheduling adopted by the EasyPDP runtime, especially for irregular DP applications. In contrast, schedulers that take data locality into account, such as the work-stealing (WS) scheduler [45] and the parallel depth-first (PDF) scheduler [47], [48], [49], can also perform well on DP algorithms owing to reduced cache and TLB misses. We explained in Section 4.2.2 that fault tolerance is crucial for parallel DP algorithms, yet no previous work other than our EasyPDP provides fault-tolerance and recovery mechanisms.…”
Section: Related Work
confidence: 99%
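The locality benefit attributed to work stealing above comes from its deque discipline: each worker pushes and pops its own tasks LIFO (working depth-first on recently created, cache-warm tasks), while idle workers steal the oldest task from a victim's other end. A hedged Java sketch of that rule only; real schedulers such as ForkJoinPool use lock-free deques, and all names here are illustrative.

import java.util.ArrayDeque;
import java.util.Deque;

// Simplified work-stealing deques: owners push/pop at the LIFO end
// (good locality), thieves steal from the FIFO end. Coarse locking
// is used for clarity; production deques are lock-free.
public class WorkStealingSketch {
    private final Deque<Runnable>[] deques;

    @SuppressWarnings("unchecked")
    WorkStealingSketch(int workers) {
        deques = new Deque[workers];
        for (int i = 0; i < workers; i++) deques[i] = new ArrayDeque<>();
    }

    // Owner pushes newly spawned tasks onto its own deque.
    void push(int worker, Runnable task) {
        synchronized (deques[worker]) { deques[worker].push(task); }
    }

    // Owner takes its most recently pushed task first: depth-first locally.
    Runnable popOwn(int worker) {
        synchronized (deques[worker]) { return deques[worker].pollFirst(); }
    }

    // An idle worker steals the oldest task of some victim.
    Runnable steal(int thief) {
        for (int v = 0; v < deques.length; v++) {
            if (v == thief) continue;
            synchronized (deques[v]) {
                Runnable t = deques[v].pollLast();
                if (t != null) return t;
            }
        }
        return null; // nothing to steal anywhere
    }
}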