2017
DOI: 10.1145/3140587.3062355

Synthesis of divide and conquer parallelism for loops

Abstract: This paper focuses on automated synthesis of divide-and-conquer parallelism, which is a common parallel programming skeleton supported by many cross-platform multithreaded libraries. The challenges of producing (manually or automatically) a correct divide-and-conquer parallel program from a given sequential code are two-fold: (1) assuming that individual worker threads execute code identical to the sequential code, the programmer has to provide the extra code for dividing the tasks and combining the computati…
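As a rough, hedged illustration of the skeleton the abstract describes (not code from the paper), the sketch below splits an array range in half, runs the same sequential kernel on each half, and combines the partial results with a separately supplied join. The task (sum plus maximum prefix sum), the names, and the use of std::async are assumptions made for this example; the point is that the join needs an auxiliary value (the chunk sum), which is the kind of extra combining code the abstract refers to.

// Minimal illustrative sketch of divide-and-conquer parallelization
// (assumptions only, not code from the paper).  Hypothetical task:
// compute the sum of an array together with its maximum prefix sum.
#include <algorithm>
#include <cstddef>
#include <future>
#include <iostream>
#include <vector>

struct Partial {
    long long sum = 0;         // sum of the chunk
    long long max_prefix = 0;  // maximum prefix sum within the chunk
};

// Sequential kernel: the same code a single-threaded loop would run.
Partial run_sequential(const std::vector<long long>& a, std::size_t lo, std::size_t hi) {
    Partial p;
    for (std::size_t i = lo; i < hi; ++i) {
        p.sum += a[i];
        p.max_prefix = std::max(p.max_prefix, p.sum);
    }
    return p;
}

// The "extra code": a join that combines two adjacent partial results.
// It needs the left chunk's sum to shift the right chunk's prefix maxima.
Partial join(const Partial& l, const Partial& r) {
    return {l.sum + r.sum, std::max(l.max_prefix, l.sum + r.max_prefix)};
}

// Divide, solve the halves (one half on another thread), and combine.
Partial run_parallel(const std::vector<long long>& a, std::size_t lo, std::size_t hi) {
    if (hi - lo <= 1024) return run_sequential(a, lo, hi);
    std::size_t mid = lo + (hi - lo) / 2;
    auto left = std::async(std::launch::async, run_parallel, std::cref(a), lo, mid);
    Partial right = run_parallel(a, mid, hi);
    return join(left.get(), right);
}

int main() {
    std::vector<long long> a = {3, -1, 4, -1, 5, -9, 2, 6};
    Partial p = run_parallel(a, 0, a.size());
    std::cout << "sum=" << p.sum << " max_prefix=" << p.max_prefix << "\n";
}

Under these assumptions, the divide-and-recurse driver is boilerplate; the part that requires insight (or synthesis) is the join together with any auxiliary accumulators it needs.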

Cited by 17 publications (45 citation statements) | References 64 publications

Citation statements (ordered by relevance):
“…Several studies have shown the usefulness of partial evaluation or function simplification for developing parallel reductions, including those on deriving parallel reductions on arrays/lists and trees [Callahan 1992; Farzan and Nicolet 2017; Fisher and Ghuloum 1994; Jiang et al. 2018; Matsuzaki et al. 2005; Morihata and Matsuzaki 2010; Raychev et al. 2015] and those on parallel querying of semi-structured databases [Buneman et al. 2006; Cong et al. 2007, 2012]. λas extends those studies and builds a foundation for studying parallel reductions in general-purpose higher-order languages.…”
Section: Related Work
Mentioning (confidence: 99%)
“…We hope for parallel programming environments to support a wide variety of nontrivial reductions that real programs may contain, including those with more than one operator like poly, those using control operators such as break (Figure 1 (b)), those with prefix-sum patterns that calculate not only the summary but also all intermediate results (Figure 1 (c)), and those traversing nonlinear structures such as trees (Figure 1 (d)). Although there have been many studies on systematically developing parallel reductions [Chi and Mu 2011; Deitz et al. 2006; Emoto et al. 2010, 2012; Farzan and Nicolet 2017; Fedyukovich et al. 2017; Fisher and Ghuloum 1994; Gorlatch 1999; Hu et al. 1997; Jiang et al. 2018; Matsuzaki et al. 2005, 2006; Matsuzaki 2010, 2011; Morihata et al. 2009; Morita et al. 2007; Raychev et al. 2015; Sato and Iwasaki 2011; Suganuma et al. 1996; Xu et al. 2004], those studies consider only specific forms of reductions, and none of them can uniformly deal with all the kinds of reductions shown in Figure 1. This paper introduces a calculus named λas, a simply typed lambda calculus with algebraic simplification.…”
Section: Introduction
Mentioning (confidence: 99%)
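The excerpt above singles out reductions that use control operators such as break as one of the harder cases. As a hedged, self-contained sketch (not taken from the cited papers; the task and names are hypothetical), the code below shows the shape of such a reduction and the small join that still lets two chunks be processed independently: the earlier hit wins.

// Illustrative sketch only (assumptions, not code from the cited papers):
// a reduction with an early exit, i.e. the "break" pattern mentioned above.
// Hypothetical task: index of the first negative element of an array.
#include <cstddef>
#include <iostream>
#include <optional>
#include <string>
#include <vector>

// Sequential kernel with an early exit, as a loop with break would have.
std::optional<std::size_t> first_negative(const std::vector<int>& a,
                                          std::size_t lo, std::size_t hi) {
    for (std::size_t i = lo; i < hi; ++i) {
        if (a[i] < 0) return i;  // stop as soon as a hit is found
    }
    return std::nullopt;
}

// Join for two adjacent chunks: the earlier (left) hit takes precedence.
std::optional<std::size_t> join(std::optional<std::size_t> left,
                                std::optional<std::size_t> right) {
    return left ? left : right;
}

int main() {
    std::vector<int> a = {5, 3, 8, -2, 7, -9};
    // The two halves are independent and could run on separate threads.
    auto l = first_negative(a, 0, a.size() / 2);
    auto r = first_negative(a, a.size() / 2, a.size());
    auto ans = join(l, r);
    std::cout << (ans ? std::to_string(*ans) : std::string("none")) << "\n";  // prints 3
}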
“…Several studies have shown the usefulness of partial evaluation or function simplification for developing parallel reductions, including those on deriving parallel reductions on arrays/lists and trees (Callahan, 1992; Fisher & Ghuloum, 1994; Hu et al., 1998; Chin et al., 1998; Matsuzaki et al., 2005; Morihata & Matsuzaki, 2010; Raychev et al., 2015; Farzan & Nicolet, 2017; Jiang et al., 2018; Farzan & Nicolet, 2019) and those on parallel querying of semi-structured databases (Buneman et al., 2006; Cong et al., 2007, 2012). They generally focus on specific reduction patterns to enable automation of reduction parallelisation.…”
Section: Related Work
Mentioning (confidence: 99%)
“…Synthesizing for Acceleration: Other work has used program synthesis as a mechanism by which programs can be automatically accelerated. For example, recent work uses program synthesis to generate parallel versions of sequential code, with a focus on numerical array-processing algorithms with single-pass control flow [39], [40]. The space of programs tackled, however, is highly restricted.…”
Section: Related Work
Mentioning (confidence: 99%)