2018
DOI: 10.1145/3276491

GraphIt: a high-performance graph DSL

Abstract: The performance bottlenecks of graph applications depend not only on the algorithm and the underlying hardware, but also on the size and structure of the input graph. As a result, programmers must try different combinations of a large set of techniques, which make tradeoffs among locality, work-efficiency, and parallelism, to develop the best implementation for a specific algorithm and type of graph. Existing graph frameworks and domain-specific languages (DSLs) lack flexibility, supporting only a limited set …
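The tradeoff the abstract describes shows up even in the choice of traversal direction for a single edge-processing step. The sketch below is plain C++, not GraphIt syntax; the CSR layout, frontier representation, and function names are illustrative assumptions used only to contrast a push-style and a pull-style traversal of the kind a programmer would otherwise hand-tune per graph.

```cpp
#include <cstdint>
#include <vector>

// Minimal CSR graph; layout and names are illustrative, not GraphIt internals.
struct CSR {
    std::vector<int64_t> offsets;   // size |V|+1, prefix sums of degrees
    std::vector<int32_t> neighbors; // size |E|
    int32_t num_vertices() const { return static_cast<int32_t>(offsets.size()) - 1; }
};

// Push: iterate over the active frontier and write to out-neighbors.
// Work-efficient when the frontier is sparse, but the scattered writes hurt
// locality and would need atomics under parallel execution.
void push_step(const CSR& out_edges, const std::vector<int32_t>& frontier,
               std::vector<int32_t>& parent, std::vector<int32_t>& next) {
    for (int32_t u : frontier)
        for (int64_t e = out_edges.offsets[u]; e < out_edges.offsets[u + 1]; ++e) {
            int32_t v = out_edges.neighbors[e];
            if (parent[v] == -1) { parent[v] = u; next.push_back(v); }
        }
}

// Pull: iterate over all unvisited vertices and read from in-neighbors.
// Sequential reads and no atomics, which pays off on dense frontiers,
// but it touches every vertex regardless of frontier size.
void pull_step(const CSR& in_edges, const std::vector<bool>& in_frontier,
               std::vector<int32_t>& parent, std::vector<int32_t>& next) {
    for (int32_t v = 0; v < in_edges.num_vertices(); ++v) {
        if (parent[v] != -1) continue;
        for (int64_t e = in_edges.offsets[v]; e < in_edges.offsets[v + 1]; ++e) {
            int32_t u = in_edges.neighbors[e];
            if (in_frontier[u]) { parent[v] = u; next.push_back(v); break; }
        }
    }
}
```

Which variant wins (and how it combines with further choices such as frontier data structures, blocking, or NUMA-aware partitioning) depends on frontier density and graph structure, which is the flexibility problem the abstract says existing frameworks and DSLs do not expose.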

Cited by 142 publications (93 citation statements)
References 56 publications (50 reference statements)
“…For the dense traversals, instead of simply mapping over the vertices with a parallel-for loop, we added an edge-aware parallelization scheme that creates tasks containing a roughly equal number of edges that are managed by the work-stealing scheduler [98]. We found this optimization to significantly improve load balancing for hypergraphs with highly-skewed degree distributions.…”
Section: Optimizations
Citation type: mentioning, confidence: 99%
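A minimal sketch of the edge-aware scheme described in the excerpt above, assuming a CSR-style offsets array. The helper names, the grain default, and the use of OpenMP dynamic scheduling as a stand-in for the cited work-stealing scheduler are assumptions for illustration, not details taken from the paper.

```cpp
#include <cstdint>
#include <functional>
#include <utility>
#include <vector>

// Split the vertex range into chunks holding roughly `grain` edges each,
// using the CSR offsets array (offsets[v+1] - offsets[v] is the degree of v).
std::vector<std::pair<int32_t, int32_t>>
edge_aware_chunks(const std::vector<int64_t>& offsets, int64_t grain) {
    std::vector<std::pair<int32_t, int32_t>> chunks;
    int32_t n = static_cast<int32_t>(offsets.size()) - 1;
    int32_t start = 0;
    while (start < n) {
        int32_t end = start;
        int64_t budget = offsets[start] + grain;  // target end position in the edge array
        while (end < n && offsets[end + 1] <= budget) ++end;
        if (end == start) ++end;                  // a single very high-degree vertex gets its own task
        chunks.emplace_back(start, end);
        start = end;
    }
    return chunks;
}

// Dense traversal: each chunk becomes one unit of work. OpenMP's dynamic
// schedule plays the role of the work-stealing runtime mentioned in the excerpt.
void dense_map(const std::vector<int64_t>& offsets,
               const std::function<void(int32_t)>& visit,
               int64_t grain = 4096 /* illustrative default, not from the paper */) {
    auto chunks = edge_aware_chunks(offsets, grain);
    #pragma omp parallel for schedule(dynamic, 1)
    for (long long i = 0; i < static_cast<long long>(chunks.size()); ++i)
        for (int32_t v = chunks[i].first; v < chunks[i].second; ++v)
            visit(v);  // user-supplied per-vertex work over its incident edges
}
```

With a plain parallel-for over vertices, a chunk that happens to contain a few very high-degree vertices dominates the critical path; sizing tasks by edge count keeps per-task work roughly uniform, which is the load-balancing effect the excerpt reports for skewed hypergraph degree distributions.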
“…Several compilers target graph operations such as breadth-first search or shortest paths (e.g., [Wang et al. 2016; Zhang et al. 2018]). In contrast, we focus on generating high-performance traversal code for spatially coherent access to hierarchical and sparse data structures.…”
Section: Related Work 8.1 Array Compilers
Citation type: mentioning, confidence: 99%
“…Besides Gluon, there are other prior works that propose various compilers for converting BSP-style programs written in the Galois model to distributed systems [19] and to GPUs [42]. Many shared-memory frameworks [35, 40, 42, 49, 57, 58] have also been proposed for graph analytics.…”
Section: Related Work
Citation type: mentioning, confidence: 99%