2007
DOI: 10.1142/s0129626407002843

Challenges in Parallel Graph Processing

Abstract: Graph algorithms are becoming increasingly important for solving many problems in scientific computing, data mining and other domains. As these problems grow in scale, parallel computing resources are required to meet their computational and memory requirements. Unfortunately, the algorithms, software, and hardware that have worked well for developing mainstream parallel scientific applications are not necessarily effective for large-scale graph problems. In this paper we present the inter-relationships betwee…


Cited by 393 publications (200 citation statements)
References 13 publications
“…The absolute performance numbers we achieve on the large-scale parallel systems Hopper and Franklin at NERSC are significantly higher than prior work. The performance results, coupled with our analysis of the communication and memory access costs of the two algorithms, challenge the conventional wisdom that fine-grained communication is inherent in parallel graph algorithms and necessary for achieving high performance [25]. We list below optimizations that we intend to explore in future work, and some open questions related to the design of distributed-memory graph algorithms.…”
Section: Discussion
confidence: 97%
“…There is thus an additional all-to-all communication step at the end of each frontier expansion. Inter-processor communication is considered a significant performance bottleneck in prior work on distributed graph algorithms [10,25]. The relative costs of inter-processor communication and local computation depend on the quality of the graph partitioning and the topological characteristics of the interconnection network.…”
Section: Parallel BFS: Prior Work
confidence: 99%
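The frontier-expansion-plus-all-to-all pattern described in this statement can be illustrated with a minimal single-process sketch of level-synchronous distributed BFS. This is an assumption-laden simplification, not code from either paper: the names `bfs_levels`, `owner`, and `nparts` are illustrative, and the all-to-all exchange that a real MPI implementation would perform is modeled here with in-memory outbox buffers.

```python
from collections import defaultdict

def bfs_levels(adjacency, owner, nparts, source):
    """Level-synchronous BFS over a graph partitioned across `nparts`
    logical processors. `owner[v]` names the partition holding vertex v.
    Single-process simulation: the end-of-level all-to-all exchange is
    modeled by the `outbox` buffers below."""
    dist = {source: 0}
    # Each partition holds its local slice of the current frontier.
    frontiers = [[] for _ in range(nparts)]
    frontiers[owner[source]].append(source)
    level = 0
    while any(frontiers):
        # Each partition expands its local frontier and buckets the
        # discovered neighbors by owning partition (its "outbox").
        outbox = [defaultdict(list) for _ in range(nparts)]
        for p in range(nparts):
            for u in frontiers[p]:
                for v in adjacency[u]:
                    outbox[p][owner[v]].append(v)
        # All-to-all step: every partition receives the vertices it owns,
        # then keeps only the ones not yet visited as its next frontier.
        frontiers = [[] for _ in range(nparts)]
        for p in range(nparts):
            for q in range(nparts):
                for v in outbox[q][p]:
                    if v not in dist:
                        dist[v] = level + 1
                        frontiers[p].append(v)
        level += 1
    return dist
```

Because every partition may discover neighbors owned by every other partition, the exchange at the end of each level is inherently all-to-all, which is exactly the communication step the quoted passage identifies as the bottleneck.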
“…A survey [29] of both hardware and software concerns for parallel graph processing lists 'poor locality' as one of its chief concerns. Although it is cognizant of the greater cost of heavily multithreaded systems, it argues they are better for graph algorithms due to their memory latency tolerance and support for fine-grained dynamic threading.…”
Section: Related Work
confidence: 99%
“…This approach is in-place and recursive in nature. Challenges in parallel graph processing are discussed in [3]. Our computations involve matrices, and therefore fast matrix multiplication algorithms such as [19], [6] are also of concern to us. Since CPU implementations have several performance limitations, cache-optimization techniques and cache-friendly implementations using recursion for dense graphs are given in [2] and [5].…”
Section: Problem Time Complexity
confidence: 99%