Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis 2015
DOI: 10.1145/2807591.2807604
Scaling iterative graph computations with GraphMap

Abstract: In recent years, systems researchers have devoted considerable effort to the study of large-scale graph processing. Existing distributed graph processing systems such as Pregel, based solely on distributed memory for their computations, fail to provide seamless scalability when the graph data and their intermediate computational results no longer fit into memory; and most distributed approaches for iterative graph computations do not consider utilizing secondary storage a viable solution. This paper presen…
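The out-of-core strategy the abstract alludes to — keeping the (small) vertex state resident in memory while streaming edge partitions from secondary storage on every iteration — can be illustrated with a minimal single-machine sketch. The partitioning scheme, file format, and PageRank workload below are illustrative assumptions, not GraphMap's actual design:

```python
import json
import os

def write_partitions(edges, num_parts, directory):
    """Partition edges by source vertex and spill each partition to disk."""
    parts = [[] for _ in range(num_parts)]
    for src, dst in edges:
        parts[src % num_parts].append((src, dst))
    paths = []
    for i, part in enumerate(parts):
        path = os.path.join(directory, f"part-{i}.json")
        with open(path, "w") as f:
            json.dump(part, f)
        paths.append(path)
    return paths

def pagerank_out_of_core(num_vertices, part_paths, iters=10, d=0.85):
    """Iterative PageRank: only O(V) vertex state lives in memory;
    the O(E) edge set is re-read from disk each iteration."""
    ranks = [1.0 / num_vertices] * num_vertices   # in-memory vertex state
    out_deg = [0] * num_vertices
    for path in part_paths:                        # one streaming pass for degrees
        with open(path) as f:
            for src, _ in json.load(f):
                out_deg[src] += 1
    for _ in range(iters):
        contrib = [0.0] * num_vertices
        for path in part_paths:                    # stream each partition from disk
            with open(path) as f:
                for src, dst in json.load(f):
                    contrib[dst] += ranks[src] / out_deg[src]
        ranks = [(1 - d) / num_vertices + d * c for c in contrib]
    return ranks
```

Because each iteration only touches one partition file at a time, peak memory is bounded by the vertex state plus a single partition, which is the property that lets such systems scale past RAM.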


Cited by 26 publications (7 citation statements)
References 24 publications
“…We note that inter-GPU bandwidth within a node is larger than inter-node bandwidth, so our comparisons must be considered in this light; however, as we noted in Section I, we believe that our results motivate a future focus. GraphReduce [15] and Frog (asynchronous) [16], [17] are out-of-core GPU approaches, GraphMap [27] targets CPU distributed-memory clusters, and Totem [13] is a heterogeneous CPU-GPU approach. While out-of-core approaches have the promise to process graphs much larger than in-core work such as ours, our framework can comfortably process the largest graphs they used in any of their results [15]–[17], [27]. For these comparisons, we use the smallest number of GPUs possible for individual comparisons, and achieve much less processing time.…”
Section: Comparisons vs. Previous MGPU Work
confidence: 81%
“…GraphReduce [15] and Frog (asynchronous) [16], [17] are out-of-core GPU approaches, GraphMap [27] targets CPU distributed-memory clusters, and Totem [13] is a heterogeneous CPU-GPU approach. While out-of-core approaches have the promise to process graphs much larger than in-core work such as ours, our framework can comfortably process the largest graphs they used in any of their results [15]–[17], [27]. For these comparisons, we use the smallest number of GPUs possible for individual comparisons, and achieve much less processing time.…”
Section: Comparisons vs. Previous MGPU Work
confidence: 99%
“…Graph data analysis has attracted active research in the last decade (Zhou et al., 2010; Cheng et al., 2011; Zhou and Liu, 2012; Cheng et al., 2012; Lee et al., 2013; Palanisamy et al., 2014; Su et al., 2015; Zhou et al., 2015b; Bao et al., 2015; Zhou et al., 2015d; Zhou et al., 2015a,c; Lee et al., 2015; Zhou et al., 2016; Zhou, 2017; Palanisamy et al., 2018; Zhou et al., 2018b,a; Zhou et al., 2019c,b,d; Zhou and Liu, 2019; Wu et al., 2021a; Zhang et al., 2020b; Zhou et al., 2020c,a; Goswami et al., 2020). Various adversarial attack models have been developed to show the vulnerability of graph learning models in node classification (Dai et al., 2018; Zügner et al., 2018; Wang and Gong, 2019; Xu et al., 2019b; Zügner and Günnemann, 2019; Takahashi, 2019; Entezari et al., 2020; Sun et al., 2020b; Zügner et al., 2020; Xi et al., 2021; He et al., 2021), community detection (Chen et al., 2017b; Waniek et al., 2018), network embedding (Chen et al., 2018; Bojchevski and Günnemann, 2019; Chang et al., 2020), graph classification (Dai et al., 2018; Xi et al., 2021), link prediction…”
Section: Adversarial Attacks on Text and Graph Data
confidence: 99%
“…The procedure GETOUTSTANDINGUSAGE() accepts a set of partitions that each needs to be generated and returns the additional budget of each dependent partition. It first creates a map C to store the number of times each partition is visited and a queue Q to keep the partitions to be visited (lines 13–14). Q is initialized with the partitions in parts.…”
Section: B. Algorithm 1: Partition Identification
confidence: 99%
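The queue-and-counter traversal described in the quote above can be sketched as follows. This is a hypothetical reconstruction: the `deps` dependency map, the partition names, and the interpretation of the counts as "additional budget" are assumptions, not the cited algorithm's actual code.

```python
from collections import deque

def get_outstanding_usage(parts, deps):
    """Traverse partition dependencies breadth-first, counting how many
    times each dependent partition is reached; the count stands in for
    the additional budget that partition must support.

    deps maps a partition to the partitions it is generated from and is
    assumed to be acyclic (a cycle would make the traversal loop forever).
    """
    C = {}                  # map: visit count per dependent partition (line 13)
    Q = deque(parts)        # queue of partitions still to be visited (line 14)
    while Q:
        p = Q.popleft()
        for dep in deps.get(p, ()):
            C[dep] = C.get(dep, 0) + 1   # dep reached once more
            Q.append(dep)                # visit dep's own dependencies too
    return C
```

For example, if partitions A and D both depend (transitively) on C, the traversal reaches C twice, so C's count is 2 while B's is 1.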
“…Both datasets were stored on HDFS and the data block size was set to the default value (i.e., 64MB). They have also been widely used in other works [13]–[15].…”
Section: Datasets
confidence: 99%