Massive Graph Analytics 2022
DOI: 10.1201/9781003033707-18

Executing Dynamic Data-Graph Computations Deterministically Using Chromatic Scheduling

Abstract: A data-graph computation, popularized by such programming systems as Galois, Pregel, GraphLab, PowerGraph, and GraphChi, is an algorithm that performs local updates on the vertices of a graph. During each round of a data-graph computation, an update function atomically modifies the data associated with a vertex as a function of the vertex's prior data and that of adjacent vertices. A dynamic data-graph computation updates only an active subset of the vertices during a round, and those updates determine the set…
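The update model in the abstract can be sketched in a few lines of Python. The sketch below is illustrative only and is not the chapter's implementation: the adjacency-dict graph representation, the greedy coloring, and the shortest-path `relax` update rule are all assumptions chosen to make the idea concrete. Vertices of the same color share no edge, so all updates within one color set could run in parallel without races; iterating the color sets in a fixed order is what makes the schedule deterministic.

```python
import math
from collections import defaultdict

def greedy_color(adj):
    """Assign each vertex the smallest color unused by already-colored neighbors."""
    color = {}
    for v in sorted(adj):
        used = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in used:
            c += 1
        color[v] = c
    return color

def run_dynamic(adj, data, update, active, rounds=10):
    """Dynamic data-graph computation scheduled by vertex color.

    Each round processes only the active vertices, grouped by color;
    `update(v, data, adj)` returns (new_value, neighbors_to_activate).
    """
    color = greedy_color(adj)
    for _ in range(rounds):
        if not active:
            break
        next_active = set()
        by_color = defaultdict(list)
        for v in active:                      # group this round's work by color
            by_color[color[v]].append(v)
        for c in sorted(by_color):            # fixed color order => determinism
            for v in sorted(by_color[c]):     # fixed order within a color set
                new_val, woken = update(v, data, adj)
                data[v] = new_val
                next_active |= woken
        active = next_active
    return data

# Toy update rule (an assumption, not from the chapter): relax shortest-path
# distances on a 4-vertex path, activating neighbors whenever a value drops.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
dist = {0: 0, 1: math.inf, 2: math.inf, 3: math.inf}

def relax(v, data, adj):
    best = min([data[v]] + [data[u] + 1 for u in adj[v]])
    woken = set(adj[v]) if best < data[v] else set()
    return best, woken

result = run_dynamic(adj, dist, relax, active=set(adj))
# → {0: 0, 1: 1, 2: 2, 3: 3}
```

Because same-colored vertices are never adjacent, replacing the inner loop with a parallel map over `by_color[c]` would leave the result unchanged, which is the determinism property the chapter's title refers to.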

Cited by 5 publications (5 citation statements)
References 72 publications
“…Thus, in this work, we focus on minibatch training. For minibatch training, existing systems include Dist-DGL (Zheng et al, 2020), Quiver, GNNLab (Yang et al, 2022c), WholeGraph (Yang et al, 2022b), DSP (Cai et al, 2023), PGLBox (Jiao et al, 2023), SALIENT++ (Kaler et al, 2023), NextDoor (Jangda et al, 2021), P3 (Gandhi & Iyer, 2021). Here, the main performance bottleneck is the cost of sampling minibatches.…”
Section: Distributed GNN Systems
confidence: 99%
“…It allows us to offer a feasible solution and establishes lower bound results for EVG. In practice, they can be supported by invoking established solutions, e.g., parallel GNN inference [19,31] and subgraph pattern matching [23,46], respectively. Pattern generators.…”
Section: Generating Explanation Views
confidence: 99%
“…Efficient subgraph extraction is the main direction of recent system works to scale SGRL models. These techniques include PPR-based [4,52] and random walk-based [51] subgraph samplers, node neighborhood sampling through CUDA kernel (DGL, [11]), tensor operations (PyG, [36]), and performance-engineered sampler (SALIENT, [21]), as well as parallel sampling for temporal graphs [58]. Some frameworks also customize data structures to better support subgraph operations and gain higher throughput, such as associative arrays in SUREL [51], temporal-CSR in TGL [58] and GPU-orientated dictionary in NAT [32].…”
Section: Related Work
confidence: 99%