2020
DOI: 10.48550/arxiv.2007.12102
Preprint
Efficient and near-optimal algorithms for sampling small connected subgraphs

Abstract: We consider the following problem: given a graph G = (V, E) and an integer k, sample a connected induced k-node subgraph of G (also called a k-graphlet) uniformly at random. The best known algorithms achieve ε-uniformity and are based on random walks or color coding. The random walk approach is elegant, but has a worst-case running time of Δ^{Θ(k)} log(n/ε), where n = |V| and Δ is the maximum degree of G. Color coding is more efficient, but requires a preprocessing phase with running time and space 2^{Θ(k)} O(m log 1…
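To make the problem statement concrete, here is a minimal sketch of an exactly uniform (but potentially slow) sampler: draw a uniform k-subset of V and accept it only if its induced subgraph is connected. This is rejection sampling for illustration only; it is not the random-walk or color-coding algorithm the abstract describes, and the function names are hypothetical.

```python
import random

def is_connected(adj, S):
    """Check connectivity of the subgraph induced by node set S
    via a DFS restricted to S. `adj` maps node -> list of neighbors."""
    S = set(S)
    start = next(iter(S))
    seen = {start}
    stack = [start]
    while stack:
        u = stack.pop()
        for v in adj[u]:
            if v in S and v not in seen:
                seen.add(v)
                stack.append(v)
    return seen == S

def sample_k_graphlet(adj, k, rng=random, max_tries=100_000):
    """Sample a connected induced k-node subgraph (k-graphlet) uniformly
    at random by rejection: every connected k-set is equally likely,
    but the acceptance rate can be tiny on large sparse graphs."""
    nodes = list(adj)
    for _ in range(max_tries):
        S = rng.sample(nodes, k)
        if is_connected(adj, S):
            return frozenset(S)
    raise RuntimeError("no connected k-node set found within max_tries")
```

On a path graph 1–2–3–4 with k = 2, the sampler returns one of the three edges {1,2}, {2,3}, {3,4}, each with probability 1/3; the Δ^{Θ(k)}-type costs discussed in the abstract arise precisely because smarter methods avoid this rejection step.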

Cited by 1 publication (1 citation statement)
References 19 publications (54 reference statements)
“…This is often the case for image-processing applications [7,33,35], as it is straightforward to sample a k × k patch uniformly at random from an image. However, the similar problem of uniformly randomly sampling a k-node connected subnetwork from a network is not straightforward [3,16,22,58]. For our purpose of developing dictionary learning for networks, we use motif sampling, which was introduced recently in [29].…”
Section: Supplementary Information: Learning Low-rank Latent Mesoscal…
Citation type: mentioning; confidence: 99%
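The citing passage contrasts the hard graph case with the easy image case: every k × k patch of an image can be sampled uniformly by just picking a uniform top-left corner. A minimal sketch of that easy case (function name is illustrative, not from either paper):

```python
import random

def sample_patch(image, k, rng=random):
    """Uniformly sample a k x k patch from a 2D grid (list of rows):
    choosing the top-left corner uniformly makes every valid patch
    position equally likely, since patches and corners are in bijection."""
    h, w = len(image), len(image[0])
    i = rng.randrange(h - k + 1)  # top row of the patch
    j = rng.randrange(w - k + 1)  # left column of the patch
    return [row[j:j + k] for row in image[i:i + k]]
```

No such bijection with a simple uniform choice exists for connected k-node subgraphs of a general network, which is why the cited works resort to random walks, color coding, or motif sampling.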