2015
DOI: 10.1109/tpami.2014.2334607
Bayesian Models of Graphs, Arrays and Other Exchangeable Random Structures

Abstract: The natural habitat of most Bayesian methods is data represented by exchangeable sequences of observations, for which de Finetti's theorem provides the theoretical foundation. Dirichlet process clustering, Gaussian process regression, and many other parametric and nonparametric Bayesian models fall within the remit of this framework; many problems arising in modern data analysis do not. This article provides an introduction to Bayesian models of graphs, matrices, and other data that can be modeled by random st…

Cited by 179 publications (166 citation statements)
References 47 publications
“…We denote the probability that a randomly selected node has a degree k by f ( k ), the degree distribution of G by , and the mean degree of G by μ ( G ). We assume that G is involution invariant [23, 24], that is, from the vantage point of any randomly selected vertex, the rest of the connected network is probabilistically the same.…”
Section: Background and Approach
confidence: 99%
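The quantities in the excerpt above, the degree distribution f(k) and the mean degree μ(G), are easy to compute for a finite graph. A minimal sketch in Python; the adjacency-list graph here is hypothetical example data, not from the cited paper:

```python
from collections import Counter

# Toy undirected graph as an adjacency list (hypothetical example data).
graph = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}

degrees = [len(neigh) for neigh in graph.values()]
n = len(graph)

# Empirical degree distribution f(k): fraction of nodes with degree k.
f = {k: c / n for k, c in Counter(degrees).items()}

# Mean degree mu(G): average over all vertices.
mu = sum(degrees) / n

print(f)   # {2: 0.5, 3: 0.25, 1: 0.25}
print(mu)  # 2.0
```

Involution invariance is a distributional property of the infinite graph and is not captured by such finite summaries; this only illustrates the notation.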
“…We denote the probability that a randomly selected node has a degree k by f ( k ), the degree distribution of G by , and the mean degree of G by μ ( G ). We assume that G is involution invariant 23, 24 , that is from the vantage point of any randomly selected vertex, the rest of the connected network is probabilistically the same.…”
Section: Background and Approachmentioning
confidence: 99%
“…If the missing structure is relevant to statistical analysis, its absence constitutes a form of model misspecification. To put the problem into context, it is useful to compare some commonly used models: (a) graphon models (as mentioned above); these have recently received considerable attention in statistics and include stochastic block models (Goldenberg et al.), and many models that are popular in machine learning (see Orbanz and Roy); (b) models that randomize a graph to match a given degree sequence, such as the configuration model (Durrett); (c) models describing a formation process, such as PA models (Barabási and Albert).…”
Section: Introduction
confidence: 99%
“…Borgs et al. and Orbanz and Roy): the generated graph is dense; unless it is k‐partite or disconnected, the distance between any two vertices in an infinite graph is almost surely 1 or 2, etc. A graphon can encode any fixed pattern on some number n of vertices, but this pattern then occurs on every possible subgraph of size n with fixed probability. (b) Configuration models are popular in probability (because of their simplicity) but have limited use in statistics unless the quantity of interest is the degree sequence itself: they are ‘maximally random’ given the degrees, in a manner similar to exponential families being maximally random given a sufficient statistic, and are thus insensitive to any structure that is not captured by the degrees.…”
Section: Introduction
confidence: 99%
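The graphon sampling scheme the citation statements refer to is simple to state: each vertex i receives an independent uniform label U_i, and edge {i, j} is included independently with probability W(U_i, U_j). A minimal sketch, assuming a user-supplied graphon function W (the constant graphon used below, which recovers an Erdős–Rényi graph, is a hypothetical example):

```python
import random

def sample_graphon_graph(n, W, seed=0):
    """Sample an n-vertex graph from a graphon W: [0,1]^2 -> [0,1].

    Each vertex i gets an independent uniform label u[i]; the edge
    {i, j} is present independently with probability W(u[i], u[j]).
    """
    rng = random.Random(seed)
    u = [rng.random() for _ in range(n)]
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < W(u[i], u[j]):
                edges.add((i, j))
    return edges

# Constant graphon W = 0.3: every pair connects with probability 0.3,
# i.e. the Erdos-Renyi model G(n, 0.3) as a special case.
edges = sample_graphon_graph(200, lambda x, y: 0.3, seed=42)
print(len(edges))  # roughly 0.3 * 200 * 199 / 2, i.e. about 5970
```

The density remark in the excerpt is visible here: the expected number of edges grows like n², since every pair is tested against a fixed probability that does not shrink with n.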