2021
DOI: 10.48550/arxiv.2104.08438
Preprint

Bayesian graph convolutional neural networks via tempered MCMC

Rohitash Chandra, Ayush Bhagat, Manavendra Maharana, et al.

Abstract: Deep learning models, such as convolutional neural networks, have long been applied to image and multimedia tasks, particularly those with structured data. More recently, attention has turned to unstructured data that can be represented via graphs. These types of data are often found in health and medicine, social networks, and research data repositories. Graph convolutional neural networks have recently gained attention in the field of deep learning that takes advantage of graph-based data represent…
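For orientation, the graph convolution the title refers to is typically the Kipf–Welling propagation rule H^(l+1) = σ(D̂^(-1/2) Â D̂^(-1/2) H^(l) W^(l)), with Â = A + I. A minimal NumPy sketch of one such layer follows; the function name and toy data are illustrative, not the paper's code:

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution layer: H' = ReLU(D^(-1/2) (A+I) D^(-1/2) H W)."""
    A_hat = A + np.eye(A.shape[0])                 # adjacency with self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))  # inverse sqrt of node degrees
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]  # symmetric normalisation
    return np.maximum(0.0, A_norm @ H @ W)         # ReLU activation

# toy 3-node path graph with 2-d node features and a 2->4 weight matrix
A = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
rng = np.random.default_rng(0)
H, W = rng.normal(size=(3, 2)), rng.normal(size=(2, 4))
print(gcn_layer(A, H, W).shape)  # (3, 4)
```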

Cited by 5 publications (6 citation statements)
References 81 publications
Citation types: 0 supporting, 6 mentioning, 0 contrasting
Years of citing publications: 2021, 2021, 2023, 2023

Citation statements, ordered by relevance:
“…Langevin-gradient proposal distribution that incorporates Gaussian noise with gradients for a single iteration (epoch). Recently, the Langevin-gradient proposal distribution in MCMC sampling has been utilised for novel Bayesian neural learning methods [32,75,76,77].…”
Section: Langevin Gradient Metropolis-Hastings (mentioning)
Confidence: 99%
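The statement above summarises the Langevin-gradient proposal: take one gradient step (one epoch) on the loss, add Gaussian noise, then accept or reject with a Metropolis-Hastings ratio that corrects for the asymmetric proposal. A minimal NumPy sketch of that idea, with illustrative names and step sizes, and without the tempering schedule the paper adds:

```python
import numpy as np

rng = np.random.default_rng(1)

def lg_proposal(theta, grad_loss, lr=0.01, sd=0.005):
    """Langevin-gradient proposal: one gradient step plus Gaussian noise."""
    return theta - lr * grad_loss(theta) + rng.normal(0.0, sd, theta.shape)

def log_q(theta_to, theta_from, grad_loss, lr=0.01, sd=0.005):
    """Log-density of the asymmetric proposal, needed in the MH ratio."""
    mean = theta_from - lr * grad_loss(theta_from)
    return -0.5 * np.sum((theta_to - mean) ** 2) / sd ** 2

def mh_step(theta, log_post, grad_loss):
    """One Metropolis-Hastings step with the Langevin-gradient proposal."""
    prop = lg_proposal(theta, grad_loss)
    log_alpha = (log_post(prop) - log_post(theta)
                 + log_q(theta, prop, grad_loss) - log_q(prop, theta, grad_loss))
    return prop if np.log(rng.random()) < log_alpha else theta

# toy target: 2-d standard normal posterior (grad_loss = -grad of log-posterior)
log_post = lambda th: -0.5 * np.sum(th ** 2)
grad_loss = lambda th: th
theta = np.zeros(2)
for _ in range(1000):
    theta = mh_step(theta, log_post, grad_loss)
```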
“…Teye et al. [71] showed that training deep models with batch normalization is equivalent to performing approximate inference in Bayesian networks. Chandra et al. [72] proposed Bayesian graph deep learning techniques that use MCMC sampling with Langevin-gradient proposals. Mandt et al. [73] used SGD with a constant learning rate (constant SGD) to simulate a Markov chain with a stationary distribution and showed that constant SGD can approximate posterior inference.…”
Section: Markov Chain Monte Carlo (MCMC) (mentioning)
Confidence: 99%
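Mandt et al.'s observation can be sketched in a few lines: run SGD at a fixed learning rate and, after burn-in, keep the iterates as approximate (correlated) posterior draws. The names and defaults below are illustrative; note that the minibatch gradient noise is what spreads the iterates into a posterior-like stationary distribution — with exact gradients this would just be gradient ascent to the MAP estimate:

```python
import numpy as np

def constant_sgd_samples(theta0, stoch_grad_log_post, lr=1e-3,
                         burn_in=500, n_samples=1000):
    """Constant-learning-rate SGD as an approximate posterior sampler.

    stoch_grad_log_post(theta) must return a *minibatch* (noisy) gradient
    of the log-posterior; the gradient noise gives the iterates their
    posterior-like spread.
    """
    theta, samples = np.array(theta0, dtype=float), []
    for t in range(burn_in + n_samples):
        theta = theta + lr * stoch_grad_log_post(theta)  # ascend log-posterior
        if t >= burn_in:                                 # keep post-burn-in iterates
            samples.append(theta.copy())
    return np.array(samples)
```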
“…To overcome this drawback, Pal et al. [17] introduced a non-parametric BGCN, which uses the node features, training labels, and observed graph for posterior inference. Following this idea, many Bayesian approaches have been proposed [18][19][20].…”
Section: Introduction (mentioning)
Confidence: 99%
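Whatever the specific posterior-inference scheme, prediction in these Bayesian GCN approaches reduces to Monte Carlo model averaging over posterior samples. A hedged sketch, where `forward` and `weight_samples` stand in for a user-supplied GCN forward pass and MCMC draws (both names are illustrative):

```python
import numpy as np

def posterior_predict(adj, feats, weight_samples, forward):
    """Monte Carlo Bayesian model averaging for node classification.

    weight_samples: posterior weight draws (e.g. from MCMC);
    forward(adj, feats, w): GCN forward pass returning per-node
    class probabilities for one weight sample w.
    """
    preds = np.stack([forward(adj, feats, w) for w in weight_samples])
    return preds.mean(axis=0)  # approximate posterior predictive per node
```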