Wavelets and Sparsity XVIII 2019
DOI: 10.1117/12.2529608
Experimental performance of graph neural networks on random instances of max-cut

Abstract: This note explores the applicability of unsupervised machine learning techniques to hard optimization problems on random inputs. In particular, we consider Graph Neural Networks (GNNs), a class of neural networks designed to learn functions on graphs, and we apply them to the max-cut problem on random regular graphs. We focus on the max-cut problem on random regular graphs because it is a fundamental problem that has been widely studied. In particular, even though there is no known explicit solution to com…

Cited by 18 publications (40 citation statements) | References 24 publications
“…In this section we evaluate RUN-CSP’s performance on this problem. Yao et al. (2019) proposed two unsupervised GNN architectures for Max-Cut. One was trained through policy gradient descent on a non-differentiable loss function, while the other used a differentiable relaxation of this loss.…”
Section: Methods
confidence: 99%
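To illustrate what “a differentiable relaxation” of the cut objective can look like, the sketch below uses a standard soft cut loss over independent node-assignment probabilities. This is a generic illustration of the technique, not necessarily the exact loss used by Yao et al. (2019); the function name and the toy graph are hypothetical.

```python
def soft_cut_loss(p, edges):
    """Negative expected cut size under independent soft assignments.

    p[i] is the probability that node i lies on side 1 of the cut;
    an edge (i, j) is cut with probability p_i*(1-p_j) + p_j*(1-p_i).
    The expression is differentiable in p, so it can be minimized by
    gradient descent; minimizing it maximizes the expected cut size.
    """
    return -sum(p[i] * (1 - p[j]) + p[j] * (1 - p[i]) for i, j in edges)

# Toy example: a triangle graph, whose best cut has size 2.
edges = [(0, 1), (1, 2), (0, 2)]
p_hard = [1.0, 0.0, 1.0]            # nodes 0 and 2 on one side
print(soft_cut_loss(p_hard, edges))  # -2.0: two of three edges are cut
```

Rounding the optimized probabilities (e.g., thresholding at 0.5) then recovers a discrete cut from the relaxed solution.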
“…We use their results as well as their baseline results for Extremal Optimization (EO) (Boettcher and Percus, 2001) and a classical approach based on semi-definite programming (SDP) (Goemans and Williamson, 1995) as baselines for RUN-CSP. To evaluate the sizes of graph cuts, Yao et al. (2019) introduced a relative performance measure called the P-value, given by P(z) = (z - dn/4) / (n * sqrt(d/4)), where z is the predicted cut size for a d-regular graph with n nodes. Based on results of Dembo et al. (2017), they showed that the expected P-value of d-regular graphs approaches P* ≈ 0.7632 as n and d tend to infinity.…”
Section: Methods
confidence: 99%
“…Implementing a universally approximating graph neural network using the formulation from the previous section would be prohibitively expensive. There is work characterizing the expressive power of message passing neural networks, mainly in terms of the graph isomorphism problem [80,57,14,50,73,12], and there is research on the design of graph networks that are expressive enough to perform specific tasks, like solving specific combinatorial optimization problems [6,11,60,38,81,37,8].…”
Section: Universal Approximation Via Linear Invariant Layers and Irre...
confidence: 99%