2022
DOI: 10.3390/condmat7020038

Neural Annealing and Visualization of Autoregressive Neural Networks in the Newman–Moore Model

Abstract: Artificial neural networks have been widely adopted as ansatzes to study classical and quantum systems. However, for some notably hard systems, such as those exhibiting glassiness and frustration, they have largely yielded unsatisfactory results despite their representational power and entanglement content, suggesting a potential conservation of computational complexity in the learning process. We explore this possibility by implementing the neural annealing method with autoregressive neural networks on …

Cited by 4 publications (10 citation statements)
References 45 publications
“…Several other groups [17][18][19][20] investigated a problem related to sampling, namely that of simulated annealing [31] for finding ground states of optimization problems. This is an a priori slightly easier problem, because simulated annealing does not need to equilibrate at all temperatures to find a solution [32,33].…”
Section: State of the Art
Confidence: 99%
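The simulated-annealing scheme this statement refers to can be illustrated with a minimal Python sketch on a toy one-dimensional ferromagnet. The cooling schedule, step count, and `chain_energy` function are illustrative choices for this sketch, not taken from the cited works.

```python
import math
import random

def simulated_annealing(energy, n_spins, n_steps=20000, t_start=2.0, t_end=0.01, seed=0):
    """Search for a low-energy spin configuration via single-spin-flip
    Metropolis moves at a slowly decreasing temperature."""
    rng = random.Random(seed)
    spins = [rng.choice([-1, 1]) for _ in range(n_spins)]
    e = energy(spins)
    best, best_e = spins[:], e
    for step in range(n_steps):
        # Geometric cooling schedule from t_start down to t_end.
        t = t_start * (t_end / t_start) ** (step / (n_steps - 1))
        i = rng.randrange(n_spins)
        spins[i] = -spins[i]              # propose a single spin flip
        e_new = energy(spins)
        if e_new <= e or rng.random() < math.exp(-(e_new - e) / t):
            e = e_new                     # accept the move
            if e < best_e:
                best, best_e = spins[:], e
        else:
            spins[i] = -spins[i]          # reject: undo the flip
    return best, best_e

# Toy 1D ferromagnetic chain: E = -sum_i s_i s_{i+1}, ground state all-aligned.
def chain_energy(s):
    return -sum(s[i] * s[i + 1] for i in range(len(s) - 1))

state, e_min = simulated_annealing(chain_energy, n_spins=20)
```

Note that, as the quoted passage says, the chain never needs to equilibrate at intermediate temperatures; only the low-energy configuration found at the end matters.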
“…A recently developed line of research, see e.g. [13][14][15][16][17][18][19][20], proposed to solve the problem in an elegant and universal way, by machine learning proper MCMC moves. In a nutshell, the idea is to learn an auxiliary probability distribution P a (σ), which (i) can be sampled efficiently (e.g.…”
Section: Introduction, 1. Motivations
Confidence: 99%
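The idea of learning an auxiliary distribution P_a(σ) to drive MCMC moves can be sketched as an independence Metropolis–Hastings sampler: configurations are proposed globally from P_a and accepted with a ratio that keeps the target distribution exact. In this sketch P_a is a mean-field product distribution standing in for a trained neural network; the function names and parameters are hypothetical.

```python
import math
import random

def sample_pa(p, rng):
    """Draw a full spin configuration from the product distribution P_a,
    where p[i] is the probability that spin i is +1."""
    return [1 if rng.random() < pi else -1 for pi in p]

def log_pa(p, s):
    """Log-probability of configuration s under the product distribution."""
    return sum(math.log(pi if si == 1 else 1.0 - pi) for pi, si in zip(p, s))

def mh_with_learned_proposal(log_target, p, n_steps, seed=0):
    """Metropolis-Hastings with global proposals drawn from P_a.
    The acceptance ratio corrects for the mismatch between P_a and the
    target, so the chain still samples the target exactly."""
    rng = random.Random(seed)
    s = sample_pa(p, rng)
    for _ in range(n_steps):
        s_new = sample_pa(p, rng)  # independence proposal from P_a
        log_a = (log_target(s_new) - log_target(s)) - (log_pa(p, s_new) - log_pa(p, s))
        if math.log(rng.random()) < min(0.0, log_a):
            s = s_new
    return s

# Toy target: an Ising chain at inverse temperature beta = 1.0.
def log_target(s):
    return 1.0 * sum(s[i] * s[i + 1] for i in range(len(s) - 1))

p = [0.7] * 8   # hypothetical "learned" per-spin probabilities
sample = mh_with_learned_proposal(log_target, p, n_steps=500)
```

The better P_a approximates the target, the closer the acceptance ratio stays to one; in the limit P_a equals the target, every global move is accepted.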
“…In the context of our work, we harness the power of autoregressive models for the purposes of solving optimization problems by sequentially sampling solutions to these problems [3]. This sampling scheme allows one to obtain perfect samples, as opposed to Metropolis sampling, which can produce correlated samples and can get stuck in local minima if the optimization problem has a rugged optimization landscape [20].…”
Section: VCA
Confidence: 99%
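The sequential sampling scheme described here, which yields independent exact samples rather than a correlated Markov chain, can be sketched as follows. Each spin is drawn from a conditional distribution given the spins already fixed; the hand-written `conditional_prob_up` is a toy stand-in for the conditional output a trained autoregressive network would provide.

```python
import math
import random

def conditional_prob_up(prefix):
    """P(s_i = +1 | s_1..s_{i-1}): toy conditional favoring alignment with
    the previous spin (a neural network's output would go here)."""
    if not prefix:
        return 0.5
    return 0.9 if prefix[-1] == 1 else 0.1

def autoregressive_sample(n_spins, rng):
    """Draw one exact configuration spin by spin, accumulating its
    log-probability along the way. Successive calls give independent
    samples -- there is no Markov chain and hence no autocorrelation."""
    spins, logp = [], 0.0
    for _ in range(n_spins):
        p_up = conditional_prob_up(spins)
        s = 1 if rng.random() < p_up else -1
        spins.append(s)
        logp += math.log(p_up if s == 1 else 1.0 - p_up)
    return spins, logp

rng = random.Random(1)
config, logp = autoregressive_sample(10, rng)
```

Because the sample's exact log-probability comes for free, this scheme also supports variational objectives of the kind used in variational classical annealing (VCA).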