2021
DOI: 10.1038/s42256-021-00401-3

Variational neural annealing

Abstract: Many important challenges in science and technology can be cast as optimization problems. When viewed in a statistical physics framework, these can be tackled by simulated annealing, where a gradual cooling procedure helps search for ground-state solutions of a target Hamiltonian. While powerful, simulated annealing is known to have prohibitively slow sampling dynamics when the optimization landscape is rough or glassy. Here we show that by generalizing the target distribution with a parameterized model, an ana…
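As a point of reference for the baseline the abstract describes, the sketch below implements plain simulated annealing on a small Ising-type Hamiltonian: spins are flipped with Metropolis acceptance while the temperature is gradually lowered. This is not the paper's variational neural annealing algorithm; the couplings, cooling schedule and problem size are illustrative placeholders.

```python
# Minimal sketch of classical simulated annealing on an Ising-type Hamiltonian,
# i.e. the baseline procedure the abstract refers to (not the paper's
# variational neural annealing method). Couplings and schedule are illustrative.
import numpy as np

def energy(spins, J):
    """Ising energy E = -1/2 * sum_{ij} J_ij s_i s_j for spins s_i in {-1, +1}."""
    return -0.5 * spins @ J @ spins

def simulated_annealing(J, n_steps=10_000, T_start=2.0, T_end=0.01, rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    n = J.shape[0]
    spins = rng.choice([-1, 1], size=n)
    E = energy(spins, J)
    for step in range(n_steps):
        # Gradual cooling: interpolate the temperature from T_start down to T_end.
        T = T_start + (T_end - T_start) * step / (n_steps - 1)
        i = rng.integers(n)
        # Energy change of flipping spin i (local update, avoids recomputing E).
        dE = 2.0 * spins[i] * (J[i] @ spins)
        if dE <= 0 or rng.random() < np.exp(-dE / T):  # Metropolis acceptance
            spins[i] = -spins[i]
            E += dE
    return spins, E

# Usage: random symmetric couplings on 20 spins (a toy "glassy" instance).
rng = np.random.default_rng(1)
J = rng.normal(size=(20, 20))
J = (J + J.T) / 2
np.fill_diagonal(J, 0.0)
best_spins, best_E = simulated_annealing(J, rng=rng)
print("final energy:", best_E)
```

On rough or glassy landscapes this loop is exactly where the slow sampling dynamics mentioned in the abstract appears: as the temperature decreases, acceptance rates collapse and the chain gets trapped in local minima.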

Cited by 42 publications (54 citation statements)
References 51 publications
“…A common way to represent the conditional probabilities in Eq. 8 is by means of feed-forward deep neural networks with parameter-sharing architectures (35,48) to reduce the number of parameters. Due to the possible high variability in the dependence of p(x_i | x_<i) on x_<i (40), instead of adopting a parameter-sharing scheme we reduce the number of parameters by limiting the dependency of the conditional probability to a subset of x_<i. The subset considered is formed by all x_j ∈ x_<i such that x_j is at most a second-order neighbor of i in the graph induced by the contact network, i.e.…”
Section: Methods (mentioning)
confidence: 99%
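The quotation above describes restricting each conditional p(x_i | x_<i) to the earlier variables that are at most second-order neighbors of i in the contact network. A minimal sketch of how such dependency sets could be built is given below; the adjacency dictionary, the autoregressive ordering and the helper name second_order_parents are illustrative assumptions, not taken from the cited paper.

```python
# Sketch: for each variable i, collect the earlier variables j (in the
# autoregressive order) that lie within graph distance 2 of i, so that
# p(x_i | x_<i) can be conditioned on this subset only.

def second_order_parents(adj, order):
    """Return, for each node i, the nodes j that precede i in `order` and are
    first- or second-order neighbors of i in the contact network `adj`."""
    position = {node: k for k, node in enumerate(order)}
    parents = {}
    for i in order:
        # First- and second-order neighborhood of i (excluding i itself).
        neighborhood = set(adj[i])
        for j in adj[i]:
            neighborhood |= set(adj[j])
        neighborhood.discard(i)
        # Keep only nodes that precede i in the autoregressive ordering.
        parents[i] = sorted(j for j in neighborhood if position[j] < position[i])
    return parents

# Usage on a toy 5-node contact network (a path 0-1-2-3-4).
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
order = [0, 1, 2, 3, 4]
print(second_order_parents(adj, order))
# e.g. node 3 depends only on {1, 2}, not on the full prefix {0, 1, 2}.
```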
“…Deep autoregressive neural networks are used to generate samples according to a probability distribution learned from data, for instance for image (28), audio (29), text (30,31) and protein sequence (32) generation tasks and, more generally, as probability density estimators (33–35). Autoregressive neural networks have recently been used to approximate the joint probability distributions of many (discrete) variables in statistical physics models (36), and applied in different physical contexts (37–40). In this work, we show how to use a deep autoregressive neural network architecture to efficiently sample from a posterior distribution composed of a prior, given by the epidemic propagation model (even though the parameters of such a model can be contextually inferred), and of an evidence term given by (time-scattered) observations of the state of a subset of individuals.…”
Section: Introduction (mentioning)
confidence: 99%
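The quotation above refers to autoregressive networks both as samplers and as exact density estimators. The sketch below shows the generic mechanism: binary variables are drawn one at a time from p(x_i | x_<i) while the exact log-probability of the sample is accumulated. A single linear-sigmoid conditional with a strictly lower-triangular weight matrix stands in for the deep network; the weights are random placeholders, not the cited paper's architecture.

```python
# Sketch of autoregressive sampling for binary variables: each x_i is drawn
# from p(x_i = 1 | x_<i) and the exact log-probability is accumulated.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def autoregressive_sample(W, b, rng):
    """W is strictly lower-triangular, so the logit of x_i depends only on
    the previously sampled x_<i (the autoregressive property)."""
    n = len(b)
    x = np.zeros(n)
    log_p = 0.0
    for i in range(n):
        p_i = sigmoid(W[i, :i] @ x[:i] + b[i])   # p(x_i = 1 | x_<i)
        x[i] = float(rng.random() < p_i)
        log_p += np.log(p_i if x[i] == 1.0 else 1.0 - p_i)
    return x, log_p

# Usage with random placeholder parameters.
rng = np.random.default_rng(0)
n = 8
W = np.tril(rng.normal(size=(n, n)), k=-1)   # zero on and above the diagonal
b = rng.normal(size=n)
sample, log_prob = autoregressive_sample(W, b, rng)
print(sample, log_prob)
```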
“…A common way to represent the conditional probabilities in Eq. 8 is by means of feed-forward deep neural networks with parameter-sharing architectures [35,48] to reduce the number of parameters. Due to the possible high variability in the dependence of p(x_i | x_<i) on x_<i [40], instead of adopting a parameter-sharing scheme we reduce the number of parameters by limiting the dependency of the conditional probability to a subset of x_<i. The subset considered is formed by all x_j ∈ x_<i such that x_j is at most a second-order neighbor of i in the graph induced by the contact network, i.e.…”
Section: Learning the Posterior Probability Using Autoregressive Neur... (mentioning)
confidence: 99%
“…Deep autoregressive neural networks are used to generate samples according to a probability distribution learned from data, for instance for image [28], audio [29], text [30,31] and protein sequence [32] generation tasks and, more generally, as probability density estimators [33–35]. Autoregressive neural networks have recently been used to approximate the joint probability distributions of many (discrete) variables in statistical physics models [36], and applied in different physical contexts [37–40]. In this work, we show how to use a deep autoregressive neural network architecture to efficiently sample from a posterior distribution composed of a prior, given by the epidemic propagation model (even though the parameters of such a model can be contextually inferred), and of an evidence term given by (time-scattered) observations of the state of a subset of individuals.…”
Section: Introduction (mentioning)
confidence: 99%
“…Modern neural network strategies provide VMC ansätze that can systematically be made powerful enough that their expressiveness is no longer the limiting bottleneck. Instead, the optimization process often requires long convergence times [35–39], and physics-inspired modifications of the network structure are sometimes needed to reach accuracies comparable to traditional VMC approaches [40,41]. In addition, the power of wisely chosen initializations to improve convergence has already been demonstrated in traditional VMC methods [42,43].…”
(mentioning)
confidence: 99%
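The quotation above concerns variational Monte Carlo (VMC) with neural-network ansätze. As context, the sketch below shows the basic VMC energy estimator, averaging local energies over samples drawn from |ψ_θ|², for a transverse-field Ising chain. A simple mean-field product ansatz stands in for a neural network, and the chain length, couplings and variational angles are illustrative assumptions.

```python
# Sketch of a VMC energy estimate E(theta) ~ mean of local energies E_loc(sigma)
# over configurations sampled from |psi_theta|^2, for a transverse-field Ising
# chain with a mean-field product ansatz
# psi(sigma) = prod_i [cos(theta_i/2) if sigma_i = +1 else sin(theta_i/2)].
import numpy as np

def local_energy(sigma, theta, J=1.0, gamma=1.0):
    """E_loc(sigma) = -J sum_i sigma_i sigma_{i+1}
                      - gamma sum_i psi(flip_i sigma) / psi(sigma),
    with open boundary conditions."""
    diag = -J * np.sum(sigma[:-1] * sigma[1:])
    # Amplitude ratio when flipping spin i under the product ansatz.
    ratios = np.where(sigma == 1, np.tan(theta / 2), 1.0 / np.tan(theta / 2))
    return diag - gamma * np.sum(ratios)

def vmc_energy(theta, n_samples=5000, rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    # |psi|^2 factorizes, so sampling is independent per site:
    # P(sigma_i = +1) = cos^2(theta_i / 2).
    p_up = np.cos(theta / 2) ** 2
    sigmas = np.where(rng.random((n_samples, len(theta))) < p_up, 1, -1)
    return np.mean([local_energy(s, theta) for s in sigmas])

theta = np.full(10, np.pi / 3)   # placeholder variational angles
print("estimated variational energy:", vmc_energy(theta))
```

In an actual VMC calculation this estimator would sit inside an optimization loop over theta; the quotation's point is that with neural-network ansätze that loop, rather than expressiveness, tends to be the bottleneck.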