2022
DOI: 10.1140/epjb/s10051-022-00296-y
Variational autoencoder analysis of Ising model statistical distributions and phase transitions

Cited by 4 publications (2 citation statements)
References 67 publications
“…[37,38] Further, through the unsupervised learning of representations in which each latent variable maps to a particular physical order parameter or physical feature (such as, for example, gender or the presence or absence of glasses or hair), VAEs enable interpretable latent code manipulation, which can be leveraged in many practical contexts. [36,39] While VAE reconstruction is easily implemented and computationally efficient, the L2 (Euclidean) distance between the generated and original images is often significant, especially for latent spaces with few dimensions. This effect is clearly identified in simple models such as the benchmark Ising model.…”
mentioning
confidence: 99%
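The L2 (Euclidean) reconstruction distance mentioned above can be sketched in a few lines. The snippet below is an illustrative numpy example, not code from the cited paper: it computes the pixel-wise mean squared distance on a toy Ising-like spin configuration and shows how even a one-pixel shift of an otherwise identical configuration produces a large L2 loss.

```python
import numpy as np

def l2_reconstruction_loss(original, reconstructed):
    """Mean squared (pixel-wise L2) distance between two configurations."""
    return float(np.mean((original - reconstructed) ** 2))

# Toy Ising-like configuration: +/-1 spins on an 8x8 lattice.
rng = np.random.default_rng(0)
x = rng.choice([-1.0, 1.0], size=(8, 8))

# A perfect reconstruction has zero L2 loss.
print(l2_reconstruction_loss(x, x))  # 0.0

# A one-pixel periodic shift of the same configuration already
# incurs a large pixel-wise loss, illustrating why L2 heavily
# penalizes small spatial offsets.
x_shifted = np.roll(x, 1, axis=1)
print(l2_reconstruction_loss(x, x_shifted))
```

This sensitivity to exact pixel positions is what makes the plain L2 distance "significant" even when the reconstruction is visually close.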
See 1 more Smart Citation
“…[37,38] Further, through the unsupervised learning of representations in which each latent variable maps to a particular physical order parameter or physical feature such as for example, gender, presence or absence of glasses or hair, etc., VAE:s enable interpretable latent code manipulation, which can be leveraged in many practical contexts. [36,39] While VAE reconstruction is easily implemented and computationally efficient, the L2 (logistic regression) distance between the generated and original images is often significant, especially for latent spaces with few dimensions. This effect is clearly identified in simple models such as the benchmark Ising model.…”
mentioning
confidence: 99%
“…This effect is clearly identified in simple models such as the benchmark Ising model. [39] However, it is manifested for images as a high degree of blur that is not necessarily evident from the pixel-by-pixel loss, which can also be anomalously large if the reconstructed image is slightly rotated or offset. [40] Since the final layers of a deep CNN capture long-range spatial correlations in the input image rather than local pixel-to-pixel variations, feature perceptual loss functions that better quantify visual perception can be constructed by comparing the latent representations [41] or the VAE latent vectors [37] of two images within a VAE constructed from convolutional neural networks.…”
mentioning
confidence: 99%
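The contrast between pixel-wise loss and feature-space loss drawn in this statement can be illustrated with a minimal numpy sketch. This is not the cited papers' method: here a crude local-average filter stands in for a CNN feature map (real perceptual losses compare intermediate activations of a trained convolutional network). The point survives the simplification: a small spatial offset inflates the pixel-wise loss far more than the feature-space loss.

```python
import numpy as np

def avg_pool_features(img, k=3):
    """Crude stand-in for a CNN feature map: a periodic k x k local
    average, which encodes neighborhood structure rather than exact
    pixel values."""
    out = np.zeros_like(img, dtype=float)
    for di in range(-(k // 2), k // 2 + 1):
        for dj in range(-(k // 2), k // 2 + 1):
            out += np.roll(np.roll(img, di, axis=0), dj, axis=1)
    return out / (k * k)

def mse(a, b):
    return float(np.mean((a - b) ** 2))

rng = np.random.default_rng(1)
x = rng.choice([-1.0, 1.0], size=(16, 16))
x_shifted = np.roll(x, 1, axis=1)  # same content, offset by one pixel

pixel_loss = mse(x, x_shifted)
feature_loss = mse(avg_pool_features(x), avg_pool_features(x_shifted))

# The feature-space distance is far less sensitive to the offset.
print(pixel_loss, feature_loss)
```

In a real feature perceptual loss, `avg_pool_features` would be replaced by the activations of one or more layers of a convolutional encoder, as in the comparison of latent representations described above.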