2018
DOI: 10.48550/arxiv.1807.03039
Preprint

Glow: Generative Flow with Invertible 1x1 Convolutions

Abstract: Flow-based generative models (Dinh et al., 2014) are conceptually attractive due to tractability of the exact log-likelihood, tractability of exact latent-variable inference, and parallelizability of both training and synthesis. In this paper we propose Glow, a simple type of generative flow using an invertible 1 × 1 convolution. Using our method we demonstrate a significant improvement in log-likelihood on standard benchmarks. Perhaps most strikingly, we demonstrate that a generative model optimized towards t…

Cited by 119 publications (192 citation statements)
References 14 publications (45 reference statements)
“…For the normalizing flow in this work, we build on the same basic architecture and training procedure as Ting & Weinberg (2021), using a batch size of 1024 and 500 epochs. This architecture consists of eight units of a "Neural Spline Flow", each of which consists of three layers of densely connected neural networks with 16 neurons, coupled with a "Conv1x1" operation (often referred to as "GLOW" in the machine learning literature; see, e.g., Kingma & Dhariwal 2018; Durkan et al. 2019). See Green & Ting (2020); Ting & Weinberg (2021) for additional details, particularly Fig.…”
Section: Inferring the Population Density With Normalizing Flows
confidence: 99%
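The "Conv1x1" operation mentioned in the quote above is Glow's invertible 1 × 1 convolution: a learned C × C matrix applied across the channel dimension at every spatial position, whose log-determinant enters the log-likelihood. A minimal NumPy sketch of that idea follows; the shapes, the orthogonal initialisation, and all variable names are illustrative assumptions, not taken from the cited works.

import numpy as np

# Sketch of an invertible 1x1 convolution (Glow-style "Conv1x1") on a single
# input tensor of shape (channels, height, width). Illustrative only.
rng = np.random.default_rng(0)
C, H, W = 4, 8, 8

# Initialise the weight as a random rotation (orthogonal matrix), which is
# guaranteed to be invertible; Glow uses a similar initialisation.
weight, _ = np.linalg.qr(rng.normal(size=(C, C)))

x = rng.normal(size=(C, H, W))

# Forward pass: a 1x1 convolution is just the C x C matrix applied per pixel.
y = np.einsum("ij,jhw->ihw", weight, x)

# Log-determinant contribution to the log-likelihood: each of the H*W pixel
# positions contributes log|det(weight)| (zero for an orthogonal matrix).
logdet = H * W * np.log(np.abs(np.linalg.det(weight)))

# The inverse pass recovers the input exactly.
x_rec = np.einsum("ij,jhw->ihw", np.linalg.inv(weight), y)
assert np.allclose(x, x_rec)
print("log|det| term:", logdet)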
“…is the determinant of the neural network function's Jacobian matrix. Therefore, the probability of a random sample from q_θ(x) is easy to evaluate when the Jacobian matrix is computationally tractable, as is the case for commonly used normalizing flow forms, including NICE (Dinh et al. 2014), Real-NVP (Dinh et al. 2016), and Glow (Kingma & Dhariwal 2018). As described in Section 4.3, in this work we choose to use a Real-NVP generative network to parameterize q_θ(x).…”
Section: Normalizing Flow
confidence: 99%
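For reference, the tractable density referred to in the quote above follows from the change-of-variables formula. Writing z = f_θ^{-1}(x) for the latent variable and p_z for a simple base density (these symbols are assumed here for illustration, not taken from the cited paper), the model density is

\log q_\theta(x) \;=\; \log p_z\!\left(f_\theta^{-1}(x)\right) \;+\; \log\left|\det \frac{\partial f_\theta^{-1}(x)}{\partial x}\right| ,

which is cheap to evaluate whenever the Jacobian determinant is, as in NICE, Real-NVP, and Glow.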
“…Normalizing Flows (NF) (Rezende and Mohamed 2015) are used to learn transformations between data distributions, with the special property that the transformation is bijective, so the flow model can be used in both directions. Real-NVP (Dinh, Sohl-Dickstein, and Bengio 2016) and Glow (Kingma and Dhariwal 2018) are two typical NF methods, in which both the forward and reverse processes can be computed quickly. NF is generally used to generate data, such as images or audio, from variables sampled from a specific probability distribution.…”
Section: Normalizing Flow
confidence: 99%
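The reason both directions are fast in Real-NVP and Glow is the coupling-layer design: one half of the variables is transformed by an affine function whose parameters depend only on the other half, so the inverse is available in closed form and the Jacobian is triangular. A minimal sketch under these assumptions, with toy stand-in networks s() and t() rather than anything from the cited papers:

import numpy as np

# Real-NVP-style affine coupling layer, illustrating why the forward and
# inverse passes are both cheap. s() and t() are toy stand-ins for networks.
rng = np.random.default_rng(1)
D = 6
W_s = 0.1 * rng.normal(size=(D // 2, D // 2))
W_t = 0.1 * rng.normal(size=(D // 2, D // 2))

def s(h):  # scale "network"
    return np.tanh(h @ W_s)

def t(h):  # translation "network"
    return h @ W_t

def forward(x):
    x1, x2 = x[:D // 2], x[D // 2:]
    y2 = x2 * np.exp(s(x1)) + t(x1)   # transform one half, conditioned on the other
    logdet = np.sum(s(x1))            # log|det J|: the Jacobian is triangular
    return np.concatenate([x1, y2]), logdet

def inverse(y):
    y1, y2 = y[:D // 2], y[D // 2:]
    x2 = (y2 - t(y1)) * np.exp(-s(y1))  # exact closed-form inverse
    return np.concatenate([y1, x2])

x = rng.normal(size=D)
y, logdet = forward(x)
assert np.allclose(x, inverse(y))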
“…For example, they estimated a multidimensional Gaussian distribution (Li et al. 2021; Defard et al. 2020) by computing the mean and variance of the features, or used a clustering algorithm to model the normal features (Reiss et al. 2021; Roth et al. 2021). Recently, some works (Rudolph, Wandt, and Rosenhahn 2021; Gudovskiy, Ishizaka, and Kozuka 2021) began to use normalizing flows (Kingma and Dhariwal 2018) to estimate the distribution. Through a trainable process that maximizes the log-likelihood of normal image features, they embed normal image features into a standard normal distribution and use the probability to identify and locate anomalies.…”
Section: Introduction
confidence: 99%
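The anomaly-detection recipe described in the quote above reduces to scoring each feature vector by its log-likelihood under a flow fitted to normal data. The sketch below uses a simple invertible affine (whitening) transform as a stand-in for the trained normalizing flow; the data, threshold, and names are all assumed for illustration.

import numpy as np

# Score features by log-likelihood under a flow fitted to "normal" data and
# flag low-likelihood samples as anomalies. The flow here is an affine
# whitening transform, standing in for a trained normalizing flow.
rng = np.random.default_rng(2)
normal_feats = rng.normal(loc=3.0, scale=0.5, size=(1000, 8))

mu = normal_feats.mean(axis=0)
L = np.linalg.cholesky(np.cov(normal_feats, rowvar=False))
L_inv = np.linalg.inv(L)                          # z = L_inv @ (x - mu)
logdet = np.log(np.abs(np.diag(L_inv))).sum()     # log|det dz/dx| (triangular)

def log_likelihood(x):
    z = (x - mu) @ L_inv.T
    log_pz = -0.5 * (z ** 2).sum(axis=1) - 0.5 * z.shape[1] * np.log(2 * np.pi)
    return log_pz + logdet                        # change-of-variables formula

test = np.vstack([rng.normal(3.0, 0.5, size=(5, 8)),    # in-distribution
                  rng.normal(-2.0, 0.5, size=(5, 8))])  # anomalous
threshold = np.percentile(log_likelihood(normal_feats), 1)
print("anomaly flags:", log_likelihood(test) < threshold)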