2021
DOI: 10.48550/arxiv.2109.14851
Preprint

The Deep Minimizing Movement Scheme

Cited by 3 publications (6 citation statements) · References 27 publications
“…Pushing forward a probability distribution by the proximal operator corresponds to one step of the JKO scheme for Wasserstein gradient flow of a linear functional in the space of distributions [Jordan et al., 1998, Benamou et al., 2016]. Compared to recent works on neural Wasserstein gradient flow [Mokrov et al., 2021, Hwang et al., 2021, Bunne et al., 2021], where a separate network is needed to parameterize the pushforward map for every JKO step, our linear functional yields a pushforward map that is identical for each step; this property allows us to use a single neural network as a parameterization.…”
Section: Related Work (mentioning)
confidence: 99%
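
For reference, the single JKO step that this excerpt attributes to Jordan et al. (1998) can be stated as follows (a standard formulation, not a quotation from the citing paper; τ is the step size, W_2 the 2-Wasserstein distance, and F the driving functional, which in the excerpt above is linear in the distribution):

\[
\rho_{k+1} \in \operatorname*{arg\,min}_{\rho \in \mathcal{P}_2(\mathbb{R}^d)} \left\{ \frac{1}{2\tau}\, W_2^2(\rho, \rho_k) + F(\rho) \right\}.
\]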
“…The time and spatial discretizations of Wasserstein gradient flows are extensively studied in the literature (Jordan et al., 1998; Junge et al., 2017; Carrillo et al., 2021a,b; Bonet et al., 2021; Liutkus et al., 2019; Frogner & Poggio, 2020). Recently, neural networks have been applied to solving or approximating Wasserstein gradient flows (Mokrov et al., 2021; Lin et al., 2021b,a; Alvarez-Melis et al., 2021; Bunne et al., 2021; Hwang et al., 2021; Fan et al., 2021). For sampling algorithms, di Langosco et al. (2021) learn the transportation function by solving an unregularized variational problem in the family of vector-output deep neural networks.…”
Section: Introduction (mentioning)
confidence: 99%
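
To make concrete what a neural parameterization of one JKO step looks like in the spirit of the works cited above, here is a minimal PyTorch sketch. It assumes, for illustration only, that the functional is a potential energy F(ρ) = E_ρ[V] and that the pushforward map is a generic feed-forward network; the names V and jko_step and all hyperparameters are ours, not taken from any of the cited papers, and the transport cost of the learned map only upper-bounds W_2^2.

# Illustrative sketch of one neural JKO step (not any cited paper's exact method).
import torch

def V(x):
    # Assumed potential defining the functional F(rho) = E_rho[V(x)].
    return 0.5 * (x ** 2).sum(dim=1)

def jko_step(samples, tau=0.1, iters=500, lr=1e-3):
    """Approximate rho_{k+1} = argmin_rho (1/2tau) W_2^2(rho, rho_k) + F(rho)
    by training a map T so that T#rho_k is the next iterate; the squared
    transport cost of T upper-bounds W_2^2."""
    dim = samples.shape[1]
    T = torch.nn.Sequential(                     # pushforward map T: R^d -> R^d
        torch.nn.Linear(dim, 64), torch.nn.Tanh(), torch.nn.Linear(64, dim))
    opt = torch.optim.Adam(T.parameters(), lr=lr)
    for _ in range(iters):
        y = T(samples)
        cost = ((y - samples) ** 2).sum(dim=1).mean() / (2 * tau)  # movement term
        loss = cost + V(y).mean()                                  # + energy term
        opt.zero_grad(); loss.backward(); opt.step()
    with torch.no_grad():
        return T(samples)                        # particles from the updated measure

# Usage: flow a Gaussian cloud through a few JKO steps toward the potential's minimum.
x = torch.randn(1024, 2)
for _ in range(5):
    x = jko_step(x)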
“…Very recently, efforts have been made to use NNs to solve evolution PDEs [7,14,30]. In these approaches, instead of approximating the solution of the evolution PDE over the whole time-space domain, the NNs are used to represent the solution with time-dependent parameters.…”
Section: Introduction (mentioning)
confidence: 99%
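
As one illustration of what "time-dependent parameters" can mean (a Galerkin-type construction supplied here for clarity, not necessarily the exact formulation used in [7,14,30]): the solution of \(\partial_t u = \mathcal{L}(u)\) is represented as \(u(\cdot, t) \approx u_{\theta(t)}\), and the parameters are evolved by projecting the dynamics onto the tangent space of the parameterization,

\[
\dot{\theta}(t) \in \operatorname*{arg\,min}_{\eta} \big\| \nabla_\theta u_{\theta(t)}\, \eta - \mathcal{L}\big(u_{\theta(t)}\big) \big\|_{L^2}^2 ,
\]

so that only a spatial network is trained and time enters through the ODE for θ.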
“…We note that the numerical algorithm to solve the generalized diffusion (5) can also be formulated in the Eulerian frame of reference based on the notion of Wasserstein gradient flow [32]. We refer the interested reader to a recent work [30] for a neural network implementation of the minimizing movement scheme with the Wasserstein distance. Since gradient flows also provide a continuous variational formulation of many machine learning algorithms [18,62], the numerical approaches developed here will have wide applications in many machine learning tasks, such as supervised learning [18], variational inference [62], density estimation [60], and generative models [29].…”
Section: Introduction (mentioning)
confidence: 99%
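
For context, the minimizing movement scheme referenced here (and in the paper's title) is the metric-space generalization of the JKO step quoted earlier: given a metric d, an energy E, and a step size τ,

\[
x_{k+1} \in \operatorname*{arg\,min}_{x} \left\{ \frac{1}{2\tau}\, d(x, x_k)^2 + E(x) \right\},
\]

and the choice d = W_2 with E a free-energy functional recovers the Wasserstein gradient-flow discretization.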