Stable Neural Flows
Preprint, 2020
DOI: 10.48550/arxiv.2003.08063

Cited by 20 publications (28 citation statements); references 0 publications.

“…Our approach, which relies on using the energy as a Lyapunov function for an entire class of models with fixed nonlinear structure, is challenging for application to higher-order nonlinearities where generic Lyapunov functions are often unknown. Fortunately, data-driven methods are now increasingly used to discover Lyapunov functions and barrier functions for nonlinear control [91, 119–126]. These methods build a heuristic Lyapunov function for a given dataset, rendering the search for a Lyapunov function tractable but possibly at the cost of model generality.…”
Section: Discussion (mentioning)
confidence: 99%
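The excerpt above points to learning a Lyapunov function directly from data. As a rough illustration of that idea, and not the method of any cited work, one can parameterize a candidate V_theta(x) with a small network that is non-negative by construction and penalize violations of the decrease condition along a known vector field; every name, architecture, and hyperparameter below is an assumption for the sketch.

```python
# Illustrative sketch of data-driven Lyapunov function discovery.
# All names, the toy dynamics, and the loss design are assumptions,
# not the approach of any specific cited paper.
import torch
import torch.nn as nn

class LyapunovNet(nn.Module):
    """Candidate Lyapunov function with V(x) >= 0 and V(0) = 0 by construction."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(),
                                 nn.Linear(hidden, hidden), nn.Tanh(),
                                 nn.Linear(hidden, 1))

    def forward(self, x):
        # V(x) = ||phi(x) - phi(0)||^2 guarantees positivity and V(0) = 0.
        return (self.phi(x) - self.phi(torch.zeros_like(x))).pow(2).sum(-1)

def lyapunov_loss(V, f, x, margin=0.1):
    """Penalize violations of the decrease condition dV/dt = <grad V, f(x)> < 0."""
    x = x.requires_grad_(True)
    v = V(x)
    grad_v = torch.autograd.grad(v.sum(), x, create_graph=True)[0]
    vdot = (grad_v * f(x)).sum(-1)
    return torch.relu(vdot + margin).mean()

# Toy damped dynamics and random state samples from a box around the origin.
f = lambda x: -x + 0.1 * torch.sin(x)
V = LyapunovNet(dim=2)
opt = torch.optim.Adam(V.parameters(), lr=1e-3)
for _ in range(1000):
    x = 4 * torch.rand(256, 2) - 2
    loss = lyapunov_loss(V, f, x)
    opt.zero_grad()
    loss.backward()
    opt.step()
```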
“…As the complexity or the dimensionality of the modeling task increases, ODE-based networks demand a more advanced solver that significantly impacts their efficiency (Poli et al, 2020), stability (Bai et al, 2019, Chang et al, 2019, Lechner et al, 2020b, Massaroli et al, 2020a), and performance. A large body of research went into reducing the computational overhead of these solvers, for example by designing hypersolvers (Poli et al, 2020), deploying augmentation methods (Dupont et al, 2019, Massaroli et al, 2020b), pruning (Liebenwein et al, 2021), and regularizing the continuous flows (Finlay et al, 2020, Kidger et al, 2020, Massaroli et al, 2020a). To enhance the performance of ODE-based models, especially in time-series modeling tasks (Gleeson et al, 2018), solutions have been proposed for stabilizing their gradient propagation (Erichson et al, 2021, Lechner and Hasani, 2020, Li et al, 2020).…”
Section: Related Work (mentioning)
confidence: 99%
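One of the remedies named in the excerpt above, state augmentation, amounts to padding the input with extra zero-valued coordinates so the learned flow acts on a higher-dimensional state. The sketch below illustrates the mechanism only; the architecture, the fixed-step Euler integration, and all names are assumptions rather than the cited implementations.

```python
# Sketch of state augmentation for an ODE-based network (illustrative only).
import torch
import torch.nn as nn

class ODEFunc(nn.Module):
    """Vector field defined on the augmented state [x, a]."""
    def __init__(self, data_dim, aug_dim, hidden=64):
        super().__init__()
        dim = data_dim + aug_dim
        self.net = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(),
                                 nn.Linear(hidden, dim))

    def forward(self, t, z):
        return self.net(z)

def augmented_ode_forward(func, x, aug_dim, t0=0.0, t1=1.0, steps=20):
    """Fixed-step Euler integration of the augmented state; an adaptive solver
    from a library such as torchdiffeq could be substituted here."""
    z = torch.cat([x, torch.zeros(x.shape[0], aug_dim)], dim=-1)  # append zeros
    dt = (t1 - t0) / steps
    t = t0
    for _ in range(steps):
        z = z + dt * func(t, z)
        t += dt
    return z[:, :x.shape[1]]  # discard the augmented coordinates at the end

x = torch.randn(32, 2)
func = ODEFunc(data_dim=2, aug_dim=3)
out = augmented_ode_forward(func, x, aug_dim=3)
```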
“…The community has worked out solutions for resolving this computational overhead and facilitating the training of neural ODEs, for instance by relaxing the stiffness of a flow through state augmentation techniques (Dupont et al, 2019, Massaroli et al, 2020b), reformulating the forward pass as a root-finding problem (Bai et al, 2019), using regularization schemes (Finlay et al, 2020, Kidger et al, 2020, Massaroli et al, 2020a), or improving the inference time of the network (Poli et al, 2020).…”
Section: Introduction (mentioning)
confidence: 99%
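The root-finding reformulation mentioned in the excerpt above replaces explicit integration with solving for an equilibrium of a layer, z* = f(z*, x). A minimal sketch of that forward pass follows, using plain fixed-point iteration as a stand-in for the quasi-Newton solvers typically employed; the layer design and all names are illustrative assumptions.

```python
# Minimal equilibrium-style forward pass: solve z* = f(z*, x) by iteration.
# Illustrative sketch only; not the solver or layer of any cited work.
import torch
import torch.nn as nn

class EquilibriumLayer(nn.Module):
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * dim, hidden), nn.Tanh(),
                                 nn.Linear(hidden, dim))

    def f(self, z, x):
        return self.net(torch.cat([z, x], dim=-1))

    def forward(self, x, max_iter=50, tol=1e-4):
        z = torch.zeros_like(x)
        for _ in range(max_iter):
            z_next = self.f(z, x)
            if (z_next - z).norm() < tol:  # stop once the fixed point is reached
                z = z_next
                break
            z = z_next
        return z

layer = EquilibriumLayer(dim=8)
y = layer(torch.randn(16, 8))
```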
“…By using the adjoint method [5], the gradients w.r.t. θ can be computed memory-efficiently during backpropagation. Recent works [10,26,7,38,12] have tried to analyze this framework theoretically, overcome the instability issue, and improve memory efficiency. Researchers have also applied NODE to other fields such as medical imaging [28], reinforcement learning [9], video generation [17,41], and graph data [37].…”
Section: Background and Related Work (mentioning)
confidence: 99%
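The adjoint-based backpropagation referenced in this excerpt is exposed by common neural-ODE libraries. Below is a minimal training-step sketch assuming the torchdiffeq package; the vector field, data, and hyperparameters are placeholders, not the setup of the cited works.

```python
# Sketch of memory-efficient neural-ODE training via the adjoint method,
# assuming the torchdiffeq package; model and data are placeholders.
import torch
import torch.nn as nn
from torchdiffeq import odeint_adjoint as odeint  # adjoint-based backprop

class VectorField(nn.Module):
    def __init__(self, dim=2, hidden=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(),
                                 nn.Linear(hidden, dim))

    def forward(self, t, y):
        return self.net(y)

func = VectorField()
y0 = torch.randn(64, 2)                    # batch of initial states
t = torch.linspace(0.0, 1.0, 10)           # integration times
target = torch.randn(64, 2)                # placeholder regression target

opt = torch.optim.Adam(func.parameters(), lr=1e-3)
opt.zero_grad()
y_t = odeint(func, y0, t)                  # shape: (len(t), batch, dim)
loss = (y_t[-1] - target).pow(2).mean()
loss.backward()                            # gradients computed via the adjoint ODE
opt.step()
```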