2017
DOI: 10.1007/978-3-319-66709-6_23

Variational Networks: Connecting Variational Methods and Deep Learning

Cited by 114 publications (111 citation statements)
References 32 publications
“…For example, the number of RBFs that are used to model the activation functions in a smoothed function approximation defines the flexibility to approximate arbitrary functions accurately. In our experimental setup, as well as in the latest studies on image processing tasks (32), we reduced the number of RBFs by half compared to the initial work (6) without a loss in performance but with reduced training time.…”
Section: Discussion (mentioning)
confidence: 99%
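
The excerpt above refers to activation functions that are modeled as a weighted sum of radial basis functions (RBFs), so the number of basis functions directly controls how flexibly each activation can be shaped. Below is a minimal sketch of this parameterization, assuming Gaussian kernels on an equally spaced grid; the specific grid, bandwidth, and weight initialization are illustrative assumptions, not the exact choices of the cited works.

```python
import numpy as np

def rbf_activation(x, weights, centers, sigma):
    """Evaluate phi(x) = sum_j weights[j] * exp(-(x - centers[j])^2 / (2 sigma^2)).

    weights : (J,) learnable coefficients -- J is the number of RBFs
    centers : (J,) fixed, equally spaced kernel centers
    sigma   : scalar kernel bandwidth
    """
    diff = x[..., None] - centers                 # broadcast to shape (..., J)
    basis = np.exp(-0.5 * (diff / sigma) ** 2)    # Gaussian basis responses
    return basis @ weights                        # weighted sum over the J kernels

# Halving the number of RBFs, as discussed above, amounts to using a coarser grid.
J = 31                                            # hypothetical number of basis functions
centers = np.linspace(-1.0, 1.0, J)               # grid covering the expected input range
sigma = centers[1] - centers[0]                   # bandwidth tied to the grid spacing
weights = 0.01 * np.random.randn(J)               # learnable parameters (trained in practice)

y = rbf_activation(np.linspace(-1.0, 1.0, 5), weights, centers, sigma)
```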
“…Hence, instead of adding more and more layers and creating deeper networks, we introduce more structure and flexibility in the individual layers, which might help to reduce the overall complexity of the network. As shown in (32) for image denoising and non-blind deblurring, fixing the activation functions to less flexible (e.g., convex) functions might also lead to a decrease in performance for our application.…”
Section: Discussion (mentioning)
confidence: 99%
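
The excerpt above contrasts freely parameterized activations with activations restricted to convex functions. One simple way to enforce convexity, sketched below, is to combine convex basis functions with nonnegative weights, since a nonnegative sum of convex functions is again convex; the softplus basis and the exp reparameterization of the weights are assumptions made for illustration and are not the construction used in the cited paper.

```python
import numpy as np

def softplus(z):
    # Numerically stable softplus; convex in its argument
    return np.logaddexp(0.0, z)

def convex_activation(x, w_raw, slopes, offsets):
    """Convex potential phi(x) = sum_j exp(w_raw[j]) * softplus(slopes[j] * x + offsets[j]).

    softplus(slope * x + offset) is convex in x (a convex function of an affine map),
    and exp(w_raw) keeps the combination weights nonnegative, so phi is convex by
    construction -- the reduced flexibility discussed above is the price for this.
    """
    terms = softplus(np.outer(x, slopes) + offsets)    # shape (N, J)
    return terms @ np.exp(w_raw)                       # shape (N,)

# Hypothetical parameters for illustration
J = 8
slopes = np.linspace(-2.0, 2.0, J)
offsets = np.zeros(J)
w_raw = np.zeros(J)        # unconstrained parameters -> positive weights exp(w_raw)

phi = convex_activation(np.linspace(-1.0, 1.0, 5), w_raw, slopes, offsets)
```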
“…Typically, the regularizer is adapted such that the end point image of the trajectory lies in the proximity of the ground truth image. However, even the general class of Field of Experts type regularizers is not able to capture the entirety of the complex structure of natural images, which is why the end point image substantially differs from the ground truth image. To address this insufficient modeling, we advocate an optimal control problem using the gradient flow differential equation as the state equation and a cost functional that quantifies the distance between the ground truth image and the gradient flow trajectory evaluated at the stopping time T.…”
Section: Introduction (mentioning)
confidence: 99%
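
The statement above formulates learning as an optimal control problem in which the image evolves along the gradient flow of a variational energy and the cost is measured at a stopping time T. The sketch below integrates such a flow with explicit Euler steps and evaluates a quadratic cost at the endpoint; the toy Tikhonov-style energy, the step count, and the function names are illustrative assumptions standing in for a learned Field-of-Experts-type regularizer.

```python
import numpy as np

def grad_energy(x, x_noisy, lam):
    """Gradient of a toy energy E(x) = 0.5*||x - x_noisy||^2 + 0.5*lam*||x||^2.

    The quadratic regularizer is only a stand-in for a learned (e.g. Field of
    Experts type) regularizer; its gradient drives the flow below.
    """
    return (x - x_noisy) + lam * x

def gradient_flow_endpoint(x_noisy, lam, T, n_steps=50):
    """Integrate the state equation x' = -grad E(x) from the noisy image up to
    the stopping time T with explicit Euler steps and return x(T)."""
    x = x_noisy.copy()
    dt = T / n_steps
    for _ in range(n_steps):
        x = x - dt * grad_energy(x, x_noisy, lam)
    return x

def control_cost(x_noisy, x_clean, lam, T):
    """Cost functional: distance between the ground truth and the trajectory at time T."""
    x_T = gradient_flow_endpoint(x_noisy, lam, T)
    return 0.5 * np.sum((x_T - x_clean) ** 2)

# Hypothetical 1-D example: the cost can be evaluated for different stopping times T.
rng = np.random.default_rng(0)
x_clean = np.sin(np.linspace(0.0, np.pi, 64))
x_noisy = x_clean + 0.1 * rng.standard_normal(64)
costs = [control_cost(x_noisy, x_clean, lam=1.0, T=t) for t in (0.5, 1.0, 2.0)]
```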
“…Unrolling is a generic technique to include iterative energy minimization into neural network blocks, used also in low-level vision [57], medical image reconstruction [28], single image depth super-resolution [44], and semantic 3D reconstruction [6]. It allows us to combine the classical multi-view SR with learning-based single-view SR methods.…”
Section: Multi-view Aggregation (MVA) (mentioning)
confidence: 99%
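
Unrolling, as described above, turns a fixed number of iterations of an energy-minimization scheme into a stack of network blocks whose parameters (step sizes, regularization weights, filters) can be trained end to end. The sketch below unrolls plain gradient steps on a least-squares energy with a per-block step size and regularization weight; the specific energy, initialization, and parameter values are illustrative assumptions rather than the formulation of any of the cited works.

```python
import numpy as np

def unrolled_reconstruction(y, A, step_sizes, reg_weights):
    """Unroll K gradient steps on E(x) = 0.5*||A x - y||^2 + 0.5*reg*||x||^2.

    Each iteration acts as one network block; step_sizes[k] and reg_weights[k]
    play the role of that block's learnable parameters.
    """
    x = A.T @ y                                    # simple data-driven initialization
    for tau, reg in zip(step_sizes, reg_weights):
        grad = A.T @ (A @ x - y) + reg * x         # energy gradient at this block
        x = x - tau * grad                         # one unrolled descent step
    return x

# Hypothetical toy problem: recover x from noisy linear measurements y = A x + noise.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))
x_true = rng.standard_normal(10)
y = A @ x_true + 0.01 * rng.standard_normal(20)

K = 8                                              # number of unrolled blocks
step_sizes = np.full(K, 1e-2)                      # would be learned from data
reg_weights = np.full(K, 1e-1)                     # would be learned from data

x_hat = unrolled_reconstruction(y, A, step_sizes, reg_weights)
```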