2015
DOI: 10.48550/arxiv.1505.02120
Preprint

Bilevel approaches for learning of variational imaging models

Cited by 6 publications (9 citation statements) | References 0 publications
“…Recently established concepts of bilevel optimisation and parameter learning for variational imaging models (cf. [37,38]) might supplement our framework.…”
Section: MitosisAnalyser Framework
confidence: 96%
“…Consequently, this may in turn lead to enhancement of image processing. Recently established concepts of bilevel optimisation and parameter learning for variational imaging models (cf. [37,38])…”
confidence: 99%
“…This was first developed for hyperparameter optimization in neural networks (Larsen et al, 1996) and developed further by Pedregosa (2016). Similar approaches have been used for hyperparameter optimization in log-linear models (Foo et al, 2008), kernel selection (Chapelle et al, 2002;Seeger, 2007), and image reconstruction (Kunisch & Pock, 2013;Calatroni et al, 2015). Both approaches struggle with certain hyperparameters, since they differentiate gradient descent or the training loss with respect to the hyperparameters.…”
Section: Related Work
confidence: 99%
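The statement above describes differentiating the training pipeline with respect to hyperparameters. A minimal sketch of this idea, using implicit differentiation on a ridge-regression lower-level problem (the setting, data, and function names are illustrative assumptions, not taken from the cited works):

```python
import numpy as np

# Synthetic train/validation split for the illustration.
rng = np.random.default_rng(0)
X_tr, y_tr = rng.standard_normal((40, 5)), rng.standard_normal(40)
X_val, y_val = rng.standard_normal((20, 5)), rng.standard_normal(20)

def solve_lower(lam):
    """Lower level: w*(lam) = argmin_w ||X w - y||^2 + lam ||w||^2.
    Stationarity gives (X^T X + lam I) w* = X^T y."""
    H = X_tr.T @ X_tr + lam * np.eye(X_tr.shape[1])
    return np.linalg.solve(H, X_tr.T @ y_tr), H

def hypergradient(lam):
    """d L_val / d lam via the implicit function theorem:
    differentiating the stationarity condition in lam yields
    dw*/dlam = -H^{-1} w*."""
    w, H = solve_lower(lam)
    dw_dlam = -np.linalg.solve(H, w)
    grad_val = 2 * X_val.T @ (X_val @ w - y_val)  # d L_val / d w
    return float(grad_val @ dw_dlam)

def val_loss(lam):
    """Upper level: validation loss of the lower-level solution."""
    w, _ = solve_lower(lam)
    return float(np.sum((X_val @ w - y_val) ** 2))
```

One can sanity-check `hypergradient` against a central finite difference of `val_loss`; the same hypergradient then drives gradient descent on the hyperparameter itself.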
“…Once the parameters in the variational model are learned on the basis of the training set, then the learned model is used for new image data. See [2] for a recent review on bilevel learning in image processing.…”
Section: The Bilevel Optimization Problem In Function Space
confidence: 99%
“…We have outlined the reason for the Huber regularization above. The reason for the addition of the elliptic term $\mu \|Du\|_2^2$ to (1.1) is that it numerically renders the inversion of the Hessian of the lower-level functional more robust, and that it places the problem in a Hilbert space, thereby opening up a large toolbox for the analysis of the smoothed problem and its approximation properties; see also [8].…”
Section: The Bilevel Optimization Problem In Function Space
confidence: 99%
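For context, a smoothed lower-level problem of the type the quotation refers to can be sketched as follows. This is an assumption modelled on standard Huber-regularised TV denoising formulations, not the paper's equation (1.1); the symbols $f$, $\alpha$, $\gamma$ are illustrative:

```latex
% Hedged sketch of a smoothed lower-level denoising problem:
% data term + Huber-regularised TV term + elliptic smoothing term.
\min_{u} \; \frac{1}{2}\,\|u - f\|_{L^2}^2
  \;+\; \alpha \int_{\Omega} |Du|_{\gamma}
  \;+\; \frac{\mu}{2}\,\|Du\|_{L^2}^2
```

Here $|\cdot|_{\gamma}$ denotes the Huber smoothing of the norm, and $\mu > 0$ is the coefficient of the elliptic term that makes the Hessian inversion robust and places the problem in the Hilbert space $H^1$.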