We study linear inverse problems under the premise that the forward operator is not at hand but given indirectly through some input-output training pairs. We demonstrate that regularization by projection and variational regularization can be formulated using the training data only, without making use of the forward operator. We study convergence and stability of the regularized solutions in view of Seidman (1980 J. Optim. Theory Appl. 30 535), who showed that regularization by projection is not convergent in general, and give some insight into the generality of Seidman's nonconvergence example. Moreover, we show, analytically and numerically, that regularization by projection is indeed capable of learning linear operators, such as the Radon transform.
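The data-driven formulation admits a compact numerical sketch. The following is a minimal illustration (not the paper's exact scheme) of solving Au = f using only training pairs (u_i, f_i = A u_i): the candidate solution is a combination of training inputs whose image, a combination of training outputs, matches the data in a Tikhonov-stabilised least-squares sense. The hidden operator appears only to generate the synthetic pairs; all names and parameter values are illustrative.

```python
import numpy as np

def solve_from_pairs(U, F, f, alpha=1e-3):
    """Recover u with Au ≈ f using only training pairs (u_i, f_i = A u_i).

    U : (n, m) array whose rows are training inputs u_i.
    F : (n, k) array whose rows are the corresponding outputs f_i.
    f : (k,) new measurement.
    The forward operator A itself is never used: the candidate solution
    is u = sum_i c_i u_i, where c minimises ||sum_i c_i f_i - f||^2
    plus a small Tikhonov term alpha*||c||^2 for stability.
    """
    G = F @ F.T + alpha * np.eye(F.shape[0])  # regularised Gram matrix of outputs
    c = np.linalg.solve(G, F @ f)             # normal equations for the coefficients
    return U.T @ c

# Toy check: A is a hidden 5x5 matrix, never passed to the solver.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
U = rng.standard_normal((8, 5))               # training inputs
F = U @ A.T                                   # training outputs f_i = A u_i
u_true = rng.standard_normal(5)
u_rec = solve_from_pairs(U, F, A @ u_true, alpha=1e-8)
print(np.allclose(u_rec, u_true, atol=1e-3))
```

Because the eight random training inputs span all of R^5, the new measurement lies in the span of the training outputs and recovery succeeds; for genuinely ill-posed operators the parameter alpha plays the usual regularising role.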
In this work we analyse the functional J(u) = ‖∇u‖_∞ defined on Lipschitz functions with homogeneous Dirichlet boundary conditions. Our analysis is performed directly on the functional, without the need to approximate with smooth p-norms. We prove that its ground states coincide with multiples of the distance function to the boundary of the domain. Furthermore, we compute the L²-subdifferential of J and characterize the distance function as the unique non-negative eigenfunction of the subdifferential operator. We also study properties of general eigenfunctions, in particular their nodal sets. Furthermore, we prove that the distance function can be computed as the asymptotic profile of the gradient flow of J and construct analytic solutions of fast-marching type. In addition, we give a geometric characterization of the extreme points of the unit ball of J. Finally, we transfer many of these results to a discrete version of the functional defined on a finite weighted graph. Here, we analyze properties of distance functions on graphs and their gradients. The main difference between the continuum and discrete settings is that the distance function is not the unique non-negative eigenfunction on a graph.
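On the graph side, the basic object is the distance function to a prescribed boundary set, which can be computed with Dijkstra's algorithm. A minimal sketch (the graph, weights, and boundary set are illustrative choices, not taken from the paper):

```python
import heapq

def graph_distance(vertices, edges, boundary):
    """Distance to a boundary set on a finite weighted graph (Dijkstra).

    edges : dict mapping vertex -> list of (neighbour, weight) pairs.
    Returns d with d(v) = min over b in boundary of the graph distance
    from v to b, the discrete analogue of dist(x, boundary of the domain).
    """
    d = {v: float("inf") for v in vertices}
    for b in boundary:
        d[b] = 0.0
    heap = [(0.0, b) for b in boundary]
    heapq.heapify(heap)
    while heap:
        dv, v = heapq.heappop(heap)
        if dv > d[v]:
            continue                      # stale queue entry
        for w, weight in edges[v]:
            if dv + weight < d[w]:
                d[w] = dv + weight
                heapq.heappush(heap, (d[w], w))
    return d

# Path graph 0-1-2-3-4 with unit weights and boundary {0, 4}:
# the distance function is the discrete "tent" profile.
V = range(5)
E = {v: [(w, 1.0) for w in (v - 1, v + 1) if 0 <= w <= 4] for v in V}
d = graph_distance(V, E, boundary=[0, 4])
print([d[v] for v in V])  # [0.0, 1.0, 2.0, 1.0, 0.0]
```

Along every edge the difference quotient of d is at most 1 in magnitude and equals 1 on some edge, mirroring the continuum fact that the distance function satisfies ‖∇u‖_∞ = 1.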
The goal of this paper is to further develop an approach to inverse problems with imperfect forward operators that is based on partially ordered spaces. Studying the dual problem yields useful insights into the convergence of the regularised solutions and allows us to obtain convergence rates in terms of Bregman distances (as usual in inverse problems, under an additional assumption on the exact solution called the source condition). These results are obtained for general absolutely one-homogeneous functionals. In the special case of TV-based regularisation we also study the structure of regularised solutions and prove convergence of their level sets to those of an exact solution. Finally, using the developed theory, we adapt the concept of debiasing to inverse problems with imperfect operators and propose an approach to pointwise error estimation in TV-based regularisation.

Keywords: inverse problems, imperfect forward models, total variation, extended support, Bregman distances, convergence rates, error estimation, debiasing

We consider the linear operator equation

Au = f,    (1.1)

where A : L¹(Ω) → L^∞(Ω) is a linear operator and Ω ⊂ R^m is a bounded domain. We assume that there exists a non-negative solution of (1.1). For an appropriate functional J(·) : L¹ → R₊ ∪ {∞} we consider non-negative J-minimising solutions, which solve the following problem:

min_u J(u)  subject to  Au = f,  u ≥ 0.    (1.2)

We assume that the feasible set in (1.2) has at least one point with a finite value of J and denote a (possibly non-unique) solution of (1.2) by ū_J. Throughout this paper it is assumed that the regularisation functional J(·) is convex, proper and absolutely one-homogeneous. In practice the data f are not known precisely and only their perturbed version f̃ is available. In this case, we cannot simply replace the constraint Au = f in (1.2) with Au = f̃, since the solutions of the original problem (1.1) would no longer be feasible. Therefore, we need to relax the equality in (1.2) to guarantee the feasibility of solutions of the original problem (1.1).
This is the idea of the residual method [20, 23]. If the error in the data is bounded by some known constant δ, the residual method amounts to solving the following constrained problem:

min_u J(u)  subject to  ‖Au − f̃‖ ≤ δ,  u ≥ 0.    (1.3)

The fidelity function in this case becomes the characteristic function of the convex set {u : ‖Au − f̃‖ ≤ δ}. In the linear case, the residual method is equivalent to Tikhonov regularisation

min_{u ∈ L¹} ‖Au − f̃‖² + α J(u)

with the regularisation parameter α = α(f̃, δ) chosen according to Morozov's discrepancy principle [23]. In many practical situations not only do the data contain errors, but the forward operator that generated the data is also not perfectly known. In order to guarantee the feasibility of solutions of the original problem (1.1) in the constrained problem (1.3), one needs to account for the errors in the operator in the feasible set. If the errors in the operator are bounded by a known constant h (in the operator norm), the feasible set can be amended as follows in order to guarantee feasibility of the solutions of the original problem (1.1):

{u : ‖Ãu − f̃‖ ≤ δ + h‖u‖},

where Ã is the noisy operator. This optimi...
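The enlarged feasible set can be explored numerically. The sketch below is an illustrative toy instance, not a prescription of the text: it takes J(u) = Σᵢ uᵢ, which is absolutely one-homogeneous on the non-negative cone, perturbs a small random operator and its data within assumed bounds h and δ, and minimises J over the enlarged set {u ≥ 0 : ‖Ãu − f̃‖ ≤ δ + h‖u‖} with a generic nonlinear solver.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
m, n = 12, 6
A_true = rng.uniform(0, 1, (m, n))            # exact operator (unknown in practice)
u_true = np.array([0.0, 2.0, 0.0, 1.0, 0.0, 0.0])

h = 0.05                                       # assumed operator-error bound
delta = 0.05                                   # assumed data-error bound
# Noise is generated at half the assumed bounds so that u_true stays feasible.
A_noisy = A_true + (0.5 * h / np.sqrt(m * n)) * rng.standard_normal((m, n))
f_noisy = A_true @ u_true + (0.5 * delta / np.sqrt(m)) * rng.standard_normal(m)

# Enlarged residual-method feasible set: ||A_noisy u - f_noisy|| <= delta + h*||u||.
cons = {"type": "ineq",
        "fun": lambda u: delta + h * np.linalg.norm(u)
                         - np.linalg.norm(A_noisy @ u - f_noisy)}
# J(u) = sum(u), minimised over the feasible set intersected with u >= 0.
res = minimize(lambda u: u.sum(), x0=np.ones(n), constraints=[cons],
               bounds=[(0, None)] * n, method="SLSQP")
print(res.success, np.round(res.x, 2))
```

The reconstruction lands inside the enlarged ball by construction; how close it comes to u_true depends on δ, h and the conditioning of the operator, which is exactly what the convergence analysis in the text quantifies.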
Mathematical formulations of applied inverse problems often involve operator equations in normed functional spaces. In many cases, these spaces can, in addition, be endowed with a partial order relation, which turns them into Banach lattices. The availability of two tools, a partial order relation and a norm that is monotone with respect to this partial order, gives researchers more freedom in formulating the problem. For instance, errors in the approximate data are sometimes easier to describe in terms of pointwise bounds. Inverse problems in partially ordered normed spaces (Banach lattices) have been studied before in the case when a compact set containing the unknown exact solution is available a priori. It turned out that under this assumption it is possible, even in the ill-posed case, to compute 'pointwise' bounds for the unknown exact solution (or rather, bounds by means of the appropriate partial order), thus providing an error estimate by means of the partial order. Another useful property of this approach was that the uncertainty in the operator could be quantified by means of linear inequalities that were included in the corresponding optimization problems as (linear) constraints, which made the computations easier. However, the compactness assumption might sometimes be too demanding in practice. This paper aims at revealing the possibilities and advantages of using partial order in solving inverse problems in the case when no compact set of prior restrictions is available, concentrating on linear inverse (possibly ill-posed) problems.
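The pointwise-bound idea can be illustrated with linear programming: if the data are only known to lie in an order interval [f_low, f_up] and the solution is non-negative, componentwise bounds on the unknown follow from a pair of linear programs per component. The instance below is invented for illustration (operator, solution, and interval width are all assumptions of the example):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)
m, n = 8, 4
A = rng.uniform(0.5, 1.5, (m, n))              # entrywise positive toy operator
u_true = np.array([1.0, 0.5, 2.0, 0.0])
f = A @ u_true
eps = 0.05
f_low, f_up = f - eps, f + eps                 # pointwise (order-interval) data bounds

# For each component j, the tightest bounds consistent with
# f_low <= A u <= f_up and u >= 0 come from minimising and maximising u_j.
# (linprog's default variable bounds are already (0, None), i.e. u >= 0.)
A_ub = np.vstack([A, -A])
b_ub = np.concatenate([f_up, -f_low])
lower, upper = np.empty(n), np.empty(n)
for j in range(n):
    c = np.zeros(n); c[j] = 1.0
    lower[j] = linprog(c, A_ub=A_ub, b_ub=b_ub).fun        # min u_j over feasible set
    upper[j] = -linprog(-c, A_ub=A_ub, b_ub=b_ub).fun      # max u_j over feasible set
print(np.all(lower - 1e-8 <= u_true) and np.all(u_true <= upper + 1e-8))  # True
```

Since the exact solution is feasible for both programs, it is guaranteed to lie inside the computed interval [lower, upper]: an error estimate expressed through the partial order, obtained without any compactness assumption on the toy problem.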
We present and analyse an approach to image reconstruction problems with imperfect forward models based on partially ordered spaces (Banach lattices). In this approach, errors in the data and in the forward models are described using order intervals. The method can be characterised as the lattice analogue of the residual method, where the feasible set is defined by linear inequality constraints. The study of this feasible set is the main contribution of this paper. Convexity of this feasible set is examined in several settings and modifications for introducing additional information about the forward operator are considered. Numerical examples demonstrate the performance of the method in deblurring with errors in the blurring kernel.
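A small sketch of such a lattice feasible set: if the operator is only known up to an entrywise order interval [A_low, A_up] (think of an imprecisely known blurring kernel) and u ≥ 0, then Bu ∈ [A_low u, A_up u] for every admissible operator B, so membership in the feasible set reduces to the linear inequalities A_low u ≤ f_up and A_up u ≥ f_low. The toy instance below is an illustration under these assumptions, again with the linear surrogate J(u) = Σᵢ uᵢ as objective:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(3)
m, n = 10, 5
A = rng.uniform(0.2, 1.0, (m, n))               # exact blurring-type operator
tau = 0.02
A_low, A_up = A - tau, A + tau                  # entrywise order interval for A
u_true = np.abs(rng.standard_normal(n))         # non-negative ground truth
eps = 0.05
f_low, f_up = A @ u_true - eps, A @ u_true + eps  # order interval for the data

# Lattice feasible set: {u >= 0 : A_low u <= f_up and A_up u >= f_lo<w},
# expressed as a single stack of linear inequality constraints.
A_ub = np.vstack([A_low, -A_up])
b_ub = np.concatenate([f_up, -f_low])
res = linprog(np.ones(n), A_ub=A_ub, b_ub=b_ub)  # minimise J(u) = sum(u)
print(res.status)  # 0: a feasible (and J-minimal) reconstruction exists
```

The exact solution u_true satisfies all the inequalities by construction, so the feasible set is non-empty and the linear program succeeds; everything about the operator uncertainty has been absorbed into ordinary linear constraints.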