Image deblurring is an important topic in imaging science. In this review, we consider together fluorescence microscopy and optical/infrared astronomy because of two common features: in both cases the imaging system can be described, with a sufficiently good approximation, by a convolution operator, whose kernel is the so-called point-spread function (PSF); moreover, the data are affected by photon noise, described by a Poisson process. This statistical property of the noise, which is also shared by emission tomography, is the basis of the maximum likelihood and Bayesian approaches introduced in the mid-1980s. Since then, a huge amount of literature has been produced on these topics. This paper is both a tutorial and a survey of a relevant part of this literature, including some of our previous contributions. We discuss the mathematical modeling of the process of image formation and detection, and we introduce the so-called Bayesian paradigm that provides the basis of the statistical treatment of the problem. Next, we describe and discuss the most frequently used algorithms as well as other approaches based on a different description of the Poisson noise. We conclude with a review of other topics related to image deblurring such as boundary effect correction, space-variant PSFs, super-resolution, blind deconvolution and multiple-image deconvolution.
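The imaging model described above (circular convolution with the PSF followed by Poisson-distributed photon detection) and the classical maximum-likelihood iteration it motivates, Richardson-Lucy, can be sketched as follows; the object, PSF width and iteration count are illustrative choices, not values taken from any of the works reviewed:

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground truth: a bright square on a faint background (photon counts).
x_true = np.full((32, 32), 5.0)
x_true[12:20, 12:20] = 200.0

# Gaussian PSF centred at the origin with wrap-around, normalised to unit
# sum, so that blurring is a circular convolution computed via the FFT.
i = np.arange(32)
d = np.minimum(i, 32 - i)
psf = np.exp(-(d[:, None] ** 2 + d[None, :] ** 2) / (2 * 2.0 ** 2))
psf /= psf.sum()
otf = np.fft.fft2(psf)

def conv(img, kernel_fft):
    # Circular convolution as a pointwise product in the Fourier domain.
    return np.real(np.fft.ifft2(np.fft.fft2(img) * kernel_fft))

# Detected image: Poisson counts whose mean is the blurred object.
y = rng.poisson(conv(x_true, otf)).astype(float)

# Richardson-Lucy (maximum-likelihood EM) iteration for Poisson data:
# x <- x * H^T( y / (H x) ), with the adjoint applied via conj(otf).
x = np.full_like(y, y.mean())
for _ in range(100):
    ratio = y / np.maximum(conv(x, otf), 1e-12)
    x *= conv(ratio, np.conj(otf))
```

With a column-normalised PSF the iteration conserves the total flux (the restored image has the same total count as the data) and keeps the iterates nonnegative, two well-known properties of this multiplicative update.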
Abstract. This paper is the first part of a work which is concerned with linear methods for the solution of linear inverse problems with discrete data. Such problems occur frequently in instrumental science, for example tomography, radar, sonar, optical imaging, particle sizing and so on. We give a general formulation of the problem by extending the approach of Backus and Gilbert and by defining a mapping from an infinite-dimensional function space into a finite-dimensional vector space. The singular system of this mapping is introduced and used to define natural bases both in the solution and in the data space. We analyse in this context normal solutions, least-squares solutions and generalised inverses. We illustrate the wide applicability of the singular system technique by discussing several examples in detail. Particular attention is devoted to showing the many connections between this method and techniques developed in other topics like the extrapolation of band-limited signals and the interpolation of functions specified on a finite set of points. For example, orthogonal polynomials for least-squares approximation, spline functions and discrete prolate spheroidal functions are particular cases of the singular functions introduced in this paper. The problem of numerical stability is briefly discussed, but the investigation of the methods developed for overcoming this difficulty, like truncated expansions in the singular bases, regularised solutions, iterative methods and so on, is deferred to a second part of this work.
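The singular-system machinery can be illustrated on a small discretised example; the Gaussian kernel, grid sizes, noise level and truncation threshold below are arbitrary illustrative choices, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Discretised smoothing operator: n data values sampled from a function
# known on m grid points (an inverse problem with discrete data).
m, n = 60, 20
s = np.linspace(0.0, 1.0, m)
t = np.linspace(0.0, 1.0, n)
A = np.exp(-((t[:, None] - s[None, :]) ** 2) / 0.02)
A /= A.sum(axis=1, keepdims=True)

# Singular system of the mapping: A = U diag(sigma) V^T.
U, sigma, Vt = np.linalg.svd(A, full_matrices=False)

f = np.sin(2 * np.pi * s)   # object
g = A @ f                   # exact data

# Generalised inverse expressed in the singular bases:
# f_dag = sum_k (u_k . g) / sigma_k * v_k
f_dag = Vt.T @ ((U.T @ g) / sigma)

# With noisy data the small singular values amplify the error enormously;
# truncating the expansion (numerical filtering) restores stability.
g_noisy = g + 1e-3 * rng.standard_normal(n)
f_dag_noisy = Vt.T @ ((U.T @ g_noisy) / sigma)
keep = sigma > 1e-2 * sigma[0]
f_trunc = Vt[keep].T @ ((U[:, keep].T @ g_noisy) / sigma[keep])
```

The generalised inverse reproduces exact data perfectly, while the last two lines preview the stability issue (and the truncated-expansion remedy) deferred to part II of the work.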
Several methods based on different image models have been proposed and developed for image denoising. Some of them, such as total variation (TV) and wavelet thresholding, are based on the assumption of additive Gaussian noise. Recently the TV approach has been extended to the case of Poisson noise, a model describing the effect of photon counting in applications such as emission tomography, microscopy and astronomy. For the removal of this kind of noise we consider an approach based on a constrained optimization problem, with an objective function describing TV and other edge-preserving regularizations of the Kullback-Leibler divergence. We introduce a new discrepancy principle for the choice of the regularization parameter, which is justified by the statistical properties of the Poisson noise. For solving the optimization problem we propose a particular form of a general scaled gradient projection (SGP) method, recently introduced for image deblurring. We derive the form of the scaling from a decomposition of the gradient of the regularization functional into a positive and a negative part. The beneficial effect of the scaling is proved by means of numerical simulations, showing that the performance of the proposed form of SGP is superior to that of the most efficient gradient projection methods. An extended numerical analysis of the dependence of the solution on the regularization parameter is also performed to test the effectiveness of the proposed discrepancy principle.
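The split-gradient scaling described above, a decomposition of the regularization gradient into positive and negative parts with update x <- x * U / V, can be shown on a toy denoising problem. As an assumption made here for checkability, a quadratic penalty replaces TV (its minimizer has a closed form), and the plain multiplicative update stands in for the full SGP method:

```python
import numpy as np

rng = np.random.default_rng(2)

def kl_div(y, x):
    # Generalized Kullback-Leibler divergence, the Poisson data-fidelity term.
    return np.sum(y * np.log(y / x) + x - y)

# Noisy photon counts to be denoised (clipped so the log stays defined).
y = np.maximum(rng.poisson(50.0, size=1000).astype(float), 1.0)

# Split-gradient scaling: write the gradient of the objective as V - U with
# U, V > 0 and iterate x <- x * U / V.  For KL(y;x) + (beta/2)||x||^2 the
# gradient is (1 - y/x) + beta*x, so U = y/x and V = 1 + beta*x.
beta = 0.1
x = y.copy()
for _ in range(500):
    x = x * (y / x) / (1.0 + beta * x)   # i.e. x <- y / (1 + beta*x)

# The fixed point solves beta*x^2 + x - y = 0 componentwise.
x_star = (-1.0 + np.sqrt(1.0 + 4.0 * beta * y)) / (2.0 * beta)
```

The iterates stay positive by construction (every factor is positive), which is the appeal of this scaling: nonnegativity needs no explicit projection in this toy case.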
In applications of imaging science, such as emission tomography, fluorescence microscopy and optical/infrared astronomy, image intensity is measured via the counting of incident particles (photons, γ-rays, etc). Fluctuations in the emission-counting process can be described by modeling the data as realizations of Poisson random variables (Poisson data). A maximum-likelihood approach for image reconstruction from Poisson data was proposed in the mid-1980s. Since the consequent maximization problem is, in general, ill-conditioned, various kinds of regularizations were introduced in the framework of the so-called Bayesian paradigm. A modification of the well-known Tikhonov regularization strategy results in the data-fidelity function being a generalized Kullback-Leibler divergence. Then a relevant issue is to find rules for selecting a proper value of the regularization parameter. In this paper we propose a criterion, nicknamed discrepancy principle for Poisson data, that applies to both denoising and deblurring problems and fits quite naturally the statistical properties of the data. The main purpose of the paper is to establish conditions, on the data and the imaging matrix, ensuring that the proposed criterion does actually provide a unique value of the regularization parameter for various classes of regularization functions. A few numerical experiments are performed to demonstrate its effectiveness. More extensive numerical analysis and comparison with other proposed criteria will be the object of future work.
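A sketch of such a criterion on a toy denoising problem, under two stated assumptions: that the principle selects the beta at which twice the generalized KL divergence equals the number n of data points (motivated by E[2 KL] being approximately n for Poisson data), and that a quadratic regularizer is used so the regularized solution has a closed form; neither choice is claimed to reproduce the paper's exact setting:

```python
import numpy as np

rng = np.random.default_rng(3)

def kl_div(y, x):
    # Generalized Kullback-Leibler divergence.
    return np.sum(y * np.log(y / x) + x - y)

y = np.maximum(rng.poisson(30.0, size=2000).astype(float), 1.0)
n = y.size

def x_reg(beta):
    # Closed-form minimizer of KL(y;x) + (beta/2)||x||^2, componentwise.
    return (-1.0 + np.sqrt(1.0 + 4.0 * beta * y)) / (2.0 * beta)

def discrepancy(beta):
    # Assumed criterion: select beta with 2*KL/n = 1.
    return 2.0 * kl_div(y, x_reg(beta)) / n

# The discrepancy grows monotonically with beta (the solution moves away
# from the data), so the crossing can be located by bisection on a log scale.
lo, hi = 1e-8, 1e2
for _ in range(200):
    mid = np.sqrt(lo * hi)
    if discrepancy(mid) < 1.0:
        lo = mid
    else:
        hi = mid
beta_star = np.sqrt(lo * hi)
```

Monotonicity is what the paper's uniqueness conditions are about: when the discrepancy is strictly increasing in beta, the criterion singles out exactly one parameter value.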
In the first part of this work a general definition of an inverse problem with discrete data has been given and an analysis in terms of singular systems has been performed. The problem of the numerical stability of the solution, which in that paper was only briefly discussed, is the main topic of this second part. When the condition number of the problem is too large, a small error on the data can produce an extremely large error on the generalised solution, which therefore has no physical meaning. We review most of the methods which have been developed for overcoming this difficulty, including numerical filtering, Tikhonov regularisation, iterative methods, the Backus-Gilbert method and so on. Regularisation methods for the stable approximation of generalised solutions obtained through minimisation of suitable seminorms (C-generalised solutions), such as the method of Phillips, are also considered.
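The effect of a large condition number, and the stabilisation provided by Tikhonov regularisation, can be demonstrated on a classical ill-conditioned example (a Hilbert matrix); the noise level and the value of the regularisation parameter mu below are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(4)

# A notoriously ill-conditioned model matrix: the 10x10 Hilbert matrix,
# with entries 1/(i + j + 1).
n = 10
A = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)

x_true = np.ones(n)
g = A @ x_true
g_noisy = g + 1e-8 * rng.standard_normal(n)

# Unregularised (generalised) solution: the tiny data error is amplified
# by the condition number and the result loses all physical meaning.
x_naive = np.linalg.solve(A, g_noisy)

# Tikhonov regularisation: minimise ||A x - g||^2 + mu ||x||^2, i.e.
# x_mu = (A^T A + mu I)^{-1} A^T g, which damps the small singular values.
mu = 1e-8
x_tik = np.linalg.solve(A.T @ A + mu * np.eye(n), A.T @ g_noisy)
```

The regularised solution trades a small bias for a drastic reduction of the noise amplification, which is precisely the stability-versus-accuracy compromise discussed in this second part.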