2004
DOI: 10.1002/cpa.20042
An iterative thresholding algorithm for linear inverse problems with a sparsity constraint

Abstract: We consider linear inverse problems where the solution is assumed to have a sparse expansion on an arbitrary preassigned orthonormal basis. We prove that replacing the usual quadratic regularizing penalties by weighted ℓp-penalties on the coefficients of such expansions, with 1 ≤ p ≤ 2, still regularizes the problem. Use of such ℓp-penalized problems with p < 2 is often advocated when one expects the underlying ideal noiseless solution to have a sparse expansion with respect to the basis under consideration. T…
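In the ℓ1 case the algorithm analyzed in the paper is iterative soft-thresholding: a Landweber (gradient) step on the data-fidelity term followed by componentwise shrinkage of the coefficients. A minimal sketch, with all problem sizes, the seed, and the parameters chosen purely for illustration:

```python
import numpy as np

# Soft-thresholding operator S_tau: the proximal map of tau * ||.||_1.
def soft_threshold(z, tau):
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

# Toy sparse recovery problem (sizes, seed, and parameters are illustrative).
rng = np.random.default_rng(0)
m, n = 40, 100
A = rng.standard_normal((m, n)) / np.sqrt(m)      # forward operator
x_true = np.zeros(n)
x_true[[3, 17, 42]] = [2.0, -1.5, 1.0]            # sparse coefficient vector
y = A @ x_true + 0.01 * rng.standard_normal(m)    # noisy data

# Iterated soft-thresholding: gradient step on ||Ax - y||^2 / 2, then
# shrinkage; a step size <= 1 / ||A||^2 keeps the iteration convergent.
step = 1.0 / np.linalg.norm(A, 2) ** 2
tau = 0.05                                        # sparsity weight
x = np.zeros(n)
for _ in range(500):
    x = soft_threshold(x + step * A.T @ (y - A @ x), step * tau)
```

At convergence `x` approximates the minimizer of ‖Ax − y‖²/2 + τ‖x‖₁, so in this toy run its large entries should line up (up to a small shrinkage bias) with the support of `x_true`.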

Cited by 4,162 publications (3,681 citation statements)
References 45 publications
“…The sparsity of natural images under these tight frames has been successfully used to solve many image restoration tasks including image denoising, non-blind image deblurring, image inpainting, etc. (e.g. [12,6,4]). Therefore, we believe that the high sparsity of images under a suitable tight frame system is also a good regularization on the latent image in our blind deblurring problem.…”
Section: Our Approachmentioning
confidence: 99%
“…Two major categories of such techniques include parallel imaging (PI) [3]- [5] and compressed sensing (CS) [6]. While with PI missing k-space data are interpolated based on a priori knowledge of the coil sensitivity profiles, CS interpolates the missing data by imposing an a priori transform domain sparsity constraint to regularize the reconstruction problem [7]. Due to the independence of CS and PI reconstruction constraints, a combined CS and PI reconstruction allows for further acceleration [8].…”
Section: Introductionmentioning
confidence: 99%
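The transform-domain sparsity constraint described in the statement above can be illustrated with a toy 1-D analogue of k-space interpolation. Because replacing the sampled frequencies is an orthogonal projection onto the data-consistency set, the loop below is exactly iterative soft-thresholding with unit step; the signal, sampling density, threshold, and seed are all invented for illustration:

```python
import numpy as np

# Toy 1-D analogue of CS interpolation of missing k-space samples.
rng = np.random.default_rng(1)
n = 128
x_true = np.zeros(n)
x_true[[5, 40, 90]] = [1.0, -2.0, 1.5]      # signal sparse in the image domain
mask = rng.random(n) < 0.7                  # ~70% of frequencies are sampled
meas = np.fft.fft(x_true) * mask            # acquired k-space data

x = np.zeros(n)
for _ in range(300):
    X = np.fft.fft(x)
    X = np.where(mask, meas, X)             # data consistency on sampled lines
    x = np.fft.ifft(X).real                 # back to the image domain
    x = np.sign(x) * np.maximum(np.abs(x) - 0.05, 0.0)  # sparsity constraint
```

The unsampled frequencies are filled in by the sparsity prior alone: the iterate keeps the measured spectrum exact while shrinking spurious image-domain components toward zero.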
“…These reconstruction algorithms are generally in the state-of-the-art compressive sensing (CS) framework, utilizing prior knowledge effectively and permitting accurate and stable reconstruction from a more limited amount of raw data than requested by the classic Shannon sampling theory. CS-inspired reconstruction algorithms can be roughly categorized into the following stages (Wang et al , 2011): (1) The 1st stage: Candes’ total variation (TV) minimization method and variants (initially used for MRI and later on tried out for CT) (Li and Santosa, ’96; Jonsson et al , ’98; Candes and Tao, 2005; Landi and Piccolomini, 2005; Yu et al , 2005; Candes et al , 2006, 2008; Block et al , 2007; Landi et al , 2008; Sidky and Pan, 2008; Yu and Wang, 2009); (2) the 2nd stage: Soft-thresholding method adapted for X-ray CT to guarantee the convergence (Daubechies et al , 2004; Yu and Wang, 2010; Liu et al , 2011; Yu et al , 2011); and (3) the 3rd stage: Dictionary learning (DL) and non-local mean methods being actively developed by our group and others (Kreutz-Delgado et al , 2003; Gao et al , 2011; Lu et al , 2012; Xu et al , 2012; Zhao et al , 2012a,b). …”
Section: Introductionmentioning
confidence: 99%
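The first-stage total-variation minimization mentioned in the quotation can be sketched in 1-D as gradient descent on a smoothed TV penalty, E(u) = ‖u − f‖²/2 + λ Σᵢ √((uᵢ₊₁ − uᵢ)² + ε); the test signal, noise level, and all parameters below are illustrative assumptions, not taken from any of the cited works:

```python
import numpy as np

# 1-D TV denoising by gradient descent on a smoothed TV energy.
rng = np.random.default_rng(2)
n = 100
f_clean = np.where(np.arange(n) < 50, 0.0, 1.0)   # piecewise-constant signal
f = f_clean + 0.1 * rng.standard_normal(n)        # noisy observation

u = f.copy()
lam, eps, step = 0.3, 1e-2, 0.05
for _ in range(3000):
    du = np.diff(u)
    w = du / np.sqrt(du ** 2 + eps)               # derivative of smoothed |du|
    div = np.diff(np.concatenate(([0.0], w, [0.0])))  # discrete divergence
    u -= step * ((u - f) - lam * div)             # gradient descent step
```

Unlike a quadratic penalty, the TV term suppresses the noise on the flat regions while leaving the jump at the midpoint nearly intact, which is why TV regularization suits the piecewise-constant structures common in CT and MRI.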