Learning High-Order Filters for Efficient Blind Deconvolution of Document Photographs
2016
DOI: 10.1007/978-3-319-46487-9_45

Cited by 41 publications (21 citation statements). References 20 publications.

Citation statements (ordered by relevance):
“…3) Iterative shrinkage: This is a variant of sparse reconstruction which recasts the regularization problem as an iterative procedure in which the dominant feature coefficients are preserved at each iteration. Different regularizers for image deconvolution can be found in [57]–[61]. 4) Variational regularization: Known as the total variation (TV) method, in which the priors for either the blur kernel or the latent image are regulated by the TV norm [48]–[52]; this norm preserves sharp edges while preventing Gibbs oscillations during recovery.…”
Section: arXiv:1810.10725v2 [eess.IV] 19 Jul 2019
confidence: 99%
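As a rough illustration of the iterative-shrinkage idea quoted above, the sketch below runs a plain ISTA loop for non-blind deconvolution with an ℓ1 prior on the pixels. It is a minimal sketch, not code from the cited works: the function name, the circular-boundary assumption, and the parameter defaults are all illustrative.

```python
import numpy as np

def ista_deconv(y, k, lam=0.01, n_iter=200):
    """Plain ISTA sketch for min_x 0.5*||k*x - y||^2 + lam*||x||_1.

    y : blurred image (2-D array), k : known blur kernel (small 2-D array),
    lam : sparsity weight. Convolution is circular, handled in the Fourier domain.
    """
    K = np.fft.fft2(k, s=y.shape)                 # kernel spectrum (zero-padded to image size)
    step = 1.0 / (np.abs(K).max() ** 2 + 1e-12)   # 1/L, L = Lipschitz constant of the data term
    x = y.copy()
    for _ in range(n_iter):
        resid = np.real(np.fft.ifft2(K * np.fft.fft2(x))) - y          # k*x - y
        grad = np.real(np.fft.ifft2(np.conj(K) * np.fft.fft2(resid)))  # k^T (k*x - y)
        z = x - step * grad                                            # gradient step on the data term
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)       # shrinkage: keep dominant coefficients
    return x
```

In practice the sparsity penalty is usually placed on gradient or wavelet coefficients rather than directly on pixels; the pixel-domain version above only keeps the shrinkage step easy to see.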
“…Trained CNN model to classify blur vs. clean as an image prior for a regularized minimization formulation.
Simoes [35], 2016, NB: diagonalizing the unknown convolution operator using the FFT and solving via ADMM.
Kim [36], 2015, B: encode temporal/spatial coherency of a dynamic scene using optical-flow/TV regularized minimization.
Liu [37], 2014, B: estimate blur from image spectral properties and feed it into a regularized TV/eigenvalue minimization.
Mosleh [38], 2014, N: encode ringing artifacts using Gabor wavelets and fit them into a regularized minimization for cancellation.
Pan [39], 2014, N: text image deblurring regularized by sparse encoding of the spatial/gradient domains.
Pan [40], 2013, B: estimates the kernel and the deblurred image from a combined sparse regularization framework.
Kim [41], 2013, B: dynamic image deconvolution using TV/Tikhonov/temporal-sparsity regularized minimization.
Shen [42], 2012, B: TV/Tikhonov regularized minimization for image deconvolution.
Sroubek [43], 2012, B: ℓ1-regularized minimization for image deconvolution.
Dong [44], 2011, NB: learn adaptive bases and use them in adaptive regularized minimization for sparse reconstruction.
Zhang [45], [46], 2011, B: sparse regulation of images via a KSVD library for deconvolution, applied to facial recognition.
Bai [47], 2018, B: both kernel and image recovered via combined regularization using reweighted graph TV priors.
Lou [48], 2015, N: weighted differences of TV regularizers in ℓ1/ℓ2 norms, solved by a split-variable technique.
Zhang [49], 2014, N: local/non-local similarities defined by TV1/TV2 and regulated by combined minimization.
Xu [50], 2012, B: regulate motion by the difference of a depth map and deconvolve via non-convex TV minimization.
Chan [51], 2011, N: deconvolve images/videos using spatial/temporal TV regularization solved by a split-variable technique.
Afonso [52], 2010, N: deconvolve images using TV regularization solved by a split-variable technique.
Li [53], 2018, B: non-iterative deconvolution via a combination of Wiener filters, solved by a system of linear equations.
Bertero [54], 2010, N: generalized Kullback-Leibler divergence function to regularize Poisson images.
Cho [16], 2009, B: separate recovery of the motion kernel and the image from a residual image using Tikhonov regularization.
Wiener [55], [56], 1949, N: regulate the image spectrum in the Fourier domain with the inverse kernel response.
Xiao …”
Section: Author
confidence: 99%
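One of the entries above refers to the classical Wiener filter [55], [56], which divides the blurred spectrum by the kernel response while damping frequencies where the kernel is weak. A minimal sketch, assuming a known kernel and a scalar noise-to-signal ratio (the function name and the default nsr are illustrative):

```python
import numpy as np

def wiener_deconv(y, k, nsr=1e-2):
    """Classical Wiener deconvolution sketch (non-blind).

    y : blurred image, k : known blur kernel, nsr : assumed noise-to-signal ratio.
    """
    Y = np.fft.fft2(y)
    K = np.fft.fft2(k, s=y.shape)              # kernel spectrum (zero-padded to image size)
    W = np.conj(K) / (np.abs(K) ** 2 + nsr)    # regularized inverse kernel response
    return np.real(np.fft.ifft2(W * Y))
```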
“…This approach has been extended to license plates in [36]. [40] proposes to learn a multi-scale cascade of shrinkage fields model. This model, however, does not seem to generalize to natural images.…”
Section: Related Work
confidence: 99%
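The multi-scale cascade mentioned in the statement above can be read as a coarse-to-fine loop that repeatedly applies a single refinement stage. The sketch below only illustrates that control flow; stage_fn, the scale schedule, and the resizing choices are assumptions for the example, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import zoom

def multiscale_cascade(y, k, stage_fn, n_scales=3, stages_per_scale=2):
    """Coarse-to-fine driver sketch: deblur a downsampled copy first, then
    upsample the estimate and refine it at the next finer scale.

    y : blurred image, k : blur kernel,
    stage_fn(x, y, k) : any single refinement stage, e.g. a learned shrinkage update.
    """
    x = None
    for s in [0.5 ** i for i in range(n_scales - 1, -1, -1)]:   # coarse -> fine
        ys = zoom(y, s) if s != 1.0 else y                      # observation at this scale
        ks = zoom(k, s) if s != 1.0 else k
        ks = ks / max(ks.sum(), 1e-8)                           # keep the kernel normalized
        if x is None:
            x = ys.copy()                                       # initialize at the coarsest scale
        else:
            x = zoom(x, (ys.shape[0] / x.shape[0],              # upsample the previous estimate
                         ys.shape[1] / x.shape[1]))
        for _ in range(stages_per_scale):
            x = stage_fn(x, ys, ks)
    return x
```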
“…Some representative examples of such methods include trainable random field models such as the separable Markov random field (MRFSepa) [35], regression tree fields (RTF) [36], cascaded shrinkage fields (CSF) [8], trainable nonlinear reaction diffusion (TRD) models [9], and their extensions [37], [38], [39]. The state-of-the-art CSF and TRD methods can be derived from the FoE model [30] by unrolling the corresponding optimization iterations into feed-forward networks, where the parameters of each network are trained by minimizing the error between its output images and the ground truth for each specific task.…”
Section: Introduction
confidence: 99%
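To make the "unrolled optimization as a feed-forward network" idea quoted above concrete, here is one simplified shrinkage-fields-style stage: filter responses are passed through point-wise shrinkage functions (learned per stage in CSF/TRD), and the resulting quadratic problem is solved in closed form in the Fourier domain under circular boundary conditions. The filter bank, the shrinkage functions, and the names below are placeholders, not the trained models from the cited papers.

```python
import numpy as np

def csf_stage(x, y, k, filters, shrink_fns):
    """One simplified shrinkage-fields-style stage (circular boundaries).

    Solves  x+ = argmin_x ||k*x - y||^2 + sum_i ||f_i*x - psi_i(f_i*x_prev)||^2
    in closed form in the Fourier domain, where psi_i are the (normally learned)
    point-wise shrinkage functions applied to the filter responses.
    """
    X = np.fft.fft2(x)
    K = np.fft.fft2(k, s=x.shape)
    num = np.conj(K) * np.fft.fft2(y)             # data term: k^T y
    den = np.abs(K) ** 2                          # data term: |K|^2
    for f, psi in zip(filters, shrink_fns):
        F = np.fft.fft2(f, s=x.shape)
        r = np.real(np.fft.ifft2(F * X))          # filter response f_i * x_prev
        num += np.conj(F) * np.fft.fft2(psi(r))   # f_i^T psi_i(f_i * x_prev)
        den += np.abs(F) ** 2                     # |F_i|^2
    return np.real(np.fft.ifft2(num / (den + 1e-8)))

# Toy usage with fixed soft-thresholding standing in for learned shrinkage functions:
# filters = [np.array([[1.0, -1.0]]), np.array([[1.0], [-1.0]])]   # horizontal / vertical gradients
# soft = lambda t: np.sign(t) * np.maximum(np.abs(t) - 0.05, 0.0)
# x1 = csf_stage(x0, y, k, filters, [soft, soft])
```

Chaining several such stages, each with its own filters and shrinkage functions, gives the feed-forward network described in the statement; training then amounts to fitting those per-stage parameters to minimize the output error against ground truth.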