2016
DOI: 10.1016/j.sigpro.2015.11.025

Single image super-resolution using regularization of non-local steering kernel regression

Cited by 27 publications (10 citation statements)
References 28 publications
“…As the multi-scale pyramid is built with an upscaling factor of 2, reaching scale factors of 2, 4, and 8 is quite time-consuming, and there are no prominent variations in image structure at such a small step size. The average PSNR of [19], compared with existing SR methods such as [20], [21], is improved on four benchmark datasets, viz. Set5, Set14, BSD500 and UIUC, but the PSNR of [22] was found to be higher.…”
Section: A. Techniques For Single Image Super Resolution (SISR)
confidence: 99%
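The excerpt above describes reaching larger scale factors by stacking ×2 upscaling steps in a multi-scale pyramid. A minimal sketch of that scheme follows; `pyramid_sr` and the nearest-neighbour `upscale_x2` are illustrative stand-ins (a real SR method would replace the replication step with learned regression), not the cited papers' implementations:

```python
import numpy as np

def upscale_x2(img):
    """Nearest-neighbour 2x upscale: a placeholder for one pyramid level's
    SR step (the cited methods use learned mappings here instead)."""
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

def pyramid_sr(lr, target_factor):
    """Reach a power-of-two target scale factor (2, 4, 8, ...) by
    repeatedly applying the x2 step, as in a multi-scale pyramid."""
    assert target_factor >= 2 and (target_factor & (target_factor - 1)) == 0
    img, f = lr, 1
    while f < target_factor:
        img = upscale_x2(img)
        f *= 2
    return img

lr = np.arange(4.0).reshape(2, 2)
hr = pyramid_sr(lr, 8)   # three x2 steps: 2x2 -> 16x16
```

This makes the excerpt's cost argument concrete: a factor of 8 requires three full SR passes, which is why the quoted authors call the small step size time-consuming.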
“…Traditional TV regularisation assumes that most of a natural image is smooth; hence, the TV modulus of a natural image should be small. The traditional TV regularisation model is as follows [42–44]:

$$\hat{\boldsymbol{H}}_{\mathrm{t}} = \arg\min_{\boldsymbol{H}_{\mathrm{t}}} \left\{ \left\| \boldsymbol{H}_{\mathrm{t}} \right\|_{TV} + \frac{\lambda}{2} \left\| \boldsymbol{\Psi} \boldsymbol{H}_{\mathrm{t}} - \boldsymbol{L}_{\mathrm{t}} \right\|_2^2 \right\}$$

where $\hat{\boldsymbol{H}}_{\mathrm{t}}$ is the reconstructed HR image; $\boldsymbol{H}_{\mathrm{t}}$ is the HR image to be optimised; $\lambda$ is the Lagrange multiplier; $\boldsymbol{L}_{\mathrm{t}}$ is the original LR image; and $\boldsymbol{\Psi}$ is the degradation matrix.…”
Section: Related Work
confidence: 99%
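The TV model quoted above can be sketched numerically with gradient descent on a smoothed TV norm. This is a toy illustration under stated assumptions, not the cited papers' solver: `block_mean` stands in for the degradation matrix Ψ (assumed here to be 2×2 block averaging), and the smoothing constant `eps`, step size, and iteration count are arbitrary choices:

```python
import numpy as np

def block_mean(H):
    """Assumed degradation operator Psi: 2x2 block averaging."""
    h, w = H.shape
    return H.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def tv_grad(H, eps=1e-2):
    """Gradient of a smoothed anisotropic TV norm,
    sum sqrt(dx^2 + eps) + sum sqrt(dy^2 + eps) over forward differences."""
    g = np.zeros_like(H)
    dx = np.diff(H, axis=1)
    wx = dx / np.sqrt(dx ** 2 + eps)
    g[:, 1:] += wx          # d/dH[i, j+1] of each horizontal term
    g[:, :-1] -= wx         # d/dH[i, j]
    dy = np.diff(H, axis=0)
    wy = dy / np.sqrt(dy ** 2 + eps)
    g[1:, :] += wy
    g[:-1, :] -= wy
    return g

def tv_super_resolve(L, lam=4.0, steps=300, step_size=0.01, eps=1e-2):
    """Minimise ||H||_TV + (lam/2) ||Psi H - L||_2^2 by gradient descent."""
    H = np.repeat(np.repeat(L, 2, axis=0), 2, axis=1)  # pixel-replication init
    for _ in range(steps):
        resid = block_mean(H) - L
        # adjoint of 2x2 averaging spreads each residual over its block
        data_g = lam * np.repeat(np.repeat(resid, 2, axis=0), 2, axis=1) / 4.0
        H = H - step_size * (tv_grad(H, eps) + data_g)
    return H
```

Starting from pixel replication, the data term is exactly satisfied, so the descent trades a small fidelity loss for a lower TV value, which is the balance the λ multiplier in the quoted model controls.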
“…For example, kernel-based regression [26], [27] was used to learn nonlinear regression functions for mapping low-resolution feature vectors to high-resolution feature vectors in [11], [28]. Steering kernel regression was used by [29], [30]. Wang et al. [31] used active-sampling Gaussian process regression for super-resolution.…”
Section: A Brief Review On Single-Image Super-Resolution
confidence: 99%
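The excerpt above describes learning a nonlinear map from low-resolution to high-resolution feature vectors with kernel regression. A minimal kernel ridge regression sketch of that idea follows; the function names, the Gaussian kernel choice, and the 1-D toy features are illustrative assumptions, not the cited methods:

```python
import numpy as np

def gaussian_kernel(A, B, sigma=1.0):
    """Gaussian kernel matrix between row-vector sets A (m,d) and B (n,d)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def fit_krr(X, Y, sigma=1.0, reg=1e-4):
    """Solve (K + reg*I) alpha = Y for the dual coefficients."""
    K = gaussian_kernel(X, X, sigma)
    return np.linalg.solve(K + reg * np.eye(len(X)), Y)

def predict_krr(Xtrain, alpha, Xnew, sigma=1.0):
    """Map new LR feature vectors to predicted HR feature vectors."""
    return gaussian_kernel(Xnew, Xtrain, sigma) @ alpha

# toy data: LR features in [0,1], HR targets from an assumed linear map y = 2x
X = np.linspace(0.0, 1.0, 8)[:, None]
Y = 2.0 * X
alpha = fit_krr(X, Y, sigma=0.2)
pred = predict_krr(X, alpha, X, sigma=0.2)
```

In the SR setting of [11], [28], `X` and `Y` would hold patch-level LR and HR feature vectors rather than scalars; the regression machinery is the same.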