2013 IEEE Conference on Computer Vision and Pattern Recognition
DOI: 10.1109/cvpr.2013.51

Beta Process Joint Dictionary Learning for Coupled Feature Spaces with Application to Single Image Super-Resolution

Cited by 148 publications (88 citation statements) | References 15 publications
“…The exemplar patches can be generated from external datasets [10,2], the input image itself [11,9], or combined sources [44]. Various learning methods of the mapping functions have been proposed such as weighted average [31,2], kernel regression [17], support vector regression [23], Gaussian process regression [13], sparse dictionary representation [46,7,5,24,45,39,47,19,14]. In addition to equally averaging overlapped patches, several methods for blending overlapped pixels have been proposed including weighted averaging [11,44], Markov Random Fields [10], and Conditional Random Fields [38].…”
Section: Related Work
confidence: 99%
“…It was also compared with promising reconstruction methods that employ the wavelet transform (WT), such as Wavelet Zero Padding (WZP) [31] and Demirel-Anbarjafari Super Resolution (DASR) [12]. Finally, the proposed SR-WAFE-SR technique was compared with state-of-the-art learning methods that use sparse representation and CNNs, such as the Super-Resolution with Sparse Mixing Estimators (SME) of Mallat et al. [15], the sparse coding method of Yang et al. (ScSR) [14], the Beta Process Joint Dictionary Learning (BP-JDL) of He et al. [16], and the Super-Resolution Convolutional Neural Network (SRCNN) of Dong et al. [20].…”
Section: Simulation Results and Discussion
confidence: 99%
“…A common problem in the learning methods is the use of dictionaries and how to learn and train over-complete dictionaries. He et al. [16] proposed a Bayesian method to learn the over-complete dictionaries. This Bayesian method employs a beta process model and shows that the sparse representation can be decomposed into weight values and dictionary atom indicators.…”
Section: Introduction
confidence: 99%
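The decomposition referred to in the excerpt above can be sketched in the standard beta-process factor-analysis form (a hedged outline only; the symbol names below are chosen for illustration, and the exact priors used in BP-JDL may differ):

    % Sparse code for patch x_i factored into binary atom indicators z_i and weights s_i
    x_i = D\,(z_i \odot s_i) + \epsilon_i
    z_{ik} \sim \mathrm{Bernoulli}(\pi_k), \qquad \pi_k \sim \mathrm{Beta}\!\left(\tfrac{a}{K},\, \tfrac{b(K-1)}{K}\right)
    s_i \sim \mathcal{N}(0, \sigma_s^2 I), \qquad \epsilon_i \sim \mathcal{N}(0, \sigma_\epsilon^2 I)

Here D is the over-complete dictionary, z_i holds the binary dictionary atom indicators, and s_i holds the corresponding weights, so the sparse code z_i ⊙ s_i is exactly the "values and dictionary atom indicators" decomposition the citing paper describes.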
“…There are a number of ways to evaluate the results of super-resolution: some papers judge the quality of the results by their per-pixel Mean Squared Error (MSE) against the ground truth [6], some use the related Peak Signal-to-Noise Ratio (PSNR) [23], and others rely on Structural Similarity (SSIM) as a measure of error [20,21]. PSNR is logarithmically related to MSE, and both can be argued to represent human judgments of image-reconstruction quality only inaccurately.…”
Section: Discussion
confidence: 99%
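For reference, the logarithmic relation between PSNR and MSE mentioned in the excerpt is PSNR = 10 · log10(MAX² / MSE). A minimal Python sketch (function and variable names are illustrative, not taken from the cited papers):

    import numpy as np

    def psnr(reference, reconstruction, max_val=255.0):
        """Peak Signal-to-Noise Ratio; a logarithmic, inverse transform of per-pixel MSE."""
        mse = np.mean((reference.astype(np.float64) - reconstruction.astype(np.float64)) ** 2)
        if mse == 0:
            return float("inf")  # identical images: zero error, PSNR is unbounded
        return 10.0 * np.log10(max_val ** 2 / mse)

Because PSNR is a monotone transform of MSE, ranking reconstructions by either metric gives the same ordering, which is why the excerpt treats the two criticisms interchangeably.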
“…Most super-resolution approaches rely on datasets with very low resolutions [6,20,21]. However, the strength of the presented model lies in its speed and applicability to large images.…”
Section: Methods
confidence: 99%