2019
DOI: 10.1016/j.jvcir.2019.03.027
SRLibrary: Comparing different loss functions for super-resolution over various convolutional architectures

Cited by 54 publications (5 citation statements)
References 11 publications
“…The reason of why we used L1 loss, is explained in the thesis (Anagün 2018). In this study, the L1 loss function is faster than the other loss functions and gives effective results for solving the SR problem (Anagun et al 2019).…”
Section: Methods and Tools (mentioning confidence: 95%)
“…Finally, we apply the pixel shuffling operation to increase the spatial resolution of LF features, and further employ a 3×3 convolution to obtain the super-resolved LF image L SR . Following most existing works [60,59,35,61,56,37,77,78,71], we use the L 1 loss function to train our network due to its robustness to outliers [2].…”
Section: Feature Upsampling (mentioning confidence: 99%)
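As a minimal illustration of the L1 (mean absolute error) objective these citing papers refer to, here is a sketch assuming a NumPy setting; the function name and toy arrays are mine, not taken from the cited works:

```python
import numpy as np

def l1_loss(prediction, target):
    """L1 loss: mean absolute error, averaged over all pixels."""
    return np.mean(np.abs(prediction - target))

# Toy example: a 2x2 "super-resolved" patch versus its ground truth.
pred = np.array([[0.5, 0.8], [0.2, 0.9]])
gt   = np.array([[0.4, 1.0], [0.2, 0.7]])
print(l1_loss(pred, gt))  # mean of the absolute per-pixel differences, here 0.125
```

Because the penalty grows only linearly with the error, a few badly reconstructed pixels do not dominate the gradient, which is the robustness-to-outliers property the quotation cites.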
“…We use Charbonnier loss [3] to quantify the error between the high-resolution output and the given ground truth image. Charbonnier loss is known to be insensitive to outliers and for super-resolution tasks, experimental evaluation has shown that it provides better PSNR/SSIM accuracies over other conventional loss functions [1].…”
Section: Loss Function (mentioning confidence: 99%)
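The Charbonnier loss mentioned in this quotation is commonly written as sqrt(d² + ε²) for a per-pixel error d, a smooth approximation of |d|. A hedged sketch follows (function name, ε value, and arrays are my own illustrative choices, not from the cited works):

```python
import numpy as np

def charbonnier_loss(prediction, target, eps=1e-3):
    """Charbonnier loss: a smooth, differentiable variant of L1.
    Per-pixel penalty is sqrt(diff^2 + eps^2); eps = 1e-3 is an
    assumed value, not one taken from the cited papers."""
    diff = prediction - target
    return np.mean(np.sqrt(diff * diff + eps * eps))

# Near zero error the loss stays smooth (its floor is eps), avoiding the
# kink of |x| at the origin; for large errors it grows linearly like L1,
# which keeps it insensitive to outliers.
pred = np.array([0.50, 0.80])
gt   = np.array([0.50, 0.30])
print(charbonnier_loss(pred, gt))
```

The linear tail is what makes it behave like L1 on large residuals, while the ε term removes the non-differentiable point that can complicate gradient-based training.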