2021
DOI: 10.1109/access.2021.3056061

Recurrently-Trained Super-Resolution

Abstract: We are motivated by the observation that for problems where inputs and outputs are in the same form, such as in image enhancement, deep neural networks can be reinforced by retraining the network with a new target set to the output for the original target. As an example, we introduce a new learning strategy for super-resolution by recurrently training the same simple network. Unlike the existing self-trained SR, which involves a single stage of learning with multiple runs at test time, our method trains the sa…
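The recurrent retraining strategy described in the abstract can be illustrated with a toy loop. This is a minimal sketch under stated assumptions: the "network" is a stand-in linear map fitted by least squares, and the names `train`, `lr`, `hr`, and the stage count are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "network": a single linear map fitted by least squares, a stand-in
# for the simple SR network that the paper retrains recurrently.
def train(inputs, targets):
    # Solve for W minimizing ||inputs @ W - targets||^2.
    W, *_ = np.linalg.lstsq(inputs, targets, rcond=None)
    return W

lr = rng.normal(size=(100, 8))                 # stand-in low-resolution inputs
hr = lr @ rng.normal(size=(8, 8)) \
     + 0.1 * rng.normal(size=(100, 8))         # stand-in high-resolution targets

# Recurrent training: at each stage the same network is retrained, with
# the new target set to the previous stage's output.
targets = hr
for stage in range(3):
    W = train(lr, targets)     # retrain the same simple network
    outputs = lr @ W           # network output on the training inputs
    targets = outputs          # output becomes the next stage's target
```

The loop captures the key idea: the supervision signal is replaced each round by the network's own (presumably improved) output, rather than running a single trained model repeatedly at test time.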

Cited by 2 publications (2 citation statements)
References 18 publications (19 reference statements)
“…Mean square error (MSE) is a traditional but widely used cost function, computed per pixel; it is the cost function adopted in RTSR [43]. RTSR holds that the quality of the high-resolution image used for supervision can be continuously improved by the trained super-resolution network, so that the original super-resolution network is continuously supervised with higher-quality high-resolution images, thereby improving the performance of the super-resolution model.…”
Section: Mean Square Error Cost Function
confidence: 99%
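The per-pixel MSE cost function described in the statement above can be sketched as follows; this is a minimal sketch, and the function name `mse_loss` is illustrative rather than from RTSR.

```python
import numpy as np

def mse_loss(pred, target):
    """Per-pixel mean square error between a predicted and a reference image."""
    pred = np.asarray(pred, dtype=np.float64)
    target = np.asarray(target, dtype=np.float64)
    return np.mean((pred - target) ** 2)

# e.g. two 1x2 "images" differing by 1 at every pixel -> MSE = 1.0
print(mse_loss([[0.0, 2.0]], [[1.0, 1.0]]))  # 1.0
```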
“…Although the reverse image filtering idea was introduced only recently [30], it has already contributed to developing new deep learning architectures [6], studies on iterative depth and image super-resolution [22,35], and image restoration and sharpening [7,28].…” (Authors: Alexander G. Belyaev, Lizhong Wang, Heriot-Watt University, Edinburgh, UK; Pierre-Alain Fayolle, University of Aizu, Aizu-Wakamatsu, Japan)
Section: Introduction and Contributions
confidence: 99%