2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
DOI: 10.1109/cvpr.2018.00082
Fast and Accurate Single Image Super-Resolution via Information Distillation Network

Abstract: Recently, deep convolutional neural networks (CNNs) have demonstrated remarkable progress on single image super-resolution. However, as the depth and width of the networks increase, CNN-based super-resolution methods face challenges of computational complexity and memory consumption in practice. To address these problems, we propose a deep but compact convolutional network to directly reconstruct the high-resolution image from the original low-resolution image. In general…
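The abstract describes a network that reconstructs the high-resolution image directly from the low-resolution input, rather than from a bicubic-pre-upsampled one. One common mechanism for the final LR-to-HR step in such compact networks is sub-pixel rearrangement (pixel shuffle). The sketch below illustrates that rearrangement in NumPy; it is an illustrative assumption about this class of architectures, not necessarily the exact upsampler used in this paper.

```python
import numpy as np

def pixel_shuffle(features, scale):
    """Rearrange (C*r^2, H, W) feature maps into a (C, H*r, W*r) image.

    Each group of r*r channels contributes the r*r sub-pixels of one
    output channel, so the spatial resolution grows by a factor of r.
    """
    c_r2, h, w = features.shape
    r = scale
    c = c_r2 // (r * r)
    x = features.reshape(c, r, r, h, w)
    x = x.transpose(0, 3, 1, 4, 2)  # -> (C, H, r, W, r)
    return x.reshape(c, h * r, w * r)

# 4 feature channels = 1 output channel * 2^2 for an x2 upscaling factor.
lr_feats = np.arange(16, dtype=np.float64).reshape(4, 2, 2)
hr = pixel_shuffle(lr_feats, 2)
print(hr.shape)  # (1, 4, 4)
```

The same rearrangement is what `torch.nn.PixelShuffle` implements; doing the upsampling only at the end of the network is what keeps the intermediate computation at low resolution and hence cheap.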

Cited by 717 publications (541 citation statements); references 24 publications.
“…where H_IMDN(·) is our IMDN. It is optimized with mean absolute error (MAE) loss, following most previous works [2,11,18,36,38]. Given a training set…”
Section: Methods 3.1 Framework (mentioning)
confidence: 99%
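The statement above notes that the network is optimized with the mean absolute error (MAE, i.e. L1) loss between the super-resolved output and the ground-truth HR image. A minimal NumPy sketch of that loss (array shapes and values are illustrative, not from the paper):

```python
import numpy as np

def mae_loss(sr, hr):
    """Mean absolute error (L1 loss) between the super-resolved
    image `sr` and the ground-truth high-resolution image `hr`."""
    return np.mean(np.abs(sr - hr))

sr = np.array([[0.2, 0.8], [0.5, 0.1]])  # toy "predicted" patch
hr = np.array([[0.0, 1.0], [0.5, 0.3]])  # toy ground-truth patch
print(mae_loss(sr, hr))  # 0.15
```

MAE is often preferred over MSE for super-resolution training because it penalizes large residuals less aggressively and tends to produce sharper results.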
“…Very recently, Li et al. [17] exploited a feedback mechanism that enhances low-level representations with high-level ones. For lightweight networks, Hui et al. [11] developed the information distillation network to better exploit hierarchical features by separately processing the current feature maps. Ahn et al. [2] designed an architecture that implements a cascading mechanism on a residual network to boost performance.…”
Section: Single Image Super-resolution (mentioning)
confidence: 99%
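The "separation processing of the current feature maps" referred to above splits the feature tensor along the channel axis, retaining one part as distilled features and passing the rest on for further processing. A minimal sketch of that split, assuming a hypothetical `ratio` parameter for the retained fraction (the actual split ratio and what each branch does afterwards are the paper's design, not shown here):

```python
import numpy as np

def distill_split(feats, ratio=0.25):
    """Split (C, H, W) feature maps along the channel axis into a
    'distilled' part kept as-is and a 'remaining' part sent on for
    further convolutional processing. `ratio` is illustrative."""
    c = feats.shape[0]
    k = int(c * ratio)
    return feats[:k], feats[k:]

feats = np.random.randn(64, 8, 8)
kept, passed_on = distill_split(feats, 0.25)
print(kept.shape, passed_on.shape)  # (16, 8, 8) (48, 8, 8)
```

Keeping part of the features untouched at each stage is what lets such networks stay compact: only a fraction of the channels incurs further computation.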
“…Section 4.3), which may be problematic if the brightness varies across input views. Table 1 presents a quantitative comparison of our method with two baselines (for an upsampling factor ×4): a reimplementation of [55] with the more robust L1 data term, and the original version of the single-image SR network [22]. It also shows the respective performance of the MVA, SIP, and MVA+SIP networks.…”
Section: Comparison With State-of-the-art (mentioning)
confidence: 99%
“…Quantitative comparison of different texture super-resolution techniques (upscaling factor ×4, same initial texture atlas, all computed on the Y-channel images). The top two rows are baselines: our reimplementation of [55] with a primal-dual optimization scheme and L1 data term (first row), and the single-image super-resolution network [22] (second row). We evaluate the individual components of our proposed approach: the MVA subnet trained on our data, which is very similar to [55] but estimates the local blur from the data (third row); the SIP subnet alone trained only on DIV2K (fourth row); and our complete network MVA+SIP (last two rows).…”
Section: Ablation Study (mentioning)
confidence: 99%