2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
DOI: 10.1109/cvpr.2018.00420
Feature Super-Resolution: Make Machine See More Clearly

Cited by 47 publications (33 citation statements) | References 7 publications
“…Other applications of SR include object detection (Li et al., 2017a; Tan, Yan & Bare, 2018), stereo image SR (Duan & Xiao, 2019; Guo, Chen & Huang, 2019; Wang et al., 2019a), and super-resolution in optical microscopy (Qiao et al., 2021). Overall, SR plays a vital role across disciplines, from medical science and computer vision to satellite imaging and remote sensing.…”
Section: Domain-specific Applications of Super-resolution
Confidence: 99%
“…However, this performance is achieved only for large images with rich object structure and high-quality appearance. The resolution of small images is low, which limits the learning of discriminative representations and thus leads to identification failure [18]. In this experiment, the impact of the down-scaling operation on image representations is evaluated on the Oxford5K dataset [19] using our AQPHT and the widely used VGG16 deep model.…”
Section: Experiments on Scaling
Confidence: 99%
“…Thus, our input and output have the same number of channels. Our SA architecture is inspired by [6], which showed that upsampling and downsampling modules within the architecture improve SR accuracy. For the activation unit, following refs. [12,29], we prefer LeakyReLU over ReLU, and we use a linear bottleneck layer as suggested in [23].…”
Section: Spatial Attention (SA)
Confidence: 99%