LatticeNet: Towards Lightweight Image Super-Resolution with Lattice Block
2020
DOI: 10.1007/978-3-030-58542-6_17

Cited by 174 publications (83 citation statements). References 31 publications.
“…Recently, fast and lightweight SISR architectures have been introduced to tackle image SR. These methods can be roughly divided into three categories: knowledge distillation-based methods [19,27,28], neural architecture search-based methods [41,42], and model design-based methods [26,43]. Knowledge distillation aims to transfer knowledge from a large teacher network to a compact student network.…”
Section: Single Image Super-Resolution (mentioning)
confidence: 99%
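Knowledge distillation for SR, as described in the excerpt above, trains a small student to mimic a larger teacher. Below is a minimal sketch of one common loss formulation, assuming a simple L1 combination of a supervised term and an imitation term; the cited methods [19,27,28] use their own, generally more elaborate, objectives. The `distillation_loss` helper and the `alpha` weighting are hypothetical names introduced here for illustration.

```python
# A minimal sketch of knowledge distillation for SR (hypothetical
# loss weighting; not the exact objectives of [19,27,28]).
import torch
import torch.nn.functional as F

def distillation_loss(student_sr, teacher_sr, hr, alpha=0.5):
    """Combine a supervised L1 loss against the ground-truth HR image
    with an imitation L1 loss against the teacher's output."""
    sup = F.l1_loss(student_sr, hr)           # student vs. ground truth
    imit = F.l1_loss(student_sr, teacher_sr)  # student vs. teacher
    return (1 - alpha) * sup + alpha * imit

# Usage: the teacher runs in eval mode without gradients.
# with torch.no_grad():
#     teacher_sr = teacher(lr)
# loss = distillation_loss(student(lr), teacher_sr, hr)
```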
“…In addition, CARN [26] proposes a cascading mechanism built on a residual network to boost performance. LatticeNet [43] proposes a lattice block in which two butterfly structures are applied to combine two residual blocks. These works indicate that lightweight SR networks can maintain a good trade-off between performance and model complexity.…”
Section: Single Image Super-Resolution (mentioning)
confidence: 99%
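The lattice block description above can be made concrete with a rough PyTorch sketch. This is a minimal interpretation, assuming each butterfly cross-combines an identity path and a residual-block output via per-channel coefficients learned from pooled statistics; the exact coefficient learning and wiring in LatticeNet [43] may differ, and all class names here are hypothetical.

```python
# A minimal sketch of a lattice-style block; an interpretation,
# not the exact LatticeNet [43] architecture.
import torch
import torch.nn as nn

class CoefficientBranch(nn.Module):
    """Per-channel combination coefficients from pooled statistics
    (channel-attention style; a simplification of the paper's scheme)."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return self.fc(x.mean(dim=(2, 3), keepdim=True))

class ButterflyPair(nn.Module):
    """One butterfly: a residual body whose output is cross-combined
    with the other path via learned coefficients A and B."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        self.coef_a = CoefficientBranch(channels)
        self.coef_b = CoefficientBranch(channels)

    def forward(self, p, q):
        r = self.body(q)
        p_out = p + self.coef_a(p) * r  # upper lattice path
        q_out = r + self.coef_b(r) * p  # lower lattice path
        return p_out, q_out

class LatticeBlockSketch(nn.Module):
    """Two butterflies in sequence; the two paths are fused by a 1x1 conv."""
    def __init__(self, channels):
        super().__init__()
        self.b1 = ButterflyPair(channels)
        self.b2 = ButterflyPair(channels)
        self.fuse = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, x):
        p, q = self.b1(x, x)
        p, q = self.b2(p, q)
        return self.fuse(torch.cat([p, q], dim=1))
```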
“…Recently, some researchers have proposed attention-based models to improve SR performance. Several works, such as RCAN [6], SAN [21], MCAN [47], A²F [48], LatticeNet [49], and DIN [50], introduce the channel attention (CA) mechanism to SR, which helps the network learn more useful features. To learn more discriminative features, some researchers utilize both channel attention and spatial attention, such as HRAN [51], MIRNet [52], CSFM [53], and BAM [54].…”
Section: Attention Mechanism (mentioning)
confidence: 99%
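Channel attention, as used by the RCAN-style networks above, rescales each feature channel with a weight derived from its global statistics. Below is a minimal squeeze-and-excitation style sketch in PyTorch; the reduction ratio of 16 is a common but not universal choice, and the class name is introduced here for illustration.

```python
# A minimal channel attention (CA) module in the squeeze-and-excitation
# style used by RCAN-like SR networks.
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # squeeze: B x C x 1 x 1
        self.excite = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),                    # per-channel weights in (0, 1)
        )

    def forward(self, x):
        return x * self.excite(self.pool(x))  # rescale each channel
```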
“…At present, notable lightweight SR network models include LatticeNet [27], SLUA [28], MAFFSRN [29], and RFDN [30]. The MAFFSRN network first uses a convolution to extract shallow features and then stacks multiple FFGs to refine and enhance them.…”
Section: Related Research Background (mentioning)
confidence: 99%
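The lightweight models named above largely share one skeleton: a shallow feature-extraction convolution, a stack of refinement blocks (FFGs in MAFFSRN, lattice blocks in LatticeNet, and so on), and a pixel-shuffle upsampler. Below is a minimal sketch of that generic pipeline; the refinement block here is a plain residual stack standing in for the models' actual blocks, and all names are hypothetical.

```python
# A generic lightweight SR skeleton: shallow conv -> refinement
# blocks -> pixel-shuffle upsampler. The block is a placeholder,
# not MAFFSRN's actual FFG.
import torch.nn as nn

class LightweightSRSketch(nn.Module):
    def __init__(self, channels=48, n_blocks=4, scale=4):
        super().__init__()
        self.head = nn.Conv2d(3, channels, 3, padding=1)  # shallow features
        self.body = nn.Sequential(*[
            nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels, channels, 3, padding=1),
            ) for _ in range(n_blocks)
        ])
        self.tail = nn.Sequential(                        # upsampler
            nn.Conv2d(channels, 3 * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),
        )

    def forward(self, lr):
        feat = self.head(lr)
        feat = feat + self.body(feat)  # global residual over the body
        return self.tail(feat)
```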