2022
DOI: 10.1155/2022/9637460

Image Reconstruction Based on Progressive Multistage Distillation Convolution Neural Network

Abstract: Some current algorithms lose important features through coarse feature distillation and lose key information in some channels through compressed channel attention. To address this, we propose a progressive multistage distillation network that refines features gradually, stage by stage, to extract the maximum amount of key feature information from them. In addition, to maximize network performance, we propose a weight-sharing information lossless attention block…
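The abstract describes staged, coarse-to-fine feature distillation. The paper's exact architecture is not reproduced in this report; below is a minimal PyTorch sketch of the general idea, in which each stage keeps (distills) half of the current channels and refines the remainder for the next stage, with a 1×1 convolution aggregating everything kept. All names and design details (ProgressiveDistillationBlock, num_stages, the 50% split, LeakyReLU) are illustrative assumptions, not the authors' published implementation.

```python
import torch
import torch.nn as nn


class ProgressiveDistillationBlock(nn.Module):
    """Sketch of progressive multistage feature distillation.

    At each stage, half of the current channels are distilled (kept)
    and the remainder is refined by a 3x3 convolution and handed to
    the next stage; a 1x1 convolution then aggregates all kept
    features. Assumption-based illustration, not the paper's code.
    """

    def __init__(self, channels: int = 64, num_stages: int = 3):
        super().__init__()
        self.refines = nn.ModuleList()
        kept_total, c = 0, channels
        for _ in range(num_stages):
            kept, rest = c // 2, c - c // 2
            self.refines.append(nn.Conv2d(rest, rest, 3, padding=1))
            kept_total += kept
            c = rest
        kept_total += c  # the final remainder is also kept
        self.aggregate = nn.Conv2d(kept_total, channels, 1)
        self.act = nn.LeakyReLU(0.05, inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        kept_parts, cur = [], x
        for refine in self.refines:
            k = cur.shape[1] // 2
            distilled, rest = torch.split(cur, [k, cur.shape[1] - k], dim=1)
            kept_parts.append(distilled)  # coarse-to-fine: keep a slice per stage
            cur = self.act(refine(rest))  # refine the rest for the next stage
        kept_parts.append(cur)
        return x + self.aggregate(torch.cat(kept_parts, dim=1))  # residual


if __name__ == "__main__":
    block = ProgressiveDistillationBlock(channels=64, num_stages=3)
    print(block(torch.randn(1, 64, 32, 32)).shape)  # torch.Size([1, 64, 32, 32])
```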

Cited by 1 publication (3 citation statements)
References 19 publications
“…However, these algorithms perform feature compression crudely, which can result in the loss of key information. To this end, Cai [26] improved the distillation method, gradually distilling the key features of each stage from coarse to fine and then aggregating these features with a convolution layer. Wang [13] proposed a scheme for adaptively learning spatial and channel masks, which greatly reduces the computational complexity of the network while maintaining its performance.…”
Section: Related Work (mentioning; confidence: 99%)
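The adaptive spatial and channel masks attributed to Wang [13] gate which pixels and channels a convolution actually needs to process. The following is a hedged sketch of that general mechanism under simplifying assumptions: soft sigmoid gates stand in for whatever mask parameterization the cited paper actually uses, and the class name AdaptiveMaskConv is hypothetical.

```python
import torch
import torch.nn as nn


class AdaptiveMaskConv(nn.Module):
    """Sketch: convolution gated by learned spatial and channel masks.

    A lightweight branch predicts a per-pixel (spatial) mask and a
    per-channel mask; the main convolution's output is modulated by
    both, so near-zero mask values mark regions and channels whose
    computation could be skipped. Hypothetical simplification of the
    adaptive-mask idea attributed to Wang [13].
    """

    def __init__(self, channels: int = 64):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)
        # spatial mask: one gate value per pixel
        self.spatial_mask = nn.Conv2d(channels, 1, 3, padding=1)
        # channel mask: one gate value per channel, from pooled statistics
        self.channel_mask = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        s = torch.sigmoid(self.spatial_mask(x))  # (N, 1, H, W)
        c = self.channel_mask(x)                 # (N, C, 1, 1)
        return self.conv(x) * s * c              # gate features by both masks
```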
“…Niu [33] designed a layer attention block and a channel-spatial attention block that exploit information-rich features more comprehensively and selectively by modeling the interdependencies between different layers, channels, and locations. Cai [26] proposed a weight-sharing information lossless attention block, which enhances the recovery of high-frequency information such as edge textures through a weight-sharing auxiliary branch module. Wang [34] constructed a lightweight recurrent residual channel attention block, which further improves network performance by introducing recurrent connections into the attention module while reducing the number of module parameters.…”
Section: Related Work (mentioning; confidence: 99%)
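For the weight-sharing information lossless attention block attributed to Cai [26], the abstract and the excerpt above suggest two ingredients: an auxiliary branch that reuses (shares) the main branch's convolution weights, and channel attention computed without the channel compression that the abstract identifies as losing key information. The sketch below combines both under that interpretation; it is an assumption-based illustration, not the published architecture.

```python
import torch
import torch.nn as nn


class WeightSharingLosslessAttention(nn.Module):
    """Sketch: weight-sharing attention without channel compression.

    One 3x3 convolution is shared by the main branch and an auxiliary
    branch (weight sharing); channel attention uses a full-rank 1x1
    convolution rather than a squeeze-and-excite bottleneck, so no
    channel statistics are discarded ("lossless"). Hypothetical
    illustration of the ideas named in the abstract, not Cai's code.
    """

    def __init__(self, channels: int = 64):
        super().__init__()
        self.shared = nn.Conv2d(channels, channels, 3, padding=1)  # shared weights
        self.act = nn.LeakyReLU(0.05, inplace=True)
        # no reduction ratio: C -> C keeps every channel's statistics
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        main = self.act(self.shared(x))    # main branch
        aux = self.act(self.shared(main))  # auxiliary branch reuses the same weights
        return x + main * self.attn(aux)   # attention-weighted residual
```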