2020
DOI: 10.1007/978-3-030-58545-7_14
Sequential Convolution and Runge-Kutta Residual Architecture for Image Compressed Sensing

Cited by 5 publications (3 citation statements)
References 27 publications
“…Because the fixed measurement matrix used in CS methods requires substantial storage and must satisfy the RIP condition, we used three sequential convolutional layers to replace the fixed measurement matrix in the traditional CS compression process to compress the original signals. The key point is that a convolution can be represented as a matrix–matrix multiplication [29]. The process can be formulated as

y = W_3 ∗ (W_2 ∗ (W_1 ∗ x + b_1) + b_2) + b_3,

where W_i and b_i (i = 1, 2, 3) are the weights and bias, respectively, of the i-th convolutional layer in the compression module, x is the original signal, and y is the output (i.e., the measurement).…”
Section: Methods
confidence: 99%
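The claim above, that a (strided) convolution is just a matrix–matrix multiplication, can be checked directly by building the equivalent banded matrix for a 1-D case. This is an illustrative sketch, not the paper's code; the helper `conv_matrix` and the specific signal and kernel are assumptions for demonstration.

```python
import numpy as np

def conv_matrix(kernel, n, stride=1):
    """Build the matrix M such that M @ x equals a valid-mode,
    strided 1-D cross-correlation of the length-n signal x with kernel."""
    k = len(kernel)
    rows = (n - k) // stride + 1
    M = np.zeros((rows, n))
    for r in range(rows):
        # each row holds one shifted copy of the kernel
        M[r, r * stride : r * stride + k] = kernel
    return M

x = np.arange(8.0)                      # toy "original signal"
w = np.array([1.0, -2.0, 1.0])          # toy convolution kernel
M = conv_matrix(w, len(x), stride=2)    # stride > 1 also compresses the size

# Same result as sliding the kernel over x with stride 2:
direct = np.array([x[i:i + 3] @ w for i in range(0, 6, 2)])
assert np.allclose(M @ x, direct)
```

With stride greater than 1, the matrix has fewer rows than columns, so `M @ x` both mixes and shortens the signal, which is exactly the role of a CS measurement matrix.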
“…We used 4 sequential convolutional layers and 1 up-sampling layer instead of the measurement matrix in the traditional CS method. Because a convolution is representable as a matrix-to-matrix multiplication [36], the sequential 4-layer convolution is formulated in Eq. (5):

y = W_4 ∗ (W_3 ∗ (W_2 ∗ (W_1 ∗ x + b_1) + b_2) + b_3) + b_4,    (5)

where W_i and b_i are the weight and bias of the i-th convolutional layer in the CS-Block, respectively. The convolution can thus be expressed as a linear representation of the original signal x, so convolution can replace the traditional CS sampling matrix in the network model to sample and compress the information.…”
Section: Methods
confidence: 99%
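The "linear representation" point in the excerpt above can be made concrete: a stack of convolutional layers with no nonlinearity collapses into a single affine map y = A x + c, i.e. one learned measurement matrix. The sketch below, with assumed random kernels and a hypothetical `conv_matrix` helper, folds four strided layers into one such map and verifies the equivalence.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv_matrix(kernel, n, stride=1):
    """Matrix form of a valid-mode, strided 1-D cross-correlation."""
    k = len(kernel)
    rows = (n - k) // stride + 1
    M = np.zeros((rows, n))
    for r in range(rows):
        M[r, r * stride : r * stride + k] = kernel
    return M

# Four random strided conv layers applied in sequence (no nonlinearity),
# mirroring the quoted 4-layer CS-Block formulation.
n = 64
x = rng.standard_normal(n)   # original signal
y = x
A = np.eye(n)                # running composition: y = A x + c
c = np.zeros(n)
for _ in range(4):
    W = conv_matrix(rng.standard_normal(3), len(y), stride=2)
    b = rng.standard_normal(W.shape[0])
    y = W @ y + b
    A, c = W @ A, W @ c + b  # fold this layer into the single affine map

# The whole stack acts as one (much shorter) measurement y = A x + c.
assert np.allclose(y, A @ x + c)
```

Each stride-2 layer roughly halves the length, so the stack maps the 64-sample signal to a handful of measurements, which is why learned sequential convolutions can stand in for a fixed CS sampling matrix while needing far less storage than storing A explicitly.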
“…Recently, deep compressive sensing (DCS) methods (Sun et al. 2020; Chen et al. 2020; You et al. 2021; Song, Chen, and Zhang 2021) have been developed to address these two issues of CS in an end-to-end learned manner, leveraging the robust learning and representation abilities of neural networks. Zheng et al. (2020) introduced RK-CCSNet, a method that employs sequential convolution modules (SCM) to compress the image size by means of filter compression; this approach effectively avoids blocking artifacts. To address the aforementioned challenges, we propose a multi-level cross-sampling and frequency-divided reconstruction network (MCFD-Net) to achieve higher-quality image CS, as illustrated in Fig. 2.…”
Section: Introduction
confidence: 99%