2022
DOI: 10.1007/s00500-022-06982-4

A unified framework of deep unfolding for compressed color imaging

Abstract: Traditional iterative reconstruction algorithms for compressed color imaging often suffer from long reconstruction times and low reconstruction accuracy at extremely low subsampling rates. This paper proposes a model-driven deep learning framework for compressed color imaging. In the training step, the image blocks at the same position in the R, G, and B channel images are extracted as the ground truth; then singular value decomposition is performed on the measurement matrix to obtain the optimized measurement mat…
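The training step described in the abstract involves two preprocessing operations: extracting co-located R, G, and B blocks as ground truth, and applying singular value decomposition to the measurement matrix. The sketch below illustrates one plausible reading of these steps; the block size, matrix shapes, function names, and the SVD-based orthogonalization variant are assumptions, not the authors' implementation.

```python
import numpy as np

def extract_rgb_blocks(image, block_size=32):
    """Extract co-located blocks from the R, G, and B channels.

    `image` is an H x W x 3 array; the block size is an assumption.
    Returns an array of shape (num_blocks, 3, block_size, block_size).
    """
    h, w, _ = image.shape
    blocks = []
    for y in range(0, h - block_size + 1, block_size):
        for x in range(0, w - block_size + 1, block_size):
            patch = image[y:y + block_size, x:x + block_size, :]
            # Same spatial position for all three channels (ground truth).
            blocks.append(patch.transpose(2, 0, 1))
    return np.stack(blocks)

def optimize_measurement_matrix(phi):
    """Optimize a random measurement matrix via SVD.

    One common variant replaces the singular values with ones, yielding
    row-orthonormal measurements; whether the paper uses exactly this
    scheme is an assumption.
    """
    u, _, vt = np.linalg.svd(phi, full_matrices=False)
    return u @ vt

# Hypothetical shapes: 256 measurements of 1024-dimensional blocks.
phi = np.random.randn(256, 1024)
phi_opt = optimize_measurement_matrix(phi)
```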

Cited by 1 publication (2 citation statements)
References 14 publications
“…The reconstruction of the color image involves selecting the image blocks located at the same position (x, y) from the three-channel images. The grayscale values (P_c(x, y), c = R, G, B) of each pixel were retrieved and afterwards utilized in real time to synthesize a color image with dimensions of 256 × 256 pixels, employing a specific technique, whose formula is as follows [43]

$$\begin{equation}
\begin{bmatrix} R(x,y)\\ G(x,y)\\ B(x,y) \end{bmatrix} =
\begin{bmatrix} P_R(x,y)\\ P_G(x,y)\\ P_B(x,y) \end{bmatrix},
\quad 0 \le x \le 256,\ 0 \le y \le 256
\end{equation}$$

Furthermore, the pixels of the reconstructed image are significantly influenced by the frame number of the encoded patterns (F_n). This frame number is determined using a differential computation approach, as described below:

$$\begin{equation}
F_n = 2 \times M \times N \times SR
\end{equation}$$

where M × N is the pixels of a reconstructed image, and SR is the sampling rate.…”
Section: Results
confidence: 99%
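As a rough illustration of the synthesis step quoted above, the sketch below stacks three reconstructed channel images into one color image and evaluates the frame-number formula. The array shapes, function names, and the example sampling rate are assumptions for illustration only.

```python
import numpy as np

def synthesize_color_image(p_r, p_g, p_b):
    """Combine per-channel reconstructions P_R, P_G, P_B (each 256 x 256)
    into a single color image by taking the value at each (x, y) position."""
    return np.stack([p_r, p_g, p_b], axis=-1)  # shape (256, 256, 3)

def frame_number(m, n, sampling_rate):
    """Frame number of the encoded patterns: F_n = 2 * M * N * SR,
    where M x N is the reconstructed image size and SR is the sampling rate."""
    return int(2 * m * n * sampling_rate)

# Hypothetical example: a 256 x 256 reconstruction at a 10% sampling rate.
p_r, p_g, p_b = (np.random.rand(256, 256) for _ in range(3))
color = synthesize_color_image(p_r, p_g, p_b)
print(frame_number(256, 256, 0.10))  # 13107 frames (rounded down)
```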