2020
DOI: 10.3390/s20030594

Compressive Sensing Spectroscopy Using a Residual Convolutional Neural Network

Abstract: Compressive sensing (CS) spectroscopy is well known for developing a compact spectrometer that consists of two parts: compressively measuring an input spectrum and recovering the spectrum using reconstruction techniques. Our goal here is to propose a novel residual convolutional neural network (ResCNN) for reconstructing the spectrum from the compressed measurements. The proposed ResCNN comprises learnable layers and a residual connection between the input and the output of these learnable layers. The ResCNN …
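The abstract's description maps onto a small network: learnable layers F plus a residual connection, so the output is x + F(x). The sketch below illustrates that idea in PyTorch; the measurement size, spectrum length, the initial fully connected lifting layer, and all filter sizes are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn

class ResCNNSketch(nn.Module):
    """Minimal sketch: learnable layers F plus a residual connection,
    so the output is H(x) = x + F(x). All sizes are assumptions."""
    def __init__(self, m_measurements=64, n_channels=256):
        super().__init__()
        # Map the compressed measurement vector to the spectrum length
        # so the skip connection and the output have matching shapes.
        self.lift = nn.Linear(m_measurements, n_channels)
        # Learnable layers F: 1D convolutions along the spectral axis.
        self.f = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv1d(16, 1, kernel_size=3, padding=1),
        )

    def forward(self, y):
        x = self.lift(y).unsqueeze(1)      # (batch, 1, N) coarse estimate
        return (x + self.f(x)).squeeze(1)  # residual connection: x + F(x)

recon = ResCNNSketch()
y = torch.randn(8, 64)     # a batch of compressed measurements
spectrum = recon(y)        # (8, 256) reconstructed spectra
```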

Cited by 31 publications (32 citation statements)
References 38 publications
“…The input data of the ResNet are 2D matrices [N_UU × N_B], so the proposed architecture is as follows: (i) first comes a residual layer composed of a 2D convolutional layer (conv2D) for feature extraction, followed by a batch normalization layer, which makes the network faster and more stable to train, and then the rectified linear unit (ReLU) activation function. Next comes another conv2D layer followed by a batch normalization layer, after which an Add operation forms H(x) = F(x) + x; this way the network only has to learn the residual F(x) = H(x) − x relative to what is already known from the input data, where F(x) is the mapping of the learnable layers and x is the input [34]. The layer finishes with a ReLU activation function. (ii) The next layer is a 2D max pooling, which reduces the dimensionality of the layer's input data and summarizes the features contained in the pooled sub-regions [34]. (iii) A second residual layer is applied, consisting of a conv2D layer followed by batch normalization and a ReLU, then another conv2D and batch normalization; all of this runs in parallel with a shortcut branch made of a conv2D and a batch normalization.…”
Section: Residual Convolutional Neural Network
confidence: 99%
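As a concrete reading of the architecture described above, here is a hedged PyTorch sketch: a first residual block with an identity shortcut (conv2D → batch normalization → ReLU → conv2D → batch normalization, then Add and ReLU), a 2D max pooling, and a second residual block whose shortcut branch is itself a conv2D plus batch normalization run in parallel with the main branch. The channel counts, kernel sizes, input size, and the initial stem convolution (needed so the identity Add has matching shapes) are assumptions for illustration, not values from the cited paper.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """conv2D -> BN -> ReLU -> conv2D -> BN, then Add: H(x) = F(x) + x."""
    def __init__(self, channels):
        super().__init__()
        self.f = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU()

    def forward(self, x):
        # Identity shortcut: only the residual F(x) = H(x) - x is learned.
        return self.relu(self.f(x) + x)

class ProjectionBlock(nn.Module):
    """Second residual layer: the shortcut branch is a conv2D + BN,
    run in parallel with the main branch."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.f = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(),
            nn.Conv2d(out_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch),
        )
        self.shortcut = nn.Sequential(      # parallel conv2D + BN
            nn.Conv2d(in_ch, out_ch, 1),
            nn.BatchNorm2d(out_ch),
        )
        self.relu = nn.ReLU()

    def forward(self, x):
        return self.relu(self.f(x) + self.shortcut(x))

net = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1),  # stem: lifts the 1-channel input (assumed)
    ResidualBlock(8),
    nn.MaxPool2d(2),                # reduces spatial dimensionality
    ProjectionBlock(8, 16),
)
out = net(torch.randn(1, 1, 32, 32))  # one [N_UU x N_B] matrix, e.g. 32 x 32
```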
“…This was adopted, for instance, in [15], where dense convolutional neural networks (CNNs) were trained to remove artifacts from high-dimensional NMR signals. Similarly, in [16], a residual convolutional neural network, comprising CNN and fully connected layers, was built to perform compressive sensing spectroscopy.…”
Section: Introduction
confidence: 99%
“…Compressive sensing (CS) spectroscopy is well known for developing a compact spectrometer that consists of two parts: compressively measuring an input spectrum and recovering the spectrum using reconstruction techniques. Kim et al. [15] have proposed a residual convolutional neural network for reconstructing the spectrum from the compressed measurements. The proposed network comprises learnable layers and a residual connection between the input and the output of these learnable layers.…”
confidence: 99%
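The two parts named in this statement can be written down directly. Below is a minimal NumPy sketch under assumed sizes (N = 256 spectral channels compressed to M = 64 measurements by a random sensing matrix Phi), with a pseudo-inverse standing in as a crude baseline for the learned reconstruction network; none of these values or choices come from the cited paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 256, 64   # spectral channels, compressed measurements (assumed)

# Toy input spectrum: a single Gaussian line centered at channel 100.
x = np.exp(-0.5 * ((np.arange(N) - 100) / 8.0) ** 2)

# Part 1: compressive measurement y = Phi @ x with a random sensing matrix.
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
y = Phi @ x

# Part 2: recovery. A least-norm pseudo-inverse estimate serves here as a
# stand-in baseline; the cited work replaces this step with a trained ResCNN.
x_hat = np.linalg.pinv(Phi) @ y
print("relative error:", np.linalg.norm(x - x_hat) / np.linalg.norm(x))
```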