2020
DOI: 10.1109/jstsp.2020.2982777

Geometric Approaches to Increase the Expressivity of Deep Neural Networks for MR Reconstruction

Abstract: Recently, deep learning approaches have been extensively investigated to reconstruct images from accelerated magnetic resonance imaging (MRI) acquisition. Although these approaches provide significant performance gains compared to compressed sensing MRI (CS-MRI), it is not clear how to choose a suitable network architecture to balance the trade-off between network complexity and performance. Recently, it was shown that an encoder-decoder convolutional neural network (CNN) can be interpreted as a piecewise linear …


Cited by 13 publications (7 citation statements)
References 39 publications (66 reference statements)
“…If the full k-space data is available, the benefit of bootstrap aggregation may seem unclear. However, in our prior work [22], we demonstrated that bootstrap aggregation can still provide high-quality image reconstruction in compressed sensing MRI, rather than relying on a single stronger deep learner.…”
Section: B. Bootstrap Aggregation for Motion Artifact Correction (mentioning)
confidence: 86%
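A minimal sketch of the bootstrap-aggregation idea described in this citation: several reconstruction networks, each trained on a different bootstrap resample or subsampling pattern, are applied and their outputs combined. The names (make_recon_net, bagged_reconstruction) and the plain-averaging aggregation rule are illustrative assumptions, not the cited implementation.

```python
import torch
import torch.nn as nn

def make_recon_net() -> nn.Module:
    # Stand-in for one trained reconstruction CNN; the cited work uses
    # much stronger networks (e.g., U-Net variants).
    return nn.Sequential(
        nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 1, 3, padding=1),
    )

def bagged_reconstruction(nets, aliased_inputs):
    """Aggregate reconstructions from bootstrap-trained networks.

    nets           : networks trained on different bootstrap resamples /
                     subsampling patterns (assumption: simple averaging).
    aliased_inputs : one aliased input image per network, shape (B, 1, H, W).
    """
    with torch.no_grad():
        recons = [net(x) for net, x in zip(nets, aliased_inputs)]
    return torch.stack(recons, dim=0).mean(dim=0)

nets = [make_recon_net() for _ in range(4)]
inputs = [torch.randn(1, 1, 256, 256) for _ in range(4)]
print(bagged_reconstruction(nets, inputs).shape)  # torch.Size([1, 1, 256, 256])
```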
“…Our network architecture is based on U-Net [42] and consists of convolution layers, instance normalization [43], activation layers, and pooling layers. Furthermore, we employ adaptive residual learning [22] to improve reconstruction performance. Specifically, the network's original output and its residual output are combined by channel concatenation, from which the final output is generated using a last 1 × 1 convolution layer.…”
Section: B. Network Architecture (mentioning)
confidence: 99%
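The combination step this citation describes (direct output and residual output concatenated along the channel dimension, fused by a final 1 × 1 convolution) can be sketched as below. This is a hedged illustration: the class name AdaptiveResidualHead is hypothetical, and the tiny backbone merely stands in for the cited U-Net with instance normalization.

```python
import torch
import torch.nn as nn

class AdaptiveResidualHead(nn.Module):
    """Sketch of adaptive residual learning: concatenate the network's
    direct output with its residual output, then fuse with a 1x1 conv."""

    def __init__(self, channels: int = 1, feat: int = 32):
        super().__init__()
        self.backbone = nn.Sequential(          # placeholder for the U-Net body
            nn.Conv2d(channels, feat, 3, padding=1),
            nn.InstanceNorm2d(feat),
            nn.ReLU(),
        )
        self.direct_head = nn.Conv2d(feat, channels, 3, padding=1)
        self.residual_head = nn.Conv2d(feat, channels, 3, padding=1)
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)  # 1x1 conv

    def forward(self, x):
        f = self.backbone(x)
        direct = self.direct_head(f)            # network's original output
        residual = x + self.residual_head(f)    # input plus learned correction
        return self.fuse(torch.cat([direct, residual], dim=1))

net = AdaptiveResidualHead()
print(net(torch.randn(1, 1, 128, 128)).shape)  # torch.Size([1, 1, 128, 128])
```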
“…Moreover, we utilize a nonlinear attention module, which is known to enhance the expressivity of the network (Cha et al., 2020). For G_Θ in Step I, due to the large discrepancy between the input and the desired distribution, we utilize the same network architecture from Fig.…”
Section: Network Architecture, 4.2.1 Generator Architecture (mentioning)
confidence: 99%
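As one plausible reading of the "nonlinear attention module" mentioned above, here is a minimal squeeze-and-excitation-style channel attention block: the input-dependent gating makes the end-to-end mapping input-adaptive rather than a fixed piecewise linear function. This is an assumption for illustration; the exact module in Cha et al. (2020) may differ.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """SE-style channel attention: rescale features by weights computed
    from the input itself, so the effective mapping depends on the input."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                      # global context
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),                                 # weights in (0, 1)
        )

    def forward(self, x):
        return x * self.gate(x)  # input-dependent channel rescaling

feat = torch.randn(1, 32, 64, 64)
print(ChannelAttention(32)(feat).shape)  # torch.Size([1, 32, 64, 64])
```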