2017
DOI: 10.1002/mrm.26977

Learning a variational network for reconstruction of accelerated MRI data

Abstract: Variational network reconstructions preserve the natural appearance of MR images as well as pathologies that were not included in the training data set. Due to its high computational performance, that is, a reconstruction time of 193 ms on a single graphics card, and the omission of parameter tuning once the network is trained, this new approach to image reconstruction can easily be integrated into clinical workflow. Magn Reson Med 79:3055-3071, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
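The method unrolls a fixed number of gradient steps whose filters, activation functions, and data-term weights are learned end-to-end. Below is a minimal PyTorch sketch of a single unrolled step, assuming single-coil Cartesian data; the class name VNStep is illustrative, and a PReLU stands in for the paper's learned radial-basis-function activations.

import torch
import torch.nn as nn
import torch.nn.functional as F

class VNStep(nn.Module):
    """One unrolled step: u_{t+1} = u_t - K^T phi'(K u_t) - lambda_t * A^H (A u_t - f)."""
    def __init__(self, n_filters: int = 24, kernel_size: int = 11):
        super().__init__()
        # Learned filter bank K_t; the transpose K_t^T reuses the same weights.
        self.conv = nn.Conv2d(2, n_filters, kernel_size,
                              padding=kernel_size // 2, bias=False)
        # Simplified stand-in for the learned activation functions phi'_t.
        self.act = nn.PReLU(n_filters)
        self.lam = nn.Parameter(torch.tensor(0.1))  # data-term weight lambda_t

    def forward(self, u, f, mask):
        # u: (B, 2, H, W) real/imag image; f: (B, H, W) complex k-space; mask: (B, H, W)
        # Regularization gradient K^T phi'(K u)
        reg = F.conv_transpose2d(self.act(self.conv(u)), self.conv.weight,
                                 padding=self.conv.padding[0])
        # Data-consistency gradient A^H (A u - f) with A = mask * FFT
        u_c = torch.complex(u[:, 0], u[:, 1])
        residual = mask * torch.fft.fft2(u_c, norm="ortho") - f
        grad_dc = torch.fft.ifft2(mask * residual, norm="ortho")
        grad_dc = torch.stack([grad_dc.real, grad_dc.imag], dim=1)
        return u - reg - self.lam * grad_dc

In the original method, on the order of ten such steps are stacked and trained end-to-end on retrospectively undersampled data, which is what yields the reported fast inference once training is complete.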

Cited by 1,314 publications (1,433 citation statements, 2018–2024).
References 53 publications (108 reference statements).
“…Assessment of image quality from the trained networks using the synthetic test data, described above, demonstrated <0.01% difference in RMSE and <0.5% difference in SSIM (further information in Supporting Information Figure S8). This is similar to previous deep learning studies, which have shown limited differences in RMSE and SSIM values using ℓ1-loss or SSIM-loss functions, albeit with improved visual quality compared to ℓ2-loss. Nevertheless, further investigation of different loss functions and other optimizations of learning would be desirable in the future.…”
Section: Discussion (supporting)
confidence: 88%
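To make the quoted comparison concrete, here is a hedged NumPy/scikit-image sketch of how RMSE and SSIM could be evaluated for reconstructions trained with different losses; the arrays ref, recon_a, and recon_b are synthetic placeholders, not data from either study.

import numpy as np
from skimage.metrics import structural_similarity as ssim

def rmse(ref, img):
    return float(np.sqrt(np.mean((ref - img) ** 2)))

# Training objectives under discussion: l1 vs l2 (an SSIM loss would be 1 - SSIM).
def l1_loss(ref, img):
    return float(np.mean(np.abs(ref - img)))

def l2_loss(ref, img):
    return float(np.mean((ref - img) ** 2))

rng = np.random.default_rng(0)
ref = rng.random((256, 256)).astype(np.float32)
recon_a = ref + 0.010 * rng.standard_normal(ref.shape).astype(np.float32)  # e.g. l1-trained
recon_b = ref + 0.011 * rng.standard_normal(ref.shape).astype(np.float32)  # e.g. l2-trained

for name, rec in [("recon_a", recon_a), ("recon_b", recon_b)]:
    print(name,
          "RMSE:", rmse(ref, rec),
          "SSIM:", ssim(ref, rec, data_range=float(ref.max() - ref.min())))

Such small metric gaps are consistent with the quoted observation that the choice between ℓ1-, ℓ2-, and SSIM-based losses shows up more in visual quality than in aggregate RMSE/SSIM scores.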
“…This is significantly different from how human radiologists learn to read and interpret MR images. Radiologists are trained by reading thousands of MR images, developing remarkable skill at recognising reproducible anatomical and contextual patterns even when known artefacts are present [1], [44]. Our deep learning based DAGAN method aims to imitate this human learning procedure, and therefore shifts the conventional online nonlinear optimisation into an offline training procedure.…”
Section: Discussion (mentioning)
confidence: 99%
“…For the former learning scheme, there is a lack of adaptivity, and for the latter, the resulting dictionary used in sparse coding is not hierarchical as in deep learning based methods, which in general could provide superior results. In addition, the performance of our DAGAN method is further improved by enriching the training datasets with comprehensive data augmentation, which has not been considered in previous dictionary learning or deep learning based methods [26], [42], [36], [43], [44], [45]. Once a DAGAN model has been trained, it can be used to infer any new input raw data with the same undersampling ratio.…”
Section: Discussion (mentioning)
confidence: 99%
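As a rough illustration of the kind of data augmentation mentioned above (the function names and parameters here are hypothetical, not the actual DAGAN pipeline), a geometric transform can be applied to each fully sampled training image and the undersampled k-space regenerated afterwards so that image and measurement remain consistent.

import numpy as np

def augment(image, rng):
    """Random flip and 90-degree rotation of a 2D training image."""
    if rng.random() < 0.5:
        image = np.flip(image, axis=int(rng.integers(2)))
    return np.ascontiguousarray(np.rot90(image, k=int(rng.integers(4))))

def to_undersampled_kspace(image, mask):
    """Re-simulate the measurement: centered 2D FFT, then apply the sampling mask."""
    return mask * np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(image), norm="ortho"))

rng = np.random.default_rng(0)
image = rng.random((128, 128))
mask = (rng.random((128, 128)) < 0.3).astype(float)  # toy 30% random sampling mask
kspace = to_undersampled_kspace(augment(image, rng), mask)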
“…Therefore, the mapping relationship can be learned by a DNN if sufficient qualified training data can be obtained. However, compared with convex ℓ1 minimization, whose characteristics can be understood more easily, deep-learning based methods are often considered to be ‘black boxes’ and are still difficult to interpret …”
Section: Methods (mentioning)
confidence: 99%
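For reference, the convex formulation the quoted passage contrasts with learned mappings is the standard ℓ1-regularized (compressed-sensing) reconstruction; in the notation assumed here, A is the undersampled encoding operator, y the measured k-space, and Ψ a sparsifying transform:

\min_{x} \; \tfrac{1}{2}\,\lVert A x - y \rVert_2^2 \;+\; \lambda \,\lVert \Psi x \rVert_1
\qquad\text{versus}\qquad
\hat{x} = f_{\theta}\!\left(A^{H} y\right)

The convex objective makes the role of each term explicit, whereas the learned mapping f_θ folds the same inverse problem into trained weights, which is what the quoted passage means by a ‘black box’.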