Background: To review and evaluate approaches to convolutional neural network (CNN) reconstruction for accelerated cardiac MR imaging in a real clinical context. Methods: Two CNN architectures, Unet and residual network (Resnet), were evaluated using quantitative metrics and qualitative assessment by a radiologist. Four loss functions were considered: pixel-wise (L1 and L2), patch-wise structural dissimilarity (Dssim), and feature-wise (perceptual loss). The networks were evaluated using retrospectively and prospectively under-sampled cardiac MR data. Results: Based on our assessments, we find that Resnet and Unet achieve similar image quality, but the former requires only 100,000 parameters compared with 1.3 million for the latter. The perceptual loss function performed significantly better than the L1, L2, or Dssim loss functions as determined by the radiologist scores. Conclusions: CNN image reconstruction using Resnet yields image quality comparable to Unet with roughly 10X the number of parameters, which has implications for training with significantly lower data requirements. Network training using the perceptual loss function was found to agree better with radiologist scoring than the L1, L2, or Dssim loss functions.
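For intuition, the pixel-wise and patch-wise losses compared above can be sketched in a few lines of NumPy; the perceptual loss is omitted because it requires a pretrained feature-extraction network. This is a generic illustration of the standard definitions, not the study's implementation:

```python
import numpy as np

def l1_loss(x, y):
    # Pixel-wise L1: mean absolute error
    return np.mean(np.abs(x - y))

def l2_loss(x, y):
    # Pixel-wise L2: mean squared error
    return np.mean((x - y) ** 2)

def dssim(x, y, c1=0.01**2, c2=0.03**2):
    # Structural dissimilarity on a single patch: (1 - SSIM) / 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    ssim = ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx**2 + my**2 + c1) * (vx + vy + c2))
    return (1.0 - ssim) / 2.0

rng = np.random.default_rng(0)
ref = rng.random((8, 8))                      # toy "ground-truth" patch
noisy = ref + 0.1 * rng.standard_normal((8, 8))
print(l1_loss(ref, ref))                      # identical images -> 0.0
```

In practice Dssim is averaged over sliding windows of the whole image; the single-patch version above shows only the core computation.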
Purpose To propose and evaluate a deep learning model for rapid and accurate calculation of myocardial T1/T2 values based on the previously proposed Bloch equation simulation with slice profile correction (BLESSPC) method. Methods Deep learning Bloch equation simulation (DeepBLESS) models are proposed for rapid and accurate T1 estimation for the MOLLI T1 mapping sequence with balanced SSFP readouts and T1/T2 estimation for a radial simultaneous T1 and T2 mapping (radial T1‐T2) sequence. The DeepBLESS models were trained separately on simulated MOLLI and radial T1‐T2 data. The DeepBLESS T1‐T2 estimation accuracy was evaluated based on simulated data with different noise levels. The DeepBLESS model was compared with BLESSPC in simulation, phantom, and in vivo studies for the MOLLI sequence at 1.5 T and the radial T1‐T2 sequence at 3 T. Results After DeepBLESS was trained, in phantom studies, DeepBLESS and BLESSPC achieved similar accuracy and precision in T1‐T2 estimation for both MOLLI and radial T1‐T2 (P > .05). In vivo, DeepBLESS and BLESSPC generated similar myocardial T1/T2 values for radial T1‐T2 at 3 T (T1: 1366 ± 31 ms for both methods, P > .05; T2: 37.4 ± 0.9 ms for both methods, P > .05), and similar myocardial T1 values for the MOLLI sequence at 1.5 T (1044 ± 20 ms for both methods, P > .05). DeepBLESS generated a T1/T2 map in less than 1 second. Conclusion The DeepBLESS model offers an almost instantaneous approach for estimating accurate T1/T2 values, replacing BLESSPC for both MOLLI and radial T1‐T2 sequences, and is promising for multiparametric mapping in cardiac MRI.
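DeepBLESS itself is a trained network, but the quantity it regresses can be illustrated with the classical three-parameter MOLLI signal model and Look-Locker correction that such fits target. The sketch below is generic (parameter values are illustrative, reusing the 1044 ms myocardial T1 reported above) and does not reproduce the BLESSPC simulation:

```python
import numpy as np

def molli_signal(ti, a, b, t1_star):
    # Three-parameter MOLLI inversion-recovery model: S(TI) = A - B * exp(-TI / T1*)
    return a - b * np.exp(-ti / t1_star)

def look_locker_t1(a, b, t1_star):
    # Look-Locker correction: convert apparent T1* to T1
    return t1_star * (b / a - 1.0)

# Simulate a curve with a known T1, then recover it (sanity check of the model)
a, b, t1_true = 1.0, 2.0, 1044.0                 # ms; T1 value from the abstract
t1_star = t1_true / (b / a - 1.0)                # invert the correction to get T1*
ti = np.array([100, 180, 260, 1100, 1900, 2700, 3500, 4300], dtype=float)  # ms
signal = molli_signal(ti, a, b, t1_star)

print(round(look_locker_t1(a, b, t1_star)))      # -> 1044
```

A network like DeepBLESS is trained to map sampled signals such as `signal` directly to T1, replacing the iterative curve fit with a single forward pass.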
The aim of this study was to develop a deep neural network for respiratory motion compensation in free‐breathing cine MRI and evaluate its performance. An adversarial autoencoder network was trained using unpaired training data from healthy volunteers and patients who underwent clinically indicated cardiac MRI examinations. A U‐net structure was used for the encoder and decoder parts of the network, and the code space was regularized by an adversarial objective. The autoencoder learns the identity map for the free‐breathing motion‐corrupted images and preserves the structural content of the images, while the discriminator, which interacts with the output of the encoder, forces the encoder to remove motion artifacts. The network was first evaluated on data that were artificially corrupted with simulated rigid motion, with regard to motion‐correction accuracy and the presence of any artificially created structures. Subsequently, to demonstrate the feasibility of the proposed approach in vivo, the network was trained on respiratory motion‐corrupted images in an unpaired manner and tested on volunteer and patient data. In the simulation study, mean structural similarity index scores for the synthesized motion‐corrupted images and motion‐corrected images were 0.76 and 0.93 (out of 1), respectively. The proposed method increased the Tenengrad focus measure of the motion‐corrupted images by 12% in the simulation study and by 7% in the in vivo study. The average overall subjective image quality scores for the motion‐corrupted, motion‐corrected, and breath‐held images were 2.5, 3.5, and 4.1 (out of 5.0), respectively. Nonparametric paired comparisons showed a significant difference between the image quality scores of the motion‐corrupted and breath‐held images (P < .05); however, after correction there was no significant difference between the image quality scores of the motion‐corrected and breath‐held images.
This feasibility study demonstrates the potential of an adversarial autoencoder network for correcting respiratory motion‐related image artifacts without requiring paired data.
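The Tenengrad focus measure used in the results above is a standard sharpness metric: the mean squared magnitude of the Sobel image gradients. A minimal NumPy sketch of the generic definition (not the study's code):

```python
import numpy as np

def tenengrad(img):
    # Tenengrad focus measure: mean squared Sobel gradient magnitude.
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):            # valid-region 2D correlation
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(patch * kx)
            gy[i, j] = np.sum(patch * ky)
    return np.mean(gx**2 + gy**2)

# A sharp edge scores higher than a blurred version of the same edge
sharp = np.zeros((16, 16))
sharp[:, 8:] = 1.0
blurred = (sharp + np.roll(sharp, 1, axis=1) + np.roll(sharp, -1, axis=1)) / 3.0
print(tenengrad(sharp) > tenengrad(blurred))   # True
```

Higher values indicate sharper images, which is why a 12% increase after motion correction reflects reduced blurring.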
Purpose To develop and evaluate a combined parallel imaging and convolutional neural network image reconstruction framework for low‐latency and high‐quality accelerated real‐time MR imaging. Methods Conventional parallel imaging reconstruction, solved as gradient descent steps, was compacted into network layers and interleaved with convolutional layers in a general convolutional neural network. All parameters of the network were determined during the offline training process and then applied to unseen data. The proposed network was first evaluated for real‐time cardiac imaging at 1.5 T and real‐time abdominal imaging at 0.35 T, using threefold to fivefold retrospective undersampling for cardiac imaging and threefold retrospective undersampling for abdominal imaging. Prospective undersampling with fourfold acceleration was then performed on cardiac imaging to compare the proposed method with the standard clinically available GRAPPA method and the state‐of‐the‐art L1‐ESPIRiT method. Results Both retrospective and prospective evaluations confirmed that the proposed network was able to reconstruct images with a lower noise level and reduced aliasing artifacts in comparison with the single‐coil‐based and L1‐ESPIRiT reconstructions for cardiac imaging at 1.5 T, and the GRAPPA and L1‐ESPIRiT reconstructions for abdominal imaging at 0.35 T. Using the proposed method, each frame can be reconstructed in less than 100 ms, suggesting its clinical compatibility. Conclusion The proposed parallel imaging and convolutional neural network combined reconstruction framework is a promising technique that allows low‐latency and high‐quality real‐time MR imaging.
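The core idea of interleaving data-consistency (gradient descent) steps with convolutional layers can be shown on a toy 1D problem. In this sketch a fixed smoothing filter stands in for the learned convolutional layers, and the single-coil Fourier sampling stands in for the multi-coil parallel imaging model, so it is a structural illustration of the unrolled architecture rather than the proposed network:

```python
import numpy as np

n = 64
t = np.arange(n)
x_true = np.sin(2 * np.pi * t / 16).astype(complex)   # smooth ground-truth signal

# Threefold-undersampled Fourier ("k-space") measurements
mask = np.zeros(n, dtype=bool)
mask[::3] = True
y = np.fft.fft(x_true)[mask]

def data_consistency(x):
    # Gradient step on 0.5 * ||M F x - y||^2; with unit step it exactly
    # restores the measured k-space samples (a data-consistency layer).
    residual = np.zeros(n, dtype=complex)
    residual[mask] = np.fft.fft(x)[mask] - y
    return x - np.fft.ifft(residual)

def denoise(x):
    # Stand-in for the learned convolutional layers: a fixed smoothing kernel
    return np.convolve(x, [0.25, 0.5, 0.25], mode="same")

x = np.zeros(n, dtype=complex)
for _ in range(10):                # unrolled iterations: denoise, then enforce DC
    x = data_consistency(denoise(x))

print(np.allclose(np.fft.fft(x)[mask], y))   # measured samples preserved -> True
```

In the trained network the smoothing filter is replaced by learned convolutional layers and the Fourier operator by the coil-weighted encoding model, but the alternation between data consistency and regularization is the same.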