Purpose
To introduce a novel deep learning method for Robust and Accelerated Reconstruction (RoAR) of quantitative, B0-inhomogeneity-corrected R2* maps from multi-gradient recalled echo (mGRE) MRI data.
Methods
RoAR trains a convolutional neural network (CNN) to generate quantitative R2* maps free from field-inhomogeneity artifacts by adopting a self-supervised learning strategy given (a) mGRE magnitude images, (b) the biophysical model describing mGRE signal decay, and (c) a preliminarily evaluated F-function accounting for the contribution of macroscopic B0 field inhomogeneities. Importantly, no ground-truth R2* images are required, and the F-function is needed only during RoAR training, not during application.
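The self-supervised idea can be illustrated with a minimal sketch (function names and array shapes here are illustrative assumptions, not the paper's implementation): the network's predicted S0 and R2* maps are plugged into the mGRE decay model S(TE_n) = S0 · F(TE_n) · exp(−R2*·TE_n), and the training loss compares the model-predicted signal with the measured magnitude images, so no ground-truth R2* maps are ever needed.

```python
import numpy as np

def mgre_signal(s0, r2s, F, tes):
    """Biophysical mGRE decay model: S(TE_n) = S0 * F(TE_n) * exp(-R2* * TE_n).

    s0, r2s : (...) voxel maps; F : (N,) or (..., N) macroscopic-field
    inhomogeneity factor; tes : (N,) echo times in seconds.
    Returns model signals of shape (..., N).
    """
    return s0[..., None] * F * np.exp(-r2s[..., None] * tes)

def self_supervised_loss(pred_s0, pred_r2s, F, tes, measured):
    """Model-based loss standing in for ground-truth supervision:
    mean squared difference between model-predicted and measured decay."""
    model = mgre_signal(pred_s0, pred_r2s, F, tes)
    return float(np.mean((model - measured) ** 2))
```

In the actual method the predicted maps come from the CNN and the loss is minimized over its weights; the sketch only shows why the F-function is required at training time (it enters the forward model) but not at inference, when the network maps magnitude images to R2* directly.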
Results
We show that RoAR preserves all features of R2* maps while offering significant improvements over existing methods in computation speed (seconds vs. hours) and reduced sensitivity to noise. Even for data with SNR = 5, RoAR produced R2* maps with an accuracy of 22%, compared with 47% for voxel-wise analysis. For SNR = 10, the RoAR accuracy improved to 17% vs. 24% for direct voxel-wise analysis.
Conclusions
RoAR is trained to recognize macroscopic magnetic field inhomogeneities directly from the input magnitude-only mGRE data and to eliminate their effect on R2* measurements. RoAR training is based on the biophysical model and does not require ground-truth R2* maps. Because RoAR uses signal information not only from individual voxels but also from spatial patterns of the signals in the images, it reduces the sensitivity of R2* maps to noise in the data. These features, together with high computational speed, offer significant benefits for the potential use of RoAR in clinical settings.
Purpose
To introduce two novel learning‐based motion artifact removal networks (LEARN) for the estimation of quantitative motion‐ and B0‐inhomogeneity‐corrected R2∗ maps from motion‐corrupted multi‐Gradient‐Recalled Echo (mGRE) MRI data.
Methods
We train two convolutional neural networks (CNNs) to correct motion artifacts for high-quality estimation of quantitative B0-inhomogeneity-corrected R2* maps from mGRE sequences. The first CNN, LEARN-IMG, performs motion correction on complex mGRE images, enabling the subsequent computation of high-quality, motion-free quantitative R2* (and any other mGRE-enabled) maps using standard voxel-wise analysis or machine-learning-based analysis. The second CNN, LEARN-BIO, is trained to directly generate motion- and B0-inhomogeneity-corrected quantitative R2* maps from motion-corrupted magnitude-only mGRE images by taking advantage of the biophysical model describing mGRE signal decay.
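As a point of reference for the "standard voxel-wise analysis" mentioned above, a minimal sketch of a monoexponential R2* fit is shown below. The function name and shapes are illustrative assumptions, and the sketch omits the F-function correction for B0 inhomogeneities that the full analysis includes: each voxel's log-magnitude decay is fit by least squares to log S(TE) = log S0 − R2*·TE.

```python
import numpy as np

def voxelwise_r2star(mag, tes):
    """Simplified monoexponential voxel-wise R2* fit (no B0/F-function term).

    Fits log S(TE) = log S0 - R2* * TE by least squares for every voxel.
    mag : (..., N) magnitude images at the N echo times; tes : (N,) echo
    times in seconds. Returns an R2* map of shape (...), in 1/s.
    """
    logs = np.log(np.maximum(mag, 1e-12))            # guard against log(0)
    A = np.stack([np.ones_like(tes), -tes], axis=1)  # design matrix (N, 2)
    coef, *_ = np.linalg.lstsq(A, logs.reshape(-1, len(tes)).T, rcond=None)
    return coef[1].reshape(mag.shape[:-1])           # row 1 holds R2*
```

This per-voxel fitting is exactly what makes the conventional pipeline noise-sensitive and slow compared with CNN approaches such as LEARN-BIO, which process all echoes jointly and exploit spatial patterns across voxels.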
Results
We show that both CNNs, trained on synthetic MR images, are capable of suppressing motion artifacts while preserving details in the predicted quantitative R2* maps. A significant reduction of motion artifacts on experimental in vivo motion-corrupted data has also been achieved using our trained models.
Conclusion
Both LEARN‐IMG and LEARN‐BIO can enable the computation of high‐quality motion‐ and B0‐inhomogeneity‐corrected R2∗ maps. LEARN‐IMG performs motion correction on mGRE images and relies on the subsequent analysis for the estimation of R2∗ maps, while LEARN‐BIO directly performs motion‐ and B0‐inhomogeneity‐corrected R2∗ estimation. Both LEARN‐IMG and LEARN‐BIO jointly process all the available gradient echoes, which enables them to exploit spatial patterns available in the data. The high computational speed of LEARN‐BIO is an advantage that can lead to a broader clinical application.
While significant progress has been achieved in studying resting-state functional networks in the healthy human brain and in a wide range of clinical conditions, many questions about their relationship to the brain's cellular constituents remain. Here, we use quantitative Gradient-Recalled Echo (qGRE) MRI to map human brain cellular composition and BOLD (blood-oxygen-level-dependent) MRI to explore how brain cellular constituents relate to resting-state functional networks. Results show that the BOLD-signal-defined synchrony of connections between cellular circuits within network-defined individual functional units is mainly associated with regional neuronal density, while the connectivity strength between functional units is also influenced by the glial and synaptic components of brain tissue. These mechanisms lead to a rather broad distribution of resting-state functional network properties. Visual networks, with the highest neuronal density (but the lowest density of glial cells and synapses), exhibit the strongest coherence of the BOLD signal as well as the strongest intra-network connectivity. The Default Mode Network (DMN) sits near the opposite end of this spectrum, with relatively low coherence of the BOLD signal but remarkably balanced cellular content, enabling the DMN to play a prominent role in the overall organization of the brain and the hierarchy of functional networks.