Purpose
To develop a scan‐specific model that estimates and corrects k‐space errors made when reconstructing accelerated MRI data.
Methods
Scan-specific artifact reduction in k-space (SPARK) trains a convolutional neural network to estimate and correct k-space errors made by an input reconstruction technique, back-propagating from the mean-squared-error loss between an auto-calibration signal (ACS) and the input technique's reconstructed ACS. First, SPARK is applied to generalized autocalibrating partially parallel acquisitions (GRAPPA) and demonstrates improved robustness over other scan-specific models, such as robust artificial-neural-networks for k-space interpolation (RAKI) and residual-RAKI. Subsequent experiments demonstrate that SPARK synergizes with residual-RAKI to further improve reconstruction performance. SPARK also improves reconstruction quality when applied to advanced acquisition and reconstruction techniques, including 2D virtual-coil (VC-) GRAPPA, 2D LORAKS, 3D GRAPPA without an integrated ACS region, and 2D/3D wave-encoded imaging.
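To make the scan-specific training loop concrete, the sketch below gives a minimal, hypothetical PyTorch illustration of the idea described above: a small CNN is fit per scan to predict the k-space error of an input reconstruction, supervised only on the ACS region, and the learned correction is then applied to the full k-space. The network size, tensor shapes, and function names here are assumptions for illustration, not the authors' released implementation.

```python
import torch
import torch.nn as nn

class SparkCNN(nn.Module):
    """Small CNN mapping reconstructed k-space to an error estimate.
    Real and imaginary parts are stacked along the channel dimension."""
    def __init__(self, coils: int, features: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2 * coils, features, 3, padding=1), nn.ReLU(),
            nn.Conv2d(features, features, 3, padding=1), nn.ReLU(),
            nn.Conv2d(features, 2 * coils, 3, padding=1),
        )

    def forward(self, kspace):
        return self.net(kspace)

def train_spark(recon_kspace, acs_truth, acs_mask, iters=500, lr=1e-3):
    """Hypothetical sketch of SPARK-style scan-specific training.
    recon_kspace: (1, 2*coils, ky, kx) k-space from GRAPPA/RAKI/etc.
    acs_truth:    same shape, nonzero only within the ACS region.
    acs_mask:     (1, 1, ky, kx) binary mask selecting the ACS region."""
    model = SparkCNN(coils=recon_kspace.shape[1] // 2)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    # Ground-truth error is known only where fully sampled ACS data exists.
    target_err = (acs_truth - recon_kspace) * acs_mask
    for _ in range(iters):
        opt.zero_grad()
        err_hat = model(recon_kspace)
        # MSE loss restricted to the ACS region, as in the abstract.
        loss = torch.mean((err_hat * acs_mask - target_err) ** 2)
        loss.backward()
        opt.step()
    # Apply the learned error correction to the entire k-space.
    return recon_kspace + model(recon_kspace).detach()
```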
Results
SPARK yields SSIM improvement and 1.5–2× root-mean-squared-error (RMSE) reduction when applied to GRAPPA, and improves robustness to ACS size across acceleration rates compared with other scan-specific techniques. When applied to advanced reconstruction techniques such as residual-RAKI, 2D VC-GRAPPA, and LORAKS, SPARK achieves up to 20% RMSE improvement. SPARK with 3D GRAPPA also improves RMSE by roughly 2×, along with SSIM and perceived image quality, without a fully sampled ACS region. Finally, SPARK synergizes with non-Cartesian 2D and 3D wave-encoded imaging, reducing RMSE by 20–25% and providing qualitative improvements.
Conclusion
SPARK synergizes with physics‐based acquisition and reconstruction techniques to improve accelerated MRI by training scan‐specific models to estimate and correct reconstruction errors in k‐space.