Magnetic resonance imaging (MRI) reconstruction is an ill-posed inverse problem that can be addressed by conventional compressed sensing (CS) MRI algorithms, which exploit the sparse nature of MR images in an iterative optimization-based manner. However, iterative optimization-based CSMRI methods suffer from two main drawbacks: they are time-consuming and limited in model capacity. Meanwhile, one main challenge for recent deep learning-based CSMRI is the trade-off between model performance and network size. To address these issues, we develop a new multi-scale dilated network (MDN) for MRI reconstruction with high speed and outstanding performance. Compared with standard convolutional kernels of the same receptive field, dilated convolutions reduce the number of network parameters by using smaller kernels while expanding their receptive fields to capture almost the same information. To maintain the abundance of features, we present global and local residual learning to extract more image edges and details. We then utilize concatenation layers to fuse multi-scale features and residual branches for better reconstruction. Compared with several non-deep and deep learning CSMRI algorithms, the proposed method yields better reconstruction accuracy and noticeable visual improvements. In addition, we evaluate the model under a noisy setting to verify its stability, and then extend the proposed model to an MRI super-resolution task.

The first category of CSMRI algorithms is iterative optimization-based CSMRI, in which sparsity is enforced in a specific transform domain or in an underlying latent representation of the image, and an alternating iterative optimization scheme is then adopted for CSMRI reconstruction [2]-[11]. A pioneering work of CSMRI is Sparse MRI [2], which exploits an off-the-shelf basis to capture a specific type of feature (wavelets recover point-like features, contourlets capture curve-like features).
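The parameter saving claimed for dilated convolutions follows from the standard effective-receptive-field formula, d·(k−1)+1 for a k×k kernel with dilation d. A minimal sketch (the helper names are illustrative, not from the paper) comparing a 3×3 kernel with dilation 2 against a dense 5×5 kernel:

```python
def effective_receptive_field(kernel_size, dilation):
    # A dilated k x k kernel with dilation d spans d*(k-1)+1 pixels per side
    return dilation * (kernel_size - 1) + 1

def num_weights(kernel_size, in_ch=1, out_ch=1):
    # Weight count of a square 2-D convolution kernel
    return kernel_size * kernel_size * in_ch * out_ch

# A 3x3 kernel with dilation 2 covers the same 5x5 field as a dense 5x5 kernel
assert effective_receptive_field(3, 2) == effective_receptive_field(5, 1) == 5
# ...but with 9 weights instead of 25
print(num_weights(3), num_weights(5))  # 9 25
```

This is the trade-off the abstract refers to: for a fixed receptive field, dilation cuts the weight count roughly quadratically in the kernel size, at the cost of sampling the field sparsely.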
A hybrid TV regularizer combined with an L0-regularized tree-structured sparsity constraint [3] is introduced to overcome model-dependent differences. As shown in Fig. 13, the proposed MDN achieves better reconstruction results than VDSR on a huge dataset.
CONCLUSION AND PROSPECT

A novel multi-scale dilated network (MDN) has been presented for CSMRI. The proposed MDN is composed of two cascaded basic blocks in which dilated convolutions, global and local residual learning, and concatenation layers are integrated to extend the receptive fields of convolutional kernels for reducing network parameters, to maintain feature abundance, and to fuse multi-scale features, respectively. Final experiments demonstrate that MDN achieves outstanding performance when trained on huge and diverse data, and the proposed network outperforms several competitive CSMRI algorithms in both subjective and objective assessments. In addition, the proposed model is effective in the noisy MR setting.
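The block structure summarized above (parallel dilated convolutions, concatenation fusion, and a local residual connection) can be sketched in PyTorch. This is a hedged illustration of the general technique only: the channel counts, dilation rates, and fusion layer below are assumptions, not the authors' exact MDN configuration.

```python
import torch
import torch.nn as nn

class MultiScaleDilatedBlock(nn.Module):
    """Illustrative block: parallel dilated 3x3 convolutions at several
    dilation rates, concatenation to fuse the multi-scale features, and a
    local residual connection. Hyperparameters are hypothetical."""

    def __init__(self, channels=32, dilations=(1, 2, 3)):
        super().__init__()
        # padding = dilation keeps the spatial size unchanged for a 3x3 kernel
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, kernel_size=3, padding=d, dilation=d)
            for d in dilations
        )
        self.act = nn.ReLU(inplace=True)
        # 1x1 convolution fuses the concatenated multi-scale features
        self.fuse = nn.Conv2d(channels * len(dilations), channels, kernel_size=1)

    def forward(self, x):
        feats = torch.cat([self.act(branch(x)) for branch in self.branches], dim=1)
        return x + self.fuse(feats)  # local residual learning

block = MultiScaleDilatedBlock()
x = torch.randn(1, 32, 64, 64)
y = block(x)
assert y.shape == x.shape  # residual block preserves the feature-map shape
```

A full network in this style would cascade such blocks and add a global residual connection from the zero-filled input to the output, so the network only learns the reconstruction residual.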