2019
DOI: 10.1007/978-3-030-33391-1_13
Self-supervised Learning of Inverse Problem Solvers in Medical Imaging

Abstract: In the past few years, deep learning-based methods have demonstrated enormous success for solving inverse problems in medical imaging. In this work, we address the following question: given a set of measurements obtained from real imaging experiments, what is the best way to use a learnable model and the physics of the modality to solve the inverse problem and reconstruct the latent image? Standard supervised-learning-based methods approach this problem by collecting data sets of known latent images and their …

Cited by 23 publications (22 citation statements)

References 9 publications
“…Given the importance of training without fully sampled data, there have been several studies that have tried to tackle this issue. For purely data-driven de-aliasing of single-coil data using image-domain to image-domain mapping without the encoding operator, a self-supervised approach has been proposed [62] using a mixture of measurement and k-space losses. Unlike our approach, it uses all available data for training and loss (i.e., identical sets).…”
Section: Discussion
confidence: 99%
“…The key idea of RoAR is to use self-supervised learning, illustrated in Figure 2B, to train the parameters θ of the model I_θ. The idea of using self-supervised learning has recently gained popularity in several distinct imaging applications for addressing the lack of ground-truth training data [26-30]. Recent work in MRSI spectral quantification has seen the integration of CNNs with physical models as a means to avoid dependence on ground-truth labels [31].…”
Section: Methods
confidence: 99%
“…The idea of using self-supervised learning has recently gained popularity in several distinct imaging applications for addressing the lack of ground-truth training data [26-30]. Recent work in MRSI spectral quantification has seen the integration of CNNs with physical models as a means to avoid dependence on ground-truth labels [31]. The self-supervised learning in RoAR is enabled through our knowledge of the analytical biophysical model connecting the mGRE signal with biological tissue microstructure.…”
Section: RoAR: Architecture and Training
confidence: 99%
“…Recent works have also proposed the new concept of self-supervised learning for MRI reconstruction [10,11]. An early study has shown that a denoising deep learning network can be successfully trained using pairs of noisy images [7]. Self-supervised learning relies on the hypothesis that image noise and artifacts are typically incoherent in training data pairs; thus, minimizing a loss between them readily regularizes the learning to capture coherent image content.…”
Section: Figure
confidence: 99%
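The incoherent-noise argument in the excerpt above can be made concrete with a tiny numeric sketch. This is an assumption-level illustration, not the cited papers' method: when noise is zero-mean and incoherent between two views of the same content, the MSE-optimal prediction against a *noisy* target coincides with the clean signal, so no clean label is needed.

```python
import numpy as np

# Toy demonstration: fitting against noisy targets still recovers the
# clean value, because the zero-mean noise averages out in the MSE.
rng = np.random.default_rng(1)
clean = 3.0
noisy_targets = clean + rng.standard_normal(100_000)  # a second "noisy view"

# The constant minimizing mean squared error to the noisy targets is their
# sample mean, which converges to the clean value as data grows.
estimate = noisy_targets.mean()
print(estimate)
```

The same reasoning carries over from a constant predictor to a network: coherent image content is the only part of the target the loss consistently rewards.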
“…More recently, several works have investigated unsupervised or self-supervised learning for the reconstruction of undersampled static MR images [7-12]. Although the specific implementations of these works vary from one to the other, they all train CNNs on undersampled data sets directly without fully sampled references, and inherent MR physical models (e.g., Fourier encoding and coil sensitivity encoding) are incorporated as training regularizations. The results in these works have shown that, with proper design of network training, unsupervised or self-supervised learning can achieve reconstruction performance similar to that of supervised learning.…”
Section: Introduction
confidence: 99%
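The physics-as-regularization idea described above can be sketched as a measurement-consistency loss: push the reconstruction back through the known forward model (Fourier encoding plus an undersampling mask) and compare against the acquired k-space, so no fully sampled reference is required. The names `forward_model` and `data_consistency_loss` are illustrative assumptions, not APIs from the cited works, and coil sensitivity encoding is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_model(image, mask):
    """A = M F: 2D Fourier transform followed by k-space undersampling."""
    kspace = np.fft.fft2(image, norm="ortho")
    return mask * kspace

def data_consistency_loss(reconstruction, measured_kspace, mask):
    """Self-supervised loss ||M F x_hat - y||^2 over sampled locations only."""
    residual = forward_model(reconstruction, mask) - measured_kspace
    return np.mean(np.abs(residual) ** 2)

# Toy setup: a random "latent" image, a ~50% undersampling mask,
# and the simulated measurements it would produce.
image = rng.standard_normal((16, 16))
mask = rng.random((16, 16)) < 0.5
measurements = forward_model(image, mask)

# The true image attains zero loss; a wrong reconstruction does not.
print(data_consistency_loss(image, measurements, mask))                   # 0.0
print(data_consistency_loss(np.zeros((16, 16)), measurements, mask) > 0)  # True
```

In the cited training schemes a CNN produces `reconstruction` from the zero-filled input, and this loss (sometimes split across disjoint k-space subsets) drives the weights without any ground-truth image.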