2018
DOI: 10.1007/978-3-030-00536-8_13

RS-Net: Regression-Segmentation 3D CNN for Synthesis of Full Resolution Missing Brain MRI in the Presence of Tumours

Abstract: Accurate synthesis of a full 3D MR image containing tumours from available MRI (e.g. to replace an image that is currently unavailable or corrupted) would provide a clinician as well as downstream inference methods with important complementary information for disease analysis. In this paper, we present an end-to-end 3D convolutional neural network that takes a set of acquired MR image sequences (e.g. T1, T2, T1ce) as input and concurrently performs (1) regression of the missing full resolution 3D MRI (e.g. FLAIR…
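
The abstract describes a single 3D network with two concurrent outputs: a regressed missing sequence and a tumour segmentation. The following is a minimal PyTorch sketch of that dual-head idea, assuming a small shared encoder-decoder; the layer sizes, normalization choices, and head design are illustrative assumptions, not the authors' exact RS-Net architecture.

```python
# Hypothetical dual-task 3D CNN in the spirit of the abstract: a shared
# 3D encoder-decoder with one head regressing the missing MR sequence and
# one head predicting tumour class logits. All hyperparameters are assumptions.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    """Two 3x3x3 convolutions with instance norm and ReLU."""
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.InstanceNorm3d(out_ch),
        nn.ReLU(inplace=True),
        nn.Conv3d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.InstanceNorm3d(out_ch),
        nn.ReLU(inplace=True),
    )

class RegressionSegmentationNet(nn.Module):
    def __init__(self, in_sequences=3, n_tumour_classes=4, base=16):
        super().__init__()
        self.enc1 = conv_block(in_sequences, base)
        self.down = nn.MaxPool3d(2)
        self.enc2 = conv_block(base, base * 2)
        self.up = nn.Upsample(scale_factor=2, mode="trilinear", align_corners=False)
        self.dec1 = conv_block(base * 2 + base, base)
        # Two task-specific heads on the shared decoder features.
        self.regression_head = nn.Conv3d(base, 1, kernel_size=1)        # synthesized sequence
        self.segmentation_head = nn.Conv3d(base, n_tumour_classes, 1)   # tumour class logits

    def forward(self, x):
        f1 = self.enc1(x)
        f2 = self.enc2(self.down(f1))
        d1 = self.dec1(torch.cat([self.up(f2), f1], dim=1))
        return self.regression_head(d1), self.segmentation_head(d1)

if __name__ == "__main__":
    # Three input sequences (e.g. T1, T2, T1ce) at a small patch size for illustration.
    x = torch.randn(1, 3, 32, 32, 32)
    synth, seg_logits = RegressionSegmentationNet()(x)
    print(synth.shape, seg_logits.shape)  # (1, 1, 32, 32, 32), (1, 4, 32, 32, 32)
```

In practice the two heads would be trained jointly, e.g. with a voxel-wise regression loss on the synthesized sequence plus a cross-entropy loss on the segmentation, so the tumour labels guide synthesis inside the lesion.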

Cited by 18 publications (20 citation statements). References 13 publications. Citation statements below are ordered by relevance.

“…Though all the methods discussed above propose a multi-input approach, none of them synthesizes multiple missing sequences (multi-output) in a single pass. All three methods [16], [5], and [24] synthesize only one sequence (either T2-FLAIR or T2, a many-to-one setting) in the presence of a varying number of input sequences, while [23] only synthesizes MRA using information from multiple inputs (many-to-one). Although the work presented in [23] is close to our proposed method, theirs is not a truly multimodal network (many-to-many), since there is no empirical evidence that their method will generalize to multiple scenarios.…”
Section: B. Multimodal Synthesis (mentioning)
confidence: 99%
“…With the rapid growth of deep learning in MRI, [19][20][21] end-to-end deep learning-based frameworks have recently been investigated for multimodal MR image synthesis. 13,[22][23][24][25][26][27][28][29][30][31][32] In particular, synthesis accuracy has been greatly improved by the superior image synthesis capability of generative adversarial networks (GANs). 33 These deep neural network-based methods can be grouped into three main categories depending on their input/output modalities: (a) single-input single-output (SISO), (b) multi-input single-output (MISO), (c) multi-input multi-output (MIMO).…”
Section: Introduction (mentioning)
confidence: 99%
“…A regression-segmentation 3D convolutional neural network was implemented to simultaneously synthesize a missing MRI modality and segment tumor regions. 22 A scalable GAN-based model was developed to flexibly take arbitrary subsets of the multiple modalities as input and generate the target modality. 27 Very recently, Zhou et al proposed a hybrid-fusion network consisting of modality-specific, multi-modal fusion, and image synthesis subnetworks to learn the correlations among multiple modalities with an enhanced multi-level fusion strategy, thus improving synthesis performance.…”
Section: Introduction (mentioning)
confidence: 99%
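
One recurring theme in the statement above is handling arbitrary subsets of input modalities with a single model. A common way to feed such a model is to zero out missing channels and append a binary availability mask; the sketch below shows that input packing only, as an illustrative assumption rather than the cited GAN's actual mechanism, with hypothetical modality names and tensor sizes.

```python
# Minimal sketch of availability masking for arbitrary subsets of input
# modalities. Channel layout: [image channels | binary availability channels].
import torch

def pack_inputs(modalities, available):
    """modalities: dict name -> (D, H, W) tensor; available: set of present names."""
    order = ["T1", "T2", "T1ce", "FLAIR"]          # assumed fixed channel order
    template = next(iter(modalities.values()))      # any present volume, for shape
    volumes, mask = [], []
    for name in order:
        if name in available:
            volumes.append(modalities[name])
            mask.append(torch.ones_like(template))
        else:
            volumes.append(torch.zeros_like(template))
            mask.append(torch.zeros_like(template))
    return torch.stack(volumes + mask).unsqueeze(0)  # (1, 2 * len(order), D, H, W)

vols = {"T1": torch.randn(32, 32, 32), "T2": torch.randn(32, 32, 32)}
x = pack_inputs(vols, available={"T1", "T2"})
print(x.shape)  # torch.Size([1, 8, 32, 32, 32])
```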
“…A fundamentally different approach for accelerated MRI is to perform fully-sampled acquisitions of a subset of the desired contrasts (i.e., source contrasts), and then to synthesize missing contrasts (i.e., target contrasts). This approach requires an intensity-based mapping model estimated using a collection of image pairs in both source and target contrast [23]-[50]. The model can be based on sparse linear mapping between source and target patches [32], or deep neural networks for enhanced accuracy [28], [34]-[36], [39]-[50].…”
Section: Introduction (mentioning)
confidence: 99%
“…This approach requires an intensity-based mapping model estimated using a collection of image pairs in both source and target contrast [23]-[50]. The model can be based on sparse linear mapping between source and target patches [32], or deep neural networks for enhanced accuracy [28], [34]-[36], [39]-[50]. Although deep models for synthesis are promising, local inaccuracies may occur in synthesized images when the source contrast is less sensitive to differences in relaxation parameters of two tissues compared to the target contrast, or vice versa.…”
Section: Introduction (mentioning)
confidence: 99%
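
For the "sparse linear mapping between source and target patches" mentioned in the statements above, the sketch below shows one plausible reading on toy data: learn a sparsity-regularized linear regressor from source-contrast patches to the corresponding target-contrast intensity. The patch size, regularization strength, and synthetic slices are assumptions for illustration, not the cited method's exact formulation.

```python
# Toy sketch: sparse linear mapping from source patches to target intensities.
import numpy as np
from sklearn.linear_model import Lasso

def extract_patches(img, size=5):
    """Collect flattened size x size patches from a 2D slice (non-overlapping)."""
    h, w = img.shape
    patches = [
        img[i:i + size, j:j + size].ravel()
        for i in range(0, h - size + 1, size)
        for j in range(0, w - size + 1, size)
    ]
    return np.stack(patches)

rng = np.random.default_rng(0)
source_slice = rng.normal(size=(64, 64))
target_slice = 0.7 * source_slice + 0.1 * rng.normal(size=(64, 64))  # toy paired contrast

S = extract_patches(source_slice)   # (n_patches, 25) source patches
T = extract_patches(target_slice)   # (n_patches, 25) target patches

# Fit a sparse linear map from each source patch to the centre pixel of the
# matching target patch (kept to a single output here for brevity).
centre = T[:, T.shape[1] // 2]
model = Lasso(alpha=0.01).fit(S, centre)

# Synthesize target-contrast centre intensities for source patches.
predicted_centres = model.predict(S)
print(predicted_centres.shape)  # (n_patches,)
```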