2018
DOI: 10.1007/978-3-319-75541-0_14

2D-3D Fully Convolutional Neural Networks for Cardiac MR Segmentation

Abstract: In this paper, we develop 2D and 3D segmentation pipelines for fully automated cardiac MR image segmentation using deep convolutional neural networks (CNNs). Our models are trained end-to-end from scratch on the ACDC Challenge 2017 dataset, comprising 100 studies, each containing cardiac MR images in the End Diastole and End Systole phases. We show that both our segmentation models achieve near state-of-the-art performance in terms of distance metrics and have convincing accuracy in terms of clinical parameters…
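The report page carries no reference code for the pipelines the abstract describes. As a minimal sketch of what a 2D fully convolutional segmentation network of this kind looks like in PyTorch (layer widths, depth, and the four-class output for background, RV, myocardium, and LV are assumptions for illustration, not the authors' exact architecture):

```python
# Minimal sketch of a 2D fully convolutional segmentation network (U-Net style).
# Channel sizes, depth, and the 4-class output are illustrative assumptions,
# not the exact architecture from the paper.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

class UNet2D(nn.Module):
    def __init__(self, in_ch=1, num_classes=4):
        super().__init__()
        self.enc1 = conv_block(in_ch, 32)
        self.enc2 = conv_block(32, 64)
        self.bottleneck = conv_block(64, 128)
        self.pool = nn.MaxPool2d(2)
        self.up2 = nn.ConvTranspose2d(128, 64, kernel_size=2, stride=2)
        self.dec2 = conv_block(128, 64)
        self.up1 = nn.ConvTranspose2d(64, 32, kernel_size=2, stride=2)
        self.dec1 = conv_block(64, 32)
        self.head = nn.Conv2d(32, num_classes, kernel_size=1)

    def forward(self, x):
        e1 = self.enc1(x)                   # full-resolution features
        e2 = self.enc2(self.pool(e1))       # 1/2 resolution
        b = self.bottleneck(self.pool(e2))  # 1/4 resolution
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)                # per-pixel class logits

# Example: one single-channel 256x256 short-axis slice.
logits = UNet2D()(torch.randn(1, 1, 256, 256))   # -> (1, 4, 256, 256)
```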

Cited by 79 publications (66 citation statements)
References 11 publications
“…This contextual information can include shape priors learned from labels or multiview images (Zotti et al., 2017, 2019; Chen et al., 2019b). Others extract spatial information from adjacent slices to assist the segmentation, using recurrent units (RNNs) or multi-slice networks (2.5D networks) (Poudel et al., 2016; Patravali et al., 2017; Du et al., 2019; Zheng et al., 2018).…”
Section: Ventricle Segmentation
confidence: 99%
“…LGE MR imaging enables the […]

Table 2. Segmentation accuracy of state-of-the-art segmentation methods verified on the cardiac bi-ventricular segmentation challenge (ACDC) dataset. All methods were evaluated on the same test set (50 subjects); scores are mean Dice for the left ventricle (LV), myocardium (Myo), and right ventricle (RV).

| Method | Description | LV | Myo | RV |
|---|---|---|---|---|
| Zotti et al. (2019) | 2D GridNet-MD with registered shape prior | 0.938 | 0.894 | 0.910 |
| Khened et al. (2019) | 2D Dense U-net with inception module | 0.941 | 0.894 | 0.907 |
| Baumgartner et al. (2017) | 2D U-net with cross-entropy loss | 0.937 | 0.897 | 0.908 |
| Zotti et al. (2017) | 2D GridNet with registered shape prior | 0.931 | 0.890 | 0.912 |
| Jang et al. (2017) | 2D M-Net with weighted cross-entropy loss | 0.940 | 0.885 | 0.907 |
| Painchaud et al. (2019) | FCN followed by an AE for shape correction | 0.936 | 0.889 | 0.909 |
| Wolterink et al. (2017c) | Multi-input 2D dilated FCN, segmenting paired ED and ES frames simultaneously | 0.940 | 0.885 | 0.900 |
| Patravali et al. (2017) | 2D U-net with a Dice loss | 0.920 | 0.890 | 0.865 |
| Rohé et al. (2017) | Multi-atlas method combined with 3D CNN for registration | 0.929 | 0.868 | 0.881 |
| Tziritas and Grinias (2017) | Level-set + Markov random field (MRF); non-deep-learning method | 0.907 | 0.798 | 0.803 |
| Yang et al. (2017c) | 3D FCN with deep supervision | 0.820 | N/A | 0.780 |
…”
Section: Scar Segmentation
confidence: 99%
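Every entry in the table above is a Dice overlap score, and the Patravali et al. row was trained with a Dice loss. As a generic sketch of how such a soft-Dice score and loss are typically computed in PyTorch (not code from any of the cited papers):

```python
# Soft Dice computed per class from logits; a generic formulation,
# not taken from any of the papers in the table above.
import torch
import torch.nn.functional as F

def soft_dice_per_class(logits, target, num_classes, eps=1e-6):
    """logits: (N, C, H, W); target: (N, H, W) integer labels."""
    probs = torch.softmax(logits, dim=1)
    onehot = F.one_hot(target, num_classes).permute(0, 3, 1, 2).float()
    dims = (0, 2, 3)                                  # sum over batch and spatial dims
    intersection = (probs * onehot).sum(dims)
    cardinality = probs.sum(dims) + onehot.sum(dims)
    return (2.0 * intersection + eps) / (cardinality + eps)   # one score per class

def dice_loss(logits, target, num_classes=4):
    # Averaging (1 - Dice) over the foreground classes gives a loss in the
    # spirit of the "2D U-net with a Dice loss" entry above.
    return (1.0 - soft_dice_per_class(logits, target, num_classes)[1:]).mean()
```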
“…However, in the context of cardiac image segmentation, due to the low through-plane resolution characteristic of cardiac MRI and the shortcomings of 3D methods such as reduction of training images and high risk of overfitting, 15,18 the segmentation performance of 3D methods may be limited to some extent. For instance, the previous work [19][20][21] evaluated on the automatic cardiac diagnosis challenge (ACDC) dataset indicated that the proposed 3D models did not meet the expectations in performance improvement over the corresponding 2D models. These motivate us to explore how to effectively leverage spatial context in 2D methods to combine the advantages of 2D and 3D methods.…”
Section: Introduction
confidence: 95%
“…Thus, the correlation between slices is weak except for adjacent slices. A direct way to use the contextual information of the adjacent slices is the early fusion strategy, i.e., stacking them as input channels and outputting the prediction of the middle or target slice, as in the study of Patravali. This strategy treats the adjacent slices and the target slice equally and ignores the information provided by the already predicted segmentation.…”
Section: Introduction
confidence: 99%
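The "early fusion" idea described above, stacking neighbouring short-axis slices as input channels and predicting only the centre slice, can be illustrated with a small sketch (the 3-slice window and array shapes are assumptions for illustration):

```python
# Early-fusion 2.5D input: neighbouring slices become extra input channels,
# and the network predicts the segmentation of the centre slice only.
# Shapes and the 3-slice window are illustrative assumptions.
import numpy as np

def make_25d_input(volume, slice_idx, context=1):
    """volume: (num_slices, H, W); returns (2*context+1, H, W) centred on slice_idx."""
    num_slices = volume.shape[0]
    # Clamp indices at the volume boundary so edge slices still get a full stack.
    idxs = np.clip(np.arange(slice_idx - context, slice_idx + context + 1), 0, num_slices - 1)
    return volume[idxs]

volume = np.random.rand(10, 256, 256).astype(np.float32)   # a toy short-axis stack
x = make_25d_input(volume, slice_idx=4, context=1)          # (3, 256, 256)
# x would be fed to a 2D network with in_channels=3; the training target is the
# ground-truth mask of slice 4 only.
```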
“…Existing popular fully-3D networks are smaller to reduce memory footprints, and may lack the capacity to learn challenging segmentation tasks. Inspired by existing work on combining 2D and 3D computations for volumetric data, both in deep learning 5,6 and more generally 26, we experimented with combinations of 2D and 3D neural modules to trade off between computational efficiency and spatial context. The highest-performing network architecture in this paper, 2D-3D+3x3x3, is a composition of a 2D U-net-style encoder-decoder and a 3D convolutional spatial pyramid, with additional 3x3x3 convolutions at the beginning of convolution blocks in the encoder-decoder.…”
Section: /16
confidence: 99%
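The last statement describes composing a 2D encoder-decoder with 3D convolutional stages to trade spatial context against memory. A rough sketch of that general pattern (not the cited 2D-3D+3x3x3 architecture itself; channel counts and module layout are assumptions) could look like:

```python
# Rough sketch of mixing 2D (per-slice) and 3D (cross-slice) computation on a volume.
# Not the "2D-3D+3x3x3" network from the quoted paper; sizes and layout are assumptions.
import torch
import torch.nn as nn

class Mixed2D3D(nn.Module):
    def __init__(self, in_ch=1, feat=32, num_classes=4):
        super().__init__()
        # 2D stage: applied to every slice independently (cheap, full in-plane resolution).
        self.per_slice = nn.Sequential(
            nn.Conv2d(in_ch, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(inplace=True),
        )
        # 3D stage: small 3x3x3 convolutions fuse context across neighbouring slices.
        self.cross_slice = nn.Sequential(
            nn.Conv3d(feat, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(feat, num_classes, 1),
        )

    def forward(self, x):            # x: (N, C, D, H, W) volume
        n, c, d, h, w = x.shape
        slices = x.permute(0, 2, 1, 3, 4).reshape(n * d, c, h, w)     # fold slices into batch
        feats = self.per_slice(slices)
        feats = feats.reshape(n, d, -1, h, w).permute(0, 2, 1, 3, 4)  # back to (N, F, D, H, W)
        return self.cross_slice(feats)                                # per-voxel class logits

logits = Mixed2D3D()(torch.randn(1, 1, 8, 128, 128))   # -> (1, 4, 8, 128, 128)
```

The design choice this sketch illustrates is the one the quote motivates: 2D layers keep memory low at high in-plane resolution, while a thin 3D stage restores some through-plane context.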