2017
DOI: 10.48550/arxiv.1709.03199
Preprint
3D Densely Convolutional Networks for Volumetric Segmentation

Cited by 12 publications (25 citation statements)
References 0 publications
“…They reduced the computational and memory costs, which was quite a severe issue for 3D CNN, via small kernels with a deeper network. Bui et al proposed a deep densely convolutional network for volumetric brain segmentation [22]. This architecture provided a dense connection between layers.…”
Section: Related Work
confidence: 99%
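The dense connectivity this excerpt describes — each layer receiving the channel-wise concatenation of the input and all preceding layers' outputs — can be sketched minimally with NumPy. The `conv3d_like` projection here is a hypothetical stand-in (a random channel-mixing map) for a real BN-ReLU-Conv3d layer; only the concatenation pattern reflects the cited architecture:

```python
import numpy as np

def conv3d_like(x, out_channels, seed=0):
    # Illustrative stand-in for a 3D conv layer: mixes input channels
    # into `out_channels` output channels (no spatial filtering shown).
    rng = np.random.default_rng(seed)
    w = rng.standard_normal((out_channels, x.shape[0]))
    return np.tensordot(w, x, axes=([1], [0]))  # (out, D, H, W)

def dense_block(x, num_layers=3, growth_rate=4):
    """Dense block: layer l's input is the concatenation of the block
    input and the outputs of all preceding layers (channel axis 0)."""
    features = [x]
    for l in range(num_layers):
        inp = np.concatenate(features, axis=0)
        features.append(conv3d_like(inp, growth_rate, seed=l))
    return np.concatenate(features, axis=0)

x = np.zeros((2, 8, 8, 8))              # (channels, D, H, W) volume
y = dense_block(x, num_layers=3, growth_rate=4)
print(y.shape)                          # (14, 8, 8, 8): 2 + 3*4 channels
```

Each layer adds only `growth_rate` channels while still seeing every earlier feature map, which is what keeps such networks parameter-efficient relative to their depth.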
“…The performance of JAS-GAN had further been demonstrated by comparing it with the widely used methods and the state-of-the-art methods. For the segmentations of LA and atrial scars, we compared the segmentation performance of JAS-GAN to the 2D U-Net [26], 3D U-Net [40], 3D DenseNet [41], SegNet [42], the method (MVTT) proposed by yang et al [43] and two methods aiming to tackle the imbalance issue (Tversky loss [44] and surface loss [45]). We also compare the LA segmentation performance of JAS-GAN to the method (MTL) proposed by Chen et al [46].…”
Section: G Analysis of Joint Discriminative Network
confidence: 99%
“…Deep convolutional neural networks have shown great potential in medical imaging on account of dominance over traditional methods in applications such as segmentation of neuroanatomy (Bui et al., 2017; Moeskops et al., 2016; Zhang et al., 2015), lesions (Valverde et al., 2017; Brosch et al., 2015; Kamnitsas et al., 2017), and tumors (Havaei et al., 2017; Pereira et al., 2016; Wachinger et al., 2017) using voxelwise networks (Moeskops et al., 2016; Havaei et al., 2017; Salehi et al., 2017), 3D voxelwise networks (Kamnitsas et al., 2017), and Fully Convolutional Networks (FCNs) (Çiçek et al., 2016; Milletari et al., 2016; Salehi et al., 2017). FCNs have shown better performance while also being faster in training and testing than voxelwise methods (Salehi et al., 2017).…”
Section: Introduction
confidence: 99%
“…Among these, the densely connected networks, referred to as DenseNets (Huang et al., 2017), and a few of its extensions, such as a 3D version called DenseSeg (Bui et al., 2017) and a fully convolutional two-path edition (FC-DenseNet) (Jégou et al., 2017), have shown promising results in image segmentation tasks (Dolz et al., 2018). For example, DenseSeg showed top performance in the 2017 MICCAI isointense infant brain MRI segmentation (iSeg) grand challenge, which is considered a very difficult image segmentation task for both traditional and deep learning approaches.…”
Section: Introduction
confidence: 99%