2022
DOI: 10.1101/2022.11.03.22281923
Preprint

Comparing 3D, 2.5D, and 2D Approaches to Brain Image Segmentation

Abstract: Deep-learning methods for auto-segmenting brain images either segment one slice of the image (2D), five consecutive slices of the image (2.5D), or an entire volume of the image (3D). Whether one approach is superior for auto-segmenting brain images is not known. We compared these three approaches (3D, 2.5D, and 2D) across three auto-segmentation models (capsule networks, UNets, and nnUNets) to segment brain structures. We used 3430 brain MRIs, acquired in a multi-institutional study, to train and test our mode…
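The three input configurations described in the abstract can be sketched as follows. This is a minimal illustration only: the volume shape, the `make_inputs` helper, and the choice of slicing along the first axis are assumptions for demonstration, not the authors' code.

```python
import numpy as np

def make_inputs(volume, slice_idx, mode="2.5D"):
    """Extract one model input from a brain volume of shape (depth, H, W).

    2D   -> one axial slice          (1, H, W)
    2.5D -> five consecutive slices  (5, H, W)
    3D   -> the entire volume        (depth, H, W)
    """
    if mode == "2D":
        return volume[slice_idx : slice_idx + 1]
    if mode == "2.5D":
        # Two slices of context on each side of the target slice.
        return volume[slice_idx - 2 : slice_idx + 3]
    if mode == "3D":
        return volume
    raise ValueError(f"unknown mode: {mode}")

# A placeholder volume; real brain MRIs have other dimensions.
vol = np.zeros((64, 128, 128), dtype=np.float32)
print(make_inputs(vol, 32, "2D").shape)    # (1, 128, 128)
print(make_inputs(vol, 32, "2.5D").shape)  # (5, 128, 128)
print(make_inputs(vol, 32, "3D").shape)    # (64, 128, 128)
```

The 2.5D variant keeps the per-input cost close to 2D while giving the network some through-plane context, which is the trade-off the paper evaluates.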

Cited by 4 publications (3 citation statements)
References 40 publications (81 reference statements)
“…Firstly, the increase in the number of parameters of the 3D network leads to an increase in computational complexity and training time, and requires higher computing power. Research findings indicate that the 3D approach requires 20 times more computational memory than 2D approaches (Avesta et al 2022). Secondly, with limited GPU memory, 2D networks can efficiently capture broad and diverse receptive fields, enabling the aggregation of in-plane, multiscale contextual cues for satisfactory segmentation (Bian et al 2018).…”
Section: Choice of 2D and 3D Network
Confidence: 99%
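The memory gap between 3D and 2D inputs is easy to see at the input-tensor level. The back-of-the-envelope sketch below compares a single float32 volume against a single float32 slice; the 64×128×128 volume size is an assumption for illustration, and the cited ~20× figure refers to whole-network memory, not just inputs, so the ratios differ.

```python
# Rough input-tensor memory comparison for 3D vs 2D segmentation inputs.
depth, h, w = 64, 128, 128   # assumed volume size, for illustration only
bytes_per_voxel = 4          # float32

mem_3d = depth * h * w * bytes_per_voxel  # whole volume
mem_2d = h * w * bytes_per_voxel          # one slice

print(mem_3d // mem_2d)  # 64: one 3D input holds 64x the voxels of one slice
```

Activation maps inside a 3D network scale similarly with voxel count, which is why 3D training is typically constrained by GPU memory.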
“…The model was trained utilizing the computational power of an NVIDIA Tesla V100 GPU. Although some studies favor 2.5D and 3D U-Nets over 2D U-Nets (23,24), providing this extra information doesn't consistently enhance accuracy (15). Additionally, 2D CNNs are more computationally efficient than 2.5D or 3D U-Nets, requiring fewer resources for processing.…”
Section: In-house Model for Automated Delineation
Confidence: 99%
“…They discovered that the 3D brain MRIs far outperformed the 2D and 2.5D inputs. However, the 3D inputs required more memory for training (11).…”
Section: Related Work
Confidence: 99%