2023
DOI: 10.3390/bioengineering10020181
Comparing 3D, 2.5D, and 2D Approaches to Brain Image Auto-Segmentation

Abstract: Deep-learning methods for auto-segmenting brain images either segment one slice of the image (2D), five consecutive slices of the image (2.5D), or an entire volume of the image (3D). Whether one approach is superior for auto-segmenting brain images is not known. We compared these three approaches (3D, 2.5D, and 2D) across three auto-segmentation models (capsule networks, UNets, and nnUNets) to segment brain structures. We used 3430 brain MRIs, acquired in a multi-institutional study, to train and test our mode…
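A minimal sketch (not the authors' code; the array dimensions are invented) of how the three input styles compared in the abstract differ. A 2D model sees one slice, a 2.5D model sees five consecutive slices stacked as channels, and a 3D model sees the whole volume:

```python
import numpy as np

# Hypothetical MRI volume: (slices, height, width) — made-up dimensions.
volume = np.zeros((64, 256, 256))

# 2D: the model input is a single slice.
input_2d = volume[32]          # shape (256, 256)

# 2.5D: five consecutive slices centered on the target slice,
# stacked along the channel axis.
input_25d = volume[30:35]      # shape (5, 256, 256)

# 3D: the model input is the entire volume.
input_3d = volume              # shape (64, 256, 256)

print(input_2d.shape, input_25d.shape, input_3d.shape)
```

The 2.5D stack gives the network some through-plane context at close to 2D memory cost, while 3D provides full volumetric context at the highest cost.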

Cited by 23 publications (8 citation statements)
References 36 publications
“…Our more favorable results in segmenting the hippocampus are likely because of the 3D structure of our CapsNet, which can use the contextual information in the volume of the image rather than just a section of the image to better segment the complex shape of the hippocampus. 39 Our study has several limitations. Our models were tested on only 3 brain structures that are commonly segmented on brain MRIs, meaning that our findings may not generalize across other imaging modalities and anatomic structures.…”
Section: Discussion
confidence: 91%
“…Postprocessing was carried out in a 3D Slicer module (http://www.slicer.org) to increase speed, remove noise, and maximize contours of segmentations. The code for this is available. 22 The average DSC for the ossicles was 0.84 in the training data set and 0.85 for the test data set across all algorithms.…”
Section: Results
confidence: 99%
“…To prepare training data sets, synapses were manually demarcated as the presynaptic membrane, rigid synaptic cleft, and postsynaptic membrane. Networks that use slice-to-slice context have been shown to have higher accuracy, 34 therefore we chose to train 2.5D and 3D architectures to detect features using several sections of context, as synapses and vesicle clouds can be tracked through approximately 3-10+ sections in our datasets (Figure 1A-C, details are described in Methods).…”
Section: Results
confidence: 99%