2021
DOI: 10.1002/mp.15404
Technical note: The effect of image annotation with minimal manual interaction for semiautomatic prostate segmentation in CT images using fully convolutional neural networks

Abstract: The goal is to study the performance improvement of a deep learning algorithm in three-dimensional (3D) image segmentation through incorporating minimal user interaction into a fully convolutional neural network (CNN). Methods: A U-Net CNN was trained and tested for 3D prostate segmentation in computed tomography (CT) images. To improve the segmentation accuracy, the CNN's input images were annotated with a set of border landmarks to supervise the network for segmenting the prostate. The network was trained an…
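The abstract does not specify how the border landmarks are fed to the network, so the following is only a minimal sketch of one plausible scheme: the user-placed border points are rasterized into a soft auxiliary channel (Gaussian blobs, an assumed encoding) and stacked with the CT volume as a two-channel input to the 3D U-Net. The function name, blob width, and channel layout are illustrative assumptions, not the paper's implementation.

import numpy as np

def annotate_with_landmarks(ct_volume, landmarks, sigma=3.0):
    """Rasterize user-clicked border landmarks into an extra input channel.

    ct_volume : (D, H, W) array of CT intensities.
    landmarks : list of (z, y, x) voxel coordinates placed on the prostate border.
    Returns a (2, D, H, W) array: channel 0 is the CT volume, channel 1 a soft
    landmark map (Gaussian blobs) that the CNN receives as guidance.
    """
    landmark_map = np.zeros_like(ct_volume, dtype=np.float32)
    zz, yy, xx = np.meshgrid(
        np.arange(ct_volume.shape[0]),
        np.arange(ct_volume.shape[1]),
        np.arange(ct_volume.shape[2]),
        indexing="ij",
    )
    for z, y, x in landmarks:
        dist2 = (zz - z) ** 2 + (yy - y) ** 2 + (xx - x) ** 2
        landmark_map = np.maximum(landmark_map, np.exp(-dist2 / (2.0 * sigma ** 2)))
    return np.stack([ct_volume.astype(np.float32), landmark_map], axis=0)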

Cited by 3 publications (2 citation statements)
References 14 publications
“…The original CT images of 512×512×N voxels had an intensity corresponding to the Hounsfield unit (HU), where N ranged from 61 to 375. During training in the transformer network, the input images were normalized to a range from 0 to 1 [41]. The network was trained using the AdamW optimizer and a modified loss function that combined cross entropy and dice loss under deep supervision.…”
Section: Implementation and Evaluation
Mentioning confidence: 99%
“…The original CT images of 512 × 512 × N voxels had an intensity corresponding to the Hounsfield unit (HU), where N ranged from 61 to 375. During training in the transformer network, the input images were normalized to a range from 0 to 1 [40]. The network was trained using the AdamW optimizer (from the open-source PyTorch library) and a modified loss function that combined cross entropy and dice loss under deep supervision.…”
Mentioning confidence: 99%
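Both citation statements describe the same training recipe: CT intensities normalized to [0, 1], the AdamW optimizer, and a loss combining cross entropy with Dice under deep supervision. The PyTorch sketch below only illustrates that recipe under stated assumptions (a binary prostate-vs-background formulation, per-scale weights of 1.0/0.5/0.25, and hypothetical function names); it is not the citing papers' code.

import torch
import torch.nn.functional as F

def normalize_ct(volume):
    """Min-max normalize a CT volume (Hounsfield units) to the [0, 1] range."""
    volume = volume.float()
    return (volume - volume.min()) / (volume.max() - volume.min() + 1e-8)

def dice_loss(logits, target, eps=1e-5):
    """Soft Dice loss for a binary (prostate vs. background) segmentation."""
    probs = torch.sigmoid(logits)
    intersection = (probs * target).sum()
    return 1.0 - (2.0 * intersection + eps) / (probs.sum() + target.sum() + eps)

def deep_supervision_loss(outputs, target, weights=(1.0, 0.5, 0.25)):
    """Cross entropy + Dice applied to multi-scale decoder outputs.

    outputs : list of logit tensors of shape (B, 1, d_i, h_i, w_i) taken from
              different decoder depths (deep supervision heads).
    target  : ground-truth mask of shape (B, 1, D, H, W), float in {0, 1}.
    Each head is compared against the mask down-sampled to its resolution,
    and the per-scale losses are summed with decaying weights.
    """
    total = torch.zeros((), device=target.device)
    for weight, logits in zip(weights, outputs):
        scaled_target = F.interpolate(target, size=logits.shape[2:], mode="nearest")
        ce = F.binary_cross_entropy_with_logits(logits, scaled_target)
        total = total + weight * (ce + dice_loss(logits, scaled_target))
    return total

# Optimizer setup as described in the statements (model is hypothetical):
# optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=1e-2)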