2019
DOI: 10.1016/j.media.2019.07.005

Accurate and robust deep learning-based segmentation of the prostate clinical target volume in ultrasound images

Cited by 83 publications (40 citation statements) · References 22 publications
“…The present study uses a network architecture called 3D U-net, which has been used in previous studies involving deep learning in brachytherapy. 6,18,27,30,36 The U-net structure has been used for segmentation of both needles and organs such as the prostate. 18,30 A similar structure called V-net has also been proposed, and this too has been used for segmentation of the prostate.…”
Section: Discussion
confidence: 99%
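The 3D U-net mentioned in this passage is an encoder-decoder network with skip connections applied to volumetric image segmentation. The snippet below is a minimal PyTorch sketch of that general structure, included only for orientation; the depth, channel counts, and layer choices are illustrative assumptions, not the configuration used in the cited studies.

```python
# Minimal 3D U-Net-style encoder-decoder (illustrative sketch only; not the
# architecture configuration used in the cited brachytherapy studies).
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    """Two 3x3x3 convolutions, each followed by batch norm and ReLU."""
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm3d(out_ch), nn.ReLU(inplace=True),
        nn.Conv3d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm3d(out_ch), nn.ReLU(inplace=True),
    )

class TinyUNet3D(nn.Module):
    def __init__(self, in_ch=1, out_ch=1, base=16):
        super().__init__()
        self.enc1 = conv_block(in_ch, base)
        self.enc2 = conv_block(base, base * 2)
        self.pool = nn.MaxPool3d(2)
        self.bottleneck = conv_block(base * 2, base * 4)
        self.up2 = nn.ConvTranspose3d(base * 4, base * 2, kernel_size=2, stride=2)
        self.dec2 = conv_block(base * 4, base * 2)
        self.up1 = nn.ConvTranspose3d(base * 2, base, kernel_size=2, stride=2)
        self.dec1 = conv_block(base * 2, base)
        self.head = nn.Conv3d(base, out_ch, kernel_size=1)  # voxel-wise logits

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))   # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return self.head(d1)

# e.g. a single-channel 3D TRUS volume patch of size 32^3
logits = TinyUNet3D()(torch.randn(1, 1, 32, 32, 32))
```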
“…Previous studies include work on digitizing gynaecological applicators, 6,26,27 prostate seeds, 28,29 and segmentation of the prostate boundaries. 30–32 Recent publications by Zhang et al., 18 Wang et al., 33 and Dise et al. 26 have used deep learning for identifying needles in 3D TRUS images.…”
Section: Discussion
confidence: 99%
“…Manual segmentation of the prostate on TRUS imaging is time-consuming and often not reproducible. For these reasons, several studies have applied deep learning to automatically segment the prostate using TRUS imaging [25–31].…”
Section: Challenges Applying Deep Learning to Abdominal US Imaging
confidence: 99%
“…To that goal, we replaced the output layer with a new layer consisting of only a single neuron. (2) We reduced the output size by a factor of 4 by replacing the output layer with a new layer consisting of 16 neurons, to see if higher performance is achievable by reducing the number of parameters. In this case, aberrator profiles were downsampled to a vector of size 16 when training the network.…”
Section: Output Size
confidence: 99%
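This passage describes two variants of the network's output layer: one with a single output neuron, and one with 16 neurons where the target aberrator profiles are downsampled to length 16 to match. The sketch below illustrates both modifications in PyTorch; the attribute name `fc_out`, the helper functions, and the use of linear interpolation for downsampling are assumptions for illustration, not the cited authors' implementation.

```python
# Sketch of the two output-size variants described above (hypothetical model
# attribute and helper names; not the cited authors' actual code).
import torch
import torch.nn as nn
import torch.nn.functional as F

def replace_output_layer(model: nn.Module, in_features: int, out_size: int) -> nn.Module:
    """Swap the model's final fully connected layer for one with `out_size` neurons."""
    model.fc_out = nn.Linear(in_features, out_size)  # e.g. out_size = 1 or 16
    return model

def downsample_profile(profile: torch.Tensor, target_len: int = 16) -> torch.Tensor:
    """Variant (2): downsample a 1-D target aberrator profile to `target_len`
    samples so it matches the reduced 16-neuron output layer."""
    return F.interpolate(profile.view(1, 1, -1), size=target_len,
                         mode="linear", align_corners=False).view(-1)

# Usage example with dummy shapes
model = replace_output_layer(nn.Sequential(), in_features=256, out_size=16)
target = downsample_profile(torch.randn(64))  # 64-sample profile -> 16 samples
```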