2019
DOI: 10.1177/1533033819884561
The Tumor Target Segmentation of Nasopharyngeal Cancer in CT Images Based on Deep Learning Methods

Abstract: Radiotherapy is the main treatment strategy for nasopharyngeal carcinoma. A major factor affecting radiotherapy outcome is the accuracy of target delineation. Target delineation is time-consuming, and the results can vary depending on the experience of the oncologist. Using deep learning methods to automate target delineation may increase its efficiency. We used a modified deep learning model called U-Net to automatically segment and delineate tumor targets in patients with nasopharyngeal carcinoma. Patients w…

Cited by 44 publications (50 citation statements)
References 39 publications
“…For mean DSC, our method outperformed Lin's and Men's methods (73.72% vs. 72.05% vs. 67.35%). In addition, the 2D U-Net proposed by Li et al (2015) is implemented to automatically delineate tumors of NPC patients in their datasets, with a DSC of 71.78% (Li et al, 2019). It has been reported that a DSC value >70% indicates good segmentation performance (Zou et al, 2004).…”
Section: Discussion
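The Dice similarity coefficient (DSC) referenced in this comparison measures the overlap between a predicted segmentation mask and the ground-truth mask. A minimal sketch of the standard formula, assuming binary NumPy masks (the function name and epsilon term are illustrative, not from the paper):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks.

    DSC = 2 * |A ∩ B| / (|A| + |B|); 1.0 means perfect overlap,
    0.0 means no overlap. `eps` guards against division by zero
    when both masks are empty.
    """
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return 2.0 * intersection / (pred.sum() + target.sum() + eps)

# Two 4x4 masks whose overlap is half of each region:
a = np.zeros((4, 4), dtype=bool); a[:2, :] = True   # top two rows
b = np.zeros((4, 4), dtype=bool); b[1:3, :] = True  # middle two rows
# dice_coefficient(a, a) -> 1.0; dice_coefficient(a, b) -> 0.5
```

Under this metric, the >70% threshold cited from Zou et al (2004) means the predicted and reference tumor volumes share well over two-thirds of their combined voxels.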
“…In this study, the proposed method was compared with three classic methods: 3D CNN (Lin et al, 2019), 2D DDNN (Men et al, 2017), and 2D U-Net (Li et al, 2019). All models except the 2D U-Net were trained and validated on the same CT datasets of our department to ensure a neutral comparison.…”
Section: Performance of the Model
“…The MvDA-VC method has achieved good performance in addressing the problem of object recognition from multiple views. Zhao et al [12] use fully convolutional networks with an auxiliary path to achieve automatic segmentation of NPC on dual-modality PET-CT images; the method improves NPC segmentation by guiding the training of the lower layers through auxiliary paths. Li et al [13] propose a modified version of the U-Net, which performs well on NPC segmentation by modifying the downsampling and upsampling layers to have a similar learning ability and to predict the same spatial resolution as the source image.…”
Section: Experiments and Results
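The point about "the same spatial resolution as the source image" comes down to shape bookkeeping in an encoder-decoder: 'same'-padded convolutions preserve spatial size, pooling halves it, and a matching transposed convolution restores it. A minimal sketch of that arithmetic using the standard output-size formulas (the 512-pixel slice size is illustrative, not taken from the paper):

```python
def conv_out(n, k, s=1, p=0):
    """Output spatial size of a convolution: floor((n + 2p - k) / s) + 1."""
    return (n + 2 * p - k) // s + 1

def pool_out(n, k=2, s=2):
    """Output size of max pooling (the downsampling step)."""
    return (n - k) // s + 1

def up_out(n, k=2, s=2):
    """Output size of a transposed convolution (the upsampling step):
    s * (n - 1) + k."""
    return s * (n - 1) + k

n = 512                       # one spatial axis of a hypothetical 512x512 slice
same = conv_out(n, k=3, p=1)  # 3x3 conv with padding 1 keeps 512 -> 512
down = pool_out(same)         # 2x2 pooling halves it: 512 -> 256
up = up_out(down)             # 2x2 stride-2 transposed conv: 256 -> 512
```

Because each downsampling step is mirrored by an upsampling step with the same factor, the decoder's final feature map, and therefore the predicted mask, lands back at the input resolution.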
“…[12] used fully convolutional networks with auxiliary paths to achieve automatic segmentation of NPC on PET-CT images. [13] used a modified U-Net model to automatically segment NPC on CT images from 502 patients. [14] proposed an automated method based on CNN for NPC segmentation on dual-sequence MRI (i.e., T1-w and T2-w) from 44 patients.…”
Section: Introduction