2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018)
DOI: 10.1109/isbi.2018.8363561
3D fully convolutional networks for co-segmentation of tumors on PET-CT images

Abstract: Positron emission tomography and computed tomography (PET-CT) dual-modality imaging provides critical diagnostic information in modern cancer diagnosis and therapy. Accurate automated tumor delineation is essential for computer-assisted tumor reading and interpretation based on PET-CT. In this paper, we propose a novel approach for the segmentation of lung tumors that combines the powerful fully convolutional network (FCN) based semantic segmentation framework (3D-UNet) and the graph cut based co-s…

Cited by 78 publications (63 citation statements)
References 13 publications (34 reference statements)
“…Many studies [4][5][6] have indicated that co-segmentation, which combines different segmentation methods, can be treated as an energy minimization problem for delineating the gross tumor contours. Moreover, owing to the superior contrast of PET images and the high spatial resolution of CT images, more recent methods and techniques [7][8][9][10] for clinical lesion segmentation prefer to integrate PET and CT images.…”
Section: Co-segmentation Methods
confidence: 99%
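The energy-minimization view in the statement above can be illustrated with a toy sketch. The snippet below is an illustrative assumption, not the cited papers' formulation: it poses binary co-segmentation of a six-voxel 1-D "scan" as minimizing unary costs fused from two modalities plus a label-smoothness penalty, solved here by brute force rather than graph cuts.

```python
import itertools

# Toy co-segmentation as energy minimization (illustrative only; the cited
# papers use graph cuts on full 3-D volumes, not brute force on 1-D signals).
pet = [0.1, 0.2, 0.9, 0.80, 0.85, 0.2]  # PET-like evidence for "tumor"
ct  = [0.2, 0.1, 0.7, 0.90, 0.80, 0.1]  # CT-like evidence for "tumor"

def energy(labels, w_smooth=0.5):
    e = 0.0
    for i, lab in enumerate(labels):
        p = 0.5 * (pet[i] + ct[i])      # fused unary evidence in [0, 1]
        e += (1.0 - p) if lab == 1 else p
    # Pairwise smoothness term: penalize label changes between neighbors.
    e += w_smooth * sum(a != b for a, b in zip(labels, labels[1:]))
    return e

best = min(itertools.product((0, 1), repeat=len(pet)), key=energy)
print(best)  # (0, 0, 1, 1, 1, 0): the high-evidence voxels form one segment
```

The smoothness weight plays the same role as the pairwise term in a graph-cut formulation: raising it merges fragmented regions, lowering it lets the unary evidence dominate.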
“…Xu et al [11] cascaded two V-Nets [12] to detect bone lesions, using CT alone as the input to the first V-Net and a pre-fused PET-CT image for the second. Similarly, Zhong et al [13] trained one U-Net [44] for PET and one for CT, combining the results using a graph cut algorithm. None of this prior work, however, considered how the visual characteristics, specific to each image at different locations, could be integrated in a spatially varying manner.…”
Section: Related Work
confidence: 99%
“…• A two-branch (TB) CNN, implementing a fusion strategy in which each modality was processed separately and the outputs were then combined [9], [13], [41]. The CNN was similar to the architecture in Fig.…”
Section: G Experimental Design
confidence: 99%
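The two-branch late-fusion idea described above can be sketched schematically. This is a minimal NumPy sketch with made-up shapes and random weights, not the cited authors' network: each modality gets its own feature extractor, and the features are concatenated before a shared classification head.

```python
import numpy as np

rng = np.random.default_rng(0)

def branch(x, w):
    # One branch: a linear map plus ReLU, standing in for the stack of
    # convolutional layers that processes a single modality.
    return np.maximum(w @ x, 0.0)

# Hypothetical shapes: 16-value flattened patches, 8 features per branch.
w_pet, w_ct = rng.normal(size=(8, 16)), rng.normal(size=(8, 16))
w_head = rng.normal(size=(2, 16))       # head over the 8+8 fused features

pet_patch, ct_patch = rng.normal(size=16), rng.normal(size=16)

# Fusion strategy: process each modality separately, then combine.
fused = np.concatenate([branch(pet_patch, w_pet), branch(ct_patch, w_ct)])
logits = w_head @ fused                 # per-class scores (e.g. tumor vs. background)
print(logits.shape)  # (2,)
```

Keeping the branches separate until fusion lets each one specialize in its modality's statistics (metabolic uptake in PET, anatomy in CT) before the head sees the joint representation.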
“…The performance of these methods relies heavily on a large set of training data, which includes manually annotated images for learning features from the medical scans. Neural Networks (Goceri and Goceri, 2015; Ramírez et al, 2018; Zhong et al, 2018), Support Vector Machines (Lee et al, 2008; Amiri et al, 2016) and Random Forests (Shah et al, 2017; Bauer et al, 2012; Conze et al, 2016) are among the popular methods used for medical image segmentation. Unlike these supervised methods, unsupervised segmentation techniques are very flexible, as they can be applied to a small set of data from which they can learn classification rules based on some similarity criterion.…”
Section: Related Work
confidence: 99%