2020
DOI: 10.1109/tmi.2019.2923601
Co-Learning Feature Fusion Maps From PET-CT Images of Lung Cancer

Abstract: The analysis of multi-modality positron emission tomography and computed tomography (PET-CT) images requires combining the sensitivity of PET to detect abnormal regions with anatomical localization from CT. However, current methods for PET-CT image analysis either process the modalities separately or fuse information from each modality based on knowledge about the image analysis task. These methods generally do not consider the spatially varying visual characteristics that encode different information across t…
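The spatially varying fusion the abstract describes can be pictured as a per-pixel weight map that trades off PET against CT. The following is a minimal NumPy sketch under that reading, not the authors' network: in the paper the map is co-learned by a CNN, whereas here it is a fixed input, and `spatial_fusion` is a hypothetical helper name.

```python
import numpy as np

def spatial_fusion(pet, ct, weight_map):
    """Fuse co-registered PET and CT slices with a spatially varying map.

    weight_map[i, j] in [0, 1] gives the PET contribution at pixel (i, j);
    the CT contribution is the complement, so each output pixel is a
    convex combination of the two modalities.
    """
    assert pet.shape == ct.shape == weight_map.shape
    return weight_map * pet + (1.0 - weight_map) * ct

# Toy 2x2 example: emphasize PET in the top row, CT in the bottom row.
pet = np.array([[1.0, 1.0], [1.0, 1.0]])
ct = np.array([[0.0, 0.0], [0.0, 0.0]])
w = np.array([[1.0, 1.0], [0.0, 0.0]])
fused = spatial_fusion(pet, ct, w)
```

A learned version would replace the fixed `w` with the output of a fusion-map branch trained jointly with the modality encoders.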

Cited by 167 publications (92 citation statements)
References 77 publications
“…For instance, Kumar et al. [30] introduced a new supervised CNN to fuse complementary multi-modality information from lung cancer scans. Ozdemir et al.…”
Section: Related Work
confidence: 99%
“…Kumar et al. (2020) proposed co-learning-based fusion maps for obtaining more efficient multi-modality fused biomedical images. A convolutional neural network (CNN) was also used for the prediction and segmentation of potential objects.…”
Section: Literature Review
confidence: 99%
“…CNN can decompose the original images to high frequency and low frequency images [146], and select the rule of regional matching to fuse the two high frequency and low frequency images to get the final fusion images. Kumar et al [147] developed a supervised CNN to learn to merge the data from PET-CT images of lung cancer. CNN has also been applied to fuse medical images MRI/CT, MRI/SPECT, multiparametric MR images [148] and PET/MRI [149].…”
Section: Convolutional Neural Network
confidence: 99%
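The decompose-then-fuse rule quoted above can be illustrated with a generic two-band scheme. This is a minimal NumPy sketch assuming a box filter for the low-pass step and a per-pixel maximum-energy rule for the high band; the method cited as [146] may differ in both choices, and `box_blur`/`freq_fusion` are hypothetical names.

```python
import numpy as np

def box_blur(img, k=3):
    """Crude low-pass filter: k x k box average with edge padding."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for di in range(k):
        for dj in range(k):
            out += padded[di:di + img.shape[0], dj:dj + img.shape[1]]
    return out / (k * k)

def freq_fusion(a, b):
    """Split each image into low/high frequency bands and fuse them:
    average the low bands, keep the higher-magnitude high band per pixel."""
    low_a, low_b = box_blur(a), box_blur(b)
    high_a, high_b = a - low_a, b - low_b
    low = 0.5 * (low_a + low_b)
    high = np.where(np.abs(high_a) >= np.abs(high_b), high_a, high_b)
    return low + high
```

In the CNN variants surveyed above, the hand-crafted decomposition and matching rule are replaced by learned filters, but the band-split-and-select structure is the same.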
“…For the fusion of PET-MRI for clinical applications, the wavelet transformation-based method [66,73,74,130], IHS-PCA [31] and deep learning methods [151,154] are generally used. On the other hand, wavelet transformation-based methods [52,57,168,169] and deep learning [147] are generally applied for the fusion of PET-CT. Here, we show an example in Figure 5, which compares the accuracy of the fused image of PET/MRI and single modal MRI in the correct identification of a patient with liver lesions [183].…”
Section: PET/CT and PET/MRI
confidence: 99%