2022
DOI: 10.1007/s11045-021-00813-9
Integrated fusion framework using hybrid domain and deep neural network for multimodal medical images

Cited by 5 publications (2 citation statements)
References: 33 publications
“…In addition to being visually appealing to the human eye, fused images are useful for various subsequent tasks [54][55][56]. Multimodal image fusion finds applications in medicine, surveillance, and remote sensing [57][58][59][60]. The extraction of features is typically the first step in multi-modal image fusion methods, followed by identifying and classifying image elements to determine the most notable features.…”
Section: Multi-modal Fusion (mentioning)
confidence: 99%
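The excerpt above describes the generic multi-modal fusion pipeline: extract features from each registered modality, identify the most salient ones, then combine the images accordingly. A minimal sketch of that idea follows, assuming gradient magnitude as the saliency feature and a pixel-wise relative weighting; the function names and weighting rule are illustrative assumptions, not the cited paper's method.

```python
import numpy as np

def extract_features(img: np.ndarray) -> np.ndarray:
    """Illustrative saliency feature: local gradient magnitude."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

def fuse(img_a: np.ndarray, img_b: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Blend two registered modalities, weighting each pixel by relative feature strength."""
    fa, fb = extract_features(img_a), extract_features(img_b)
    wa = fa / (fa + fb + eps)  # weight toward modality A where its features dominate
    return wa * img_a + (1.0 - wa) * img_b

if __name__ == "__main__":
    # Toy stand-ins for two registered slices (e.g. MRI and PET).
    rng = np.random.default_rng(0)
    mri = rng.random((128, 128))
    pet = rng.random((128, 128))
    fused = fuse(mri, pet)
    print(fused.shape, fused.min(), fused.max())
```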
“…Deep learning techniques have mostly been employed to improve incident photon timing resolution and localization accuracy with the objective of enhancing overall spatial and time-of-flight (TOF) resolutions in PET. The purpose of the research being done on quantitative SPECT and PET imaging right now is to get rid of the effects of noise, artifacts, and movement [16].…”
Section: Introduction (mentioning)
confidence: 99%