2022
DOI: 10.1016/j.sigpro.2022.108637

Multimodal image fusion via coupled feature learning

Cited by 31 publications (13 citation statements)
References 44 publications
“…In this section, the performance of the proposed method ICIF is verified with qualitative and quantitative evaluations. The compared methods include MDLatLRR [17], ResNetFusion [35], GANMcC [35], NestFuse [39], SEDRFuse [37], STDFusionNet [36], FusionGAN [45], RTVD-VIF [57], and the MMIF [58]. To ensure an objective evaluation, the following evaluation indexes are used, namely, average gradient (AG) [59], information entropy (H) [60], standard deviation (SD) [61], spatial frequency (SF) [62], edge strength (EI) [63], fusion loss function (L^AB/F) [56], fusion volume function (Q^AB/F) [56], and the artifact function (N^AB/F) [56].…”
Section: Results (mentioning; confidence: 99%)
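The reference-free indexes named in the statement above (AG, H, SD, SF) have standard definitions in the fusion literature. As a rough illustration only (not the cited authors' implementation, and the exact normalizations vary between papers), a minimal sketch of three of them:

```python
import numpy as np

def average_gradient(img):
    # AG: mean magnitude of local intensity gradients (sharpness/detail proxy).
    gx = np.diff(img, axis=1)[:-1, :]   # horizontal differences
    gy = np.diff(img, axis=0)[:, :-1]   # vertical differences
    return np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0))

def entropy(img, bins=256):
    # H: Shannon entropy of the grey-level histogram (information content).
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def spatial_frequency(img):
    # SF: combines row frequency (RF) and column frequency (CF),
    # i.e. RMS of horizontal and vertical first differences.
    rf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))
    cf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))
    return np.sqrt(rf ** 2 + cf ** 2)

# Toy 3x3 "image" for demonstration.
img = np.float64([[10, 20, 30], [40, 50, 60], [70, 80, 90]])
print(average_gradient(img), entropy(img), np.std(img), spatial_frequency(img))
```

Higher values of all three indexes are read as more detail or information in the fused result; the gradient-based indexes (L^AB/F, Q^AB/F, N^AB/F) additionally require the two source images and are omitted here.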
“…learning based infrared and visible image fusion method (MMIF) [29]. This method decomposes the source image into correlated and uncorrelated components, and does not require any training data.…”
Section: Veshki et al. Proposed a Coupled Dictionary (mentioning; confidence: 99%)
“…In general, the visible frame contains more detail than the infrared frame; however, under low-light conditions the infrared frame will contain more detail, so Equation (28) needs to be improved. In this paper, the standard deviation [49] is used to measure how much detail information is contained in the two kinds of frames, and the standard deviation is calculated as shown in Equation (29).…”
Section: Controller Design (mentioning; confidence: 99%)
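The statement above uses the standard deviation of each frame as a proxy for how much detail it carries, so that the infrared frame can dominate in low light. A hedged sketch of that idea (the normalized-SD weighting is my illustration, not the cited paper's Equation (29)):

```python
import numpy as np

def detail_weight(visible, infrared):
    # Weight each frame by its standard deviation: the frame with the
    # larger SD is assumed to carry more detail (e.g. infrared at night).
    sd_v, sd_i = np.std(visible), np.std(infrared)
    w_v = sd_v / (sd_v + sd_i)
    return w_v, 1.0 - w_v

vis = np.full((4, 4), 100.0)          # flat (low-detail) visible frame
ir = np.tile([0.0, 255.0], (4, 2))    # high-contrast infrared frame
w_v, w_i = detail_weight(vis, ir)
print(w_v, w_i)  # infrared gets all the weight here, since SD(vis) = 0
```

The weights could then scale the two frames' contributions in a weighted-average fusion rule.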
“…Similarly, Liu et al [34] introduced channel and spatial dual attention mechanisms to fuse 3D multimodal information, and designed specific loss functions for different modality features. Veshki et al [35] divided the images into relevant and irrelevant components. For the relevant components, the maximum absolute value rule was adopted.…”
Section: Multimodal Feature Fusion (mentioning; confidence: 99%)
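The maximum-absolute-value rule mentioned for the relevant components is a classic coefficient-level fusion rule: at each position, keep whichever source coefficient has the larger magnitude. A minimal sketch (generic version, not the cited paper's exact pipeline):

```python
import numpy as np

def fuse_max_abs(a, b):
    # Max-absolute-value rule: at each position keep the coefficient
    # with the larger magnitude, preserving the stronger feature response.
    return np.where(np.abs(a) >= np.abs(b), a, b)

a = np.array([ 3.0, -5.0,  1.0])
b = np.array([-2.0,  4.0, -6.0])
print(fuse_max_abs(a, b))  # → [ 3. -5. -6.]
```

In practice the rule is applied to decomposed representations (e.g. sparse codes or detail coefficients) rather than raw pixels, since magnitude there correlates with feature salience.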