Fourteenth International Conference on Digital Image Processing (ICDIP 2022) 2022
DOI: 10.1117/12.2644537
Fusion of infrared and visible sensor images based on anisotropic diffusion and fast guided filter

Abstract: Infrared and visible images capture different information from the same scene; in low-light scenes in particular, infrared images can record information that visible images cannot. To obtain more useful information in environments such as low light, infrared and visible images can be fused. This paper proposes an image fusion method based on anisotropic diffusion and a fast guided filter. First, the source images are decomposed into base layers and detail layers…
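The first step the abstract names, decomposing each source image into a base layer and a detail layer via anisotropic diffusion, can be sketched as follows. This is a minimal illustration using the standard Perona–Malik diffusion scheme; the paper's exact diffusion parameters and variant are not given in the visible abstract, so the function names and constants below are assumptions for illustration only.

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=10, kappa=30.0, gamma=0.15):
    """Smooth `img` with Perona-Malik anisotropic diffusion to get a base layer.

    Edges are preserved because the conduction coefficient shrinks where the
    local gradient is large. Parameters here are illustrative defaults.
    """
    u = img.astype(np.float64).copy()
    for _ in range(n_iter):
        # Finite differences toward the four neighbours (periodic boundary
        # via np.roll, which is adequate for a sketch).
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # Exponential edge-stopping (conduction) coefficients.
        cn = np.exp(-(dn / kappa) ** 2)
        cs = np.exp(-(ds / kappa) ** 2)
        ce = np.exp(-(de / kappa) ** 2)
        cw = np.exp(-(dw / kappa) ** 2)
        # Explicit diffusion update.
        u += gamma * (cn * dn + cs * ds + ce * de + cw * dw)
    return u

def decompose(img):
    """Split a source image into a base layer and a detail layer.

    The detail layer is simply the residual, so base + detail == img.
    """
    base = anisotropic_diffusion(img)
    detail = img.astype(np.float64) - base
    return base, detail
```

Because the detail layer is defined as the residual, the decomposition is exactly invertible, which is what lets the method fuse the two layers separately and then sum them to reconstruct the fused image.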

Cited by 1 publication (1 citation statement) · References 12 publications
“…Over the past several decades, a number of image fusion methods have been proposed, broadly classified into two categories: traditional methods and deep learning-based methods. Classic traditional fusion methods include multiscale transform methods [7], [8], [9], [10], sparse representation methods [11], [12], [13], subspace methods [14], [15], total variation methods [16], and various hybrid methods [8], [17]. These primarily employ relevant mathematical transformations to manually analyze the activity level of source image information and design fusion rules in the spatial or transform domain.…”
Section: Introduction
Confidence: 99%