Year: 2023
DOI: 10.1016/j.dsp.2023.103910
Infrared-visible image fusion method based on sparse and prior joint saliency detection and LatLRR-FPDE

Cited by 17 publications (4 citation statements)
References: 68 publications
“…Saliency-based approaches are used to capture and preserve critical features from both infrared and visible images within the fused image. These techniques not only improve the visibility of objects and scenes under low-contrast or low-light conditions but also strengthen the detection and recognition functions of computer vision systems [42], [43]. Saliency-based methods follow three main steps.…”
Section: A. Pixel-level Image Fusion Algorithms
confidence: 99%
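For illustration of the saliency-weighted blending the statement above alludes to (not the method of the indexed paper), a minimal Python sketch, assuming grayscale NumPy arrays of equal shape and a crude global-mean-deviation saliency map:

import numpy as np

def simple_saliency(img):
    # Crude saliency map: absolute deviation from the global mean intensity.
    return np.abs(img - img.mean())

def saliency_weighted_fusion(ir, vis, eps=1e-8):
    # Fuse infrared and visible images with per-pixel normalized saliency weights.
    s_ir, s_vis = simple_saliency(ir), simple_saliency(vis)
    w_ir = s_ir / (s_ir + s_vis + eps)   # weight given to the infrared image
    return w_ir * ir + (1.0 - w_ir) * vis

Real saliency-based fusion methods replace simple_saliency with a dedicated saliency detector and typically apply the weights only to selected decomposition layers.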
“…The above E-step and M-step operations are repeated until convergence. The EM algorithm is commonly used in conventional methods, such as the infrared and visible image fusion method based on LatLRR and FPDE proposed by Li et al. [28]. In this method, the EM algorithm is applied to the fusion of high-frequency details to capture small grayscale differences, so that the fused results retain more detail. Although this method has excellent fusion performance, it still cannot avoid the adverse effects caused by the manual fusion rules of conventional methods.…”
Section: Related Work
confidence: 99%
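The E-step/M-step loop referred to above is the generic EM iteration. The sketch below shows that pattern on a one-dimensional two-component Gaussian mixture in Python; it is only an illustration of the iteration structure, not the fusion rule of Li et al. [28]:

import numpy as np

def em_gmm_1d(x, n_iter=100, tol=1e-6):
    # Assumed initialization: means at the data extremes, equal variances and weights.
    mu = np.array([x.min(), x.max()], dtype=float)
    var = np.array([x.var(), x.var()]) + 1e-6
    pi = np.array([0.5, 0.5])
    prev_ll = -np.inf
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each sample.
        pdf = np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
        joint = pi * pdf
        ll = np.log(joint.sum(axis=1) + 1e-12).sum()
        resp = joint / joint.sum(axis=1, keepdims=True)
        # M-step: re-estimate parameters from the responsibilities.
        nk = resp.sum(axis=0)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
        pi = nk / len(x)
        if abs(ll - prev_ll) < tol:  # stop once the log-likelihood stabilizes
            break
        prev_ll = ll
    return mu, var, pi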
“…With the rapid development of image stitching and image fusion technologies, methods for obtaining multi-view or even global perspectives from multiple single viewpoints have been widely applied in industry and daily life [1][2][3][4][5]. For instance, the widespread use of technologies such as panoramic imaging, autonomous driving, and virtual reality (VR) enables individuals to observe scenes remotely and precisely [6][7][8].…”
Section: Introduction
confidence: 99%