2013
DOI: 10.1016/j.neucom.2012.12.015
Fast saliency-aware multi-modality image fusion

Cited by 116 publications (35 citation statements)
References 26 publications
“…They define the weight maps by two quality measures, namely local contrast and color consistency, in the fusion process. Han et al. [38] employed saliency detection to generate saliency maps for objects or regions. An MRF model was then used to combine the saliency map with the co-occurrence of hot spots and motion.…”
Section: Related Work
confidence: 99%
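As context for the statement above, here is a minimal sketch of weight-map-based fusion. The use of an absolute Laplacian response as the local-contrast measure, the Gaussian smoothing scale, and the normalized blending rule are assumptions of this sketch, not the exact formulation of the cited papers:

```python
import numpy as np
import cv2  # opencv-python

def contrast_weighted_fusion(visible, infrared, sigma=5.0):
    """Fuse two co-registered grayscale images with per-pixel weight maps.

    Inputs are float32 arrays in [0, 1] with identical shapes. The absolute
    Laplacian response stands in for the local-contrast / saliency measures
    mentioned above; it is an illustrative choice, not the cited papers'
    exact method.
    """
    # Local-contrast maps: edges and textured regions receive large weights.
    c_vis = np.abs(cv2.Laplacian(visible, cv2.CV_32F))
    c_ir = np.abs(cv2.Laplacian(infrared, cv2.CV_32F))

    # Smooth the raw contrast so the weights vary gradually across regions.
    w_vis = cv2.GaussianBlur(c_vis, (0, 0), sigmaX=sigma)
    w_ir = cv2.GaussianBlur(c_ir, (0, 0), sigmaX=sigma)

    # Normalize into per-pixel convex weights and blend the two modalities.
    total = w_vis + w_ir + 1e-8
    return (w_vis * visible + w_ir * infrared) / total
```

Replacing the contrast measure with a dedicated saliency detector, and adding an MRF-based refinement of the weight maps, would bring the sketch closer to the approach the statement describes.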
“…As a result, it is necessary to find a new way to process visible and thermal images according to their characteristics. Inspired by several multi-modality image fusion approaches [32][33][34], in which color and infrared images are integrated for saliency-based image fusion [32,34] and image registration [33], fusing the two image modalities (RGB and thermal) offers new insight through the supplementary information they provide. This has proven successful for determining a refined foreground map by fusing the visible and thermal binary maps.…”
Section: Introduction
confidence: 99%
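Since the statement above turns on fusing visible and thermal binary maps into a refined foreground map, here is a minimal sketch of one plausible fusion rule. The function name, the union/intersection logic, and the connected-component criterion are assumptions of this sketch, not the strategy of the cited works:

```python
import numpy as np
import cv2

def fuse_foreground_maps(fg_visible, fg_thermal):
    """Refine the foreground by fusing visible and thermal binary masks.

    Rule used here (an assumption of this sketch, not the cited method):
    keep a connected region of the union mask only if some part of it is
    detected in both modalities. Inputs are boolean arrays of equal shape.
    """
    union = np.logical_or(fg_visible, fg_thermal)
    both = np.logical_and(fg_visible, fg_thermal)

    # Label the connected regions of the union mask (label 0 = background).
    n_labels, labels = cv2.connectedComponents(union.astype(np.uint8))

    refined = np.zeros_like(union)
    for i in range(1, n_labels):
        region = labels == i
        if both[region].any():  # region is supported by both modalities
            refined |= region
    return refined
```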
“…Such a detector, which mimics the human visual attention mechanism, has served as a foundation for many computer vision applications, including object classification (Peng and Shao, 2015), image segmentation (Fouquier et al., 2012), image retrieval (Chen and Cheng, 2009), image fusion (Han et al., 2013), and image thumbnailing (Marchesotti et al., 2009).…”
Section: Introduction
confidence: 99%